Statistical model validation: Difference between revisions

If new data becomes available, an existing model can be validated by assessing whether the new data is predicted by the old model. If the new data is not predicted by the old model, then the model might not be valid for the researcher's goals.
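This check can be sketched with a simple fitted model; the synthetic data, the linear fit, and the two-standard-deviation coverage criterion below are illustrative assumptions, not a standard prescribed procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: a linear model fitted to old data.
x_old = rng.uniform(0, 10, 100)
y_old = 2.0 * x_old + 1.0 + rng.normal(0, 0.5, 100)
slope, intercept = np.polyfit(x_old, y_old, 1)

# The residual spread on the old data sets an expectation for new data.
resid_old = y_old - (slope * x_old + intercept)
sigma = resid_old.std()

# New data generated by the same process should mostly fall within
# about two standard deviations of the model's predictions.
x_new = rng.uniform(0, 10, 50)
y_new = 2.0 * x_new + 1.0 + rng.normal(0, 0.5, 50)
pred = slope * x_new + intercept
coverage = (np.abs(y_new - pred) < 2 * sigma).mean()

# Coverage far below ~95% would suggest the old model does not
# predict the new data and may not be valid for the researcher's goals.
print(round(coverage, 2))
```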
 
With this in mind, a modern approach to validating a neural network is to test its performance on ___domain-shifted data. This ascertains whether the model learned ___domain-invariant features.<ref>{{Cite book |last1=Feng |first1=Cheng |last2=Zhong |first2=Chaoliang |last3=Wang |first3=Jie |last4=Zhang |first4=Ying |last5=Sun |first5=Jun |last6=Yokota |first6=Yasuto |date=July 2022 |title=Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence |chapter=Learning Unforgotten Domain-Invariant Representations for Online Unsupervised Domain Adaptation |pages=2958–2965 |___location=California |publisher=International Joint Conferences on Artificial Intelligence Organization |doi=10.24963/ijcai.2022/410 |isbn=978-1-956792-00-3 |doi-access=free}}</ref>
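The idea can be illustrated with a toy stand-in for a trained network; the nearest-centroid classifier, the Gaussian class data, and the particular shift vector below are assumptions chosen for clarity, not part of the cited method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-class data in a "source" ___domain.
n = 200
x0 = rng.normal([0, 0], 1.0, (n, 2))
x1 = rng.normal([3, 3], 1.0, (n, 2))

# A nearest-centroid classifier stands in for a trained network.
c0, c1 = x0.mean(axis=0), x1.mean(axis=0)

def predict(x):
    d0 = np.linalg.norm(x - c0, axis=1)
    d1 = np.linalg.norm(x - c1, axis=1)
    return (d1 < d0).astype(int)

def accuracy(shift):
    # Evaluate on fresh test data translated by `shift`.
    t0 = rng.normal([0, 0], 1.0, (n, 2)) + shift
    t1 = rng.normal([3, 3], 1.0, (n, 2)) + shift
    x = np.vstack([t0, t1])
    y = np.repeat([0, 1], n)
    return (predict(x) == y).mean()

in_domain = accuracy(np.zeros(2))         # same distribution as training
shifted = accuracy(np.array([2.0, 2.0]))  # ___domain-shifted test data

# A large gap between the two accuracies signals that the model's
# decision rule is not invariant to the ___domain shift.
print(round(in_domain, 2), round(shifted, 2))
```

Here the classifier's accuracy drops on the shifted data because its decision boundary depends on the absolute position of the classes, so its features are not invariant to the shift.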
 
=== A note of caution ===