Normalization (machine learning)

 
== Transformers ==
Some normalization methods were designed for use in [[Transformer (deep learning architecture)|Transformers]].
 
The original 2017 Transformer used the "post-LN" configuration for its LayerNorms. It was difficult to train, requiring careful hyperparameter tuning and a learning-rate "warm-up", in which the learning rate starts small and is gradually increased. The pre-LN convention, proposed several times in 2018,<ref>{{Citation |last=Wang |first=Qiang |title=Learning Deep Transformer Models for Machine Translation |date=2019-06-04 |url=https://arxiv.org/abs/1906.01787 |access-date=2024-10-18 |doi=10.48550/arXiv.1906.01787 |last2=Li |first2=Bei |last3=Xiao |first3=Tong |last4=Zhu |first4=Jingbo |last5=Li |first5=Changliang |last6=Wong |first6=Derek F. |last7=Chao |first7=Lidia S.}}</ref> was found to be easier to train, requiring no warm-up and converging faster.<ref name="auto1">{{cite arXiv |eprint=2002.04745 |class=cs.LG |first1=Ruibin |last1=Xiong |first2=Yunchang |last2=Yang |title=On Layer Normalization in the Transformer Architecture |date=2020-06-29 |last3=He |first3=Di |last4=Zheng |first4=Kai |last5=Zheng |first5=Shuxin |last6=Xing |first6=Chen |last7=Zhang |first7=Huishuai |last8=Lan |first8=Yanyan |last9=Wang |first9=Liwei |last10=Liu |first10=Tie-Yan}}</ref>
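
The two conventions differ only in where the LayerNorm sits relative to the residual connection, as in the following minimal PyTorch-style sketch (the feed-forward sublayer, module names and dimensions are illustrative, not taken from the cited papers):

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class PostLNBlock(nn.Module):
    """Post-LN (original 2017): LayerNorm is applied after the residual addition."""
    def __init__(self, d_model, sublayer):
        super().__init__()
        self.sublayer = sublayer
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):
        return self.norm(x + self.sublayer(x))

class PreLNBlock(nn.Module):
    """Pre-LN: LayerNorm is applied to the sublayer input; the residual path is unnormalized."""
    def __init__(self, d_model, sublayer):
        super().__init__()
        self.sublayer = sublayer
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):
        return x + self.sublayer(self.norm(x))

d_model = 64
ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                    nn.Linear(4 * d_model, d_model))
x = torch.randn(2, 10, d_model)  # (batch, sequence, features)
print(PostLNBlock(d_model, ffn)(x).shape, PreLNBlock(d_model, ffn)(x).shape)
</syntaxhighlight>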
 
'''FixNorm'''<ref>{{Citation |last=Nguyen |first=Toan Q. |title=Improving Lexical Choice in Neural Machine Translation |date=2018-04-17 |url=https://arxiv.org/abs/1710.01329 |access-date=2024-10-18 |doi=10.48550/arXiv.1710.01329 |last2=Chiang |first2=David}}</ref> and '''ScaleNorm'''<ref>{{Cite journal |last=Nguyen |first=Toan Q. |last2=Salazar |first2=Julian |date=2019-11-02 |title=Transformers without Tears: Improving the Normalization of Self-Attention |url=https://arxiv.org/abs/1910.05895 |doi=10.5281/zenodo.3525484}}</ref> both normalize activation vectors in a Transformer. FixNorm divides the ''output'' vectors of a Transformer by their L2 norms, then multiplies them by a learned parameter <math>g</math>. ScaleNorm replaces every LayerNorm inside a Transformer with division by the L2 norm followed by multiplication by a learned parameter <math>g'</math> (shared by all ScaleNorm modules of a Transformer). '''Query-Key normalization''' ('''QKNorm''')<ref>{{Cite journal |last=Henry |first=Alex |last2=Dachapally |first2=Prudhvi Raj |last3=Pawar |first3=Shubham Shantaram |last4=Chen |first4=Yuxuan |date=November 2020 |editor-last=Cohn |editor-first=Trevor |editor2-last=He |editor2-first=Yulan |editor3-last=Liu |editor3-first=Yang |title=Query-Key Normalization for Transformers |url=https://aclanthology.org/2020.findings-emnlp.379/ |journal=Findings of the Association for Computational Linguistics: EMNLP 2020 |___location=Online |publisher=Association for Computational Linguistics |pages=4246–4253 |doi=10.18653/v1/2020.findings-emnlp.379}}</ref> normalizes query and key vectors to have unit L2 norm.
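
The shared idea can be sketched as follows, assuming a PyTorch setting; the module name <code>ScaleNormLike</code> and the tensor shapes are illustrative rather than taken from the cited papers:

<syntaxhighlight lang="python">
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleNormLike(nn.Module):
    """Divide each vector by its L2 norm, then rescale by a learned scalar g."""
    def __init__(self, init_scale=1.0, eps=1e-6):
        super().__init__()
        self.g = nn.Parameter(torch.tensor(float(init_scale)))  # learned scale
        self.eps = eps

    def forward(self, x):
        return self.g * x / x.norm(dim=-1, keepdim=True).clamp(min=self.eps)

# FixNorm applies the same rescaling to the output vectors of the Transformer.
# QKNorm L2-normalizes query and key vectors before the attention dot product
# (the published method also rescales the result by a learned scalar).
q = F.normalize(torch.randn(2, 8, 16, 64), dim=-1)  # unit-norm queries
k = F.normalize(torch.randn(2, 8, 16, 64), dim=-1)  # unit-norm keys
</syntaxhighlight>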
 
In '''nGPT''', many vectors are normalized to have unit L2 norm:<ref>{{Citation |last=Loshchilov |first=Ilya |title=nGPT: Normalized Transformer with Representation Learning on the Hypersphere |date=2024-10-01 |url=https://arxiv.org/abs/2410.01131 |access-date=2024-10-18 |doi=10.48550/arXiv.2410.01131 |last2=Hsieh |first2=Cheng-Ping |last3=Sun |first3=Simeng |last4=Ginsburg |first4=Boris}}</ref> hidden state vectors, input and output embedding vectors, weight matrix columns, and query and key vectors.
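
An illustrative sketch (not the reference nGPT implementation) of projecting such vectors back onto the unit hypersphere:

<syntaxhighlight lang="python">
import torch
import torch.nn.functional as F

def to_unit_sphere(v, dim=-1):
    """Rescale vectors along `dim` to unit L2 norm."""
    return F.normalize(v, p=2, dim=dim)

hidden = to_unit_sphere(torch.randn(4, 128))           # hidden state vectors
embeddings = to_unit_sphere(torch.randn(50000, 128))   # embedding table rows
weight = to_unit_sphere(torch.randn(128, 512), dim=0)  # weight matrix columns
</syntaxhighlight>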
 
== Miscellaneous ==
'''Gradient normalization''' ('''GradNorm''')<ref>{{Cite journal |last1=Chen |first1=Zhao |last2=Badrinarayanan |first2=Vijay |last3=Lee |first3=Chen-Yu |last4=Rabinovich |first4=Andrew |date=2018-07-03 |title=GradNorm: Gradient Normalization for Adaptive Loss Balancing in Deep Multitask Networks |url=https://proceedings.mlr.press/v80/chen18a.html |journal=Proceedings of the 35th International Conference on Machine Learning |language=en |publisher=PMLR |pages=794–803 |arxiv=1711.02257}}</ref> normalizes gradient vectors during backpropagation; it was proposed as a way to balance the gradient magnitudes contributed by different task losses in multitask learning.
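
A simplified sketch of rescaling gradient vectors to unit L2 norm before an optimizer step; the full GradNorm method instead learns per-task loss weights so that per-task gradient norms stay balanced, and the helper name below is illustrative:

<syntaxhighlight lang="python">
import torch

def normalize_gradients_(parameters, eps=1e-12):
    """Rescale each parameter's gradient to unit L2 norm, in place."""
    for p in parameters:
        if p.grad is not None:
            p.grad.div_(p.grad.norm().clamp(min=eps))

model = torch.nn.Linear(10, 1)
loss = model(torch.randn(8, 10)).pow(2).mean()
loss.backward()
normalize_gradients_(model.parameters())  # an optimizer step would follow here
</syntaxhighlight>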
 
 
== See also ==