Some normalization methods were designed for use in [[Transformer (deep learning architecture)|transformers]].
The original 2017 transformer used the "post-LN" configuration, in which each LayerNorm is applied after the residual addition. Networks in this configuration were difficult to train, requiring careful [[Hyperparameter optimization|hyperparameter tuning]] and a [[learning rate]] "warm-up", in which the learning rate starts small and is gradually increased. The pre-LN convention, proposed several times in 2018,<ref>{{Citation |last1=Wang |first1=Qiang |title=Learning Deep Transformer Models for Machine Translation |date=2019-06-04 |url=https://arxiv.org/abs/1906.01787 |access-date=2024-10-18 |arxiv=1906.01787 |last2=Li |first2=Bei |last3=Xiao |first3=Tong |last4=Zhu |first4=Jingbo |last5=Li |first5=Changliang |last6=Wong |first6=Derek F. |last7=Chao |first7=Lidia S.}}</ref> instead applies each LayerNorm to the input of a sublayer, inside the residual branch. It was found to be easier to train, requiring no warm-up and converging faster.<ref name="auto1">{{cite arXiv |eprint=2002.04745 |class=cs.LG |first1=Ruibin |last1=Xiong |first2=Yunchang |last2=Yang |title=On Layer Normalization in the Transformer Architecture |date=2020-06-29 |last3=He |first3=Di |last4=Zheng |first4=Kai |last5=Zheng |first5=Shuxin |last6=Xing |first6=Chen |last7=Zhang |first7=Huishuai |last8=Lan |first8=Yanyan |last9=Wang |first9=Liwei |last10=Liu |first10=Tie-Yan}}</ref>
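The difference between the two conventions is only where the LayerNorm sits relative to the residual connection, as in the following minimal PyTorch sketch (the <code>sublayer</code> argument, standing in for an attention or feed-forward module, is an illustrative assumption, not code from the papers):

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class PostLNBlock(nn.Module):
    """Post-LN (original 2017 design): normalize after the residual addition."""
    def __init__(self, d_model: int, sublayer: nn.Module):
        super().__init__()
        self.sublayer = sublayer
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):
        return self.norm(x + self.sublayer(x))

class PreLNBlock(nn.Module):
    """Pre-LN: normalize the sublayer input; the residual path is untouched."""
    def __init__(self, d_model: int, sublayer: nn.Module):
        super().__init__()
        self.sublayer = sublayer
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):
        return x + self.sublayer(self.norm(x))
</syntaxhighlight>

Because the pre-LN residual path carries the input forward without passing through any normalization, gradients can flow through it directly, which is one explanation offered for its more stable training.<ref name="auto1" />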
'''FixNorm'''<ref>{{Citation |last1=Nguyen |first1=Toan Q. |title=Improving Lexical Choice in Neural Machine Translation |date=2018-04-17 |url=https://arxiv.org/abs/1710.01329 |access-date=2024-10-18 |arxiv=1710.01329 |last2=Chiang |first2=David}}</ref> and '''ScaleNorm'''<ref>{{Cite journal |last1=Nguyen |first1=Toan Q. |last2=Salazar |first2=Julian |date=2019-11-02 |title=Transformers without Tears: Improving the Normalization of Self-Attention |doi=10.5281/zenodo.3525484|arxiv=1910.05895 }}</ref> both normalize activation vectors in a transformer. The FixNorm method divides the ''output'' vectors of the transformer by their L2 norms, then multiplies by a learned scalar parameter <math>g</math>. ScaleNorm replaces all LayerNorms inside a transformer with division by the L2 norm followed by multiplication by a learned scalar parameter <math>g'</math> (shared by all ScaleNorm modules of the transformer). '''Query-Key normalization''' ('''QKNorm''')<ref>{{Cite journal |last1=Henry |first1=Alex |last2=Dachapally |first2=Prudhvi Raj |last3=Pawar |first3=Shubham Shantaram |last4=Chen |first4=Yuxuan |date=November 2020 |editor-last=Cohn |editor-first=Trevor |editor2-last=He |editor2-first=Yulan |editor3-last=Liu |editor3-first=Yang |title=Query-Key Normalization for Transformers |url=https://aclanthology.org/2020.findings-emnlp.379/ |journal=Findings of the Association for Computational Linguistics: EMNLP 2020 |___location=Online |publisher=Association for Computational Linguistics |pages=4246–4253 |doi=10.18653/v1/2020.findings-emnlp.379|arxiv=2010.04245 }}</ref> normalizes query and key vectors to unit L2 norm before their dot product is computed in the attention mechanism.
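All three methods amount to replacing a vector <math>x</math> by <math>g \, x / \|x\|_2</math>; they differ in which vectors are rescaled and where the learned scale appears. A sketch under the same assumptions as above (the scale factor in the QKNorm function is one illustrative choice):

<syntaxhighlight lang="python">
import torch
import torch.nn.functional as F

def fixnorm(output, g):
    # FixNorm: rescale the transformer's output vectors to L2 norm g.
    return g * F.normalize(output, p=2, dim=-1)

def scalenorm(x, g):
    # ScaleNorm: drop-in replacement for LayerNorm; the scalar g is shared
    # by all ScaleNorm modules in the transformer.
    return g * F.normalize(x, p=2, dim=-1)

def qknorm_scores(q, k, g):
    # QKNorm: normalize queries and keys to unit L2 norm, so attention
    # logits become cosine similarities scaled by a learned factor g.
    q = F.normalize(q, p=2, dim=-1)
    k = F.normalize(k, p=2, dim=-1)
    return g * (q @ k.transpose(-2, -1))
</syntaxhighlight>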
In '''nGPT''', many vectors are normalized to have unit L2 norm:<ref>{{Citation |last1=Loshchilov |first1=Ilya |title=nGPT: Normalized Transformer with Representation Learning on the Hypersphere |date=2024-10-01 |url=https://arxiv.org/abs/2410.01131 |access-date=2024-10-18 |arxiv=2410.01131 |last2=Hsieh |first2=Cheng-Ping |last3=Sun |first3=Simeng |last4=Ginsburg |first4=Boris}}</ref> hidden state vectors, input and output embedding vectors, weight matrix columns, and query and key vectors.
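Each of these normalizations is the same projection onto the unit hypersphere, applied along a different axis; a brief sketch (the tensor shapes are illustrative assumptions):

<syntaxhighlight lang="python">
import torch
import torch.nn.functional as F

def to_unit_sphere(x, dim=-1):
    # Rescale vectors along `dim` to unit L2 norm.
    return F.normalize(x, p=2, dim=dim)

W = torch.randn(512, 2048)      # illustrative weight matrix
W = to_unit_sphere(W, dim=0)    # each column now has unit norm
h = torch.randn(8, 512)         # illustrative hidden-state vectors
h = to_unit_sphere(h)           # each hidden state now has unit norm
</syntaxhighlight>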