== BitNet ==
In 2024, Ma et al., researchers at [[Microsoft]], announced that their 1.58-bit model, '''''BitNet''' b1.58'', is comparable in performance to the 16-bit [[Llama 2]] and opens the era of 1-bit LLMs.{{sfn|Huyen|2024|p=330}} The creators of BitNet did not use post-training quantization of weights; instead, they relied on a new ''BitLinear'' transform that replaces the ''nn.Linear'' layer of the traditional transformer design.{{sfn|Wang|Ma|Dong|Huang|2023|p=1}}
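
In BitNet b1.58, each weight is constrained to one of the three values −1, 0, or +1, which corresponds to log<sub>2</sub> 3 ≈ 1.58 bits of information per weight. The following is a minimal [[PyTorch]] sketch of a BitLinear-style layer, using absmean ternary quantization of the weights with a straight-through estimator for training; it is an illustrative simplification, not Microsoft's released implementation, and omits the activation quantization described in the paper.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn
import torch.nn.functional as F

class BitLinear(nn.Linear):
    """Illustrative BitLinear-style layer (not the official implementation).

    Latent full-precision weights are quantized on the fly to the ternary
    values {-1, 0, +1} ("1.58 bits"), scaled by their mean absolute value.
    """

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.weight
        # Absmean scaling: normalize by the mean absolute weight value.
        scale = w.abs().mean().clamp(min=1e-5)
        # Round the scaled weights to the nearest value in {-1, 0, +1}.
        w_ternary = (w / scale).round().clamp(-1, 1)
        # Straight-through estimator: the quantized weights are used in the
        # forward pass while gradients flow to the latent weights.
        w_q = w + (w_ternary * scale - w).detach()
        return F.linear(x, w_q, self.bias)

# Drop-in replacement for nn.Linear in a transformer block:
layer = BitLinear(512, 512)
y = layer(torch.randn(2, 512))
</syntaxhighlight>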
 
In 2025, Microsoft researchers released ''BitNet b1.58 2B4T'', an [[open-weights]] model with open inference code, demonstrating performance competitive with full-precision models at 2 billion parameters and 4 trillion training tokens.{{sfn|Ma|Wang|Huang|Zhang|2025|p=}}