{{in use}}
A '''1.58-bit Large Language Model''' ('''1.58-bit LLM''') is a [[large language model]] whose weights are restricted to the three values -1, 0, and +1, requiring about 1.58 bits per weight since log<sub>2</sub> 3 ≈ 1.58. This restriction lets the model replace costly multiplications with additions and subtractions and reduces the memory needed to store the weights. Because the end-task performance and perplexity of 1.58-bit LLMs are close to those of their "full-precision" (16-bit [[FP16]] or [[BF16]]) counterparts, this design can reach the same [[artificial intelligence]] goals with much lower hardware requirements, latency, and training effort.{{sfn|Ma|Wang|Ma|Wang|2024|p=1}}
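
As a minimal illustrative sketch (not taken from the cited paper), the following shows how a dot product over ternary weights avoids multiplications: a weight of +1 adds the corresponding activation, -1 subtracts it, and 0 skips it entirely.

<syntaxhighlight lang="python">
def ternary_dot(weights, activations):
    """Dot product with weights restricted to {-1, 0, +1}.

    No multiplications are needed: each term is an addition,
    a subtraction, or nothing at all.
    """
    total = 0.0
    for w, x in zip(weights, activations):
        if w == 1:
            total += x   # +1: add the activation
        elif w == -1:
            total -= x   # -1: subtract the activation
        # w == 0: contributes nothing, skip
    return total

# Example: ternary weights, higher-precision activations
print(ternary_dot([1, 0, -1, 1], [0.5, 2.0, -1.5, 3.0]))  # 5.0
</syntaxhighlight>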
 
==References==