Computer number format
{{short description|Internal representation of numeric values in a digital computer}}
{{refimprove|date=October 2022}}
A '''computer number format''' is the internal representation of numeric values in digital device hardware and software, such as in programmable [[computer]]s and [[calculator]]s.<ref>{{cite book |title = Inside the machine: an illustrated introduction to microprocessors and computer architecture |author = Jon Stokes |publisher = No Starch Press |year = 2007 |isbn = 978-1-59327-104-6 |page = 66 |url = https://books.google.com/books?id=Q1zSIarI8xoC&pg=PA66}}</ref> Numerical values are stored as groupings of [[bit]]s, such as [[byte]]s and words. The encoding between numerical values and bit patterns is chosen for convenience of the operation of the computer;{{cn|date=June 2020}} the encoding used by the computer's instruction set generally requires conversion for external use, such as for printing and display. Different types of processors may have different internal representations of numerical values and different conventions are used for integer and real numbers. Most calculations are carried out with number formats that fit into a processor register, but some software systems allow representation of arbitrarily large numbers using multiple words of memory.
 
==Binary number representation==
While a single bit, on its own, is able to represent only two values, a [[Bit string|string of bits]] may be used to represent larger values. For example, a string of three bits can represent up to eight distinct values as illustrated in Table 1.
 
As the number of bits composing a string increases, the number of possible ''0'' and ''1'' combinations increases [[Exponentiation|exponentially]]. A single bit allows only two value-combinations, two bits combined can make four separate values, three bits for eight, and so on, increasing with the formula 2<sup>n</sup>. The number of possible combinations doubles with each binary digit added, as illustrated in Table 2.
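
The growth is easy to check directly; the following Python sketch (illustrative only, not tied to any particular machine) enumerates every pattern for strings of one to three bits:

<syntaxhighlight lang="python">
# Enumerate all bit strings of length n and count them,
# confirming the 2**n growth described above.
from itertools import product

for n in range(1, 4):
    patterns = [''.join(bits) for bits in product('01', repeat=n)]
    print(n, len(patterns), patterns)

# 1 2 ['0', '1']
# 2 4 ['00', '01', '10', '11']
# 3 8 ['000', '001', '010', '011', '100', '101', '110', '111']
</syntaxhighlight>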
 
Bit strings of specific lengths are used to represent different kinds of data, and these groupings have specific names.
 
A ''[[byte]]'' is a bit string containing the number of bits needed to represent a [[Character (computing)|character]]. On most modern computers, this is an eight-bit string. Because the definition of a byte is related to the number of bits composing a character, some older computers have used a different bit length for their byte.<ref>{{cite web|title=byte definition|url=http://catb.org/~esr/jargon/html/B/byte.html|access-date=24 April 2012}}</ref> In many [[Computer Architecture|computer architectures]], the byte is the smallest [[Byte addressing|addressable unit]], the atom of addressability, so to speak.<!-- Find an external source for this --> For example, even though 64-bit processors may address memory sixty-four bits at a time, they may still split that memory into eight-bit pieces. This is called byte-addressable memory. Historically, many [[Central processing unit|CPU]]s read data in some multiple of eight bits.<ref>{{cite web|title=Microprocessor and CPU (Central Processing Unit)|url=http://www.networkdictionary.com/hardware/mc.php|publisher=Network Dictionary|access-date=1 May 2012|archive-url=https://web.archive.org/web/20171003225434/http://www.networkdictionary.com/hardware/mc.php|archive-date=3 October 2017|url-status=dead}}</ref> Because the byte size of eight bits is so common, but the definition is not standardized, the term [[Octet (computing)|octet]] is sometimes used to explicitly describe an eight-bit sequence.
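
As an illustration of byte addressability, a 64-bit quantity can be viewed as eight separately accessible bytes; the following minimal Python sketch assumes little-endian byte order and an arbitrary example value:

<syntaxhighlight lang="python">
# View a 64-bit value as eight individually addressable bytes.
# Little-endian byte order is assumed here for illustration.
value = 0x0123456789ABCDEF
raw = value.to_bytes(8, byteorder='little')
print([f"0x{b:02X}" for b in raw])
# ['0xEF', '0xCD', '0xAB', '0x89', '0x67', '0x45', '0x23', '0x01']
</syntaxhighlight>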
 
A ''[[nibble]]'' (sometimes ''nybble'') is a number composed of four bits.<ref>{{cite web|title=nybble definition|url=http://catb.org/~esr/jargon/html/N/nybble.html|access-date=3 May 2012}}</ref> Being a [[half-byte]], the nibble was named as a play on words. A person may need several nibbles for one bite from something; similarly, a nybble is a part of a byte. Because four bits allow for sixteen values, a nibble is sometimes known as a [[hexadecimal digit]].<ref>{{cite web|title=Nybble|url=http://www.techterms.com/definition/nybble|publisher=TechTerms.com|access-date=3 May 2012}}</ref>
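
Because each nibble corresponds to exactly one hexadecimal digit, a byte can always be written as two hex digits; a minimal Python sketch (the example value is arbitrary):

<syntaxhighlight lang="python">
# Split a byte into its high and low nibbles; each nibble is
# exactly one hexadecimal digit.
value = 0xB7                 # one byte: 1011 0111 in binary
high = (value >> 4) & 0xF    # 0b1011 -> 0xB
low = value & 0xF            # 0b0111 -> 0x7
print(f"{value:08b} -> {high:X}{low:X}")   # 10110111 -> B7
</syntaxhighlight>
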
The number of bits needed for the precision and range desired must be chosen to store the fractional and integer parts of a number. For instance, using a 32-bit format, 16 bits may be used for the integer and 16 for the fraction.
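
Such a 16.16 format can be manipulated with plain integer arithmetic; the following Python sketch (function names and the rounding choice are illustrative, not a standard API) stores a value as an integer count of 1/65536 steps:

<syntaxhighlight lang="python">
# Minimal 16.16 fixed-point sketch: values are stored as integer
# multiples of 2**-16.
SCALE = 1 << 16              # 16 fractional bits

def to_fixed(x: float) -> int:
    return round(x * SCALE)

def from_fixed(n: int) -> float:
    return n / SCALE

n = to_fixed(3.25)
print(hex(n))                # 0x34000: integer part 3, fraction 0x4000
print(from_fixed(n))         # 3.25
</syntaxhighlight>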
 
The eights bit is followed by the fours bit, then the twos bit, then the ones bit. The fractional bits continue the pattern set by the integer bits: the next bit is the halves bit, then the quarters bit, then the eighths bit, and so on. For example:
{| class="toccolours"
|- style="text-align:center"
* a sign bit, giving the sign of the number.
 
With the bits stored in 8 bytes of memory:
 
{| class="wikitable"
The representation has a limited precision. For example, only 15 decimal digits can be represented with a 64-bit real. If a very small floating-point number is added to a large one, the result is just the large one. The small number was too small to even show up in 15 or 16 digits of resolution, and the computer effectively discards it. Analyzing the effect of limited precision is a well-studied problem. Estimates of the magnitude of round-off errors and methods to limit their effect on large calculations are part of any large computation project. The precision limit is different from the range limit, as it affects the significand, not the exponent.
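
This absorption effect is easy to demonstrate; a minimal Python sketch (Python's <code>float</code> is a 64-bit IEEE 754 value on common platforms, which is assumed here):

<syntaxhighlight lang="python">
# A 64-bit float carries about 15-16 significant decimal digits,
# so adding 1.0 to 1.0e16 changes nothing: the small addend falls
# below the precision limit and is discarded.
big = 1.0e16
small = 1.0
print(big + small == big)    # True: the 1.0 vanished
</syntaxhighlight>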
 
The significand is a binary fraction that does not necessarily match a decimal fraction exactly. In many cases a sum of reciprocal powers of 2 does not match a specific decimal fraction, and the results of computations will be slightly off. For example, the decimal fraction "0.1" is equivalent to an infinitely repeating binary fraction: 0.000110011...<ref>{{cite web|last=Goebel|first=Greg|title=Computer Numbering Format|url=http://www.vectorsite.net/tsfloat.html|archive-url=https://archive.today/20130222091425/http://www.vectorsite.net/tsfloat.html|url-status=usurped|archive-date=February 22, 2013|access-date=10 September 2012}}</ref>
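
The effect can be observed directly; a short Python sketch (again assuming 64-bit IEEE 754 floats):

<syntaxhighlight lang="python">
# The stored double nearest to 0.1 is not exactly 0.1, and the
# rounding error shows up in ordinary arithmetic.
from decimal import Decimal

print(Decimal(0.1))        # 0.1000000000000000055511151231257827...
print(0.1 + 0.2 == 0.3)    # False: both sides carry rounding error
</syntaxhighlight>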
 
==Numbers in programming languages==
* [[Binary-coded decimal]]
* [[Binary-to-text encoding]]
* [[Binary number]]
* [[Gray code]]
* [[Numeral system]]
* [[Unum (number format)|Unum]]
* [[Posit (number format)|Posit]]
 
==Notes and references==