{{short description|Internal representation of numeric values in a digital computer}}
{{refimprove|date=June 2020}}
Normally, numeric values are stored as groupings of [[bit]]s, named for the number of bits that compose them.{{cn|date=June 2020}} The encoding between numerical values and bit patterns is chosen for convenience of the operation of the computer;{{cn|date=June 2020}} the bit format used by the computer's instruction set generally requires conversion for external use, such as printing and display. Different types of processors may have different internal representations of numerical values, and different conventions are used for integer and real numbers. Most calculations are carried out with number formats that fit into a processor register, but some software systems allow representation of arbitrarily large numbers using multiple words of memory.
==Binary number representation==
While a single bit, on its own, is able to represent only two values, a [[Bit string|string of bits]] may be used to represent larger values. For example, a string of three bits can represent up to eight distinct values as illustrated in Table 1.
As the number of bits composing a string increases, the number of possible ''0'' and ''1'' combinations increases [[Exponentiation|exponentially]].
Groupings with a specific number of bits are used to represent varying things and have specific names.
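The exponential growth described above can be illustrated with a short Python sketch (the function name here is illustrative, not part of any standard library):

```python
# An n-bit string can represent 2**n distinct values.
def distinct_values(n_bits):
    return 2 ** n_bits

# A string of three bits represents eight distinct values (Table 1);
# an eight-bit byte represents 256.
print(distinct_values(3))   # 8
print(distinct_values(8))   # 256
```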
A ''[[byte]]'' is a bit string containing the number of bits needed to represent a [[Character (computing)|character]]. On most modern computers, this is an eight-bit string. Because the definition of a byte is related to the number of bits composing a character, some older computers have used a different bit length for their byte.<ref>{{cite web|title=byte definition|url=http://catb.org/~esr/jargon/html/B/byte.html}}</ref>
A ''[[nibble]]'' (sometimes ''nybble'') is a number composed of four bits.<ref>{{cite web|title=nybble definition|url=http://catb.org/~esr/jargon/html/N/nybble.html}}</ref> Being a half-byte, the nibble was named as a play on words.
==Octal and hexadecimal==
{{See also|Base64}}
[[Octal]] and [[hexadecimal]] encoding are convenient ways to represent binary numbers, as used by computers.
When typing numbers, formatting characters are used to describe the number system, for example 0000_0000B or 0b0000_0000 for binary and 0F8H or 0xf8 for hexadecimal numbers.
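Python, for example, accepts these literal prefixes directly (0b for binary, 0o for octal, 0x for hexadecimal) and allows underscores as digit-group separators:

```python
# The same value written in binary, octal, hexadecimal, and decimal.
binary_form = 0b1111_1000   # underscores group the digits, as in 0b0000_0000
octal_form = 0o370
hex_form = 0xF8
decimal_form = 248

print(binary_form == octal_form == hex_form == decimal_form)  # True
print(format(248, '#010b'))                                   # 0b11111000
```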
{{Main | Positional_notation#Base_conversion| l1=Positional notation (base conversion) }}
Each of these number systems is a positional system, but while decimal weights are powers of 10, the octal weights are powers of 8 and the hexadecimal weights are powers of 16. To convert from octal or hexadecimal to decimal, multiply each digit by the weight of its position and add the results, for example:
: <math>756_8 = 7 \times 8^2 + 5 \times 8^1 + 6 \times 8^0 = 448 + 40 + 6 = 494_{10}</math>
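A positional conversion of this kind is easy to check in Python, whose built-in int() accepts an explicit base (the variable names below are illustrative):

```python
# Each octal digit is weighted by a power of 8.
digits = [7, 5, 6]   # octal 756
value = sum(d * 8 ** i for i, d in enumerate(reversed(digits)))
print(value)         # 494

# int() with an explicit base performs the same positional conversion.
print(int('756', 8))   # 494
print(int('f8', 16))   # 248: hexadecimal weights are powers of 16
```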
The number of bits needed for the precision and range desired must be chosen to store the fractional and integer parts of a number. For instance, using a 32-bit format, 16 bits may be used for the integer and 16 for the fraction.
The eight's bit is followed by the four's bit, then the two's bit, then the one's bit. The fractional bits continue the pattern set by the integer bits: the next bit is the half's bit, then the quarter's bit, then the eighth's bit, and so on.
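A 16.16 fixed-point layout of the kind described (16 integer bits, 16 fractional bits) amounts to scaling by 2<sup>16</sup>; a minimal Python sketch, with illustrative names:

```python
FRACTION_BITS = 16
SCALE = 1 << FRACTION_BITS   # 2**16, the weight of the one's bit

def to_fixed(x):
    """Encode a real number as a 32-bit 16.16 fixed-point integer."""
    return round(x * SCALE)

def from_fixed(f):
    """Decode a 16.16 fixed-point integer back to a float."""
    return f / SCALE

# 0.75 is the half's bit plus the quarter's bit: binary 0.11
print(bin(to_fixed(0.75)))          # 0b1100000000000000
print(from_fixed(to_fixed(2.75)))   # 2.75 (exactly representable)
```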
{| class="toccolours"
|- style="text-align:center"
===Floating-point numbers===
While both unsigned and signed integers are used in digital systems, even a 32-bit integer is not enough to handle all the range of numbers a calculator can handle, and that's not even including fractions. To approximate the greater range and precision of [[real number]]s, we have to abandon signed integers and fixed-point numbers and go to a "[[floating point|floating-point]]" format.
In the decimal system, we are familiar with floating-point numbers of the form ([[scientific notation]]):
: 1.1030402E5
which means "1.1030402 times 1 followed by 5 zeroes". We have a certain numeric value (1.1030402) known as a "[[significand]]", multiplied by a power of 10 (E5, meaning 10<sup>5</sup> or 100,000), known as an "[[exponentiation|exponent]]". If we have a negative exponent, that means the number is multiplied by a 1 that many places to the right of the decimal point. For example:
: 2.3434E−6 = 2.3434 × 10<sup>−6</sup> = 2.3434 × 0.000001 = 0.0000023434
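Most programming languages accept this "E" notation directly as floating-point literals, so the two examples above can be verified in Python:

```python
# "E" notation: a significand scaled by a power of ten.
a = 1.1030402E5   # 1.1030402 × 10**5
b = 2.3434E-6     # 2.3434 × 10**-6

# Both spellings denote the same real value, so they parse to the
# same floating-point number.
print(a == 110304.02)      # True
print(b == 0.0000023434)   # True
```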
The advantage of this scheme is that by using the exponent we can get a much wider range of numbers, even if the number of digits in the significand, or the "numeric precision", is much smaller than the range. Similar binary floating-point formats can be defined for computers. There are a number of such schemes; the most popular has been defined by the [[Institute of Electrical and Electronics Engineers]] (IEEE). The [[IEEE floating point|IEEE 754-2008]] standard specification defines a 64-bit floating-point format with:
* an 11-bit binary exponent, using "excess-1023" format. Excess-1023 means the exponent appears as an unsigned binary integer from 0 to 2047; subtracting 1023 gives the actual signed value
* a sign bit, giving the sign of the number.
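These fields can be extracted from a double's bit pattern; a sketch using Python's standard struct module (the function name is illustrative):

```python
import struct

def double_fields(x):
    """Split a 64-bit IEEE 754 double into its sign, exponent, and
    significand (fraction) fields."""
    bits = struct.unpack('>Q', struct.pack('>d', x))[0]
    sign = bits >> 63                      # 1 sign bit
    exponent = (bits >> 52) & 0x7FF        # 11-bit excess-1023 exponent
    significand = bits & ((1 << 52) - 1)   # 52-bit fraction field
    return sign, exponent, significand

sign, exp, frac = double_fields(1.0)
print(sign, exp - 1023, frac)   # 0 0 0: +1.0 is 1.0 × 2**0
```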
{| class="wikitable"
This scheme provides numbers valid out to about 15 decimal digits, with the following range of numbers:
{| class="wikitable"
|-
!
leading to the following range of numbers:
{| class="wikitable"
|-
!
The representation has a limited precision. For example, only 15 decimal digits can be represented with a 64-bit real. If a very small floating-point number is added to a large one, the result is just the large one. The small number was too small to even show up in 15 or 16 digits of resolution, and the computer effectively discards it. Analyzing the effect of limited precision is a well-studied problem. Estimates of the magnitude of round-off errors and methods to limit their effect on large calculations are part of any large computation project. The precision limit is different from the range limit, as it affects the significand, not the exponent.
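The absorption effect described, where a small addend is effectively discarded, is easy to demonstrate with 64-bit floats in Python:

```python
# At magnitude 1e16, adjacent 64-bit doubles are 2 apart, so adding 1
# leaves the large value unchanged: the small addend is absorbed.
print(1.0e16 + 1.0 == 1.0e16)   # True

# At magnitude 1e15 the sum is still exactly representable.
print(1.0e15 + 1.0 == 1.0e15)   # False
```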
The significand is a binary fraction that doesn't necessarily perfectly match a decimal fraction. In many cases a sum of reciprocal powers of 2 does not match a specific decimal fraction, and the results of computations will be slightly off. For example, the decimal fraction "0.1" is equivalent to an infinitely repeating binary fraction: 0.000110011...<ref>{{cite web|last=Goebel|first=Greg|title=Computer Numbering Format|url=http://www.vectorsite.net/tsfloat.html}}</ref>
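Because 0.1 has no finite binary expansion, repeatedly adding it drifts away from the exact decimal result; a decimal floating-point type (Python's standard decimal module, for instance) represents 0.1 exactly:

```python
from decimal import Decimal

# Binary floating point: ten additions of 0.1 do not reach 1.0.
total = sum(0.1 for _ in range(10))
print(total == 1.0)   # False
print(total)          # 0.9999999999999999

# Decimal floating point represents 0.1 exactly, so the sum is exact.
exact = sum(Decimal('0.1') for _ in range(10))
print(exact == 1)     # True
```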
==Numbers in programming languages==
Programming in [[assembly language]] requires the programmer to keep track of the representation of numbers. Where the processor does not support a required mathematical operation, the programmer must work out a suitable algorithm and instruction sequence to carry out the operation; on some microprocessors, even integer multiplication must be done in software.
High-level [[programming language]]s such as [[Ruby (programming language)|Ruby]] and [[Python (programming language)|Python]] offer an abstract number that may be an expanded type such as [[rational number|rational]], [[bignum]], or [[complex number|complex]].
Some languages, such as [[REXX]] and [[Java (programming language)|Java]], provide decimal floating-point arithmetic.
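High-level languages often supply such expanded representations directly; Python's standard library, for instance, includes arbitrary-precision ("bignum") integers and an exact rational type:

```python
from fractions import Fraction

# Arbitrary-precision integers need no fixed word size.
print(2 ** 100)   # 1267650600228229401496703205376

# An exact rational type avoids binary-fraction rounding entirely:
# one third summed three times is exactly 1.
third = Fraction(1, 3)
print(third + third + third)   # 1
```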
==See also==
* [[Arbitrary-precision arithmetic]]
* [[Binary-coded decimal]]
* [[Binary-to-text encoding]]
* [[Binary number]]
* [[Gray code]]
* [[Numeral system]]
* [[Unum (number format)|Unum]]
* [[Posit (number format)|Posit]]
==Notes and references==
{{reflist}}
{{vectorsite}}