{{dablink|This article is about a form of limited-precision arithmetic in computing. For the fixed points of a mathematical function, see [[fixed point (mathematics)]].}}
In [[computing]], a '''fixed-point number''' representation is a [[real data type]] for a number that has a fixed number of digits after the radix (decimal, binary, or hexadecimal) point. Fixed-point numbers are useful for representing fractional values in native [[two's complement]] format when the executing processor has no [[floating point]] unit (FPU), or when fixed point provides better performance. Most low-cost embedded processors do not have an FPU.
For example, a 16-bit signed fixed-point binary number with 4 bits after the binary point has 12 magnitude bits and 4 fractional bits, and can represent numbers between −2048 and 2047.9375. The asymmetry between the upper and lower bounds is due to the two's complement notation. A 16-bit unsigned fixed-point binary number with 4 fractional bits ranges from 0 to 4095.9375. Binary fixed-point numbers can represent negative powers of two (such as 1/2 or 1/4) exactly, but, like binary floating-point numbers, they cannot exactly represent negative powers of ten. If exact decimal fractions are required, [[Binary-coded decimal]] (BCD) format can be used instead. However, BCD does not use bits as efficiently as two's complement notation, nor is it as computationally fast.
For example, one-tenth (0.1) and one-hundredth (0.01) can be represented only approximately in binary fixed-point or binary floating-point form.
Fixed-point values always exactly represent values up to the maximum/minimum determined by the number of magnitude bits. This is in contrast to [[floating point|floating-point]] representations, which include an automatically managed exponent but cannot represent as many digits accurately (given the same number of bits in the representation).