{{short description|Internal representation of numeric values in a digital computer}}
{{refimprove|date=June 2020}}
A '''computer number format''' is the internal representation of numeric values in digital device hardware and software, such as in programmable [[computer]]s and [[calculator]]s.<ref>
{{cite book
| title = Inside the machine: an illustrated introduction to microprocessors and computer architecture
| page = 66
| url = https://books.google.com/books?id=Q1zSIarI8xoC&pg=PA66
}}</ref> Numerical values are stored as groupings of [[bit]]s, such as [[byte]]s and words. The encoding between numerical values and bit patterns is chosen for convenience of the operation of the computer;{{cn|date=June 2020}} the encoding used by the computer's instruction set generally requires conversion for external use, such as for printing and display. Different types of processors may have different internal representations of numerical values, and different conventions are used for integer and real numbers. Most calculations are carried out with number formats that fit into a processor register, but some software systems allow representation of arbitrarily large numbers using multiple words of memory.
 
==Binary number representation==
A ''[[byte]]'' is a bit string containing the number of bits needed to represent a [[Character (computing)|character]]. On most modern computers, this is an eight-bit string. Because the definition of a byte is related to the number of bits composing a character, some older computers have used a different bit length for their byte.<ref>{{cite web|title=byte definition|url=http://catb.org/~esr/jargon/html/B/byte.html|accessdate=24 April 2012}}</ref> In many [[Computer Architecture|computer architectures]], the byte is used to [[Byte addressing|address]]<!-- Find an external source for this --> specific areas of memory. For example, even though 64-bit processors may address memory sixty-four bits at a time, they may still split that memory into eight-bit pieces. This is called byte-addressable memory. Historically, many [[CPU]]s read data in some multiple of eight bits.<ref>{{cite web|title=Microprocessor and CPU (Central Processing Unit)|url=http://www.networkdictionary.com/hardware/mc.php|publisher=Network Dictionary|accessdate=1 May 2012|archive-url=https://web.archive.org/web/20171003225434/http://www.networkdictionary.com/hardware/mc.php|archive-date=3 October 2017|url-status=dead}}</ref> Because the byte size of eight bits is so common, but the definition is not standardized, the term [[Octet (computing)|octet]] is sometimes used to explicitly describe an eight-bit sequence.
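
As an illustration of byte addressability (a minimal C sketch; the value 0x11223344 and the printed format are arbitrary, not drawn from any source), a multi-byte integer occupies several consecutive byte addresses, each of which can be examined on its own:

<syntaxhighlight lang="c">
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t word = 0x11223344;                     /* a 32-bit value occupying four bytes */
    unsigned char *bytes = (unsigned char *)&word;  /* view the same storage one byte at a time */

    /* Each byte of the four-byte value has its own address and can be read on its own;
       the order in which 0x11..0x44 appear depends on the machine's byte order. */
    for (size_t i = 0; i < sizeof word; i++)
        printf("byte %zu at %p holds 0x%02X\n", i, (void *)&bytes[i], bytes[i]);

    return 0;
}
</syntaxhighlight>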
 
A ''[[nibble]]'' (sometimes ''nybble'') is a number composed of four bits.<ref>{{cite web|title=nybble definition|url=http://catb.org/~esr/jargon/html/N/nybble.html|accessdate=3 May 2012}}</ref> Being a [[half-byte]], the nibble was named as a play on words. A person may need several nibbles for one bite from something; similarly, a nybble is a part of a byte. Because four bits allow for sixteen values, a nibble is sometimes known as a [[hexadecimal digit]].<ref>{{cite web|title=Nybble|url=http://www.techterms.com/definition/nybble|publisher=TechTerms.com|accessdate=3 May 2012}}</ref><!-- A single digit in [[Binary_coded_decimal|binary-coded decimal]] can be stored in a nibble. -->
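
As a minimal C sketch (the byte value 0xB7 is an arbitrary illustration), the two nibbles of a byte can be isolated with a shift and a mask, and each prints as exactly one hexadecimal digit:

<syntaxhighlight lang="c">
#include <stdio.h>

int main(void) {
    unsigned char byte = 0xB7;                /* one byte: two nibbles, 0xB and 0x7 */
    unsigned char high = (byte >> 4) & 0x0F;  /* upper four bits */
    unsigned char low  =  byte       & 0x0F;  /* lower four bits */

    /* Four bits cover the values 0-15, so each nibble corresponds to one hex digit. */
    printf("byte 0x%02X -> high nibble 0x%X, low nibble 0x%X\n", byte, high, low);
    return 0;
}
</syntaxhighlight>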
 
==Octal and hexadecimal number display==
{{See also|Base64}}
[[Octal]] and hexadecimal encoding are convenient ways to represent binary numbers, as used by computers. Computer engineers often need to write out binary quantities, but in practice writing out a binary number such as 1001001101010001 is tedious and prone to errors. Therefore, binary quantities are written in a base-8, or "octal", or, much more commonly, a base-16, "hexadecimal" (''hex''), number format. In the decimal system, there are 10 digits, 0 through 9, which combine to form numbers. In an octal system, there are only 8 digits, 0 through 7. That is, the value of an octal "10" is the same as a decimal "8", an octal "20" is a decimal "16", and so on. In a hexadecimal system, there are 16 digits, 0 through 9 followed, by convention, with A through F. That is, a hexadecimal "10" is the same as a decimal "16" and a hexadecimal "20" is the same as a decimal "32". An example and comparison of numbers in different bases is described in the chart below.
 
When typing numbers, formatting characters are used to describe the number system, for example 000_0000B or 0b000_00000 for binary and 0F8H or 0xf8 for hexadecimal numbers.
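
As a minimal C sketch of these notations (binary literals with the 0b prefix are only standard as of C23, so that form is left in a comment), the same value can be written with octal and hexadecimal prefixes and printed back in several bases:

<syntaxhighlight lang="c">
#include <stdio.h>

int main(void) {
    int from_hex   = 0xF8;    /* hexadecimal literal, prefix 0x               */
    int from_octal = 0370;    /* octal literal, leading 0 -> also decimal 248 */
    /* int from_binary = 0b11111000;  binary literals need C23 or a compiler extension */

    /* The stored bit pattern is the same; only the written notation differs. */
    printf("decimal %d, octal %o, hex %X\n", from_hex, from_octal, from_hex);
    return 0;
}
</syntaxhighlight>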
{{Main | Positional_notation#Base_conversion| l1=Positional notation (base conversion) }}
 
Each of these number systems is a positional system, but while decimal weights are powers of 10, the octal weights are powers of 8 and the hexadecimal weights are powers of 16. To convert from hexadecimal or octal to decimal, for each digit one multiplies the value of the digit by the value of its position and then adds the results. For example:
 
: <math>
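A minimal C sketch of the positional rule described above (the digit strings used here are arbitrary illustrations, not the article's worked example): each digit is weighted by the appropriate power of the base and the results are summed.

<syntaxhighlight lang="c">
#include <stdio.h>
#include <string.h>

/* Convert a string of digits in the given base (2-16) to its decimal value
   by weighting each digit with the matching power of the base. */
static long to_decimal(const char *digits, int base) {
    long value = 0;
    for (size_t i = 0; i < strlen(digits); i++) {
        char c = digits[i];
        int d = (c >= '0' && c <= '9') ? c - '0' : (c & ~0x20) - 'A' + 10;
        value = value * base + d;   /* equivalent to summing digit * base^position */
    }
    return value;
}

int main(void) {
    printf("hex   2A -> %ld\n", to_decimal("2A", 16));  /* 2*16 + 10 = 42 */
    printf("octal 52 -> %ld\n", to_decimal("52", 8));   /* 5*8  + 2  = 42 */
    return 0;
}
</syntaxhighlight>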
 
===Floating-point numbers===
 
While both unsigned and signed integers are used in digital systems, even a 32-bit integer is not enough to cover the full range of numbers a calculator can handle, and that does not even include fractions. To approximate the greater range and precision of [[real number]]s, we have to abandon signed integers and fixed-point numbers and go to a "[[floating point|floating-point]]" format.
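
As a minimal C sketch of this tradeoff (assuming a typical platform where <code>int</code> is 32 bits), a 32-bit floating-point value reaches a far larger range than a 32-bit integer and can hold fractions, but it only approximates most real numbers:

<syntaxhighlight lang="c">
#include <stdio.h>
#include <limits.h>
#include <float.h>

int main(void) {
    /* A 32-bit signed integer tops out near 2.1 billion and cannot hold fractions. */
    printf("largest 32-bit int:   %d\n", INT_MAX);

    /* A 32-bit float covers a much wider range, but only approximately. */
    printf("largest 32-bit float: %e\n", FLT_MAX);
    printf("0.1 stored as float:  %.20f\n", 0.1f);   /* not exactly one tenth */
    return 0;
}
</syntaxhighlight>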
 
==See also==
* [[Binary numeral system]]
* [[Gray code]]
* [[Hexadecimal]]
* [[Numeral system]]
* [[Octal]]
 
==Notes and references==
 
{{vectorsite}}