Talk:Floating-point arithmetic/Archive 4: Difference between revisions

 
:If you look at the contents it is in the IEEE section and they returned the appropriate signed infinity, not NaN. I don't know what that is about exactly either - I believe it can be removed. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 15:54, 8 January 2010 (UTC)
== [[Significand]] ==
 
A succession of editors keep coming to this article to correct the spelling of 'significand' to 'significant.' How would people feel about using 'mantissa' instead? The pros and cons of mantissa are well discussed in our free-standing article called [[significand]]. Although IEEE prefers 'significand' it's not clear that they should win, because we ought to be documenting common usage, and be neutral between equally good ideas. Another option is to reword the article to not use 'significand' over and over, because it's not a term that people new to floating point are likely to have heard. So it doesn't have much explanatory value, even though we use it in the article 30 times. [[User:EdJohnston|EdJohnston]] ([[User talk:EdJohnston|talk]]) 18:18, 23 November 2007 (UTC)
:I suspect that the editors that are unfamiliar with the term 'significand' would be even more confused by the term 'mantissa'. At least the former is similar to a more common word (significant) which is in the right area.
:If changing it at all, I would change it to 'coefficient', which means the same but is less technical than 'significand' and certainly more correct than 'mantissa'. [[User:Mfc|mfc]] ([[User talk:Mfc|talk]]) 20:09, 23 November 2007 (UTC)
 
"Significand" may be the most pedantically correct term, but it is linguistically unfortunate. Just look at all the recent edits that changed "significand" to "significant", just to be set right again. Looking at these edits I noticed that they typically involve the form "'''''n''''' significand bits". That phrase is syntactically correct with a single 'd' to 't' consonant substitution. Making the substitution changes the meaning. It obviously looks like a typo to a lot of people. I went through and recast the occurrences of "significand" followed by "bit" or "bits". For the most part I think it makes it harder to misread. Let's try it for a while, anyway. [[User:Ferritecore|Ferritecore]] ([[User talk:Ferritecore|talk]]) 01:28, 1 March 2008 (UTC)
:Your change seems worthwhile to reduce confusion. There seems to be a reasonable argument that 'mantissa' is a worse alternative, but 'coefficient' still has some charm. The phrase 'floating point coefficient' gets about 8 times as many Google hits as 'floating point significand.' Those of us who are used to hearing the lingo of the IEEE standard may have overestimated how common 'significand' is. Most people have a notion of what a coefficient is from any math course, even when they're unfamiliar with floating point. Anyone willing to support 'coefficient' for this article? After all this is not the article about the IEEE standard. [[User:EdJohnston|EdJohnston]] ([[User talk:EdJohnston|talk]]) 02:24, 1 March 2008 (UTC)
::I am not fond of the term 'significand', let's get that out of the way up front. I kept it in my edit after skimming the significand mantissa explanation earlier in the article. After some further investigation I am not convinced that significand is really any better. Let's look at the available alternatives:
::* [[coefficient]] does have more than just some charm. The floating point number can be expressed as a single-term polynomial (monomial) CB<sup>X</sup> with a coefficient, a base and an exponent. Reasonably educated people can be expected to recognize and correctly understand the word. If I had just invented floating point this would likely be the word I would use.
::* [[mantissa]] is much maligned here. The argument given against it is at least partially incorrect. The [http://mathworld.wolfram.com/Mantissa.html Wolfram MathWorld definition] does not mention logs at all. Mathematically, a mantissa is the fractional part of any real number, not just of a log. My math handbook has a table of "mantissas of common logarithms", not a "table of mantissas". The term is also used to describe the corresponding part of a number represented in scientific notation. The term has a long and respectable history. The [[significand]] article has a 1947 [[John von Neumann]] quote using mantissa in the floating point sense. The term was the one nearly universally used, and it is still in common use. The term is not perfect: the bit-field stored in a floating point number is not the mantissa of the number represented; it is the number represented, normalized to appear "in the form of a mantissa", or having the appearance of a mantissa.
::* [[significand]] is a relatively new term. It was probably coined in response to perceived difficulties with [[mantissa]]. It is apparently used by the IEEE standard. As a coined word it means exactly what the coiner intended, without the baggage of prior or other meanings. In spite of the IEEE floating point standard being around for 20+ years, significand hasn't managed to displace mantissa in common usage, probably because it looks like a typo and can be confused with significant. The term does not appear anywhere on the Wolfram MathWorld site.
::* [[fraction]] is, oddly enough, used by the [[IEEE 754-1985]] article. It is descriptive, universally understood (probably more so than coefficient). It strikes me as being slightly less accurate than coefficient.
 
::I personally would prefer to use mantissa as the generally accepted technical term, but am unwilling to put much energy into fighting against significand. The discussion of the issue in the article needs to be corrected and improved. I may do so when I have the time to give it the proper care and balance.
 
::I mention Wolfram MathWorld because there is a hazard in consulting mainstream dictionaries, even ones as respectable as the OED, for technical or scientific terms: they are frequently not quite right. Computer terms are frequently borrowed from other disciplines. Nobody worries much that a computer virus lacks a protein coat and a DNA core - biology is sufficiently removed from computer science. When borrowing a term from math, however, the separation is not so great. I went to Wolfram in hopes of getting a math insight into the terms. [[User:Ferritecore|Ferritecore]] ([[User talk:Ferritecore|talk]]) 15:01, 2 March 2008 (UTC)
 
All the IBM documentation that I have seen from S/360 through z/Architecture HFP (Hexadecimal Floating Point) uses fraction, as it is less than one in the usual interpretation. I believe coefficient is sometimes used for other machines, where it isn't a fraction. I don't mind coefficient or fraction. I like significand, as, being new, it doesn't have previous usage to confuse people. The term "mantissa of common logarithms" is correct, as the table does not include the characteristics of the logs (the integer part). Only in a logarithmic representation (discussed here I believe) would mantissa and characteristic be correct. [[User:Gah4|Gah4]] ([[User talk:Gah4|talk]]) 23:02, 29 April 2011 (UTC)
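Whichever term wins, all of the candidates above name the same component of the representation. As a quick illustrative sketch (not from the discussion above), Python's `math.frexp` exposes exactly this decomposition for a binary float:

```python
import math

# Decompose a binary float into the disputed component and its exponent:
# value == c * 2**e with 0.5 <= |c| < 1, whatever one calls c
# (significand, mantissa, coefficient, or fraction).
c, e = math.frexp(6.0)
print(c, e)  # 0.75 3, because 6.0 == 0.75 * 2**3

# ldexp reverses the decomposition exactly
assert math.ldexp(c, e) == 6.0
```

Note that `frexp` normalizes the component into [0.5, 1), one of several equally valid conventions for where the radix point sits.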
 
== More user-friendly Introduction ==
 
As a total amateur when it comes to computing, I am trying to find out what the floating point is all about. I was doing fine for a very short time before I came across this sentence in the introduction:
 
"For example, a fixed-point representation that has seven decimal digits, with the decimal point assumed to be positioned after the fifth digit, can represent the numbers 12345.67, 123.45, 1.23 and so on,"
 
How do the numbers 12345.67, 123.45, 1.23 correspond to the decimal point being assumed to be positioned after the fifth digit when 123.45 and 1.23 have neither five digits nor a decimal point after the (absent) "fifth digit"?
 
For an encyclopedia, which, I think, by definition, should be basically comprehensible even to amateurs, this statement is assuming too much knowledge of computing on the part of the reader.
 
Can someone please tweak it to make it a little more user-friendly? [[User:tripbeetle|tripbeetle]] ([[User talk:tripbeetle|talk]]) 3:36, 8 April 2010 (UTC)
 
:What it means is a fixed point number that can occupy a space like #####.##, where each # is a digit. 123.45 can occupy that space by putting in 00123.45. Saying something like 'two decimal places' would be better; I'll stick that in. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 10:16, 8 April 2010 (UTC)
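The padding explanation above can be sketched numerically: a seven-digit fixed-point format with two decimal places stores every value as an integer count of hundredths, and shorter values are zero-padded into the #####.## shape (illustrative Python; the helper names are made up for this sketch):

```python
SCALE = 100  # two decimal places: values stored as integer hundredths
DIGITS = 7   # seven decimal digits total, i.e. #####.##

def to_fixed(x):
    """Store x as an integer number of hundredths, rejecting overflow."""
    n = round(x * SCALE)
    if abs(n) >= 10 ** DIGITS:
        raise OverflowError("does not fit in #####.##")
    return n

def show(n):
    """Render the stored integer back in the #####.## shape."""
    return f"{n / SCALE:08.2f}"

print(show(to_fixed(12345.67)))  # 12345.67
print(show(to_fixed(123.45)))    # 00123.45  (zero-padded, as described above)
print(show(to_fixed(1.23)))      # 00001.23
```

The point of the example is that the decimal point never moves: every stored value shares the same scale, which is exactly what distinguishes fixed point from floating point.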
 
== Significand versus significant ==
 
I agree with the IP editor that significant is better than significand in the leader. Significand is a correct term for that part, but the term has not been introduced yet. Also it would be 'significand' versus 'significant digits'. Saying significant digits explains the bit well for a simple explanation before reading further into the article. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 23:35, 9 November 2010 (UTC)
 
== Octuple Precision ==
 
Is there any compiler that does octuple precision?
(256-bit binary)--[[User:RaptorHunter|RaptorHunter]] ([[User talk:RaptorHunter|talk]]) 05:18, 18 November 2010 (UTC)
 
:Not that I know of. Why on earth would anyone bother? If people needed such high precision they'd use a variable precision floating point package and do it via function calls. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 10:38, 18 November 2010 (UTC)
::As it is, it's hard to find any genuine uses for quadruple precision. E.g. a physical problem where the outcome can't be adequately predicted by a double precision calculation. [[User:EdJohnston|EdJohnston]] ([[User talk:EdJohnston|talk]]) 20:23, 18 November 2010 (UTC)
:::Sometimes you want all the precision possible. For instance, if you are trying to calculate the trajectory of [[99942 Apophis]] (which used to have a 2.7% chance of hitting Earth), then you want to avoid all possible error. When simulating trajectories, rounding errors become cumulative (see [[Floating_point#Accuracy_problems]]). It's best to use numbers with as many digits as possible.--[[User:RaptorHunter|RaptorHunter]] ([[User talk:RaptorHunter|talk]]) 20:36, 18 November 2010 (UTC)
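The cumulative rounding effect mentioned above is easy to demonstrate in miniature (a generic Python illustration, nothing to do with the Apophis calculations themselves): 0.1 has no exact binary representation, so each addition rounds, and a million additions accumulate a visible drift.

```python
# Each += rounds the result to the nearest double; the tiny per-step
# errors do not cancel and add up to a measurable drift.
total = 0.0
for _ in range(1_000_000):
    total += 0.1

print(total)                     # close to, but not exactly, 100000.0
print(abs(total - 100000.0))     # the accumulated rounding error
```

Long-running orbital integrations face the same mechanism, just with far more steps and far less forgiving error budgets, which is why the papers above discuss precision beyond 64 bits.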
::::If you have information about the scientific work on the trajectory of [[99942 Apophis]], that shows it required more than double precision, and can find a reference, it might be interesting to put it in the [[Floating point]] article. [[User:EdJohnston|EdJohnston]] ([[User talk:EdJohnston|talk]]) 20:54, 18 November 2010 (UTC)
:::::Here's an article about 64-bit errors when calculating Apophis. [http://aeweb.tamu.edu/aero489/Apophis%20Mitigation%20Project/Predicting%20Earth%20Encounters.pdf] (page 10 has a neat table)
:::::I found all kinds of interesting articles:
:::::Quote [http://www.sciencemag.org/content/296/5565/132.full#xref-ref-22-1]
:::::Numerical integration error can accumulate through rounding or truncation caused by machine precision limits, especially near planetary close approaches when the time-step size must change rapidly. We found that the cumulative relative integration error for 1950 DA remains less than 1 km until 2105, thereafter oscillating around zero with a maximum amplitude of 200 km until the 2809 Earth encounter (22). It then grows to –9900 km at the 2880 encounter, changing the nominal time of close approach on 16 March 2880 by –12 min.
:::::::^^Accuracy of asteroid projections using 64-bit arithmetic.
:::::Round-off error in long-term planetary orbit integrations [http://articles.adsabs.harvard.edu/cgi-bin/nph-iarticle_query?bibcode=1990AJ.....99.1016Q&db_key=AST&page_ind=0&plate_select=NO&data_type=GIF&type=SCREEN_GIF&classic=YES]
:::::::(BTW, I just found out about google scholar. It sure beats the normal page after page of yahoo answers that google dishes up for every query. If you want more sources, you can look at this link [http://scholar.google.com/scholar?start=0&q=asteroid+%22quadruple+precision%22&hl=en&as_sdt=400001&as_vis=1] A lot of the articles are behind paywalls, but I discovered a trick. Click on the link for (All * versions) under the summary and just keep clicking until you find an article you can access. Useful stuff.)--[[User:RaptorHunter|RaptorHunter]] ([[User talk:RaptorHunter|talk]]) 22:43, 18 November 2010 (UTC)
 
Octuple precision was more difficult to find; for instance this intriguing quote [http://scholar.google.com/scholar?cluster=1240777706751756975&hl=en&as_sdt=400001&as_vis=1] is behind a paywall:
::"This implies that three-body scattering calculations are severely limited by the finite wordlength of computers. Worse still, in the more extreme cases even octuple precision would not be sufficient."
Octuple precision for binary stars [http://books.google.com/books?hl=en&lr=&id=7xv0xPKIylsC&oi=fnd&pg=PA469&ots=n5u2ZFVu8o&sig=BJ_NMZdjNZyueixGZjwuBnx-Sis#v=onepage&q&f=false]
 
This article says [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.155.1015&rep=rep1&type=pdf]:
the mathcw library is designed to pave the way for future octuple-precision arithmetic in a 256-bit format offering a significand of 235 bits in binary
(Other articles seem to disagree about the size of the significand; it seems to vary between 224 and 240 bits. Guess there's not a set standard yet. I know the new [[Sandy Bridge]] CPUs have 256-bit [[SIMD]] registers and the new AVX instructions. Maybe they will create a standard octuple precision then.)
Perhaps it could be added to the article?--[[User:RaptorHunter|RaptorHunter]] ([[User talk:RaptorHunter|talk]]) 22:43, 18 November 2010 (UTC)
 
:Anyway nobody's going to write special compiler support for this sort of stuff. What you could do in a number of languages like C++, though, is write your own octuple class which makes calls to subroutines for the various operations and so enables you to write ordinary arithmetic expressions. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 23:25, 18 November 2010 (UTC)
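The "write your own class" approach can be sketched in Python instead of C++: a tiny wrapper whose operator overloads let ordinary expressions run at octuple-like precision. The class name and the digit count are illustrative assumptions - an IEEE binary256 significand is 237 bits, roughly 71 decimal digits, so the sketch emulates that with the `decimal` module rather than a true binary format.

```python
from decimal import Decimal, getcontext

# ~71 decimal digits approximates a 237-bit binary256 significand.
# (Assumption for illustration; this is decimal, not real binary256.)
getcontext().prec = 71

class Octuple:
    """Minimal wrapper: operator overloading gives ordinary expressions."""
    def __init__(self, value):
        self.v = Decimal(value)
    def __add__(self, other):
        return Octuple(self.v + other.v)
    def __sub__(self, other):
        return Octuple(self.v - other.v)
    def __mul__(self, other):
        return Octuple(self.v * other.v)
    def __truediv__(self, other):
        return Octuple(self.v / other.v)
    def __repr__(self):
        return str(self.v)

# Ordinary arithmetic now carries ~71 significant digits
print(Octuple(1) / Octuple(3))
```

This also illustrates the later point in the thread: a fixed "octuple" mode is just an arbitrary-precision library pinned to one precision setting.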
::By the way the SIMD instructions only support single and double precision, plus some for 16-bit floating point. SIMD means single instruction, multiple data - a number of operands are handled in parallel. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 23:34, 18 November 2010 (UTC)
'''Octuple precision is non-existent'''. It won't be implemented in the coming decades and I doubt there'll be need for it. Even quadruple precision is not yet implemented in hardware. Double precision support was added to the mainstream processors because there was apparent need in the ''consumer'' market, not because some scientific application would benefit from it. Extending the range and precision of the computations even more does not benefit the consumer. Even if it did, the applications would almost certainly use an arbitrary precision arithmetic library, since it gives a hell of a lot more freedom for the implementation.
All these SSE or AVX extensions are just for optimization of parallel algorithms. An SSE 128-bit register can hold 4 single precision floating point numbers or 2 double precision numbers, while an AVX register can hold 8 and 4 numbers respectively. [[User:1exec1|1exec1]] ([[User talk:1exec1|talk]]) 15:13, 19 November 2010 (UTC)
::Even if it's just implemented in software, I still think it would be interesting to add a new section for octuple precision: when it's needed and how so. As you can see, I have lots of good sources. I think I'll add it later today.--[[User:RaptorHunter|RaptorHunter]] ([[User talk:RaptorHunter|talk]]) 20:18, 19 November 2010 (UTC)
::: Octuple precision is not standardised, not supported and not used (as arbitrary precision packages are used instead). Thus it fails the Wikipedia notability criteria and should (shall?) not be included. [[User:1exec1|1exec1]] ([[User talk:1exec1|talk]]) 21:23, 19 November 2010 (UTC)
:::: Read my sources above. I have scholarly papers of people using octuple precision, not arbitrary precision. There are libraries you can import into C to make this happen.--[[User:RaptorHunter|RaptorHunter]] ([[User talk:RaptorHunter|talk]]) 23:38, 19 November 2010 (UTC)
:::::And as I said above, you can set up your own classes for octuple precision. A compiler would not add to efficiency in any appreciable way, and no-one is liable to pay for a professionally tested compiler when too tiny a part of the market would be interested. I worked once on a machine with 128-bit packed decimal registers, and I think there may be some quad precision hardware floating point implementations in the future. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 00:24, 20 November 2010 (UTC)
:::::: Might be, but also might not be. I see neither current nor future need for that in ''any'' of the market segments. The consumer market won't need it. The only place where some benefit could be seen is HPC. However, the supercomputer market is shifting towards the GPGPU architecture. Quad precision would be very inefficient there as it's hard to parallelize (the operations would have long latencies with low throughput, and the part of the program using it would easily become a bottleneck). Also, it's quite easy to set up quad precision math using integer hardware, and it's easy for GPGPU hardware vendors to include more integer cores. Not to mention the benefits of a much more portable approach. Thus I believe that even if quad precision were implemented in hardware, it'd have very limited usage. And because of that, vendors won't implement it in the first place. [[User:1exec1|1exec1]] ([[User talk:1exec1|talk]]) 13:26, 20 November 2010 (UTC)
::::: These papers make no difference. How does an octuple precision library differ from an arbitrary precision library if both do the calculations in software using integer math? By ''arbitrary precision library'' I mean a library which does not necessarily always calculate 1000000 digits of pi, but can do it on request. You can view octuple precision just as a subset of the capabilities that such a library provides, since you can easily set it to do the calculations ''only'' in octuple precision. So usage of the term ''octuple precision'' doesn't have a lot in common with an octuple precision floating point format. At least as of now. [[User:1exec1|1exec1]] ([[User talk:1exec1|talk]]) 13:15, 20 November 2010 (UTC)
 
:::::By the way, IEEE quad precision has been implemented in hardware on the IBM z series. IBM also implemented quad precision in their 360 series, and DEC did in their VAX series. In the older series hardware support would only be present on a few machines, but both had emulation packages as far as I know, and the quad was a double-double implementation. I should have remembered the IBM one as I've seen emulator code for the operations. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 19:42, 20 November 2010 (UTC)
:::::Seemingly the VAX H format was an independent format similar to IEEE and not double-double; also some or all of the Cray machines had a quad precision which they called double, and 32-bit was called half by them. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 19:57, 20 November 2010 (UTC)
 
== GMP ==
 
Any thoughts on adding a reference or external link to the GNU Multiple Precision Arithmetic Library (GMP)?
http://gmplib.org/
 
"GMP is a free library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and floating point numbers." <span style="font-size: smaller;" class="autosigned">—Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/99.32.166.179|99.32.166.179]] ([[User talk:99.32.166.179|talk]]) 19:49, 8 January 2011 (UTC)</span><!-- Template:UnsignedIP --> <!--Autosigned by SineBot-->
 
:I suppose you could put a link to [[GNU Multi-Precision Library]] in the see also section, but it doesn't seem to me to warrant anything in the main article. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 20:51, 8 January 2011 (UTC)