Talk:Floating-point arithmetic/Archive 4: Difference between revisions

:If you look at the contents it is in the IEEE section and they returned the appropriate signed infinity, not NaN. I don't know what that exactly is about either - I believe it can be removed. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 15:54, 8 January 2010 (UTC)
== [[Significand]] ==
 
A succession of editors keep coming to this article to correct the spelling of 'significand' to 'significant.' How would people feel about using 'mantissa' instead? The pros and cons of mantissa are well discussed in our free-standing article called [[significand]]. Although IEEE prefers 'significand' it's not clear that they should win, because we ought to be documenting common usage, and be neutral between equally good ideas. Another option is to reword the article to not use 'significand' over and over, because it's not a term that people new to floating point are likely to have heard. So it doesn't have much explanatory value, even though we use it in the article 30 times. [[User:EdJohnston|EdJohnston]] ([[User talk:EdJohnston|talk]]) 18:18, 23 November 2007 (UTC)
:I suspect that the editors that are unfamiliar with the term 'significand' would be even more confused by the term 'mantissa'. At least the former is similar to a more common word (significant) which is in the right area.
:If changing it at all, I would change it to 'coefficient', which means the same but is less technical than 'significand' and certainly more correct than 'mantissa'. [[User:Mfc|mfc]] ([[User talk:Mfc|talk]]) 20:09, 23 November 2007 (UTC)
 
"Significand" may be the most pedanticly correct term, but it is linquisticly unfortunate. Just look at all the recent edits that changed "significand" to "signicant", just to be set right again. Looking at these edits I noticed that they typically involve the form "'''''n''''' signifcand bits". That phrase is syntactically correct with a single 'd' to 't' consonant substitution. Making the substitution changes the meaning. It obviously looks like a typo to a lot of people. I went through and recast the occurances of "significand" followd by "bit" or "bits". For the most part I think it makes it harder to misread. Let's try it for a while, anyway. [[User:Ferritecore|Ferritecore]] ([[User talk:Ferritecore|talk]]) 01:28, 1 March 2008 (UTC)
:Your change seems worthwhile to reduce confusion. There seems to be a reasonable argument that 'mantissa' is a worse alternative, but 'coefficient' still has some charm. The phrase 'floating point coefficient' gets about 8 times as many Google hits as 'floating point significand.' Those of us who are used to hearing the lingo of the IEEE standard may have overestimated how common 'significand' is. Most people have a notion of what a coefficient is from any math course, even when they're unfamiliar with floating point. Anyone willing to support 'coefficient' for this article? After all this is not the article about the IEEE standard. [[User:EdJohnston|EdJohnston]] ([[User talk:EdJohnston|talk]]) 02:24, 1 March 2008 (UTC)
::I am not fond of the term 'significand', let's get that out of the way up front. I kept it in my edit after skimming the significand mantissa explanation earlier in the article. After some further investigation I am not convinced that significand is really any better. Let's look at the available alternatives:
::* [[coefficient]] does have more than just some charm. The floating point number can be expressed as a single-term polynomial (monomial) CB<sup>X</sup> with a coefficient, a base and the exponent. Reasonably educated people can be expected to recognize and correctly understand the word. If I had just invented floating point this would likely be the word I use.
::* [[mantissa]] is much maligned here. The argument given against it is at least partially incorrect. The [http://mathworld.wolfram.com/Mantissa.html Wolfram MathWorld definition] does not mention logs at all. Mathematically, a mantissa is the fractional part of any real number, not just a log. My math handbook has a table of "mantissas of common logarithms", not a "table of mantissas". The term is also used to describe the corresponding part of a number represented in scientific notation. The term has a long and respectable history. The [[significand]] article has a 1947 [[John von Neumann]] quote using mantissa in the floating point sense. The term was the one nearly universally used. It is still in common use. The term is not perfect: the bit-field stored in a floating point number is not the mantissa of the number represented. It is the number represented normalized to appear "in the form of a mantissa", or has the appearance of a mantissa.
::* [[significand]] is a relatively new term. It was probably coined in response to perceived difficulties with [[mantissa]]. It is apparently used by the IEEE standard. As a coined word it means exactly what the coiner intended, without the baggage of prior or other meanings. In spite of the IEEE floating point standard being around for 20+ years, significand hasn't managed to displace mantissa in common usage, probably because it looks like a typo and can be confused with significant. The term does not appear anywhere on the Wolfram MathWorld site.
::* [[fraction]] is, oddly enough, used by the [[IEEE 754-1985]] article. It is descriptive, universally understood (probably more so than coefficient). It strikes me as being slightly less accurate than coefficient.
 
::I personally would prefer to use mantissa as the generally accepted technical term, but am unwilling to put much energy into fighting against significand. The discussion of the issue in the article needs to be corrected and improved. I may do so when I have the time to give it the proper care and balance.
 
::I mention Wolfram MathWorld because there is a hazard in consulting mainstream dictionaries, even ones as respectable as the OED, for technical or scientific terms. They are frequently not quite right. Computer terms are frequently borrowed from other disciplines. Nobody worries much that a computer virus lacks a protein coat and a DNA core - biology is sufficiently removed from computer science. When borrowing a term from math, however, the separation is not so great. I went to Wolfram in hopes of getting a math insight into the terms. [[User:Ferritecore|Ferritecore]] ([[User talk:Ferritecore|talk]]) 15:01, 2 March 2008 (UTC)
 
All the IBM documentation that I have seen from S/360 through z/Architecture HFP (Hexadecimal Floating Point) uses fraction, as it is less than one in the usual interpretation. I believe coefficient is sometimes used for other machines, where it isn't a fraction. I don't mind coefficient or fraction. I like significand, as, being new, it doesn't have previous usage to confuse people. The term "mantissa of common logarithms" is correct, as the table does not include the characteristics of the logs (the integer part). Only in a logarithmic representation (discussed here I believe) would mantissa and characteristic be correct. [[User:Gah4|Gah4]] ([[User talk:Gah4|talk]]) 23:02, 29 April 2011 (UTC)
 
== More user-friendly Introduction ==
 
As a total amateur when it comes to computing, I am trying to find out what the floating point is all about. I was doing fine for a very short time before I came across this sentence in the introduction:
 
"For example, a fixed-point representation that has seven decimal digits, with the decimal point assumed to be positioned after the fifth digit, can represent the numbers 12345.67, 123.45, 1.23 and so on,"
 
How do the numbers 12345.67, 123.45, 1.23 correspond to the decimal point being assumed to be positioned after the fifth digit when 123.45 and 1.23 have neither five digits nor a decimal point after the (absent) "fifth digit"?
 
For an encyclopedia, which, I think, by definition, should be basically comprehensible even to amateurs, this statement is assuming too much knowledge of computing on the part of the reader.
 
Can someone please tweak it to make it a little more user-friendly? [[User:tripbeetle|tripbeetle]] ([[User talk:tripbeetle|talk]]) 3:36, 8 April 2010 (UTC)
 
:What it means is a fixed point number that can occupy a space like #####.## where the # refer to digits. 123.45 can occupy that space by putting 00123.45 in. Saying something like two decimal places would be better, I'll stick that in. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 10:16, 8 April 2010 (UTC)
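A minimal sketch of that padding idea in Python (illustrative only; the scale factor and formatting are assumptions, not from the article): a #####.## fixed-point value is just an integer count of hundredths, so shorter numbers implicitly fill the seven digit positions with leading zeros.

<syntaxhighlight lang="python">
# Fixed point in a #####.## layout: store an integer number of hundredths.
SCALE = 100                                  # two digits after the point
values = [12345.67, 123.45, 1.23]
stored = [round(v * SCALE) for v in values]  # 1234567, 12345, 123
for s in stored:
    print(f"{s // SCALE:05d}.{s % SCALE:02d}")  # 12345.67, 00123.45, 00001.23
</syntaxhighlight>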
 
== Significand versus significant ==
 
I agree with the IP editor that 'significant' is better than 'significand' in the lead. Significand is a correct term for that part, but the term has not been introduced yet. Also it would be 'significand' versus 'significant digits'. Saying significant digits explains the bit well for a simple explanation before reading further into the article. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 23:35, 9 November 2010 (UTC)
 
== Octuple Precision ==
 
Is there any compiler that does octuple precision?
(256-bit binary)--[[User:RaptorHunter|RaptorHunter]] ([[User talk:RaptorHunter|talk]]) 05:18, 18 November 2010 (UTC)
 
:Not that I know of. Why on earth would anyone bother? If people needed such high precision they'd use a variable precision floating point package and do it via function calls. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 10:38, 18 November 2010 (UTC)
::As it is, it's hard to find any genuine uses for quadruple precision. E.g. a physical problem where the outcome can't be adequately predicted by a double precision calculation. [[User:EdJohnston|EdJohnston]] ([[User talk:EdJohnston|talk]]) 20:23, 18 November 2010 (UTC)
:::Sometimes you want all the precision possible. For instance, if you are trying to calculate the trajectory of [[99942 Apophis]] (which used to have a 2.7% chance of hitting earth), then you want to avoid all possible error. When simulating trajectories, rounding errors become cumulative (see [[Floating point#Accuracy problems]]). It's best to use numbers with as many digits as possible.--[[User:RaptorHunter|RaptorHunter]] ([[User talk:RaptorHunter|talk]]) 20:36, 18 November 2010 (UTC)
::::If you have information about the scientific work on the trajectory of [[99942 Apophis]], that shows it required more than double precision, and can find a reference, it might be interesting to put it in the [[Floating point]] article. [[User:EdJohnston|EdJohnston]] ([[User talk:EdJohnston|talk]]) 20:54, 18 November 2010 (UTC)
:::::Here's an article about 64-bit errors when calculating Apophis. [http://aeweb.tamu.edu/aero489/Apophis%20Mitigation%20Project/Predicting%20Earth%20Encounters.pdf] (page 10 has a neat table)
:::::I found all kinds of interesting articles:
:::::Quote [http://www.sciencemag.org/content/296/5565/132.full#xref-ref-22-1]
:::::Numerical integration error can accumulate through rounding or truncation caused by machine precision limits, especially near planetary close approaches when the time-step size must change rapidly. We found that the cumulative relative integration error for 1950 DA remains less than 1 km until 2105, thereafter oscillating around zero with a maximum amplitude of 200 km until the 2809 Earth encounter (22). It then grows to –9900 km at the 2880 encounter, changing the nominal time of close approach on 16 March 2880 by –12 min.
:::::::^^Accuracy of asteroid projections using 64 bit.
:::::Roundoff error in long-term planetary orbit integrations [http://articles.adsabs.harvard.edu/cgi-bin/nph-iarticle_query?bibcode=1990AJ.....99.1016Q&db_key=AST&page_ind=0&plate_select=NO&data_type=GIF&type=SCREEN_GIF&classic=YES]
:::::::(BTW, I just found out about google scholar. It sure beats the normal page after page of yahoo answers that google dishes up for every query. If you want more sources, you can look at this link [http://scholar.google.com/scholar?start=0&q=asteroid+%22quadruple+precision%22&hl=en&as_sdt=400001&as_vis=1] A lot of the articles are behind paywalls, but I discovered a trick. Click on the link for (All * versions) under the summary and just keep clicking until you find an article you can access. Useful stuff.)--[[User:RaptorHunter|RaptorHunter]] ([[User talk:RaptorHunter|talk]]) 22:43, 18 November 2010 (UTC)
 
Octuple precision was more difficult to find; for instance, this intriguing quote [http://scholar.google.com/scholar?cluster=1240777706751756975&hl=en&as_sdt=400001&as_vis=1] is behind a paywall:
::"This implies that three-body scattering calculations are severely limited by the finite wordlength of computers. Worse still, in the more extreme cases even octuple precision would not be su~cient."
Octuple precision for binary stars [http://books.google.com/books?hl=en&lr=&id=7xv0xPKIylsC&oi=fnd&pg=PA469&ots=n5u2ZFVu8o&sig=BJ_NMZdjNZyueixGZjwuBnx-Sis#v=onepage&q&f=false]
 
This article says [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.155.1015&rep=rep1&type=pdf]:
the mathcw library is designed to pave the way for future octuple-precision arithmetic in a 256-bit format offering a significand of 235 bits in binary
(Other articles seem to disagree about the size of the significand. It seems to vary between 224 and 240. Guess there's not a set standard yet. I know the new [[sandy bridge]] cpus have 256 bit [[simd]] registers and the new avx instructions. Maybe they will create a standard octuple precision then.)
Perhaps it could be added to the article?--[[User:RaptorHunter|RaptorHunter]] ([[User talk:RaptorHunter|talk]]) 22:43, 18 November 2010 (UTC)
 
:Anyway nobody's going to write special compiler support for this sort of stuff. What you could do in a number of languages like C++ though is write your own octuple class which makes calls to subroutines for the various operations and so enables you to write ordinary arithmetic expressions. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 23:25, 18 November 2010 (UTC)
::By the way the SIMD instructions only support single and double precision plus some for 16 bit floating point. SIMD means single instruction multiple data - a number of operands are handled in parallel. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 23:34, 18 November 2010 (UTC)
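For what it's worth, the "write your own class" route is easy to sketch in software. A hedged Python illustration using the decimal module as the underlying engine (the 71-digit figure assumes a binary256-style significand of roughly 235-237 bits, per the quotes above):

<syntaxhighlight lang="python">
from decimal import Decimal, getcontext

# A 235-237-bit binary significand carries about 71 decimal digits,
# so fix the working precision of the software arithmetic there.
getcontext().prec = 71

x = Decimal(2).sqrt()
print(x)      # sqrt(2) to ~71 significant digits
print(x * x)  # back to 2 within one unit in the last place
</syntaxhighlight>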
'''Octuple precision is non-existent'''. It won't be implemented in the coming decades and I doubt there'll be need for it. Even quadruple precision is not yet implemented in hardware. Double precision support was added to the mainstream processors because there was apparent need in the ''consumer'' market, not because some scientific application would benefit from it. Extending the range and precision of the computations even more does not benefit the consumer. Even if it did, the applications will almost certainly use an arbitrary precision arithmetic library since it gives a hell of a lot more freedom for the implementation.
All these SSE or AVX extensions are just for optimization of parallel algorithms. An SSE 128-bit register can hold 4 single precision floating point numbers or 2 double precision numbers, while an AVX register can hold 8 and 4 numbers respectively.[[User:1exec1|1exec1]] ([[User talk:1exec1|talk]]) 15:13, 19 November 2010 (UTC)
::Even if it's just being implemented in software, I still think it would be interesting to add a new section for octuple precision: when it's needed and how so. As you can see, I have lots of good sources. I think I'll add it later today.--[[User:RaptorHunter|RaptorHunter]] ([[User talk:RaptorHunter|talk]]) 20:18, 19 November 2010 (UTC)
::: Octuple precision is not standardised, not supported and not used (as arbitrary precision packages are used instead). Thus it fails the wikipedia notability criteria and should (shall?) not be included.[[User:1exec1|1exec1]] ([[User talk:1exec1|talk]]) 21:23, 19 November 2010 (UTC)
:::: Read my sources above. I have scholarly papers of people using octuple precision, not arbitrary. There are libraries you can import into c to make this happen.--[[User:RaptorHunter|RaptorHunter]] ([[User talk:RaptorHunter|talk]]) 23:38, 19 November 2010 (UTC)
:::::And as I said above you can set up your own classes for octuple precision. A compiler would not add to efficiency in any appreciable way and no-one is liable to pay for a professionally tested compiler and too tiny a part of the market would be interested. I worked once on a machine with 128 bit packed decimal registers and I think there may be some quad precision hardware floating point implementations in the future. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 00:24, 20 November 2010 (UTC)
:::::: Might be, but also might not be. I see neither current nor future need for that in ''any'' of the market segments. The consumer market won't need it. The only place where some benefit could be seen is HPC. However, the supercomputer market is shifting towards the GPGPU architecture. Quad precision would be very inefficient there as it's hard to parallelize (the operations would have long latencies with low throughput and the part of the program using it would easily become a bottleneck). Also, it's quite easy to set up quad precision math using integer hardware and it's easy for GPGPU hardware vendors to include more integer cores. Not to mention the benefits of a far more portable approach. Thus I believe that even if quad precision were implemented in hardware, it'd have very limited usage. And because of that vendors won't implement it in the first place. [[User:1exec1|1exec1]] ([[User talk:1exec1|talk]]) 13:26, 20 November 2010 (UTC)
::::: These papers make no difference. How does an octuple precision library differ from an arbitrary precision library if both do the calculations in software using integer math? By ''arbitrary precision library'' I mean a library which does not necessarily always calculate 1000000 digits of pi, but can do it on request. You can view octuple precision just as a subset of the capabilities that library provides, since you can easily set it to do the calculations ''only'' in octuple precision. So usage of the term ''octuple precision'' doesn't have a lot in common with an octuple precision floating point format. At least as of now.[[User:1exec1|1exec1]] ([[User talk:1exec1|talk]]) 13:15, 20 November 2010 (UTC)
 
:::::By the way IEEE quad precision has been implemented in hardware on the IBM z series. IBM also implemented quad precision in their 360 series and DEC did in their VAX series. In the older series hardware support would only be present on a few machines but both had emulation packages as far as I know, and the quad was a double-double implementation. I should have remembered the IBM one as I've seen emulator code for the operations. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 19:42, 20 November 2010 (UTC)
:::::Seemingly the VAX H format was an independent format similar to IEEE and not double-double; also some or all of the Cray machines had a quad precision which they called double, and 32 bit was called half by them. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 19:57, 20 November 2010 (UTC)
 
== GMP ==
 
Any thoughts on adding a reference or external link to the GNU Multiple Precision Arithmetic Library (GMP)?
http://gmplib.org/
 
"GMP is a free library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and floating point numbers." <span style="font-size: smaller;" class="autosigned">—Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/99.32.166.179|99.32.166.179]] ([[User talk:99.32.166.179|talk]]) 19:49, 8 January 2011 (UTC)</span><!-- Template:UnsignedIP --> <!--Autosigned by SineBot-->
 
:I suppose you could put a link to [[GNU Multi-Precision Library]] in the see also, but it doesn't seem to me to warrant anything in the main article. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 20:51, 8 January 2011 (UTC)
== Alternatives to the FP representation ==
 
There should be a mention of [[continued fractions]] in that section. Software implementations [http://blog.poucet.org/2008/02/continued-fractions-in-haskell/ already exist], and the representation has remarkable properties, especially together with lazily-evaluated languages like Haskell.
[[User:Whitehorses2501|Whitehorses2501]] ([[User talk:Whitehorses2501|talk]]) 00:15, 29 April 2011 (UTC)
:Only if we had some notability and a reliable source. That thing you pointed at was a blog of someone's efforts and they haven't even figured out how to multiply the square root of 2 by itself yet. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 08:18, 29 April 2011 (UTC)
== formula to calculate Range of floating-point numbers ==
 
The '''Range of floating-point numbers''' section says {{quotation|Positive floating-point numbers in this format have an approximate range of 10<sup>−308</sup> to 10<sup>308</sup> (because 308 is approximately 1023 × log<sub>10</sub>(2), since the range of the exponent is [−1022,1023]). The complete range of the format is from about −10<sup>308</sup> through +10<sup>308</sup> (see [[IEEE 754]]).}}
Would this be more clear if expressed as: {{quotation|... because 308 is approximately log<sub>10</sub>(2<sup>1023</sup>) ...}}
The latter is more consistent with the earlier (in Overview) notation of ''value'' = ''s'' × ''b<sup>e</sup>'' (where ''b=2'', ''e''=1023). [[Special:Contributions/63.116.23.136|63.116.23.136]] ([[User talk:63.116.23.136|talk]]) 05:10, 1 July 2011 (UTC)
 
{{done|Done [[User:Mitch Ames|Mitch Ames]] ([[User talk:Mitch Ames|talk]]) 02:29, 31 July 2011 (UTC)}}
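The arithmetic behind the figure is easy to check in Python (values shown are approximate):

<syntaxhighlight lang="python">
import math, sys

print(1023 * math.log10(2))            # ~307.95, hence the ~1e308 bound
print(math.log10(sys.float_info.max))  # ~308.25 = log10((2 - 2**-52) * 2**1023)
print(sys.float_info.max)              # ~1.7976931348623157e+308
</syntaxhighlight>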
== Unnecessary precision ==
 
As requested after someone stuck in loads of extra digits of pi, I have set up a section in this talk page for discussion if somebody else thinks loads of digits which have nothing to do with the topic are a good idea. Until then the consensus of people on the matter from the history is pretty apparent and a big long uninteresting and irrelevant string of digits should not be put in. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 17:35, 5 October 2011 (UTC)
 
:In addition, everyone involved needs to read and follow [[Wikipedia:Edit warring]]. I have placed warnings on the userpages of everyone who is at 2RR.
 
:Derek farn, the consensus is against you on this one. Davidhorman, Dmcq and Guy Macon all agree that going ten digits past the number of digits in the single precision example is enough to get the point across, and that more digits than that detract from the article. [[User:Guy Macon|Guy Macon]] ([[User talk:Guy Macon|talk]]) 21:19, 5 October 2011 (UTC)
 
::Well perhaps you could explain why you want so many digits in yourself. Having a load of unnecessary digits just encourages people to add extra ones as far as I can see as a kind of pointy comment on the length. Why did you want some digits in grey past the first seven in bold and do we really need thirty digits to see the difference? [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 23:45, 5 October 2011 (UTC)
== Software or book side of history ==
 
I just reverted a bit about the Pilot ACE in history because it used software to emulate floating point. However it occurs to me that there might be something worthwhile in the bit about J. H. Wilkinson, ''Rounding Errors in Algebraic Processes''. Is there evidence about who wrote a book about floating point or that this was a particular turning point? [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 18:46, 6 February 2012 (UTC)
== IEEE 754 ==
 
I have added a section discussing the "big picture" on the rationale and use for the IEEE 754 features which often gets lost when discussing the details.
I plan to add specific references for the points made there (from Kahan's web site). It would be good to expand the examples and add additional ones as well.
 
[[User:Brianbjparker|Brianbjparker]] ([[User talk:Brianbjparker|talk]]) <span style="font-size: smaller;" class="autosigned">—Preceding [[Wikipedia:Signatures|undated]] comment added 11:22, 19 February 2012 (UTC).</span><!--Template:Undated--> <!--Autosigned by SineBot-->
 
:You need to cite something saying these were accepted rationales for it. Citations point to specific books, journals or newspapers, and preferably page number ranges. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 13:51, 19 February 2012 (UTC)
 
Added direct citations as requested.
[[User:Brianbjparker|Brianbjparker]] ([[User talk:Brianbjparker|talk]]) <span style="font-size: smaller;" class="autosigned">—Preceding [[Wikipedia:Signatures|undated]] comment added 18:20, 19 February 2012 (UTC).</span><!--Template:Undated--> <!--Autosigned by SineBot-->
 
:Thanks. My feeling about Kahan and his diatribe against Java is that he just doesn't get what programmers have to do when testing a program. Having a switch to enable lax typing of intermediate results where you know it will only be run in environments you've tested is a good idea, but that wasn't what Java was originally designed for. The section about extended precision there seems undue in length as I'm pretty certain other considerations like signed zero and denormal handling were the main original considerations where it differed from previous implementations. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 20:37, 19 February 2012 (UTC)
 
:Although I referenced Kahan's Java paper several times, I certainly didn't want this section to appear as a slight against Java. Kahan has several other papers discussing the need for extended precision that do not mention Java-- I will replace the current references with those in the near future, and try to trim it down (although I don't think that that reference is a diatribe against Java, just against its numerics). I certainly didn't want to get into the tradeoffs between improved numerical precision of results versus exact reproducibility in Java in this section. I do however think that it is important to clarify the intended use of the IEEE754 features in an introductory article like this, which can get lost in detailed descriptions of the features. In particular, I find that there is *wide* misunderstanding of the intended use of, and need for, extended precision amongst the programming community, particularly as extended precision was historically not supported in several RISC processors, and thus it is underused by programmers, even when targeting the x86 platform for e.g. HPC (even when these same programmers would carry additional significant figures for intermediate calculations if doing the same computations by hand, as alluded to in this section). Also, Kahan's descriptions of work on the design of the x87 (based on his experience designing HP calculators which use extended precision internally) makes it clear that extended precision was intended as a key feature (indeed a recommended feature) of IEEE754, compared with previous implementations.
 
[[User:Brianbjparker|Brianbjparker]] ([[User talk:Brianbjparker|talk]]) 00:56, 20 February 2012 (UTC)
 
:As far as I'm aware the main other rationales were
::To have a sound mathematical basis, in that results were correctly rounded versions of accurate results, and also so that reasoning about the calculations would be easier.
::Round to even was used to improve accuracy. In fact this is much more important than extended precision if the double storage mode is only used for intermediate calculations. Using extended precision only gives about one extra bit overall at the end if values in arrays are in doubles. The main reason I believe they were put in was it made calculating mathematical functions much easier and more accurate; they can also be used in inner routines with benefit.
::Biased rounding was put in I believe to support interval arithmetic - another part of being able to guarantee the results of calculations. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 15:43, 20 February 2012 (UTC)
 
:::''Using extended precision only gives bout one extra bit overall at the end if values in arrays are in doubles''. This is false in general; you must be thinking of some special cases where not many intermediate calculations happen before rounding to double for storage. For a counterexample, e.g. consider a loop to take a dot product of two double-precision arrays (not using Kahan summation etc.) [[User:Stevenj|— Steven G. Johnson]] ([[User talk:Stevenj|talk]]) 21:16, 20 February 2012 (UTC)
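The accumulation effect is easy to reproduce. A minimal pure-Python sketch, where math.fsum stands in for a wider accumulator (the exact figures are typical values, not guaranteed):

<syntaxhighlight lang="python">
import math

xs = [0.1] * 10**7
naive = 0.0
for v in xs:
    naive += v              # rounded to double after every addition
print(naive - 1e6)          # drifts by ~-1.6e-4 from accumulated rounding
print(math.fsum(xs) - 1e6)  # ~5.6e-10: only the error of 0.1 itself remains
</syntaxhighlight>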
 
::::You would normally get very little advantage in that case over round to even with so few intermediate calculations. And for longer calculations round to even wins over just using a longer mantissa and rounding down. You only get a worthwhile gain if the storage is in extended precision. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 21:53, 20 February 2012 (UTC)
 
:::::That is certainly not the case in general. The examples you are thinking of are using simple exactly rounded single arithmetic expressions-- the advantage of extended precision is avoiding loss of precision in more complicated numerically unstable formulae-- e.g. it is easy to construct examples where even computing a quadratic formula discriminant can cause a massive loss of ULPs when computed in double but not in double extended. Several examples are given in the Kahan references. This is in addition to the advantage of the extended exponent in avoiding overflow in e.g. dot products. [[User:Brianbjparker|Brianbjparker]] ([[User talk:Brianbjparker|talk]]) 00:16, 22 February 2012 (UTC)
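One cancellation example of the kind described, sketched in Python (the coefficients are chosen here to force it, not taken verbatim from Kahan):

<syntaxhighlight lang="python">
import math

a, b, c = 1.0, 1e8, 1.0              # exact roots: about -1e-8 and -1e8
d = math.sqrt(b * b - 4.0 * a * c)
naive = (-b + d) / (2.0 * a)         # catastrophic cancellation in -b + d
stable = -2.0 * c / (b + d)          # algebraically identical rewrite
print(naive)   # roughly -7.45e-09: only the order of magnitude survives
print(stable)  # -1e-08 to full double precision
</syntaxhighlight>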
 
:::When you say ''Round to even was used to improve accuracy'', I take it you are mainly referring to the exact rounding: breaking ties by round to even does avoid some additional statistical biases but it is rather subtle (might be worth mentioning in the main text though..). [[User:Brianbjparker|Brianbjparker]] ([[User talk:Brianbjparker|talk]]) 00:16, 22 February 2012 (UTC)
 
::: ''Biased rounding was put in I believe to support interval arithmetic''. Yes, I believe directed rounding was included to support interval arithmetic, but also for debugging numerical stability issues-- if an algorithm gives drastically different results under round to + and - infinity then it is likely unstable. [[User:Brianbjparker|Brianbjparker]] ([[User talk:Brianbjparker|talk]]) 00:16, 22 February 2012 (UTC)
 
::: ''As far as I'm aware the main other rationales were... to have a sound mathematical basis in that results were correctly rounded versions of accurate results and also so reasoning about the calculations would be easier.''. Yes, the exact rounding is an important point-- I have added some additional text earlier in the article to expand on this. It is true that, like previous arithmetics, having a precise specification to allow expert numerical analysts to write robust libraries was an important consideration, but the unique aspect of IEEE-754 is that it was also aimed at a broad market of non-expert users and so I focused in the section on the robustness features relevant to that (I will add some text highlighting that aspect as well though). [[User:Brianbjparker|Brianbjparker]] ([[User talk:Brianbjparker|talk]]) 00:16, 22 February 2012 (UTC)
 
::::Well exact rounding, but I thought it better to specify the precise format they have. The point is that rounding rather than truncating is what really matters. With rounding the error only tends to go up as the square root of the number of operations, whereas with directed rounding it goes up linearly. Even the reduction of bias by round to even matters in this. You always get something else putting in a little bias so it is not as good as this, but directed rounding is really bad. You're better off just perturbing the original figures for stability checking.
::::The mathematical basis makes it much easier to do things like construct longer precision arithmetic packages easily; in fact the fused multiply-add is particularly useful for this. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 00:27, 22 February 2012 (UTC)
:::::The use of directed rounding for diagnosis of stability issues is discussed here http://www.cs.berkeley.edu/~wkahan/Stnfrd50.pdf and in other references at that web site. It also discusses why perturbation alone is not as useful. IEEE 754-2008 annex B states this explicitly-- "B.2 Numerical sensitivity: Debuggers should be able to alter the attributes governing handling of rounding or exceptions inside subprograms, even if the source code for those subprograms is not available; dynamic modes might be used for this purpose. For instance, changing the rounding direction or precision during execution might help identify subprograms that are unusually sensitive to rounding, whether due to ill-condition of the problem being solved, instability in the algorithm chosen, or an algorithm designed to work in only one rounding-direction attribute. The ultimate goal is to determine responsibility for numerical misbehavior, especially in separately-compiled subprograms. The chosen means to achieve this ultimate goal is to facilitate the production of small reproducible test cases that elicit unexpected behavior." [[User:Brianbjparker|Brianbjparker]] ([[User talk:Brianbjparker|talk]]) 01:04, 22 February 2012 (UTC)
::::::The uses that somebody makes of features are quite a different thing from the rationale for why somebody would pay to have them implemented. The introduction to the standard gives a succinct summary of the main reasons for the standard. I'll just copy the latest here so you can see:
 
:a) Facilitate movement of existing programs from diverse computers to those that adhere to this standard as well as among those that adhere to this standard.
:b) Enhance the capabilities and safety available to users and programmers who, although not expert in numerical methods, might well be attempting to produce numerically sophisticated programs.
:c) Encourage experts to develop and distribute robust and efficient numerical programs that are portable, by way of minor editing and recompilation, onto any computer that conforms to this standard and possesses adequate capacity. Together with language controls it should be possible to write programs that produce identical results on all conforming systems.
:d) Provide direct support for
::― execution-time diagnosis of anomalies
::― smoother handling of exceptions
::― interval arithmetic at a reasonable cost.
:e) Provide for development of
::― standard elementary functions such as exp and cos
::― high precision (multiword) arithmetic
::― coupled numerical and symbolic algebraic computation.
:f) Enable rather than preclude further refinements and extensions.
::::::There are other things but this is what the basic rationale was and is. Directed rounding was for interval arithmetic. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 01:56, 22 February 2012 (UTC)
 
:::::::Thanks. Actually, I believe that "d) Provide direct support for― execution-time diagnosis of anomalies" is referring to this use of directed rounding to diagnose numerical instability. Certainly Kahan makes it clear that he considered it a key usage from the early design of the x87. I agree that its use for interval arithmetic was also considered from the beginning. [[User:Brianbjparker|Brianbjparker]] ([[User talk:Brianbjparker|talk]]) 02:11, 22 February 2012 (UTC)
::::::::No that refers to identification and methods of notifying the various exceptions and the handling of the signalling and quiet NaNs. Your reference from 2007 does not support in any way that arbitrarily jiggling the calculations using directed rounding was considered as a reason to include directed rounding in the specification. He'd have been just laughed at if he had justified spending money on the 8087 for such a purpose when there are easy ways of doing something like that without any hardware assistance. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 08:23, 22 February 2012 (UTC)
== Trivia removed ==
 
I removed the bit about the full precision of extended precision being attained when extended precision is used. The point about the algorithm is that it converges using the precision used. We don't need to put in the precisions of single, double and extended precision versions of the algorithm. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 23:23, 23 February 2012 (UTC)
 
:::I disagree that it is trivia-- it is a good example to also illustrate the earlier discussions on the usage of extended precision. In any case, to make it easier to find for those who may be interested in the information: the footnote to the final example, giving the precision using double extended for internal calculations, is included here-
 
:::"As the recurrence is applied repeatedly, the accuracy improves at first, but then it deteriorates. It never gets better than about 8 digits, even though 53-bit arithmetic should be capable of about 16 digits of precision. When the second form of the recurrence is used, the value converges to 15 digits of precision. Footnote: if intermediate calculations are carried at a higher precision using double extended (x87 80 bit) format, it reaches 18 digits of precision, which is the full target double precision." [[User:Brianbjparker|Brianbjparker]] ([[User talk:Brianbjparker|talk]]) 23:37, 23 February 2012 (UTC)
 
:It just has nothing to do with extended precision. The first algorithm would go wrong just as badly with extended precision and the second one behaves exactly like double. There is nothing of note here. Why should it have all the various precisions in? The same thing would happen with float or quad precision. All it says is that the precision for different precisions is different. Also a double cannot hold 18 digits of precision; used as an intermediate for double you'd at most get one bit of precision extra. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 00:50, 25 February 2012 (UTC)
 
::::Agreed that the footnote does nothing to clarify the particular point being made by that example-- that wasn't the aim though. The intention was to also utilise the example to demonstrate the utility of computing intermediate values to higher precision than needed by the final destination format to limit the effects of round-off. In that sense it is an example for the earlier discussion on extended precision (and also the section of approaches to improve accuracy). Perhaps the text "Footnote: if intermediate calculations are carried at a higher precision using double extended (x87 80 bit) format, it reaches 18 digits of precision, which is the full target double precision (see discussion on extended precision above)." would be clearer. Agreed it is not the most striking example of this, but still demonstrates the idea-- perhaps a separate, more striking and specific example would be preferable, I will see what I can find. [[User:Brianbjparker|Brianbjparker]] ([[User talk:Brianbjparker|talk]]) 04:52, 25 February 2012 (UTC)
 
:::::It does not illustrate that. What gives you the idea it does? If anything it is an argument against what was said before. Using extended precision in the intermediate calculation and storing back as double does not give increased precision in the final result. The 18 digits only applies to the extended precision, it does not apply to the double result. The 18 digits is not the target precision of a double. A double can only hold 15 digits accurately. There is no way to stick the extra precision of the extended precision into the target double. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 09:53, 25 February 2012 (UTC)
 
::::::IEEE 754 double precision gives from 15 to 17 decimal digits of precision (17 digits if round-tripping from double to text back to double). When the example is computed with extended precision it gives 17 decimal digits of precision, so if the returned double was to be used for further computation it would have less roundoff error, in ULP (at least one extra decimal digit worth). Although, as you say, if the double result is printed to 15 decimal digits this extra precision will be lost. I agree that it is not a compelling example-- a better example could show a difference in many decimal significant digits due to internal extended precision. [[Special:Contributions/121.45.205.130|121.45.205.130]] ([[User talk:121.45.205.130|talk]]) 23:21, 25 February 2012 (UTC)
:::::::The 17 digits for a round trip is only needed to cope with making certain that rounding works okay. The actual precision is just less than 16 digits, about 15.95 if one cranks the figures. Printing has nothing to do with it. I was just talking about the 53 bits of precision information held within double precision format expressed as decimal digits. You can't shove any more information into the bits. The value there is about 1 ulp out and using extended precision would gain that back. This is what I was saying about extended precision being very useful for getting accurate maths functions, straightforward implementations in double will very often be 1 ulp out without special work whereas the extended precision result will very often give the value given by rounding the exact value. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 00:08, 26 February 2012 (UTC)
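The 15-versus-17-digit point is visible directly with Python doubles:

<syntaxhighlight lang="python">
x = 0.1
print(f"{x:.15g}")              # 0.1 -- at 15 digits the stored error is invisible
print(f"{x:.17g}")              # 0.10000000000000001 -- closer to the value actually stored
print(float(f"{x:.17g}") == x)  # True: 17 significant digits always round-trip
</syntaxhighlight>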
::::::::::Ideally, what should be added is a more striking example of using excess precision in intermediate computations to protect against numerical instability. The current one can indeed demonstrate this if excess precision is carried to IEEE quad precision, in which case the numerically unstable version gives good results. I have added notes to that effect which will do as an example for now. There are many examples also showing this using only double extended (e.g. even as simple as computing the roots of a quadratic equation), and I will add such an example in the future.. but not for a while (by the way, I think double extended adds more than 1 ULP but I haven't checked that). [[User:Brianbjparker|Brianbjparker]] ([[User talk:Brianbjparker|talk]]) 06:54, 26 February 2012 (UTC)
:::::::::::That's not true either because how does one know when to stop? Using quadruple precision would still diverge. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 11:45, 26 February 2012 (UTC)
::::::::::::::::Yes that is so- once it does reach the correct value it stays there for several iterations (at double precision) but does eventually diverge from it again, so a stopping criterion of when the value does not change at double precision could be used. But yes, I am not completely happy with that example for that reason-- feel free to remove it if you feel it is misleading. Actually Kahan has several very compelling examples in his notes-- I will post one here in the next week or so. [[User:Brianbjparker|Brianbjparker]] ([[User talk:Brianbjparker|talk]]) 14:41, 26 February 2012 (UTC)
 
The use of extra precision can be illustrated easily using differentiation. If the result is to be single precision then using double precision for all the calculations is a good idea because of the loss of significance when subtracting two values of the function. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 12:00, 26 February 2012 (UTC)
::: ok yes, that could be a good example-- I will see what I can come up with. [[User:Brianbjparker|Brianbjparker]] ([[User talk:Brianbjparker|talk]]) 14:41, 26 February 2012 (UTC)
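A sketch of that differentiation example (NumPy types stand in for single and double here; the error figures are rough orders of magnitude, not exact):

<syntaxhighlight lang="python">
import numpy as np

def fwd_diff(f, x, h, dt):
    x, h = dt(x), dt(h)
    return (f(x + h) - f(x)) / h   # the subtraction cancels leading digits

h, exact = 1e-4, np.cos(1.0)       # d/dx sin(x) at x = 1
err_single = fwd_diff(np.sin, 1.0, h, np.float32) - exact
err_double = np.float32(fwd_diff(np.sin, 1.0, h, np.float64)) - exact
print(err_single)  # ~1e-3: rounding dominates when intermediates are single
print(err_double)  # ~4e-5: only the truncation error of the formula is left
</syntaxhighlight>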
 
: I have added an example from Kahan's publications-- I think this is a good example as it demonstrates the massive roundoff error (up to half the significant digits lost) that can occur with even innocuous-looking formulae, and shows the two main methods to correct or improve that: increased internal precision, or numerical analysis. [[User:Brianbjparker|Brianbjparker]] ([[User talk:Brianbjparker|talk]]) 07:03, 28 February 2012 (UTC)
::Yes it is definitely better to source something like that to a good source like him. I may not agree with every last word he says about it, but he definitely is the premier source for anything on floating point. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 14:14, 28 February 2012 (UTC)
== 01010111 01101000 01100001 01110100 00101110 00101110 00101110 00111111 (What...?) ==
 
The section on internal representation does not explain how decimals are converted to floating-point values. I think it will be helpful if we add a step-by-step procedure that the computer follows. Thanks! [[Special:Contributions/68.173.113.106|68.173.113.106]] ([[User talk:68.173.113.106|talk]]) 02:16, 25 February 2012 (UTC)
:This gives an example of conversion and the articles on the particular formats give other examples. Wikipedia does not in general provide step by step procedures, it describes things, see [[WP:NOTHOWTO]]. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 02:24, 25 February 2012 (UTC)
::I just thought it was kind of unclear. Besides, doing so might actually help this article get to GA status.
::You see, I'm trying to design an algorithm for getting the mantissa, the exponent, and the sign of a <code>float</code> or <code>double</code>, in case anyone else actually cares about that stuff. For the record, the storage is little-endian, so you have to reverse the byte order. [[Special:Contributions/68.173.113.106|68.173.113.106]] ([[User talk:68.173.113.106|talk]]) 02:50, 25 February 2012 (UTC)
:::It would stop FA status. Have a look at the articles about the individual formats. They describe the format in quite enough detail. Any particular algorithm is up to the user; they are not interesting or discussed in secondary sources. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 10:01, 25 February 2012 (UTC)
:::The closest in Wikipedia for the sort of stuff you're talking about is if somebody wrote something for wikibooks. Have you had a look at the various external sites? Really to me what you're talking about sounds like some homework exercise and we shouldn't help with those except perhaps to give hints. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 10:20, 25 February 2012 (UTC)
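For the record, pulling the three fields out of an IEEE double takes only a couple of lines; a Python sketch (the struct format characters fix the byte order, so no manual reversal is needed):

<syntaxhighlight lang="python">
import struct

def decompose(x: float):
    """Return (sign, biased exponent, fraction) of an IEEE 754 double."""
    bits = struct.unpack('<Q', struct.pack('<d', x))[0]  # '<' pins little-endian
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF       # biased by 1023
    fraction = bits & ((1 << 52) - 1)     # implicit leading 1 for normal numbers
    return sign, exponent, fraction

print(decompose(1.0))   # (0, 1023, 0)
print(decompose(-2.5))  # (1, 1024, 2**50)
</syntaxhighlight>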
== imho, "real numbers" is didactically misleading ==
 
I'd like to propose to change the beginning of the first sentence, because the limited number of bits in the significand only allows for storing rational binary numbers. Because two is a prime factor of ten, this means only rational decimal numbers can be stored as well. Concluding, I'd like to propose to replace "real" by "rational" there.
[[User:Drgst|Drgst]] ([[User talk:Drgst|talk]]) 13:17, 25 February 2012 (UTC)
 
:Definitely not. That is a bad idea. They are approximations to real numbers. The concept of rational number just doesn't come into it. That they are rational is just a side effect. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 14:32, 25 February 2012 (UTC)
 
::In the section 'Some other computer representations for non-integral numbers' there are some systems that can represent some irrational numbers. For instance, a logarithmic system does not necessarily represent rational numbers. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 14:36, 25 February 2012 (UTC)
 
:::Sorry for the delayed answer, Dmcq, it seems I forgot to tick the "watch page" checkbox... now for the content: IEEE FP numbers definitely are rational numbers. Even the simplest irrational number in the world, sqrt(2), cannot be represented, for example. Any mathematical theorem that really depends on the existence of irrational numbers does not hold for the set of FP numbers. Nevertheless, you are right in stating that FP numbers are meant to approximate real numbers. Yet, as no non-rational number can be represented, transcendental numbers are far from being representable. Of course, this has serious consequences: for example, none of these nice trigonometric identities involving pi or pi/2 can be used naively without introducing large errors. This is just a simple example of why I think people should be warned of associating floating point numbers with real numbers.[[User:Drgst|Drgst]] ([[User talk:Drgst|talk]]) 21:14, 27 June 2012 (UTC)
 
::::"Irrational numbers are those real numbers that cannot be represented as terminating or repeating decimals." --[[Irrational number]] Therefore, irrational numbers ''cannot be exactly represented on any digital computer''. However, you can get arbitrarily close. It really doesn't take all that many bits to handle a Planck length (~10^-35m) and the estimated size of the universe (~10^26m) in the same calculation.
 
::::The key point here is that floating point really is a method of representing (not perfectly but arbitrarily close) real numbers. Yes, it just so happens that some of them are represented exactly and others are not, but that's not relevant to the fact that FP is a method of representing (imperfectly) real numbers. All of this is covered quite nicely in the "Representable numbers, conversion and rounding" section. No need to make the lead confusing and misleading. --[[User:Guy Macon|Guy Macon]] ([[User talk:Guy Macon|talk]]) 22:48, 27 June 2012 (UTC)
 
:::::I don't think this is correct "floating point really is a method of representing (not perfectly but arbitrarily close) real numbers". We talk about the "representable numbers" as those real numbers which can be represented exactly within the system. Other real numbers are rounded to some representable number. So I think we should either speak in terms of "working with real numbers" (which seems a little vague) or "representing approximations to real numbers" (as we do later in the article). --[[User:JakeVortex|Jake]] ([[User talk:JakeVortex|talk]]) 08:50, 22 October 2012 (UTC)
::::::You make a good point, but while "working with real numbers" is inexact and vague, "representing approximations to real numbers" is wordy and clumsy. Perhaps we can devise a third alternative? --[[User:Guy Macon|Guy Macon]] ([[User talk:Guy Macon|talk]]) 12:57, 22 October 2012 (UTC)
 
:::::::What about "approximating real numbers"? But IMHO, "real numbers" is slightly incorrect, because floating point can also be used for complex arithmetic (though a complex number is here seen as a pair of two real numbers). Moreover a floating-point arithmetic is not just about the representation, but also the behavior when doing an operation (e.g. how the result is rounded). So, I would prefer something like: "a method of doing numerical computations" [[User:Vincent Lefèvre|Vincent Lefèvre]] ([[User talk:Vincent Lefèvre|talk]]) 22:09, 22 October 2012 (UTC)
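The "rounded to some representable number" point in two lines of Python (Decimal prints the exact value a double stores):

<syntaxhighlight lang="python">
from decimal import Decimal

print(Decimal(0.1))   # 0.1000000000000000055511151231257827021181583404541015625
print(Decimal(0.25))  # 0.25 -- a dyadic rational, stored exactly
</syntaxhighlight>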
 
== Guard bits ==
 
Anybody know where the business of needing three extra bits comes from? For addition one only needs a guard/round digit plus a sticky bit, as the sticky bit will always be zero if subtraction means you have to shift up. And for multiplication one needs the double length to cope with carry properly before rounding - but one can still cut that down to two bits before applying the particular rounding. The literature talks about guard and round and sticky so I'm not disputing putting it in the text, just wondering why people got the idea in their heads in the first place. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 13:03, 8 March 2012 (UTC)
 
:Somewhat related: Take a look at "2 vs 3 guard bits" here:
:http://www.engineering.uiowa.edu/~carch/lectures07/55035-070404-prn.pdf
 
:Also interesting:
:http://www.google.com/patents/US4282582.pdf
 
:These two searches turn up some interesting pages:
:[http://www.google.com/search?q=%22floating+point%22+%2240+bits%22 <nowiki>http://www.google.com/search?q="floating+point"+"40+bits"</nowiki>]
:[http://www.google.com/search?q=%22floating+point%22+%22eight+guard+bits%22+%22DSP%22 <nowiki>http://www.google.com/search?q="floating+point"+"eight+guard+bits"+"DSP"</nowiki>]
:--[[User:Guy Macon|Guy Macon]] ([[User talk:Guy Macon|talk]]) 00:39, 9 March 2012 (UTC)
 
::Goldberg gives a discussion of the need for two guard digits in http://www.validlab.com/goldberg/paper.pdf (page 195). There is a very clear description with example cases in: Michael L. Overton (2001). Numerical Computing with IEEE Floating Point Arithmetic. SIAM. [[User:Brianbjparker|Brianbjparker]] ([[User talk:Brianbjparker|talk]]) 06:17, 9 March 2012 (UTC)
 
:::Very good reference. It should be noted that he not only covers base 10 and guard (decimal) digits but also base 2 and guard bits. --[[User:Guy Macon|Guy Macon]] ([[User talk:Guy Macon|talk]]) 07:02, 9 March 2012 (UTC)
 
:::I just looked at some implementation of the whole business I did ages ago, and I did actually use three bits! Just me forgetting what I'd done, sorry. Yes, the subtraction does actually require them all. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 11:33, 9 March 2012 (UTC)
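Goldberg's guard-digit discussion can be replayed with a toy base-10 model; a hedged Python sketch of his 3-digit example 10.1 − 9.93 (the helper is illustrative, not from the paper):

<syntaxhighlight lang="python">
def sub_p_digits(xs, ys, shift, p, guard):
    """Subtract two p-digit decimal significands, y shifted right by
    `shift` digits to align exponents, keeping `guard` extra digits."""
    xw = xs * 10**guard               # widen x to p + guard digits
    yw = ys * 10**guard // 10**shift  # shift y right, truncating lost digits
    return xw - yw                    # value = result * 10**(e - p + 1 - guard)

# 10.1 - 9.93 with p = 3: the exact answer is 0.17
print(sub_p_digits(101, 993, 1, 3, 0))  # 2  -> 0.2  (no guard digit: 30 ulps off)
print(sub_p_digits(101, 993, 1, 3, 1))  # 17 -> 0.17 (one guard digit: exact)
</syntaxhighlight>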
== edit : computation in page is correct after all ==
 
Sorry for the confusion : I used t_(i+1) instead of t_i. for that reason I missed a factor 2 : 2^(i+1) = 2 * 2^i. <small><span class="autosigned">— Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[User:KeesLem|KeesLem]] ([[User talk:KeesLem|talk]] • [[Special:Contributions/KeesLem|contribs]]) 14:36, 21 February 2013 (UTC)</span></small><!-- Template:Unsigned --> <!--Autosigned by SineBot-->
 
== Justification for division by zero definition ==
 
I [http://en.wikipedia.org/w/index.php?title=Division_by_zero&diff=511812597&oldid=510158610 recently added] to [[division by zero]] this statement with an appropriate source:
:"The justification for this definition is to preserve the sign of the result in case of [[arithmetic underflow]]. For example, in the double-precision computation 1/(''x''/2), where ''x'' = ±2<sup>−149</sup>, the computation ''x''/2 underflows and produces ±0 with sign matching ''x'', and the result will be ±∞ with sign matching ''x''. The sign will match that of the exact result ±2<sup>150</sup>, but the magnitude of the exact result is too large to represent, so infinity is used to indicate overflow."
Provided this is valid, I wonder if it could also be added in some relevant ___location in the body of floating point related articles. In general I'd like to see more information on design rationales. Thanks! [[User:Dcoetzee|Dcoetzee]] 07:42, 11 September 2012 (UTC)
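The quoted example can be reproduced with NumPy's single-precision type (plain Python floats raise ZeroDivisionError rather than returning infinity, hence the detour):

<syntaxhighlight lang="python">
import numpy as np

tiny = np.float32(2.0 ** -149)        # smallest positive single subnormal
with np.errstate(divide='ignore', under='ignore'):
    for x in (tiny, -tiny):
        half = x / np.float32(2)      # underflows to +0.0 or -0.0
        print(np.float32(1) / half)   # inf, then -inf: sign matches x
</syntaxhighlight>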
 
== Signed zero section, branch cuts ==
 
The section on signed zero (under Internal representation >> Special values >> Signed zero) says the following:
 
"The difference between +0 and −0 is mostly noticeable for complex operations at so-called [[Branch cut|branch cuts]]."
 
In a strictly mathematical sense, +0/-0 ''can'' be interpreted as describing the limiting behaviors of a function, but that's not actually what's happening here. Moreover, branch cuts are not the only situation where these exceptional limiting behaviors appear, one can have branch cuts without exceptional limiting behaviors of this sort, and none of the examples given in the section are actually branch cuts. As far as I can tell, there is absolutely no significance to the relationship between branch cuts in complex analysis and signed zero in floating point numerical representations, but I wanted to make sure there wasn't a good reason for this being here. Thoughts? [[Special:Contributions/71.227.119.236|71.227.119.236]] ([[User talk:71.227.119.236|talk]]) 15:25, 29 September 2012 (UTC)
 
:Result of a quick Google search:
 
:"A system with signed zero can distinguish between asin(5+0i) and asin(5-0i) and pick the appropriate branch cut continuous with quadrant I or quadrant IV, respectively. A system without signed zero cannot distinguish and, according to the choses the branch cut such that it is continuous with quadrant IV (consistent with the rule of CCC). So, for asin(5+0i) it will return the same value as a system with signed zero would for asin(5-0i)." -Richard B. Kreckel ( [ http://www.ginac.de/~kreckel/ ] [ http://lists.gnu.org/archive/html/bug-gsl/2011-12/msg00004.html ] ).
 
:I think that when he wrote "according to the" he meant "accordingly" (probably not a native English speaker). --[[User:Guy Macon|Guy Macon]] ([[User talk:Guy Macon|talk]]) 23:34, 29 September 2012 (UTC)
 
::Somewhat straying from the subject but still quite interesting; the "Signed Zero" section of "What Every Computer Scientist Should Know About Floating-Point Arithmetic" ( [ http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html ] ) --[[User:Guy Macon|Guy Macon]] ([[User talk:Guy Macon|talk]]) 23:41, 29 September 2012 (UTC)
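Python's cmath module follows these rules, so the effect is visible directly; sqrt's branch cut lies along the negative real axis:

<syntaxhighlight lang="python">
import cmath

# the sign of zero selects which side of the cut the argument sits on
print(cmath.sqrt(complex(-1.0, 0.0)))   # 1j  -- continuous from above the cut
print(cmath.sqrt(complex(-1.0, -0.0)))  # -1j -- continuous from below
</syntaxhighlight>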
 
== imho, the computation for Pi as shown actually computes only Pi/2 ==
 
The algorithm as shown to compute an approximation of Pi actually computes, imo, in this form only Pi/2, even though the output shown contains an approximation for Pi. I think either the values should be halved or the formula should be changed to: 12 * 2^i * t_i.
[[User:KeesLem|KeesLem]] ([[User talk:KeesLem|talk]]) 15:16, 21 February 2013 (UTC) <span style="font-size: smaller;" class="autosigned">— Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/130.161.210.156|130.161.210.156]] ([[User talk:130.161.210.156|talk]]) </span><!-- Template:Unsigned IP --> <!--Autosigned by SineBot-->