:::::::Thanks. Actually, I believe that "d) Provide direct support for execution-time diagnosis of anomalies" is referring to this use of directed rounding to diagnose numerical instability. Certainly Kahan makes it clear that he considered it a key usage from the early design of the x87. I agree that its use for interval arithmetic was also considered from the beginning. [[User:Brianbjparker|Brianbjparker]] ([[User talk:Brianbjparker|talk]]) 02:11, 22 February 2012 (UTC)
::::::::No, that refers to the identification of the various exceptions, the methods of notifying them, and the handling of the signalling and quiet NaNs. Your reference from 2007 does not support in any way that arbitrarily jiggling the calculations using directed rounding was considered as a reason to include directed rounding in the specification. He'd have been just laughed at if he had justified spending money on the 8087 for such a purpose when there are easy ways of doing something like that without any hardware assistance. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 08:23, 22 February 2012 (UTC)
== Trivia removed ==
I removed the remark that the full precision of extended precision is attained when extended precision is used. The point about the algorithm is that it converges at whatever precision is used. We don't need to put in the precisions of the single, double and extended precision versions of the algorithm. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 23:23, 23 February 2012 (UTC)
:::I disagree that it is trivia-- it is a good example to also illustrate the earlier discussions on the usage of extended precision. In any case, to make it easier to find for those who may be interested in the information: the footnote to the final example, giving the precision using double extended for internal calculations, is included here-
:::"As the recurrence is applied repeatedly, the accuracy improves at first, but then it deteriorates. It never gets better than about 8 digits, even though 53-bit arithmetic should be capable of about 16 digits of precision. When the second form of the recurrence is used, the value converges to 15 digits of precision. Footnote: if intermediate calculations are carried at a higher precision using double extended (x87 80 bit) format, it reaches 18 digits of precision, which is the full target double precision." [[User:Brianbjparker|Brianbjparker]] ([[User talk:Brianbjparker|talk]]) 23:37, 23 February 2012 (UTC)
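The behaviour quoted above is easy to reproduce. A minimal sketch, assuming the recurrence referred to is the article's Archimedes polygon iteration for π (the function name is illustrative):

```python
import math

def archimedes_pi(n):
    """Approximate pi with a 6*2^n-sided circumscribed polygon.

    t starts at tan(pi/6) = 1/sqrt(3); each step halves the angle.
    The two update forms are algebraically equivalent, but the first
    subtracts nearly equal quantities and loses precision.
    """
    t_first = t_second = 1 / math.sqrt(3)
    sides = 6
    for _ in range(n):
        t_first = (math.sqrt(t_first**2 + 1) - 1) / t_first      # cancellation
        t_second = t_second / (math.sqrt(t_second**2 + 1) + 1)   # rearranged
        sides *= 2
    return sides * t_first, sides * t_second

bad, good = archimedes_pi(20)
# good stays close to pi; bad has drifted far from it by this point
```

The rearranged form replaces the subtraction `sqrt(t²+1) − 1` with the equivalent `t / (sqrt(t²+1) + 1)`, which involves no cancellation.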
:It just has nothing to do with extended precision. The first algorithm would go wrong just as badly with extended precision and the second one behaves exactly like double. There is nothing of note here. Why should it have all the various precisions in? The same thing would happen with float or quad precision. All it says is that the precision for different precisions is different. Also a double cannot hold 18 digits of precision; used as an intermediate for double you'd at most get one bit of precision extra. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 00:50, 25 February 2012 (UTC)
::::Agreed that the footnote does nothing to clarify the particular point being made by that example-- that wasn't the aim though. The intention was to also utilise the example to demonstrate the utility of computing intermediate values to higher precision than needed by the final destination format, to limit the effects of round-off. In that sense it is an example for the earlier discussion on extended precision (and also the section on approaches to improve accuracy). Perhaps the text "Footnote: if intermediate calculations are carried at a higher precision using double extended (x87 80 bit) format, it reaches 18 digits of precision, which is the full target double precision (see discussion on extended precision above)." would be clearer. Agreed it is not the most striking example of this, but still demonstrates the idea-- perhaps a separate, more striking and specific example would be preferable, I will see what I can find. [[User:Brianbjparker|Brianbjparker]] ([[User talk:Brianbjparker|talk]]) 04:52, 25 February 2012 (UTC)
:::::It does not illustrate that. What gives you the idea it does? If anything it is an argument against what was said before. Using extended precision in the intermediate calculation and storing back as double does not give increased precision in the final result. The 18 digits only applies to the extended precision; it does not apply to the double result. The 18 digits is not the target precision of a double. A double can only hold 15 digits accurately. There is no way to stick the extra precision of the extended precision into the target double. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 09:53, 25 February 2012 (UTC)
::::::IEEE 754 double precision gives from 15 to 17 decimal digits of precision (17 digits if round-tripping from double to text back to double). When the example is computed with extended precision it gives 17 decimal digits of precision, so if the returned double was to be used for further computation it would have less roundoff error, in ULP (at least one extra decimal digit worth). Although, as you say, if the double result is printed to 15 decimal digits this extra precision will be lost. I agree that it is not a compelling example-- a better example could show a difference in many decimal significant digits due to internal extended precision. [[Special:Contributions/121.45.205.130|121.45.205.130]] ([[User talk:121.45.205.130|talk]]) 23:21, 25 February 2012 (UTC)
:::::::The 17 digits for a round trip is only needed to cope with making certain that rounding works okay. The actual precision is just less than 16 digits, about 15.95 if one cranks the figures. Printing has nothing to do with it. I was just talking about the 53 bits of precision information held within double precision format expressed as decimal digits. You can't shove any more information into the bits. The value there is about 1 ulp out and using extended precision would gain that back. This is what I was saying about extended precision being very useful for getting accurate maths functions, straightforward implementations in double will very often be 1 ulp out without special work whereas the extended precision result will very often give the value given by rounding the exact value. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 00:08, 26 February 2012 (UTC)
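The digit counts in this exchange can be checked directly; a small sketch (the variable names are illustrative):

```python
import math

# Information content of a binary64 significand in decimal digits:
# 53 * log10(2) ~ 15.95, as stated above.
digits = 53 * math.log10(2)

# 17 significant decimal digits always round-trip a double through text;
# 15 digits can lose the last bit, as this example shows.
x = 0.1 + 0.2                  # 0.30000000000000004...
seventeen = float(f"{x:.17g}")
fifteen = float(f"{x:.15g}")
```

Here `seventeen == x` while `fifteen != x`: printing to 15 digits discards the trailing bit that distinguishes `x` from 0.3.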
::::::::::Ideally, what should be added is a more striking example of using excess precision in intermediate computations to protect against numerical instability. The current one can indeed demonstrate this if excess precision is carried to IEEE quad precision, in which case the numerically unstable version gives good results. I have added notes to that effect which will do as an example for now. There are many examples also showing this using only double extended (e.g. even as simple as computing the roots of a quadratic equation), and I will add such an example in the future, but not for a while (by the way, I think double extended adds more than 1 ULP, but I haven't checked that). [[User:Brianbjparker|Brianbjparker]] ([[User talk:Brianbjparker|talk]]) 06:54, 26 February 2012 (UTC)
:::::::::::That's not true either because how does one know when to stop? Using quadruple precision would still diverge. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 11:45, 26 February 2012 (UTC)
::::::::::::::::Yes that is so- once it does reach the correct value it stays there for several iterations (at double precision) but does eventually diverge from it again, so a stopping criterion of when the value does not change at double precision could be used. But yes, I am not completely happy with that example for that reason-- feel free to remove it if you feel it is misleading. Actually Kahan has several very compelling examples in his notes-- I will post one here in the next week or so. [[User:Brianbjparker|Brianbjparker]] ([[User talk:Brianbjparker|talk]]) 14:41, 26 February 2012 (UTC)
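The quadratic-equation case mentioned above is straightforward to demonstrate. A sketch with illustrative coefficients (x² − 10⁸x + 1 = 0, whose small root is ≈10⁻⁸): the textbook formula loses about half the significant digits to cancellation in double precision, while the standard rearrangement avoids it.

```python
import math

a, b, c = 1.0, -1e8, 1.0
disc = math.sqrt(b * b - 4 * a * c)   # ~1e8, agrees with |b| to ~16 digits

# Textbook formula: subtracting two nearly equal ~1e8 values
naive_small = (-b - disc) / (2 * a)

# Stable rearrangement: q = -(b + sign(b)*sqrt(disc))/2, then roots q/a, c/q
q = -0.5 * (b - disc) if b < 0 else -0.5 * (b + disc)
stable_small = c / q
```

`naive_small` comes out around 7.45e-9 rather than 1e-8 (about 26% off), while `stable_small` is correct to roughly the last bit; the product of the roots must equal `c/a = 1`, which makes the error easy to measure.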
The use of extra precision can be illustrated easily using differentiation. If the result is to be single precision then using double precision for all the calculations is a good idea because of the loss of significance when subtracting two values of the function. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 12:00, 26 February 2012 (UTC)
::: ok yes, that could be a good example-- I will see what I can come up with. [[User:Brianbjparker|Brianbjparker]] ([[User talk:Brianbjparker|talk]]) 14:41, 26 February 2012 (UTC)
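One way the differentiation suggestion above could be sketched, emulating single precision in pure Python via `struct` (the function names are illustrative; the step is deliberately chosen fine enough for double but below single-precision resolution at x = 1, so the single-precision subtraction cancels completely):

```python
import math
import struct

def f32(x):
    """Round a Python float (binary64) to the nearest IEEE binary32."""
    return struct.unpack('f', struct.pack('f', x))[0]

def deriv_single(f, x, h):
    # Forward difference with every intermediate rounded to single precision.
    x, h = f32(x), f32(h)
    return f32((f32(f(f32(x + h))) - f32(f(x))) / h)

def deriv_double_internal(f, x, h):
    # Same formula, intermediates in double; only the result is single.
    return f32((f(x + h) - f(x)) / h)

h = 1e-8                                        # below single-precision ulp at 1.0
bad = deriv_single(math.sin, 1.0, h)            # 1+h rounds to 1 in single: gives 0.0
good = deriv_double_internal(math.sin, 1.0, h)  # close to cos(1)
```

With a larger step the single-precision version returns something nonzero but still loses several digits to the subtraction; carrying the intermediates in double keeps the only significant error the truncation error of the difference quotient itself.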
: I have added an example from Kahan's publications-- I think this is a good example as it demonstrates the massive roundoff error (up to half the significant digits lost) that can occur with even innocuous-looking formulae, and shows the two main methods to correct or improve that: increased internal precision, or numerical analysis. [[User:Brianbjparker|Brianbjparker]] ([[User talk:Brianbjparker|talk]]) 07:03, 28 February 2012 (UTC)
::Yes, it is definitely better to source something like that to a good source like him. I may not agree with every last word he says about it but he definitely is the premier source for anything on floating point. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 14:14, 28 February 2012 (UTC)