Talk:Floating-point arithmetic

{{talk header}}
{{Vital article|topic=Technology|level=5|class=B}}
{{WikiProject banner shell|class=B|vital=yes|1=
{{WikiProject Computing|importance=Top|science=y|science-importance=Top}}
{{WikiProject Computer science|importance=Top}}
}}
{{User:MiszaBot/config
|archiveheader = {{aan}}
}}
{{archives|bot=Lowercase sigmabot III|age=3|units=months}}
 
== imprecise info about imprecision of tan(gens) ==

imho that: 'Also, the non-representability of π (and π/2) means that an attempted computation of tan(π/2) will not yield a result of infinity' - is misleading as it's more a problem of the cos approximation not yielding '0' for pi()/2, if you replace cos(x) with sin(x-pi()/2) for that range you get a nice #DIV/0! for tan(pi()/2),
as well sin(pi()) not resulting in '0' can be corrected by replacing sin(x) with -sin(x-pi()) for that range,
not sure if it holds, but if you reduce all trig. calculations on the numerical values of sin in the first quadrant - what imho is possible - the results may come out quite fine ... greatly neglected by calc, ex$el and others ... <!-- Template:Unsigned IP --><small class="autosigned">—&nbsp;Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/77.0.177.112|77.0.177.112]] ([[User talk:77.0.177.112#top|talk]]) 01:26, 11 March 2021 (UTC)</small> <!--Autosigned by SineBot-->

:No, the tan floating-point function has nothing to do with the cos floating-point function. [[User:Vincent Lefèvre|Vincent Lefèvre]] ([[User talk:Vincent Lefèvre|talk]]) 11:32, 11 March 2021 (UTC)

:hello @Vincent, sorry for objecting ... imho (school math) and acc. to wikipedia (https://en.wikipedia.org/wiki/Trigonometric_functions, esp. 'Summary of relationships between trigonometric functions' there) "tan(x) = sin(x) / cos(x)", once you get a proper cos at pi()/2 [use sin(pi()/2-x), same reference], you can calculate a proper tan with overflow (#DIV/0! in 'calc'),
perhaps it won't work 'in IEEE' (then it's a weakness there), but developers or users can achieve proper results once they have proper sin values for the first quadrant, <!-- Template:Unsigned IP --><small class="autosigned">—&nbsp;Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/77.0.177.112|77.0.177.112]] ([[User talk:77.0.177.112#top|talk]]) 14:03, 11 March 2021 (UTC)</small> <!--Autosigned by SineBot-->

::"tan(x) = sin(x) / cos(x)" is a mathematical definition on the set of the real numbers. This has nothing to do with a floating-point specification. — [[User:Vincent Lefèvre|Vincent Lefèvre]] ([[User talk:Vincent Lefèvre|talk]]) 18:23, 11 March 2021 (UTC)

:hello @Vincent, what's going on? there is a possibility to get math correct results, and you don't want it be posted?

0.: despite you think it a 'non floating point specification' you agree that the formulas hold and achieve correct results?

1.: the wikipedia article does not! state that there is any special 'floating-point-tangens' specification (and imho there isn't any), but states 'that an attempted computation of tan(π/2) will not yield a result of infinity', and that's simply only true for some attempts, by calculating sin() and cos() you can get the correct overflow,

2.: 'mathematical definition on the set of the real numbers', yes, but what in that contradicts applying it on float or double figures as they are a subset of reals? some representations and results will have small deviations, that's the tradeoff for the speed of floats, but the basic math rules should hold as long as there aren't special points against it (as there are against e.g. associative rule) , pi(), pi()/2, pi()/4, 2*pi() and so on are not exact in floats or doubles ... as well as they are not! exact in decimals, despite that we calculate infinity for tan(pi()/2) in decimals, and thus we can! do the same in doubles (and floats?),

3.: plenty things in this world suffer from small deviations in fp-calculations ... we should start correcting them instead of getting the prayer mill 'fp-math is imprecise' going again and again,

4.: i am meanwhile slightly annoyed when 'fp-math is imprecise' is pushed again and again with wrong reasons, fp-math has weaknesses and 'you have to care what you do with it' is true and well known since Goldberg, but this does not forbid to achieve correct results with good algorithms, on the contrary, Goldberg and Kahan explicitly recommend it (because they did not see floating point numbers as a special world in which own laws should apply but as tools to be able to process real world tasks as fast and as good as possible),

5.: the article states that a correct calculation of tan(x) at pi()/2 is impossible as a result of the representation of pi() being imprecise, i'd show:
a: it's not impossible,
b: the representation of pi() isn't an issue against good results,

agree? if not please with clear definitions and sources ... <!-- Template:Unsigned IP --><small class="autosigned">—&nbsp;Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/77.3.16.116|77.3.16.116]] ([[User talk:77.3.16.116#top|talk]]) 16:29, 12 March 2021 (UTC)</small> <!--Autosigned by SineBot-->

:Well, reading the beginning of what you said, about using <code>sin(x-pi/2)</code> in the implementation, yes, due to the cancellation in the subtraction, one could get a division by 0 and an infinity. I've clarified the text by saying "assuming an accurate implementation of tan". This would disallow implementations that do such ugly things. Even using <code>sin(x)/cos(x)</code> in the floating-point system to implement <code>tan(x)</code> would be a bad idea, due to the errors on sin and on cos, then on the division. And for 5, you misread the article (it implicitly assumes no contraction of the <code>tan(pi/2)</code> expression, but this is quite obvious to me). The article does not say that computing tan at the math value π/2 is impossible, it just says that the floating-point tan function will never give an infinity, because its input cannot be π/2 exactly (or ''k''π+π/2 exactly, ''k'' being an integer). — [[User:Vincent Lefèvre|Vincent Lefèvre]] ([[User talk:Vincent Lefèvre|talk]]) 03:44, 13 March 2021 (UTC)

:: hello @Vincent, <br />
::- what do you refer to with: 'the floating-point tan function'? i could only find ('C')-library implementations and recomendations for FPGA's, and 'The asinPi, acosPi and tanPi functions were not part of the IEEE 754-2008 standard because they were deemed less necessary' in 'https://en.wikipedia.org/wiki/IEEE_754', <br />
::- "assuming an accurate implementation of tan": that sounds misleading and imho an attempt to stick to 'fp-math is imprecise' despite there are correct solutions, duping them as 'not accurate', <br />
:: - 'due to the errors on sin and on cos,': if you - or everyone - implement(s) the trig functions as proposed, and! respectively takes that function / that part of the quadrant that has less error one will get ... 'good results', <br />
:: - 'implementations that do such ugly things': opposite ... IEEE or '(binary) fp-math' or 'reducing accuracy by limiting to small amount off digits' is doing 'ugly things' with math in general, most of countermeasures rely on 'dirty tricks', i'd suggest letting the mill 'fp-math is imprecise' phase out, and using instead 'we are intelligent beings, we can recognize difficulties and deal with them' ... or the ' ... at least we try to', <br />
:: - 'Even using sin(x)/cos(x) in the floating-point system to implement tan(x) would be a bad idea, due to the errors on sin and on cos, then on the division.' - don't think that simple, it's well known which trig-function has weaknesses in which range(s) (calculated by approximations or taylor series or similar), pls. consider using substitutions only for that ranges ... <!-- Template:Unsigned IP --><small class="autosigned">—&nbsp;Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/77.10.180.117|77.10.180.117]] ([[User talk:77.10.180.117#top|talk]]) 11:09, 13 March 2021 (UTC)</small> <!--Autosigned by SineBot-->

:::The tan function (tangent) is included in the IEEE 754 and ISO C standards, for instance. The sentence "The asinPi, acosPi and tanPi functions..." is '''not''' about the tan function; moreover, this is historical information, as these functions are part of the current IEEE 754 standard as explained. My addition "assuming an accurate implementation of tan" is needed because some trig implementations are known to be inaccurate (at least for very large arguments), so who knows what one can get with such implementations... — [[User:Vincent Lefèvre|Vincent Lefèvre]] ([[User talk:Vincent Lefèvre|talk]]) 12:16, 13 March 2021 (UTC)

== Lead section edits ==

I [https://en.wikipedia.org/w/index.php?title=Floating-point_arithmetic&diff=1115713350&oldid=1114076215 edited the lead section] to try to tidy it up in the following ways:

- Previously the opening sentence was "In computing, floating-point arithmetic (FP) is arithmetic using formulaic representation of real numbers as an approximation to support a trade-off between range and precision." I found this opaque (what is "arithmetic using formulaic representation"?) and oblique (it doesn't tell you what a floating-point number is, it only talks about an attempted "trade-off"). I think Wikipedia articles should open by defining the thing at hand directly, rather than talking around it. Therefore, the new opening sentence explicitly describes floating-point representation: "In computing, floating-point arithmetic (FP) is arithmetic that represents real numbers approximately, using an integer with a fixed precision, called the mantissa, scaled by an integer exponent of a fixed base."

- Both "significand" and "mantissa" are used to describe the non-exponent part of a floating-point number, but "mantissa" is far more common, so I think it's the better choice. (Google: "floating-point mantissa" yields 672,000 results; "floating-point significand" yields 136,000 results).

- Previously, the topic of the large dynamic range of floating-point numbers was mentioned twice separately; these mentions have been merged into a single paragraph.

- The links for examples of magnitude are changed to point to the actual examples mentioned (galactic distances and atomic distances).

Feel free to discuss here.
— [[User:Ka-Ping Yee|Ka-Ping Yee]] ([[User talk:Ka-Ping Yee|talk]]) 23:31, 12 October 2022 (UTC)

== Schubfach is not WP:OR ==

I'm not quite sure why some of you consider Schubfach as WP:OR. Several implementations have been around for several years already, in particular it has been already adopted to Nim's standard library a year ago and working fine. It's true that the article is not formally reviewed, but honestly being published in a peer-reviewed conference/journal does not necessarily give that much of credit in this case. For example, one of the core parts (minmax Euclid algorithm) of the paper on Ryu contains a serious error, and this has been pointed out by several people, including Nazedin (a core contributor to Schubfach) if I recall correctly.

The main reason why Schubfach paper has not been published in a peer-reviewed journal, as far as I remember, is not because the work has not been verified, rather simply because the author didn't feel any benefit of going through all the paper works for journal publishing (things like fitting into the artificial page limit). The reason why it is still not accepted in OpenJDK (is it? even if it's not merged yet, it will make it soon) is probably because of lack of human resource who can and are willing to review the algorithm, and submitting the paper to a journal does not magically create such a human resource. (Of course they will do some amount of review, but it is very very far from being perfect, which is why things like the errors in the Ryu paper have not been caught in the review process.)

The point is, Schubfach as an algorithm has already been completed a long time ago, like in 2017 as far as I believe, and at least two implementations (one in Java and one in C++) have been around at least since 2019, and the C++ one has been adopted to the standard library of a fairly popular language (Nim), and you can even find several more places where it has been adopted (Roblox, a very popular game in US, for example). So what really is a difference from Ryu? The only difference I can tell is that Ryu has a peer-reviewed journal paper, but as I elaborated, that isn't that big difference as far as I can tell. You also mentioned about new versions of the paper, and I felt like as if you think Schubfach is sort of a WIP project. If that's the case, then no, the new versions are just minor fixes/more clarifications rather than big overhauls. If Ryu paper were not published in a journal, probably the author of Ryu would have done the same kind of revisions (and fixed the error mentioned).

In summary, I think at this point Schubfach is definitely an established work which has no less credibility compared to Ryu and others. [[Special:Contributions/2600:1700:7C0A:1800:24DF:1B93:6E37:99D2|2600:1700:7C0A:1800:24DF:1B93:6E37:99D2]] ([[User talk:2600:1700:7C0A:1800:24DF:1B93:6E37:99D2|talk]]) 01:09, 10 November 2022 (UTC)

:In the mean time, I've learned by e-mail that the paper got a (possibly informal) review by serious people. So, OK to re-add it, but it is important to give references showing that it is used. And please, give the latest version of the paper and avoid typos in the WP text. And instead of "Apparently", try to give facts (i.e., what is really meant by "apparently"). Thanks. — [[User:Vincent Lefèvre|Vincent Lefèvre]] ([[User talk:Vincent Lefèvre|talk]]) 01:26, 10 November 2022 (UTC)

== Digits of precision, a confusing early statement ==

I have removed the portion after the ellipses from the following text formerly found in the article:
"12.345 is a floating-point number in a base-ten representation with five digits of precision...However, 12.345 is not a floating-point number with five base-ten digits of precision." I recognize the distinction made (a number with 5 base-ten digits of precision vs. a base-ten representation of a number with five digits of precision) and I suspect the author intended to observe that a binary representation of 12.345 would not have five base-ten digits of precision, but I can't divine what useful thing is intended to have been communicated there, so I've removed it. If I'm missing something obvious in the interpretation of this line, I suspect many others could, and encourage a more direct explanation if it's replaced. [[User:Factorial|john factorial]] ([[User talk:Factorial|talk]]) 18:44, 24 July 2023 (UTC)

:The sentence was made nonsensical by this revision by someone who mistook 12.3456 for a typo rather than a counterexample: https://en.wikipedia.org/w/index.php?title=Floating-point_arithmetic&diff=prev&oldid=1166821013
:I have reverted the changes, and added a little more verbiage to emphasize that 12.3456 is a counterexample. [[User:Taylor Riastradh Campbell|Taylor Riastradh Campbell]] ([[User talk:Taylor Riastradh Campbell|talk]]) 20:56, 24 July 2023 (UTC)

== Computable reals ==

Concerning [[Special:Diff/1234874429]], I want to thank [[User:Vincent_Lefèvre]] for fast response. I agree that mentioning [[real closed field]] is off-topic. However, I still have a strong impression that [[computable number|computable reals]] should be listed as a separate bullet. I believe it is different from [[symbolic computation]]. I mean that arithmetic operations are not “aware” of <math>\pi</math> being <math>\pi</math>. Should I just propose a new edit? [[User:Korektysta|Korektysta]] ([[User talk:Korektysta|talk]]) 20:50, 17 July 2024 (UTC)

:{{Mention|Korektysta}} Yes, but then, the first sentence of the section (before the list) should avoid the term "representing". It should rather talk about the arithmetic (which is some kind of representation and a way of working with it). BTW, I think that the list of alternatives to floating-point numbers should come later in the article, not in the first section. — [[User:Vincent Lefèvre|Vincent Lefèvre]] ([[User talk:Vincent Lefèvre|talk]]) 11:51, 20 July 2024 (UTC)

::I had to think for a moment, but I still believe that [[computable number|computable reals]] constitute a separate representation. As far as I remember, the CoRN library does not remember the computation tree, but real numbers are represented as functions.
::I agree that [[Floating-point_arithmetic#Alternatives_to_floating-point_numbers|the subsection]] could be moved. For example, from [[Floating-point_arithmetic#Overview|the overview]] to the end of the article, just before [[Floating-point_arithmetic#See_also|See also]] as a separate section. [[User:Korektysta|Korektysta]] ([[User talk:Korektysta|talk]]) 22:56, 1 August 2024 (UTC)

::Ah, OK. Effectively, the arithmetic builds the computation tree, but it is opaque for the user. I guess that the treatment of leafs in the tree is also different because there is no special constant for <math>\pi</math>. <math>\pi</math> is just another function. [[User:Korektysta|Korektysta]] ([[User talk:Korektysta|talk]]) 04:49, 2 August 2024 (UTC)

== Patriot missile incident ==

The other day I made an edit clarifying the nature of the Patriot missile incident, based on the public sources already cited. [[User:Vincent Lefèvre]] reverted two parts of it:

First, I replaced the link to [[loss of significance]] by the simpler word &lsquo;error&rsquo;, because [[loss of significance]] now just redirects to [[catastrophic cancellation]] since the old article was [https://en.wikipedia.org/w/index.php?title=Loss_of_significance&diff=prev&oldid=1107845106 deleted]. I was [https://en.wikipedia.org/wiki/Talk:Catastrophic_cancellation#Proposed_merge_of_Loss_of_significance_into_Catastrophic_cancellation loosely involved] in this deletion but I don't feel strongly about this; I think the term &lsquo;loss of significance&rsquo; is unnecessarily fancy without saying anything more than &lsquo;error&rsquo; does, but it's fine, and the error is essentially catastrophic cancellation after all.

Second, I added the text:

: The error arose not from the use of floating-point, but from the use of two different unit conversions when representing time in different parts of a calculation.

This text was [https://en.wikipedia.org/w/index.php?title=Floating-point_arithmetic&curid=11376&diff=1299610166&oldid=1299537597 deleted] on the grounds that:

: The intent is that there is a single time unit: 0.1s. The issue is that the software assumed that its accuracy did not matter; Skeel says: "this time difference should be in error by only 0.0001%, a truly insignificant amount". Something that may remain true... until a cancellation occurs like here.

But I don't think that is the whole story. The Skeel citation<ref name="Skeel">{{citation |url=https://www-users.cse.umn.edu/~arnold/disasters/Patriot-dharan-skeel-siam.pdf |title=Roundoff Error and the Patriot Missile |last=Skeel |first=Robert |journal=SIAM News |volume=25 |issue=4 |page=11 |date=July 1992 |access-date=2024-11-15}}</ref> says (emphasis added):

: When Patriot systems were brought into the Gulf conflict, the software was modified (several times) to cope with the high speed of ballistic missiles, for which the system was not originally designed. <p>At least one of these software modifications was the introduction of a subroutine for converting clock-time more accurately into floating-point. This calculation was needed in about half a dozen places in the program, but the call to the subroutine was not inserted at every point where it was needed. '''Hence, with a less accurate truncated time of one radar pulse being subtracted from a more accurate time of another radar pulse, the error no longer cancelled.'''

The designers certainly didn't assume that its accuracy did not matter&mdash;if they did assume that, why would they have written a new conversion subroutine for more accurate conversion?

Suppose the floating-point system on the control computer had 30-bit precision (a low estimate for a 48-bit floating-point format). The logic computed something like <math>C_1(t_1) - C_0(t_0)</math>, where <math>C_1(t)</math> is (say) the ''new'' higher-precision conversion from fixed-point to floating-point giving <math>0.1\times t\times(1 - 2^{-30})</math>, and <math>C_0(t)</math> is (say) the ''old'' lower-precision conversion giving <math>0.1\times t\times(1 - 2^{-20})</math>. There may be an additional ''floating-point rounding error'' of about one ulp, but that pales in comparison to the ''discrepancy between conversion subroutines'' of about <math>2^{10} \approx 1000</math> ulps in this hypothesis of 30-bit precision (if it were 40-bit precision, then it would be <math>2^{20} \approx 10^6</math> ulps, and so on).

In brief, this was a much more mundane software engineering mistake&mdash;updating a unit conversion subroutine call in one place but not another, so the units are no longer commensurate&mdash;rather than anything you can rightly blame floating-point for.

It's possible that, after long enough uptime, computing <math>C(t_1) - C(t_0)</math> rather than <math>C(t_1 - t_0)</math> with the ''same'' conversion subroutine <math>C</math> ''could'' lose enough significant bits due to floating-point rounding error to cause the same problem. But in this case, the problem was using ''different'' conversion subroutines <math>C_1</math> and <math>C_0</math>. And, with at least 30-bit precision, the floating-point rounding error would take a thousand times as long to cause the same problem&mdash;over twenty thousand hours before a problem, or about two years and four months of continuous uptime. (I would also guess the format has >30 bits of precision, so it's likely much longer than that.)
 
This cautionary tale is often used to blame the designers for using floating-point to represent time and to argue that floating-point numbers are incomprehensible black magic where reasoning goes out the window (e.g., on [https://news.ycombinator.com/item?id=1667060 Hacker News] and [https://old.reddit.com/r/programming/comments/6npfz/the_patriot_missile_failure/ Reddit]), even though the underlying story justifies neither of these conclusions. So that's why I think it is important to spell out the actual bug here&mdash;incomplete software change caused subtraction of incommensurate (but similar) units. [[User:Taylor Riastradh Campbell|Taylor Riastradh Campbell]] ([[User talk:Taylor Riastradh Campbell|talk]]) 04:16, 17 July 2025 (UTC)
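The hypothetical conversions described above can be sketched numerically. This is an illustration only: the 20- and 30-bit accuracies and the <code>C0</code>/<code>C1</code> names come from the comment's hypothesis, not from the actual Patriot software.

```python
# Illustrative sketch (not the actual Patriot code): two clock-to-seconds
# conversions of differing accuracy, as hypothesized above.

def C0(t):
    """Older conversion: 0.1*t, accurate to roughly 20 bits (hypothetical)."""
    return 0.1 * t * (1 - 2**-20)

def C1(t):
    """Newer conversion: 0.1*t, accurate to roughly 30 bits (hypothetical)."""
    return 0.1 * t * (1 - 2**-30)

ticks = 100 * 3600 * 10       # clock reading (0.1 s ticks) after 100 hours

same = C1(ticks) - C1(ticks)    # consistent conversions: the error cancels exactly
mixed = C1(ticks) - C0(ticks)   # mixed conversions: the discrepancy survives

print(same)    # 0.0
print(mixed)   # ~0.34 s, comparable to the reported Patriot timing error
```

The point: in the mixed case the dominant term is the discrepancy between the two approximations, not the rounding error of either conversion on its own.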
 
:{{Re|Taylor Riastradh Campbell}}
:Just saying "error" would be misleading because in general, one has an error at almost each floating-point operation, and this is often not a major issue (with carefully designed code). What matters here is that the (relative) error is very large due to a [[catastrophic cancellation]] as described in the document.
 
:Saying that there are "two different unit conversions" is incorrect, as the time unit is the same in both routines (0.1s), contrary to the [[Mars Climate Orbiter#Cause of failure|Mars Climate Orbiter failure]], where the pound-force second and newton-second units were mixed up (so, even with an infinite precision, the failure would still have occurred); the issue here is that there are different approximations in the time calculation, i.e. with different accuracy (see the term "accurate" used by Skeel). This is really related to error analysis (with an infinite precision, there would have been no issues). — [[User:Vincent Lefèvre|Vincent Lefèvre]] ([[User talk:Vincent Lefèvre|talk]]) 12:30, 17 July 2025 (UTC)
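The catastrophic-cancellation effect referred to above can be shown with a tiny numerical sketch. All magnitudes below are made up purely to show the mechanism; they are not taken from the Patriot analysis.

```python
# Each converted time has a tiny *relative* error, but subtracting two nearly
# equal times keeps the absolute error while shrinking the result, so the
# relative error of the difference explodes (catastrophic cancellation).
# All magnitudes here are made up for illustration.

exact_t1 = 360000.0   # seconds after 100 hours of uptime
exact_dt = 0.5        # true interval between two radar pulses, in seconds

approx_t1 = exact_t1 * (1 - 1e-9)                # newer conversion: tiny relative error
approx_t0 = (exact_t1 - exact_dt) * (1 - 1e-6)   # older conversion: larger relative error

dt = approx_t1 - approx_t0                # computed interval
rel_err_dt = abs(dt - exact_dt) / exact_dt

print(rel_err_dt)   # ~0.72: each operand was accurate to 1e-6 or better,
                    # yet the computed interval is off by about 72%
```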
 
::{{Quote|Just saying "error" would be misleading because in general, one has an error at almost each floating-point operation, and this is often not a major issue (with carefully designed code). What matters here is that the (relative) error is very large due to a catastrophic cancellation as described in the document.}}
::Right, I have no objection to your reverting that part of the change. I was just explaining why I changed the text/link &lsquo;loss of significance&rsquo; in the first place.
 
::{{Quote|Saying that there are "two different unit conversions" is incorrect, as the time unit is the same in both routines (0.1s)&hellip;the issue here is that there are different approximations in the time calculation, i.e. with different accuracy (see the term "accurate" used by Skeel).}}
::I'm not attached to phrasing it in terms of unit conversions (though I think a better analogy is yards/meters, which differ by about 9%, or quarts/liters, which differ by about 6%, rather than pounds/newtons, which differ by a factor of four). The part that is important to emphasize is the error from subtracting different approximations to the time conversion&mdash;because of an incomplete software change that didn't update all the subroutine calls&mdash;rather than the error from using floating-point.
::Had the control computer used ''either'' <math>C_0(t_1) - C_0(t_0)</math> ''or'' <math>C_1(t_1) - C_1(t_0)</math> ''consistently'', whether the older and worse approximation or the newer and better approximation, the incident likely wouldn't have happened even though it used floating-point (until several years of uptime, rather than several dozen hours of uptime). And subtracting different conversion approximations <math>C_0</math> and <math>C_1</math> even in exclusively fixed-point arithmetic, or infinite-precision arithmetic, would also have caused the same error. [[User:Taylor Riastradh Campbell|Taylor Riastradh Campbell]] ([[User talk:Taylor Riastradh Campbell|talk]]) 13:13, 17 July 2025 (UTC)
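The claim that mixing two conversion approximations fails even without floating-point can be checked with exact rational arithmetic. This is a sketch; the 24- and 40-bit truncation widths are hypothetical, not the actual Patriot formats.

```python
from fractions import Fraction

# Two truncations of 0.1 to different (hypothetical) precisions.
C0 = Fraction(int(Fraction(2**24, 10)), 2**24)   # coarser approximation of 0.1
C1 = Fraction(int(Fraction(2**40, 10)), 2**40)   # finer approximation of 0.1

ticks = 100 * 3600 * 10   # clock ticks after 100 hours

# Consistent use of one conversion cancels exactly -- even the coarse one.
assert C0 * ticks - C0 * ticks == 0

# Mixing the two conversions leaves their discrepancy, despite the arithmetic
# here being exact rational arithmetic with no rounding at all.
error = C1 * ticks - C0 * ticks
assert error != 0
print(float(error))   # roughly 0.13 s of disagreement after 100 hours
```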
 
:::The old code had to be rewritten to use a more accurate time computation ''to cope with the high speed of ballistic missiles''. So, anyway, even the old code, which used a ''consistent'' conversion, was globally not accurate enough. And you do not necessarily need to use the same conversion; two different conversion routines with sufficient accuracy would be enough. With infinite precision, you would have <math>C_0 = C_1</math>, so there would be no errors. — [[User:Vincent Lefèvre|Vincent Lefèvre]] ([[User talk:Vincent Lefèvre|talk]]) 13:46, 17 July 2025 (UTC)