Talk:Floating-point arithmetic: Difference between revisions

{{talk header}}
{{Vital article|topic=Technology|level=5|class=B}}
{{WikiProject banner shell|class=B|vital=yes|1=
{{WikiProject Computing|importance=Top|science=y|science-importance=Top}}
{{WikiProject Computer science|importance=Top}}
}}
{{User:MiszaBot/config
|archiveheader = {{aan}}
}}
{{archives|bot=Lowercase sigmabot III|age=3|units=months}}
 
== spelling inconsistency floating point or floating-point ==

The title and first section say "floating point". But elsewhere in the article "floating-point" is used. The article should be consistent in spelling.
In IEEE 754 they use "floating-point" with a hyphen. I think that should be the correct spelling. [[User:JHBonarius|JHBonarius]] ([[User talk:JHBonarius|talk]]) 14:18, 18 January 2017 (UTC)
:This is not an inconsistency (at least, not always), but a usual English rule: when followed by a noun, one adds a hyphen to avoid ambiguity, e.g. "floating-point arithmetic". [[User:Vincent Lefèvre|Vincent Lefèvre]] ([[User talk:Vincent Lefèvre|talk]]) 14:26, 18 January 2017 (UTC)

== hidden bit ==

The article [[Hidden bit]] redirects to this article, but there is no definition of this term here (there are two usages, but they are unclear in context unless you already know what the term is referring to). Either there should be a definition here, or the redirection should be removed and a stub created. [[User:JulesH|JulesH]] ([[User talk:JulesH|talk]]) 05:43, 1 June 2017 (UTC)
: It is defined in the [[Floating-point arithmetic#Internal_representation|Internal representation]] section. [[User:Vincent Lefèvre|Vincent Lefèvre]] ([[User talk:Vincent Lefèvre|talk]]) 17:56, 1 June 2017 (UTC)

== Seeking consensus on the deletion of the "Causes of Floating Point Error" section. ==

There is a discussion with [[User_talk:Vincent_Lefèvre#Deletion_of_section_"Causes_of_Floating_Point_Error"_in_Floating_Point_article|Vincent Lefèvre]] seeking consensus on whether the deletion of the "Causes of Floating Point Error" section from this article should be reverted.

[[User:Softtest123|Softtest123]] ([[User talk:Softtest123|talk]]) 20:16, 19 April 2018 (UTC)

:It started with "The primary sources of floating point errors are alignment and normalization." Both are completely wrong. First, alignment (of the significands) is just for addition and subtraction, and it is just an implementation method of a behavior that has (most of the time) already been specified: correct rounding. Thus alignment has nothing to do with floating-point errors. Ditto for normalization. Moreover, in the context of IEEE 754-2008, a result can be normalized or not (for the decimal formats and non-interchange binary formats), but this is a Level 4 consideration, i.e. it does not affect the rounded value, thus does not affect the rounding error. In the past (before IEEE 754), important errors could come from the lack of normalization before doing an addition or subtraction, but this is the opposite of what you said: the errors were due to the lack of normalization in the implementation of the operation, not due to normalization. Anyway, that's the past. Then this section went on about alignment and normalization...
:The primary source of floating-point errors is actually the fact that most real numbers cannot be represented exactly and must be rounded. But this point has already been covered in the article. Then, the errors also depend on the algorithms: those used to implement the basic operations (but in practice, this is fixed by the correct rounding requirement such as for the arithmetic operations +, −, ×, /, √), and those that use these operations. Note also that there is already a section [[Floating-point arithmetic#Accuracy problems|Accuracy problems]] about these issues.
:[[User:Vincent Lefèvre|Vincent Lefèvre]] ([[User talk:Vincent Lefèvre|talk]]) 22:14, 19 April 2018 (UTC)
::Perhaps it would be better stated that the root cause of floating point error is alignment and normalization. Note that either alignment or normalization must delete possibly significant digits, then the value must be rounded or truncated, both of which introduce error.
:: Of course the '''reason''' there is floating point error is that real numbers, in general, cannot be represented without error. This does not address the cause: what actual operations inside the processor (or software algorithm) cause a floating-point representation of a real number to be incorrect?
:: Since you have not addressed my original arguments as posted on your talk page, I am reposting them here:
:::In your reason for this massive deletion, you explained "wrong in various ways." Specifically, how is it wrong? This is not a valid criterion for deletion. See [[WP:DEL-REASON]].
:::When you find errors in Wikipedia, the alternative is to correct the errors with citations. This edit was a good faith edit [[WP:GF]].
:::Even if it is "badly presented", that is not a reason for deletion. Again, see [[WP:DEL-REASON]].
:::And finally, "applied only to addition and subtraction (thus cannot be general)." Addition and subtraction are the major causes of floating point error. If you can make a case for adding other operations, such as multiplication, division, etc., then find a source that backs your position and add it to the article.
:::I will give you some time to respond, but without substantive justification for your position, I am going to revert your deletion based on the Wikipedia policies cited. The first alternative is to reach a [[consensus]]. I am willing to discuss your point of view.
::: ([[User talk:Softtest123|talk]]) 20:08, 19 April 2018 (UTC)
:: Because you have not responded specifically to these Wikipedia policies ([[WP:DEL-REASON]] and [[WP:GF]]), I am reverting the section. Please feel free to edit it to correct any errors you might see. I would refer you to experts on floating point such as [https://people.eecs.berkeley.edu/~wkahan/ Professor Kahan] and [http://www.itu.dk/~sestoft/bachelor/IEEE754_article.pdf David Goldberg].
::[[User:Softtest123|Softtest123]] ([[User talk:Softtest123|talk]]) 23:03, 24 April 2018 (UTC)
::: You might not know, but Vincent ''is'' one of those experts on floating point. ;-)
::: Nevertheless, it is always better to correct or rephrase sub-standard content instead of deleting it.
::: --[[User:Matthiaspaul|Matthiaspaul]] ([[User talk:Matthiaspaul|talk]]) 11:43, 16 August 2019 (UTC)
:::: {{ping|Softtest123|Matthiaspaul}} I think that this is more complex than you may think. The obvious cause of floating-point errors is that real numbers are not, in general, represented exactly in floating-point arithmetic. But if one wants to extend that, e.g. by mentioning solutions as what was expected with this section, this will necessarily go too far for this article. IMHO, a separate article would be needed, just like the recent [[Floating point error mitigation]], which should be improved and probably be renamed to "Numerical error mitigation". [[User:Vincent Lefèvre|Vincent Lefèvre]] ([[User talk:Vincent Lefèvre|talk]]) 14:46, 16 August 2019 (UTC)
::::: I agree that "...real numbers are not, in general, represented exactly in floating-point arithmetic", so the question is: how does that manifest itself in the algorithms, and consequently the hardware design? What is it in the features of these implementations that manifests the errors? As I have pointed out, rounding error occurs when the result of an arithmetic operation has more bits than can be represented in the mantissa of a floating point value. There are methods of minimizing the probability of the accumulation of rounding error; however, there is also cancellation error. Cancellation error occurs during normalization of subtraction when the operands are similar, and cancellation amplifies any accumulated rounding error exponentially [Higham, 1996, "Accuracy and Stability...", p. 11]. This is the material that I presented that was deleted.
:::::[[User:Softtest123|Softtest123]] ([[User talk:Softtest123|talk]]) 18:14, 16 August 2019 (UTC)
::::::Interestingly, it just so happens that this week I have been doing some engineering using my trusty SwissMicros DM42 calculator[https://www.swissmicros.com/index.php], which uses IEEE 754 quadruple-precision decimal floating point (~34 decimal digits, exponents from -6143 to +6144), and at the same time am writing code for a low-end microcontroller used in a toy using [[bfloat16 floating-point format|bfloat16]] (better for this application than IEEE 754 [[Half-precision floating-point format|binary16]], which I also use on some projects). You really have to watch for error accumulation at half precision. --[[User:Guy Macon|Guy Macon]] ([[User talk:Guy Macon|talk]]) 19:28, 16 August 2019 (UTC)
::::::The effect on the algorithms is various. Some algorithms (such as Malcolm's algorithm) are actually based on the rounding errors in order to work correctly. There is no short answer. Correct rounding is nowadays required in implementations of the FP basic operations; as long as this requirement is followed, the implementer has the choice of the hardware design. Cancellation is just the effect of subtracting two numbers that are close to each other; in this case, the subtraction operation itself is exact (assuming the same precision for all variables), and the normalization does not introduce any error. [[User:Vincent Lefèvre|Vincent Lefèvre]] ([[User talk:Vincent Lefèvre|talk]]) 20:13, 16 August 2019 (UTC)

== Schubfach is not WP:OR ==

I'm not quite sure why some of you consider Schubfach as WP:OR. Several implementations have been around for several years already; in particular, it was adopted into Nim's standard library a year ago and is working fine. It's true that the article has not been formally reviewed, but honestly, being published in a peer-reviewed conference or journal does not necessarily confer that much credit in this case. For example, one of the core parts (the minmax Euclid algorithm) of the paper on Ryu contains a serious error, and this has been pointed out by several people, including Nazedin (a core contributor to Schubfach) if I recall correctly.

The main reason why the Schubfach paper has not been published in a peer-reviewed journal, as far as I remember, is not that the work has not been verified, but simply that the author didn't see any benefit in going through all the paperwork for journal publishing (things like fitting into an artificial page limit). The reason why it is still not accepted in OpenJDK (is it? even if it's not merged yet, it will make it soon) is probably a lack of people who can and are willing to review the algorithm, and submitting the paper to a journal does not magically create such reviewers. (Of course they will do some amount of review, but it is very far from perfect, which is why things like the errors in the Ryu paper were not caught in the review process.)

The point is, Schubfach as an algorithm was completed a long time ago, around 2017 I believe, and at least two implementations (one in Java and one in C++) have been around at least since 2019. The C++ one has been adopted into the standard library of a fairly popular language (Nim), and you can find several more places where it has been adopted (Roblox, a very popular game in the US, for example). So what really is the difference from Ryu? The only difference I can tell is that Ryu has a peer-reviewed journal paper, but as I elaborated, that isn't that big a difference as far as I can tell. You also mentioned new versions of the paper, and I felt as if you think Schubfach is some sort of WIP project. If that's the case, then no: the new versions are just minor fixes and clarifications rather than big overhauls. If the Ryu paper had not been published in a journal, its author would probably have made the same kinds of revisions (and fixed the error mentioned).

In summary, I think at this point Schubfach is definitely an established work with no less credibility than Ryu and others. [[Special:Contributions/2600:1700:7C0A:1800:24DF:1B93:6E37:99D2|2600:1700:7C0A:1800:24DF:1B93:6E37:99D2]] ([[User talk:2600:1700:7C0A:1800:24DF:1B93:6E37:99D2|talk]]) 01:09, 10 November 2022 (UTC)
:In the meantime, I've learned by e-mail that the paper got a (possibly informal) review by serious people. So, OK to re-add it, but it is important to give references showing that it is used. And please give the latest version of the paper and avoid typos in the WP text. And instead of "Apparently", try to give facts (i.e., what is really meant by "apparently"). Thanks. — [[User:Vincent Lefèvre|Vincent Lefèvre]] ([[User talk:Vincent Lefèvre|talk]]) 01:26, 10 November 2022 (UTC)

== Digits of precision, a confusing early statement ==

I have removed the portion after the ellipses from the following text formerly found in the article:
"12.345 is a floating-point number in a base-ten representation with five digits of precision...However, 12.345 is not a floating-point number with five base-ten digits of precision." I recognize the distinction made (a number with 5 base-ten digits of precision vs. a base-ten representation of a number with five digits of precision) and I suspect the author intended to observe that a binary representation of 12.345 would not have five base-ten digits of precision, but I can't divine what useful thing is intended to have been communicated there, so I've removed it. If I'm missing something obvious in the interpretation of this line, I suspect many others could, and encourage a more direct explanation if it's replaced. [[User:Factorial|john factorial]] ([[User talk:Factorial|talk]]) 18:44, 24 July 2023 (UTC)

:The sentence was made nonsensical by this revision by someone who mistook 12.3456 for a typo rather than a counterexample: https://en.wikipedia.org/w/index.php?title=Floating-point_arithmetic&diff=prev&oldid=1166821013
:I have reverted the changes, and added a little more verbiage to emphasize that 12.3456 is a counterexample. [[User:Taylor Riastradh Campbell|Taylor Riastradh Campbell]] ([[User talk:Taylor Riastradh Campbell|talk]]) 20:56, 24 July 2023 (UTC)

== Computable reals ==

Concerning [[Special:Diff/1234874429]], I want to thank [[User:Vincent_Lefèvre]] for the fast response. I agree that mentioning [[real closed field]] is off-topic. However, I still have a strong impression that [[computable number|computable reals]] should be listed as a separate bullet. I believe it is different from [[symbolic computation]]. I mean that arithmetic operations are not “aware” of <math>\pi</math> being <math>\pi</math>. Should I just propose a new edit? [[User:Korektysta|Korektysta]] ([[User talk:Korektysta|talk]]) 20:50, 17 July 2024 (UTC)
:{{Mention|Korektysta}} Yes, but then, the first sentence of the section (before the list) should avoid the term "representing". It should rather talk about the arithmetic (which is some kind of representation and a way of working with it). BTW, I think that the list of alternatives to floating-point numbers should come later in the article, not in the first section. — [[User:Vincent Lefèvre|Vincent Lefèvre]] ([[User talk:Vincent Lefèvre|talk]]) 11:51, 20 July 2024 (UTC)
::I had to think for a moment, but I still believe that [[computable number|computable reals]] constitute a separate representation. As far as I remember, the CoRN library does not remember the computation tree, but real numbers are represented as functions.
::I agree that [[Floating-point_arithmetic#Alternatives_to_floating-point_numbers|the subsection]] could be moved. For example, from [[Floating-point_arithmetic#Overview|the overview]] to the end of the article, just before [[Floating-point_arithmetic#See_also|See also]] as a separate section. [[User:Korektysta|Korektysta]] ([[User talk:Korektysta|talk]]) 22:56, 1 August 2024 (UTC)
::Ah, OK. Effectively, the arithmetic builds the computation tree, but it is opaque to the user. I guess that the treatment of leaves in the tree is also different because there is no special constant for <math>\pi</math>; <math>\pi</math> is just another function. [[User:Korektysta|Korektysta]] ([[User talk:Korektysta|talk]]) 04:49, 2 August 2024 (UTC)

== Patriot missile incident ==

The other day I made an edit clarifying the nature of the Patriot missile incident, based on the public sources already cited. [[User:Vincent Lefèvre]] reverted two parts of it:

First, I replaced the link to [[loss of significance]] by the simpler word &lsquo;error&rsquo;, because [[loss of significance]] now just redirects to [[catastrophic cancellation]] since the old article was [https://en.wikipedia.org/w/index.php?title=Loss_of_significance&diff=prev&oldid=1107845106 deleted]. I was [https://en.wikipedia.org/wiki/Talk:Catastrophic_cancellation#Proposed_merge_of_Loss_of_significance_into_Catastrophic_cancellation loosely involved] in this deletion but I don't feel strongly about this; I think the term &lsquo;loss of significance&rsquo; is unnecessarily fancy without saying anything more than &lsquo;error&rsquo; does, but it's fine, and the error is essentially catastrophic cancellation after all.
 
Second, I added the text:

: The error arose not from the use of floating-point, but from the use of two different unit conversions when representing time in different parts of a calculation.

This text was [https://en.wikipedia.org/w/index.php?title=Floating-point_arithmetic&curid=11376&diff=1299610166&oldid=1299537597 deleted] on the grounds that:

: The intent is that there is a single time unit: 0.1s. The issue is that the software assumed that its accuracy did not matter; Skeel says: "this time difference should be in error by only 0.0001%, a truly insignificant amount". Something that may remain true... until a cancellation occurs like here.

But I don't think that is the whole story. The Skeel citation<ref name="Skeel">{{citation |url=https://www-users.cse.umn.edu/~arnold/disasters/Patriot-dharan-skeel-siam.pdf |title=Roundoff Error and the Patriot Missile |last=Skeel |first=Robert |journal=SIAM News |volume=25 |issue=4 |page=11 |date=July 1992 |access-date=2024-11-15}}</ref> says (emphasis added):

: When Patriot systems were brought into the Gulf conflict, the software was modified (several times) to cope with the high speed of ballistic missiles, for which the system was not originally designed. <p>At least one of these software modifications was the introduction of a subroutine for converting clock-time more accurately into floating-point. This calculation was needed in about half a dozen places in the program, but the call to the subroutine was not inserted at every point where it was needed. '''Hence, with a less accurate truncated time of one radar pulse being subtracted from a more accurate time of another radar pulse, the error no longer cancelled.'''

The designers certainly didn't assume that its accuracy did not matter&mdash;if they did assume that, why would they have written a new conversion subroutine for more accurate conversion?

Suppose the floating-point system on the control computer had 30-bit precision (a low estimate for a 48-bit floating-point format). The logic computed something like <math>C_1(t_1) - C_0(t_0)</math>, where <math>C_1(t)</math> is (say) the ''new'' higher-precision conversion from fixed-point to floating-point giving <math>0.1\times t\times(1 - 2^{-30})</math>, and <math>C_0(t)</math> is (say) the ''old'' lower-precision conversion giving <math>0.1\times t\times(1 - 2^{-20})</math>. There may be an additional ''floating-point rounding error'' of about one ulp, but that pales in comparison to the ''discrepancy between conversion subroutines'' of about <math>2^{10} \approx 1000</math> ulps in this hypothesis of 30-bit precision (if it were 40-bit precision, then it would be <math>2^{20} \approx 10^6</math> ulps, and so on).

In brief, this was a much more mundane software engineering mistake&mdash;updating a unit conversion subroutine call in one place but not another, so the units are no longer commensurate&mdash;rather than anything you can rightly blame floating-point for.

It's possible that, after long enough uptime, computing <math>C(t_1) - C(t_0)</math> rather than <math>C(t_1 - t_0)</math> with the ''same'' conversion subroutine <math>C</math> ''could'' lose enough significant bits due to floating-point rounding error to cause the same problem. But in this case, the problem was using ''different'' conversion subroutines <math>C_1</math> and <math>C_0</math>. And, with at least 30-bit precision, the floating-point rounding error would take a thousand times as long to cause the same problem&mdash;over twenty thousand hours before a problem, or about two years and four months of continuous uptime. (I would also guess the format has >30 bits of precision, so it's likely much longer than that.)

This cautionary tale is often used to blame the designers for using floating-point to represent time and to argue that floating-point numbers are incomprehensible black magic where reasoning goes out the window (e.g., on [https://news.ycombinator.com/item?id=1667060 Hacker News] and [https://old.reddit.com/r/programming/comments/6npfz/the_patriot_missile_failure/ Reddit]), even though the underlying story justifies neither of these conclusions. So that's why I think it is important to spell out the actual bug here&mdash;incomplete software change caused subtraction of incommensurate (but similar) units. [[User:Taylor Riastradh Campbell|Taylor Riastradh Campbell]] ([[User talk:Taylor Riastradh Campbell|talk]]) 04:16, 17 July 2025 (UTC)
:{{Re|Taylor Riastradh Campbell}}
:Just saying "error" would be misleading because in general, one has an error at almost each floating-point operation, and this is often not a major issue (with carefully designed code). What matters here is that the (relative) error is very large due to a [[catastrophic cancellation]] as described in the document.
:Saying that there are "two different unit conversions" is incorrect, as the time unit is the same in both routines (0.1s), contrary to the [[Mars Climate Orbiter#Cause of failure|Mars Climate Orbiter failure]], where the pound-force second and newton-second units were mixed up (so, even with an infinite precision, the failure would still have occurred); the issue here is that there are different approximations in the time calculation, i.e. with different accuracy (see the term "accurate" used by Skeel). This is really related to error analysis (with an infinite precision, there would have been no issues). — [[User:Vincent Lefèvre|Vincent Lefèvre]] ([[User talk:Vincent Lefèvre|talk]]) 12:30, 17 July 2025 (UTC)
::{{Quote|Just saying "error" would be misleading because in general, one has an error at almost each floating-point operation, and this is often not a major issue (with carefully designed code). What matters here is that the (relative) error is very large due to a catastrophic cancellation as described in the document.}}
::Right, I have no objection to your reverting that part of the change. I was just explaining why I changed the text/link &lsquo;loss of significance&rsquo; in the first place.
::{{Quote|Saying that there are "two different unit conversions" is incorrect, as the time unit is the same in both routines (0.1s)&hellip;the issue here is that there are different approximations in the time calculation, i.e. with different accuracy (see the term "accurate" used by Skeel).}}
::I'm not attached to phrasing it in terms of unit conversions (though I think a better analogy is yards/meters, which differ by about 9%, or quarts/liters, which differ by about 6%, rather than pounds/newtons, which differ by a factor of four). The part that is important to emphasize is the error from subtracting different approximations to the time conversion&mdash;because of an incomplete software change that didn't update all the subroutine calls&mdash;rather than the error from using floating-point.
::Had the control computer used ''either'' <math>C_0(t_1) - C_0(t_0)</math> ''or'' <math>C_1(t_1) - C_1(t_0)</math> ''consistently'', whether the older and worse approximation or the newer and better approximation, the incident likely wouldn't have happened even though it used floating-point (until several years of uptime, rather than several dozen hours of uptime). And subtracting different conversion approximations <math>C_0</math> and <math>C_1</math> even in exclusively fixed-point arithmetic, or infinite-precision arithmetic, would also have caused the same error. [[User:Taylor Riastradh Campbell|Taylor Riastradh Campbell]] ([[User talk:Taylor Riastradh Campbell|talk]]) 13:13, 17 July 2025 (UTC)
:::The old code had to be rewritten to use a more accurate time computation ''to cope with the high speed of ballistic missiles''. So, anyway, even the old code, which used a ''consistent'' conversion, was globally not accurate enough. And you do not necessarily need to use the same conversion; two different conversion routines with sufficient accuracy would be enough. With infinite precision, you would have <math>C_0 = C_1</math>, so there would be no errors. — [[User:Vincent Lefèvre|Vincent Lefèvre]] ([[User talk:Vincent Lefèvre|talk]]) 13:46, 17 July 2025 (UTC)

== Fastfloat16? ==

[ https://www.analog.com/media/en/technical-documentation/application-notes/EE.185.Rev.4.08.07.pdf ]
Is this a separate floating point format or another name for an existing format? --[[User:Guy Macon|Guy Macon]] ([[User talk:Guy Macon|talk]]) 11:32, 20 September 2020 (UTC)
:Same question for [ http://people.ece.cornell.edu/land/courses/ece4760/Math/Floating_point/ ]. Somebody just added both to our [[Minifloat]] article. --[[User:Guy Macon|Guy Macon]] ([[User talk:Guy Macon|talk]]) 11:37, 20 September 2020 (UTC)
::As the title of the first document says: ''Fast Floating-Point Arithmetic Emulation on Blackfin® Processors''. So, these are formats convenient for a software implementation of floating point ("software implementation" rather than "emulation", as they don't try to emulate anything since they have their own arithmetic, without correct rounding). The shorter of the two formats has a 16-bit exponent and a 16-bit significand (including the sign). Thus that's a 32-bit format. Definitely not minifloat. And the goal (according to the provided algorithms) is not to emulate minifloat formats either (contrary to what I have done with [https://www.vinc17.net/research/sipe/ ''Sipe''], where I use a large format for a software emulation of minifloat formats). In the second document, this is a 24-bit format with a 16-bit significand, so I would not say that this is a minifloat either. — [[User:Vincent Lefèvre|Vincent Lefèvre]] ([[User talk:Vincent Lefèvre|talk]]) 16:23, 20 September 2020 (UTC)
:::Thanks! That was my conclusion as well, but I wanted someone else to look at it in case I was missing something. As an embedded systems engineer working in the toy industry I occasionally '''use''' things like minifloat and brainfloat, but I am certainly not an expert. I fixed the minifloat article. --[[User:Guy Macon|Guy Macon]] ([[User talk:Guy Macon|talk]]) 17:50, 20 September 2020 (UTC)

== imprecise info about imprecision of tan(gens) ==

imho that: 'Also, the non-representability of π (and π/2) means that an attempted computation of tan(π/2) will not yield a result of infinity' is misleading, as it's more a problem of the cos approximation not yielding '0' for pi()/2; if you replace cos(x) with sin(x-pi()/2) for that range, you get a nice #DIV/0! for tan(pi()/2),

as well, sin(pi()) not resulting in '0' can be corrected by replacing sin(x) with -sin(x-pi()) for that range,

not sure if it holds, but if you reduce all trig. calculations to the numerical values of sin in the first quadrant - which imho is possible - the results may come out quite fine ... greatly neglected by calc, ex$el and others ... <!-- Template:Unsigned IP --><small class="autosigned">—&nbsp;Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/77.0.177.112|77.0.177.112]] ([[User talk:77.0.177.112#top|talk]]) 01:26, 11 March 2021 (UTC)</small> <!--Autosigned by SineBot-->
:No, the tan floating-point function has nothing to do with the cos floating-point function. [[User:Vincent Lefèvre|Vincent Lefèvre]] ([[User talk:Vincent Lefèvre|talk]]) 11:32, 11 March 2021 (UTC)
:hello @Vincent, sorry for objecting ... imho (school math) and acc. to wikipedia (https://en.wikipedia.org/wiki/Trigonometric_functions, esp. 'Summary of relationships between trigonometric functions' there) "tan(x) = sin(x) / cos(x)", once you get a proper cos at pi()/2 [use sin(pi()/2-x), same reference], you can calculate a proper tan with overflow (#DIV/0! in 'calc'),
:
:perhaps it won't work 'in IEEE' (then it's a weakness there), but developers or users can achieve proper results once they have proper sin values for the first quadrant,
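Editorial note: the non-representability point debated in this last thread can be checked directly in IEEE 754 binary64 arithmetic. Below is a minimal sketch in Python (not part of the discussion above; the exact digits printed depend on the platform's libm, so only the orders of magnitude matter):

```python
import math

# math.pi/2 is the binary64 double nearest to pi/2; it differs from the
# true pi/2 by roughly 6.1e-17, so tan() of it is huge but finite.
x = math.pi / 2
print(math.tan(x))              # huge (around 1.6e16 on typical platforms)
print(math.isinf(math.tan(x)))  # False: no infinity, as the article states

# cos() of that same double is the tiny residual, not exactly zero,
# and sin(math.pi) is likewise a tiny nonzero value:
print(math.cos(x))              # around 6.1e-17, not 0.0
print(math.sin(math.pi))        # around 1.2e-16, not 0.0

# The rewriting suggested in the thread, cos(x) ~ sin(pi/2 - x), yields
# exactly zero at x = math.pi/2 only because the *argument* pi/2 - x is
# computed as exactly 0.0 in floating point; dividing by it would then
# raise a division error rather than return the true tan near pi/2.
print(math.sin(math.pi / 2 - x))  # exactly 0.0
```

This supports the reply above: the behaviour of tan() near π/2 is determined by the rounded argument it receives, not by how (or whether) a cos() routine is involved.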