Decimal64 floating-point format

 
The binary format of the same bit size supports a range from the subnormal minimum {{gaps|±5|e=-324}}, through the smallest normal value with full 53-bit precision {{gaps|±2.225|073|858|507|201|4|e=-308}}, up to the maximum {{gaps|±1.797|693|134|862|315|7|e=+308}}.
 
== Performance ==
Performance comparisons on modern computer systems are imprecise for various reasons. Roughly, in a current 64-bit Intel / Linux / GCC / libdfp / BID implementation, basic arithmetic operations on decimal64 values are between 2 and 15 times slower than on binary64 data types, while 'higher' functions suffer greater penalties: powers by a factor of roughly 600 and trigonometric functions such as tangent by a factor of roughly 10&thinsp;000. Contributions to the [https://gcc.gnu.org/contribute.html GNU GCC project] and the [https://github.com/libdfp/libdfp/issues?q=is%3Aissue libdfp project on GitHub] may help to improve this.
 
== Representation / encoding of decimal64 values ==
decimal64 values are represented in a 'not normalized' format close to scientific notation, in which some bits of the exponent are combined with the leading bits of the significand in a 'combination field'.