josephkk
Guest
On Wed, 23 Apr 2014 21:08:38 -0400, Phil Hobbs <hobbs@electrooptical.net>
wrote:
wrote:
That is not a problem with Fortran; rather, the problem was that the
hardware platforms behaved differently. The IEEE floating point
standard helped to clear out some of this mess.
The denormal problem was _introduced_ by IEEE floating point, iirc.
Cheers
Phil Hobbs
IEEE 854 formalized the terminology, but underflow was already being dealt
with in DEC VAXes and likely IBM 370s.
Denormals are intended as a way to handle underflow gracefully, rather
than brutally setting all such numbers to zero like an oppressive white
male. (But I repeat myself.)
Normally floats are implemented as a binary fraction (significand) and
an exponent, which is usually base-2 but sometimes (as in the old IBM
format) base-16.
Base 16 is just a compaction of binary for easier readability. The real
difference it made is that the exponent was base 16 as well, so the leading
hex digit of a normalized fraction could be anything from 1 to F. They have
also used the same hardware to implement BCD floating point since the 360
series.
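For anyone who wants to poke at that format, here's a rough C sketch of how
the S/360 short hex-float word decodes; the function name and test words are
my own illustration, not anything out of IBM documentation:

#include <stdio.h>
#include <stdint.h>
#include <math.h>

/* Decode an S/360-style single-precision hex float: bit 31 = sign,
   bits 30-24 = exponent (excess 64, power of 16), bits 23-0 = fraction
   (six hex digits, radix point to the left of the first one). */
static double hexfloat_short(uint32_t w)
{
    int    sign = (w >> 31) & 1;
    int    e16  = (int)((w >> 24) & 0x7F) - 64;          /* base-16 exponent */
    double frac = (double)(w & 0x00FFFFFF) / 16777216.0; /* fraction / 2^24  */
    double val  = ldexp(frac, 4 * e16);                  /* times 16^e16     */
    return sign ? -val : val;
}

int main(void)
{
    /* 0x41100000: leading hex digit 1, value (1/16) * 16 = 1.0   */
    /* 0x41F00000: leading hex digit F, value (15/16) * 16 = 15.0 */
    printf("%g %g\n", hexfloat_short(0x41100000u), hexfloat_short(0x41F00000u));
    return 0;
}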
If the exponent is binary, the leading bit of the significand is always
a 1, unless the number is identically zero. Thus IEEE and various other
base-2 formats take advantage of this free bit and don't bother storing
it, which gives them a factor of 2 increase in precision in ordinary
circumstances.
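To make the hidden-bit trick concrete, here's a small C sketch (assumes
IEEE-754 binary32 and a normal, nonzero input; the names are mine) that
splits out the stored fields and puts the implicit leading 1 back by hand:

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <math.h>

/* Rebuild a binary32 value from its stored fields, restoring the implicit
   leading 1 that the format leaves out.  Only meant for normal inputs. */
static double rebuild_binary32(float f)
{
    uint32_t w;
    memcpy(&w, &f, sizeof w);                     /* raw bit pattern      */
    int    sign = (w >> 31) & 1;
    int    e2   = (int)((w >> 23) & 0xFF) - 127;  /* exponent, bias 127   */
    double sig  = 1.0 + (double)(w & 0x007FFFFF) / 8388608.0; /* "1.f"    */
    double val  = ldexp(sig, e2);
    return sign ? -val : val;
}

int main(void)
{
    float x = 6.5f;   /* 1.625 * 2^2: only ".625" is stored, the 1 is free */
    printf("%g -> %g\n", x, rebuild_binary32(x));
    return 0;
}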
However, when the exponent reaches its maximum negative value, there's
no room left to normalize. In order to make the accuracy of computations
degrade gracefully in that situation, i.e. to keep such numbers
distinguishable from zero, the IEEE specification allows "denormalized
numbers": those in which the leading bit of the significand is not 1.
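You can watch the graceful degradation happen with something like this
(a quick sketch, assuming IEEE single precision with denormals enabled,
i.e. no flush-to-zero):

#include <stdio.h>
#include <float.h>
#include <math.h>

int main(void)
{
    /* Keep halving the smallest normal float.  With gradual underflow the
       results fall into the subnormal range and stay distinguishable from
       zero for another couple dozen steps instead of vanishing at once. */
    float x = FLT_MIN;               /* smallest normal, about 1.18e-38 */
    int   steps = 0;
    while (x != 0.0f) {
        x /= 2.0f;
        steps++;
        if (steps <= 3)
            printf("step %d: %g (%s)\n", steps, x,
                   fpclassify(x) == FP_SUBNORMAL ? "subnormal" : "not subnormal");
    }
    printf("reached zero after %d halvings below FLT_MIN\n", steps);
    return 0;
}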
The problem is that denormals are considered so rare that AFAIK FPU
designers don't implement them in silicon, but rather in microcode.
Hence the speed problem.
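If you want to see the penalty on your own machine, a crude timing loop along
these lines usually shows it. The numbers vary a lot by CPU and compiler, and
building with flush-to-zero / denormals-are-zero enabled (e.g. what
-ffast-math turns on for x86) makes both the gap and the denormals disappear:

#include <stdio.h>
#include <float.h>
#include <time.h>

/* Time the same multiply-add loop with normal and with subnormal operands.
   The subnormal case converges to 2*seed, which stays subnormal, so every
   iteration takes the slow path on hardware that punts denormals to a
   microcode or trap assist. */
static double run(float seed, float scale)
{
    volatile float acc = seed;       /* volatile keeps the loop honest */
    clock_t t0 = clock();
    for (long i = 0; i < 20000000L; i++)
        acc = acc * scale + seed;
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

int main(void)
{
    printf("normal    operands: %.3f s\n", run(1.0f, 0.5f));
    printf("subnormal operands: %.3f s\n", run(FLT_MIN / 4.0f, 0.5f));
    return 0;
}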
Cheers
Phil Hobbs