Driver to drive?

On 18/04/14 04:09, Jim Thompson wrote:
On Thu, 17 Apr 2014 12:14:15 +1000, Clifford Heath
<no.spam@please.net> wrote:
SA612/SA602. There are models around, but I have no way of knowing how
good they are.
Nothing on the data sheet tells me the voltage on pins 1 and 2, just
shows a block called "bias".
Can you provide those numbers?

No, but if it's a clue, I've seen it used as a place to control the
gain. In particular, the G3ZOI 2m ARDF receiver allows these pins to
rise more than one diode drop above ground to reduce gain. (I think,
without checking the TA7613AP data sheet).

<http://www.open-circuit.co.uk/download/rox2t-v3-dia.pdf>

Does that help?

Oh, and the BF904R was used in HF for low IP3, but wasn't the
exceptional transistor - I was thinking of the 3SK299, along with
similar obsolete devices like the NE25139 and NE25339, all of which only
exist as counterfeits, it seems.

I'd still be interested in how the BF904R works.
 
On 22/04/2014 14:07, Phil Hobbs wrote:
On 04/22/2014 03:40 AM, Martin Brown wrote:

The only thing C had going for it over FORTRAN was that the indexing of
FFTs was a lot more natural with 0 based arrays instead of 1 based.


And no COMMON blocks, DATA cards, computed GOTOs, arithmetic IFs, .....

I learned Fortran in the late 70s attempting to debug a radiative
transfer astrophysics code that I didn't understand very well at all. I
sure don't miss it.

FORTRAN wasn't *all* that bad. I also worked on speeding up a fluid in
cell code for relativistic plasma beaming in radio galaxies back then.
It helped a lot that I understood how to prevent the exception handling
from going crazy when computing with denormalised numbers.

The problem as originally written was scaled in SI units as I recall and
included many terms which were something vaguely like k*G/c^2.
Benchmarking it showed that 80% of the time was spent in exception
handling for some values that could be safely set to zero!

For a short while I played with RATFOR (by Brian Kernighan)
http://en.wikipedia.org/wiki/Ratfor

And for non numerical stuff BCPL (by Martin Richards) a predecessor of B
which later gave rise to C. I still miss BCPL's tagged $). Allegedly it
is that language that started the canonical "HELLO WORLD" craze.

http://www.catb.org/~esr/jargon/html/B/BCPL.html

Its claim to fame was as an incredibly small portable compiler.

Mostly it was F77 on everything from the humble Z80 to a Cray-XMP. We
always learnt something new porting "working" code to a new machine.

I also used some very early computer algebra packages around the same
time, Camal and REDUCE. They could be made to output FORTRAN code too.

One thing about FORTRAN I don't miss is continuation cards. A mistake
in one of the computer algebra programs resulted in VSOP82 getting the
light travel time wrong near Jupiter - a flaw that was detected
observationally ~1984 when the binary pulsar got close enough to
Jupiter for this to matter.

--
Regards,
Martin Brown
 
On 04/23/2014 09:22 AM, Martin Brown wrote:
On 22/04/2014 14:07, Phil Hobbs wrote:
On 04/22/2014 03:40 AM, Martin Brown wrote:

The only thing C had going for it over FORTRAN was that the indexing of
FFTs was a lot more natural with 0 based arrays instead of 1 based.


And no COMMON blocks, DATA cards, computed GOTOs, arithmetic IFs, .....

I learned Fortran in the late 70s attempting to debug a radiative
transfer astrophysics code that I didn't understand very well at all. I
sure don't miss it.

FORTRAN wasn't *all* that bad. I also worked on speeding up a fluid in
cell code for relativistic plasma beaming in radio galaxies back then.
It helped a lot that I understood how to prevent the exception handling
from going crazy when computing with denormalised numbers.

The problem as originally written was scaled in SI units as I recall and
included many terms which were something vaguely like k*G/c^2.
Benchmarking it showed that 80% of the time was spent in exception
handling for some values that could be safely set to zero!

Been there.

Denormalized numbers are usually handled in microcode, which is
slooooowwwwwww. My 3D EM simulator used to slow way down after a few
minutes' run time, as a tiny leading field, due entirely to roundoff
error, filled the space with denormals. Once I figured out the cause,
all it took was a compile option to set denormals to zero.
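
For the record, roughly what that compile option arranges can also be done by
hand on x86. A minimal sketch (the function name is mine, and it assumes
SSE/SSE3 intrinsics are available):

/* Minimal sketch (x86 with SSE2/SSE3 assumed): force denormal results
 * and operands to zero so the FPU never takes the slow assist path. */
#include <xmmintrin.h>   /* _MM_SET_FLUSH_ZERO_MODE */
#include <pmmintrin.h>   /* _MM_SET_DENORMALS_ZERO_MODE */

static void enable_flush_to_zero(void)
{
    /* Results that would underflow to a denormal are stored as 0.0. */
    _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
    /* Denormal operands read from memory are treated as 0.0. */
    _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);
}

On x86 GCC, options like -ffast-math normally set these two bits for you at
program startup.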

For a short while I played with RATFOR (by Brian Kernighan)
http://en.wikipedia.org/wiki/Ratfor

And for non numerical stuff BCPL (by Martin Richards) a predecessor of B
which later gave rise to C. I still miss BCPL's tagged $). Allegedly it
is that language that started the canonical "HELLO WORLD" craze.

http://www.catb.org/~esr/jargon/html/B/BCPL.html

Its claim to fame was as an incredibly small portable compiler.

Mostly it was F77 on everything from the humble Z80 to a Cray-XMP. We
always learnt something new porting "working" code to a new machine.

I also used some very early computer algebra packages around the same
time, Camal and REDUCE. They could be made to output FORTRAN code too.

One thing about FORTRAN I don't miss is continuation cards. A mistake
in one of the computer algebra programs resulted in VSOP82 getting the
light travel time wrong near Jupiter - a flaw that was detected
observationally ~1984 when the binary pulsar got close enough to
Jupiter for this to matter.

Talk about remote debugging!

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics

160 North State Road #203
Briarcliff Manor NY 10510

hobbs at electrooptical dot net
http://electrooptical.net
 
On Wed, 23 Apr 2014 14:22:46 +0100, Martin Brown
<|||newspam|||@nezumi.demon.co.uk> wrote:

On 22/04/2014 14:07, Phil Hobbs wrote:
On 04/22/2014 03:40 AM, Martin Brown wrote:

The only thing C had going for it over FORTRAN was that the indexing of
FFTs was a lot more natural with 0 based arrays instead of 1 based.


And no COMMON blocks, DATA cards, computed GOTOs, arithmetic IFs, .....

I learned Fortran in the late 70s attempting to debug a radiative
transfer astrophysics code that I didn't understand very well at all. I
sure don't miss it.

FORTRAN wasn't *all* that bad. I also worked on speeding up a fluid in
cell code for relativistic plasma beaming in radio galaxies back then.
It helped a lot that I understood how to prevent the exception handling
from going crazy when computing with denormalised numbers.

The problem as originally written was scaled in SI units as I recall and
included many terms which were something vaguely like k*G/c^2.
Benchmarking it showed that 80% of the time was spent in exception
handling for some values that could be safely set to zero!

That is not a problem with Fortran; rather, the problem was that the
hardware platforms behaved differently. The IEEE floating point
standard helped to clear out some of this mess.
 
On 04/23/2014 11:53 AM, upsidedown@downunder.com wrote:
On Wed, 23 Apr 2014 14:22:46 +0100, Martin Brown
<|||newspam|||@nezumi.demon.co.uk> wrote:

On 22/04/2014 14:07, Phil Hobbs wrote:
On 04/22/2014 03:40 AM, Martin Brown wrote:

The only thing C had going for it over FORTRAN was that the indexing of
FFTs was a lot more natural with 0 based arrays instead of 1 based.


And no COMMON blocks, DATA cards, computed GOTOs, arithmetic IFs, .....

I learned Fortran in the late 70s attempting to debug a radiative
transfer astrophysics code that I didn't understand very well at all. I
sure don't miss it.

FORTRAN wasn't *all* that bad. I also worked on speeding up a fluid in
cell code for relativistic plasma beaming in radio galaxies back then.
It helped a lot that I understood how to prevent the exception handling
from going crazy when computing with denormalised numbers.

The problem as originally written was scaled in SI units as I recall and
included many terms which were something vaguely like k*G/c^2.
Benchmarking it showed that 80% of the time was spent in exception
handling for some values that could be safely set to zero!

That is not a problem with Fortran, rather the problem was that the
hardware platforms behave differently. The IEEE floating point
standard helped to clear out some of this mess.

The denormal problem was _introduced_ by IEEE floating point, iirc.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics

160 North State Road #203
Briarcliff Manor NY 10510

hobbs at electrooptical dot net
http://electrooptical.net
 
On 4/23/2014 10:06 PM, josephkk wrote:
On Wed, 23 Apr 2014 17:58:26 -0400, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

On 04/23/2014 11:53 AM, upsidedown@downunder.com wrote:
On Wed, 23 Apr 2014 14:22:46 +0100, Martin Brown
<|||newspam|||@nezumi.demon.co.uk> wrote:

On 22/04/2014 14:07, Phil Hobbs wrote:
On 04/22/2014 03:40 AM, Martin Brown wrote:

The only thing C had going for it over FORTRAN was that the indexing of
FFTs was a lot more natural with 0 based arrays instead of 1 based.


And no COMMON blocks, DATA cards, computed GOTOs, arithmetic IFs, .....

I learned Fortran in the late 70s attempting to debug a radiative
transfer astrophysics code that I didn't understand very well at all. I
sure don't miss it.

FORTRAN wasn't *all* that bad. I also worked on speeding up a fluid in
cell code for relativistic plasma beaming in radio galaxies back then.
It helped a lot that I understood how to prevent the exception handling
from going crazy when computing with denormalised numbers.

The problem as originally written was scaled in SI units as I recall and
included many terms which were something vaguely like k*G/c^2.
Benchmarking it showed that 80% of the time was spent in exception
handling for some values that could be safely set to zero!

That is not a problem with Fortran, rather the problem was that the
hardware platforms behave differently. The IEEE floating point
standard helped to clear out some of this mess.


The denormal problem was _introduced_ by IEEE floating point, iirc.

Cheers

Phil Hobbs

IEEE 854 formalized the terminology, but underflow was already being dealt
with in DEC VAXes and likely IBM 370s.

Denormals are intended as a way to handle underflow gracefully, rather
than brutally setting all such numbers to zero like an oppressive white
male. (But I repeat myself.)

Normally floats are implemented as a binary fraction (significand) and
an exponent, which is usually base-2 but sometimes (as in the old IBM
format) base-16.

If the exponent is binary, the leading bit of the significand is always
a 1, unless the number is identically zero. Thus IEEE and various other
base-2 formats take advantage of this free bit and don't bother storing
it, which gives them a factor of 2 increase in precision in ordinary
circumstances.

However, when the exponent reaches its maximum negative value, there's
no room left. In order to make the accuracy of computations degrade
gracefully in that situation, i.e. to make such a number distinguishable
from zero, the IEEE specification allows "denormalized numbers", i.e.
those in which the leading bit of the significand is not 1.

The problem is that denormals are considered so rare that AFAIK FPU
designers don't implement them in silicon, but rather in microcode.
Hence the speed problem.
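
To make that concrete, here is a tiny C sketch (purely illustrative) that
walks a value below the smallest normalized double and shows it being
classified as subnormal:

#include <stdio.h>
#include <math.h>    /* fpclassify, FP_NORMAL, FP_SUBNORMAL */
#include <float.h>   /* DBL_MIN */

int main(void)
{
    double x = DBL_MIN;            /* smallest normalized double, ~2.2e-308 */
    printf("%d\n", fpclassify(x) == FP_NORMAL);    /* prints 1 */

    x *= 0.5;                      /* below DBL_MIN: the hidden bit is gone */
    printf("%d\n", fpclassify(x) == FP_SUBNORMAL); /* prints 1 */
    printf("%g\n", x);             /* still nonzero, but less precise */

    /* Arithmetic on x now typically drops to the microcode/assist path,
     * which is where the big slowdowns come from. */
    return 0;
}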

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics

160 North State Road #203
Briarcliff Manor NY 10510

hobbs at electrooptical dot net
http://electrooptical.net
 
On Wed, 23 Apr 2014 17:58:26 -0400, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

On 04/23/2014 11:53 AM, upsidedown@downunder.com wrote:
On Wed, 23 Apr 2014 14:22:46 +0100, Martin Brown
<|||newspam|||@nezumi.demon.co.uk> wrote:

On 22/04/2014 14:07, Phil Hobbs wrote:
On 04/22/2014 03:40 AM, Martin Brown wrote:

The only thing C had going for it over FORTRAN was that the indexing of
FFTs was a lot more natural with 0 based arrays instead of 1 based.


And no COMMON blocks, DATA cards, computed GOTOs, arithmetic IFs, ......

I learned Fortran in the late 70s attempting to debug a radiative
transfer astrophysics code that I didn't understand very well at all. I
sure don't miss it.

FORTRAN wasn't *all* that bad. I also worked on speeding up a fluid in
cell code for relativistic plasma beaming in radio galaxies back then.
It helped a lot that I understood how to prevent the exception handling
from going crazy when computing with denormalised numbers.

The problem as originally written was scaled in SI units as I recall and
included many terms which were something vaguely like k*G/c^2.
Benchmarking it showed that 80% of the time was spent in exception
handling for some values that could be safely set to zero!

That is not a problem with Fortran, rather the problem was that the
hardware platforms behave differently. The IEEE floating point
standard helped to clear out some of this mess.


The denormal problem was _introduced_ by IEEE floating point, iirc.

Cheers

Phil Hobbs

IEEE 854 formalized the terminology, but underflow was already being dealt
with in DEC VAXes and likely IBM 370s.

?-)
 
On Wed, 23 Apr 2014 18:53:56 +0300, upsidedown@downunder.com wrote:

On Wed, 23 Apr 2014 14:22:46 +0100, Martin Brown
<|||newspam|||@nezumi.demon.co.uk> wrote:

On 22/04/2014 14:07, Phil Hobbs wrote:
On 04/22/2014 03:40 AM, Martin Brown wrote:

The only thing C had going for it over FORTRAN was that the indexing of
FFTs was a lot more natural with 0 based arrays instead of 1 based.


And no COMMON blocks, DATA cards, computed GOTOs, arithmetic IFs, ......

I learned Fortran in the late 70s attempting to debug a radiative
transfer astrophysics code that I didn't understand very well at all. I
sure don't miss it.

FORTRAN wasn't *all* that bad. I also worked on speeding up a fluid in
cell code for relativistic plasma beaming in radio galaxies back then.
It helped a lot that I understood how to prevent the exception handling
from going crazy when computing with denormalised numbers.

The problem as originally written was scaled in SI units as I recall and
included many terms which were something vaguely like k*G/c^2.
Benchmarking it showed that 80% of the time was spent in exception
handling for some values that could be safely set to zero!

That is not a problem with Fortran, rather the problem was that the
hardware platforms behave differently. The IEEE floating point
standard helped to clear out some of this mess.

Yes and no. Current IBM mainframes have to support 5 different systems of
floating point and IEEE is just one of them.

?-)
 
On Wed, 23 Apr 2014 21:08:38 -0400, Phil Hobbs
<hobbs@electrooptical.net> wrote:

On 4/23/2014 10:06 PM, josephkk wrote:
On Wed, 23 Apr 2014 17:58:26 -0400, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

On 04/23/2014 11:53 AM, upsidedown@downunder.com wrote:
On Wed, 23 Apr 2014 14:22:46 +0100, Martin Brown
<|||newspam|||@nezumi.demon.co.uk> wrote:

On 22/04/2014 14:07, Phil Hobbs wrote:
On 04/22/2014 03:40 AM, Martin Brown wrote:

The only thing C had going for it over FORTRAN was that the indexing of
FFTs was a lot more natural with 0 based arrays instead of 1 based.


And no COMMON blocks, DATA cards, computed GOTOs, arithmetic IFs, .....

I learned Fortran in the late 70s attempting to debug a radiative
transfer astrophysics code that I didn't understand very well at all. I
sure don't miss it.

FORTRAN wasn't *all* that bad. I also worked on speeding up a fluid in
cell code for relativistic plasma beaming in radio galaxies back then.
It helped a lot that I understood how to prevent the exception handling
from going crazy when computing with denormalised numbers.

The problem as originally written was scaled in SI units as I recall and
included many terms which were something vaguely like k*G/c^2.
Benchmarking it showed that 80% of the time was spent in exception
handling for some values that could be safely set to zero!

That is not a problem with Fortran, rather the problem was that the
hardware platforms behave differently. The IEEE floating point
standard helped to clear out some of this mess.


The denormal problem was _introduced_ by IEEE floating point, iirc.

Cheers

Phil Hobbs

IEEE 854 formalized the terminology, but underflow was already being dealt
with in DEC VAXes and likely IBM 370s.

Denormals are intended as a way to handle underflow gracefully, rather
than brutally setting all such numbers to zero like an oppressive white
male. (But I repeat myself.)

Normally floats are implemented as a binary fraction (significand) and
an exponent, which is usually base-2 but sometimes (as in the old IBM
format) base-16.

If the exponent is binary, the leading bit of the significand is always
a 1, unless the number is identically zero. Thus IEEE and various other
base-2 formats take advantage of this free bit and don't bother storing
it, which gives them a factor of 2 increase in precision in ordinary
circumstances.

However, when the exponent reaches its maximum negative value, there's
no room left. In order to make the accuracy of computations degrade
gracefully in that situation, i.e. to make such a number distinguishable
from zero, the IEEE specification allows "denormalized numbers", i.e.
those in which the leading bit of the significand is not 1.

The problem is that denormals are considered so rare that AFAIK FPU
designers don't implement them in silicon, but rather in microcode.
Hence the speed problem.

Cheers

Phil Hobbs

Which decade are you talking about?

For instance, in the x87 based FP units the evaluation stack is 80
bits wide. To enter a 32 bit float or 64 bit double for calculations,
it is first pushed onto the 80 bit wide evaluation stack. The floating
point expressions are evaluated on the 80 bit stack and the final
expression result is popped off the stack and stored as a 32/64 bit IEEE
value. I see no reason why the hidden bit would cause any problems
in pushing or popping values to/from the stack.
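
As a quick illustration (mine, and it assumes an x87-style long double, which
not every ABI provides), the <float.h> limits show the 80 bit format's
explicit 64 bit significand alongside the hidden-bit float and double formats:

#include <stdio.h>
#include <float.h>

int main(void)
{
    /* On x87 targets long double is the 80 bit extended format, which
     * stores its leading significand bit explicitly; float and double
     * rely on the hidden bit. */
    printf("float:       %d significand bits\n", FLT_MANT_DIG);   /* 24 */
    printf("double:      %d significand bits\n", DBL_MANT_DIG);   /* 53 */
    printf("long double: %d significand bits\n", LDBL_MANT_DIG);  /* 64 on x87 */
    return 0;
}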

In more traditional architectures you will still have to have an
extra bit, so that when there is an overflow in a float add/sub the
mantissa must be shifted right and the exponent incremented.

Your description of denormals in microcode/software might apply to an
established manufacturer that adapts its existing FP-processor board
for IEEE with only small alterations (e.g. changing the offset in the
exponent). Adding the denormalization logic might have required more
extensive PCB alterations, so it would be tempting to use
emulation for denorms.
 
On 23/04/2014 22:58, Phil Hobbs wrote:
On 04/23/2014 11:53 AM, upsidedown@downunder.com wrote:
On Wed, 23 Apr 2014 14:22:46 +0100, Martin Brown
<|||newspam|||@nezumi.demon.co.uk> wrote:

The problem as originally written was scaled in SI units as I recall and
included many terms which were something vaguely like k*G/c^2.
Benchmarking it showed that 80% of the time was spent in exception
handling for some values that could be safely set to zero!

That is not a problem with Fortran, rather the problem was that the
hardware platforms behave differently. The IEEE floating point
standard helped to clear out some of this mess.

The denormal problem was _introduced_ by IEEE floating point, iirc.

No. It definitely predates IEEE 754. The IBM 360/370 series definitely
had denormalised FP handling problems in FORTRAN. Indeed ISTR that
its 7-bit exponent range for REAL*4 was smaller than that of IEEE 754.

I think S/390 was the first IBM mainframe to offer IEEE 754 FP. I can't
recall exactly what the Cyber CDC 7600 did, although since it had native
fast 60-bit floating point it was less likely to fail on denormalised values.

--
Regards,
Martin Brown
 
On 24/04/2014 07:03, upsidedown@downunder.com wrote:
On Wed, 23 Apr 2014 21:08:38 -0400, Phil Hobbs
<hobbs@electrooptical.net> wrote:

On 4/23/2014 10:06 PM, josephkk wrote:
On Wed, 23 Apr 2014 17:58:26 -0400, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

On 04/23/2014 11:53 AM, upsidedown@downunder.com wrote:
On Wed, 23 Apr 2014 14:22:46 +0100, Martin Brown
<|||newspam|||@nezumi.demon.co.uk> wrote:

The problem as originally written was scaled in SI units as I recall and
included many terms which were something vaguely like k*G/c^2.
Benchmarking it showed that 80% of the time was spent in exception
handling for some values that could be safely set to zero!

That is not a problem with Fortran, rather the problem was that the
hardware platforms behave differently. The IEEE floating point
standard helped to clear out some of this mess.


The denormal problem was _introduced_ by IEEE floating point, iirc.

Cheers

Phil Hobbs

IEEE 854 formalized the terminology, but underflow was already being dealt
with in DEC VAXes and likely IBM 370s.

Denormals are intended as a way to handle underflow gracefully, rather
than brutally setting all such numbers to zero like an oppressive white
male. (But I repeat myself.)

Normally floats are implemented as a binary fraction (significand) and
an exponent, which is usually base-2 but sometimes (as in the old IBM
format) base-16.

If the exponent is binary, the leading bit of the significand is always
a 1, unless the number is identically zero. Thus IEEE and various other
base-2 formats take advantage of this free bit and don't bother storing
it, which gives them a factor of 2 increase in precision in ordinary
circumstances.

However, when the exponent reaches its maximum negative value, there's
no room left. In order to make the accuracy of computations degrade
gracefully in that situation, i.e. to make such a number distinguishable
from zero, the IEEE specification allows "denormalized numbers", i.e.
those in which the leading bit of the significand is not 1.

The problem is that denormals are considered so rare that AFAIK FPU
designers don't implement them in silicon, but rather in microcode.
Hence the speed problem.

And by default the operating system would usually trap on a denorm
operand and go through a tedious, long-winded recovery routine every
time. You had to alter the control word to mask denorm traps out.
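
On x87 that means setting the DM (denormal mask) bit in the control word; a
bare-bones sketch in C with GCC-style inline asm (my illustration, not any
particular runtime's code):

#include <stdint.h>

/* Mask the x87 denormal-operand exception (bit 1 of the control word)
 * so a denormal operand no longer raises a trap to a slow handler. */
static void mask_denormal_trap(void)
{
    uint16_t cw;
    __asm__ volatile ("fnstcw %0" : "=m"(cw));   /* read control word      */
    cw |= 0x0002;                                /* set DM (denormal mask) */
    __asm__ volatile ("fldcw %0" : : "m"(cw));   /* write it back          */
}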

> Which decade are you talking about ?

Big iron 70's and 80's.

For instance in the x87 based FP units, the evaluation stack is 80
bits wide. To enter a 32 bit float or 64 bit double for calculations,
it is first pushed into the 80 bit wide evaluation stack. The floating
point expressions are evaluated on the 80 bit stack and the final
expression result is popped of the stack and stored as 32/64 bit IEEE
values. I see no reason, why the hidden bit would cause any problems
in pushing or popping values to/from the stack.

It depends on whether or not you have masked the bit that generates an
interrupt on encountering a denormalised operand.

In more traditional architectures, you still will have to have an
extra bit, so that when there is an overflow in float add/sub, the
mantissa must be shifted right and the exponent incremented.

Your description about denormals in microcode/software might apply to
established manufacturer that adapt their existing FP-processor board
for IEEE with only small alterations (e.g. changing the offset in the
exponent), Adding the denormalization logic might have required more
extensive PCB alterations, thus it would be a temptation to use
emulation for denorms.

The problem can still arise today if you have the wrong compiler options
set and scale your problem unwisely. The denorm operand exception
handler can end up taking the bulk of all the elapsed time.

--
Regards,
Martin Brown
 
In article <53586416.8050105@electrooptical.net>, Phil Hobbs
<hobbs@electrooptical.net> wrote:

On 4/23/2014 10:06 PM, josephkk wrote:
On Wed, 23 Apr 2014 17:58:26 -0400, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

On 04/23/2014 11:53 AM, upsidedown@downunder.com wrote:
On Wed, 23 Apr 2014 14:22:46 +0100, Martin Brown
<|||newspam|||@nezumi.demon.co.uk> wrote:

On 22/04/2014 14:07, Phil Hobbs wrote:
On 04/22/2014 03:40 AM, Martin Brown wrote:

The only thing C had going for it over FORTRAN was that the indexing of
FFTs was a lot more natural with 0 based arrays instead of 1 based.


And no COMMON blocks, DATA cards, computed GOTOs, arithmetic IFs, .....

I learned Fortran in the late 70s attempting to debug a radiative
transfer astrophysics code that I didn't understand very well at all. I
sure don't miss it.

FORTRAN wasn't *all* that bad. I also worked on speeding up a fluid in
cell code for relativistic plasma beaming in radio galaxies back then.
It helped a lot that I understood how to prevent the exception handling
from going crazy when computing with denormalised numbers.

The problem as originally written was scaled in SI units as I recall and
included many terms which were something vaguely like k*G/c^2.
Benchmarking it showed that 80% of the time was spent in exception
handling for some values that could be safely set to zero!

That is not a problem with Fortran, rather the problem was that the
hardware platforms behave differently. The IEEE floating point
standard helped to clear out some of this mess.


The denormal problem was _introduced_ by IEEE floating point, iirc.

Cheers

Phil Hobbs

IEEE 854 formalized the terminology, but underflow was already being dealt
with in DEC VAXes and likely IBM 370s.

Denormals are intended as a way to handle underflow gracefully, rather
than brutally setting all such numbers to zero like an oppressive white
male. (But I repeat myself.)

Normally floats are implemented as a binary fraction (significand) and
an exponent, which is usually base-2 but sometimes (as in the old IBM
format) base-16.

If the exponent is binary, the leading bit of the significand is always
a 1, unless the number is identically zero. Thus IEEE and various other
base-2 formats take advantage of this free bit and don't bother storing
it, which gives them a factor of 2 increase in precision in ordinary
circumstances.

However, when the exponent reaches its maximum negative value, there's
no room left. In order to make the accuracy of computations degrade
gracefully in that situation, i.e. to make such a number distinguishable
from zero, the IEEE specification allows "denormalized numbers", i.e.
those in which the leading bit of the significand is not 1.

The problem is that denormals are considered so rare that AFAIK FPU
designers don't implement them in silicon, but rather in microcode.
Hence the speed problem.

I first ran into such issues when doing polynomial approximations to
functions too complicated to compute in real time. The resulting
polynomials (or rational polynomials) were evaluated using 32-bit
fixed-point arithmetic. Unity scaling of the polynomial variable x was
essential.
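
Something like the following, say in Q16.16 (a sketch only - the format and
helper names are made up here, not the original code), where Horner's rule
keeps every intermediate near unity as long as |x| <= 1:

#include <stdint.h>

typedef int32_t q16_16;                      /* value = raw / 65536 */

static q16_16 q_mul(q16_16 a, q16_16 b)
{
    return (q16_16)(((int64_t)a * b) >> 16); /* 32x32 -> 64, then rescale */
}

/* p(x) = c[0] + c[1]*x + ... + c[n-1]*x^(n-1), evaluated by Horner's rule */
static q16_16 poly_eval(const q16_16 *c, int n, q16_16 x)
{
    q16_16 acc = c[n - 1];
    for (int i = n - 2; i >= 0; i--)
        acc = q_mul(acc, x) + c[i];
    return acc;
}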

It seems to me that the root cause of the denormalized numbers is
trying to do everything in unscaled SI units. If one formulates the
problem such that variable values are roughly centered on unity, in
most problem domains the underflows and overflows will vanish.

If not, reformulation to operate in the log domain may be useful.
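
A toy example of that rescaling in C (the constants and reference values here
are purely illustrative):

#include <stdio.h>

#define G  6.674e-11     /* m^3 kg^-1 s^-2 */
#define C0 2.998e8       /* m/s */

int main(void)
{
    /* Unscaled SI: the working number is ~7e-45, already a denormal
     * in single precision (float denormals start below ~1.2e-38). */
    double m = 1.0e-20, r = 1.0e-3;
    double phi_si = G * m / (C0 * C0 * r);

    /* Scaled: pick reference values m0, r0 so the working number is O(1)
     * and fold the tiny constant into one factor applied at the end. */
    double m0 = 1.0e-20, r0 = 1.0e-3;
    double k  = G * m0 / (C0 * C0 * r0);
    double phi_scaled = (m / m0) / (r / r0);      /* exactly 1 here */

    printf("%g  %g\n", phi_si, k * phi_scaled);   /* same physical value */
    return 0;
}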

Joe Gwinn
 
On 04/24/2014 02:03 AM, upsidedown@downunder.com wrote:
On Wed, 23 Apr 2014 21:08:38 -0400, Phil Hobbs
<hobbs@electrooptical.net> wrote:

On 4/23/2014 10:06 PM, josephkk wrote:
On Wed, 23 Apr 2014 17:58:26 -0400, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

On 04/23/2014 11:53 AM, upsidedown@downunder.com wrote:
On Wed, 23 Apr 2014 14:22:46 +0100, Martin Brown
<|||newspam|||@nezumi.demon.co.uk> wrote:

On 22/04/2014 14:07, Phil Hobbs wrote:
On 04/22/2014 03:40 AM, Martin Brown wrote:

The only thing C had going for it over FORTRAN was that the indexing of
FFTs was a lot more natural with 0 based arrays instead of 1 based.


And no COMMON blocks, DATA cards, computed GOTOs, arithmetic IFs, .....

I learned Fortran in the late 70s attempting to debug a radiative
transfer astrophysics code that I didn't understand very well at all. I
sure don't miss it.

FORTRAN wasn't *all* that bad. I also worked on speeding up a fluid in
cell code for relativistic plasma beaming in radio galaxies back then.
It helped a lot that I understood how to prevent the exception handling
from going crazy when computing with denormalised numbers.

The problem as originally written was scaled in SI units as I recall and
included many terms which were something vaguely like k*G/c^2.
Benchmarking it showed that 80% of the time was spent in exception
handling for some values that could be safely set to zero!

That is not a problem with Fortran, rather the problem was that the
hardware platforms behave differently. The IEEE floating point
standard helped to clear out some of this mess.


The denormal problem was _introduced_ by IEEE floating point, iirc.

Cheers

Phil Hobbs

IEEE 854 formalized the terminology, but underflow was already being dealt
with in DEC VAXes and likely IBM 370s.

Denormals are intended as a way to handle underflow gracefully, rather
than brutally setting all such numbers to zero like an oppressive white
male. (But I repeat myself.)

Normally floats are implemented as a binary fraction (significand) and
an exponent, which is usually base-2 but sometimes (as in the old IBM
format) base-16.

If the exponent is binary, the leading bit of the significand is always
a 1, unless the number is identically zero. Thus IEEE and various other
base-2 formats take advantage of this free bit and don't bother storing
it, which gives them a factor of 2 increase in precision in ordinary
circumstances.

However, when the exponent reaches its maximum negative value, there's
no room left. In order to make the accuracy of computations degrade
gracefully in that situation, i.e. to make such a number distinguishable
from zero, the IEEE specification allows "denormalized numbers", i.e.
those in which the leading bit of the significand is not 1.

The problem is that denormals are considered so rare that AFAIK FPU
designers don't implement them in silicon, but rather in microcode.
Hence the speed problem.

Cheers

Phil Hobbs

Which decade are you talking about ?

Around 2007, on a cluster of dual 12-core AMD Magny Cours boxes. It
flew, except when it slowed down by 30 times due to a simulation space
full of denormals. Finding the right compiler switch fixed it
completely. It did the same on the dual Xeon box I had in my office.

For instance in the x87 based FP units, the evaluation stack is 80
bits wide. To enter a 32 bit float or 64 bit double for calculations,
it is first pushed into the 80 bit wide evaluation stack. The floating
point expressions are evaluated on the 80 bit stack and the final
expression result is popped of the stack and stored as 32/64 bit IEEE
values. I see no reason, why the hidden bit would cause any problems
in pushing or popping values to/from the stack.

In more traditional architectures, you still will have to have an
extra bit, so that when there is an overflow in float add/sub, the
mantissa must be shifted right and the exponent incremented.

That's only a worry during an actual FP operation, so is easily handled
in silicon. FP numbers stored in registers and main memory always use
the free bit except with denormals and NaNs, AFAIK.

Your description about denormals in microcode/software might apply to
established manufacturer that adapt their existing FP-processor board
for IEEE with only small alterations (e.g. changing the offset in the
exponent), Adding the denormalization logic might have required more
extensive PCB alterations, thus it would be a temptation to use
emulation for denorms.

Nope, relatively recent hardware and good compilers.

Cheers

Phil Hobbs


--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics

160 North State Road #203
Briarcliff Manor NY 10510

hobbs at electrooptical dot net
http://electrooptical.net
 
On 04/24/2014 09:37 AM, Joe Gwinn wrote:
In article <53586416.8050105@electrooptical.net>, Phil Hobbs
<hobbs@electrooptical.net> wrote:

On 4/23/2014 10:06 PM, josephkk wrote:
On Wed, 23 Apr 2014 17:58:26 -0400, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

On 04/23/2014 11:53 AM, upsidedown@downunder.com wrote:
On Wed, 23 Apr 2014 14:22:46 +0100, Martin Brown
<|||newspam|||@nezumi.demon.co.uk> wrote:

On 22/04/2014 14:07, Phil Hobbs wrote:
On 04/22/2014 03:40 AM, Martin Brown wrote:

The only thing C had going for it over FORTRAN was that the indexing of
FFTs was a lot more natural with 0 based arrays instead of 1 based.


And no COMMON blocks, DATA cards, computed GOTOs, arithmetic IFs, .....

I learned Fortran in the late 70s attempting to debug a radiative
transfer astrophysics code that I didn't understand very well at all. I
sure don't miss it.

FORTRAN wasn't *all* that bad. I also worked on speeding up a fluid in
cell code for relativistic plasma beaming in radio galaxies back then.
It helped a lot that I understood how to prevent the exception handling
from going crazy when computing with denormalised numbers.

The problem as originally written was scaled in SI units as I recall and
included many terms which were something vaguely like k*G/c^2.
Benchmarking it showed that 80% of the time was spent in exception
handling for some values that could be safely set to zero!

That is not a problem with Fortran, rather the problem was that the
hardware platforms behave differently. The IEEE floating point
standard helped to clear out some of this mess.


The denormal problem was _introduced_ by IEEE floating point, iirc.

Cheers

Phil Hobbs

IEEE 854 formalized the terminology, but underflow was already being dealt
with in DEC VAXes and likely IBM 370s.

Denormals are intended as a way to handle underflow gracefully, rather
than brutally setting all such numbers to zero like an oppressive white
male. (But I repeat myself.)

Normally floats are implemented as a binary fraction (significand) and
an exponent, which is usually base-2 but sometimes (as in the old IBM
format) base-16.

If the exponent is binary, the leading bit of the significand is always
a 1, unless the number is identically zero. Thus IEEE and various other
base-2 formats take advantage of this free bit and don't bother storing
it, which gives them a factor of 2 increase in precision in ordinary
circumstances.

However, when the exponent reaches its maximum negative value, there's
no room left. In order to make the accuracy of computations degrade
gracefully in that situation, i.e. to make such a number distinguishable
from zero, the IEEE specification allows "denormalized numbers", i.e.
those in which the leading bit of the significand is not 1.

The problem is that denormals are considered so rare that AFAIK FPU
designers don't implement them in silicon, but rather in microcode.
Hence the speed problem.

I first ran into such issues when doing polynomial approximations to
functions too complicated to compute in realtime. The resulting
polynomials (or rational polynomials) were solved using 32-bit
fixed-point arithmetic. Unity scaling of the polynomial variable x was
essential.

It seems to me that the root cause of the denormalized numbers is
trying to do everything in unscaled SI units. If one formulates the
problem such that variable values are roughly centered on unity, in
most problem domains the underflows and overflows will vanish.

If not, reformulation to operate in the log domain may be useful.

Joe Gwinn

No, the issue is that in an FDTD code, when you turn on the source, the
real fields propagate at approximately c. However, since you're
iterating through all of main memory twice per full time step, the
roundoff errors propagate superluminally and fill the domain with
denormals until the real fields get there.

A combination of initializing the domain with small amounts of properly
normalized noise, and the right compiler switches, fixed it.
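
The noise-seeding part amounts to something like this (a sketch of the idea
only; the array, the 1e-30 amplitude and the RNG choice are illustrative, not
the production code):

#include <stdlib.h>   /* rand, RAND_MAX, size_t */

#define N (1u << 20)

static double ez[N];  /* one field component of a hypothetical FDTD grid */

/* Seed the grid with tiny but *normalized* noise, so the roundoff that
 * races ahead of the real wavefront lands on values far above DBL_MIN
 * (~2.2e-308) instead of creating denormals. */
static void seed_with_noise(void)
{
    for (size_t i = 0; i < N; i++)
        ez[i] = 1e-30 * (2.0 * rand() / (double)RAND_MAX - 1.0);
}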

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics

160 North State Road #203
Briarcliff Manor NY 10510

hobbs at electrooptical dot net
http://electrooptical.net
 
On 04/24/2014 07:05 AM, Martin Brown wrote:
On 23/04/2014 22:58, Phil Hobbs wrote:
On 04/23/2014 11:53 AM, upsidedown@downunder.com wrote:
On Wed, 23 Apr 2014 14:22:46 +0100, Martin Brown
<|||newspam|||@nezumi.demon.co.uk> wrote:

The problem as originally written was scaled in SI units as I recall
and
included many terms which were something vaguely like k*G/c^2.
Benchmarking it showed that 80% of the time was spent in exception
handling for some values that could be safely set to zero!

That is not a problem with Fortran, rather the problem was that the
hardware platforms behave differently. The IEEE floating point
standard helped to clear out some of this mess.

The denormal problem was _introduced_ by IEEE floating point, iirc.

No. It definitely predates IEEE 754. The IBM 360/370 series definitely
had the denormalised FP handling problems in FORTRAN. Indeed ISTR that
its 7bit exponent range for REAL*4 was smaller than that for IEEE 754.

I think S/390 was the first IBM mainframe to offer IEEE 754 FP. I can't
recall exactly what the Cyber CDC 7600 did although since it had native
fast 60bit floating point it was less likely to fail denormalised.

The 360 FP format was also powers-of-16, iirc. That didn't hurt the
average precision too much (since the 3 bits you lose on the swing
(significand) you gain on the roundabouts (exponent)). You did lose the
free bit, though, and it made the roundoff error horribly
pattern-sensitive, so the LSB/sqrt(12) approximation was hopelessly wrong.
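
A quick back-of-the-envelope of that wobble (my arithmetic for a 24 bit
hex-normalized fraction, not anything from IBM's manuals):

#include <stdio.h>

int main(void)
{
    /* S/360-style single precision: 24 fraction bits, normalized only to
     * the leading hex digit, so 0 to 3 of those bits can be wasted zeros.
     * Effective precision therefore wobbles between 21 and 24 bits
     * depending on the leading hex digit of the value represented. */
    for (int d = 1; d <= 15; d++) {
        int lead_bits = (d >= 8) ? 4 : (d >= 4) ? 3 : (d >= 2) ? 2 : 1;
        printf("leading hex digit %2d -> %2d significant bits\n",
               d, 20 + lead_bits);
    }
    return 0;
}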

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics

160 North State Road #203
Briarcliff Manor NY 10510

hobbs at electrooptical dot net
http://electrooptical.net
 
Phil Hobbs <pcdhSpamMeSenseless@electrooptical.net> writes:

On 04/24/2014 09:37 AM, Joe Gwinn wrote:
In article <53586416.8050105@electrooptical.net>, Phil Hobbs
<hobbs@electrooptical.net> wrote:

[...]

No, the issue is that in a FDTD code, when you turn on the source, the
real fields propagate at approximately c. However, since you're
iterating through all of main memory twice per full time step, the
roundoff errors propagate superluminally and fill the domain with
denormals until the real fields get there.

A combination of initializing the domain with small amounts of
properly normalized noise, and the right compiler switches, fixed it.

Yeah, that's how god does it.


--

John Devereux
 
I wonder, if you go with the PID from Microchip (if they still offer
it), whether you can get a cert from usb.org with it. Same for FTDI.

I am pretty sure you cannot. You need a unique VID for the cert.


> I'm pretty sure Microchip will still give you a PID, upon request,

Yes, for development.

> but I don't know how that works with the USB.org folks.

No.
 
We've done USB boxes with both the FTDI chips and using the USB
hardware inside an NXP uP. We use their VIDs. Seems to work fine.

But you can't certify the box without your own VID. Perhaps you can pay
FTDI to do it for you, but you can't do it using their VID.
 
On 24/04/2014 19:16, John Devereux wrote:
Phil Hobbs <pcdhSpamMeSenseless@electrooptical.net> writes:

On 04/24/2014 09:37 AM, Joe Gwinn wrote:
In article <53586416.8050105@electrooptical.net>, Phil Hobbs
<hobbs@electrooptical.net> wrote:


[...]

No, the issue is that in a FDTD code, when you turn on the source, the
real fields propagate at approximately c. However, since you're
iterating through all of main memory twice per full time step, the
roundoff errors propagate superluminally and fill the domain with
denormals until the real fields get there.

A combination of initializing the domain with small amounts of
properly normalized noise, and the right compiler switches, fixed it.

Yeah, that's how god does it.

A bit like the conjecture that if we can build a quantum computer in
this universe with a serious number of qubits then it becomes a lot more
likely that we are inside a higher level computer simulation.

--
Regards,
Martin Brown
 
In article <dd83l99jqq9dj7ddujbi4mmndfeepenhr4@4ax.com>,
martin_rid@verizon.net says...
On Fri, 18 Apr 2014 10:35:26 -0400, WangoTango
<Asgard24@mindspring.com> wrote:

In article <grl0l9hln3cqsr88vdaaqensjcpk5mkn8u@4ax.com>,
martin_rid@verizon.net says...
On Thu, 17 Apr 2014 13:56:01 -0700, John Larkin
<jlarkin@highlandtechnology.com> wrote:

On Thu, 17 Apr 2014 13:12:08 -0500, Tim Wescott
<tim@seemywebsite.really> wrote:

On Thu, 17 Apr 2014 09:26:12 -0700, John Larkin wrote:

On Thu, 17 Apr 2014 00:03:04 -0500, Tim Wescott
<tim@seemywebsite.really> wrote:

I have a customer who wants a USB-powered battery charger designed, with
certification -n- all. I figure the certification part will be harder
than the charger part, so I have to give it a pass.

Anyone do that and have spare cycles, or know someone? He wants someone
with a track record, or I'd talk him into using me!

What certs? UL/CSA/CE? FCC?

A test lab will do those, for a moderate pile of money.

Is there a USB certification standard?

You have to pass their compatibility tests if you want to use their logos
& such. I'm not sure whether you can even use "USB", but I suspect by
now that you can if you use the right wording.

You can use something like the FTDI chips and (maybe) inherit the
certs.

I think it's $3K just for the PID and VID alone.

They have upped it to $5K.
We finally got into doing enough USB devices that I decided to pop out
the $2K they were asking for a VID (you make up the PID yourself), and
wouldn't you know it, they had increased the price to $5K just in time
for me to give them some money. That's my lot.
Naturally, they want you to sign on for the $4K a year subscription that
gets you logo use and so on.

I wonder, if you go with the PID from Microchip (if they still offer
it), whether you can get a cert from usb.org with it. Same for FTDI.

Cheers

I don't know.
I'm pretty sure Microchip will still give you a PID, upon request, but I
don't know how that works with the USB.org folks.
 
