glen herrmannsfeldt
rickman <gnuarm@gmail.com> wrote:
(snip, I wrote)
As far as I know, currently if something can be done either with
an FPGA or a small microcontroller, or even DSP processor,
the choice is often for the processor. With the price of smaller
FPGAs coming down, that might change, but also there are more
people who know how to program such processors than HDL design.
Given that, I would expect some overlap between FPGA/HDL
applications and processor/software applications, depending
on the speed required at the time. Some applications might
initially be developed in software, debugged and tested, before
moving to an FPGA/HDL implementation.
Maybe the question isn't whether scaled fixed point is useful
in HDL, but why it isn't useful enough that people ask for
it in an HLL, in software. Why is there no scaled fixed point
datatype in C or Java?
The suggestion, which could be wrong, was that if scaled fixed point
is useful in an HDL, it should also be useful in non-HDL applications.
The designers of PL/I believed that it would be, but the designers of
other languages don't seem to have believed in it enough to include it.
Personally, I liked PL/I 40 years ago, and believe that it should
have done better than it did; scaled fixed point was a feature
that I liked to use.
I have no idea what bearing a forty-year-old computer language has to
do with this issue.
(snip)
(snip)
Software is implemented on programmable hardware and for
the most part is designed for the most common platforms and does not
do a good job of utilizing unusual hardware effectively. I don't see
how fixed point arithmetic would be of any value on conventional
integer oriented hardware.
Well, the designers of conventional hardware might argue that they
support scaled fixed point as long as you keep track of the radix
point yourself. It is, then, up to software to make it easier for
programmers by helping them keep track of the radix point.
I believe that, in addition to PL/I, there are some other less
commonly used languages, maybe Ada, that support it.
As I said above, if for no other reason, then to test and debug
the applications that will later be implemented in scaled fixed
point VHDL.
Again, I don't know why you are dragging high level languages into
this. What HLLs do has nothing to do with the utility of features of
HDLs.
Well, there are two applications that D. Knuth believes should
always be done in fixed point: finance and typesetting. As we know,
it is also often used in DSP, but Knuth likely didn't (doesn't)
do much DSP work. (It was some years ago he said that.)
(snip)
Maybe so. In the past, the cost and size kept people away from
using FPGAs when conventional processors would do. With the lower
prices of smaller (but not so small) FPGAs that might change.
In the past, the tools would not synthesize division with a
non-constant divisor. If you generate the division logic yourself,
you can add in any pipelining needed. (I did one once, and not for
a systolic array.) With the hardware block multipliers in many FPGAs,
it may not be necessary to generate pipelined multipliers, but many
will want pipelined divide.
Oh, I see why you are talking about pipelining now. I don't think the
package provides pipelining. But that does not eliminate the utility
of the package. It may not be useful to you if it isn't pipelined,
but many other apps can use non-pipelined arithmetic just fine.
(snip)
Yes. If I determine the shifts and rounds, then I don't need VHDL
to do it for me. I can do it using integer arithmetic in C
(easier if I can get all the product bits from multiply), and
in HDL.
Yes, multiply generates a wide product, and most processors with
a multiply instruction supply all the bits. But most HLLs don't
provide a way to get those bits. For scaled fixed point, you
shift as appropriate to get the needed product bits out.
I have no interest in what HLLs do. I use HDLs to design hardware, and
I determine how the HDL shifts or rounds or whatever it is that I
need.
(snip)
The point I didn't make very well was that floating point is still
not very usable in FPGA designs, as it is still too big. If a
design can be pipelined, then it has a chance to be fast enough
to be useful.
Whether floating point is too big depends on your app. Pipelining
does not reduce the size of a design and can only be used in apps that
can tolerate the pipeline latency. You speak as if your needs are the
same as everyone else's.
Well, the desire to run almost any computational problem faster
certainly isn't unique to me. But yes, I know the problems that
I work on better than those that I don't. But if you can't process
data faster than a medium sized conventional processor, there
won't be much demand, unless the cost (including amortized
development) is less.
True, but for floating point number crunching, in the teraflop
or petaflop scale, it is scientific programming. With the size
and speed of floating point in an FPGA, one could instead
use really wide fixed point. Given the choice between 32 bit
floating point and 64 bit or more fixed point for DSP applications,
which one is more useful?
Historically, FPGA based hardware to accelerate scientific
programming has not done very well in the marketplace. People
keep trying, though, and I would like to see someone succeed.
Scientific programming is not the only app for FPGAs and fixed or
floating point arithmetic. Fixed point arithmetic is widely used in
signal processing apps and floating point is often used when fixed
point is too limiting.
In general, fixed point is a better choice for quantities
with an absolute (not size dependent) uncertainty, floating
point for quantities with a relative uncertainty.
-- glen