Effective square root algorithms implemented on FPGAs

dpetrov <dimitar_petrov@n_o_s_p_a_m.abv.bg> wrote:
Hello guys,

I'm trying to find a little bit more information about efficient square root
algorithms that are typically implemented on FPGAs. I have already found a lot
of algorithms, but which ones are used, for example, by Intel or AMD? By
efficient I mean they are either really fast or they don't need much memory.
Could anyone mention some, or point me to some resources where I can get more
information?

Thanks!



---------------------------------------
Posted through http://www.FPGARelated.com
 
dpetrov <dimitar_petrov@n_o_s_p_a_m.abv.bg> wrote:

I'm trying to find a little bit more information about efficient square root
algorithms that are typically implemented on FPGAs. I have already found a lot
of algorithms, but which ones are used, for example, by Intel or AMD? By
efficient I mean they are either really fast or they don't need much memory.
Could anyone mention some, or point me to some resources where I can get more
information?

The usual software implementation, especially in floating point,
is Newton-Raphson based. With a good starting value, and appropriate
exponent adjustment, it is about two cycles for single precision
and four for double, with a divide in each cycle.

There is an algorithm that is mostly shift and add (or subtract),
slightly similar to shift and subtract divide algorithms, a binary
implementation of the pencil and paper square root algorithm that was
used before calculators became common. There is a related algorithm
used to do square root on an abacus.

-- glen
 
I thought that the abacus finds just the integer part of the square root,
and that it doesn't work on floating point numbers?

 
Search for the paper:

Implementation of Single Precision Floating Point Square Root on FPGAs
- by Yamin Li and Wanming Chu.

Cheers,
Jon
 
dpetrov <dimitar_petrov@n_o_s_p_a_m.n_o_s_p_a_m.abv.bg> wrote:

I thought that the abacus finds just the integer part of the square root,
and that it doesn't work on floating point numbers?

Fixed point, but you can move the decimal point anywhere you want to.
One position in the square root for each two positions in the argument.
That is, sqrt(2) = sqrt(2000000)/1000. Just like the slide rule,
you have to remember the position of the decimal point.

I once tested the algorithm to calculate sqrt(2) to six places.
(The digits of the starting number decrease in pairs as the digits
of the square root increase. I had to make up some tricks to get six.)

I bought a cheap abacus with a little booklet of algorithms in the
Chinatown of some large city. There is also a cube root algorithm,
but you need two abaci for a reasonable number of digits.
(You keep the accumulating cube root and its square as you go.)

-- glen
 
Jon Beniston <jon@beniston.com> wrote:
Search for the paper:

Implementation of Single Precision Floating Point Square Root on FPGAs
- by Yamin Li and Wanming Chu.

Probably not so hard. The hard one to do on FPGAs is floating point
addition. (Pre and post normalization are much bigger than the add.)

You need to multiply by sqrt(2) if the incoming exponent is odd,
otherwise it is as easy as fixed point.

-- glen
 
Thanks for the hints, guys.

@Glen: Could you please explain why it's hard to do it on FPGAs?

I got a little bit confused right now ;)
Most of the algorithms which are already implemented on FPGAs are floating
point, right?

Cheers,
Dimitar

 
On Mon, 16 Jan 2012 08:48:35 -0600
"dpetrov" <dimitar_petrov@n_o_s_p_a_m.n_o_s_p_a_m.abv.bg> wrote:

Thanks for the hints, guys.

@Glen: Could you please explain why it's hard to do it on FPGAs?

I got a little bit confused right now ;)
Most of the algorithms which are already implemented on FPGAs are floating
point, right?

Cheers,
Dimitar

Not at all. I churn out signal processing designs in FPGAs pretty
constantly and have never once implemented a floating-point design.
They're very expensive in terms of hardware used, as compared to
fixed-point. I'm not saying that there aren't plenty of people out there,
including on this group, doing floating point. But I think you'd find that
floating point designs are in the minority, especially on lower-end
(Spartan, Cyclone) FPGAs.

--
Rob Gaddi, Highland Technology -- www.highlandtechnology.com
Email address domain is currently out of order. See above to fix.
 
Rob, thanks a lot for shedding a little light on this. I really appreciate
that. I'll take a look and try to clarify for myself the pros and cons of
both.

Cheers,
Dimitar

 
