About negation in the numeric_std package


move

Hi all,

Is negation in VHDL's numeric_std free of overflow?

Like this:

signal din : std_logic_vector(7 downto 0);

......

When I do

cout <= std_logic_vector(- signed(din));

For example, when din equals b"10000000", which in two's complement
binary is -128: after negating it, the expected +128 does not fit in
8 bits, and we get b"10000000" again. Is that an overflow? How do I
get the right answer? What is the best way to do

a <= -b; ?

Thanks ALL in advance !!!

liubenyuan
 
move wrote:

how to get the right answer?
That is an interesting question.

To synthesize any significant signed math,
I make an rtl-style sim model using reals to work
out all the register lengths.

To check the sims, I might also use this
www.python.org/download
as a functional calculator.
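For example, a couple of lines of Python can work out the minimum signed width a result needs (the `signed_width` helper below is my own sketch, not part of any package):

```python
# Hypothetical helper: the minimum two's-complement width that holds n.
# An n-bit signed vector covers -2**(n-1) .. 2**(n-1) - 1.
def signed_width(n):
    if n >= 0:
        return n.bit_length() + 1       # one extra bit for the sign
    return (-n - 1).bit_length() + 1    # e.g. -2**k needs k+1 bits

print(signed_width(127))   # 8
print(signed_width(-128))  # 8
print(signed_width(128))   # 9 -> negating -128 needs a 9-bit result
```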

Once the real answers are right in simulation,
I convert the reals to integer ranges
for small numbers, or to numeric_std.signed
vectors for big numbers.

-- Mike Treseler
 
On Aug 29, 1:23 pm, move <liubeny...@gmail.com> wrote:
Hi all,

Is negation in VHDL's numeric_std free of overflow?

Like this:

signal din : std_logic_vector(7 downto 0);
std_logic_vectors do not have any numerical interpretation; they
cannot 'overflow', and they cannot be 'positive' or 'negative'. To
treat a collection of bits as a signed or unsigned numeric quantity,
add the following line of code to pull in the numeric_std package:

use ieee.numeric_std.all;

.....

When I do

cout <= std_logic_vector(- signed(din));

For example, when din equals b"10000000", which in two's complement
binary is -128: after negating it we get b"10000000" again. Is that
an overflow? How do I get the right answer?
With an 8-bit collection of bits interpreted as two's complement
binary, as you've done, your range of operation is from -128 to +127.
If you're going to do something that may take you outside of that
range, then you need another bit.

what is the best way to do

a <= -b; ?
'a' should have one more bit of precision than 'b':

signal b: signed(7 downto 0);
signal a: signed(8 downto 0);

KJ
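To see why the extra bit matters, here is a small Python sketch (in the spirit of Mike's "use Python as a functional calculator" suggestion) of what a fixed-width two's complement negation does; the helper name is mine, not anything from numeric_std:

```python
def negate_twos_complement(value, bits):
    """Negate `value` and wrap the result to a `bits`-wide two's
    complement range, the way a fixed-width signed vector behaves."""
    mask = (1 << bits) - 1
    raw = (-value) & mask                # wrap to the vector width
    if raw & (1 << (bits - 1)):          # top bit set: value is negative
        raw -= 1 << bits
    return raw

print(negate_twos_complement(-128, 8))   # -128 : the overflow case
print(negate_twos_complement(-128, 9))   # 128  : one extra bit fixes it
```

With 8 bits, -(-128) wraps straight back to -128; a 9-bit result holds +128, which is exactly KJ's "one more bit of precision".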
 
Andy wrote:

There is a new standard package for fixed point that does not
overflow or roll over; the result is large enough to handle the
largest possibility (n+1 bits in the case of adding or subtracting
n-bit operands). Just declare a signed fixed-point vector with zero
fractional bits, and you have mathematically accurate integer
arithmetic, but you have to keep up with the signal/variable widths
yourself (the compiler keeps you honest).
I don't yet grok the difference from numeric_std.signed
for this application, but it sounds like it might be soup,
so I'll check it out.
Let's see
http://www.vhdl.org/vhdl-200x/vhdl-200x-ft/packages/files.html
Yes, Mr. Bishop has been busy.
This looks like a nice example:
http://www.vhdl.org/vhdl-200x/vhdl-200x-ft/packages/fixed_synth.vhdl
This one too
http://www.vhdl.org/vhdl-200x/vhdl-200x-ft/packages/real_tests.vhdl

Thanks Andy.

-- Mike Treseler
 
On Aug 29, 4:22 pm, Mike Treseler <mtrese...@gmail.com> wrote:
move wrote:
how to get the right answer?

That is an interesting question.

To synthesize any significant signed math,
I make an rtl-style sim model using reals to work
out all the register lengths.

To check the sims, I might also use this www.python.org/download
as a functional calculator.

Once the real answers are right in simulation,
I convert the reals to integer ranges
for small numbers, or to numeric_std.signed
vectors for big numbers.

        -- Mike Treseler
There is a new standard package for fixed point that does not
overflow or roll over; the result is large enough to handle the
largest possibility (n+1 bits in the case of adding or subtracting
n-bit operands). Just declare a signed fixed-point vector with zero
fractional bits, and you have mathematically accurate integer
arithmetic, but you have to keep up with the signal/variable widths
yourself (the compiler keeps you honest).

For quantities within integer'range, integer arithmetic automatically
handles overflow. The simulation keeps you honest, giving you
assertions if you overflow the range of a variable/signal.

Integer arithmetic also simulates many times faster than vector based
arithmetic.

Andy
 
On Aug 30, 3:27 am, KJ <kkjenni...@sbcglobal.net> wrote:
On Aug 29, 1:23 pm, move <liubeny...@gmail.com> wrote:
It seems there IS overflow, so when negating, one extra bit is needed
to act as a guard bit.

I did a simple simulation:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity test is
end entity test;

architecture rtl of test is
  signal din  : std_logic_vector(7 downto 0);
  signal dout : std_logic_vector(7 downto 0);
begin
  dout <= std_logic_vector(-signed(din));

  pstim : process
  begin
    din <= b"00001111";                  -- 15
    wait for 100 ns; din <= b"11110001"; -- -15
    wait for 100 ns; din <= b"10000000"; -- -128
    wait;
  end process pstim;
end architecture rtl;

So the results are:
11110001  (-15, correct)
00001111  (+15, correct)
10000000  -_-! (still -128: overflow)

Thanks all!
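For reference, the three testbench cases can be cross-checked with a short Python model of the 8-bit wrap-around (an illustration only; `neg8` is a made-up helper, not the simulator's own arithmetic):

```python
def neg8(bits):
    """Negate an 8-bit two's complement bit string, wrapping to 8 bits
    like std_logic_vector(-signed(din)) does."""
    din = int(bits, 2)
    value = din - 256 if din & 0x80 else din   # interpret as signed
    return format((-value) & 0xFF, '08b')      # negate, wrap, reformat

for din in ("00001111", "11110001", "10000000"):
    print(din, "->", neg8(din))
# 00001111 -> 11110001
# 11110001 -> 00001111
# 10000000 -> 10000000   (the overflow case)
```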
 
Andy wrote:
On Aug 29, 4:22 pm, Mike Treseler <mtrese...@gmail.com> wrote:
move wrote:
how to get the right answer?
That is an interesting question.

To synthesize any significant signed math,
I make an rtl-style sim model using reals to work
out all the register lengths.

To check the sims, I might also use this www.python.org/download
as a functional calculator.

Once the real answers are right in simulation,
I convert the reals to integer ranges
for small numbers, or to numeric_std.signed
vectors for big numbers.

-- Mike Treseler

There is a new standard package for fixed point that does not
overflow or roll over; the result is large enough to handle the
largest possibility (n+1 bits in the case of adding or subtracting
n-bit operands). Just declare a signed fixed-point vector with zero
fractional bits, and you have mathematically accurate integer
arithmetic, but you have to keep up with the signal/variable widths
yourself (the compiler keeps you honest).
This issue is exactly the reason that the fixed point package grows one
bit when it does an "abs" or a "-" on a number. When writing your code
it drives you nuts, but it keeps the precision correct.

You can get this package (downgraded for vhdl-93) at:
http://www.vhdl.org/fphdl/vhdl.html
 
move wrote:

so the results are:
11110001
00001111
10000000 -_-!

Thanks all!
Thanks for posting your results.

-- Mike Treseler
 
On Aug 30, 2:02 pm, David Bishop <dbis...@vhdl.org> wrote:
Andy wrote:
On Aug 29, 4:22 pm, Mike Treseler <mtrese...@gmail.com> wrote:
move wrote:
how to get the right answer?
That is an interesting question.

To synthesize any significant signed math,
I make an rtl-style sim model using reals to work
out all the register lengths.

To check the sims, I might also use this www.python.org/download
as a functional calculator.

Once the real answers are right in simulation,
I convert the reals to integer ranges
for small numbers, or to numeric_std.signed
vectors for big numbers.

        -- Mike Treseler

There is a new standard package for fixed point that does not
overflow or roll over; the result is large enough to handle the
largest possibility (n+1 bits in the case of adding or subtracting
n-bit operands). Just declare a signed fixed-point vector with zero
fractional bits, and you have mathematically accurate integer
arithmetic, but you have to keep up with the signal/variable widths
yourself (the compiler keeps you honest).

This issue is exactly the reason that the fixed point package grows one
bit when it does an "abs" or a "-" on a number.   When writing your code
it drives you nuts, but it keeps the precision correct.

You can get this package (downgraded for vhdl-93) at: http://www.vhdl.org/fphdl/vhdl.html
Yes, declaring signals and variables of the correct size to handle the
results is a pain, but it keeps you aware of what is going on. The
functions help, but are still not very easy to use. This is (at
least) one place where I wish you could declare an unconstrained
variable, initialized to its appropriate width as follows:

variable a,b    : sfixed(7 downto 0) := (others => '0');
variable ab_sum : sfixed             := a + b;

You can do that with constants...

Andy
 
Andy wrote:

Yes, declaring signals and variables of the correct size to handle the
results is a pain, but it keeps you aware of what is going on.
... if I know what the signed lengths ought to be in advance.
If I have to run some sims to figure this out,
I start with reals.

The
functions help, but are still not very easy to use. This is (at
least) one place where I wish you could declare an unconstrained
variable, initialized to its appropriate width as follows:

variable a,b    : sfixed(7 downto 0) := (others => '0');
variable ab_sum : sfixed             := a + b;

You can do that with constants...
Great idea.
Yes, I do that with constants all the time.
I can't see any problem if the expression is static.
Consider an enhancement request to
http://www.eda.org/vasg/bugrep.htm

-- Mike Treseler
 
Andy wrote:

I think therein lies the problem; the initialization expression is not
exactly static or even statically constrained for that matter. Perhaps
variable references inside a declarative region could be considered
static? Something like this could open up a big can of worms if not
carefully considered.
True, but such consideration is more likely
to occur at eda.org than it is here.
I think the idea has promise.

-- Mike Treseler
 
On Sep 2, 12:49 pm, Mike Treseler <mtrese...@gmail.com> wrote:
Andy wrote:
Yes, declaring signals and variables of the correct size to handle the
results is a pain, but it keeps you aware of what is going on.

... if I know what the signed lengths ought to be in advance.
If I have to run some sims to figure this out,
I start with reals.

The
functions help, but are still not very easy to use.  This is (at
least) one place where I wish you could declare an unconstrained
variable, initialized to its appropriate width as follows:

variable a,b    : sfixed(7 downto 0) := (others => '0');
variable ab_sum : sfixed             := a + b;

You can do that with constants...

Great idea.
Yes, I do that with constants all the time.
I can't see any problem if the expression is static.
Consider an enhancement request to http://www.eda.org/vasg/bugrep.htm

       -- Mike Treseler
I think therein lies the problem; the initialization expression is not
exactly static or even statically constrained for that matter. Perhaps
variable references inside a declarative region could be considered
static? Something like this could open up a big can of worms if not
carefully considered.

Andy
 
