why not use std_logic_arith?

vu_5421 wrote:
Hi all,

I was fumbling around the Xilinx 8.1i program folder and found an ieee
library folder with std_logic_arith in it. My understanding from past
posts was that std_logic_arith was released by Synopsys and is not the
library of choice among designers. Many suggest using numeric_std and
std_logic_unsigned instead (which were also in the same ieee library
folder).

Personally, looking at std_logic_arith, I see that it has many handy
functions like conv_std_logic_vector and ext that I use quite a bit for
integer to/from SLV conversions. I realize you could achieve the same
with the numeric_std library, but it's a lot wordier.
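For example (the signal names here are mine), the same conversions in the
two styles:

```vhdl
-- std_logic_arith style:
--   use ieee.std_logic_arith.all;
slv  <= conv_std_logic_vector(count, 8);               -- integer -> 8-bit SLV
wide <= ext(narrow, 8);                                -- zero-extend an SLV

-- numeric_std equivalent (wordier):
--   use ieee.numeric_std.all;
slv  <= std_logic_vector(to_unsigned(count, 8));
wide <= std_logic_vector(resize(unsigned(narrow), 8));
```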

Has IEEE standardized this std_logic_arith library? I noticed that the
header of this file is not the same as that of numeric_std, so I think
that it is still maintained by Synopsys. Nevertheless, is there an
argument against using std_logic_arith?

Thanks for comments.
 
vu_5421 wrote: [snip]
This was noticed as a problem, which we plan to fix in VHDL-2006.

The basic issue was "Why can't I add 1 to a std_logic_vector?" or "Why
can't I convert an integer into a std_logic_vector?". Std_logic_vectors
were not meant to be mathematical representations (UNSIGNED and SIGNED
were), but people use them that way anyway.

Because of this we created a package for just this situation.
We called the package "numeric_std_unsigned" (so as not to conflict with
the other names already out there).
You can get a vhdl-93 copy at:
http://www.vhdl.org/vhdl-200x/vhdl-200x-ft/packages/numeric_std_unsigned_c.vhdl

The problem you will find with all of the non-IEEE packages is that they
are different depending on which compiler you use. This new package
will be standardized, so it should be the same across all compilers.
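With that package in scope, both complaints above go away directly. A
quick sketch against the draft (the signal names are mine):

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std_unsigned.all;  -- the draft package linked above

-- ...
signal count : std_logic_vector(7 downto 0);
-- ...
count <= count + 1;                         -- add 1 to a std_logic_vector
count <= to_stdlogicvector(42, 8);          -- integer -> std_logic_vector
```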
 
David,

Will there be a numeric_std_signed as well? Seems only fair...

What about numeric_bit_unsigned/signed?

Maybe I'll just stick with integers... but it would help if the minimum
implementation of integers were expanded to at least 64 bits (signed or
unsigned), or required to be arbitrary (ok, power of two) and
configurable per the user.

While we're at it, since we've agreed that std_logic_vector, signed,
and unsigned all have a specific numeric interpretation, can we agree
that integers have a specific bit representation, and add bitwise
operators for integers to the standard? These would map directly to
machine primitives in simulation, and speed things up tremendously.

Integer numeric operations already simulate MUCH faster (orders of
magnitude) than with vectors, but they are constrained by the 32 bit
(signed) implementations.
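The integer style looks like this in practice (a sketch; the names are
mine). One caveat: integers trap on overflow rather than wrapping, so the
rollover has to be written out explicitly:

```vhdl
-- Vector version:
signal count_u : unsigned(7 downto 0);
-- ...
count_u <= count_u + 1;                 -- wraps silently at 255

-- Integer version; simulates as a native machine integer:
signal count_i : natural range 0 to 255;
-- ...
if count_i = 255 then                   -- explicit rollover
  count_i <= 0;
else
  count_i <= count_i + 1;
end if;
```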

Andy


 
Andy wrote:
David,

Will there be a numeric_std_signed as well? Seems only fair...

What about numeric_bit_unsigned/signed?
We didn't create one, but it should not be too hard to create one.

Maybe I'll just stick with integers... but it would help if the minimum
implementation of integers were expanded to at least 64 bits (signed or
unsigned), or required to be arbitrary (ok, power of two) and
configurable per the user.
I'd just use unsigned math, or limit the size of the integer with a range.

While we're at it, since we've agreed that std_logic_vector, signed,
and unsigned all have a specific numeric interpretation, can we agree
that integers have a specific bit representation, and add bitwise
operators for integers to the standard? These would map directly to
machine primitives in simulation, and speed things up tremendously.
numeric_std_unsigned is already overloaded for natural. You can add a
std_logic_vector to a natural as long as the result is a std_logic_vector.

Integer numeric operations already simulate MUCH faster (orders of
magnitude) than with vectors, but they are constrained by the 32 bit
(signed) implementations.
Yes, but they synthesize much worse. That's the problem.
 
David Bishop wrote:
Andy wrote:
David,

Will there be a numeric_std_signed as well? Seems only fair...

What about numeric_bit_unsigned/signed?

We didn't create one, but it should not be too hard to create one.
My point is these updates to slv should also be created for bit_vector.
I at least like that slv will be a subtype of sulv, so they can be
interchanged more easily, and sulv (or signed/unsigned) can be used
more effectively in applications that do not need resolution (i.e.
multiple drivers, tri-state busses, etc.) where the compiler can find
wiring errors for you.

Maybe I'll just stick with integers... but it would help if the minimum
implementation of integers were expanded to at least 64 bits (signed or
unsigned), or required to be arbitrary (ok, power of two) and
configurable per the user.

I'd just use unsigned math, or limit the size of the integer with a range.
I already use integers (actually subtypes of integer) for synthesis
wherever I can. The simulation speed increase is incredible, and mixing
sizes of addends and sums is much easier, as is extraction of
carry/borrow. The only problem is they are currently limited to 31 bits
for unsigned values.

While we're at it, since we've agreed that std_logic_vector, signed,
and unsigned all have a specific numeric interpretation, can we agree
that integers have a specific bit representation, and add bitwise
operators for integers to the standard? These would map directly to
machine primitives in simulation, and speed things up tremendously.

numeric_std_unsigned is already overloaded for natural. You can add a
std_logic_vector to a natural as long as the result is a std_logic_vector.
That's not what I meant/want. Prior to numeric_std_unsigned, the ieee
agreement was that there was no universal numeric interpretation for an
slv. Now there is. Conversely there was no universal binary
representation of integers, so bit-wise logic functions on integers
were not defined. Now that we have agreed on a numeric representation
of bits, why not reciprocate an agreement on bit representation of
numbers (integers)? Doing so would allow bitwise rtl models to execute
(simulate) at MUCH faster speeds by using machine instructions for
and/or/etc. on integer types.

Integer numeric operations already simulate MUCH faster (orders of
magnitude) than with vectors, but they are constrained by the 32 bit
(signed) implementations.

Yes, but they synthesize much worse. That's the problem.
Define worse. Can you test the carry/borrow bit on unsigned
addition/subtraction with one statement, without adding dummy bits?

Try evaluating (my_natural - 1 < 0) with unsigned vectors; it won't
always give you the right answer! With naturals, it sims and
synthesizes correctly every time, and even uses the borrow bit to boot.
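The difference comes from where the arithmetic happens; a sketch (the
names are mine):

```vhdl
signal n : natural range 0 to 255;
signal u : unsigned(7 downto 0);
-- ...
-- Integer arithmetic is evaluated in the full integer range, so this
-- test is exact and maps onto the subtractor's borrow output:
if n - 1 < 0 then          -- true exactly when n = 0
  -- ...
end if;

-- Unsigned arithmetic wraps: when u = 0, u - 1 is "11111111", and an
-- unsigned value is never negative, so this branch can never be taken:
if u - 1 < 0 then
  -- ...
end if;
```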

Try "sum <= a + b", where all three have different widths, with
vectors.

And (a + 1) > a should ALWAYS be TRUE, or throw an assertion at me.
Don't assume I wanted it to roll over!

They synthesize just fine, thank you, just make them bigger!

As for bitwise operators on integers, I'll take the hit and use mod
(just like I have to with rollovers in arithmetic or in SW) when I have
to; it is still magnitudes faster than doing it with a vector, and it
does exactly what I told it to.

A few years ago, I had a small ~20k gate FPGA design that had a 5 bit
internal subaddress that was distributed to several modules and decoded
for register selects, etc. I changed that one internal subaddress
signal from "numeric_std.unsigned(4 downto 0)" to "natural range 0 to
2**5-1", and my unchanged 2.5 hour testbench ran in less than an hour.
Everyone says vhdl simulates like a dog compared to verilog, but not
when you use integers in your vhdl. Unfortunately when you have to use
bit-wise logic operators, you still have to slow down and convert
to/from vectors.

Andy
 
Andy wrote:
David Bishop wrote:
Andy wrote:
David,

Will there be a numeric_std_signed as well? Seems only fair...

What about numeric_bit_unsigned/signed?
We didn't create one, but it should not be too hard to create one.

My point is these updates to slv should also be created for bit_vector.
They were.
http://www.vhdl.org/vhdl-200x/vhdl-200x-ft/packages/numeric_bit_unsigned.vhdl

I at least like that slv will be a subtype of sulv, so they can be
interchanged more easily, and sulv (or signed/unsigned) can be used
more effectively in applications that do not need resolution (i.e.
multiple drivers, tri-state busses, etc.) where the compiler can find
wiring errors for you.
I like that too. I just hate having to do "UNRESOLVED_UNSIGNED" (or
even "U_UNSIGNED") instead of UNSIGNED, but I'll get used to it.

Maybe I'll just stick with integers... but it would help if the minimum
implementation of integers were expanded to at least 64 bits (signed or
unsigned), or required to be arbitrary (ok, power of two) and
configurable per the user.
I'd just use unsigned math, or limit the size of the integer with a range.

I already use integers (actually subtypes of integer) for synthesis
wherever I can. The simulation speed increase is incredible, and mixing
sizes of addends and sums is much easier, as is extraction of
carry/borrow. The only problem is they are currently limited to 31 bits
for unsigned values.
You may want to try a short floating point number. I've been playing
with 16 bit floating point (float (5 downto -10), or 5 bit exponent and
10 bits of fraction) and found it working well for some DSP apps.
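A sketch of what that looks like with the draft float_pkg (the signal
names are mine):

```vhdl
library ieee;
use ieee.float_pkg.all;   -- draft VHDL-200x floating-point package

-- ...
-- 16-bit float: 1 sign bit, a 5-bit exponent, and a 10-bit fraction:
signal a, b, y : float(5 downto -10);
-- ...
y <= a + b;
```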

While we're at it, since we've agreed that std_logic_vector, signed,
and unsigned all have a specific numeric interpretation, can we agree
that integers have a specific bit representation, and add bitwise
operators for integers to the standard? These would map directly to
machine primitives in simulation, and speed things up tremendously.
numeric_std_unsigned is already overloaded for natural. You can add a
std_logic_vector to a natural as long as the result is a std_logic_vector.

That's not what I meant/want. Prior to numeric_std_unsigned, the ieee
agreement was that there was no universal numeric interpretation for an
slv. Now there is.
Sort of. The numeric_std_unsigned and numeric_bit_unsigned were mainly
user requests. They are actually based on numeric_std and numeric_bit.

Conversely there was no universal binary
representation of integers, so bit-wise logic functions on integers
were not defined. Now that we have agreed on a numeric representation
of bits, why not reciprocate an agreement on bit representation of
numbers (integers)? Doing so would allow bitwise rtl models to execute
(simulate) at MUCH faster speeds by using machine instructions for
and/or/etc. on integer types.
The problem here is that a "bit" or "std_ulogic" has a very different
meaning from an integer. In simulation, a std_ulogic can take an unknown
value; an integer cannot.

Integer numeric operations already simulate MUCH faster (orders of
magnitude) than with vectors, but they are constrained by the 32 bit
(signed) implementations.
Yes, but they synthesize much worse. That's the problem.

Define worse. Can you test the carry/borrow bit on unsigned
addition/subtraction with one statement, without adding dummy bits?
You should try it in Verilog some time. With signed values it even
makes VHDL look easy.

Try evaluating (my_natural - 1 < 0) with unsigned vectors; it won't
always give you the right answer! With naturals, it sims and
synthesizes correctly every time, and even uses the borrow bit to boot.
Here is a place where you might want to use a signed value.

Try "sum <= a + b", where all three have different widths, with
vectors.
One of the reasons why the fixed point package allows vectors to grow.

And (a + 1) > a should ALWAYS be TRUE, or throw an assertion at me.
Don't assume I wanted it to roll over!
In the fixed-point package, numbers "saturate"; they don't roll over
unless you tell them to.

They synthesize just fine, thank you, just make them bigger!

As for bitwise operators on integers, I'll take the hit and use mod
(just like I have to with rollovers in arithmetic or in SW) when I have
to; it is still magnitudes faster than doing it with a vector, and it
does exactly what I told it to.
I will typically convert to binary and then use a shift instead.
Synthesis is essentially the same thing as long as you do "mod 2**x".
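Concretely (the names are mine), the two spellings synthesize to the same
wiring:

```vhdl
-- Low 4 bits of an integer: "mod 2**x" just drops the upper bits,
-- so no divider hardware is inferred:
low  <= addr mod 2**4;
-- High part: "/ 2**x" is a right shift:
high <= addr / 2**4;

-- The vector equivalent is a slice:
low_v <= addr_slv(3 downto 0);
```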

A few years ago, I had a small ~20k gate FPGA design that had a 5 bit
internal subaddress that was distributed to several modules and decoded
for register selects, etc. I changed that one internal subaddress
signal from "numeric_std.unsigned(4 downto 0)" to "natural range 0 to
2**5-1", and my unchanged 2.5 hour testbench ran in less than an hour.
Everyone says vhdl simulates like a dog compared to verilog, but not
when you use integers in your vhdl. Unfortunately when you have to use
bit-wise logic operators, you still have to slow down and convert
to/from vectors.
Remember:
VHDL was designed by a bunch of software guys that had no idea how to
design hardware. So, we beat on it until you could design hardware with it.

Verilog was designed by a bunch of hardware guys that had no idea how to
design software. So, we beat on it until you could design software with it.

Pick your poison. SystemVerilog seems to have all of the worst
attributes of both C and Verilog. I'm still pinning things on VHDL.
 
David Bishop wrote:

Pick your poison. SystemVerilog seems to have all of the worst
attributes of both C and Verilog. I'm still pinning things on VHDL.
Thanks for hanging in there.

-- Mike Treseler
 
