std_logic_vector <= my_constant

Brad Smallridge

Hello,

For a while I have been using this verbose
conversion code with numeric_std:
my_std_logic_vector <= std_logic_vector(to_unsigned(
my_natural_constant,my_std_logic_vector'length));

Recently, I discovered that Xilinx ISE and ModelSim will
accept this with std_logic_unsigned:
my_std_logic_vector <= "00000000" + my_natural_constant;
which I think is a lot easier to look at.

My question is which method, if either, is best?

Also can I have both numeric_std and std_logic_unsigned
in a module without any issues?

Brad Smallridge
AiVision
 
Why are you converting a numeric constant to a std_logic_vector in the
first place though? Presumably you would have yet more code that uses
that constant and, it would likely be in a mathematical context where
you can simply use the 'my_natural_constant' as is without any
conversion

Example:
some_unsigned_signal <= some_other_unsigned_signal +
my_natural_constant;
Why? When I began writing VHDL the concept of std_logic was a sort
of mystery to me. Since all (or at least most from Xilinx)
introductory tutorials used std_logic, I thought it was proper
to use it as well. I think most beginners will use std_logic
within their architectures. Then the arithmetic starts getting
difficult starting with signed and unsigned binaries, and then
multiplication, and so on. Also there is an issue with indexed
arrays that want a natural or integer index argument. Complicating
the issue, too, was the fact that variables did not show up in
ModelSim without some cryptic procedure that I have since forgotten,
so I would assign the variable of interest to a std_logic signal just
to make it visible.
I haven't seen any book or article address these issues, only a
few posts here and there.

In my current design there are a lot of constants that are modified
for synthesis as opposed to simulation due to the inordinate amount
of video data that the synthesis hardware deals with. Following the
advice from a previous post I use an array for each pair of constants.

type syn_sim is array(0 to 1) of natural;
constant start_row_syn_sim : syn_sim := (20,2);
... and more arrays
constant start_row : natural := start_row_syn_sim(sim);
... and more constants

sim is a generic natural that defaults to 0 for synthesis, but
is set to 1 for simulation from the testbench. I had thought
of using boolean for sim but I thought that down the road I
might want to have different sets of simulation constants.
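Since the paragraph above describes the scheme only in fragments, here is a minimal self-contained sketch of it (the entity name and clock port are illustrative, not from the original post):

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity video_top is
  generic (
    sim : natural := 0  -- 0 = synthesis constants; testbench overrides with 1
  );
  port (
    clk : in std_logic  -- illustrative port
  );
end entity video_top;

architecture rtl of video_top is
  type syn_sim is array (0 to 1) of natural;
  -- index 0 holds the synthesis value, index 1 the simulation value
  constant start_row_syn_sim : syn_sim := (20, 2);
  constant start_row : natural := start_row_syn_sim(sim);
begin
  -- start_row is 20 when sim = 0, and 2 when the testbench sets sim = 1
end architecture rtl;
```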

Hence I have a number of these my_std_logic_vector <= my_const_natural;
assignments in my architecture.

Except for at the top level design ports you would be better off using
the proper data type internally (i.e. natural, integer, my_type, etc.)
for all signals and constants.
That sounds good. I have never declared signal my_natural : natural;
however, there are quite a few instances in my code where I have to
pass std_logic and std_logic_vectors to other modules, FIFOs and the
like, so it seems that there would be a similar amount of mess
untangling all of that. True?

Also can I have both numeric_std and std_logic_unsigned
in a module without any issues?

You'll be better off sticking with numeric_std.
That does seem to be the prevailing wind. However I was wondering
if there was a severe gotcha in using both. I find it so easy to
drop in a counter using std_logic_unsigned and not have to worry
about converting it at all.

Brad Smallridge
AiVision
 
Brad Smallridge wrote:

For a while I have been using this verbose
conversion code with numeric_std:
my_std_logic_vector <= std_logic_vector(to_unsigned(
my_natural_constant,my_std_logic_vector'length));
I use std_logic_vector only on the top ports and
use unsigned or signed everywhere inside.

So if my port were
q : out std_logic_vector(15 downto 0)

I might declare an architecture signal as
subtype q_t is unsigned(q'range);
signal q_s: q_t ;
and use type q_t everywhere inside
with no conversion.

And somewhere, the output port assignment is
q <= std_logic_vector(q_s);

Not so bad.
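Putting those pieces together, a complete minimal counter in this style might look like the following sketch (entity and signal names are illustrative):

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity counter is
  port (
    clk   : in  std_logic;
    reset : in  std_logic;
    q     : out std_logic_vector(15 downto 0)
  );
end entity counter;

architecture rtl of counter is
  subtype q_t is unsigned(q'range);  -- tracks the port width automatically
  signal q_s : q_t := (others => '0');
begin
  process (clk) is
  begin
    if rising_edge(clk) then
      if reset = '1' then
        q_s <= (others => '0');
      else
        q_s <= q_s + 1;  -- numeric_std "+" on unsigned; no conversion needed
      end if;
    end if;
  end process;

  q <= std_logic_vector(q_s);  -- the single conversion, at the port boundary
end architecture rtl;
```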

Recently, I discovered that Xilinx ISE and ModelSim will
accept this with std_logic_unsigned:
my_std_logic_vector <= "00000000" + my_natural_constant;
which I think is a lot easier to look at.
numeric_std.unsigned does the same thing.

My question is which method, if either, is best?
std_logic_unsigned is more popular.
numeric_std.unsigned is better.

-- Mike Treseler

Also can I have both numeric_std and std_logic_unsigned
in a module without any issues?
No. That's one reason numeric_std is better.
Google for details.
 
On Apr 28, 4:02 pm, "Brad Smallridge" <bradsmallri...@dslextreme.com>
wrote:
Hello,

For a while I have been using this verbose
conversion code with numeric_std:
my_std_logic_vector <= std_logic_vector(to_unsigned(
my_natural_constant,my_std_logic_vector'length));

Recently, I discovered that Xilinx ISE and ModelSim will
accept this with std_logic_unsigned:
my_std_logic_vector <= "00000000" + my_natural_constant;
which I think is a lot easier to look at.
Except for having to hard code the proper number of zeros
though...which also then makes it a problem if you want to change the
length of 'my_std_logic_vector'. Although I agree it is easier to
look at than the first, the first is more rugged when it comes to
changes in vector sizes...but when it comes right down to it, neither
method is generally needed (read on).

My question is which method, if either, is best?
Why are you converting a numeric constant to a std_logic_vector in the
first place though? Presumably you would have yet more code that uses
that constant and, it would likely be in a mathematical context where
you can simply use the 'my_natural_constant' as is without any
conversion

Example:
some_unsigned_signal <= some_other_unsigned_signal +
my_natural_constant;

Except for at the top level design ports you would be better off using
the proper data type internally (i.e. natural, integer, my_type, etc.)
for all signals and constants.

Also can I have both numeric_std and std_logic_unsigned
in a module without any issues?
You'll be better off sticking with numeric_std.

KJ
 
Dave wrote:

As far as I know, numeric_std and std_logic_unsigned can actually play
very nicely together. Whether you agree with using it stylistically is
another issue. It can make your code more straightforward and clear.
A matter of opinion I guess,
but not nicely enough for me.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
use ieee.std_logic_unsigned.all;

entity unsigned_libs is
end unsigned_libs;

architecture sim of unsigned_libs is
-- Tue Apr 29 07:22:36 2008 M.Treseler
constant a : unsigned := x"02";
constant b : std_logic_vector := x"03";
begin
one: process is
begin
assert a + a = 4;
assert b + b = 6;
assert a + b = 5; -- No feasible entries for infix operator "+"
wait;
end process one;
end sim;

-- Mike Treseler
 
On Apr 28, 5:39 pm, Mike Treseler <mike_trese...@comcast.net> wrote:

Also can I have both numeric_std and std_logic_unsigned
in a module without any issues?

No. That's one reason numeric_std is better.
Google for details.
As far as I know, numeric_std and std_logic_unsigned can actually play
very nicely together. Whether you agree with using it stylistically is
another issue. It can make your code more straightforward and clear.

Dave
 
On Apr 29, 10:56 am, Mike Treseler <mike_trese...@comcast.net> wrote:
Dave wrote:
As far as I know, numeric_std and std_logic_unsigned can actually play
very nicely together. Whether you agree with using it stylistically is
another issue. It can make your code more straightforward and clear.

A matter of opinion I guess,
but not nicely enough for me.
Fair enough. I would only do it on the rare occasions that there's a
lot of math between SLV's and integers, or SLV's and SLV's, to be
done, and it's all unsigned math. Otherwise things would get confusing
in a hurry.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
use ieee.std_logic_unsigned.all;

entity unsigned_libs is
end unsigned_libs;

architecture sim of unsigned_libs is
-- Tue Apr 29 07:22:36 2008 M.Treseler
constant a : unsigned := x"02";
constant b : std_logic_vector := x"03";
begin
one: process is
begin
assert a + a = 4;
assert b + b = 6;
assert a + b = 5; -- No feasible entries for infix operator "+"
wait;
end process one;
end sim;

-- Mike Treseler
That line of code would need a conversion, no matter what libraries
you did or did not use.

Dave
 
On Apr 29, 12:07 pm, Dave <dhsch...@gmail.com> wrote:
On Apr 29, 10:56 am, Mike Treseler <mike_trese...@comcast.net> wrote:

Dave wrote:
As far as I know, numeric_std and std_logic_unsigned can actually play
very nicely together. Whether you agree with using it stylistically is
another issue. It can make your code more straightforward and clear.

A matter of opinion I guess,
but not nicely enough for me.

Fair enough. I would only do it on the rare occasions that there's a
lot of math between SLV's and integers, or SLV's and SLV's, to be
done, and it's all unsigned math. Otherwise things would get confusing
in a hurry.
It would get more confusing even quicker when using SLVs to do
math...even worse now when you say you would use it in the situation
where there is a 'lot of math...'. Mike's example pointed that out
quite readily, using it in a situation where you have even more
signals/constants of SLVs like you stated is asking for grief...or a
rewrite.

That line of code would need a conversion, no matter what libraries
you did or did not use.
No conversion is needed if the proper data type is used in the first
place (in
this case if either signed or unsigned was chosen for the constant
'b')...which is kind of the point of a lot of discussions on this
group.
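For instance, a variant of Mike's test with both constants declared unsigned from the start compiles cleanly, since numeric_std then resolves every "+" (a sketch; the entity name is illustrative):

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity unsigned_only is
end entity unsigned_only;

architecture sim of unsigned_only is
  constant a : unsigned := x"02";
  constant b : unsigned := x"03";  -- unsigned instead of std_logic_vector
begin
  one : process is
  begin
    assert a + a = 4;
    assert b + b = 6;
    assert a + b = 5;  -- now unambiguous: numeric_std "+"(unsigned, unsigned)
    wait;
  end process one;
end architecture sim;
```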

Kevin Jennings
 
Brad Smallridge wrote:

Why? When I began writing VHDL the concept of std_logic was a sort
of mystery to me. Since all (or at least most from Xilinx)
introductory tutorials used std_logic, I thought it was proper
to use it as well.
This style works fine for simple examples where everything
is the same type, but it does not scale well.

I think most beginners will use std_logic.

That is the default on the Xilinx text editor.

Then the arithmetic starts getting
difficult starting with signed and unsigned binaries, and then
multiplication, and so on. Also there is an issue with indexed
arrays that want a natural or integer index argument. Complicating
the issue, too, was the fact that variables did not show up in
ModelSim without some cryptic procedure that I have since forgotten,
so I would assign the variable of interest to a std_logic signal just
to make it visible.
I haven't seen any book or article address these issues, only a
few posts here and there.
Many posts, actually.
Here's how I do it:
http://mysite.verizon.net/miketreseler/

I find it so easy to
drop in a counter using std_logic_unsigned and not have to worry
about converting it at all.
Read your own posting above, for some of
the downside to that style.

-- Mike Treseler
 
On Apr 29, 7:04 am, KJ <kkjenni...@sbcglobal.net> wrote:
[snip]

Except for at the top level design ports you would be better off using
the proper data type internally (i.e. natural, integer, my_type, etc.)
for all signals and constants.

You'll be better off sticking with numeric_std.

KJ
I agree with KJ, if you're willing to write and use:

my_std_logic_vector <= "00000000" + my_natural_constant;
you should be equally happy with

subtype slv is std_logic_vector; -- abbreviation

my_slv <= slv(resize(my_unsigned + my_natural_constant, 8));

I also use integer for internal values/ports whenever possible, and
unsigned when not.
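A sketch of that integer-port style, in which a constrained natural range tells synthesis the bit width (the entity name and range are illustrative):

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity row_counter is
  port (
    clk : in  std_logic;
    row : out natural range 0 to 1023  -- synthesizes to a 10-bit value
  );
end entity row_counter;

architecture rtl of row_counter is
  signal row_i : natural range 0 to 1023 := 0;
begin
  process (clk) is
  begin
    if rising_edge(clk) then
      if row_i = 1023 then
        row_i <= 0;
      else
        row_i <= row_i + 1;  -- plain integer arithmetic; no conversions
      end if;
    end if;
  end process;

  row <= row_i;
end architecture rtl;
```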

Andy
 
