boolean operations on "integer" in VHDL'93

On Nov 9, 4:06 am, Jan Decaluwe <j...@jandecaluwe.com> wrote:
rickman wrote:
On Nov 5, 1:23 pm, Jan Decaluwe <j...@jandecaluwe.com> wrote:
I don't see what you are referring to here. It can't be Python/MyHDL's
actual choice, because that is the same as VHDL/Verilog for signed, and
probably any VHDL synthesis tool for integer.

I have no idea what you are saying with this.  What Python
does with integers has no bearing on what VHDL does.  So what is your
point about mentioning Python?

I am trying to convince people to take a good look at Python/MyHDL integers
and possibly consider doing it similarly in a future VHDL standard.

Agreed, you complained about the consequences of VHDL's strong
typing system. But that's what I intended to refer to also.

Again, I have no idea why you are bringing this up.  How does it
pertain to the discussion?

The ideas I'm proposing would solve many of the VHDL usability
issues that we are all struggling with, including the OP and
you as I understood it, when you announced that you'd rather
switch (to Verilog) than fight (with VHDL).
I don't see where it would solve the problems I have seen unless it
allows the use of integers to replace all data types... I tried using
Boolean for some control signals as this simplifies expressions in
conditionals. But in simulation Boolean signals are displayed as a
value, like an integer, which is a PITA. A std_logic signal is
displayed as a line with two levels and is very easy to see, rather
than having to read a value which can be off the display.


That's what I mean, yes: strong typing and abstract types without
an implied representation, such as VHDL's boolean, enum and
integer. I'm personally all for it in general, but not for the
case of integer. Sometimes practicality beats purity.

Ok, you have stated your preference, but you have not given any basis
for it.  In general a given type does not have a representation
implied, so that it can be implemented in the manner that suits the
application the best.  Although 2's complement is pretty universal, it
is not the only way to use integers.  Do you think it is worth
eliminating the use of integers for any other representation by
specifying one representation in the standard?  I guess I know the
answer to that one.  But you can see where this is a problem for some
usage that others may want, no?

No, I don't think there is a problem.

Imagine an integer type with an "accessible" 2's complement representation.
A synthesis tool only has to honour that when the representation is
actually "accessed" in the code, something which is easy for a tool to
detect. Otherwise, it could implement it with any optimized representation it
chooses. The latter case is equivalent to the current situation, with an
"inaccessible" representation. In other words, this would be a backwards
compatible enhancement.

If you need full control over representation, you'd have to do it like
today: use bit vectors with dedicated logic, and interpret the bit
vector values as numbers yourself.
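
(Editor's sketch of what "interpreting the bit vector values as numbers
yourself" looks like today with numeric_std; the entity name and ports
are illustrative only.)

library ieee;
use ieee.numeric_std.all;

entity mask_demo is
  port (n      : in  integer range -128 to 127;
        masked : out integer range -128 to 127);
end entity;

architecture rtl of mask_demo is
begin
  -- A bit-level AND is only defined on the vector view, so the integer
  -- must be converted to signed and back again.
  masked <= to_integer(to_signed(n, 8) and to_signed(16#0F#, 8));
end architecture;
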
I don't need control over the representation of integers. But my tool
vendor may need that. The synthesis tool is designed for the target.
If it works better to represent integers as signed magnitude then the
synthesis tool can do that without my involvement or knowledge. How
would you allow a synthesis tool to optimize for a given target
implementation if the representation is fixed? Requiring the tool
to work one way when the bit representation is accessed and a
different way when it is not sounds like a complexity that could cause
problems for users.

Maybe that is not really important. I know it is an issue in the
software world, but in FPGAs and ASICs I can't think of an example
where the number representation is anything other than 2's
complement. But I don't see it helping with any problems unless you
can replace all data types with integers.

Rick
 
On Nov 8, 12:53 pm, JimLewis <J...@SynthWorks.com> wrote:
The closest vhdl vector arithmetic comes to true integer arithmetic
accuracy is the fixed point package types, with zero fractional bits
declared. Fixed point operators automatically pad the result size to
account for accuracy in all cases, except one: a ufixed minus a ufixed
is still a ufixed (but actually bigger by one bit! go figure) rather
than an sfixed. With the almost universal need to resize sfixed/ufixed
results to fit in an assigned signal/variable, the conversion from
sfixed to ufixed could easily be handled in the resize function
anyway.

I think both have issues.  For example:
signal A_ufixed8, B_ufixed8, C_ufixed8, D_ufixed8 : ufixed(7 downto 0);
signal Y_ufixed11 : ufixed(10 downto 0);
Y_ufixed11 <= A_ufixed8 + B_ufixed8 + C_ufixed8 + D_ufixed8;

results in a different size than:
signal A_ufixed8, B_ufixed8, C_ufixed8, D_ufixed8 : ufixed(7 downto 0);
signal Y_ufixed10 : ufixed(9 downto 0);
Y_ufixed10 <= (A_ufixed8 + B_ufixed8) + (C_ufixed8 + D_ufixed8);
If you used y_ufixed10 <= resize(expr, y_ufixed10); it wouldn't make
any difference, no matter which form of the expression you used. I
find it very rare not to need a resize function prior to an
assignment with the fixed point packages, which is why overloading
the assignment operator to include the resize functionality makes a
lot of sense. Again, this would work very similarly to the way
integer expressions and assignments work, but without the size
limitations of integer.
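
(Editor's sketch of the resize-before-assignment idiom described above,
using the VHDL-2008 fixed_pkg; entity and signal names are assumed.)

library ieee;
use ieee.fixed_pkg.all;

entity sum4 is
  port (a, b, c, d : in  ufixed(7 downto 0);
        y          : out ufixed(9 downto 0));
end entity;

architecture rtl of sum4 is
begin
  -- resize() brings the full-precision sum back to y's bounds, so the
  -- grouping of the additions no longer affects the assignment.
  y <= resize(a + b + c + d, 9, 0);
end architecture;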

Or better yet, allow assignment operators to be overloaded so that
they can do the resizing automatically.

It would be an interesting proposal. If it gets approved, are you
interested in writing it?  
I don't have any compiler writing experience, so defining the syntax
to use for overloading an assignment operator, and limiting its use to
cases that are reasonable to implement would be beyond me. But I am
certainly willing to help where I can (defining what we want to be
able to do).

Can you formulate something that chooses between modulo math (like
unsigned/signed) or full precision arithmetic (like ufixed/sfixed)?
If you blow the doors open and allow anything, I would think that is
bad.  If you add more safety, such as enforcement of ranges for
ufixed/sfixed (so that more than size is enforced), then it would be
exciting.
Allowing blanket overloading of assignment operators would necessarily
"blow the doors off".

Perhaps restricting overloaded assignment operators to be defined in
the same declarative region as the type to which they assign would
help, especially in the case of the standard packages (users could not
re-overload the assignment operators outside the package).

An overloaded assignment operator for vectors, unlike a standard
operator, would have to be able to know what the target range is,
which is not currently possible for a function in vhdl. So it would
have to be handled more like a procedure with an in and out argument,
unless we developed some whole new syntax.

It would not be the assignment operator which would define modulo
(roll-over) math vs full-precision. That is controlled by the type,
and those operators that are defined for the type. The assignment
operator could define behavior like truncate, saturate, round, etc.
when assigning a larger vector into a smaller one.
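
(Truncate/saturate/round on a narrowing assignment is exactly what the
existing fixed_pkg resize() parameterizes; an editor's sketch, with
assumed names:)

library ieee;
use ieee.fixed_pkg.all;

entity narrow is
  port (x : in  sfixed(7 downto -8);
        y : out sfixed(3 downto -4));
end entity;

architecture rtl of narrow is
begin
  -- An overloaded assignment could, in principle, call this implicitly.
  y <= resize(x, 3, -4,
              overflow_style => fixed_saturate,  -- or fixed_wrap
              round_style    => fixed_round);    -- or fixed_truncate
end architecture;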

On the other hand, we would not need overloaded assignment operators,
and all of their potential door-blowing pitfalls, if we had an
arbitrarily sized integer type with fixed point capability and
bit-wise logical operations defined.

Andy
 
On Nov 4, 11:30 pm, whygee <y...@yg.yg> wrote:
Hi !

Brian Drummond wrote:
On Thu, 04 Nov 2010 12:28:31 +0100, whygee <y...@yg.yg> wrote:
Any hint ? Did I miss something ?
bit_vector should be less heavyweight than std_logic_vector.

sure but i want to use integers :-/

 > - Brian

Nicolas Matringe wrote :
 > That's strong typing for you...
it's not a problem of typing, i can create new functions,
however i have seen no explanation anywhere of these missing operations.
why do AND/OR/XOR work on bit(_vector) and std_(u)logic(vector)
and not on integer, as in any other language ?

 > Nicolas
yg
-- http://ygdes.com/ http://yasep.org
The reason is that this is an artifact carried over from Ada-83. In
Ada and VHDL, integers are always considered to be signed even if the
range is restricted to positive numbers. There is no universal way to
handle boolean operations on signed numbers, so it was decided to leave
implicit boolean operators out of Ada.

This was remedied in Ada-95 with the addition of modular types. They
are restricted to unsigned integers and *do* have implicit boolean
operators. As their name suggests, modular types also wrap around on
overflow and underflow without raising an exception. It would be
interesting to consider adding them in a future revision of VHDL for
those who want an efficient alternative to the array types.
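
(In Ada-95 such a type is declared as "type Byte is mod 2**8;". VHDL has
no scalar equivalent; the closest existing analogue is numeric_std's
unsigned, which wraps silently and has the logical operators. Editor's
sketch, names assumed:)

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity mod_counter is
  port (clk : in std_ulogic);
end entity;

architecture rtl of mod_counter is
  -- unsigned gives what a modular type would: "+" wraps modulo 2**8,
  -- and and/or/xor are defined.
  signal count : unsigned(7 downto 0) := (others => '0');
begin
  process (clk)
  begin
    if rising_edge(clk) then
      count <= (count + 1) and x"7F";  -- wraps, with the top bit masked
    end if;
  end process;
end architecture;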
 
Kevin Thibedeau wrote:
....

This was remedied in Ada-95 with the addition of modular types. They
are restricted to unsigned integers and *do* have implicit boolean
operators. As their name suggests, modular types also wrap around on
overflow and underflow without raising an exception. It would be
interesting to consider adding them in a future revision of VHDL for
those who want an efficient alternative to the array types.
Fascinating stuff - thanks for this info.

Restricting the bit view to the "unsigned" domain makes a lot
of sense (talking from my own language design experience).

During a quick survey I found that part of the rationale
behind modular types was "easier interaction with hardware".
Interesting, isn't it?

It seems that a lot of the groundwork that would turn VHDL
into the "easy" HDL is readily available.

Jan

--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
Python as a HDL: http://www.myhdl.org
VHDL development, the modern way: http://www.sigasi.com
Analog design automation: http://www.mephisto-da.com
World-class digital design: http://www.easics.com
 
On Nov 8, 12:57 pm, Andy <jonesa...@comcast.net> wrote:
The VHDL standard has already adopted an assumed two's complement
numeric representation for vectors (numeric_std, numeric_std_unsigned,
ufixed/sfixed, etc.). Why can we not adopt an assumed two's complement
representation for integers as well?!
Your statement is not exactly correct. "vectors" are not 2's
complement. numeric_std is 2's complement. SLV is still not
arithmetic at all. That is the difference. If you wanted to add a
2's complement integer type as a new type then that would be backwards
compatible by not changing the standard integer type. The new type
could be defined in something like integer_numeric_std or maybe added
to numeric_std.


The primary problem with vhdl vector based arithmetic (numeric_*) is
that it rolls over (not to signed, but what's the difference, an
unsigned rollover is still inaccurate). Take two unsigned, add them
together, and you can get a result that is less than either of the
operands.
What you call "inaccurate" is a result of limited range of the
representation. What would you have the implementation do when the
"overflow" occurs? I can see three choices: roll over treating the
limited range as modulo arithmetic (minimum logic), saturate at the
max and min of the range (more logic and still "inaccurate") or just
not perform the operation (also more logic and who knows what
"inaccurate" really means in this case). Both can/should throw an
error if the operation is actually doing arithmetic. But it is often
that a counter is intended to roll over. For those I have to use an
explicit modulo operator.
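
(Editor's sketch of those three choices for an 8-bit unsigned add with
numeric_std; names assumed:)

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity add_policies is
  port (a, b          : in  unsigned(7 downto 0);
        y_wrap, y_sat : out unsigned(7 downto 0));
end entity;

architecture rtl of add_policies is
  signal full : unsigned(8 downto 0);  -- one guard bit holds the carry
begin
  full   <= ('0' & a) + ('0' & b);
  y_wrap <= full(7 downto 0);                   -- roll over: drop the carry
  y_sat  <= (others => '1') when full(8) = '1'  -- saturate at the maximum
            else full(7 downto 0);
  -- "Not performing the operation" is the simulation-only choice:
  assert full(8) = '0' report "unsigned add overflow" severity error;
end architecture;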


....

Or better yet, allow assignment operators to be overloaded so that
they can do the resizing automatically.

Hey, I can dream, can't I?
I believe overloading operators has been suggested for the next go
around on the VHDL spec. I have no idea if this would create any
problems.

Rick
 
Jan Decaluwe <jan@jandecaluwe.com> writes:

....

Fascinating stuff - thanks for this info.

Restricting the bit view to the "unsigned" domain makes a lot
of sense (talking from my own language design experience).

During a quick survey I found that part of the rationale
behind modular types was "easier interaction with hardware".
Interesting, isn't it?

It seems that a lot of the groundwork that would turn VHDL
into the "easy" HDL is readily available.
Maybe we should just quit VHDL and start synthesising Ada directly
(with only half-a-smiley ;)

Maybe call it SystemAda :)

Cheers,
Martin

--
martin.j.thompson@trw.com
TRW Conekt - Consultancy in Engineering, Knowledge and Technology
http://www.conekt.co.uk/capabilities/39-electronic-hardware
 
On Fri, 12 Nov 2010 09:13:38 +0000, Martin Thompson <martin.j.thompson@trw.com>
wrote:

....

Maybe we should just quit VHDL and start synthesising Ada directly
(with only half-a-smiley ;)

Maybe call it SystemAda :)
I believe that would make a lot more sense as a starting point than C or C++.

I am playing with Ada, and finding it very nice indeed.

Ada-95 and now 2005 add a rational form of object oriented programming,
maintaining good type safety even in a complex class hierarchy. If VHDL is to
acquire classes, I hope they will be along the same lines.

As well as adding the modular types, Ada has fixed point types (you specify
range and precision) which could make DSP a breeze.
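
(For comparison, VHDL-2008's fixed_pkg already puts range and precision
in the declaration; editor's sketch, names assumed:)

library ieee;
use ieee.fixed_pkg.all;

entity coeff_decl is
end entity;

architecture sim of coeff_decl is
  -- sfixed(3 downto -12): 4 integer bits and 12 fractional bits,
  -- i.e. range [-8.0, 8.0) with a precision of 2**(-12).
  signal coeff : sfixed(3 downto -12) := to_sfixed(0.5, 3, -12);
begin
end architecture;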

I suspect the original VHDL committee chose about the right subset of Ada as a
starting point in the 1980s, but now a larger subset could be useful - fixed
point types for example.

Since then there has been parallel (usually divergent) evolution, but VHDL-2008
got conditional- (and case?) -expressions before Ada (they are due in Ada-2012).

But if we're going down the SystemAda route, we need a synthesisable subset - no
heap allocation, limits on recursion, etc. It'll look a lot like the SPARK
subset - with which, using annotations in the form of Ada comments, your design
can be proved formally correct.

There would seem to be a lot of commonality between restructuring logic for
formal proof, and restructuring it for synthesis - and certainly there are a lot
of similar limitations. If the prover fails to terminate, that probably implies
infinite hardware, etc...

So, I'm going with SystemSPARK, and using Ada for my testbenches!

- Brian
 
Brian Drummond wrote:
....

But if we're going down the SystemAda route, we need a synthesisable subset
To get started, we would need to define/implement an RTL semantics subset. The
language has built-in concurrency support, so no problem there. An RTL-style
signal would probably be easy. The remaining problem is a model for sensitivity,
and then a simulation engine could be written ...

....

--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
Python as a HDL: http://www.myhdl.org
VHDL development, the modern way: http://www.sigasi.com
Analog design automation: http://www.mephisto-da.com
World-class digital design: http://www.easics.com
 
On Nov 11, 3:33 pm, rickman <gnu...@gmail.com> wrote:
On Nov 8, 12:57 pm, Andy <jonesa...@comcast.net> wrote:

The VHDL standard has already adopted an assumed two's complement
numeric representation for vectors (numeric_std, numeric_std_unsigned,
ufixed/sfixed, etc.). Why can we not adopt an assumed two's complement
representation for integers as well?!

Your statement is not exactly correct. "vectors" are not 2's
complement. numeric_std is 2's complement. SLV is still not
arithmetic at all. That is the difference.
I beg to differ. The new numeric_std_unsigned package assigns an
arithmetic interpretation to std_logic_vector, just like
std_logic_arith did.

Also, by using the conversion unsigned(my_slv) you are already
implying that the unconverted slv has the same bit representation as
unsigned, which is arithmetic. The tool is not allowed to convert/move
bits around in that conversion, so the new interpretation is in effect
placed on the old slv as well.
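
(Editor's sketch of that dual view; unsigned(my_slv) is a type conversion
between closely related array types, and the commented line shows the
VHDL-2008 numeric_std_unsigned form. Names assumed:)

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity slv_view is
  port (my_slv : in  std_logic_vector(7 downto 0);
        plus10 : out std_logic_vector(7 downto 0));
end entity;

architecture rtl of slv_view is
begin
  -- Same bits, with a numeric interpretation added by the conversion;
  -- no bits are moved or reformed.
  plus10 <= std_logic_vector(unsigned(my_slv) + 10);
  -- With "use ieee.numeric_std_unsigned.all;" the conversions become
  -- unnecessary:  plus10 <= my_slv + 10;
end architecture;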

The primary problem with vhdl vector based arithmetic (numeric_*) is
that it rolls over (not to signed, but what's the difference, an
unsigned rollover is still inaccurate). Take two unsigned, add them
together, and you can get a result that is less than either of the
operands.

What you call "inaccurate" is a result of limited range of the
representation. What would you have the implementation do when the
"overflow" occurs? I can see three choices: roll over treating the
limited range as modulo arithmetic (minimum logic), saturate at the
max and min of the range (more logic and still "inaccurate") or just
not perform the operation (also more logic and who knows what
"inaccurate" really means in this case). Both can/should throw an
error if the operation is actually doing arithmetic. But it is often
the case that a counter is intended to roll over. For those I have to
use an explicit modulo operator.
If I tell the simulator or synthesis tool to add one to a value, the
new value better be larger than the old value, by exactly one, or it
should die trying (with an informative error message in the case of a
simulator). It should not silently assume that something else will be
good enough. Same goes for subtraction. If I need it to do something
besides adding or subtracting, then I will tell it what I want it to
do (either by resize() or mod, etc.)

Integer and sfixed/ufixed do this correctly, with the exception of
subtracting ufixed values.

Or better yet, allow assignment operators to be overloaded so that
they can do the resizing automatically.

Hey, I can dream, can't I?

I believe overloading operators has been suggested for the next go
around on the VHDL spec. I have no idea if this would create any
problems.

Rick
I think limiting overloaded assignment operators to re-sizing the same
type (between the expression and the target) would be pretty safe, but
it would not handle the signed/unsigned issue, since those are
different base types in VHDL. Because integer and natural share the
same base type, the same mechanism could also handle signed/unsigned
conversions there (with bounds checking).
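
(Editor's sketch of the integer/natural point: they share a base type, so
assignment between them is already legal, with the subtype range checked
at run time.)

entity subtype_demo is
end entity;

architecture sim of subtype_demo is
  signal i : integer range -16 to 15 := 3;
  signal n : natural range 0 to 15;
begin
  -- Legal without any conversion; simulation raises a range error
  -- only if i is negative when the assignment takes effect.
  n <= i;
end architecture;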

So maybe we limit the actions of overloaded assignment operators to
converting "closely related" types and resizing, to keep this relatively
safe. If it works out, maybe we can extend overloaded assignments to
other areas, but I'd rather take baby steps and not break anything,
than take too big a step and cause bigger, unforeseen problems.

Andy
 
On Nov 12, 9:38 am, Andy <jonesa...@comcast.net> wrote:
....

I beg to differ. The new numeric_std_unsigned package assigns an
arithmetic interpretation to std_logic_vector, just like
std_logic_arith did.

Also, by using the conversion unsigned(my_slv) you are already
implying that the unconverted slv has the same bit representation as
unsigned, which is arithmetic. The tool is not allowed to convert/move
bits around in that conversion, so the new interpretation is in effect
placed on the old slv as well.
Yes, if the designer wants to consider SLV as a 2's complement vector,
there is nothing in the language to prevent that. But there is
nothing in the language to promote it either. Anyone is free to
create their own libraries or to use standard ones to add capabilities
to the language. That is what is going on in both
numeric_std_unsigned and in numeric_std. The utility of these data
types is being extended as the designer wishes. It is not a default
part of the language.


....

If I tell the simulator or synthesis tool to add one to a value, the
new value better be larger than the old value, by exactly one, or it
should die trying (with an informative error message in the case of a
simulator). It should not silently assume that something else will be
good enough. Same goes for subtraction. If I need it to do something
besides adding or subtracting, then I will tell it what I want it to
do (either by resize() or mod, etc.)

Integer and sfixed/ufixed do this correctly, with the exception of
subtracting ufixed values.
How do you expect a synthesis tool to handle this requirement? If you
write VHDL code to increment a counter, what do you expect the
hardware to do when the counter reaches the max value? Are you saying
you expect the synthesis tool to throw an error if the designer does
not indicate explicitly what will happen with an IF statement or a MOD
operator?

What does the synthesis tool do with integers in the case of a counter
that can overflow?

signal a : integer range 0 to 15;
-- clocked process wrapper...
a <= a + 1;

What hardware should this produce? Or how should I write this for a 4
bit counter? How exactly should the synthesis tool "die trying"?


....

I think limiting overloaded assignment operators to re-sizing the same
type (between the expression and the target) would be pretty safe, but
it would not handle the signed/unsigned issue, since those are
different base types in VHDL. Because integer and natural share the
same base type, the same mechanism could also handle signed/unsigned
conversions there (with bounds checking).

So maybe we limit the actions of overloaded assignment operators to
converting "closely related" types and resizing, to keep this relatively
safe. If it works out, maybe we can extend overloaded assignments to
other areas, but I'd rather take baby steps and not break anything,
than take too big a step and cause bigger, unforeseen problems.
I have no idea what is safe and what is not. But us talking about it
won't solve anything.

Rick
 
On Nov 12, 9:58 am, rickman <gnu...@gmail.com> wrote:
Yes, if the designer wants to consider SLV as a 2's complement vector,
there is nothing in the language to prevent that.  But there is
nothing in the language to promote it either.  Anyone is free to
create their own libraries or to use standard ones to add capabilities
to the language.  That is what is going on in both
numeric_std_unsigned and in numeric_std.  The utility of these data
types is being extended as the designer wishes.  It is not a default
part of the language.
The standard ("default part of the") language now includes the
packages. And by allowing the unsigned(my_slv) conversion, which does
not include any reforming of the elements within my_slv, and my_slv 10 (via numeric_std_unsigned), the language is ensuring that the
numeric interpretation is extended to slv. Whether you choose to
access that interpretation or not, the representation must be
consistent with the numeric interpretation.

How do you expect a synthesis tool to handle this requirement?  If you
write VHDL code to increment a counter, what do you expect the
hardware to do when the counter reaches the max value?  Are you saying
you expect the synthesis tool to throw an error if the designer does
not indicate explicitly what will happen with an IF statement or a MOD
operator?

What does the synthesis tool do with integers in the case of a counter
that can overflow?

signal a : integer range 0 to 15;
-- clocked process wrapper...
  a <= a + 1;

What hardware should this produce?  Or how should I write this for a 4
bit counter?  How exactly should the synthesis tool "die trying"?
Your description did not legally tell the synthesis tool what to do
when a is 15, since storing the result of 15 + 1 in a would be illegal
(and in fact impossible in a four bit storage register). Therefore,
the synthesis tool is free to do anything it wants in that case.

So, one way to look at what really happens in synthesis is this:

signal a : integer range 0 to 15;
-- clocked process wrapper...
if a + 1 > 15 then   -- implied by the range of a
  a <= "don't care"; -- because assigning 16 to a is illegal anyway
else
  a <= a + 1;
end if;

Lucky for us, it turns out that the most efficient implementation of
the above is to simply truncate the result of 15 + 1, which is a roll
over, or modulo counter.

I would prefer that it at least give me a warning that it was doing
this, but in reality, it could do anything it wants, because I did not
tell it what to do.

To keep simulation and synthesis on the same page, a better way to
write it in the first place would be:

signal a : integer range 0 to 15;
-- clocked process wrapper...
a <= (a + 1) mod 16;

This way, you are explicitly, legally telling the synthesis tool (and
the simulator) what you want to do when a is 15.
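
(The full clocked-process version of that counter, with the wrapper the
snippets above elide; editor's sketch, names assumed:)

library ieee;
use ieee.std_logic_1164.all;

entity counter4 is
  port (clk : in std_ulogic);
end entity;

architecture rtl of counter4 is
  signal a : integer range 0 to 15 := 0;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      -- Defined behavior at a = 15, in both simulation and synthesis.
      a <= (a + 1) mod 16;
    end if;
  end process;
end architecture;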

Andy
 
On Fri, 12 Nov 2010 06:38:15 -0800 (PST), Andy <jonesandy@comcast.net> wrote:

On Nov 11, 3:33 pm, rickman <gnu...@gmail.com> wrote:
On Nov 8, 12:57 pm, Andy <jonesa...@comcast.net> wrote:

So maybe we limit the actions of overloaded assignment operators to
converting "closely related" types and resizing, to keep this relatively
safe. If it works out, maybe we can extend overloaded assignments to
other areas, but I'd rather take baby steps and not break anything,
than take too big a step and cause bigger, unforeseen problems.
Before you go too far with overloaded assignment operators,
you have to face another issue:

assignment is not an operator!

And changing that in VHDL would probably be difficult (for a mild
understatement). For better or worse, it's a 50-year-old design decision,
not just in VHDL, but back through Ada, and right back to Algol-60.

I don't see it happening.

- Brian
 
On Nov 12, 6:11 pm, Brian Drummond <brian_drumm...@btconnect.com>
wrote:
Before you go too far with overloaded assignment operators,
you have to face another issue:

assignment is not an operator!

And changing that in VHDL would probably be difficult (for a mild
understatement). For better or worse, it's a 50-year-old design decision,
not just in VHDL, but back through Ada, and right back to Algol-60.

I don't see it happening.
Rather than overloading the assignment operator, my suggestion to the
VHDL group (*1) (two years ago) is to add a method to allow a function
to get access to the attributes associated with whatever the result of
the function will be assigned to. This adds to the language without
breaking anything existing and accomplishes the same goal.

Kevin Jennings

(*1) bug #240 https://bugzilla.mentor.com/show_bug.cgi?id=240
 
On Nov 12, 2:46 pm, Andy <jonesa...@comcast.net> wrote:
....

To keep simulation and synthesis on the same page, a better way to
write it in the first place would be:

signal a : integer range 0 to 15;
-- clocked process wrapper...
a <= (a + 1) mod 16;

This way, you are explicitly, legally telling the synthesis tool (and
the simulator) what you want to do when a is 15.

Andy
As it turns out, that is exactly what I do because the simulation
doesn't work without it. It also solves your synthesis problem.

There are any number of things that a synthesis tool assumes if you
don't tell it. They think they are doing you a favor. But then these
are the same sorts of things that people complain about when using
VHDL. That is what VHDL is all about, telling the tools exactly what
you want rather than using default.

Rick
 
Brian Drummond wrote:
....

Before you go too far with overloaded assignment operators,
you have to face another issue:

assignment is not an operator!

And changing that in VHDL would probably be difficult (for a mild
understatement). For better or worse, it's a 50-year-old design decision,
not just in VHDL, but back through Ada, and right back to Algol-60.

I don't see it happening.
Moreover, these proposals invariably seem to be inspired
by types that force us to deal with the representation to
get arithmetic right. For some reason, we seem to think
that we are better at that than synthesis tools. The
opposite is true of course.

For integer arithmetic, a more usable integer type would
address the issues in a much better way. I don't know
about fixed point, but probably the Ada example can be
enlightening.

Jan

--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
Python as a HDL: http://www.myhdl.org
VHDL development, the modern way: http://www.sigasi.com
Analog design automation: http://www.mephisto-da.com
World-class digital design: http://www.easics.com
 
Martin Thompson <martin.j.thompson@TRW.com> sent on November 12th, 2010:

"[..] Maybe we should just quit VHDL and start synthesising Ada directly [..]"

People have claimed to have synthesized Ada to hardware.

"Maybe call it SystemAda :)"

There was an article by Zainalabedin Navabi and others in "Ada
Letters" entitled "System level hardware design and simulation with
SystemAda" in 2009. It seems to me that they were trying to get an
easy publication instead of doing real work. They and the editor
unwittingly showed in the article that they did not know what they
were discussing.

Yours sincerely,
Paul Colin Gloster
 
