Bit width of expression from synthesis point of view

dmitriym

Guest
Hello, experts!


I'm performing research in order to define a list of "rules"
regarding determination of expression bit widths for synthesis. I've
already googled all related topics on expression bit width
determination. Most of them are about the simple rules defined
in the Verilog standard (e.g. the compiler makes the RHS as
wide as the LHS). But I'm interested in synthesis-related bit width
"rules" that could help me deal with issues like the following:
--- 8< ---
reg A, B;
reg [1:0] SUM;
....
SUM = A + B;
....
--- >8 ---
[this description is bad from a linter's point of view (the bit width
of the RHS is less than the bit width of the assignment destination),
but it is OK for a synthesis tool (an adder with carry-out can be
inferred)]

Or another example:
--- 8< ---
reg [1:0] A;
reg [2:0] B;
reg [4:0] M;
....
M = A * B + 1'b1;
....
--- >8 ---
[the product of 2-bit "A" and 3-bit "B" should have 5 bits (3+2), but
1'b1 is added and overflow can occur => so, if I define "M" as "reg
[5:0] M", will that also be correct? in other words, how do I define
the bit width "rule" for such a case?]

Maybe someone has come across a document or something else related
to this problem?

I'm sorry if this question isn't very clear... but this problem
really bothers me and I'd highly appreciate any help :)


Best regards,
-dmitriym
 
On Thu, 07 Jun 2007 06:58:19 -0700,
dmitriym <explorer@inbox.ru> wrote:

I'm performing research in order to define a list of "rules"
regarding determination of expression bit widths for synthesis.
No "research" needed - the rules are now pretty clear in
the 1364-2005 LRM.

reg A, B;
reg [1:0] SUM;
...
SUM = A + B;
A, B both widened to 2 bits by zero-padding before the
addition is performed. SUM captures the carry out.

reg [1:0] A;
reg [2:0] B;
reg [4:0] M;
...
M = A * B + 1'b1;
A, B, 1'b1 all widened to 5 bits before any calculation is
performed.

--- >8 ---
[the product of 2-bit "A" and 3-bit "B" should have 5 bits (3+2), but
1'b1 is added and overflow can occur => so, if I define "M" as "reg
[5:0] M", will that also be correct? in other words, how do I define
the bit width "rule" for such a case?]
All operands are widened to the same width as the widest - including
the left-hand side. Then, do the arithmetic in that width.
Then assign the result to the left-hand side,
dropping more-significant bits if necessary.
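This widen-then-truncate recipe can be sketched in a few lines (a
hypothetical model of the unsigned case only; the function names are
mine, not from any tool):

```python
def context_width(lhs_width, operand_widths):
    # Every context-determined operand AND the LHS participate;
    # the widest of them sets the width for the whole calculation.
    return max([lhs_width] + operand_widths)

def assign(lhs_width, value, operand_widths):
    w = context_width(lhs_width, operand_widths)
    # Do the arithmetic modulo 2**w, then drop more-significant
    # bits to fit the left-hand side.
    return (value % (1 << w)) % (1 << lhs_width)

# SUM = A + B with 1-bit A, B and 2-bit SUM: 1+1 = 2, carry captured.
print(assign(2, 1 + 1, [1, 1]))         # 2
# M = A * B + 1'b1 with A=3, B=7: context is 5 bits, 22 fits, M == 22.
print(assign(5, 3 * 7 + 1, [2, 3, 1]))  # 22
```

Note this ignores signedness and self-determined contexts, exactly the
two complications listed below.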

BUT... it isn't quite that simple. The main difficulties are:

(1) signed vs. unsigned. If ALL the right-hand side operands
are signed, then widening is performed by sign extension
rather than by zero-fill. If ANY right-hand side operand
is unsigned, then ALL widening and arithmetic is unsigned.
This behaviour is NOT affected by whether the left-hand
side is signed or unsigned.

(2) self-determined expressions. There are some places in
Verilog where an expression or operand is isolated - it is
in a "self-determined context" - and its width is not affected
by, and does not affect, the surrounding expression. All
detailed fully in IEEE Std.1364-2005.

There have been many discussions of this issue here.
A glance at Steven Sharp's posting history over the
past two years will produce some gems.
--
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services

Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
jonathan.bromley@MYCOMPANY.com
http://www.MYCOMPANY.com

The contents of this message may contain personal views which
are not the views of Doulos Ltd., unless specifically stated.
 
On Thu, 07 Jun 2007 06:58:19 -0700, dmitriym <explorer@inbox.ru>
wrote:

reg [1:0] A;
reg [2:0] B;
reg [4:0] M;
...
M = A * B + 1'b1;
...
in other words, how to define the
bit width "rule" for such case?
To add to Jonathan's reply - in other words, it normally just works if
you set your destination to be wide enough (note, in this case, M only
needs to be 5 bits, because 3*7+1 is 22, so you don't need the extra
overflow bit that you suggest).

*But*, sometimes it doesn't work:

reg [1:0] A;
reg [2:0] B;
reg [21:0] M;
....
M = 1'b1 << (A * B);

What's the answer here? This one's so good that I'm not going to tell
you - solution tomorrow (or possibly Monday, since I've got much
better things to do tomorrow). I've run the code above on 5 simulators
and 2 of them got it wrong (you know who you are!).

Moral:

1 - don't mess with Verilog - it'll bite you

2 - don't mix signed and unsigned operands, although that's not
relevant here

3 - don't construct complex expressions - break down all the
sub-expressions explicitly

Evan
 

Evan Lavelle wrote:

To add to Jonathan's reply - in other words, it normally just works if
you set your destination to be wide enough (note, in this case, M only
needs to be 5 bits, because 3*7+1 is 22, so you don't need the extra
overflow bit that you suggest).

*But*, sometimes it doesn't work:

reg [1:0] A;
reg [2:0] B;
reg [21:0] M;
...
M = 1'b1 << (A * B);

What's the answer here? This one's so good that I'm not going to tell
you - solution tomorrow (or possibly Monday, since I've got much
better things to do tomorrow). I've run the code above on 5 simulators
and 2 of them got it wrong (you know who you are!).
Hmm... I think I'm one who gets it wrong, but I like my answer better:p

wing % iverilog foo.vl
wing % vvp a.out
M=1000000000000000000000 (A=3, B=7)

module main;
  reg [1:0] A;
  reg [2:0] B;
  reg [21:0] M;

  initial begin
    A = 3;
    B = 7;
    M = 1'b1 << (A * B);
    $display("M=%b (A=%d, B=%d)", M, A, B);
  end
endmodule // main


--
Steve Williams            "The woods are lovely, dark and deep.
steve at icarus.com         But I have promises to keep,
http://www.icarus.com       and lines to code before I sleep,
http://www.picturel.com     And lines to code before I sleep."
 
On Fri, 15 Jun 2007 19:54:13 -0700, Stephen Williams
<spamtrap@icarus.com> wrote:

Hmm... I think I'm one who gets it wrong, but I like my answer better:p
I think I probably do as well.. :)

dmitriym - see point 2 of Jonathan's post. In this case, the right
operand of the shift is self-determined, which means that the
multiplication is only carried out in a 3-bit multiplier, and the top
2 bits of the result are lost. More complete testcase below, with
expected results.

Evan

========================================================

module test;
  initial
    main;

  task main;
    integer i, j;
    reg [1:0] A;
    reg [2:0] B;
    reg [21:0] M;
    begin
      for (i = 0; i <= 3; i = i+1) begin
        for (j = 0; j <= 7; j = j+1) begin
          A = i;
          B = j;
          M = 1'b1 << (A * B);
          $display("A: %d; B: %d; M: %b", A, B, M);
        end
      end
    end
  endtask
endmodule

========================================================

# A: 0; B: 0; M: 0000000000000000000001
# A: 0; B: 1; M: 0000000000000000000001
# A: 0; B: 2; M: 0000000000000000000001
# A: 0; B: 3; M: 0000000000000000000001
# A: 0; B: 4; M: 0000000000000000000001
# A: 0; B: 5; M: 0000000000000000000001
# A: 0; B: 6; M: 0000000000000000000001
# A: 0; B: 7; M: 0000000000000000000001
# A: 1; B: 0; M: 0000000000000000000001
# A: 1; B: 1; M: 0000000000000000000010
# A: 1; B: 2; M: 0000000000000000000100
# A: 1; B: 3; M: 0000000000000000001000
# A: 1; B: 4; M: 0000000000000000010000
# A: 1; B: 5; M: 0000000000000000100000
# A: 1; B: 6; M: 0000000000000001000000
# A: 1; B: 7; M: 0000000000000010000000
# A: 2; B: 0; M: 0000000000000000000001
# A: 2; B: 1; M: 0000000000000000000100
# A: 2; B: 2; M: 0000000000000000010000
# A: 2; B: 3; M: 0000000000000001000000
# A: 2; B: 4; M: 0000000000000000000001
# A: 2; B: 5; M: 0000000000000000000100
# A: 2; B: 6; M: 0000000000000000010000
# A: 2; B: 7; M: 0000000000000001000000
# A: 3; B: 0; M: 0000000000000000000001
# A: 3; B: 1; M: 0000000000000000001000
# A: 3; B: 2; M: 0000000000000001000000
# A: 3; B: 3; M: 0000000000000000000010
# A: 3; B: 4; M: 0000000000000000010000
# A: 3; B: 5; M: 0000000000000010000000
# A: 3; B: 6; M: 0000000000000000000100
# A: 3; B: 7; M: 0000000000000000100000
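The expected results above follow from one modulo operation; here is a
minimal Python model (my own sketch, not simulator output) of the
self-determined shift count:

```python
def verilog_shift(a, b, m_width=22):
    # The right operand of << is self-determined, so A * B is
    # evaluated at max(width(A), width(B)) = 3 bits and wraps mod 8.
    shift = (a * b) % 8
    # 1'b1 is widened to the 22-bit context before shifting.
    return (1 << shift) % (1 << m_width)

# A=3, B=7: 21 % 8 = 5, so only bit 5 is set - the last row above.
print(format(verilog_shift(3, 7), '022b'))
```

A naive reading ("21, so bit 21 is set") gives the answer the two
wrong simulators produced.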
 

So, should I fix it in Icarus Verilog, or should I just document it?
I really *do* like my answer better and I'm not above (or beneath)
extending the language in my implementation, and the answer that
Icarus Verilog gives is less surprising than the "right" answer;
so what to do? What a grim choice in this case.

Either way, this can stand to go into a bug report.

Evan Lavelle wrote:
On Fri, 15 Jun 2007 19:54:13 -0700, Stephen Williams
<spamtrap@icarus.com> wrote:

Hmm... I think I'm one who gets it wrong, but I like my answer better:p

I think I probably do as well.. :)

dmitriym - see point 2 of Jonathan's post. In this case, the right
operand of the shift is self-determined, which means that the
multiplication is only carried out in a 3-bit multiplier, and the top
2 bits of the result are lost. More complete testcase below, with
expected results.

Evan

[testcase snipped]

--
Steve Williams
 
On Mon, 18 Jun 2007 08:58:11 -0700, Stephen Williams
<spamtrap@icarus.com> wrote:

So, should I fix it in Icarus Verilog, or should I just document it?
I would vote for fixing it. Apart from anything else, there's always
the possibility that someone will simulate with Icarus, and then
synthesise with a tool that gives a different result, and then not
bother to simulate their synthesised netlist. I think they'd probably
end up blaming you, and not themselves, in that case... :)

Evan
 
Evan Lavelle <nospam@nospam.com> writes:

On Mon, 18 Jun 2007 08:58:11 -0700, Stephen Williams
<spamtrap@icarus.com> wrote:

So, should I fix it in Icarus Verilog, or should I just document it?

I would vote for fixing it. Apart from anything else, there's always
the possibility that someone will simulate with Icarus, and then
synthesise with a tool that gives a different result, and then not
bother to simulate their synthesised netlist. I think they'd probably
end up blaming you, and not themselves, in that case... :)

Evan
We had roughly the same situation in our in-house tool, and the
solution we decided upon was to give an error when the language
standard's behavior gave results inconsistent with what naive
users would expect. Of course, the users could turn the error off, but
it did prevent some level of surprises, i.e. cases where the Verilog
standard requires a result that while logical and self-consistent
doesn't meet the expectations of untrained (or careless) coders. A bit
more calculation is required (i.e. the simulation is slowed down) to
get that effect, but the aid in pointing out faults appeared to be
worth it.

Hope this helps,
-Chris

*****************************************************************************
Chris Clark Internet : compres@world.std.com
Compiler Resources, Inc. Web Site : http://world.std.com/~compres
23 Bailey Rd voice : (508) 435-5016
Berlin, MA 01503 USA fax : (978) 838-0263 (24 hours)
------------------------------------------------------------------------------
 
Thanks a lot, Jonathan, Evan, Stephen and Chris!!!


Your feedback is interesting and very informative for me.

I'm not sure that I've explained my problem properly.
Let me try to express it in a different way:

1) For example, I have the 'rule' regarding assignments in RTL
description: "RHS of the assignment should not be wider than
assignment destination"

2) I would like to implement an "automatic checker" of this 'rule'

3) During analysis of some expression, this "checker" should properly
calculate the bit width of the RHS in order to compare it with the bit
width of the assignment destination:
- there is no problem calculating the bit width of the destination
(it could be a simple signal or a concatenation)
- but how do I make a _proper_ decision about a bit width mismatch
when, for example, the RHS is an expression with overflow (assuming r,
a, b are 1-bit signals)?

r = a + b; // violation? (RHS bit width = max(a,b) + carry-out = 2
           //             > destination bit width = 1)
           // or no violation? (engineer wants to synthesize an
           //                   adder without carry-out)


Best regards,
dmitriym
 
On Tue, 24 Jul 2007 02:35:44 -0700, dmitriym <explorer@inbox.ru>
wrote:

1) For example, I have the 'rule' regarding assignments in RTL
description: "RHS of the assignment should not be wider than
assignment destination"

2) I would like to implement an "automatic checker" of this 'rule'
You mean a linter rule, presumably, and not a 'hard' rule?

3) During analysis of some expression, this "checker" should properly
calculate the bit width of the RHS in order to compare it with the bit
width of the assignment destination:
- there is no problem calculating the bit width of the destination
(it could be a simple signal or a concatenation)
- but how do I make a _proper_ decision about a bit width mismatch
when, for example, the RHS is an expression with overflow (assuming r,
a, b are 1-bit signals)?

r = a + b; // violation? (RHS bit width = max(a,b) + carry-out = 2
           //             > destination bit width = 1)
           // or no violation? (engineer wants to synthesize an
           //                   adder without carry-out)
The 'overflow' is not *your* problem; it's the programmer's problem.
The overflow isn't even 'real'. The programmer may be trying to use
the rules of the language to find somewhere to store the 'overflow',
or the programmer may want to throw away the 'overflow', or the
programmer may not know what he's doing.

In other words, you're probably on a hiding to nothing (simple
translation: you're wasting your time) by analysing 'r = a+b'. How can
you know what the programmer wanted?

I look at it this way. In 'The C++ Programming Language', Stroustrup
justifies his decision not to check return types when determining
overload resolution by stating "the reason for this design choice is
partly that strict bottom-up analysis is more comprehensible and
partly that it is not considered the compiler's job to decide which
precision the programmer might want for the addition". Very smart,
IMHO; compare and contrast.

Evan
 
On Jul 24, 13:28, Evan Lavelle <nos...@nospam.com> wrote:
[full quote snipped]
Evan,


Thanks for the very fast response!

You are totally right: it is the Verilog designer's business how to
handle an 'overflow'.
But I'm still not sure about the proper bit width definition: if I
don't handle the carry-out, my "checker" will trigger on
"2-bit-destination = 1-bit-a + 1-bit-b" (which is totally OK from the
language point of view). Please comment.


Kind regards,
dmitriym
 
You mean a linter rule, presumably, and not a 'hard' rule?
Sorry, Evan, I've just found this part of your response. Yes, I'm
thinking of something that could reasonably be compared with a
synthesis-oriented linter rule. The main target is to define bit
widths by the same "set of rules" that synthesis tools use (not by the
rules that are defined for compilers in the IEEE Verilog standard).
 
On Tue, 24 Jul 2007 03:41:07 -0700, dmitriym <explorer@inbox.ru>
wrote:

But I'm still not sure about the proper bit width definition: if I
don't handle the carry-out, my "checker" will trigger on
"2-bit-destination = 1-bit-a + 1-bit-b" (which is totally OK from the
language point of view). Please comment.
I'm not really sure what you're asking - are you (1) asking about the
Verilog rules, or (2) trying to get opinions on the heuristics you
might use to decide whether or not the programmer might have made a
mistake?

On the issue of the rules, I think you might be slightly confused when
you say:

r = a + b; // violation? (RHS bit width = max(a,b) + carry-out = 2
           //             > destination bit width = 1)
Assume you've parsed your expression into a tree. You need to start at
the leaves, and work your way up to the root, which is the assignment.
Then you adjust your result depending on the width of the destination,
and work back down the tree again, to the leaves.

for the assignment:

r2 = a1 + b1

on the way up, you derive a bit width of 1. You reach the top, set the
width to 2, and then go back down again. The RHS width is 1 on the way
up, and 2 on the way down. Which width better represents the
programmer's intent? Is that what you're asking?
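That up-then-down walk can be sketched as a toy width calculator (a
hypothetical structure of my own; leaves carry their declared widths):

```python
def up_width(node):
    # Bottom-up pass: a leaf contributes its declared width; an
    # operator node takes the max of its children (Verilog's rule
    # for +, -, *, etc. in a context-determined position).
    if isinstance(node, int):
        return node
    _op, left, right = node
    return max(up_width(left), up_width(right))

def context_width(lhs_width, rhs_tree):
    # Top-down pass: the assignment widens the whole RHS to the max
    # of its own width and the destination's width.
    return max(lhs_width, up_width(rhs_tree))

# r2 = a1 + b1: width 1 on the way up, 2 on the way down.
tree = ('+', 1, 1)
print(up_width(tree), context_width(2, tree))  # 1 2
```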

If you want opinions on the heuristics, then -

1 - you don't know what the real data in a and b is. Go back to your
original example - IIRC, you thought that the original adder might
have to be 5-bit. But, if you look at the real data in the adder
inputs, it becomes obvious that overflow is impossible, and a 4-bit
adder will suffice. How do you know about this as a tool developer?
You don't. It's just presumptuous to second-guess the engineer; it's
their circuit, they know what they're doing, you don't.

2 - r(n+1) = a(n) + b(n) is probably not an error, but may be.

3 - r(n) = a(n) + b(n) is almost certainly not an error. I've done a
lot of hardware and, IMHO, this is more common than case (2). If you
were thinking of flagging this, then what about "r(n) = a(n) - b(n)"?
Is that also suspicious?

4 - what about subtraction, multiplication, division, left shift,
right shift, etc.? All these operations might logically have a
different size destination from the source operands. Are you going to
check all of them? If not, why not? There's nothing special about
addition.

Evan
 
On Tue, 24 Jul 2007 03:41:07 -0700,
dmitriym <explorer@inbox.ru> wrote:


You are totally right: it is the Verilog designer's business how to
handle an 'overflow'.
But I'm still not sure about the proper bit width definition: if I
don't handle the carry-out, my "checker" will trigger on
"2-bit-destination = 1-bit-a + 1-bit-b" (which is totally OK from the
language point of view). Please comment.
Yes, it *is* totally OK. Given

reg a, b;
reg [1:0] sum;
...
sum = a + b;

Verilog's language rules say this....
The expression a+b is context-determined. The context has 2-bit
width (widest of sum, a, b) and so "a+b" is evaluated in 2-bit
context. Both a and b are widened to 2 bits before the addition.
In this case there can be no overflow out of the addition (but
there *might* be overflow, if one of a,b is 2 bits wide!).

So the expression is OK, there can never be overflow, simulation
and synthesis will both catch the carry-output bit.

But consider this....

reg a,b,c,d;
reg [1:0] sum;
...
sum = a+b+c+d;

This is STILL a 2-bit context, but of course there is now a
possibility of overflow.

In the cases I showed, it would be easy to determine the limit
case (values that give most likelihood of overflow) and give
a linter warning if overflow *might* occur. Synthesis tools
commonly do that. But I don't think it is possible to do
this analysis in all situations.
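For pure sums of unsigned operands, that limit-case check is cheap (a
sketch only; a real linter must handle far more cases):

```python
def may_overflow(dest_width, operand_widths):
    # Limit case: every operand holds all ones.
    worst = sum((1 << w) - 1 for w in operand_widths)
    return worst > (1 << dest_width) - 1

print(may_overflow(2, [1, 1]))        # sum = a + b:         False
print(may_overflow(2, [1, 1, 1, 1]))  # sum = a + b + c + d: True
```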

The arithmetic bit width rules are clearly described in the
Verilog-2005 standard.
--
Jonathan Bromley, Consultant

 
I'm not really sure what you're asking - are you (1) asking about the
Verilog rules, or (2) trying to get opinions on the heuristics you
might use to decide whether or not the programmer might have made a
mistake?
Actually, my question is much more related to the 2nd point. You see,
I need to make a decision about rule violations, and to make that
decision I should know what kind of description is correct and what
kind is not.

So, the target of my question is: "rules" to define the *synthesis*
bit width of an expression. Bit width definition for synthesis is
_not related_ to the Verilog LRM rules:

--- 8< ---
input [1:0] a;
output [7:0] r;
....
r = a; // simulation: OK (LRM: the bit width of the RHS is made the
       //                 same as the destination's)
       // synthesis: *NOT OK* (only the two rightmost bits of the
       //                      output will be mapped)
....
--- >8 ---

3 - r(n) = a(n) + b(n) is almost certainly not an error. I've done a
lot of hardware and, IMHO, this is more common than case (2). If you
were thinking of flagging this, then what about "r(n) = a(n) - b(n)"?
Is that also suspicious?
Thanks! This is much closer to things I'm asking for :)

So, if such a description is better, should the bit width "rule
checker" not consider the carry-out? Or maybe it is better to store a
flag about carry overflow (that would allow both of these cases to be
considered correct, and the programmer's intent stays his intent)?

4 - what about subtraction, multiplication, division, left shift,
right shift, etc.? All these operations might logically have a
different size destination from the source operands. Are you going to
check all of them? If not, why not? There's nothing special about
addition.
Of course. Addition was just an example (a correct decision for this
case will help to process all the other operations).


Kind regards,
dmitriym
 
On Jul 25, 4:25 am, dmitriym <explo...@inbox.ru> wrote:

If you are actually trying to determine the widths necessary to avoid
overflow for various operators, the widths will get rather extreme in
many cases. For adding two operands, you only need one extra bit.
For multiplying two operands, you need a result that is the sum of the
two operand widths. For left-shift, you need a width that is
something like shifted_operand_width+2**(shift_count_width-1).

As others have pointed out, you can't just assume that the user
intends to avoid overflow. For example, in a barrel-shifter, you
expect to lose bits off the end. Or perhaps the user has built a
rotator, that puts those bits back into the lower end so they aren't
lost. But you won't figure that out just from looking at the bit-
widths of the operations.
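These growth rules can be written down directly (a sketch for unsigned
operands only; the left-shift line takes the worst-case count of
2**count_width - 1, a slight refinement of the formula above):

```python
def add_width(wa, wb):
    # One extra bit absorbs the carry.
    return max(wa, wb) + 1

def mul_width(wa, wb):
    # A full product needs the sum of the operand widths.
    return wa + wb

def shl_width(w, count_width):
    # Worst-case shift count is 2**count_width - 1.
    return w + (1 << count_width) - 1

print(add_width(1, 1))   # 2: the SUM = A + B case
print(mul_width(2, 3))   # 5: the M = A * B case
print(shl_width(1, 3))   # 8: 1'b1 shifted by a 3-bit count
```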
 
