Help: Vector range problem in addition.

We've come across a problem that we can't explain. Our simulators have no
problem with this, but our formal verification tool found that our
synthesizer produces different code on case 2.3 in the examples below. All
other cases produce identical results in simulation and synthesis.

In the first example, everything comes out the way we expect: in all three
cases, B = A + 1. In the second example, our simulators produce B = A + 1
in all cases. If the original value of A is 14'h0, then B = 14'h1, with
B[2] = 1. However, our synthesizer produces something different in case 2.3
(I'm not sure exactly what, but it's clearly different than the logic
produced in case 2.2, and our formal verification tool says the 2.2 result
is okay, and the 2.3 result is not).

So, the question is, what's going on here? My mental model for the order of
events is this:

1. A gets right justified from [15:2] to [13:0].
2. 1 gets left padded with zeros to 14'h1, or
   16'h1 gets left truncated to 14'h1, or
   14'h1 gets used as is.
3. A gets added to 14'h1.
4. The result is assigned to B.

Clearly, our synthesizer is having problems with this sequence in case 2.3,
but oddly enough, not in case 2.2. I am confused.

Can anyone help?

// Example 1

reg [13:0] A,B;

A <= somevalue;

B <= A + 1; // 1.1 ok
B <= A + 16'h1; // 1.2 ok
B <= A + 14'h1; // 1.3 ok

// Example 2

reg [15:2] A,B;

A <= somevalue;

B <= A + 1; // 2.1 ok
B <= A + 16'h1; // 2.2 ok
B <= A + 14'h1; // 2.3 not ok
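(A quick sketch of what the LRM says should happen, modeled in Python for checkability; the 14/16/32-bit widths are inferred from the declarations and literals above, not from any tool's behavior:)

```python
# Python sketch of the LRM semantics of B <= A + <literal>.
# The add is performed at the width of the widest operand (14 bits for A,
# 16 or 32 bits for the wider literals); the result is then truncated to
# B's 14 bits on assignment. All three literal widths give the same B.
def b_next(a, literal_width):
    op_width = max(14, literal_width)        # operands widened before the add
    total = (a + 1) & ((1 << op_width) - 1)  # add at the operation width
    return total & ((1 << 14) - 1)           # truncate to B's width

a = 0x0                                       # A = 14'h0
print([b_next(a, w) for w in (32, 16, 14)])  # [1, 1, 1] -- all cases agree
```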

-- Mike --
 
"Mike" <mike@nospam.com> wrote in message
news:1i65ov3vcv4og.1a7v5ht93q5ao$.dlg@40tude.net...

[snip clear explanation]

> So, the question is, what's going on here? My mental model for the order of
> events is this:

> 1. A gets right justified from [15:2] to [13:0].
Not really; the [15:2] is a perfectly good vector as it stands;
its least significant bit has the binary weight 1 as usual, but it just
happens to be numbered with index [2].

> 2. 1 gets left padded with zeros to 14'h1, or
>    16'h1 gets left truncated to 14'h1, or
>    14'h1 gets used as is.
> 3. A gets added to 14'h1.
No; when doing an addition, Verilog will always generate a result that
is one bit wider than its wider operand.

> 4. The result is assigned to B.
Yes; and if the result has >14 bits, any unused upper bits are
thrown away as usual.

> Clearly, our synthesizer is having problems with this sequence in case 2.3,
> but oddly enough, not in case 2.2. I am confused.
Me too; I think it's just a bug in the synth tool.

Try building a module containing only your offending example.
That module is small enough that you can easily look at the
synthesis schematic or netlist, and work out what's going on.

FWIW I tried your code snippets with Synopsys DC and all seems
to be well.
--

Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * Perl * Tcl/Tk * Verification * Project Services

Doulos Ltd. Church Hatch, 22 Market Place, Ringwood, Hampshire, BH24 1AW, UK
Tel: +44 (0)1425 471223 mail: jonathan.bromley@doulos.com
Fax: +44 (0)1425 471573 Web: http://www.doulos.com

The contents of this message may contain personal views which
are not the views of Doulos Ltd., unless specifically stated.
 
Not so.
The bit-length of "a+b" is "max(len(a),len(b))".
See Table 29 in IEEE Std 1364-2001.

Shalom


Jonathan Bromley wrote:

> No; when doing an addition, Verilog will always generate a result that
> is one bit wider than its wider operand.
--
Shalom Bresticker Shalom.Bresticker@motorola.com
Design & Reuse Methodology Tel: +972 9 9522268
Motorola Semiconductor Israel, Ltd. Fax: +972 9 9522890
POB 2208, Herzlia 46120, ISRAEL Cell: +972 50 441478

 
"Shalom Bresticker" <Shalom.Bresticker@motorola.com> wrote
in message news:40337FF0.67456305@motorola.com...
> Jonathan Bromley wrote:
>
>> No; when doing an addition, Verilog will always generate a result that
>> is one bit wider than its wider operand.
>
> Not so.
> The bit-length of "a+b" is "max(len(a),len(b))".
> See Table 29 in IEEE Std 1364-2001.
Mea culpa; you're exactly right. O, the joys of
expression width in Verilog. Perhaps, just maybe,
I can be forgiven for the lapse, in view of...

begin: junk
reg [4:0] sum;
$display(4'd7 + 4'd10); // prints " 1"
sum = 4'd7 + 4'd10;
$display(sum); // prints "17"
end
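(The surprise above is context. As an argument to $display, 4'd7 + 4'd10 is self-determined at 4 bits, so 17 wraps to 1; in the assignment to the 5-bit sum, the operands are widened to 5 bits before the add, so the full 17 survives. A Python sketch of the two widths:)

```python
def add_at(a, b, width):
    # Verilog-style add: keep only `width` bits of the result
    return (a + b) & ((1 << width) - 1)

print(add_at(7, 10, 4))  # 1  -- self-determined: $display(4'd7 + 4'd10)
print(add_at(7, 10, 5))  # 17 -- context-determined: sum = 4'd7 + 4'd10
```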

Steven Sharp has fairly recently posted here a brilliant
exposition of how Verilog determines expression width.
I should have taken more notice at the time :)

--

Jonathan Bromley, Consultant, Doulos
 
Jonathan,

The context (e.g. the target width of the result of the numeric
expression) is relevant. So, while you may feel apologetic for a
mis-post, in many ways your posting was correct; see point 1 below.

I think that two points need to be made to clarify what is happening (in
terms of the standard, not necessarily in terms of the specific
synthesis tool, which appears to be non-conforming, probably because of
a bug).

Point 1:

Verilog arithmetic expressions are always only widened before
operations, never narrowed. In your simple example, a 5-bit target
accepts the result of a computation on two 4-bit constants. Both 4-bit
constants are widened to 5 bits (padded on the left with zeros if they
are unsigned, and, I presume, padded on the left with sign bits if they
are signed, though I'm not clear on that part of the standard, as I am
still working with the 1995 spec in my day-to-day life). The same thing
would be true if those values had been 4-bit-wide part-selects.

Now, once the operands have been widened, the table applies. It also
helps determine what the appropriate width is, but the table doesn't
include the context-dependent result width, which is often a subtle
factor.

The subtleties of this point are often lost, because in many contexts it
doesn't matter whether you widen or narrow: the value of the expression
after narrowing to the target width (which happens after the expression
is calculated) is the same. The places where it does matter tend to
involve right-shift and division operations. Simple boolean operations,
additions, subtractions, and multiplies don't tend to be affected by
when one narrows.
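(A Python sketch of the shift case; the operand values and widths below are illustrative assumptions, not from the thread:)

```python
def avg(a, b, width):
    # (a + b) >> 1, with the add performed at the given expression width
    mask = (1 << width) - 1
    return (((a + b) & mask) >> 1) & mask

# Averaging two 4-bit values: at 4 bits the carry out of the add is lost
# before the shift; at 5 bits it is kept, and the answers differ.
print(avg(12, 12, 4))  # 4  -- 24 wraps to 8, then >> 1
print(avg(12, 12, 5))  # 12 -- 24 >> 1, carry preserved
```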

And, again, one must mention that what I know of this was largely aided
by Steve Sharp, who taught me most (and probably all) that I know about
Verilog. He made the point about the widening of operands occurring
first to me some time back (and pointed out many of the obscure
contextual rules that affect what the appropriate width might be). It
has helped my understanding of Verilog immensely.

Point 2:

There is no shifting of values to make the bit vectors have zero lsbs.
It may be convenient, as a user, to think of them that way. However, the
lsb index of the expression has no "arithmetic" effect on how the bits
are treated in subsequent operations. The bits of the bit vector are
just bits, and it doesn't matter whether the source (or target) has a
non-zero lsb index or a zero one.
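(A sketch of this point in Python; the [15:2] range is taken from the original example. The declared range only matters when a bit-select index is mapped to a physical bit position; the lsb's arithmetic weight is 1 either way:)

```python
def bit_select(value, index, lsb):
    # Bit-select on a vector declared [msb:lsb]: index i names physical bit i - lsb
    return (value >> (index - lsb)) & 1

a = 0b1                          # the value 1, whatever the declared range
print(bit_select(a, 2, lsb=2))   # 1: on a [15:2] vector, A[2] is the lsb
print(bit_select(a, 2, lsb=0))   # 0: on a [13:0] vector, A[2] is two above the lsb
```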
 
Mike <mike@nospam.com> wrote in message news:<1i65ov3vcv4og.1a7v5ht93q5ao$.dlg@40tude.net>...
> So, the question is, what's going on here? My mental model for the order of
> events is this:
The official model doesn't exactly match your mental model. I will
describe it just for precision, though the differences aren't
important to the problem you are seeing.

> 1. A gets right justified from [15:2] to [13:0].
There is no actual operation here from a language viewpoint. A 14-bit
vector is a 14-bit vector. The bit numbers you use to designate the
most and least significant bits only matter if you are indexing into
the vector with bit numbers, i.e. using a bit select or part select.

However, it does appear that your synthesis tool has a bug related to
the bit numbers. Internally it probably does operate on bit numbers
normalized to [13:0]. If one piece of its code mistakenly used the
declared bit numbers where it should have used normalized ones, this
could have produced your bug.

> 2. 1 gets left padded with zeros to 14'h1, or
Actually, an unbased 1 is already 32 bits in size (and may be
larger in some implementations). So what is supposed to happen is
that A gets left padded with zeroes to 32 bits.

>    16'h1 gets left truncated to 14'h1, or
No, instead A gets left padded with zeroes to 16 bits. The final
result will get truncated back down to 14 bits before being assigned
to B, so there is no visible difference in this case. Your synthesis
tool probably does it the way you describe, since it is equivalent
and uses less circuitry. But if it made a visible difference, the tool
would have to do it the way it is actually defined in the language.

>    14'h1 gets used as is.
More properly, both 14'h1 and A get used as is.

> 3. A gets added to 14'h1.
No, the (possibly zero extended) A gets added to the constant,
which was already at the width of the computation.

> 4. The result is assigned to B.
The result is truncated down to the width of B, and then assigned.
Again, the extensions before the addition and the truncation of the
result may be functionally equivalent to doing the operation at the
narrower width, and your synthesis tool probably takes advantage of
that.

In most cases, the rules have been designed so that overflows get
avoided without users having to worry about them or having to know
what the rules are. However, in some complex cases it is useful
to know them.
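(For addition, the equivalence described above can be spot-checked in a few lines of Python; a sketch, with widths matching the example:)

```python
def trunc(x, width):
    return x & ((1 << width) - 1)

# The LRM order (widen A to 32 bits, add the unbased 1, truncate to 14)
# always matches the narrow shortcut (add at 14 bits), so a tool may
# legitimately build either circuit for the "+" cases in this thread.
for a in range(0, 1 << 14, 257):          # sample the 14-bit operand space
    assert trunc(trunc(a, 32) + 1, 14) == trunc(a + 1, 14)
print("equivalent for all sampled operands")
```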

> Clearly, our synthesizer is having problems with this sequence in case 2.3,
> but oddly enough, not in case 2.2. I am confused.
I think your synthesis tool has a bug in case 2.3. Nothing in my
pedantic description of the official model of the computations should
cause any difference between the results of any of the cases.
 
On 18 Feb 2004 11:53:46 -0500, Chris F Clark wrote:

> Point 2:
>
> There is no shifting of values to make the bit vectors have zero
> lsbs. It may be convenient, as a user, to think of them that way.
> However, the lsb index of the expression has no "arithmetic" effect
> on how the bits are treated in subsequent operations. The bits of the
> bit vector are just bits, and it doesn't matter whether the source
> (or target) has a non-zero lsb index or a zero one.
Indeed, the shifting I referred to is a mental simplification that makes it
easier (for me) to keep track of operand alignment.

-- Mike --
 
