Subtle Verilog Scheduling Issue

  • Thread starter Stephen Williams

Stephen Williams

Here's one that's been nagging me. As a Verilog compiler writer
I've been making my high-horse statements, but the consequences
are starting to hit home, so I wonder how this issue is dealt
with in the real world.

Here's my degenerate example:

module main;

reg foo = 0;

reg bar;
always @(foo) bar = !foo;

initial begin
#1 $display("foo=%b, bar=%b", foo, bar);
$finish;
end

endmodule // main

Salient features are the initialization assignment, and the
combinational calculation of bar from foo. (Yes, I know I can
use continuous assignment, but I'm trying to make a complicated
point with a simple example.)

This makes two threads:

initial foo = 0;
always @(foo) bar = !foo;

Run with Icarus Verilog, the $display prints foo=0 and bar=x.
Why? Because there is a time-0 race here. The initialization of
foo=0 is executed before the "always...", which means the "@(foo)"
enters seeing a 0 value, and no change. The included statement is
therefore not executed.
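
(For comparison, the continuous-assignment form I alluded to
above sidesteps the race entirely: a continuous assignment is
evaluated at time 0 with whatever value foo then has, and is
re-evaluated on every later change. A rough sketch:

wire bar = !foo; // instead of "reg bar; always @(foo) bar = !foo;"

But bear with the always-block version for the sake of argument.)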

Now that behavior is perfectly correct, until some unsuspecting
hardware engineer (I'm a software guy!) uses always statements
to model combinational logic. A true model of combinational
logic would start at time-0 with an initial pass through the
always block, and then wait.

Or, the scheduler would somehow magically know that this thread
is to be started before the "initial" thread.

So I considered making a "quality of implementation" hack to
Icarus Verilog to have it start these so-called combinational
threads first, before any other threads, so that the @(...)
is guaranteed to be entered before anything else happens. I
would use a heuristic such as "always statements that contain
@(...) with no edges start first" to get the desired effect.
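
To sketch what the heuristic would key on (illustrative only;
the names below are made up):

always @(a or b) y = a & b; // no edge specifiers: start this thread first
always @(posedge clk) q <= d; // posedge/negedge present: start it normally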

(SystemVerilog adds the always_comb keyword to call out these
sorts of threads explicitly.) I can also use attributes to
tweak scheduling in a similar fashion.
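
As a sketch, and assuming the semantics that always_comb ends up
with (one automatic evaluation at time zero, then re-evaluation
whenever an input changes), the example would become:

module main;
logic foo = 0;
logic bar;
always_comb bar = !foo; // evaluated once at time 0, then on changes to foo
initial begin
#1 $display("foo=%b, bar=%b", foo, bar);
$finish;
end
endmodule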

So what do the Big Guys do about this sort of thing?
--
Steve Williams "The woods are lovely, dark and deep.
steve at icarus.com But I have promises to keep,
http://www.icarus.com and lines to code before I sleep,
http://www.picturel.com And lines to code before I sleep."
 
Stephen Williams wrote:
[...]
Now that behavior is perfectly correct, until some unsuspecting
hardware engineer (I'm a software guy!) uses always statements
to model combinational logic. A true model of combinational
logic would start at time-0 with an initial pass through the
always block, and then wait.
Right. The real problem is that the traditional way to
model synthesizable combinatorial logic with always blocks
has a flaw. Therefore it should be solved with proper (different)
modeling.

Or, the scheduler would somehow magically know that this thread
is to be started before the "initial" thread.

So I considered making a "quality of implementation" hack to
Icarus Verilog to have it start these so-called combinational
threads first, before any other threads, so that the @(...)
is guaranteed to be entered before anything else happens. I
would use a heuristic such as "always statements that contain
@(...) with no edges start first" to get the desired effect.
I think that would not be a good idea. In the declaration:

reg foo = 0;

it's clearly the intention that foo effectively never has an
initial x value. Apparently you implement this by still starting
at x, putting the initialization in an initial block, and running
such blocks first. This is fine. However, with the proposed heuristic
you would now expose this "hidden" x->0 event to some
always blocks - very confusing if you ask me.

Moreover, not all always blocks describe combinatorial logic :)
Suppose you have a high-level event-driven model:

always @(event)
<event processing>

then you don't want to trigger it at time zero. But how would
you tell the difference?
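
For instance, if the trigger happens to be an initialized
variable (a contrived sketch, names invented):

reg start = 0;
always @(start) $display("handling a request at %0t", $time);

With the proposed heuristic this block would already be waiting
at @(start) when the initialization runs, so the x->0 event from
the declaration fires one spurious "request" at time zero.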

(SystemVerilog adds the always_comb keyword to call out these
sorts of threads explicitly.) I can also use attributes to
tweak scheduling in a similar fashion.
So if you want to spend time on this, I believe it's
a better idea to implement always_comb. (Consider making
a pass at vpi callback scheduling first, though :))

So what do the Big Guys do about this sort of thing?
That I don't know :)

Regards, Jan

--
Jan Decaluwe - Resources bvba - http://jandecaluwe.com
Losbergenlaan 16, B-3010 Leuven, Belgium
Bored with EDA the way it is? Check this:
http://jandecaluwe.com/Tools/MyHDL/Overview.html
 
Jan Decaluwe <jan@jandecaluwe.com> writes:

Stephen Williams wrote:
[D E L]

Run with Icarus Verilog, the $display prints foo=0 and bar=x.
Why? Because there is a time-0 race here. The initialization of
foo=0 is executed before the "always...", which means the "@(foo)"
enters seeing a 0 value, and no change. The included statement is
therefore not executed.
Jan's comment is entirely valid! The problem is that the
assignment foo=0 is NOT executed by the simulator! It has been
GIVEN to the simulator! Therefore there is no change in the
value of foo, i.e. the
always @(foo) bar = !foo;
statement is never triggered.

Now that behavior is perfectly correct, until some unsuspecting
hardware engineer (I'm a software guy!) uses always statements
to model combinational logic. A true model of combinational
logic would start at time-0 with an initial pass through the
always block, and then wait.
I believe, however that you do have a point here. The
simulators schedule the threads as they encounter them in the
source file. Although it has been said that the order of
modules and threads should not be relied upon and that, in
the case of a race condition, the order of execution is not
defined a priori, i.e. it is non-deterministic, almost all
simulators do exactly this.

To illustrate, let me rewrite your example into:

module main;
reg [3:0] foo;
reg [3:0] bar;

initial begin
foo = 0;
end

always @(foo) bar = !foo;

initial begin
#1 $display("foo=%b, bar=%b", foo, bar);
$finish;
end

endmodule // main

Now foo does have the x value at the beginning but we still
get

foo=0000, bar=xxxx

If, in the source file, we change the order of

initial begin
foo = 0;
end

and

always @(foo) bar = !foo;

we'll get

foo=0000, bar=0001

I believe that this problem can only be solved by using "old"
and "new" values for all variables in a model, i.e. initially
the "old" value of foo is x, when an assignment is executed,
the old value still stays at x and the assigned value goes
into the "new" value of foo. At the end of a micro cycle,
"new" replaces "old" and this avoids race conditions. It is
important here to do this at the end of a micro cycle, i.e. we
allow the value of a variable to change several times within a
time step.

Just my opinion...
--
============
Jordan
http://www.cse.dmu.ac.uk/~jordan/
 
"Jordan Dimitrov" <jordan@strl.dmu.ac.uk> wrote in message
news:7pd6eidnn9.fsf@strl.dmu.ac.uk...

[...]
assignment foo=0 is NOT executed by the simulator! It has been
GIVEN to the simulator! Therefore there is no change in the
value of foo, i.e. the
always @(foo) bar = !foo;
statement is never triggered.
A common misconception, but entirely wrong. Initialisation of
variables in Verilog-2001 is simply syntactic sugar for an initial
statement containing a zero-delay blocking assignment; it most
certainly IS "executed" by the simulator, in a nondeterministic
sequence with respect to any other time-zero activity, just
as Stephen Williams indicated.
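
In other words, as far as Verilog-2001 is concerned,

reg foo = 0;

behaves exactly like

reg foo;
initial foo = 0; // a time-0 blocking assignment, racing with all other time-0 activity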

Whether this was a wise choice by the designers of Verilog-2001
is quite another matter. It was done, as far as I am aware,
with the express intention of getting an event at time 0 on
signals that are so initialised.

I believe, however that you do have a point here. The
simulators schedule the threads as they encounter them in the
source file.
Says who? This is a myth. It is of course possible that some simulators
may do this; equally, I can see many reasons why a simulator might not.
Any Verilog code that relies on any specific scheduling order is
*ipso facto* broken.

[...]
I believe that this problem can only be solved by using "old"
and "new" values for all variables in a model,
Nonblocking assignment essentially does precisely this, and gives you
just the effect you require when modelling edge-triggered
synchronous logic.
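
For example (the usual textbook illustration, not specific to
Stephen's code):

always @(posedge clk) begin
a <= b; // both right-hand sides are sampled before...
b <= a; // ...either update is applied, so a and b swap cleanly
end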

i.e. initially
the "old" value of foo is x, when an assignment is executed,
the old value still stays at x and the assigned value goes
into the "new" value of foo. At the end of a micro cycle,
"new" replaces "old" and this avoids race conditions. It is
important here to do this at the end of a micro cycle, i.e. we
allow the value of a variable to change several times within a
time step.
Yes. It's called "simulation cycles", or "delta cycles" in VHDL. Read
the Scheduling section of the Verilog LRM to understand it fully; please
don't guess. Verilog's scheduling model is a bit of a handful, but
its behaviour is pretty well defined by the LRM.

There *is* a simple solution, I suggest...
put the sensitivity list at the END of the always block...
not so pretty to code, but it works better for modelling
combinational logic.

initial foo = 0;

always begin
bar = !foo;
@foo;
end

Now let's see what happens:

A) foo initialised first, then the always block runs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
bar takes the value 1 at the end of time 0, as hoped.
The always block runs just once at time 0, gets the
right answer, and then stalls at @foo.

B) always block runs first, then foo initialised
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
First run of always gives you bar=1'bx because foo==1'bx.
The always block then stalls at @foo.
Then foo is initialised to 0 and there's an event on foo.
That releases the stalled always block, still at time 0;
so it runs again, and sets bar to 1 as required.

Spookily, this form of "always" block is exactly what
is provided by a VHDL process with a sensitivity list.

Hmmm. VHDL users NEVER, NEVER suffer this problem,
because "process with sensitivity list" and "signal
initialisation" were defined sensibly from the outset.
Not bad for a "$400 million mistake".

SystemVerilog changes the definition of variable
initialisation to match VHDL's, and (again as Stephen
pointed out) adds always_comb to provide a combinational
process with bottom-testing of the sensitivity list.
Just like VHDL :)

--
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * Perl * Tcl/Tk * Verification * Project Services

Doulos Ltd. Church Hatch, 22 Market Place, Ringwood, Hampshire, BH24 1AW, UK
Tel: +44 (0)1425 471223 mail: jonathan.bromley@doulos.com
Fax: +44 (0)1425 471573 Web: http://www.doulos.com

The contents of this message may contain personal views which
are not the views of Doulos Ltd., unless specifically stated.
 
Stephen Williams <spamtrap@icarus.com> wrote in message news:<bj3dl0$ig4$1@sun-news.laserlink.net>...
So what do the Big Guys do about this sort of thing?
It's been a few years since I've used VCS, but from what I remember,
normally it's non-deterministic and depends on the order of the code.
But sometime around v5.0 they added an option +alwaystrigger which
ensures that all always blocks get triggered at time 0 if anything in
their sensitivity lists changed value. For all I know that may be
VCS's default behavior now.

-cb
 
Jonathan Bromley wrote:
"Jordan Dimitrov" <jordan@strl.dmu.ac.uk> wrote in message
news:7pd6eidnn9.fsf@strl.dmu.ac.uk...

[...]
assignment foo=0 is NOT executed by the simulator! It has been
GIVEN to the simulator! Therefore there is no change in the
value of foo, i.e. the
always @(foo) bar = !foo;
statement is never triggered.

A common misconception, but entirely wrong. Initialisation of
variables in Verilog-2001 is simply syntactic sugar for an initial
statement containing a zero-delay blocking assignment; it most
certainly IS "executed" by the simulator, in a nondeterministic
sequence with respect to any other time-zero activity, just
as Stephen Williams indicated.

Whether this was a wise choice by the designers of Verilog-2001
is quite another matter. It was done, as far as I am aware,
with the express intention of getting an event at time 0 on
signals that are so initialised.
I have no evidence, but if I had to guess, it would be that
it was just a hack - nothing deliberate and certainly nothing wise
about it. What could have been the reason for using a declaration
to intentionally create an event, without being able to
guarantee that all blocks would see it?

Anyway, the SystemVerilog 3.1 Accellera standard clarifies the
issue and does the right thing. That is, at least I think that
there will be a consensus that it *is* the right thing,
though this (Verilog) world is full of (bad) surprises.

Stephen's suggested hack would therefore not be compliant
with SystemVerilog ...

Regards, Jan

--
Jan Decaluwe - Resources bvba - http://jandecaluwe.com
Losbergenlaan 16, B-3010 Leuven, Belgium
Bored with EDA the way it is? Check this:
http://jandecaluwe.com/Tools/MyHDL/Overview.html
 
Jan Decaluwe wrote:
Stephen Williams wrote:

[...]
Now that behavior is perfectly correct, until some unsuspecting
hardware engineer (I'm a software guy!) uses always statements
to model combinational logic. A true model of combinational
logic would start at time-0 with an initial pass through the
always block, and then wait.


Right. The real problem is that the traditional way to
model synthesizable combinatorial logic with always blocks
has a flaw. Therefore it should be solved with proper (different)
modeling.
I (the compiler writer) agree, but I (the guy who has to get
this Verilog through xst as well) see value in reflecting
practical reality here. *Any* ordering of time-0 thread starts
is valid under the LRM law, so maybe I can find a shuffle that
satisfies both me and I.


I think that would not be a good idea. In the declaration:

reg foo = 0;

it's clearly the intention that foo effectively never has an
initial x value.
Yes, but the Verilog LRM defined that as exactly the same
as a shorthand for "reg foo; initial foo = 0;". Preloading
foo with 0 is a valid example of that.

Moreover, not all always blocks describe combinatorial logic :)
True. It is harmless if I reorder them as well. If
shuffling their startup causes problems, you have a time-0
race anyhow.
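
The classic sort of thing I mean, sketched:

reg r;
initial r = 0;
initial r = 1;
// the value of r at the end of time 0 depends on which initial
// block happens to run last; the LRM allows either order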

(SystemVerilog adds the always_comb keyword to call out these
sorts of threads explicitly.) I can also use attributes to
tweak scheduling in a similar fashion.


So if you want to spend time on this, I believe it's
a better idea to implement always_comb. (Consider making
a pass at vpi callback scheduling first, though :))
always_comb is not supported by synthesizers. In particular,
we are using xst for this project. always_comb is a better
match for what the synthesizer infers, but it will be quite a
while before synthesizers support that sort of thing.

After 0.8 is released, I plan to rework (and indeed repair)
the scheduler so that VPI callbacks work correctly. You are
not the only one having issues, and yes I remember you.

--
Steve Williams "The woods are lovely, dark and deep.
steve at icarus.com But I have promises to keep,
http://www.icarus.com and lines to code before I sleep,
http://www.picturel.com And lines to code before I sleep."
 
Chris Briggs wrote:
Stephen Williams <spamtrap@icarus.com> wrote in message news:<bj3dl0$ig4$1@sun-news.laserlink.net>...

So what do the Big Guys do about this sort of thing?


It's been a few years since I've used VCS, but from what I remember,
normally it's non-deterministic and depends on the order of the code.
But sometime around v5.0 they added an option +alwaystrigger which
ensures that all always blocks get triggered at time 0 if anything in
their sensitivity lists changed value. For all I know that may be
VCS's default behavior now.
Can someone send me a snippet of documentation for this?

I also considered using an attribute to call out slightly
different trigger behavior, but knowing what people have been
trained to expect is interesting.
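
Something along these lines, perhaps (the attribute name below
is invented purely for illustration):

(* always_trigger *) always @(foo) bar = !foo;
// an annotated block would get one forced evaluation at time 0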
--
Steve Williams "The woods are lovely, dark and deep.
steve at icarus.com But I have promises to keep,
http://www.icarus.com and lines to code before I sleep,
http://www.picturel.com And lines to code before I sleep."
 
Stephen Williams wrote:

I (the compiler writer) agree, but I (the guy who has to get
this Verilog through xst as well) see value in reflecting
practical reality here. *Any* ordering of time-0 thread starts
is valid under the LRM law, so maybe I can find a shuffle that
satisfies both me and I.
Oh dear! look what Verilog does to both of you!

--
Jan Decaluwe - Resources bvba - http://jandecaluwe.com
Losbergenlaan 16, B-3010 Leuven, Belgium
Bored with EDA the way it is? Check this:
http://jandecaluwe.com/Tools/MyHDL/Overview.html
 
"Jonathan Bromley" <jonathan.bromley@doulos.com> writes:
I believe, however that you do have a point here. The
simulators schedule the threads as they encounter them in the
source file.

Says who? This is a myth. It is of course possible that some simulators
may do this; equally, I can see many reasons why a simulator might not.
Any Verilog code that relies on any specific scheduling order is
*ipso facto* broken.
I entirely agree with your last statement. Any code that
relies on the order of execution is wrong, but nevertheless we
can still see this behaviour in many simulators. The problem
here is that parallelism is implemented by non-determinism,
which is wrong.

============
Jordan
http://www.cse.dmu.ac.uk/~jordan/
 
Jan Decaluwe wrote:
Anyway, the SystemVerilog 3.1 Accellera standard clarifies the
issue and does the right thing. That is, at least I think that
there will be a consensus that it *is* the right thing,
though this (Verilog) world is full of (bad) surprises.
... for example: I just learned (honestly didn't realize this earlier)
that the SystemVerilog Accellera initiative is apparently an effort that
is diverging completely (including on the points being discussed)
from the IEEE standardization process.

What a mess this is.

--
Jan Decaluwe - Resources bvba - http://jandecaluwe.com
Losbergenlaan 16, B-3010 Leuven, Belgium
Bored with EDA the way it is? Check this:
http://jandecaluwe.com/Tools/MyHDL/Overview.html
 
Jan Decaluwe <jan@jandecaluwe.com> wrote in message news:<3F5604E2.4367BD95@jandecaluwe.com>...
Jonathan Bromley wrote:

Whether this was a wise choice by the designers of Verilog-2001
is quite another matter. It was done, as far as I am aware,
with the express intention of getting an event at time 0 on
signals that are so initialised.

I have no evidence, but if I had to guess, it would be that
it was just a hack - nothing deliberate and certainly nothing wise
about it. What could have been the reason for using a declaration
to intentionally create an event, without being able to
guarantee that all blocks would see it?
I had always assumed that this had been done deliberately. The
obvious reason would be to get an event and ensure that the starting
value got propagated and triggered combinational evaluation. Always
blocks might not work any better, but they wouldn't work any worse
either. And it would avoid the possibility of introducing new
problems with continuous assignments and gates.

However, I checked the IEEE 1364 archives and found the original
suggestion for this extension. The initializers were originally
proposed as a convenient shorthand for a declaration followed by
an initial block. It wasn't a matter of choosing a particular
approach to a general proposal of variable initialization. The
approach was assumed in the original suggestion. Of course, it
is possible that alternatives got considered, and the issues of
event propagation got discussed later.
 
