Incorrect description of non-blocking assignments in the ModelSim manual

Jan Decaluwe
The other day I was glancing over the ModelSim User’s Manual, v6.5e
and I saw something strange.

They explain nondeterminism caused by blocking assignments with the
following example (p. 250):

1. always @(q) p = q;
2. always @(q) p2 = not q;
3. always @(p or p2) clk = p and p2;

with p=0, p2=1, and q going from 0->1.

Depending on the execution order, always block 3 may be triggered with
inconsistent values of p and p2 (both 1), or not. So you may have a
glitch on clk, or not. Hence this model is nondeterministic, and they
call this a race condition.
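
Spelled out (an illustrative trace, not text from the manual), two of the
orderings the simulator may legally choose when q rises are:

  One legal order (no glitch):
    block 1 runs: p  = 1
    block 2 runs: p2 = 0
    block 3 runs: clk = 1 & 0 = 0    (clk stays 0)

  Another legal order (the glitch):
    block 1 runs: p  = 1
    block 3 runs: clk = 1 & 1 = 1    (p2 is still 1, so clk glitches high)
    block 2 runs: p2 = 0
    block 3 runs: clk = 1 & 0 = 0    (clk falls back to 0)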

Then, on p. 252, they state (in parentheses):

"In the preceding example, changing all statements to non-blocking
assignments would not remove the race condition".

I'm not a Verilog event scheduling specialist :), but I think it
would. With non-blocking assignments, p and p2 would always trigger
block 3 with the same values, irrespective of the execution order of
the always blocks. Hence the model would be deterministic.
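
For concreteness, the all-nonblocking rewrite under discussion would look
something like this (an illustrative sketch; the manual's not/and are
written as the Verilog operators ~ and &):

  always @(q)       p   <= q;
  always @(q)       p2  <= ~q;
  always @(p or p2) clk <= p & p2;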

Or am I missing something about Verilog again ;-) ?

Jan
 
On Mon, 13 Sep 2010 14:04:00 -0700 (PDT), Jan Decaluwe wrote:

> [...]
> I'm not a Verilog event scheduling specialist :), but I think it
> would. With non-blocking assignments, p and p2 would always trigger
> block 3 with the same values, irrespective of the execution order of
> the always blocks. Hence the model would be deterministic.
I believe you are not quite right here, but the usual
caveats apply - I've been plenty wrong in the past,
and surely will be again.

Given

  always @q p <= q;
  always @q p2 <= ~q; [sic]

a change on q will now trip both @q controls, putting
the two assignments onto the Active queue. Those
two nonblocking assignments then get executed in a
nondeterministic order. The resulting updates of p
and p2 are reliably scheduled onto the NBA queue in
the same order as the assignments were executed, but
sadly that order was nondeterministic. So when we
get around to evaluating the NBA queue, the order of
update of p and p2 is nondeterministic too. So our
race is still with us, since there is no guarantee
of all the NBAs executing atomically before their
downstream sensitivities (their fanouts) trip.

The key reason is that the NBA queue is not executed
atomically. Instead, when the scheduler gets around
to dealing with NBA events, it promotes all those
events to the Active region and executes them there.
But each of the NBA events will in turn put other
events into the Active region, where they compete with
any as yet un-executed events that came from NBA. The
only guarantee you have is that the NBA updates will
happen in the same relative order as the assignments
that caused them were executed.
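
A minimal self-contained sketch of this scenario (an illustration, not
code from the posts; the module name is invented). Whether the $display
ever reports a momentary clk = 1 (the glitch) depends on which legal
ordering the simulator picks, which is exactly the race:

  module nba_race_demo;
    reg q, p, p2, clk;

    // the example again, with every assignment nonblocking
    always @(q)       p   <= q;
    always @(q)       p2  <= ~q;
    always @(p or p2) clk <= p & p2;

    // report every change on clk; a momentary 1 is the glitch
    always @(clk) $display("%0t: clk = %b", $time, clk);

    initial begin
      q = 0; p = 0; p2 = 1; clk = 0;   // preconditions from the example
      #1 q = 1;                        // the 0 -> 1 transition on q
      #1 $finish;
    end
  endmodule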

> Or am I missing something about Verilog again ;-) ?
You're just projecting your bourgeois academic
liberal European VHDL preconceptions on to Verilog.

Or something like that.
--
Jonathan Bromley
 
On Sep 13, 11:45 pm, Jonathan Bromley <s...@oxfordbromley.plus.com>
wrote:
> [...]
I see it now. Determinism is an illusion. I will seek
therapy for my obsession :)

Jan
 
On Sep 14, 3:24 am, Jan Decaluwe <j...@jandecaluwe.com> wrote:
> [...]
The following one will be good, right?

always
    begin
    @(q) p = q;
    @(q) p2 = ~q;
    end
 
On Sep 14, 8:32 am, michael6866 <michael6...@gmail.com> wrote:
> [...]
I didn't notice that q is not a repeating waveform but a one-time
pulse. Please ignore the above code. Here is another question:

If I change the code to the following:

(1)
always
    begin
    @(q) p = q;
    @(p) p2 = p;   // I changed to @(p)
    end
(2)
always
    begin
    @(q) p <= q;
    @(p) p2 <= p;  // I changed to @(p)
    end

In both cases, let q go from 0 to 1. Which one is correct? Or rather,
in which case will the assignment to p2 be triggered?
 
Please note that the original post was about concurrent always blocks,
and that I formulated it as a hypothesis instead of a puzzle.

Jan

 
On Sep 14, 2:40 pm, Jan Decaluwe <j...@jandecaluwe.com> wrote:
> Please note that the original post was about concurrent always blocks,
> and that I formulated it as a hypothesis instead of a puzzle.

Yes, I see. I just happened to remember those code fragments. Hope
someone will clarify those concepts.
 
On Sep 14, 1:47 pm, michael6866 <michael6...@gmail.com> wrote:

(1)
always
    begin
    @(q)        // Wait for a change on q
    p = q;      // Update p to the new value of q.
                // This was a blocking assign, so execution does not
                // proceed until p is updated.
    @(p)        // p has already changed, so @p will never trigger
                // unless some other block updates it later.  Stuck.

(2)
always
    begin
    @(q) p <= q;  // p will update when all other processes have stalled
    @(p)          // waiting for p to change, but p has not yet been
                  // updated, so that's OK - we just wait

And then other processes execute, eventually reaching event or
timing controls where they will stall.  Then, pending nonblocking
assignments take effect.  One of those will update p.  This will
trip the @(p) and we then move forwards...

    p2 <= p;      // and this then executes
    end
Your code fragment (1) is almost certainly wrong.
At the very least, it's confusing and has
behaviour that is not obvious from the sequential
code description.

Code fragment (2) has a straightforward meaning, and may
have some usefulness in certain types of zero-delay
modelling where you need to wait for one signal to update
before trying to modify another. It allows you to
model step-by-step sequential activity in zero time,
with a guarantee that each step-by-step update can
be detected by @() event controls. Of course, it
makes no sense for synthesis.
--
Jonathan Bromley
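
For what it's worth, a small sketch that runs the two fragments side by
side (an illustration only; the module and signal names are invented).
It shows fragment (1)'s output never being assigned, while fragment (2)'s
follows p:

  module frag_compare;
    reg q;
    reg p_blk, p2_blk;   // fragment (1), blocking
    reg p_nba, p2_nba;   // fragment (2), nonblocking

    // fragment (1): p_blk has already changed by the time the process
    // reaches @(p_blk), so that event control never fires - stuck
    always begin
      @(q) p_blk = q;
      @(p_blk) p2_blk = p_blk;
    end

    // fragment (2): the update to p_nba lands later, in the NBA region,
    // so @(p_nba) does fire and p2_nba follows
    always begin
      @(q) p_nba <= q;
      @(p_nba) p2_nba <= p_nba;
    end

    initial begin
      q = 0;
      #1 q = 1;
      #1 $display("p2_blk = %b   p2_nba = %b", p2_blk, p2_nba);
      // expected: p2_blk = x (never assigned), p2_nba = 1
      $finish;
    end
  endmodule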
 
On Sep 15, 3:38 am, Jonathan Bromley <s...@oxfordbromley.plus.com>
wrote:
> [...]
Gotcha, thank you!
 
