event_expression list

romi

I realize that the code below does not make much sense. However, I
was wondering: when a=0 and b=1, should this code hang a simulator?
Because of the @*, the variable 'c' is in the event_expression list.
Does the fact that c transitions from 0 to 1 in the block mean that
the block will get triggered continuously?

always @* begin
  c = a;
  c = c + b;
end

I have the answer from the simulator I use, but would like an answer
based on another LRM interpretation.

Further, does anyone know why the LRM decided to include temporary
variables in the event_expression list for @*? Adding the tmp1 and
tmp2 variables in the code below (from the LRM) to the expression
list doesn't seem to add much value.

always @* begin
  tmp1 = a & b;
  tmp2 = c & d;
  y = tmp1 | tmp2;
end


Thanks.
 
On Wed, 22 Aug 2007 17:26:38 -0000, romi <weberrm@gmail.com> wrote:

I realize that the code below does not make much sense. However, I
was wondering: when a=0 and b=1, should this code hang a simulator?
Because of the @*, the variable 'c' is in the event_expression list.
Does the fact that c transitions from 0 to 1 in the block mean that
the block will get triggered continuously?

always @* begin
  c = a;
  c = c + b;
end
Whilst the code is executing, it's not waiting at the @*.
Since the assignments to c are blocking, they cannot possibly
take effect at any time other than when the code is in the
act of executing. So I would say, no, it should not loop,
because c cannot change during the time when the code is
waiting on the @*. If you had nonblocking assignments to
c, the story could of course be very different, since those
assignments would take effect at a time when your code is
sitting waiting at the @*.
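For contrast, here is a hypothetical nonblocking version of the same block (a sketch, not code from the thread). Since nonblocking updates to c take effect only after the block has suspended back at the @*, each update re-triggers the block, and with b=1 the value of c changes on every pass, so this variant really can loop forever:

```verilog
always @* begin
  c <= a;       // NBA: update scheduled, applied after the block suspends
  c <= c + b;   // last write wins; the RHS reads the OLD value of c
end
// Each pass schedules c <= old_c + b. With b = 1, c changes every pass,
// the change is seen while the block is parked at @*, and it re-fires.
```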

There is a much trickier question about whether another,
distinct @(c) should trip if c makes a zero-time
1->0->1 transition because of your code. I believe the
LRM washes its hands of that case.

Further, does anyone know why the LRM decided to include temporary
variables in the event_expression list for @*? Adding the tmp1 and
tmp2 variables in the code below (from the LRM) to the expression
list doesn't seem to add much value.
It doesn't subtract any, either; and it spares the simulator the ugly
task of analyzing the way those signals get updated. @* rather simply
seeks signals that appear in any expression, and adds them to the
event_expression list.

Of course @* is thereby booby-trapped, since it doesn't detect
changes on signals that are evaluated in functions as a side-effect.
The always_comb construct from SystemVerilog plugs that loophole,
and adds other synthesis-specific usefulness.
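A minimal sketch of that loophole (en, f, y1, and y2 are illustrative names, not from the thread; SystemVerilog syntax since always_comb requires it):

```verilog
function automatic logic f(input logic x);
  // 'en' is read here as a side-effect; it never appears in the
  // caller's expressions, so @* does not add it to the list
  f = x & en;
endfunction

always @* y1 = f(a);    // NOT re-evaluated when 'en' alone changes
always_comb y2 = f(a);  // SystemVerilog: also sensitive to 'en'
```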
--
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services

Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
jonathan.bromley@MYCOMPANY.com
http://www.MYCOMPANY.com

The contents of this message may contain personal views which
are not the views of Doulos Ltd., unless specifically stated.
 
Thanks!

So, could code like this loop? c is in both sensitivity lists.

always @* begin
  c[0] = a[0];
  c[0] = c[0] + b[0];
end

always @* begin
  c[1] = a[1];
  c[1] = c[1] + b[1];
end


It doesn't subtract any, either;
Doesn't the block potentially have to execute extra times because of
tmp1 and tmp2 being added? I.e., once when a, b, c, or d changes, then
potentially again simply because tmp1 or tmp2 changed due to a, b, c,
or d changing?

Thanks!
 
On Wed, 22 Aug 2007 18:01:56 -0000, romi <weberrm@gmail.com> wrote:

So, could code like this loop? c is in both sensitivity lists.

always @* begin
  c[0] = a[0];
  c[0] = c[0] + b[0];
end

always @* begin
  c[1] = a[1];
  c[1] = c[1] + b[1];
end
As I said, I don't think that is deterministic.
Suppose c==2'b11, b==2'b11, and then you change
a from some other value to 2'b00. Both @* controls
now trip. Both blocks will execute, with arbitrary
interleaving, but on any realistic simulator one of
them will execute to completion and then the other
will execute. So the first one (let's say) drives c[0]
1->0->1 and then returns to its @*. The second block
then drives c[1] 1->0->1. Does this pair of changes
trip the other block's @*? Depends how the simulator
implements event detection. It's not deterministic,
and not mandated by the LRM. Put #0 delays between
the assignments and I can promise you that the second
@* will trip because of the changes on c[] caused by
the other block.
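The #0 variant described above, sketched (a hedged example, not code from the post): the #0 suspends each block mid-execution, so its first write to c[] lands while the other block is parked at its @*, making the retrigger certain rather than implementation-dependent.

```verilog
always @* begin
  c[0] = a[0];
  #0 c[0] = c[0] + b[0];  // block suspends at #0; the other block,
end                       // waiting at its @*, now sees c change

always @* begin
  c[1] = a[1];
  #0 c[1] = c[1] + b[1];
end
```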

[re: @* detects changes on intermediate variables]
Doesn't the block potentially have to execute extra times because of
tmp1 and tmp2 being added? I.e., once when a, b, c, or d changes, then
potentially again simply because tmp1 or tmp2 changed due to a, b, c,
or d changing?
No, certainly not. Once again the @* is NOT WAITING at the time when
your tmp variables are updated. It does not see the changes on them.
However, if you were to update them by nonblocking <= assignment,
then it would be necessary for them to appear in the event control
for the block to work as a piece of combinational logic; and @*
would indeed detect the delayed updates on the tmp signals.
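A sketch of that nonblocking case, reusing the LRM's tmp1/tmp2 example: the delayed updates to the tmp variables land while the block is back at its @*, so having them in the list is exactly what lets y settle to the combinational value.

```verilog
always @* begin
  tmp1 <= a & b;        // applied only after the block suspends
  tmp2 <= c & d;
  y    <= tmp1 | tmp2;  // first pass reads the OLD tmp values
end
// Because @* includes tmp1 and tmp2, their delayed updates re-trigger
// the block; the second pass recomputes y from the new values and,
// with the tmps now stable, the block settles.
```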

Are you, perchance, coming to this from a VHDL background? :)
--
Jonathan Bromley, Consultant, Doulos Ltd.
 
On Wed, 22 Aug 2007 19:13:44 +0100, Jonathan Bromley
<jonathan.bromley@MYCOMPANY.com> wrote:

On Wed, 22 Aug 2007 18:01:56 -0000, romi <weberrm@gmail.com> wrote:

So, could code like this loop? c is in both sensitivity lists.

always @* begin
  c[0] = a[0];
  c[0] = c[0] + b[0];
end

always @* begin
  c[1] = a[1];
  c[1] = c[1] + b[1];
end

[...] Put #0 delays between
the assignments and I can promise you that the second
@* will trip because of the changes on c[] caused by
the other block.
Ooops, I'm not sure I'm right... Does the whole of c
appear in both sensitivity lists? Or is it only c[0]
in one of them, and c[1] in the other? Too late to
check the LRM now; maybe someone else could answer...
--
Jonathan Bromley, Consultant, Doulos Ltd.
 
Jonathan Bromley wrote:
There is a much trickier question about whether another,
distinct @(c) should trip if c makes a zero-time
1->0->1 transition because of your code. I believe the
LRM washes its hands of that case.
Yes, this is not specified. It is a question of whether a change is
sufficient, or whether the change must be visible to the process when
it wakes up. Is it a change as viewed by the writer or the reader?

Verilog-XL uses both definitions in different situations, so it
considered both to be valid, and the LRM could not forbid that.

Even if you tried to require that the change be visible to the process
executing the event control, the LRM provides a loophole that would
still allow the other behavior. It allows arbitrary preemption of
processes by another ready-to-run process. So the first change could
wake up the event control, which could immediately preempt and start
executing and see the change, then immediately suspend again in favor
of the first process.

Further, does anyone know why the LRM decided to include temporary
variables in the event_expression list for @*? Adding the tmp1 and
tmp2 variables in the code below (from the LRM) to the expression
list doesn't seem to add much value.

It doesn't subtract any, either; and it spares the simulator the ugly
task of analyzing the way those signals get updated. @* rather simply
seeks signals that appear in any expression, and adds them to the
event_expression list.
It isn't just ugly. The suggestions I have seen for defining what is
a temporary variable have actually been theoretically uncomputable in
the general case. It may be easy in these trivial examples, but the
LRM must specify the behavior for any possible code.
 
romi wrote:
Thanks!

So, could code like this loop? c is in both sensitivity lists.

always @* begin
  c[0] = a[0];
  c[0] = c[0] + b[0];
end

always @* begin
  c[1] = a[1];
  c[1] = c[1] + b[1];
end
Yes, this could put your simulator into an infinite evaluation loop.


It doesn't subtract any, either;

Doesn't the block potentially have to execute extra times because of
tmp1 and tmp2 being added? I.e., once when a, b, c, or d changes, then
potentially again simply because tmp1 or tmp2 changed due to a, b, c,
or d changing?
Not for the example that Jonathan was responding to. As he already
explained, an always block cannot wake itself up, since it is not
waiting while it is running.
 
Jonathan Bromley wrote:
Ooops, I'm not sure I'm right... Does the whole of c
appear in both sensitivity lists? Or is it only c[0]
in one of them, and c[1] in the other? Too late to
check the LRM now; maybe someone else could answer...
All of it does. This is necessary in some cases when the index is non-
constant. For example, the block might loop through all the bits of
c.

SystemVerilog's always_comb defines a "longest static prefix" to try
to include constant subscripts in the sensitivity list. But this
still doesn't solve the infinite looping problem for all cases, and it
introduces other problems of its own.
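A sketch of the non-constant-index case (WIDTH, i, and the OR-reduction are illustrative, not from the thread): no tool can statically name which bit of c an iteration touches, so the whole vector must go in the @* list.

```verilog
integer i;
always @* begin
  y = 1'b0;
  for (i = 0; i < WIDTH; i = i + 1)
    y = y | c[i];   // run-time index: any bit of c may be read
end
```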
 
Thanks for the replies.

The responses use reasoning like "not waiting at the @*" and "always
block cannot wake itself up". The answers make sense and seem to be
needed to make @* work with this type of procedural code. However,
the answers also sound implementation based. I'm wondering how
someone can derive a fact like "always block cannot wake itself up"
from the LRM. The LRM states:

"Every change in value of a net or variable in the circuit being
simulated, as well as the named event, is considered
an update event."

"Processes are sensitive to update events. When an update event is
executed, all the processes that are sensitive
to that event are evaluated in an arbitrary order. The evaluation of a
process is also an event, known as an
evaluation event."

The "all the processes that are sensitive to that event are
evaluated..." is the concerning part. Your answers suggest that the
always block that is executing is not sensitive to the changes it is
making. I need help seeing how the LRM makes this distinction.

Thanks!
 
On Sat, 25 Aug 2007 16:37:34 -0000, romi <weberrm@gmail.com> wrote:

The responses use reasoning like "not waiting at the @*" and "always
block cannot wake itself up". The answers make sense and seem to be
needed to make @* work with this type of procedural code. However,
the answers also sound implementation based. I'm wondering how
someone can derive a fact like "always block cannot wake itself up"
from the LRM. The LRM states:

"Every change in value of a net or variable in the circuit being
simulated, as well as the named event, is considered
an update event."

"Processes are sensitive to update events.
I agree that this last sentence is not terribly helpful. It seems
to imply that processes are *permanently* sensitive to update
events, which is not a very useful notion. I think it is reasonably
clear from a reading of the whole LRM that - for any practical
purpose - processes are sensitive to update events only whilst
they are waiting at an event or delay control.

I can find absolutely nothing in the LRM suggesting that a
process should do anything special if some event should occur
*while it's busy executing code*. But even if you insist
that this is not excluded, you surely must agree that
our description is the only one that could make sense...

reg a = 0;
always @a a = ~a;

When "a=~a" is evaluated and 'a' is updated, clearly
you must agree that execution is *not* stalled at the @a
event control. When execution reaches the @a event control,
'a' has already changed. It will not change again. So the
event control will stall forever; its event of interest has
already occurred, earlier in the same time slot, and can
no longer have any effect. I see nothing "implementation based"
in this argument, which is based purely on the semantics of
blocking assignment and the @ event control.

"When an update event is executed, all the processes that are sensitive
to that event are evaluated in an arbitrary order. The evaluation of a
process is also an event, known as an evaluation event."

The "all the processes that are sensitive to that event are
evaluated..." is the concerning part. Your answers suggest that the
always block that is executing is not sensitive to the changes it is
making. I need help seeing how the LRM makes this distinction.
What do you think should happen if a process were to detect
an event whilst it's in the middle of executing its procedural
code? It's already in the Active region of the scheduler, for
if it were not then it could not be executing. So it can't
in any meaningful way promote itself to be Active. In other
words, even if you force enough angels to dance on the pinhead
so that we agree that a process is sensitive to update events
whilst it's executing, that sensitivity is sure to have no
visible effect.
--
Jonathan Bromley, Consultant, Doulos Ltd.
 
On Aug 25, 12:37 pm, romi <webe...@gmail.com> wrote:
The responses use reasoning like "not waiting at the @*" and "always
block cannot wake itself up". The answers make sense and seem to be
needed to make @* work with this type of procedural code. However,
the answers also sound implementation based.
The behavior of Verilog procedural code is defined in terms of a
thread of execution that sequentially executes statements, and
suspends itself at event controls. This may sound implementation-
based to you, but it is also how the language semantics are defined.
Someone could try to implement it some other way, but it must have
this behavior or it is incorrect.

I'm wondering how
someone can derive a fact like "always block cannot wake itself up"
from the LRM. The LRM states:

"Every change in value of a net or variable in the circuit being
simulated, as well as the named event, is considered
an update event."

"Processes are sensitive to update events. When an update event is
executed, all the processes that are sensitive
to that event are evaluated in an arbitrary order. The evaluation of a
process is also an event, known as an
evaluation event."

The "all the processes that are sensitive to that event are
evaluated..." is the concerning part. Your answers suggest that the
always block that is executing is not sensitive to the changes it is
making. I need help seeing how the LRM makes this distinction.
The section you are quoting was a late addition to the LRM, and tries
to give a general conceptual model of how Verilog works. This
conceptual model is rather abstract and the description uses its own
terminology that is not used elsewhere in the LRM. It may be useful
in getting a general idea of how event-driven simulation works, and
includes certain specific rules about scheduling order in Verilog.
But if you want to know how specific constructs like event controls
work, you need to look in the sections related to those.

Yes, a process can be sensitive to an update event. But it doesn't go
into detail about when and how that happens. It also glosses over the
situations when a process is not sensitive to any update events. One
of those would be when it is waiting for a time delay. Another would
be when it is currently executing.

For procedural code, the sensitivity it is talking about occurs when
the process executes an event control or wait statement and suspends
itself. This "sensitivity" is not a permanent or unchanging property
of a procedural process. Consider a situation like

always begin
  @b do_something;
  @c do_something_else;
  #10 yet_something_else;
end

What is "the sensitivity" of this block? This is not a question with
a single unchanging answer. It is sensitive to b when it is waiting
at the @b. It is sensitive to c when it is waiting at the @c. It is
not sensitive to either of them when it is waiting for the #10, or
executing do_something, do_something_else, or yet_something_else.
Your idea that it should wake up on an event while not waiting at an
event control runs into the problem that you can't say what events it
is sensitive to at that point. In the actual language, that is
determined by the event control it is waiting at. And waking it up is
meaningless when it is already awake and executing.

The text you cite can discuss the sensitivity of the process in
situations where the language constructs in the process make it
sensitive to something. But the sensitivity arises from the use of
those language constructs, and is defined by the specific rules of
those language constructs. It isn't imposed in some vague and ill-
defined way by the text you cite.
 
On Aug 25, 12:37 pm, romi <webe...@gmail.com> wrote:
The responses use reasoning like "not waiting at the @*" and "always
block cannot wake itself up". The answers make sense and seem to be
needed to make @* work with this type of procedural code. However,
the answers also sound implementation based.
[...]
The "all the processes that are sensitive to that event are
evaluated..." is the concerning part. Your answers suggest that the
always block that is executing is not sensitive to the changes it is
making. I need help seeing how the LRM makes this distinction.
I haven't had the pleasure of reading the Verilog standard in over a
year, but I did read it rather intently a few years ago when doing an
in-house Verilog simulator, and from that experience....

Looking for the answers of the kind you are seeking in the standard is
likely to be a painful and unsatisfactory experience. Not because the
Verilog standard is particularly poorly written; it isn't; it is
fairly average for a language standard. Simply because language
standards always make assumptions about what the audience knows and
talk within that framework. That framework is what you referred to as
arguments being implementation based.

When you read a language standard, you need a pretty good notion of
what is being discussed and how it should generally work *before*
reading the standard. If you have that, then the standard can
illuminate your mind to some of the corners you may not have
considered.

Moreover, in general, each section of the standard generally focuses
on one detail of the implementation. Reading that section, you should
know what area the standard is attempting to address. Then, you will
know which details the section is covering and are relevant. Often
the section has to talk about other features and to do so usually
simplifies them (e.g. the concept of processes in the section on
scheduling--that is a simplification of the rest of the simulator).
What is important about that section is the separation of events into
different types of queues--there is real meat there, that can enable a
compiler writer to get that part correct. However, if one assumes
that the simplified part (the idea that everything is a process and a
process is always sensitive to events) is also gospel, one misses the
fact that that part of the model is overly simplified (to make the
meat accessible).

To illustrate this point, section 14 talks about specify blocks, I
read it and understood parts of it. However, it wasn't relevant to
the particular simulator I was building and so I had no examples that
I could run to validate my intuition. Therefore, not having that
pragmatic knowledge of what specify blocks are doing, how they are
used, and so forth, I cannot tell you reliably which details of that
chapter are significant.

On the other hand, since UDP's are something that I was interested in,
I read that chapter, tried examples, asked on this newsgroup, etc. and
at one point could tell you what information in that section was key
(and what was implicit). And, therein lies a key, sometimes what the
standard doesn't say is as important as what it does.

Fortunately, there is a lot more than the language standard one can
use to get that peripheral insight, to know the general feel of the
language, so that one can look at the standard and know which details
are the ones that are keying one to a significant point.

If you believe the standard is saying something is true, then you
should formulate some test cases which should behave a specific way
only if what you believe is correct and will behave differently if you
are wrong. Then, run those test cases (preferably with a couple of
different simulators) and see if the results match your expectations.
The more that they do, the more likely you understand a particular
facet. If they don't, then you have missed something.

Similarly, you can ask knowledgeable co-workers, ask on the newsgroup,
etc. and see if your understanding matches the consensus. If it does,
again, you probably understand. If not, the way your question gets
answered may give you a clue as to what underlying concept you are
missing.

However, attempting to understand the language standard from "first
principles" is a fool's errand. It isn't well enough written to do
that. I've never seen a language standard that was. Moreover, the
language standard also describes the language from a particular
perspective (that of writing simulators), if you are using it for
synthesis, there are entire concepts that are not addressed at all.
 
