basic VHDL question

Can anybody tell me what the difference is, in simulation and synthesis,
between putting what I call assignments inside processes or leaving them
outside? The only thing I can tell is that in ModelSim, the workspace list
will have a process name label on the assignment rather than a line__xx label.

a <= b and c;

or

a_proc:process(b,c)
begin
a <= b and c;
end process;

Brad Smallridge
AiVision dot com
 
One simulation difference is that signals in a process all get updated at
the end of the process... so:

a <= b and c; -- #1
d <= e and f; -- #2

You have no 'control' over whether #1 or #2 gets evaluated first so there
may be a simulation delta time between when signals 'a' and 'd' change.

If you have the same statements within a process (with the appropriate
sensitivity list of course) then 'a' and 'd' will always change at the exact
same time on the same simulation delta time as well.
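The process form KJ describes might look like this sketch (assuming 'a',
'b', 'c', 'd', 'e' and 'f' are std_logic signals):

```vhdl
-- both assignments live in one process, so 'a' and 'd' are always
-- scheduled together when the process resumes
ad_proc : process(b, c, e, f)
begin
  a <= b and c;  -- #1
  d <= e and f;  -- #2
end process;
```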

It's very rare for this subtle difference to matter at all. I've never
needed it when writing code that needs to be synthesizable. I have in a
couple of instances used it when writing non-synthesizable simulation
models... unfortunately the situation where it was useful escapes me just
now, but since you're asking a 'basic VHDL question' my guess is that you
won't run across a need for this for a while, if ever.

From a synthesis perspective it makes no difference, both ways will produce
the same result.

From a practical standpoint, just putting the equations without a process is
somewhat cleaner, since you don't have to check (and recheck) that you have
all the appropriate signals in the sensitivity list. For example, look at
the following code. In a proper simulator, signal 'd' will not get updated
when signals 'e' and 'f' change, unless those changes happen to be
coincident with changes on 'b' or 'c': only 'b' and 'c' are in the
sensitivity list, so the process only executes when there is a change to
either 'b' or 'c'.

a_proc:process(b,c)
begin
a <= b and c; -- #1
d <= e and f; -- #2
end process;

If you take this code and synthesize it, your synthesis tool will probably
kick out a warning about an incomplete sensitivity list and implement what
you had probably intended all along (i.e. 'd' will get updated when either
'e' or 'f' changes). Although that is probably what you intended, it does
mean that what gets compiled into your physical device will not be the same
thing that you're seeing in simulation. From my perspective, having
simulation not matching reality is a bad thing. As a general guideline I
personally tend to avoid processes other than clocked processes for just
this reason... much less chance of mucking something up, since in 'real'
code it will not be quite so easy to spot that signals are missing from the
sensitivity list.
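A minimal sketch of the clocked-process guideline KJ describes (clk is an
assumed clock signal); registering the outputs sidesteps the
sensitivity-list problem, because only the clock needs to be listed:

```vhdl
-- only clk in the sensitivity list; no combinational inputs to forget
reg_proc : process(clk)
begin
  if rising_edge(clk) then
    a <= b and c;
    d <= e and f;
  end if;
end process;
```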

KJ


 
On Fri, 14 Apr 2006 16:07:55 -0700, "Brad Smallridge"
<bradsmallridge@dslextreme.com> wrote:

Can anybody tell me what the difference in simulation and synthesis by
putting what I call assignments inside processes?

a <= b and c;

or

a_proc:process(b,c)
begin
a <= b and c;
end process;
Brad,

Those two chunks of code are identical by definition in VHDL. As
you say, ModelSim and other simulators may give you slightly
different views of them, and may optimize them in slightly
different ways, but it is mandatory that they give identical
behaviour in simulation; and since they are synthesisable,
you would expect identical results in synthesis too.

In the jargon,
a <= b and c;
is a "concurrent signal assignment".

Note that the rough equivalents in Verilog,
assign a = b & c;
vs
always @(b or c) a = b & c;
are not identical.

Note, too, that the following PAIR of concurrent assignments:

a <= b and c;
d <= e and f;

is not exactly the same as the following SINGLE process:

process (b,c,e,f)
begin
a <= b and c;
d <= e and f;
end process;

(although in practice they would probably give the same results)
because the pair of concurrent assignments is equivalent to
two separate processes each with its own sensitivity list.
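Spelled out, the pair of concurrent assignments behaves like these two
separate processes, each with its own sensitivity list:

```vhdl
a_proc : process(b, c)
begin
  a <= b and c;
end process;

d_proc : process(e, f)
begin
  d <= e and f;
end process;
```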

Of course, there are lots of stylistic arguments about which
formulation is preferable in any given situation.
--
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services

Doulos Ltd. Church Hatch, 22 Market Place, Ringwood, Hampshire, BH24 1AW, UK
Tel: +44 (0)1425 471223 Email: jonathan.bromley@MYCOMPANY.com
Fax: +44 (0)1425 471573 Web: http://www.MYCOMPANY.com

The contents of this message may contain personal views which
are not the views of Doulos Ltd., unless specifically stated.
 
Thank you Jonathan, (and KJ,)

What you have told me is consistent with what I understand about VHDL,
except for the issue of timing. My understanding of VHDL is that events are
scheduled as the language is parsed, and that all processes are evaluated
until they are all "resolved" for changes in their sensitivity lists. That
is why the order of the processes within the module is unimportant. And
perhaps this is why the order of assignments outside the processes is
important? I did not follow why KJ felt the delta time may be different,
nor why the pair of concurrent statements is different.

Brad Smallridge
AiVision

 
Brad Smallridge wrote:

I did not follow why KJ felt the delta time may be different nor why the
pair of concurrent statements is different.
Delta delays may burn some CPU cycles,
but they don't affect the NOW time
anywhere in a VHDL simulation.

In fact, such delays have no practical
effect on simulation results
for non-clock signals, other than
to make them repeatable even in the
face of asynchronous feedback to the
sensitivity list.

A clocked process will only update
its signals/ports once per cycle at one delta
after the clock rise.

A combinational process may use one
or more deltas to settle an output
signal after any input level change.

In either case, the simulation waveform
will show the output signal transition occurring
0 ns after the input event.

As KJ said, using synchronous processes
exclusively will free your mind and
your CPU to worry about other things.


-- Mike Treseler
 
Thanks Mike,

Rest assured, I use nothing but synchronous processes.

I am getting the idea that a DELTA time represents
one pass at resolving all the logic changes, and
that it doesn't amount to a hill of beans unless
you've got some sort of infinite loop or other problem.

This detail about concurrent assignments came up
mostly because of ModelSim's 7.x release, which has
a new way of labeling signals.

I have, however, in the past used these non-process
assignments, mostly to connect signals to the
output pins:

my_output_pin <= my_signal;

This was my standard way of avoiding the buffer vs.
out connectivity issue. It gets rid of the error
saying that signal x cannot be read.
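A sketch of that trick (entity and signal names are hypothetical): drive
the out-mode port from an internal signal, and read the internal signal
everywhere else:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity widget is
  port ( my_output_pin : out std_logic );
end entity widget;

architecture rtl of widget is
  signal my_signal : std_logic;  -- internal copy that CAN be read
begin
  my_output_pin <= my_signal;    -- the out port just gets a copy
  -- ... my_signal is driven and read by the rest of the design ...
end architecture rtl;
```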

And it also provides a fast way to change
test outputs.

LED1_out_test_pin <= my_signal_in_question;

One doesn't have to create new names in the Entity.

And it came up again recently with Virtex4 FIFOs,
when I discovered that if the output were redirected
into the input with just logic, then one doesn't
need to delay the WR signal with respect to the RD (EN)
signal to get video line data lined up. I still
need to look at the timing requirements of such a move,
but that is a question for the FPGA group I suppose.
One needs to use the First Word Fall Through mode.

But that did require asynchronous logic, since the
WR and EN signals need to be ANDed with line valid
signals.
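The ANDing Brad mentions might look like this (signal names are
hypothetical):

```vhdl
-- gate the FIFO write and read enables with the line-valid qualifier
fifo_wr <= wr_strobe and line_valid;
fifo_en <= rd_strobe and line_valid;
```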

Brad Smallridge
Ai Vision
 
Hi,

KJ schrieb:

One simulation difference is that signals in a process all get updated at
the end of the process....sooo..

a <= b and c; -- #1
d <= e and f; -- #2

You have no 'control' over whether #1 or #2 gets evaluated first so there
may be a simulation delta time between when signals 'a' and 'd' change.
I consider a simulator broken if there's a delta between a and d
(unless the signals b, c, e and f update in different deltas). I know
no reason why an (LRM-conformant) simulator should have the freedom to
insert a delta in between.

These statements are completely equal to statements inside a
combinatorial process.
The main difference is that concurrent statements need less code than a
process for simple statements, but often need more code for complex
conditional statements (nested ifs and so on).

bye Thomas
 
I consider a simulator broken, if there's a delta between a and d
(unless the signals b,c,e and f update in different deltas). I know no
reason why a (LRM conform) simulator should have the freedom to insert
a delta inbetween.
You're right, what I had 'intended' to write was something more like
a <= b and c; -- #1
d <= e and a; -- #2

Where equation #2 depends on the result of #1. Bad example.
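Annotated with the delta behavior KJ is describing:

```vhdl
-- when b or c changes at delta 0:
a <= b and c;  -- 'a' gets its new value one delta later (delta 1)
d <= e and a;  -- 'd' sees the new 'a' and updates at delta 2
```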

 
OK, now that we have the right example, what happens?
What I have been told is that both a and d get scheduled
at the same time, for a behavioral simulation.

If one is doing a post-place-and-route synthesized model,
then d might be resolved later than a, depending on how
the synthesis tool rendered the logic. It might be the
case that d were synthesized as e and b and c, and that
the routes might actually make the d signal resolve first.

What exactly is a delta?

And have we concluded that putting the concurrent statements
in a process, with a correct sensitivity list,
has no effect on the simulation or synthesis?

Brad Smallridge
AiVision
 
I'll try to regroup and basically say what I said in the first post as
well. Whether you write it as two concurrent statements or as a process
with the appropriate sensitivity list will make no difference either in
simulation or in synthesis (if we can ignore glitches and assume that we're
just talking about steady state). What it will affect is the
maintainability of the code itself, in that maintaining the sensitivity
list is a requirement for processes but is done 'automagically' for
concurrent statements by the simulator/synthesizer.

When you choose to write combinatorial logic as a process instead of as
concurrent statements, you take on the additional burden of making sure
that your sensitivity list is complete. Whether you (or your organization)
believe that is 'time well spent' is something to decide for yourself.
It's additional effort that can bite you if you get it wrong, and would
not have been expended if the process template had not been used in the
first place. In any case, maintaining the sensitivity list is extra work
(therefore a negative). Whether the positives outweigh the negatives is a
value decision to make.

In any case, the bottom line is that it's up to you to decide which is
cleaner to write and maintain.

The cases where having signals getting updated only at the end of the
process is of some benefit as I mentioned in the first post are rare but
they do occur. For me it has only come up when modelling parts that I
intend to use with the design that I'm testing, NOT with the actual design
(i.e. the thing that needs to actually be synthesized into real parts).

A 'delta' is somewhat like a propagation delay of zero time. Consider the
example again:
a <= b and c; -- #1
d <= e and a; -- #2

When either 'b' or 'c' changes appropriately, the simulator will go
through, see that 'a' needs to be recomputed, calculate a new value, and
schedule that to happen. Before the simulator can 'advance' time, it looks
at the list of signals that have events scheduled at the current time and
sees that, whereas it used to have 'b' and 'c' (which it has now dealt
with), it instead has 'a'. Looking at both of the equations it will then
see that 'd' needs to be recomputed, so it will figure out the new value
for 'd' and schedule that to happen also.

If you happen to have combinatorial equations that are cross coupled the
simulator can get into a mode where it is having to continually recompute
signals on each step (or 'delta') because something changed on the previous
step that now causes something in the cross coupled equations to be
re-evaluated. When you get this condition the simulator errors out
eventually with some form of message that says that it's exceeded some
iteration limit. The 'iteration limit' is basically the count of how many
'steps' (or 'deltas') it takes in order to reach a steady state. If that
limit is exceeded the simulator knows that something is wrong and stops.
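A minimal cross-coupled pair that triggers exactly this iteration-limit
error (assuming x and y are signals of type bit, which default to '0'):

```vhdl
-- x and y chase each other forever, one delta apart, so simulation
-- time never advances past 0 and the simulator aborts with an
-- iteration-limit error
x <= not y;
y <= not x;
```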

If you pretend to be the simulator, it takes you two passes before you are
done processing. In step 1 you determine that signal 'a' needs to be
recomputed, since either (or both) of signals 'b' or 'c' changed, but
signal 'd' was just fine, since it only needs to be recomputed if 'e' or
'a' changes (and they haven't... not yet, only 'b' and 'c' have). So now
you figure out the result for step 1 and see that signal 'a' has changed,
so you have to look again at ALL of the equations a second time. This time
you see that signal 'a' does not need to be recomputed, since neither 'b'
nor 'c' has changed (that change occurred on step 1... this is step 2), but
signal 'd' does need to be recomputed, since signal 'a' has changed. So you
go through and figure out the new value for signal 'd'. Now, knowing that
'd' has changed, you once again scan through the equations. This time you
see that neither 'a' nor 'd' needs to be recomputed, since none of the
signals that go into computing them (i.e. 'b', 'c', 'e' or 'a') have
changed. At this point you're done and can advance the simulator's clock.
The fact that it takes you multiple passes through the set of signals,
looking for which equations now need to be evaluated in order to figure out
all of the consequences of some input signal change, is really the concept
behind 'delta time'.

That, in a (maybe too wordy) nutshell is what 'delta' is all about.

KJ
 
That, in a (maybe too wordy) nutshell is what 'delta' is all about.

KJ

Not too wordy. I think you nailed it for me. Thank you.

Brad Smallridge
Ai Vision
 
