VHDL - processes, race conditions, & Verilog

Hi everyone

I've just started to learn VHDL, for the purpose of synthesising the
code onto an FPGA. I have previously worked with synthesisable Verilog
at RTL, and am trying to get my head around a couple of the mechanisms
VHDL offers. Warning: the following post is a bit long, but I tried to
make myself as clear as possible. My uncertainty involves race
conditions, and how VHDL handles signal assignments inside processes.

A Verilog Example.

always @(posedge sysclk)
begin
  b <= c;
end

//other code....

always @(posedge sysclk)
begin
  a <= b;
end

The type of assignment above is called 'non-blocking' assignment. The
non-blocking assignment mechanism ensures that you are getting the old
value of 'b', not the new one. Of course, if this mechanism were not
supported, you could ensure that the statements which require the
'old' value of 'b' (eg: a <= b) appear before the statements that assign
b its new value (eg: b <= c) - but this means you have to make sure this
always happens, which is time consuming and error prone. With
non-blocking assignments, the order in which the two blocks are written
doesn't matter. NB: for contrast, the following does result in a race
condition because the order in which the two blocks execute is not
specified:

always @(posedge sysclk)
begin
  a = b;
end

always @(posedge sysclk)
begin
  b = c;
end

How does VHDL protect against such race conditions?

On a similar topic, how do you model pipelines in VHDL (notice that the
Verilog code given above models a pipeline)? Perhaps something like the
following code...

process (clock)
begin
  if clock'event and clock = '1' then
    b <= c;
  end if;
end process;

process (clock)
begin
  if clock'event and clock = '1' then
    a <= b;
  end if;
end process;

I'm not sure what would happen here, because the book I've got (VHDL
for Logic Synthesis - Andrew Rushton) doesn't have an example of what
the simulator does when two processes are triggered by the same event.
As I understand it, if a process gets triggered and a signal assignment
is made within it, a *transaction* is added to the queue of the signal
that was assigned to. The actual change of the signal (Rushton
describes this as "the point where a transaction becomes due on a
signal, that signal becomes active") occurs when the process execution
phase is finished; it happens at the beginning of the event processing
phase (and this update can cause new processes to be triggered).
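
If my understanding so far is right, then a small sketch like the
following (my own, with made-up names, so something to be checked in a
simulator rather than a known result) should report the *old* value of
'b' inside the process, even though the assignment has textually
already happened:

library ieee;
use ieee.std_logic_1164.all;

entity sched_demo is
end entity sched_demo;

architecture sim of sched_demo is
  signal clk : std_logic := '0';
  signal b   : integer   := 0;
  signal c   : integer   := 42;
begin
  clk <= not clk after 5 ns;  -- free-running clock, just for the demo

  process (clk)
  begin
    if clk'event and clk = '1' then
      b <= c;  -- this only *schedules* a transaction on b
      -- b should still hold its pre-edge value here; the new value
      -- should not appear until the process has suspended and the
      -- event processing phase runs
      report "just after 'b <= c': b = " & integer'image(b)
             & ", c = " & integer'image(c);
    end if;
  end process;
end architecture sim;

On the first edge the report should show b = 0; on later edges b will
have caught up to 42. That is part of what I'd like to confirm.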

How does the queueing mechanism (of the queued transactions) work?
Does it queue an assignment using the value of the RHS at the time the
signal assignment was encountered, or does it re-calculate the value
when the transaction is executed in the event processing stage and then
assign that? The former option seems the more logical explanation to
me, in order to preserve the property that processes themselves are
concurrent (although I'm a bit confused by that as well; for now, I'll
assume that concurrent processes means it doesn't matter where they
appear in the code - see below). If the second option were used
(re-calculating the value in the event processing stage from whatever
is stored in the signal at the time the queued transaction is processed
- when the signal becomes "active"), then the above two processes
wouldn't be concurrent. This is illustrated by an example. If the
transaction for the assignment b <= c were executed first (in the event
processing stage), 'b' would get the value of 'c'. 'a' in the
assignment a <= b would then get the 'new' value of 'b' when its
transaction is executed. Obviously concurrency is broken - if the
assignment a <= b happened to be executed before the assignment b <= c,
this second option of queueing would give a different result, so the
processes wouldn't be concurrent.

If the former option of queueing I mentioned is actually the one used,
the pipeline example would work ('a' would get the 'old' value of 'b')
provided that *all processes triggered in the same delta time period
run to completion*. This would mean that, during process execution in
the example above, 'b' would get *scheduled* to be assigned the value
of 'c', and 'a' would get *scheduled* to be assigned the (old) value of
'b'; these assignments then take place in the next event processing
cycle. Is this right? Is this what happens when multiple processes are
triggered at the same time?
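
To convince myself, I plan to simulate something like the following
sketch of the two-process pipeline (the stimulus scaffolding and the
names are my own additions):

library ieee;
use ieee.std_logic_1164.all;

entity pipe_demo is
end entity pipe_demo;

architecture sim of pipe_demo is
  signal clk     : std_logic := '0';
  signal a, b, c : integer   := 0;
begin
  -- stimulus: eight clock cycles, with 'c' incremented at each falling edge
  stim : process
  begin
    for i in 0 to 7 loop
      clk <= '0';
      wait for 5 ns;
      clk <= '1';
      wait for 5 ns;
      c   <= c + 1;
    end loop;
    wait;
  end process stim;

  -- stage 1
  stage1 : process (clk)
  begin
    if clk'event and clk = '1' then
      b <= c;
    end if;
  end process stage1;

  -- stage 2: its textual position relative to stage 1 shouldn't matter;
  -- 'a' should always pick up the value 'b' held *before* the edge
  stage2 : process (clk)
  begin
    if clk'event and clk = '1' then
      a <= b;
    end if;
  end process stage2;
end architecture sim;

If the scheduling works as I described, the waveform should show 'a'
lagging 'c' by two rising edges, i.e. a proper two-stage pipeline.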

What would happen in this case?

process (clock)
begin
  if clock'event and clock = '1' then
    b <= c;
    a <= b;
  end if;
end process;

Would it result in similar behaviour to the above case? (Intuitively I
think it should.) However, if the first queueing algorithm is used
(where signals get assigned 'old' values - the value the RHS had when
the assignment was encountered in the process execution stage),
wouldn't this result in non-sequential assignment inside the process
block ('b' would be assigned the 'old' value of 'c', 'a' would be
assigned the 'old' value of 'b' - if we switch the order of these
signal assignments around, the result will be the same, thus the
assignments aren't sequential)?

Now I've confused myself :).

Coming from an RTL-for-synthesis point of view, is there any
application of the process (whose main feature, as I see it, is that
assignments are done sequentially rather than concurrently) other than
for the purpose of modelling a flip-flop? I seem to remember a book
saying that "processes are concurrent statements" - does this just mean
the position of processes in the code doesn't matter?

Finally, is there a use of variables in synthesisable VHDL at RTL?

Thanks

Taras
 
Hi everyone

Thanks a lot for your replies, VHDL makes a lot more sense now :). I
do have a few resulting questions; they're a bit late because I had to
think about the contents for a while (warning: long post).

Blocking and non-blocking assignments in VHDL & Verilog
---------------------------------------------------

I was hoping someone could tell me if the following analysis was
correct, as I'm a bit rusty?

Example 1 (VHDL; I'm treating 'b' as a process variable and the rest as
signals):

Assume
a = 1
b = 2
c = 3

a <= b;      -- a is scheduled to be assigned 2
c <= a;      -- c is scheduled to be assigned 1
b := b + a;  -- b is assigned 3 immediately - nb: this does not create
             -- race conditions because variable (blocking) assignments
             -- are local to the process. As a side effect, there is no
             -- need for non-blocking assignments in other processes to
             -- run up to this point (contrast with Verilog below)
f <= b;      -- f is scheduled to be assigned 3
g <= a;      -- g is scheduled to be assigned 1 (ie: a <= b hasn't taken
             -- effect yet) - correct?

Example 2 (VHDL again):
a <= b;  -- a scheduled to be assigned 2
c <= a;  -- c scheduled to be assigned 1
b := c;  -- b is set to 3 immediately
d <= b;  -- d scheduled to be assigned 3

Example 3 (Verilog):
always @(posedge clock)
begin
  a <= b;
  c <= a;
  b = b + a; // the simulator must wait for 5 & 6 to execute before
             // proceeding - (is this correct?)
  f <= b;
end

always @(posedge clock)
begin
  g <= b;     // 5
  f <= b + a; // 6
  d = c + b;  // 7
end
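
To check my reasoning in Example 1, I'll run something like this once I
find a simulator (a wrapper of my own; as above, I'm assuming 'b' is a
process variable and the rest are integer signals, which seems to be
the only consistent reading):

entity example1_check is
end entity example1_check;

architecture sim of example1_check is
  signal a : integer := 1;
  signal c : integer := 3;
  signal f : integer := 0;
  signal g : integer := 0;
begin
  process
    variable b : integer := 2;
  begin
    a <= b;         -- a scheduled to become 2
    c <= a;         -- c scheduled to become 1 (the current a)
    b := b + a;     -- variable updates immediately: b = 2 + 1 = 3
    f <= b;         -- f scheduled to become 3 (the new b)
    g <= a;         -- g scheduled to become 1 (a <= b not yet in effect)
    wait for 1 ns;  -- let the scheduled transactions mature
    report "a=" & integer'image(a) & " c=" & integer'image(c)
           & " f=" & integer'image(f) & " g=" & integer'image(g);
    wait;           -- expecting: a=2 c=1 f=3 g=1
  end process;
end architecture sim;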

Sequential Assignment
---------------------------------------------------
The 'sequentiality' comes from being able to do this:
a<=b
a<=c
this wouldn't make sense in concurrent VHDL for synthesis because it
represents:

------b----
|------a---------
------c----

Yes and no. The *results* would certainly be the same. The key is
that you are making nonblocking assignment ("signal assignment" in
VHDL-speak) to two DIFFERENT signals, so it doesn't matter in which
order they are executed.
So are you saying that my code was the same because the signals were
different? However, with something like the code shown below, it
wouldn't be the same:

eg 1:
process (clk)
begin
  if rising_edge(clk) then
    a <= b;
  end if;
end process;

process (clk)
begin
  if rising_edge(clk) then
    if (complicated expression) then
      a <= c;
    end if;
  end if;
end process;

is NOT the same as eg 2:

process (clk)
begin
  if rising_edge(clk) then
    a <= b;
    if (complicated expression) then
      a <= c;
    end if;
  end if;
end process;

The first instance requires the value of 'a' to be resolved, while the
second example will just 'overwrite' the value to be assigned to 'a' (I
think).
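
To make sure I understand what 'resolved' means in the first case, I
put together this tiny sketch (mine, not from the thread): two
concurrent drivers on one std_logic signal. When the drivers disagree,
the resolution function should yield 'X'; with std_ulogic instead,
elaboration should fail outright because multiple drivers are illegal,
and I'd expect a synthesiser to reject the multiple drivers as well.

library ieee;
use ieee.std_logic_1164.all;

entity resolve_demo is
end entity resolve_demo;

architecture sim of resolve_demo is
  signal a : std_logic;
begin
  driver1 : a <= '0';  -- first driver of 'a'
  driver2 : a <= '1';  -- second, conflicting driver: 'a' resolves to 'X'
end architecture sim;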

Concurrent assignments in VHDL are approximately the same as continuous
assignments in Verilog, and processes in VHDL correspond to always
blocks in Verilog. Having said that, I would like to discuss some
general design guidelines, referring to Mike's way.

I think what Mike is suggesting is that, using his example:

Instead of writing (in VHDL pseudocode):

shift_result <= shift(input);
mask_result  <= mask_inversion(shift_result);
add_result   <= mask_result + 42;

process (clock)
begin
  if rising_edge(clock) then
    output <= add_result;
  end if;
end process;

We write:

process (clock)
  variable shift_result, mask_result : ...;  -- variables rather than signals
begin
  shift_result := shift(input);
  mask_result  := mask_inversion(shift_result);
  if rising_edge(clock) then
    output <= mask_result + 42;
  end if;
end process;

What is the advantage of the bottom notation? Or does the advantage
only become obvious in more complex examples (admittedly I haven't read
the linked article in depth, in particular the UART example)?

The top example maps into Verilog as:

assign shift_result = shift(input);
assign mask_result = mask_inversion(shift_result);
assign sum_result = mask_result + 42;

always@(posedge clock)
begin
output <= sum_result;
end

While the bottom example maps (I think) into Verilog as:
always@(posedge clock)
begin
shift_result = shift(input);
mask_result = mask_inversion(shift_result);
output <= mask_result + 42;
end

I was taught these basic rules for Verilog coding:

1) Do not use complex combinational relationships in continuous
(assign) assignments. I didn't know why at first, but I found this
quote on a Usenet board:

"If you wanted the latch or combinational logic. I would decide which
construct to use based on the other features available in continuous
assignments vs. procedural assignments. These are delays and strengths.
You can give strengths to wires and not regs (so you would use the
continuous assignment if you wanted this feature). The delay semantics
for a continuous assignment and a procedural assignment are very
different (see the IEEE1364-1995 spec)."

2) If you wish to model combinational logic, use an
always@(/*AUTOSENSE*/) block with blocking assignments (AUTOSENSE
automatically generates a complete sensitivity list, thus avoiding
latch inference)

3) If you wish to model synchronous logic, use an always@(posedge
clock) block with non-blocking assignments.

4) Never mix the two blocks:

Some quotes from Usenet:

"In summary then, one should use non-blocking assignments to
synchronous signals and blocking assignments in combinational
constructs. Don't mix them together. "
...."sequential blocks use non-blocking <=
combinational blocks use blocking ="

Now, the bottom example in Verilog, which is what Mike suggests
variables are 'good' for, mixes both blocking and non-blocking
assignments, which breaks guideline 4. Any comments?

Additionally, I'm considering using the following 'template' for the
code I wish to write:

architecture behavioural of blah is
  -- signal declarations
begin
  -- do the combinational logic part here using concurrent assignments;
  -- assign the result to some 'temporary' signal called comb_out,
  -- for example

  process (clock)
  begin
    if rising_edge(clock) then
      -- possibly do some muxing here to choose the input based on the
      -- state, for example: if reset = '1' then module_out <= 0;
      module_out <= comb_out;
    end if;
  end process;
end behavioural;

Apart from the problems highlighted in the document linked by Mike, are
there any problems with doing this? The additional concern with the
method suggested in that document is that a fair bit of trust is put in
the synthesiser. Usually this wouldn't be a problem, but the chip I'm
designing, an RSA decryption chip, should be resistant to
timing-analysis attacks, which means that every decryption should take
*exactly* the same amount of time. This means that in some instances we
may do an add and store the result in a dummy register even if we don't
require it. I'm afraid that the synthesiser might 'optimise away' this
required architecture, and thus maybe my template is the best approach
- any ideas?
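
One idea I'm considering for that is attaching a vendor 'keep'
attribute to the dummy register - something like the sketch below. The
attribute name and type are tool-specific (syn_keep for Synplify, keep
for Xilinx XST, other tools use different names again), so this is a
guess to be checked against the synthesiser documentation rather than a
portable solution, and the port/signal names are made up:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity dummy_keep_demo is
  port (
    clock      : in std_logic;
    op_a, op_b : in unsigned(31 downto 0)
  );
end entity dummy_keep_demo;

architecture rtl of dummy_keep_demo is
  signal dummy_sum : unsigned(31 downto 0);

  -- Synplify-style keep attribute:
  attribute syn_keep : boolean;
  attribute syn_keep of dummy_sum : signal is true;

  -- Xilinx XST-style alternative:
  -- attribute keep : string;
  -- attribute keep of dummy_sum : signal is "true";
begin
  process (clock)
  begin
    if rising_edge(clock) then
      -- the add happens even when its result is never used; the
      -- attribute asks the tool not to prune dummy_sum
      dummy_sum <= op_a + op_b;
    end if;
  end process;
end architecture rtl;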

Egbert
---------------------------------------------------
On a final note, with Egbert's example, I was initially a bit confused
about why, in the bottom example, b := c synthesises into a flip-flop.
My initial train of thought was "but b can only be assigned c on the
rising edge of the clock, yet the synthesis results tell us that b is
always the value of c!". I think the answer to this is: I remember
reading somewhere (I can't find where now, annoyingly) that variables
retain their values until the next process execution. This means that
while the process isn't executing, b == c; thus, for all time, b == c
(even though the assignment is done within the clocked process).

Thanks again everyone

Taras
 
Hi Mike

I will try to find a simulator somewhere so I can check the assignments
out (probably a good idea :))

I think I understand the point you were trying to make - blocking
assignments improve the readability of the code. I suppose blocking
assignments are less 'dangerous' in VHDL because variables are local to
the process. A big reason (I was told, anyway) not to use blocking
assignments in Verilog is to avoid race conditions.
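
For what it's worth, the kind of local-variable use I have in mind is
something like this sketch (my own, names made up): a variable as a
named intermediate value inside a clocked process, which I'd expect the
synthesiser to turn into combinational logic in front of the register
rather than an extra flip-flop.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity acc is
  port (
    clk  : in  std_logic;
    din  : in  unsigned(7 downto 0);
    dout : out unsigned(7 downto 0)
  );
end entity acc;

architecture rtl of acc is
  signal sum_reg : unsigned(7 downto 0) := (others => '0');
begin
  process (clk)
    variable next_sum : unsigned(7 downto 0);
  begin
    if rising_edge(clk) then
      next_sum := sum_reg + din;  -- takes effect immediately, local to the process
      sum_reg  <= next_sum;       -- only this assignment infers a register
    end if;
  end process;

  dout <= sum_reg;
end architecture rtl;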

Thanks for your help

Taras
 
