Hi everyone
I've just started to learn VHDL, for the purpose of synthesising the
code onto an FPGA. I have previously worked with synthesisable Verilog
at RTL, and am trying to get my head around a couple of the mechanisms
VHDL offers. Warning: the following post is a bit long, but I tried to
make myself as clear as possible. My uncertainty involves race
conditions, and how VHDL handles signal assignments inside processes.
A Verilog example:
always @ (posedge sysclk)
begin
    b <= c;
end

// other code...

always @ (posedge sysclk)
begin
    a <= b;
end
The type of assignment above is called a 'non-blocking' assignment. The
non-blocking assignment mechanism ensures that you get the old value of
'b', not the new one. Of course, if this mechanism were not supported,
you could make sure that the statements requiring the 'old' value of 'b'
(eg: a<=b) appear before the statement that assigns 'b' its new value
(eg: b<=c) - but then you would have to guarantee that ordering
everywhere, which is time consuming and error prone. With non-blocking
assignments, the order in which the two blocks are written doesn't
matter. NB: by contrast, the following blocking version does result in a
race condition, because the order in which the two blocks execute is not
specified:
always @ (posedge sysclk)
begin
    a = b;
end

always @ (posedge sysclk)
begin
    b = c;
end
How does VHDL protect against such race conditions?
On a similar topic, how do you model pipelines in VHDL (notice that the
Verilog code given above models a pipeline)? Perhaps something like the
following code...
process (clock)
begin
    if clock'event and clock = '1' then
        b <= c;
    end if;
end process;

process (clock)
begin
    if clock'event and clock = '1' then
        a <= b;
    end if;
end process;
I'm not sure what would happen here, because the book I've got (VHDL
for Logic Synthesis, by Andrew Rushton) doesn't give an example of what
the simulator does when two processes are triggered by the same event.
As I understand it, when a process is triggered and a signal assignment
is executed inside it, a *transaction* is added to the queue of the
signal being assigned to. The actual update of the signal (Rushton
describes this as "the point where a transaction becomes due on a
signal, that signal becomes active") occurs once the process execution
phase has finished; it happens at the beginning of the event processing
phase (and this update can in turn cause new processes to be
triggered).
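To check my mental model I was planning to run a little
non-synthesisable experiment along these lines (the signal name and the
report/'image syntax are just my own sketch, so it may well be wrong):

entity delta_demo is
end entity;

architecture sim of delta_demo is
    signal s : integer := 0;
begin
    process
    begin
        s <= 1;                                      -- transaction queued; s not updated yet
        report "before delta: " & integer'image(s);  -- should print 0, the old value
        wait for 0 ns;                               -- let one delta cycle complete
        report "after delta: " & integer'image(s);   -- should print 1, the new value
        wait;                                        -- done
    end process;
end architecture;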
How does this queueing of transactions work? Does the simulator queue
an assignment using the value of the RHS at the time the signal
assignment was encountered, or does it re-evaluate the RHS when the
transaction is executed in the event processing stage and assign that
value? The former seems the more logical explanation to me, since it
preserves the property that the processes themselves are concurrent
(although I'm a bit confused by that as well; for now, I'll assume that
'concurrent processes' means it doesn't matter where they appear in the
code - see below). If the second option were used (the value is
re-calculated in the event processing stage from whatever is stored in
the signal at the point the queued transaction is processed - where the
signal becomes "active"), then the two processes above wouldn't be
concurrent. An example illustrates this: if the transaction for the
assignment b<=c were executed first (in the event processing stage),
'b' would get the value of 'c', and 'a' in the assignment a<=b would
then get the 'new' value of 'b' when its transaction was executed.
Concurrency would clearly be broken - if the transaction for a<=b were
processed before the one for b<=c, 'a' would get the 'old' value of 'b'
instead, so the result would depend on the order in which the
transactions happened to be processed.
If the first queueing option I mentioned is actually the one used, the
pipeline example would work ('a' would get the 'old' value of 'b'),
provided that *all processes triggered in the same delta time period
run to completion*. This would mean that during process execution in
the example above, 'b' would get *scheduled* to be assigned the value
of 'c', and 'a' would get *scheduled* to be assigned the (old) value of
'b'; these assignments would then take place in the next event
processing cycle. Is this right? Is this what happens when multiple
processes are triggered at the same time?
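To check this I was thinking of a rough testbench along the following
lines (the clock period, entity name and initial values are just my own
made-up choices); if the scheduling works the way I described, 'a'
should follow 'c' with a two-cycle delay:

library ieee;
use ieee.std_logic_1164.all;

entity pipe_tb is
end entity;

architecture sim of pipe_tb is
    signal clock   : std_logic := '0';
    signal a, b, c : std_logic := '0';
begin
    clock <= not clock after 5 ns;  -- free-running 10 ns clock
    c     <= '1' after 12 ns;       -- change the pipeline input once

    -- first stage
    stage1 : process (clock)
    begin
        if clock'event and clock = '1' then
            b <= c;
        end if;
    end process;

    -- second stage: should pick up the value 'b' had before this edge,
    -- so 'a' lags 'c' by two clock cycles
    stage2 : process (clock)
    begin
        if clock'event and clock = '1' then
            a <= b;
        end if;
    end process;
end architecture;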
What would happen in this case?
process (clock)
begin
    if clock'event and clock = '1' then
        b <= c;
        a <= b;
    end if;
end process;
Would it result in similar behaviour to the case above? (Intuitively I
think it should.) However, if the first queueing algorithm were used
(where signals get assigned 'old' values - the value the signal had
when its assignment was encountered in the process execution stage),
wouldn't this result in non-sequential assignment inside the process
block? 'b' would be assigned the 'old' value of 'c', and 'a' would be
assigned the 'old' value of 'b'; if we switched the order of these two
signal assignments (as in the reordered snippet below), the result
would be the same, so the assignments aren't really sequential.
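That is, I would expect this reordered version to behave identically:

process (clock)
begin
    if clock'event and clock = '1' then
        a <= b;  -- still uses the value 'b' had before this clock edge
        b <= c;
    end if;
end process;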
Now I've confused myself.
Coming from an RTL-for-synthesis point of view, is there any
application of the process construct (whose main feature, as I see it,
is that statements inside it execute sequentially rather than
concurrently) other than modelling a flip-flop? I seem to remember a
book saying that "processes are concurrent statements" - does this just
mean that the position of a process in the code doesn't matter?
Finally, is there any use for variables in synthesisable VHDL at RTL?
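For example, is something like this the kind of thing variables are
meant for - an intermediate value computed and consumed within the same
clock edge? (The entity, names and widths here are just my own made-up
example, so I may have the idiom wrong.)

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity sum3 is
    port (
        clock   : in  std_logic;
        a, b, c : in  unsigned(7 downto 0);
        result  : out unsigned(7 downto 0)
    );
end entity;

architecture rtl of sum3 is
begin
    process (clock)
        variable partial : unsigned(7 downto 0);
    begin
        if clock'event and clock = '1' then
            partial := a + b;        -- variable updates immediately
            result  <= partial + c;  -- uses the value just computed
        end if;
    end process;
end architecture;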
Thanks
Taras