modeling delays in RTL

anand
Hi,

I am new to RTL modeling, and I was reading some RTL files where the
designer had statements like

always @(posedge clk or negedge reset_n)
begin
  if (~reset_n) begin
    s_en_d     <= #1 1'b0;
    s_en_pulse <= #1 1'b0;
    sbusy_d    <= #1 1'b0;
  end
  else begin
    s_en_pulse <= #1 s_en && ~i2s_en_d;
    sbusy_d    <= #1 sbusy;
  end
end


I am totally confused. I thought that RTL designs you actually
synthesize should not have # delays inserted, since the Synopsys
compiler will ignore them.

The question is: if the Synopsys compiler will eventually ignore the
# delays, why bother adding them in the RTL, when all you can do is
observe this "phantom delay" in simulation?

Is there any value in observing delays in simulation? A real-world
code example would be appreciated.

-Thanks
 
anand wrote:
Hi,

I am new to RTL modeling, and I was reading some RTL files where the
designer had statements like

always @(posedge clk or negedge reset_n)
begin
  if (~reset_n) begin
    s_en_d     <= #1 1'b0;
    s_en_pulse <= #1 1'b0;
    sbusy_d    <= #1 1'b0;
  end
  else begin
    s_en_pulse <= #1 s_en && ~i2s_en_d;
    sbusy_d    <= #1 sbusy;
  end
end


I am totally confused. I thought that RTL designs you actually
synthesize should not have # delays inserted, since the Synopsys
compiler will ignore them.
Most people add these lines in the synthesizable code to help
in visualizing the sequence of events (cause and effect) when
running a behavioral simulation. The simulator would otherwise
use zero delays until you ran actual back-annotated timing files.

Sometimes adding this sort of delay fixes simulation problems
when passing signals from one module to another. With zero
clock-to-output delay, the simulation often produces incorrect
results if different modules use different timing resolutions or
different "copies" of the same clock. I've had this problem quite
often with simulation models from the Xilinx CoreGen libraries.
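A minimal sketch of the kind of race gabor describes (module and signal names are mine, not from the post): two flops in a pipeline, each driven by a different "copy" of the same clock, where the second copy arrives slightly late, as a buffered clock model might.

```verilog
`timescale 1ns/100ps

// Hypothetical illustration of a clock-copy race.
module race_demo;
  reg  clk = 0;
  reg  d   = 0;
  wire clk_a = clk;          // copy 1: zero-delay copy of the clock
  wire clk_b;
  assign #0.2 clk_b = clk;   // copy 2: skewed copy (0.2 ns model delay)
  reg  q1, q2;

  // With a plain "q1 <= d;", q1 updates in the same time step as
  // clk_a's edge -- before clk_b's late edge -- so the second flop
  // captures the NEW value and the pipeline loses a stage in
  // simulation.  The #1 holds q1's old value past clk_b's edge,
  // which matches what a real flop with nonzero clock-to-Q does.
  always @(posedge clk_a) q1 <= #1 d;
  always @(posedge clk_b) q2 <= #1 q1;

  always #5 clk = ~clk;      // 100 MHz clock
endmodule
```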

The question is: if the Synopsys compiler will eventually ignore the
# delays, why bother adding them in the RTL, when all you can do is
observe this "phantom delay" in simulation?
There is some value to viewing some delay in simulation, since the
real design always has some delay. As mentioned above, it's nice
to see the effects of a clock edge happen after the edge. It is
also sometimes possible to estimate actual delays before getting
to the back-annotation phase of the design. Then the waveform
output of the simulator can be used for documentation.
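One common refinement of this practice (my own suggestion, not something from this thread) is to keep the behavioral delay behind a single macro, so it can be retuned per technology estimate or compiled away entirely without touching the RTL:

```verilog
// Hypothetical convention: the macro expands to the whole "#1", so one
// +define on the compile command line changes every flop's model delay.
`ifndef DLY
  `define DLY #1
`endif

module dly_flop (
  input  wire clk,
  input  wire reset_n,
  input  wire d,
  output reg  q
);
  always @(posedge clk or negedge reset_n)
    if (~reset_n) q <= `DLY 1'b0;
    else          q <= `DLY d;
endmodule
```

Redefining the macro to expand to nothing (`` `define DLY ``) turns the delays off for a zero-delay run.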

Is there any value in observing delays in simulation? A real-world
code example would be appreciated.

-Thanks
 
anand wrote:

I am totally confused. I thought that RTL designs you actually
synthesize should not have # delays inserted, since the Synopsys
compiler will ignore them.
Any synthesis tool will ignore them.

The question is: if the Synopsys compiler will eventually ignore the
# delays, why bother adding them in the RTL, when all you can do is
observe this "phantom delay" in simulation?
Good question.
When I look at sim waveforms, I know that
all the wave edges don't really happen
at exactly the same time as the clock edge.
Seeing a fixed delta wouldn't really
clarify the functional behavior for me
and I don't like to muck up my code
if I don't have to.

Is there any value in observing delays in simulation?
Not for me. If the actual delay is a problem,
STA will tell me about it.

-- Mike Treseler
 
Thanks, you guys,


Mike Treseler wrote:
anand wrote:

I am totally confused. I thought that RTL designs you actually
synthesize should not have # delays inserted, since the Synopsys
compiler will ignore them.

Any synthesis tool will ignore them.

The question is: if the Synopsys compiler will eventually ignore the
# delays, why bother adding them in the RTL, when all you can do is
observe this "phantom delay" in simulation?

Good question.
When I look at sim waveforms, I know that
all the wave edges don't really happen
at exactly the same time as the clock edge.
Seeing a fixed delta wouldn't really
clarify the functional behavior for me
and I don't like to muck up my code
if I don't have to.

Is there any value in observing delays in simulation?

Not for me. If the actual delay is a problem,
STA will tell me about it.

-- Mike Treseler
 
Sometimes you need to put a delay in a model to mimic the actual
operation of an external device for a testbench. For example, if you
know an SRAM in your testbed environment takes 70 ns to produce data,
it's sometimes very useful to model this to make sure your actual code
doesn't grab the data too soon.
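A minimal sketch of the kind of model John describes (port names, widths, and the 70 ns figure are illustrative):

```verilog
`timescale 1ns/1ns

// Hypothetical behavioral model of an asynchronous SRAM with a 70 ns
// read access time (tAA).
module sram_70ns #(
  parameter ADDR_W = 10,
  parameter DATA_W = 8
) (
  input  wire [ADDR_W-1:0] addr,
  output wire [DATA_W-1:0] dout
);
  reg [DATA_W-1:0] mem [0:(1<<ADDR_W)-1];

  // Delayed continuous assignment: data follows the address 70 ns
  // later (the delay is inertial, so address glitches shorter than
  // the access time are filtered -- roughly what a real SRAM does).
  // A design that samples dout earlier than 70 ns after the address
  // settles will see stale data in simulation, exposing the bug.
  assign #70 dout = mem[addr];
endmodule
```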

John Providenza


Mike Treseler wrote:
anand wrote:

I am totally confused. I thought that RTL designs you actually
synthesize should not have # delays inserted, since the Synopsys
compiler will ignore them.

Any synthesis tool will ignore them.

The question is: if the Synopsys compiler will eventually ignore the
# delays, why bother adding them in the RTL, when all you can do is
observe this "phantom delay" in simulation?

Good question.
When I look at sim waveforms, I know that
all the wave edges don't really happen
at exactly the same time as the clock edge.
Seeing a fixed delta wouldn't really
clarify the functional behavior for me
and I don't like to muck up my code
if I don't have to.

Is there any value in observing delays in simulation?

Not for me. If the actual delay is a problem,
STA will tell me about it.

-- Mike Treseler
 
gabor wrote:
anand wrote:
Hi,

I am new to RTL modeling, and I was reading some RTL files where the
designer had statements like

always @(posedge clk or negedge reset_n)
begin
  if (~reset_n) begin
    s_en_d     <= #1 1'b0;
    s_en_pulse <= #1 1'b0;
    sbusy_d    <= #1 1'b0;
  end
  else begin
    s_en_pulse <= #1 s_en && ~i2s_en_d;
    sbusy_d    <= #1 sbusy;
  end
end


I am totally confused. I thought that RTL designs you actually
synthesize should not have # delays inserted, since the Synopsys
compiler will ignore them.


Most people add these lines in the synthesizable code to help
in visualizing the sequence of events (cause and effect) when
running a behavioral simulation. The simulator would otherwise
use zero delays until you ran actual back-annotated timing files.

Sometimes adding this sort of delay fixes simulation problems
when passing signals from one module to another. With zero
clock-to-output delay, the simulation often produces incorrect
results if different modules use different timing resolutions or
different "copies" of the same clock. I've had this problem quite
often with simulation models from the Xilinx CoreGen libraries.
Adding delays like this to "fix" simulation problems is a good way
to get yourself into trouble very quickly.

That said, the Xilinx CoreGen problems you've noticed without
these delays are real. The Xilinx CoreGen libraries are junk:
they don't model blocking/non-blocking behaviour correctly
at all for sequential elements.

The Xilinx Simprims get the behavior correct, so if you're simulating
with those, you're all set.

But for the front-end simulations using the CoreGen models - don't
trust them at all. I filed a ticket with Xilinx; they admitted the
problem, but no fix has shown up. The workaround we use: the VHDL
CoreGen libraries. Even though we're a Verilog house, we use the
VHDL models here...
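The failure mode Mark is alluding to looks roughly like this (a sketch of the general blocking-assignment bug class, not the actual CoreGen source; all names are mine):

```verilog
// Hypothetical vendor model racing with user code on the same clock.
module race_sketch (
  input  wire clk,
  input  wire d,
  output reg  q_user
);
  reg q_model;

  // Vendor-model style bug: a blocking assignment in a clocked block
  // updates q_model in the active event region, within the same time
  // step as the clock edge.
  always @(posedge clk)
    q_model = d;

  // User flop clocked by the same edge.  Whether it captures the old
  // or the new q_model depends on which always block the simulator
  // happens to evaluate first -- a race.  Had the model been written
  // as "q_model <= #1 d;" (or even plain "q_model <= d;"), this flop
  // would reliably capture the pre-edge value.
  always @(posedge clk)
    q_user <= q_model;
endmodule
```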

Regards,

Mark
 
