Gate-Level Simulation vs. Static Timing Analysis

Hi,
This is a question on the relevance and accuracy of gate-level
simulation versus static timing analysis (STA). In post-synthesis
gate-level simulation, the gates supplied by the technology library
come with their own module path delays, as provided by the library
designer. Note that there is no concept of distributed delays here;
in fact, if there were, who would supply that data across all the
hierarchies? STA at this stage, on the other hand, has both the
library-provided delays (in Liberty format, for example) and a wire
load model that accounts for the net delays.
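
For concreteness, the module path delays I mean look something like
this in a library cell model (the cell name and delay values below are
invented, not from any real library):

  // Illustrative standard-cell model: the specify block carries the
  // module path delays supplied by the library designer.
  module NAND2_X1 (output Y, input A, input B);
    nand (Y, A, B);
    specify
      // (input => output) = (rise delay, fall delay)
      (A => Y) = (0.042, 0.038);
      (B => Y) = (0.045, 0.040);
    endspecify
  endmodule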

My question: Does this mean that STA is more accurate than gate level
simulation?

Any help appreciated,
Arpan
 
Arpan wrote:

My question: Does this mean that STA is more accurate than gate level
simulation?

STA covers timing constraints on
all paths without a testbench.

RTL simulation quickly verifies function up to the testbench coverage.

Gate simulation verifies timing for some paths,
but is always less complete than STA.

-- Mike Treseler
 
Arpan wrote:
Neither the library designer, nor the RTL or the verification
engineer, will generate the distributed delay information/SDF. On the
other hand STA will use both net delay and cell delay. So does this
mean that technologically STA is a better estimate than gate-level
sim? Then why do design houses still do dynamic simulation? Clearly I
am missing something here.

Dynamic simulation is done to catch stupid STA setup errors. It's
quite easy to forget, for example, a duty-cycle constraint somewhere
it is needed, and that is easily visible in a timing simulation (a
sketch of a simple monitor follows). Dynamic simulation is also a good
way to check that the wakeup sequence of the chip works. In many flows
the RTL simulations do not test the BIST, BISR, PLL, etc. logic, which
is inserted directly into the netlist.
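
As a rough sketch (module name and limits invented), a testbench
monitor that makes a bad duty cycle visible in a timing simulation
could look like this:

  // Hypothetical duty-cycle monitor: warns whenever the clock-high
  // phase falls outside 40-60% of the measured period.
  module duty_monitor (input clk);
    time t_rise = 0, t_prev_rise = 0, t_fall = 0;
    real period, high_frac;
    always @(negedge clk) t_fall = $time;
    always @(posedge clk) begin
      t_prev_rise = t_rise;
      t_rise      = $time;
      if (t_prev_rise > 0) begin   // need one full period measured
        period    = t_rise - t_prev_rise;
        high_frac = (t_fall - t_prev_rise) / period;
        if (high_frac < 0.40 || high_frac > 0.60)
          $display("%t: duty cycle %f out of spec", $time, high_frac);
      end
    end
  endmodule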

--Kim
 
On Thu, 11 Jun 2009 21:24:23 -0700 (PDT), Arpan <arpansen@gmail.com>
wrote:
That STA will cover more ground is right, since gate-level sim may
not be able to exercise all potential path combinations for setup/hold
violations -- this is reasonable. My question is on the correctness of
the data. Without the net delays, all a gate-level sim is doing is
using the cell delays from the technology library. Neither the library
designer, nor the RTL or the verification engineer, will generate the
distributed delay information/SDF. On the other hand STA will use both
net delay and cell delay. So does this mean that technologically STA
is a better estimate than gate-level sim? Then why do design houses
still do dynamic simulation? Clearly I am missing something here.

What you're missing is that the back-annotated (with an SDF file)
gate-level simulations do take net delays into account. SDF files are
generated by a delay calculation tool to which the inputs are the
technology library and the P&R'ed database. The delay calculator uses
the latter to calculate all wire-loads. That information is used to
calculate the cell input slew and output delays. SDF files can also
contain interconnect delays in addition to gate delays, so actually
there is not much information missing in a gate-level simulation.
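
To make that concrete: the testbench applies the SDF with the
$sdf_annotate system task, and the annotated values then override the
specify-block delays inside the cell models. File, instance, and log
names below are placeholders.

  module tb;
    my_chip dut ();  // placeholder: the P&R'ed gate-level netlist
    initial
      // "MAXIMUM" selects the max corner of each (min:typ:max) triplet
      $sdf_annotate("layout/design.sdf", dut, , "sdf.log", "MAXIMUM");
  endmodule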

There are some cases which are not covered by STA, so dynamic
simulation may be useful. One of the more important cases is paths
which the designer has marked as false but which still need to be
verified. An example of this is an asynchronous FIFO: if one has a
real async FIFO with fully unrelated clocks for read & write, then
doing a dynamic analysis of it is a very good idea.
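
For background on why those paths get marked false in the first
place: the FIFO pointers are typically gray-coded before crossing
domains, so only one bit changes per increment and a two-flop
synchronizer per bit is safe. A rough sketch (names are invented):

  module ptr_cross #(parameter W = 4)
                    (input wr_clk, input rd_clk,
                     input [W-1:0] wr_ptr_bin,
                     output reg [W-1:0] wr_ptr_gray_rd);
    reg [W-1:0] wr_ptr_gray, meta;
    always @(posedge wr_clk)
      wr_ptr_gray <= (wr_ptr_bin >> 1) ^ wr_ptr_bin;  // binary -> gray
    always @(posedge rd_clk) begin                    // 2-flop sync
      meta           <= wr_ptr_gray;
      wr_ptr_gray_rd <= meta;
    end
  endmodule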

Muzaffer Kal

DSPIA INC.
ASIC/FPGA Design Services
http://www.dspia.com
 
On Jun 12, 3:15 am, Mike Treseler <mtrese...@gmail.com> wrote:

STA covers timing constraints on
all paths without a testbench. [...]

Hi,
That STA will cover more ground is right, since gate-level sim may
not be able to exercise all potential path combinations for setup/hold
violations -- this is reasonable. My question is on the correctness of
the data. Without the net delays, all a gate-level sim is doing is
using the cell delays from the technology library. Neither the library
designer, nor the RTL or the verification engineer, will generate the
distributed delay information/SDF. On the other hand STA will use both
net delay and cell delay. So does this mean that technologically STA
is a better estimate than gate-level sim? Then why do design houses
still do dynamic simulation? Clearly I am missing something here.

Regards,
Arpan
 
Arpan wrote:

My question is on the correctness of
the data. Without the net delays, all a gate-level sim is doing is
using the cell delays from the technology library.

For a synchronous design with reliable design rules,
my successful RTL sim tells me that
the design will work on the chip as long as the Fmax
constraint is met in synthesis.

So I have to run synthesis once in a while.

As others have pointed out,
a gate simulation is a necessary checkoff item,
but it doesn't belong in my design/debug flow
because it is slow and incomplete.

-- Mike Treseler
 
On Jun 12, 10:36 am, Muzaffer Kal <k...@dspia.com> wrote:

What you're missing is that the back-annotated (with an SDF file)
gate-level simulations do take net delays into account. SDF files are
generated by a delay calculation tool to which the inputs are the
technology library and the P&R'ed database. [...]

Hi,
Thank you for the response. This is precisely what I was looking for.
Couple of questions:
1) From what you mention, it looks like gate-level simulation only
makes sense in the post-layout scenario. What about post-synthesis --
does gate-level simulation have any significance post-synthesis but
pre-layout?
2) If I understand this right, the gate-level simulation reads in an
SDF file which is generated by a P&R tool. I believe PT or DC can also
generate the SDF file. If that is the case, can't the net delays be
incorporated at this stage for a quick approximate simulation?
Also, what happens to the specify blocks that are coded as part of
the library modules? Will the timing from the SDF file override them,
or will the better/worse of the SDF/specify timings be taken during
simulation?

Best Regards,
Arpan
 
Hi All,

my $0.02 added

1) From what you mention, it looks like gate-level simulation only
makes sense in the post-layout scenario. What about post-synthesis --
does gate-level simulation have any significance post-synthesis but
pre-layout?

Your post-synth timings are really just "invented" numbers.
The timing calculation done by synthesis is not that accurate, as it
is completely missing the real wire loads.
DC-T or DC-G will have a better correlation with the backend, but the
only timing numbers you can really trust are the ones from signoff.
So if your intent is to verify that your design is closed,
non-post-layout timings are useless, as they do not represent
physical information.

If your STA is well done and your design is synchronous, then there
is no need to run a gate-level sim at all. Also, STA can model
phenomena like on-chip variation (OCV), which are impossible to
simulate with SDF.

We used to run a back-annotated gate-level sim to verify those parts
of the design that are asynchronous to each other. However, there are
many formal tools to run those checks for you now (for example from
Atrenta), so even that use case is fading.

The only reason I can think of is cases where somebody did something
VERY UNSAFE, for example creating race conditions, clocking flops
with non-glitch-free signals, or using flops with async set AND reset
that are not properly controlled. These often simulate OK, but then
the actual cell delays create glitches that make a gate sim fail. I
have seen this happen in one case where a person synthesized a
structure that he thought would create muxes. DC built those muxes
out of generic gates, so paths that were thought to be constant were
in reality glitching, and that was a real bad bug he had to fix (a
sketch of the hazard follows below).
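
To illustrate the mux case (a contrived sketch, not the actual
design): a mux built from generic AND-OR gates has a static-1 hazard.
With a == b == 1 the output should stay at 1, but the unequal delays
on the sel and ~sel paths can let it glitch low when sel toggles.
Zero-delay RTL sim never shows this; a back-annotated gate sim can.

  module glitchy_mux (output y, input a, input b, input sel);
    wire sel_n, t0, t1;
    not #1 (sel_n, sel);   // inverter delay opens the hazard window
    and #1 (t0, a, sel);
    and #1 (t1, b, sel_n);
    or  #1 (y, t0, t1);    // y can drop low briefly as sel changes
  endmodule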

Also, with the complexity of ASICs easily exceeding 20 Mgates at 65nm
(the graphics people are already at 100 Mgates), it is plain
impossible to simulate a full back-annotated gate-level netlist.

The only use we still have for gate-level sims is to validate the
ATPG patterns, and most of those are simulated with no timings.

2) If I understand this right, the gate-level simulation reads in an
SDF file which is generated by a P&R tool.

Correct.

I believe PT or DC can also generate the SDF file. If that is the
case, can't the net delays be incorporated at this stage for a quick
approximate simulation?

Again, because those timings are not "real", in the sense that the
physical implementation will change them. I have one device where DC
closes timing using only HVT cells; P&R does not close timing even
using 100% SVT on some paths, and this is all the effect of wire
loads. Only DC-T gave a usable netlist, and that was definitely not
HVT-only.


Also, what happens to the specify blocks that are coded as part of
the library modules? Will the timing from the SDF file override them,
or will the better/worse of the SDF/specify timings be taken during
simulation?

The former: the specify delays will be overridden by the SDF values.

Ciao, Marco.
 
Hi Mike,

Mike Treseler wrote:
For a synchronous design with reliable design rules,
my successful RTL sim tells me that
the design will work on the chip as long as the Fmax
constraint is met in synthesis.

How do you verify async data transfers?
In order to get good synth-backend correlation, do you synthesize with
DC-T or DC-G?

Ciao, Marco.
 
hairyotter wrote:

How do you verify async data transfers?

I use only single-clock modules
with known-good synchronizer sub-modules
on the input side (a minimal sketch below).
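
A minimal sketch of such a synchronizer
(single-bit signals only; buses need
gray coding or a handshake):

  module sync2 (input clk, input d_async, output reg d_sync);
    reg meta;  // first stage may go metastable; it gets a
               // full cycle to settle before being sampled
    always @(posedge clk) begin
      meta   <= d_async;
      d_sync <= meta;
    end
  endmodule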

do you synthesize with
DC-T or DC-G?

Quartus.

-- Mike Treseler
 
