Effect of `timescale precision on simulation speed


Allan Herriman

Guest
Hi,

In this paper,
http://www.sunburst-design.com/papers/CummingsHDLCON2002_Parameters_rev1_2.pdf
Cliff Cummings says "The `timescale directive can have a huge impact
on the performance of most Verilog simulators." He goes on to say
that "adding a 1ps precision to a model that is adequately modeled
using either 1ns or 100ps time_precisions can increase simulation time
by more than 100% and simulation memory usage by more than 150%."


I thought that HDL simulators were discrete-event based, with time jumping
from one set of scheduled events to the next.
Each jump just adds an integer to the current time, and the size of that
integer (e.g. 1 ns vs 1000 ps) shouldn't make a difference to the (real)
time taken to perform the addition.

What then, is the mechanism for the slowdown that Cliff observed?

Is it that the coarser precision has the effect of reducing the number
of events?
E.g.
I have signals scheduled to change at 1 ps, 3 ps, 15 ps and 33 ps
(which requires 4 "jumps")
Changing from 1 ps to 10 ps precision turns these into 0 ps, 0 ps, 10
ps and 30 ps (which requires 2 "jumps", a saving of 2 "jumps").
... but this makes the simulation faster by changing the semantics of
the simulation, which sounds like cheating!
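
To make the arithmetic concrete, here is a small Python sketch (mine, not from the paper) of how truncating event times to a coarser precision merges scheduled times into fewer slots:

```python
# Sketch: rounding event times to a coarser precision merges events into
# fewer time slots, so the scheduler makes fewer "jumps" to advance time.

def quantize(times_ps, precision_ps):
    """Truncate each event time to the simulator's time precision."""
    return [t // precision_ps * precision_ps for t in times_ps]

events = [1, 3, 15, 33]                      # scheduled changes, in ps

fine   = quantize(events, 1)                 # 1 ps precision
coarse = quantize(events, 10)                # 10 ps precision

# A "jump" advances the current time (0) to each later distinct slot.
fine_jumps   = len({t for t in fine   if t > 0})   # slots 1, 3, 15, 33
coarse_jumps = len({t for t in coarse if t > 0})   # slots 10, 30

print(fine, fine_jumps)      # [1, 3, 15, 33] 4
print(coarse, coarse_jumps)  # [0, 0, 10, 30] 2
```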


Is there another reason for the slowdown?

Thanks,
Allan.
 
Allan Herriman <allan.herriman.hates.spam@ctam.com.au.invalid> wrote in message news:<u7cavv8nihgh904pvjpesulk4ljn3pibut@4ax.com>...
> What then, is the mechanism for the slowdown that Cliff observed?
I would guess that he was looking at simulators that use a "time wheel"
as the data structure for pending events. I believe that Verilog-XL used
one in its gate engine. A time wheel can make scheduling events very
fast in a typical simulation, but it degrades badly if the precision is
so fine that most potential time slots are unoccupied. You can
check the simulation literature if you really care about this stuff.

Simulators that don't use this data structure would not show this effect.
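
The degradation is easy to see in a toy model. Below is a hedged Python sketch of a time wheel (illustrative only; no real simulator's implementation is claimed): an array of event-list slots indexed by time modulo the wheel size. Scheduling is O(1), but advancing time steps through slots one tick at a time, so an overly fine precision means visiting many empty slots per useful event:

```python
from collections import deque

class TimeWheel:
    """Toy time wheel: one event bucket per tick, indexed modulo size."""

    def __init__(self, size):
        self.size = size
        self.slots = [deque() for _ in range(size)]
        self.now = 0

    def schedule(self, delay, event):
        # Only handles delays < size; real wheels spill far-future
        # events into a secondary overflow structure.
        assert 0 < delay < self.size
        self.slots[(self.now + delay) % self.size].append(event)

    def advance(self):
        # Step one tick at a time until a non-empty slot is found.
        # With overly fine precision, this loop visits many empty slots.
        steps = 0
        while True:
            self.now += 1
            steps += 1
            slot = self.slots[self.now % self.size]
            if slot:
                events = list(slot)
                slot.clear()
                return self.now, events, steps

wheel = TimeWheel(size=1024)
wheel.schedule(500, "gate-output-change")   # 499 empty slots precede it
print(wheel.advance())    # (500, ['gate-output-change'], 500)
```

The same delay expressed at 10x coarser precision would sit only 50 slots away, cutting the empty-slot traversal tenfold.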

> Is it that the coarser precision has the effect of reducing the number
> of events?
You will see some effect from this. Most data structures for tracking
schedules become slower to access as the number of entries they have to
track increases. Some of them are much faster at adding new events to
an existing time than adding a new time. And there is some processing
involved in advancing time, as you suggested. These effects can be
significant in some simulations.
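
One common arrangement that shows both effects is a priority queue of distinct times, each holding a bucket of events. The sketch below is illustrative (names are mine, not from any simulator): appending to an existing time's bucket is O(1), while a new distinct time costs an O(log n) heap insert, so fewer distinct times means cheaper scheduling on average:

```python
import heapq

class EventQueue:
    """Min-heap of distinct times, with one event bucket per time."""

    def __init__(self):
        self.heap = []          # distinct scheduled times
        self.buckets = {}       # time -> list of events at that time

    def schedule(self, time, event):
        bucket = self.buckets.get(time)
        if bucket is None:                 # new time: O(log n) heap push
            self.buckets[time] = [event]
            heapq.heappush(self.heap, time)
        else:                              # existing time: O(1) append
            bucket.append(event)

    def pop_next(self):
        time = heapq.heappop(self.heap)    # advance to next scheduled time
        return time, self.buckets.pop(time)

q = EventQueue()
for t, e in [(10, "a"), (30, "b"), (10, "c")]:
    q.schedule(t, e)
print(q.pop_next())   # (10, ['a', 'c'])
print(q.pop_next())   # (30, ['b'])
```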

> Changing from 1 ps to 10 ps precision turns these into 0 ps, 0 ps, 10
> ps and 30 ps (which requires 2 "jumps", a saving of 2 "jumps").
> ... but this makes the simulation faster by changing the semantics of
> the simulation, which sounds like cheating!
Which is why it can only be done if the user requests it by setting the
timescale precision appropriately. If you need that precision, then don't
do it. If you don't need that precision, then you can allow the simulator
to run faster. If the simulator doesn't abide by the precision that you
have specified, then it really is cheating and that would be a bug.
 
On 2 Jan 2004 10:46:35 -0800, sharp@cadence.com (Steven Sharp) wrote:

[snip]

Thanks Steve.
 
On Fri, 02 Jan 2004 20:12:45 +1100, Allan Herriman
<allan.herriman.hates.spam@ctam.com.au.invalid> wrote:
[snip]

> What then, is the mechanism for the slowdown that Cliff observed?
For accurate gate-level designs that use SDF annotation, higher
precision can make a large difference in time and space. The space
difference occurs because it is possible to pack delays (a table of
16 values is often needed for SDF delays) into 1 or 2 bytes instead
of 4-byte or full 8-byte long long time values. The time difference
occurs because, with SDF rounding at a coarser time precision, many
internal edges fall within one time step (tick), eliminating the need to
schedule separate events. I do not think the maximum number of pending
(scheduled but not yet matured) events makes much performance difference,
because event queues use O(log n) algorithms, but Verilog event scheduling
and processing overhead is high, so additional events do slow down
simulation.
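
The space argument can be illustrated with some arithmetic (the delay values below are made up for illustration): at a coarse precision, typical gate delays are small integers that fit in one byte per table entry, while at 1 ps precision the same delays need wider storage:

```python
# Illustrative numbers only: how time precision affects the integer
# range, and hence the storage width, of annotated gate delays.

delays_ns = [0.3, 0.5, 1.2, 2.5]          # hypothetical gate delays

def ticks(delay_ns, precision_ns):
    """Convert a delay to an integer count of simulator ticks."""
    return round(delay_ns / precision_ns)

# At 100 ps precision, each delay fits in one unsigned byte (0..255):
coarse = [ticks(d, 0.1) for d in delays_ns]     # [3, 5, 12, 25]
assert all(t <= 255 for t in coarse)

# At 1 ps precision the same delays reach 2500 ticks, forcing at
# least 2 bytes per entry (often 4 or 8 in practice):
fine = [ticks(d, 0.001) for d in delays_ns]     # [300, 500, 1200, 2500]
assert any(t > 255 for t in fine)

print(coarse, fine)
```

Multiply that width difference across a 16-entry table per gate and millions of gates, and the memory-usage gap Cummings measured becomes plausible.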

If SDF is not used, I do not think `timescale will affect simulation
space or time.

You can see the C source for the P1364 Verilog algorithms in our
GPL Cver simulator at www.pragmatic-c.com/gpl-cver in files v_del.c
and v_sim.c.
/Steve


--
Steve Meyer Phone: (612) 371-2023
Pragmatic C Software Corp. email: sjmeyer@pragmatic-c.com
520 Marquette Ave. So., Suite 900
Minneapolis, MN 55402
 
