timescale resolution - propagation across module boundary

Hi,

I have a simple (experimental) design consisting of testbench and DUT.
The timescale compiler directive in the testbench is 1ns / 1fs, while
that in the DUT is 1ns / 1ps.

My prior understanding of the rules concerning timescale resolution
was that the finest resolution set in any module is applied to all
modules. So for the example given, I would expect all delays in the
DUT to be timed to a resolution of 1 fs.

However, my simulation shows that within the DUT, only those delays
with a dependence on input pin transitions are resolved to 1fs, while
delays independent of the inputs are resolved to 1ps. For example, a
clock divider running off the input clock has a period resolved to
1fs, while a delay such as #1.0001 (units = 1ns) is rounded to 1.000.
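
To give a flavour of the structure, here is a cut-down sketch of the
kind of thing I mean (module and signal names invented for
illustration, not my actual code):

  // testbench.v
  `timescale 1ns/1fs
  module testbench;
    reg clk = 1'b0;
    // half-period with a femtosecond component, honoured because
    // this file's precision is 1 fs
    always #5.000001 clk = ~clk;
    dut u_dut (.clk(clk));
  endmodule

  // dut.v
  `timescale 1ns/1ps
  module dut (input clk);
    // the divider's timing follows the incoming clock edges, so it
    // ends up on the 1 fs grid set by the testbench
    reg div2 = 1'b0;
    always @(posedge clk) div2 <= ~div2;

    // this literal is rounded to the local 1 ps precision:
    // 1.0001 ns -> 1.000 ns
    reg pulse = 1'b0;
    initial #1.0001 pulse = 1'b1;
  endmodule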

So it seems that two different timescale resolutions are being used
within a single module. This doesn't seem like a good idea. Am I
missing something? Is this what you would expect? If so, is this
defined by the LRM or is it simulator dependent? Your views would be
appreciated.

I can provide some sample code if it would help. I am using Cadence NC-
Sim.

Thanks,
Gav.
 
On Fri, 30 Oct 2009 10:31:55 -0700 (PDT), Gav wrote:

I am using Cadence NC-Sim.
Hmmm. NC generally does separate compilation of the various
source files - I assume you're using it that way. So here is
what I think is happening. If I get time this weekend, which
is unlikely, I'll double-check my understanding against the
language standard. But here's my best guess anyhow.

  // file A.v:
  `timescale 1ns/10ps  // just as an example
  module A(input b, output reg a);
  ...
    initial #1.001 a = b;
  ...
  endmodule

The compiler looks at your #1.001 in the context of the
current `timescale and rounds it down to 1.00ns because of
the local resolution. 1.00ns = 100x10ps.

Now let's instantiate this into another module...

  // file B.v
  `timescale 1ns/1ps
  module B;
    A inst_A(....);
  endmodule

Now your 100x10ps delay in A must be calculated using the
finest timescale around, so it becomes 1000x1ps and the
picosecond resolution in your original #1.001 delay is lost.
However, the simulator as a whole is indeed working at 1ps
granularity, so picosecond delays in module B will be
respected.
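
If you want to see what each module actually ended up with, the
standard $printtimescale task and %t formatting will tell you.
Something along these lines (untested, and assuming A above is
filled out into a complete module) ought to do it:

  `timescale 1ns/1ps
  module B;
    reg  b = 1'b0;
    wire a;
    A inst_A (.b(b), .a(a));

    initial begin
      $printtimescale(B);         // reports 1 ns / 1 ps
      $printtimescale(B.inst_A);  // reports 1 ns / 10 ps
      @(a);                       // wait for the #1.001 assignment in A
      // %t is scaled to the finest precision in the design (1 ps
      // here), so the rounded delay prints as 1000, not 1001
      $display("a changed at %t", $realtime);
    end
  endmodule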

Personally I think this makes perfect sense. If you need
1ps resolution in some module, then you need to use a
timescale of at least that precision *in that module*.
If you set a coarser precision in a given module's source
file, you should expect the time delays in that file to
be rounded-off in accordance with the reduced precision.

If you were to concatenate all your Verilog files into
one huge source text and compile it in a single hit - as
Verilog-XL used to do, and as VCS still does today in its
default settings - I'm not sure what the rules are, but
they might well be different.

--
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services

Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
jonathan.bromley@MYCOMPANY.com
http://www.MYCOMPANY.com

The contents of this message may contain personal views which
are not the views of Doulos Ltd., unless specifically stated.
 
On Oct 30, 5:39 pm, Jonathan Bromley <jonathan.brom...@MYCOMPANY.com>
wrote:
I think you hit on the issue, i.e. compile time versus
simulation run time. # delays are resolved by the compiler,
but the simulation must run at the finest resolution in
any module in order to propagate that timing through the
design. I'd be surprised if the # delays are handled
differently even when the compiler strings the sources
together.
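
A quick (untried) sketch of what I mean, with made-up names: the
literal inside the DUT gets rounded to its own 1 ps precision at
compile time, but a sub-picosecond offset arriving on an input is
carried through at run time:

  `timescale 1ns/1ps
  module dut (input trig, output reg q);
    // scaled at compile time against this file's 1 ps precision:
    // 1.0001 ns becomes 1.000 ns
    always @(posedge trig) q <= #1.0001 1'b1;
  endmodule

  `timescale 1ns/1fs
  module tb;
    reg  trig = 1'b0;
    wire q;
    dut u_dut (.trig(trig), .q(q));

    initial begin
      #0.000100 trig = 1'b1;  // 100 fs offset, honoured at run time
      @(q);
      // the offset rides through the DUT: q changes at
      // 100 fs + 1.000 ns = 1000100 fs
      $display("q changed at %t", $realtime);
    end
  endmodule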

Regards,
Gabor
 
On Fri, 30 Oct 2009 10:31:55 -0700 (PDT), Gav
<justforweblogin@gmail.com> wrote:

Here is what the standard says: "The `timescale compiler directive
specifies the unit of measurement for time and delay values and the
degree of accuracy for delays in all modules that follow this
directive until another `timescale compiler directive is read. If
there is no `timescale specified or it has been reset by a `resetall
directive, the time unit and precision are simulator specific. It
shall be an error if some modules have a `timescale specified and
others do not."

So if you have:

`timescale 1ns/1fs
module tb;

endmodule

`timescale 1ns/1ps
module DUT;
endmodule

then the standard says that exactly what you observe should happen.

This makes life easier for both the designer and the compiler writer.
The designer can see what the delays in their own module mean without
being surprised when someone adds a `timescale somewhere else. The
compiler writer can support separate compilation and/or keep the
elaborate/simulate stages separate. Propagating the smallest time
step across modules would introduce surprises for the user and
difficulties for the optimizer.
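
To make it concrete, an untried sketch (module names and the literal
are made up): the same delay literal is scaled against each module's
own precision, no matter how the files are grouped for compilation:

`timescale 1ns/1ps
module coarse;
  // 1.0001 ns at 1 ps precision rounds to 1.000 ns
  initial #1.0001 $display("coarse fired at %t", $realtime);
endmodule

`timescale 1ns/1fs
module fine;
  // the same literal at 1 fs precision keeps all its digits: 1.0001 ns
  initial #1.0001 $display("fine fired at %t", $realtime);
endmodule
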
--
Muzaffer Kal

DSPIA INC.
ASIC/FPGA Design Services

http://www.dspia.com
 
Thank you all for your replies, they are very helpful. The distinction
between what happens at compile time vs. run time has clarified the
issue for me. I can proceed now with a little less FUD!

Thanks again,
Gav
 
