Parameters of Parameters

lingwitt wrote:
Hello.

Consider:

module A (input theInput, output theOutput);
parameter delay = 2;

// Do something

endmodule

module B (input theInput, output theOutput);
parameter delay = a.delay + 5;

A a(theInput, theOutput);

// Do something

endmodule

The idea is that I'd like to know some kind of
total accumulative delay during elaboration.

Unfortunately, the previous example isn't valid
syntax.


Is there some way of achieving what I want?


I could use `defines, but that's gross, because
each module would need a unique identifier for
the delay.

Thanks.
 
On Tue, 01 May 2007 15:02:21 +0100, Jonathan Bromley
<jonathan.bromley@MYCOMPANY.com> wrote:

I suspect I could just about keep you happy (on this
issue) if a parent module were permitted to call any
constant function in a child module instance.
Sorry, this is a wild goose chase. As Steven Sharp
correctly pointed out, there's no difficulty about reaching
into child modules to get parameter values (or call functions),
at least in simulation. The restriction is that you can't
do that in *constant expressions*, so - for example - you
can't use it to give a parameter its value. This restriction
would apply equally to calling a constant function as to
getting the value of a parameter.

However, I still agree that the ability to pull constant
information out of a child module and use it in the parent's
logic can be useful; and, if synthesis tools were ever to
permit such a thing, they could presumably do appropriate
optimisations on discovering that the value is constant.
--
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services

Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
jonathan.bromley@MYCOMPANY.com
http://www.MYCOMPANY.com

The contents of this message may contain personal views which
are not the views of Doulos Ltd., unless specifically stated.
 
On May 2, 6:48 am, Jonathan Bromley <jonathan.brom...@MYCOMPANY.com>
wrote:
However, I still agree that the ability to pull constant
information out of a child module and use it in the parent's
logic can be useful; and, if synthesis tools were ever to
permit such a thing, they could presumably do appropriate
optimisations on discovering that the value is constant.
Well, I'm certainly glad to hear this!

I can't really hope to argue with you on the same level
of experience, and since I can't figure out any better
way to solve my problem (besides the preprocessor),
I was starting to wonder if I'm a Dummkopf!

Thank you for your insightful posts.
 
Hi Lingwitt,
please see my comments below.

In any case, hardware description languages
are meant to model real hardware and its design.

Lower-level modules are designed with certain
characteristics that must be taken into account
when such modules are used in higher-level modules.

You are basically saying that there is no good
way to model that!

Why should a designer micromanage characteristics
like latency, when such details could be easily
abstracted away without logic? Latency is actually
a perfect example:

Several modules may have a latency
of one. The next higher-level modules
are made from several of these lower-
level modules at a time, so that they
have a latency of, say, 3. Then another
set of higher-level modules is constructed
from those, and so on.

Each higher-level module should act as
a black box; its specification provides
a latency.
Specification is what you want, and implementation is one of the ways
to achieve what you want. You have to decide on the maximum latency at
the top module, and then write an implementation for it, introducing the
hierarchy of lower-level modules. Then there is a need to verify that
the implementation is designed according to the specification.
This is the commonly used top-down (specification -> implementation)
approach to hardware design. The other approach, presented in your
example (bottom->top), basically means that there is no specification,
or that it is inaccurate or incomplete. In other words, you don't know
what you want to achieve, or your goal is very flexible.

What if a new lower-power design changes
the latency of the lowest-level module?
It may be good or bad. The formal criterion for this decision is the
specification document. And there are tools (STA, timing
simulations) to verify such design properties against the
specification document.

All the higher-level modules should handle
the change automatically. Why should a human
need to figure that kind of stuff out?

You advocate an approach that favors the tool
rather than the human.
Hardware design practice shows that this approach is perfectly right.
There is a strong need to formalize design development (and also
design specification) in order to replace error-prone humans with
tools as much as possible ;)

Regards,
-Alex
 
lingwitt <lingwitt@gmail.com> writes:

On May 1, 3:38 am, Jonathan Bromley <jonathan.brom...@MYCOMPANY.com
wrote:
On 30 Apr 2007 17:16:05 -0700,

lingwitt <lingw...@gmail.com> wrote:
I require going from the leaf up.

In that case, drive a constant value out through a
regular port of the lower-level module. Synthesis
will strip it away, so it costs you nothing.
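A minimal sketch of that workaround (the module names, port names and latency values here are invented for illustration): the child reports its fixed latency on an ordinary output, which synthesis folds away as a constant, and the parent accumulates it as a signal:

```verilog
// Sketch only: the child exposes its latency as a constant-driven port.
module child_blk (
    input  wire       clk,
    input  wire       d,
    output reg        q,
    output wire [7:0] latency_o   // constant; stripped by synthesis
);
    assign latency_o = 8'd1;      // this module's fixed latency
    always @(posedge clk) q <= d;
endmodule

module parent_blk (
    input  wire       clk,
    input  wire       d,
    output reg        q,
    output wire [7:0] latency_o
);
    wire       c_q;
    wire [7:0] c_lat;
    child_blk u_child (.clk(clk), .d(d), .q(c_q), .latency_o(c_lat));
    assign latency_o = c_lat + 8'd1;  // accumulate upward as a signal
    always @(posedge clk) q <= c_q;
endmodule
```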

This is basically a hack for achieving what parameters
should allow:

(1) constants
(2) no cost

....

You advocate an approach that favors the tool
rather than the human.
It seems that there is a basic misunderstanding of the "intent"/
"purpose" of parameters in Verilog. Parameters are not mechanisms for
doing calculations of constants, although parameters do allow
expressions in their definitions to make their intended use simpler.
Parameters are simply a way of writing generic (parameterizable)
modules that can be instantiated to take on a limited number of
predefined shapes. One uses parameters to get shape information into
a cell, not out of it.

Now, it may seem counter-intuitive to you that the tool should
allow/require the design to be micro-managed at that level, but my
experience is that the hardware designers prefer it that way (their
managers certainly do), at least if you are working like I am with a
group assembling devices consisting of billions of transistors made up
of numerous identical parts and using current validation techniques
which only give statistical certainty of correctness. To make up for
that statistical uncertainty, you want certainty that one of those
parts doesn't "query its environment" and instantiate itself in some
non-standard way (unless *YOU* have specifically told it to do so).
That way, as your statistical testing approaches 100%, your chance of
the circuit working also approaches 100% (and you can know in advance
which parts are non-standard and need a different kind of testing).

That said, Verilog is a modelling language (and was so before it was a
design language). If latency is something you want to model and to
calculate, model it as Jonathan suggests. That's the intent of the
language. You can model anything you want and get the answers. You
can then use those answers to design your system. Note that that is a
two-stage (or iterative) process. You can build your model to check
that you haven't laid out something that breaks your latency rules,
but you don't want to build something that attempts to fix your design
for you. It simply should report that the latency criteria have not
been met, but the designer (and not the device) should then determine
how to resolve the problem.

As a result, Verilog is probably not the language to describe a
series of self-assembling autonomous parts that configure
themselves into a solution. Instead it builds nice static models of
things that we have carefully described, so that we can validate that
those models perform as expected, because we don't trust ourselves not
to have made errors, errors that can be uncovered by testing.

If you want to build some kind of self-configuring collection, use
another language (e.g. Perl)* and have it generate the Verilog that
describes the configured system. That way, your configured system still
has the static properties that make it susceptible to normal
validation techniques, and one doesn't have to worry that two different
validation runs will somehow coax the system into two different
configurations; each run can thus be considered statistically
independent of the model. In addition, you have your programmable
system that adjusts auto-magically to your changing desires. It just
doesn't do so within the Verilog. But if your desires change, you can
simply rerun the generation tool to get new Verilog customized to those
desires, and then run your validation suite to give you
confidence that it works as expected.

*: You can also use generate statements in Verilog to achieve the same
effect. However, in any of the cases, you want the system configuring
itself based upon a specification. That specification is always
imposed from the "outside" which makes it inherently top-down at some
level.
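As a sketch of that footnote (the module, parameter, and port names are invented): the specification is imposed from outside as a parameter, and a generate loop builds the module to match it:

```verilog
// Sketch only: top-down configuration via a parameter and generate.
module pipeline #(parameter N_STAGES = 3) (
    input  wire clk,
    input  wire d,
    output wire q
);
    wire [N_STAGES:0] stage;
    assign stage[0] = d;
    genvar i;
    generate
        for (i = 0; i < N_STAGES; i = i + 1) begin : g
            reg r;
            always @(posedge clk) r <= stage[i];
            assign stage[i+1] = r;
        end
    endgenerate
    assign q = stage[N_STAGES];
endmodule
```

Whoever instantiates the module imposes the specification, e.g. `pipeline #(.N_STAGES(5)) u_p (...)`; the latency is then known to the parent because the parent chose it.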
 
On May 2, 4:16 pm, Alex <agnu...@gmail.com> wrote:

The other approach, presented in your
example (bottom->top), basically means that there is no specification,
or that it is inaccurate or incomplete. In other words, you don't know
what you want to achieve, or your goal is very flexible.
You've nailed it.
That's exactly what I'm trying to achieve: flexibility of
specification through flexibility of implementation.

What if a new lower-power design changes
the latency of the lowest-level module?

It may be good or bad. The formal criterion for this decision is the
specification document. And there are tools (STA, timing
simulations) to verify such design properties against the
specification document.
But how do you verify a change in the specification document?
In most cases, the implementation needs to be updated, but
certain changes shouldn't require rewrites.

All the higher-level modules should handle
the change automatically. Why should a human
need to figure that kind of stuff out?
You advocate an approach that favors the tool
rather than the human.

Hardware design practice shows that this approach is perfectly right.
There is a strong need to formalize design development (and also
design specification) in order to replace error-prone humans with
tools as much as possible ;)
I completely agree with your last statement.
That is, your last statement completely agrees with me.

The sentence:

"You advocate an approach that favors the tool
rather than the human."

is perhaps better written as:

"You advocate an approach that eases the job of
the tool rather than the job of the human."

Basically, I'm advocating automation to the greatest
extent possible.

The key is this:

Specification comes not only from the top,
but also from the bottom.

Top->Bottom: What needs to be achieved?
Bottom->Top: What can be achieved?

Real design is actually a combination of these, right?

It would be so wasteful to specify black boxes from the start,
only to find out that revision must be made on a grand scale
(something only to be found out upon detailed implementation).

Consider Microsoft's Vista: They specified something
they couldn't achieve, and they wasted billions and
years revising and whittling away and then throwing
out most of their work, only to start afresh using
a specification wrought from their previous disasters.

Instead, it is best to leave as much of the specification
as possible open-ended for as long as possible in order
to make the design process as flexible as possible.

Iterations will fill in the gaps until the specification is complete.
May I also note that hacks will be avoided.
 
On May 2, 5:42 pm, Chris F Clark <c...@shell01.TheWorld.com> wrote:
It seems that there is a basic misunderstanding of the "intent"/
"purpose" of parameters in Verilog. Parameters are not mechanisms for
doing calculations of constants..
Parameters are simply a way of writing generic (parameterizable)
modules that can be instantiated to take on a limited number of
predefined shapes.
Parameters are indeed used for
calculations involving constants.

How many cycles should be counted?
What are the pixel widths of a screen?
What constants should be used for the transform?

Consider the generate statement, which
places parameters in the position of designer.

... using current validation techniques
which only give statistical certainty of correctness. To make up for
that statistical uncertainty, you want certainty that one of those
parts doesn't "query its environment" and instantiate itself in some
non-standard way (unless *YOU* have specifically told it to do so).
This assumes the results of such queries
don't have well-defined bounds.

Sometimes the propagation of a decision
is mathematical (arithmetic in the case
of latency!).

Why should a human have to waste his time
doing something a mindless machine could
do for him?

That way, as your statistical testing approaches 100%, your chance of
the circuit working also approaches 100% (and you can know in advance
which parts are non-standard and need a different kind of testing).
I suppose I'm trying to say that some
design ramifications can be specified
mathematically, because such abstract
specifications are already 100% (guaranteed).

That said, Verilog is a modelling language (and was so before it was a
design language).
Then it's unsurprising that Verilog fails here.

... You can model anything you want and get the answers. You
can then use those answers to design your system. Note that that is a
two-stage (or iterative process). You can build your model to check
that you haven't laid out something that breaks your latency rules,
but you don't want to build something that attempts to fix your design
for you. It simply should report that the latency criteria have not
been met, but the designer (and not the device) should then determine
how to resolve the problem.
Iteration is required for mathematically
uncertain outcomes. The ramifications
of changes in latency are mathematically
certain.

As a result, Verilog is probably not the language to describe a
series of self-assembling autonomous parts that configure
themselves into a solution. Instead it builds nice static models of
things that we have carefully described, so that we can validate that
those models perform as expected, because we don't trust ourselves not
to have made errors, errors that can be uncovered by testing.
Yes. Verilog is only a validation language.

If you want to build some kind of self-configuring collection, use
another language (e.g. Perl)* and have it generate the Verilog that
describes the configured system. That way, your configured system still
has the static properties that make it susceptible to normal
validation techniques, and one doesn't have to worry that two different
validation runs will somehow coax the system into two different
configurations; each run can thus be considered statistically
independent of the model. In addition, you have your programmable
system that adjusts auto-magically to your changing desires. It just
doesn't do so within the Verilog. But if your desires change, you can
simply rerun the generation tool to get new Verilog customized to those
desires, and then run your validation suite to give you
confidence that it works as expected.
Excellent advice.

I'm surprised this is not the advocated solution.

You can also use generate statements in Verilog to achieve the same
effect. However, in any of the cases, you want the system configuring
itself based upon a specification. That specification is always
imposed from the "outside" which makes it inherently top-down at some
level.
This assumes the specification is complete,
which is only assured at the end of the
development process.
 
On 2 May 2007 17:29:11 -0700, lingwitt
<lingwitt@gmail.com> wrote:

It would be so wasteful to specify black boxes from the start,
only to find out that revision must be made on a grand scale
(something only to be found out upon detailed implementation).
I agree very strongly with this sentiment, but....

Instead, it is best to leave as much of the specification
as possible open-ended for as long as possible in order
to make the design process as flexible as possible.
.... that goes one step further than I'm prepared to follow.


Consider Microsoft's Vista:
Is that compulsory? :)

Iterations will fill in the gaps until the specification is complete.
May I also note that hacks will be avoided.
Again I fear you draw unjustified conclusions from a very
good start: I completely agree (both from experience, and
also in principle) that the specification and design
processes are iterative, and intimately linked - although
some of this iteration is across projects and across
individuals' experience, rather than within the life
of a single project. Sadly, though, I am all too aware
that iterative changes to a spec, especially when performed
under end-of-project time pressures, rarely succeed in
avoiding hacks.

Thanks for a thought-provoking discussion.
--
Jonathan Bromley, Consultant
 
Lingwitt,

You have a software background, maybe? Whatever, you are
doing what so many software folk do when getting under
the skin of hardware design: railing against it for
not being software. You have some interesting and valid
points, but I rather profoundly disagree with your
conclusions...

I require going from the leaf up.

In that case, drive a constant value out through a
regular port of the lower-level module. Synthesis
will strip it away, so it costs you nothing.

This is basically a hack for achieving what parameters
should allow:

(1) constants
(2) no cost
No, you're being presumptuous. Parameters most certainly
should not allow that. I completely agree with you
that *something* should allow it, but parameters were
carefully thought-through to do something different and
it would be foolhardy to press them into service to
do what you're asking. There is no doubt that they
*can* do it in some limited special cases, but that's
no way to go about designing (or using) a language feature.

your complaint about circularity being easy to see
is just not true - particularly when combined with
generates and/or arrays of instances, the relationships
among parameter values can easily become extremely
obscure unless you adhere to strict top-down propagation.

Could you give a quick example?
I can't quite picture it.
Suppose you have a child module's parameter used, at
static elaboration time, to influence the construction
of its parent module. Generate constructs (and module
instance arrays) are controlled by parameters. So it
would be easy for the child to configure its parameter
in such a way that it could suppress, or catastrophically
modify, the elaboration of its own instantiation; and
this bizarre behaviour is not something that you can
predict or manage from the child module, which is the
place where you're messing with the parameter.
By contrast, if you use parameters in the canonical
way, they are easily-documented hooks that allow you
to customise the way a child module is built, from the
parent module that instantiates it.
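A sketch of the hazard being described; this is deliberately NOT legal Verilog, and the names and values are invented:

```verilog
// NOT legal Verilog: if a child's parameter could feed the parent's
// static elaboration, the child could suppress or modify its own
// instantiation.
module child;
    parameter EXISTS = 1;
endmodule

module parent;
    generate
        if (u_child.EXISTS) begin : g        // child's parameter controls...
            child #(.EXISTS(0)) u_child ();  // ...whether the child exists
        end
    endgenerate
endmodule
```

Whether `u_child` elaborates would depend on a parameter of an instance whose very existence is in question; that is exactly the kind of cycle the restriction forecloses.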

I'm not sure how the module tree could ever end up being
cyclic unless someone explicitly tried; similarly with
the parameters.
Yes, but that's exactly the point: you *are* explicitly
introducing cycles, as soon as you allow a child's
parameter to affect the static elaboration of its parent.

Lower-level modules are designed with certain
characteristics that must be taken into account
when such modules are used in higher-level modules.
Hardware people have fought a bitter struggle over
many decades to achieve re-use through top-down design.
Lower-level modules should be *configured* with certain
characteristics that suit the needs of their parent
module.

You are basically saying that there is no good
way to model
[the passing of useful static information
from a child module to its parent]

No, I'm not. I'm saying that parameters are absolutely
the wrong tool, and I've offered you other modelling
styles that plug the gap safely.

Why should a designer micromanage characteristics
like latency, when such details could be easily
abstracted away without logic?
Because, in the overwhelming majority of realistic
cases, they *can't* be abstracted away without logic.
Your fixed-latency example (below) is almost without
precedent in my own (long-ish) design experience.

Latency is actually a perfect example:
Several modules may have a latency
of one. The next higher-level modules
are made from several of these lower-
level modules at a time, so that they
have a latency of, say, 3. Then another
set of higher-level modules is constructed
from those, and so on.

Each higher-level module should act as
a black box; its specification provides
a latency.

What if a new lower-power design changes
the latency of the lowest-level module?

All the higher-level modules should handle
the change automatically. Why should a human
need to figure that kind of stuff out?
Have you ever heard the phrase "plug compatible"?
The only way I can manage large and complex designs
is by isolating low-level detail (including latency)
at the lowest possible point. I emphatically do NOT
want low-level design decisions, or changes, propagating
their way up the design hierarchy to levels that were
(quite properly) designed with absolutely no concern
for the low-level details.

I have already somewhat agreed with you that there is
a need for low-level modules to be able to expose some
of their static characteristics to their parents, so
that the said parents can make use of that information
if they wish. I have, personally, been struggling with
exactly this problem in relation to SystemVerilog's
interface construct where, to model bus structures
effectively, I'd like an interconnect fabric to know
things about the modules that are connected to it.
I say "struggling" because after very careful thought
indeed I am still unable to find a way of adding such
a feature to the language without opening all manner
of Pandora's-box unwanted loopholes.

Basically, the lack of proper modeling tools
has created in the hardware community a culture
of workarounds.
We all work with the tools available to us. Lack of
proper concurrent programming tools has created in
the software community a culture of butt-ugly event
loops and inverted control structures.

I suspect I could just about keep you happy (on this
issue) if a parent module were permitted to call any
constant function in a child module instance. Of course,
Verilog freely permits this; but synthesis tools,
sadly, don't. Here I concede that we are poorly
served by our tools.

For instance, most IP blocks that can
be configured with different latencies
provide signals that assert the readiness of
the output, even though the latency after
manufacture is not variable.
Phooey. Almost all non-trivial designs have
latency that either is dynamically variable,
or can trivially be inferred from the parameters
used to configure the design from above (e.g.
the number of taps in a FIR filter).
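That inference can be sketched like this (the names are invented, and the `N_TAPS + 1` latency figure is purely illustrative): the parent derives the latency from the very parameter it used to configure the child, so nothing needs to flow upward:

```verilog
// Sketch only: latency inferred from the configuring parameter.
module fir #(parameter N_TAPS = 8) ();
    // ... an N_TAPS-tap filter ...
endmodule

module top;
    localparam N_TAPS  = 8;
    localparam LATENCY = N_TAPS + 1;  // known here, because top chose N_TAPS
    fir #(.N_TAPS(N_TAPS)) u_fir ();
endmodule
```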

A rather better example might be a peripheral
that wishes to occupy a certain range of addresses.
To inform the parent module of this, it presumably
needs an "address in range" output signal that is
asserted when the peripheral itself decodes the
address to be in-range. If I understand your
position correctly, you are saying that this
address range should be a statically determinable
property of the module, and this property could
be used within the parent module, thereby avoiding
the cost of the "in-range" signal. Superficially
this seems a telling argument. However, it ain't.
Someone, somewhere, must do the address decoding.
Providing an output from the peripheral is simply
an example of delegation - the parent module using
a built-in function of the child module to discover
something *that it would have had to compute for
itself anyway*. Synthesis is quite good at flattening
hierarchy intelligently in cases like this. In
practice, then, you would gain nothing if the
peripheral (child) module could expose static
information rather than having a dynamic output.
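The peripheral side of that delegation might look like this (the base address and range size are invented): the child decodes its own address range and exports the result, and the parent simply uses the output.

```verilog
// Sketch only: the peripheral does its own address decoding and
// delegates the result to the parent via a dynamic output.
module periph #(
    parameter BASE = 16'h4000,
    parameter SIZE = 16'h0100
) (
    input  wire [15:0] addr,
    output wire        in_range
);
    assign in_range = (addr >= BASE) && (addr < BASE + SIZE);
endmodule
```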

It's worth bearing in mind that you gain essentially
nothing by having a piece of hardware stand idle.
Once you've built it, it's working anyway; you may as
well get it to do useful work for you. By contrast,
in software it is very important to avoid repeating
a calculation if you already have its result to hand.
Yes, I know there are some power-consumption issues
that make this story less simple than that; but it
remains a useful insight.
--
Jonathan Bromley, Consultant
 
On May 3, 4:31 am, Jonathan Bromley <jonathan.brom...@MYCOMPANY.com>
wrote:
I completely agree (both from experience, and
also in principle) that the specification and design
processes are iterative, and intimately linked - although
some of this iteration is across projects and across
individuals' experience, rather than within the life
of a single project. Sadly, though, I am all too aware
that iterative changes to a spec, especially when performed
under end-of-project time pressures, rarely succeed in
avoiding hacks.
I must admit that my experience of large-scale projects
is very limited, but I have definitely experienced shades
of the following:

The need to specify exactly what to do intensifies
exponentially with the number of people involved.

I suppose this is why large groups are organized as a tree.
Personally, I'd like to sit at the top. ;-)

Thanks for a thought-provoking discussion.
Thank *you*!
 
On Apr 30, 12:23 pm, lingwitt <lingw...@gmail.com> wrote:
Hello.

Consider:

module A (input theInput, output theOutput);
parameter delay = 2;

// Do something

endmodule

module B (input theInput, output theOutput);
parameter delay = a.delay + 5;

A a(theInput, theOutput);

// Do something

endmodule

The idea is that I'd like to know some kind of
total accumulative delay during elaboration.

Unfortunately, the previous example isn't valid
syntax.

Is there some way of achieving what I want?

I could use `defines, but that's gross, because
each module would need a unique identifier for
the delay.

Thanks.
The following example works with Modelsim:

//-------------------------------------------
module a;
parameter A = 10;
endmodule

module b;
parameter B = a.A + 5;
a a ();
initial $display ("B = %0d", B);
endmodule
//-------------------------------------------

Which simulator are you using?

-Alex
 
Alex wrote:
The following example works with Modelsim:

//-------------------------------------------
module a;
parameter A = 10;
endmodule

module b;
parameter B = a.A + 5;
a a ();
initial $display ("B = %0d", B);
endmodule
//-------------------------------------------
Then they are not compliant with the language standard.
Hierarchical names are not allowed in constant expressions,
presumably because it can so easily lead to circular definitions.
 
lingwitt wrote:
The idea is that I'd like to know some kind of
total accumulative delay during elaboration.

Unfortunately, the previous example isn't valid
syntax.
You cannot do this in the upward direction, but
you can accumulate downward and get a total
down at the leaf instances of the hierarchy.
Just take in the cumulative delay from above, and
pass that plus the local delay down to the next
instance down with a parameter override.

Parameter propagation is supposed to flow from
the top of the hierarchy downward, not the other
way around.
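A sketch of that downward accumulation (module names and delay values invented): each level overrides the child's parameter with the running total plus its own local delay, so the grand total is known at the leaf:

```verilog
// Sketch only: accumulate delay downward via parameter overrides.
module leaf #(parameter ACC_DELAY = 0) ();
    initial $display("total delay at leaf = %0d", ACC_DELAY);
endmodule

module mid #(parameter ACC_DELAY = 0) ();
    localparam LOCAL_DELAY = 2;
    leaf #(.ACC_DELAY(ACC_DELAY + LOCAL_DELAY)) u_leaf ();
endmodule

module top;
    localparam LOCAL_DELAY = 5;
    mid #(.ACC_DELAY(LOCAL_DELAY)) u_mid ();
endmodule
```

Here the leaf sees a total of 7; the total lives at the leaves rather than at the top, which is why this does not solve the original bottom-up problem.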
 
Then they are not compliant with the language standard.
Hierarchical names are not allowed in constant expressions,
presumably because it can so easily lead to circular definitions.
Modelsim allows hierarchical parameter referencing only for
lower-level instances. In this case, circular definitions are not an issue.
And it is convenient!

Here is an example:

module a_ext;
parameter A = 5;
endmodule

module a;
parameter A = 10;
endmodule

module b;
parameter B = a.A + a_ext.A + 5;
a a ();
a_ext a_ext (); // Comment this line to get an error message
initial $display ("B = %0d", B);
endmodule

-Alex
 
On Apr 30, 3:31 pm, s...@cadence.com wrote:
lingwitt wrote:

The idea is that I'd like to know some kind of
total accumulative delay during elaboration.

Unfortunately, the previous example isn't valid
syntax.

You cannot do this in the upward direction, but
you can accumulate downward and get a total
down at the leaf instances of the hierarchy.
Just take in the cumulative delay from above, and
pass that plus the local delay down to the next
instance down with a parameter override.

Parameter propagation is supposed to flow from
the top of the hierarchy downward, not the other
way around.
Thanks, but that doesn't solve this problem.

I require going from the leaf up.

I have to say, I think Xilinx actually managed
to get something right this time.
 
On Apr 30, 2:55 pm, Alex <agnu...@gmail.com> wrote:
The following example works with Modelsim:

//-------------------------------------------
module a;
parameter A = 10;
endmodule

module b;
parameter B = a.A + 5;
a a ();
initial $display ("B = %0d", B);
endmodule
//-------------------------------------------

Which simulator are you using?

-Alex
Interesting. I'm currently using iverilog.

I wonder if other simulators follow modelsim.
 
On Apr 30, 8:17 pm, lingwitt <lingw...@gmail.com> wrote:
On Apr 30, 2:55 pm, Alex <agnu...@gmail.com> wrote:





The following example works with Modelsim:

//-------------------------------------------
module a;
parameter A = 10;
endmodule

module b;
parameter B = a.A + 5;
a a ();
initial $display ("B = %0d", B);
endmodule
//-------------------------------------------

Which simulator are you using?

-Alex

Interesting. I'm currently using iverilog.

I wonder if other simulators follow Modelsim.
That may be a bad idea since, as Steven mentioned, it allows circular
dependencies.
The following code:

//-------------------------------------------
module a;
parameter A = 10;
endmodule

module b;
a #(a.A+5) a ();
initial $display ("a.A = %0d", a.A);
endmodule
//-------------------------------------------
will compile with Modelsim. During simulation, there is a run-time
warning about a circular dependence of parameters. And what will
the value of parameter A be? It will be 60 (10 + 5*10), since before
issuing the circular-dependence warning, the simulator runs through 10
(the default number of) iterations of parameter reassignment.

So, it is really better for a simulator to strictly follow the
top-down parameter propagation rules, which do not allow circular
dependencies.

-Alex
 
On Apr 30, 10:53 pm, Alex <agnu...@gmail.com> wrote:
That may be a bad idea since, as Steven mentioned, it allows circular
dependencies.
The following code:

//-------------------------------------------
module a;
parameter A = 10;
endmodule

module b;
a #(a.A+5) a ();
initial $display ("a.A = %0d", a.A);
endmodule

//-------------------------------------------

will compile with Modelsim. During simulation, there is a run-time
warning about a circular dependence of parameters. And what will
the value of parameter A be? It will be 60 (10 + 5*10), since before
issuing the circular-dependence warning, the simulator runs through 10
(the default number of) iterations of parameter reassignment.

So, it is really better for a simulator to strictly follow the
top-down parameter propagation rules, which do not allow circular
dependencies.
Your example is a bit contrived.
Even a human can tell it is circular.

Actually, it seems pretty hard to create
unforeseen circular parameter definitions.

Moreover, these circularities can be caught
and reported.

In any case, surely my problem has been
encountered before. It seems generally
useful to be able to accrue total information
regarding submodules.

In fact, parameters are pretty lame if you
can only propagate down.
 
On Apr 30, 2:55 pm, Alex <agnu...@gmail.com> wrote:
The following example works with Modelsim:

//-------------------------------------------
module a;
parameter A = 10;
endmodule

module b;
parameter B = a.A + 5;
a a ();
initial $display ("B = %0d", B);
endmodule
//-------------------------------------------

Which simulator are you using?

-Alex
Perhaps ModelSim works, but XST, the synthesis tool,
does not know what to do with it.
 
On May 1, 2:50 am, lingwitt <lingw...@gmail.com> wrote:
On Apr 30, 2:55 pm, Alex <agnu...@gmail.com> wrote:



The following example works with Modelsim:

//-------------------------------------------
module a;
parameter A = 10;
endmodule

module b;
parameter B = a.A + 5;
a a ();
initial $display ("B = %0d", B);
endmodule
//-------------------------------------------

Which simulator are you using?

-Alex

Perhaps ModelSim works, but XST, the synthesis tool,
does not know what to do with it.
I should clarify that I removed simulation-only code.
 
