On Mar 18, 9:38 am, Jonathan Bromley <jonathan.brom...@verilab.com>
wrote:
modeling" you mean you may not care that the model has any given
inaccuracy. I agree with that. But I don't think it is reasonable to
suggest that the current HDL models are in any sense optimal. There
is always room for improvement.
I think there is little value to looking at HDLs as general
programming languages or even as "programming languages" at all. They
really aren't intended to be programming languages. They are
"Hardware Desciption Languages". If you want to ignore the hardware
aspect of them then I feel you are tossing the baby out with the bath
water.
you are right, they are not intended to map directly to illegal
voltages or something specific. They are intended to indicate
something that is either unknown or improper. But this is likely a
minor point.
the output of that function should be no more known. Treating an X->1
transition as a positive clock edge not only makes the output knowable
when it is not, it hides the fact that there is a problem in the
design or simulation. That is what I want to know about. It is not
frequent, but I have seen problems in a simulation where an internal
point has a meta value far beyond the point I would have expected, but
it was not seen on the outside because the simulation did not properly
transmit that meta value. I had to trace a wrong, but valid state
down through the design unwinding the logic cause and effect to find
the point where I found the meta value. This is typically an error in
initialization or even in the simulation, but I feel it took longer to
find than it should have because the simulation did not properly
handle these meta values.
On the other hand, a FF feeding back on itself to divide by two will
always assume some value in the real world and so generally will
work. But in simulation the meta value will never resolve. That
seems to be too stringent. I guess I can't have it both ways...
input and also have that construct synthesizable which is my main
goal. At least I don't think I can get that to work. It would
certainly be a lot more work and would make the simulations run much
slower. If a logic function can be made to properly handle meta
values, I don't see why the code for a FF can't be defined in a way to
do the same thing. As you say, it is just how you define your
models... or how "they" define the models.
But I want to get work done, the paying kind. Issues with tools
prevent that and saying it is all in how I want to define my models
don't help the issue.
correct. (FULL STOP)
I would like as much capability in the HDL as possible that
facilitates my work. This is not a theoretical issue. This is
pragmatic. Unless it becomes a heavy burden on simulation or
otherwise causes a problem, why not make the simulations more
realistic and practical? You give reasons why I shouldn't want what I
want, but you haven't given any reasons why it shouldn't be done.
hardware description language that describes hardware as well as
possible. If I wanted to program I would use Forth (or when the
customer demands it C).
Rick
wrote:
> Note that the usual VHDL flipflop modelling style
> does nothing at all if there's an X or U on the
> clock - it certainly doesn't drive the FF's
> output to X.
>
> [Rickman]
> > There is a far cry from treating a transition between a boolean
> > undefined state and a 1 as a rising clock edge and ignoring the
> > transition altogether.
>
> No, there isn't. Some people still write their VHDL flops
> like this, giving an active clock for X->1:
>
>     if clock'event and clock = '1' then ...
>
> It's just modelling, using the bare language's features.
> Choose your model to suit your needs and convenience
> (and, less happily, to suit the templates mandated by
> synthesis tools).
modeling" you mean you may not care that the model has any given
inaccuracy. I agree with that. But I don't think it is reasonable to
suggest that the current HDL models are in any sense optimal. There
is always room for improvement.
I think there is little value to looking at HDLs as general
programming languages or even as "programming languages" at all. They
really aren't intended to be programming languages. They are
"Hardware Desciption Languages". If you want to ignore the hardware
aspect of them then I feel you are tossing the baby out with the bath
water.

> A simulation language's X value, in any of its various
> flavours, is a trick to make Boolean algebra work even
> when you have certain unknowable conditions. It doesn't,
> and can't, directly mean anything in real circuits -
> it merely means that we don't know enough about a bit's
> simulated value to be sure it's 1 or 0.
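
Concretely, that algebra buys exactly that: an X is swallowed wherever
it could not change the answer and propagates wherever it could. A
throwaway sketch (untested, and the entity and variable names are made
up just for illustration):

    library ieee;
    use ieee.std_logic_1164.all;

    entity x_algebra_demo is
    end entity;

    architecture sim of x_algebra_demo is
    begin
      process
        variable a : std_logic := 'X';
      begin
        -- a '0' input masks the unknown; a '1' input does not
        report "'X' and '0' = " & std_logic'image(a and '0'); -- '0'
        report "'X' and '1' = " & std_logic'image(a and '1'); -- 'X'
        report "'X' or  '1' = " & std_logic'image(a or '1');  -- '1'
        report "'X' or  '0' = " & std_logic'image(a or '0');  -- 'X'
        wait;
      end process;
    end architecture;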

Of course these meta values can have meaning in a real circuit, but
you are right, they are not intended to map directly to illegal
voltages or something specific. They are intended to indicate
something that is either unknown or improper. But this is likely a
minor point.

> As soon as you
> have these meta-values, you get all kinds of fallout in
> any programming language: what should happen when you
> test if(x)? what does a 0->x transition mean, when
> your functional behaviour only makes sense for 0->1
> transitions?

That is my point. If you don't know the input to a function, the
output of that function should be no better known. Treating an X->1
transition as a positive clock edge not only makes the output knowable
when it is not, it hides the fact that there is a problem in the
design or simulation. That is what I want to know about. It is not
frequent, but I have seen problems in a simulation where an internal
point had a meta value far beyond the point where I would have expected
it, but it was not seen on the outside because the simulation did not
properly transmit that meta value. I had to trace a wrong, but valid,
state down through the design, unwinding the logic cause and effect, to
find the point where the meta value appeared. This is typically an
error in initialization or even in the simulation, but I feel it took
longer to find than it should have because the simulation did not
properly handle these meta values.
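
To make that concrete, the two stock VHDL edge-detect idioms split
exactly on this point, and neither one does what I am asking for: one
accepts an X->1 transition as a real edge, the other silently ignores
it, and neither corrupts Q. A rough sketch (untested, with made-up
entity and port names):

    library ieee;
    use ieee.std_logic_1164.all;

    entity edge_styles is
      port ( clock             : in  std_logic;
             d                 : in  std_logic;
             q_event, q_rising : out std_logic );
    end entity;

    architecture rtl of edge_styles is
    begin
      -- active for any change to '1', so 'X'->'1' or 'U'->'1'
      -- clocks the data through as if nothing were wrong
      p_event : process (clock)
      begin
        if clock'event and clock = '1' then
          q_event <= d;
        end if;
      end process;

      -- rising_edge() also checks clock'last_value, so it wants a
      -- clean '0'->'1'; an 'X'->'1' change is ignored and Q holds
      p_rising : process (clock)
      begin
        if rising_edge(clock) then
          q_rising <= d;
        end if;
      end process;
    end architecture;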

On the other hand, a FF feeding back on itself to divide by two will
always assume some value in the real world and so generally will
work. But in simulation the meta value will never resolve. That
seems to be too stringent. I guess I can't have it both ways...
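
For instance, the obvious divider never gets going in simulation if its
state starts out as 'U', because not 'U' is still 'U', even though the
real flop settles one way or the other and divides just fine. A sketch
(untested, names made up):

    library ieee;
    use ieee.std_logic_1164.all;

    entity div2 is
      port ( clk : in  std_logic;
             q   : out std_logic );
    end entity;

    architecture rtl of div2 is
      signal q_int : std_logic;   -- no initial value, so it powers up as 'U'
    begin
      process (clk)
      begin
        if rising_edge(clk) then
          q_int <= not q_int;     -- not 'U' is 'U': the divider never starts
        end if;
      end process;
      q <= q_int;
    end architecture;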

> For every one of these questions, any
> language must necessarily make a decision to mandate
> the language's behaviour. Since people can combine
> language constructs in all manner of interesting ways,
> there is no one right answer and some work must be
> left to the user. That way, the user gets to choose
> how much trouble they should go to in attempting to
> model reality.

I can't construct a FF to properly handle meta values on the clock
input and also have that construct synthesizable, which is my main
goal. At least I don't think I can get that to work. It would
certainly be a lot more work and would make the simulations run much
slower. If a logic function can be made to properly handle meta
values, I don't see why the code for a FF can't be defined in a way
that does the same thing. As you say, it is just how you define your
models... or how "they" define the models.
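
Something along these lines is what I have in mind. It is easy enough
to write for simulation, but it is not the template the synthesis tools
will accept, and the names and details here are just for illustration
(a rough, untested sketch):

    library ieee;
    use ieee.std_logic_1164.all;

    entity x_aware_ff is
      port ( clk : in  std_logic;
             d   : in  std_logic;
             q   : out std_logic );
    end entity;

    architecture sim of x_aware_ff is
    begin
      process (clk)
      begin
        if rising_edge(clk) then
          q <= d;        -- clean '0'->'1' edge: normal capture
        elsif clk'event and is_x(clk) then
          q <= 'X';      -- clock moved onto a metavalue: trash Q
        elsif clk'event and clk = '1' and is_x(clk'last_value) then
          q <= 'X';      -- an 'X'->'1' "edge": Q cannot be known
        end if;
      end process;
    end architecture;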

> Of course, we have conventional patterns of code that
> work well enough that we're happy with them most of
> the time. The standard RTL flipflop templates fall
> into that category; they're not part of the language
> itself. As Cary and I pointed out in different ways,
> you *can* (both in VHDL and Verilog) build quite
> accurate FF models that trash their Q value when
> bad things happen on clocks, resets and so on.

Yup, and I can write my own HDL tools and even the language itself.
But I want to get work done, the paying kind. Issues with tools
prevent that, and saying it is all in how I want to define my models
doesn't help the issue.

> Well-written library cell models should do exactly
> that, to provide the best possible checking that all
> is well at gate level. But when we're doing RTL
> simulation, we care primarily about 0/1 functional
> behaviour and we (or, at least, I) should be happy
> to accept that all bets are off if we let an X
> creep on to our simulated clock signal. A simple
> assertion on the clock's value will soon alert us
> if that requirement is violated, at far lower cost
> than futzing around with complicated X modelling
> at each flop (whether built-in or hand-written).
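
Presumably you mean something like the little monitor below; a sketch
with made-up names, hung on whatever the clock net happens to be
called:

    library ieee;
    use ieee.std_logic_1164.all;

    entity clk_monitor is
      port ( clk : in std_logic );
    end entity;

    architecture sim of clk_monitor is
    begin
      check_clock : process (clk)
      begin
        assert not is_x(clk)
          report "metavalue on clk at " & time'image(now)
          severity warning;
      end process;
    end architecture;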

When I am doing RTL simulation I want to verify that my design is
correct. (FULL STOP)

I would like the HDL to have as much capability as possible to
facilitate my work. This is not a theoretical issue. This is
pragmatic. Unless it becomes a heavy burden on simulation or
otherwise causes a problem, why not make the simulations more
realistic and practical? You give reasons why I shouldn't want what I
want, but you haven't given any reasons why it shouldn't be done.

> [Rickman]
> > Sure there can always be issues that are hard
> > to fix. This isn't one of them.
>
> I disagree, but I'm fully aware that many people
> would prefer a language that's much more tightly
> coupled to the specifics of real flops and other
> components.

You mean a language that is more hardware oriented? Yes! I want a
hardware description language that describes hardware as well as
possible. If I wanted to program I would use Forth (or, when the
customer demands it, C).

Rick