regarding "posedge"

On Mar 18, 9:38 am, Jonathan Bromley <jonathan.brom...@verilab.com>
wrote:
Note that the usual VHDL flipflop modelling style
does nothing at all if there's an X or U on the
clock - it certainly doesn't drive the FF's
output to X.

[Rickman]

Treating a transition between a boolean
undefined state and a 1 as a rising clock edge is a far cry from
ignoring the transition altogether.

No, there isn't.  Some people still write their VHDL flops
like this, giving an active clock for X->1:
   if clock'event and clock = '1' then ...
It's just modelling, using the bare language's features.
Choose your model to suit your needs and convenience
(and, less happily, to suit the templates mandated by
synthesis tools).
I don't want to beat a dead horse, but when you say "it's just
modeling" you mean you may not care that the model has any given
inaccuracy. I agree with that. But I don't think it is reasonable to
suggest that the current HDL models are in any sense optimal. There
is always room for improvement.
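The difference between the two coding styles above can be sketched with a tiny three-valued model (values '0', '1', 'X'). The Python below is illustrative only; the function names mirror the VHDL idioms rather than any real simulator API:

```python
# Minimal three-valued sketch of the two VHDL clock tests discussed above.

def event_and_one(prev, curr):
    """Models "clock'event and clock = '1'": fires on any change that
    lands on '1', so an X->1 transition counts as an active clock."""
    return prev != curr and curr == '1'

def rising_edge(prev, curr):
    """Models std_logic_1164 rising_edge(): only a clean 0->1 counts."""
    return prev == '0' and curr == '1'

# X->1: the bare 'event test clocks the flop, rising_edge does nothing.
print(event_and_one('X', '1'), rising_edge('X', '1'))  # → True False
```

With rising_edge the flop simply ignores the X->1 transition, which is the "does nothing at all" behaviour described above.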

I think there is little value to looking at HDLs as general
programming languages or even as "programming languages" at all. They
really aren't intended to be programming languages. They are
"Hardware Description Languages". If you want to ignore the hardware
aspect of them then I feel you are tossing the baby out with the bath
water.


A simulation language's X value, in any of its various
flavours, is a trick to make Boolean algebra work even
when you have certain unknowable conditions.  It doesn't,
and can't, directly mean anything in real circuits -
it merely means that we don't know enough about a bit's
simulated value to be sure it's 1 or 0.  
Of course these meta values can have meaning in a real circuit, but
you are right, they are not intended to map directly to illegal
voltages or something specific. They are intended to indicate
something that is either unknown or improper. But this is likely a
minor point.

As soon as you
have these meta-values, you get all kinds of fallout in
any programming language: what should happen when you
test if(x)?  what does a 0->x transition mean, when
your functional behaviour only makes sense for 0->1
transitions?  
That is my point. If you don't know about the input to a function,
the output of that function should be no more known. Treating an X->1
transition as a positive clock edge not only makes the output knowable
when it is not, it hides the fact that there is a problem in the
design or simulation. That is what I want to know about. It is not
frequent, but I have seen problems in a simulation where an internal
point has a meta value far beyond the point I would have expected, but
it was not seen on the outside because the simulation did not properly
transmit that meta value. I had to trace a wrong but valid state
down through the design, unwinding the logic's cause and effect, to find
the point where the meta value appeared. This is typically an error in
initialization or even in the simulation, but I feel it took longer to
find than it should have because the simulation did not properly
handle these meta values.
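The gate-level half of this is uncontroversial: VHDL's std_logic tables and Verilog's built-in gates already propagate X in just this way. A minimal three-valued AND, sketched in Python for illustration:

```python
def and3(a, b):
    """Three-valued AND over '0'/'1'/'X'. A controlling '0' on either
    input pins the output regardless of the other input; otherwise any
    X makes the output X."""
    if a == '0' or b == '0':
        return '0'
    if a == '1' and b == '1':
        return '1'
    return 'X'

print(and3('0', 'X'))  # → 0   (output known despite an unknown input)
print(and3('1', 'X'))  # → X   (the unknown propagates)
```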

On the other hand, a FF feeding back on itself to divide by two will
always assume some value in the real world and so generally will
work. But in simulation the meta value will never resolve. That
seems to be too stringent. I guess I can't have it both ways...
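The divide-by-two case is easy to reproduce: in a three-valued sketch the inverter feedback never escapes X, even though real hardware must land on 0 or 1. This is an illustration, not a real simulator:

```python
def not3(q):
    """Three-valued inversion: X in, X out."""
    return {'0': '1', '1': '0'}.get(q, 'X')

def divide_by_two(q, clocks):
    """FF with D = ~Q, clocked `clocks` times. From a known state it
    toggles; from 'X' it never resolves, exactly the simulation
    behaviour described above."""
    for _ in range(clocks):
        q = not3(q)
    return q

print(divide_by_two('0', 5), divide_by_two('X', 5))  # → 1 X
```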


For every one of these questions, any
language must necessarily make a decision to mandate
the language's behaviour.  Since people can combine
language constructs in all manner of interesting ways,
there is no one right answer and some work must be
left to the user.  That way, the user gets to choose
how much trouble they should go to in attempting to
model reality.
I can't construct a FF to properly handle meta values on the clock
input and also have that construct synthesizable which is my main
goal. At least I don't think I can get that to work. It would
certainly be a lot more work and would make the simulations run much
slower. If a logic function can be made to properly handle meta
values, I don't see why the code for a FF can't be defined in a way to
do the same thing. As you say, it is just how you define your
models... or how "they" define the models.


Of course, we have conventional patterns of code that
work well enough that we're happy with them most of
the time.  The standard RTL flipflop templates fall
into that category; they're not part of the language
itself.  As Cary and I pointed out in different ways,
you *can* (both in VHDL and Verilog) build quite
accurate FF models that trash their Q value when
bad things happen on clocks, resets and so on.
Yup, and I can write my own HDL tools and even the language itself.
But I want to get work done, the paying kind. Issues with tools
prevent that, and saying it is all in how I want to define my models
doesn't help the issue.


Well-written library cell models should do exactly
that, to provide the best possible checking that all
is well at gate level.  But when we're doing RTL
simulation, we care primarily about 0/1 functional
behaviour and we (or, at least, I) should be happy
to accept that all bets are off if we let an X
creep on to our simulated clock signal.  A simple
assertion on the clock's value will soon alert us
if that requirement is violated, at far lower cost
than futzing around with complicated X modelling
at each flop (whether built-in or hand-written).
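The assertion suggested here is cheap. In practice it would be an SVA or PSL check on the clock net; the idea can be sketched in Python (hypothetical helper name):

```python
def check_clock(samples):
    """Flag the first clock sample that is not a clean '0' or '1'.
    One such check per clock net replaces per-flop X modelling."""
    for time, value in enumerate(samples):
        if value not in ('0', '1'):
            return f"X-ish clock value {value!r} at time {time}"
    return None

print(check_clock(['0', '1', '0', 'X', '1']))  # → X-ish clock value 'X' at time 3
```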
When I am doing RTL simulation I want to verify that my design is
correct. (FULL STOP)

I would like as much capability in the HDL as possible that
facilitates my work. This is not a theoretical issue. This is
pragmatic. Unless it becomes a heavy burden on simulation or
otherwise causes a problem, why not make the simulations more
realistic and practical? You give reasons why I shouldn't want what I
want, but you haven't given any reasons why it shouldn't be done.


Sure there can always be issues that are hard
to fix.  This isn't one of them.

I disagree, but I'm fully aware that many people
would prefer a language that's much more tightly
coupled to the specifics of real flops and other
components.
You mean a language that is more hardware oriented? Yes! I want a
hardware description language that describes hardware as well as
possible. If I wanted to program I would use Forth (or when the
customer demands it C).

Rick
 
Rick,

I too have no desire to beat this to death; we both
have work to do :) I think I understand where
you're coming from, and I certainly would never
suggest that what we have today is ideal. But
there's also my nagging sense that ultimately you
can get more done with a truly general-purpose
toolkit than you can with something that was aimed
at a specific problem. C beat Cobol and a zillion
others precisely because its generality allowed
users to layer almost any necessary domain-specific
stuff on top of the language (but, interestingly,
not enough to make C much good for hardware
design/verification.... hmmm, maybe it's not so
obvious after all.)

Anyhow, the position you take is very interesting
to me. I too have to get real work done, but I
don't feel a need for the kind of thing you request;
instead I hanker after more general tools such as
assertions. I know plenty of engineers whose
position is more closely aligned with yours than
with mine, but I'm not alone either.

I'll try to look out some of the material Cliff
Cummings produced on proposals for built-in X
handling in the Verilog language. It would be
very interesting to hear your take on it.

Thanks and regards
--
Jonathan Bromley
 
On 3/17/2011 11:22 PM, rickman wrote:
On Mar 17, 12:44 pm, "Cary R."<no-s...@host.spam> wrote:
On 3/17/2011 2:58 AM, Jonathan Bromley wrote:

Many of us felt that it was putting too much application-specific
stuff into what should be a general-purpose language; after all, you
can already model such things [FFs] perfectly well with UDPs, and
people who write gate-level cell models certainly do that.

I would say you can adequately model a FF with a well crafted UDP, but
it's amazing how many cell libraries only provide the basic
functionality and hardly any pessimism. I have developed what I consider
the definitive UDP that models a FF with asynchronous set/reset and even
it is not as good as I would like. It's fundamentally limited by the
functionality available in a UDP. I think with a second helper UDP I can
get all the functionality I desire, but I've been busy with other things
so I have not finished this.

For the interested here's the remaining problem.

Given a FF with a defined D input that is opposite the current Q value a
0->X on the clock should produce an X on the output but a subsequent
x->1 should correctly latch the value because at this point in time you
know an edge has occurred. You could actually code this in the UDP as
x->1 latches the D input, but that fails for a 1->x followed by
an x->1 (which should be undefined). I believe with a
second UDP I can record the previous transition type and then restrict
the x->1 latching to the case where it was preceded by a 0->x. Even
better is a C model linked into the simulator, but I want to get this
straight with basic Verilog (which is portable) first.
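The history-tracking idea can be sketched outside the UDP formalism. The hypothetical step function below remembers which value the X region was entered from, assuming (as in the example above) that D is stable:

```python
def ff_step(state, clk, d):
    """state = (q, prev_clk, entered_x_from). An x->1 transition latches
    D only when the X region was entered from '0' (exactly one real edge
    must have occurred); after 1->x->...->1 the output stays 'X'."""
    q, prev, entered = state
    if clk == 'X':
        # Going unknown: Q is X if it could differ from D; remember
        # where the X region started.
        return ('X' if d != q else q, clk, prev if prev != 'X' else entered)
    if prev == '0' and clk == '1':
        return (d, clk, None)                       # clean rising edge
    if prev == 'X' and clk == '1':
        return (d if entered == '0' else 'X', clk, None)
    return (q, clk, None)

s = ('0', '0', None)
for c in ['X', '1']:            # 0 -> X -> 1: one edge must have happened
    s = ff_step(s, c, d='1')
print(s[0])                     # → 1

s = ('0', '1', None)
for c in ['X', '1']:            # 1 -> X -> 1: edge count unknown
    s = ff_step(s, c, d='1')
print(s[0])                     # → X
```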

In reality you shouldn't have an X in your clock tree, so this is really
a personal experiment to see how far you can push things.

Cary

I don't know that there is much value in providing this sort of
behavior and I don't know that it matches the real world in any useful
way. The fact that there should have been a clock edge somewhere
within the intermediate X region of a 0 to 1 transition is not
usefully modeled by making the output transition concurrent with the
final transition to 1. Within the X region there may have been many
transitions, and these may not meet specs for proper clocking of the
device and may even cause the FF to go metastable. So why treat the x
to 1 transition as a valid clock?

Rick
If you look in one of my later posts you'll see that I'm doing this
mostly to understand the corner cases of UDPs and how far you can push
things if you were so inclined. In the quest for knowledge I was so
inclined. It also describes the specific case that I had seen where this
particular solution is the correct behavior (someone had connected two
buffers that had slightly different delay in parallel in the clock
tree). You are 100% correct, if the source of the X could create many
dynamic transitions during the X period then you need different
functionality. I also said in that post that I would need to consider
this (dynamic X behavior) when I had some free time to look at this
model again. My guess is that you need two models and which one you use
depends on the type of X you expect, statically unknown or dynamically
unknown.

The quick summary is this is a personal journey of education, not
something I'm building to include in a cell library.

Cary
 
Jonathan Bromley <jonathan.bromley@verilab.com> wrote:
(snip)

Treating a transition between a boolean
undefined state and a 1 as a rising clock edge is a far cry from
ignoring the transition altogether.

No, there isn't. Some people still write their VHDL flops
like this, giving an active clock for X->1:
if clock'event and clock = '1' then ...
(snip)

A simulation language's X value, in any of its various
flavours, is a trick to make Boolean algebra work even
when you have certain unknowable conditions. It doesn't,
and can't, directly mean anything in real circuits -
it merely means that we don't know enough about a bit's
simulated value to be sure it's 1 or 0.
Well, it is also nice to initialize a state machine and be
sure that it can start up with any (unknown) initial state.

Though that doesn't always work. I had one once that the
simulator couldn't figure out. Instead of 'X' I set the
start values to large numbers, like 12345, and then watched
as they got to the right value.

Note, for example, that (with unsigned arithmetic) min(16'bx,0)
is not zero in simulation, but it is with any actual value
for x.

-- glen
 
rickman <gnuarm@gmail.com> wrote:
(snip)

That is my point. If you don't know about the input to a function,
the output of that function should be no more known.
But that is the problem. In enough cases, the output of a function
is known with some inputs unknown. If you multiply by zero (and
don't allow for infinity) then the product is zero. If you subtract
a number from itself, the difference should be zero. If you
add one, and then subtract the original, it should be one.

The simulator likely gets those wrong with X.
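This is in fact what a typical Verilog simulator does for 4-state arithmetic: any x bit in an operand makes the entire result x, so even x - x comes out unknown. A bit-string sketch of that pessimism (hypothetical helper, not a real simulator API):

```python
def sub4(a, b):
    """Pessimistic 4-state-style subtraction on bit strings: any 'X'
    anywhere makes every result bit 'X', so x - x is not 0 here even
    though any concrete value minus itself would be."""
    width = max(len(a), len(b))
    if 'X' in a or 'X' in b:
        return 'X' * width
    return format((int(a, 2) - int(b, 2)) % (1 << width), f'0{width}b')

print(sub4('1X10', '1X10'))  # → XXXX (algebraically zero, but unknowable bitwise)
print(sub4('1010', '0001'))  # → 1001
```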

Treating an X->1
transition as a positive clock edge not only makes the output knowable
when it is not, it hides the fact that there is a problem in the
design or simulation. That is what I want to know about. It is not
frequent, but I have seen problems in a simulation where an internal
point has a meta value far beyond the point I would have expected, but
it was not seen on the outside because the simulation did not properly
transmit that meta value. I had to trace a wrong but valid state
down through the design, unwinding the logic's cause and effect, to find
the point where the meta value appeared. This is typically an error in
initialization or even in the simulation, but I feel it took longer to
find than it should have because the simulation did not properly
handle these meta values.
(snip)

I can't construct a FF to properly handle meta values on the clock
input and also have that construct synthesizable which is my main
goal. At least I don't think I can get that to work. It would
certainly be a lot more work and would make the simulations run much
slower. If a logic function can be made to properly handle meta
values, I don't see why the code for a FF can't be defined in a way to
do the same thing. As you say, it is just how you define your
models... or how "they" define the models.
X on a clock is strange. It is a little more interesting on
a clock enable, though. It would seem that there are some state
machines that could reliably start with an X on the clock enable
in real life, but maybe not in simulation.

-- glen
 
On Fri, 18 Mar 2011 08:18:29 -0700 (PDT), rickman <gnuarm@gmail.com>
wrote:

....
I think there is little value to looking at HDLs as general
programming languages or even as "programming languages" at all. They
really aren't intended to be programming languages. They are
"Hardware Description Languages".
....
I would like as much capability in the HDL as possible that
facilitates my work.
....
When I am doing RTL simulation I want to verify that my design is
correct. (FULL STOP)
In my humble opinion these statements are not self-consistent in the
sense that you're not only using HDL to develop hardware but you're
using it to verify your design.
A language which only/strictly allows you to describe hardware is
nowhere near good enough as a verification language. To verify well,
quickly and with high coverage you need a much more capable language
than an HDL you describe. You need a complete programming language to
describe all the verification tasks you need to accomplish. Remember
that you'll spend a lot more time in that verification language than
in the description language so make sure that verification language is
as sophisticated as possible to save you time and sometimes even make
it possible to say what you want for verification.
--
Muzaffer Kal

DSPIA INC.
ASIC/FPGA Design Services

http://www.dspia.com
 
On Mar 18, 3:02 pm, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
rickman <gnu...@gmail.com> wrote:

(snip)

That is my point.  If you don't know about the input to a function,
the output of that function should be no more known.  

But that is the problem.  In enough cases, the output of a function
is known with some inputs unknown.  If you multiply by zero (and
don't allow for infinity) then the product is zero.  If you subtract
a number from itself, the difference should be zero.  If you
add one, and then subtract the original, it should be one.

The simulator likely gets those wrong with X.
The context here is not logic gates where you can easily define a
table of outputs vs. inputs for each of the meta-values. VHDL does
that and I don't have any issues with how it is done. But I don't
agree that if you subtract a number from itself the result should be
zero if meta values are involved. Subtraction uses logic elements. I
expect that a subtraction results in meta values on the outputs
because of how the logic operates once you have defined how meta
values propagate through the logic. The real world does funny things
when you violate the input specs. That is part of what meta values
represent and the outputs should reflect that. Otherwise, what is the
point of simulation?


I can't construct a FF to properly handle meta values on the clock
input and also have that construct synthesizable which is my main
goal.  At least I don't think I can get that to work.  It would
certainly be a lot more work and would make the simulations run much
slower.  If a logic function can be made to properly handle meta
values, I don't see why the code for a FF can't be defined in a way to
do the same thing.  As you say, it is just how you define your
models... or how "they" define the models.

X on a clock is strange.  It is a little more interesting on
a clock enable, though.  It would seem that there are some state
machines that could reliably start with an X on the clock enable
in real life, but maybe not in simulation.
Yes, the issue of starting a circuit with meta values is common. If a
FF has a meta value on the enable and the input is different from the
output, the result should be a meta value. I suppose part of the
problem is that while gates are primitive elements in the math, FFs
are not elements in the language at all. They are inferred through
the constructs of the language, but not in the language itself. I
have always had an issue with that. I simply don't agree that an HDL
has to be a programming language first and describe hardware second.
It should be more tightly tied to hardware in my opinion.

Rick
 
On Mar 18, 9:32 pm, Muzaffer Kal <k...@dspia.com> wrote:
On Fri, 18 Mar 2011 08:18:29 -0700 (PDT), rickman <gnu...@gmail.com>
wrote:

...

I think there is little value to looking at HDLs as general
programming languages or even as "programming languages" at all.  They
really aren't intended to be programming languages.  They are
"Hardware Description Languages".
...
I would like as much capability in the HDL as possible that
facilitates my work.  

...

When I am doing RTL simulation I want to verify that my design is
correct.  (FULL STOP)

In my humble opinion these statements are not self-consistent in the
sense that you're not only using HDL to develop hardware but you're
using it to verify your design.
A language which only/strictly allows you to describe hardware is
nowhere near good enough as a verification language. To verify well,
quickly and with high coverage you need a much more capable language
than an HDL you describe. You need a complete programming language to
describe all the verification tasks you need to accomplish. Remember
that you'll spend a lot more time in that verification language than
in the description language so make sure that verification language is
as sophisticated as possible to save you time and sometimes even make
it possible to say what you want for verification.
I don't agree that the language can't do both. It is doing both now,
just not a great job of the hardware description part. There is
nothing wrong with a language having programming capabilities. I'm
trying to point out that some suggest that by being as flexible in the
language as possible, you don't need the language to deal directly
with aspects of hardware. But the two are not incompatible. We
shouldn't make excuses for limitations in the hardware description
aspects by saying you can program around these limitations.

I think this is an issue that comes from the software side of
development where the mindset is that ultimately no one can understand
all the issues involved in a large design, so let the machine figure
it out for you. This creates problems that we turn back to the
machine to fix, and the complexity of the tools gets ever larger. I
think simpler tools with more predictable results are a better way to
go. Complexity in the tools puts a barrier between the designer and
the design. I want to be able to get closer to my design and have
less filtering between.

Rick
 
On 03/19/2011 03:24 PM, rickman wrote:

Yes, the issue of starting a circuit with meta values is common. If a
FF has a meta value on the enable and the input is different from the
output, the result should be a meta value. I suppose part of the
problem is that while gates are primitive elements in the math, FFs
are not elements in the language at all. They are inferred through
the constructs of the language, but not in the language itself.
In the early days of synthesis (Synopsys before 1990), FF inference
wasn't supported. Therefore, designers instantiated generic FFs and
described the combinational logic around it. Primitive, but it worked.

Now, if that is your preference, why isn't it entirely trivial
to you that you can just do it like that? Why would one need a
new language if you can simply use Verilog at an even lower
level than is commonly the case?

I simply don't see the point.

Jan

--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
Python as a HDL: http://www.myhdl.org
VHDL development, the modern way: http://www.sigasi.com
Analog design automation: http://www.mephisto-da.com
World-class digital design: http://www.easics.com
 
On Mar 19, 12:21 pm, Jan Decaluwe <j...@jandecaluwe.com> wrote:
On 03/19/2011 03:24 PM, rickman wrote:



Yes, the issue of starting a circuit with meta values is common.  If a
FF has a meta value on the enable and the input is different from the
output, the result should be a meta value.  I suppose part of the
problem is that while gates are primitive elements in the math, FFs
are not elements in the language at all.  They are inferred through
the constructs of the language, but not in the language itself.

In the early days of synthesis (Synopsys before 1990), FF inference
wasn't supported. Therefore, designers instantiated generic FFs and
described the combinational logic around it. Primitive, but it worked.

Now, if that is your preference, why isn't it entirely trivial
to you that you can just do it like that? Why would one need a
new language if you can simply use Verilog at an even lower
level than is commonly the case?

I simply don't see the point.

Jan
I agree. You don't see the point.

Rick
 
rickman <gnuarm@gmail.com> wrote:

(snip, I wrote)
But that is the problem.  In enough cases, the output of a function
is known with some inputs unknown.  If you multiply by zero (and
don't allow for infinity) then the product is zero.  If you subtract
a number from itself, the difference should be zero.  If you
add one, and then subtract the original, it should be one.

The context here is not logic gates where you can easily define a
table of outputs vs. inputs for each of the meta-values. VHDL does
that and I don't have any issues with how it is done. But I don't
agree that if you subtract a number from itself the result should be
zero if meta values are involved. Subtraction uses logic elements. I
expect that a subtraction results in meta values on the outputs
because of how the logic operates once you have defined how meta
values propagate through the logic.
I might agree, but the problem is that state machines that start
up just fine in real life won't start up properly if X propagates
in all cases.

The real world does funny things
when you violate the input specs. That is part of what meta values
represent and the outputs should reflect that. Otherwise, what is the
point of simulation?
Well, violating the input spec is different. If I have logic that
is either 0 or 1, but I don't know which one, then subtract will
give zero. If it is somewhere in between, then that is different.

(snip)
X on a clock is strange.  It is a little more interesting on
a clock enable, though.  It would seem that there are some state
machines that could reliably start with an X on the clock enable
in real life, but maybe not in simulation.

Yes, the issue of starting a circuit with meta values is common. If a
FF has a meta value on the enable and the input is different from the
output, the result should be a meta value.
Again, the problem is state machines that initialize with real
data, but not with X. So, even though I agree with you mostly,
it would be nice to write systems that can verify the design,
and yet start up in any initial state.

I suppose part of the
problem is that while gates are primitive elements in the math, FFs
are not elements in the language at all. They are inferred through
the constructs of the language, but not in the language itself. I
have always had an issue with that.
Except for FF's, (and some state machines), I mostly write
structural verilog. So, yes, it does seem that FF's are not
part of the language, at least not from structural verilog.

I simply don't agree that an HDL
has to be a programming language first and describe hardware second.
It should be more tightly tied to hardware in my opinion.
-- glen
 
On 03/19/2011 05:24 PM, rickman wrote:
On Mar 19, 12:21 pm, Jan Decaluwe<j...@jandecaluwe.com> wrote:
On 03/19/2011 03:24 PM, rickman wrote:



Yes, the issue of starting a circuit with meta values is common. If a
FF has a meta value on the enable and the input is different from the
output, the result should be a meta value. I suppose part of the
problem is that while gates are primitive elements in the math, FFs
are not elements in the language at all. They are inferred through
the constructs of the language, but not in the language itself.

In the early days of synthesis (Synopsys before 1990), FF inference
wasn't supported. Therefore, designers instantiated generic FFs and
described the combinational logic around it. Primitive, but it worked.

Now, if that is your preference, why isn't it entirely trivial
to you that you can just do it like that? Why would one need a
new language if you can simply use Verilog at an even lower
level than is commonly the case?

I simply don't see the point.

Jan

I agree. You don't see the point.
Sure, we can't all be HDL language design geniuses.

Still, there is something that puzzles me. An HDL
like AHDL seems to be exactly the closer-to-hardware
HDL that you want. Of course, I believe that it is moving
into the ranks of forgotten HDLs for the same reason.
But still, one would expect that you would mention it
as an example to follow. At least this would get
the discussion real instead of vague and open-ended.

So if you ever did historical research about HDLs
(= googling for "HDL"), or even better, if you have
experience with an HDL like AHDL, you did a very good
job at hiding it.

Otherwise: those who don't know history are bound
to repeat it. Mistakes included.

Jan

--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
Python as a HDL: http://www.myhdl.org
VHDL development, the modern way: http://www.sigasi.com
Analog design automation: http://www.mephisto-da.com
World-class digital design: http://www.easics.com
 
On Mar 19, 1:47 pm, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
rickman <gnu...@gmail.com> wrote:

(snip, I wrote)

But that is the problem. In enough cases, the output of a function
is known with some inputs unknown. If you multiply by zero (and
don't allow for infinity) then the product is zero. If you subtract
a number from itself, the difference should be zero. If you
add one, and then subtract the original, it should be one.
The context here is not logic gates where you can easily define a
table of outputs vs. inputs for each of the meta-values.  VHDL does
that and I don't have any issues with how it is done.  But I don't
agree that if you subtract a number from itself the result should be
zero if meta values are involved.  Subtraction uses logic elements.  I
expect that a subtraction results in meta values on the outputs
because of how the logic operates once you have defined how meta
values propagate through the logic.  

I might agree, but the problem is that state machines that start
up just fine in real life won't start up properly if X propagates
in all cases.

The real world does funny things
when you violate the input specs.  That is part of what meta values
represent and the outputs should reflect that.  Otherwise, what is the
point of simulation?

Well, violating the input spec is different.  If I have logic that
is either 0 or 1, but I don't know which one, then subtract will
give zero.  If it is somewhere in between, then that is different.
Yes, but if you consider what 'X' means, it includes your case but
does not mean only that. It means that the state is not known and can
be changing in an unknown way. So your simulation does not match
reality, because the states are not specified well
enough.

I remember finding this back when I started working with HDLs and a
tech support person who knew something about HDLs (this was back when
you could actually speak with someone knowledgeable on a hot line)
told me that this was a well known issue. I guess it has not been a
big enough problem to do anything about it. The case of a FF with
feedback never getting out of a meta value is the same as the
subtraction case. But in the FF case the solution would be to test
with the input in each state. For the subtraction case you would need
to test with all possible combinations which would be an unrealistic
task. This would require telling the simulator that the two inputs
are not known, but stable, valid and equal. That doesn't sound too
realistic either.


X on a clock is strange. It is a little more interesting on
a clock enable, though. It would seem that there are some state
machines that could reliably start with an X on the clock enable
in real life, but maybe not in simulation.
Yes, the issue of starting a circuit with meta values is common.  If a
FF has a meta value on the enable and the input is different from the
output, the result should be a meta value.  

Again, the problem is state machines that initialize with real
data, but not with X.  So, even though I agree with you mostly,
it would be nice to write systems that can verify the design,
and yet start up in any initial state.
I'm not sure how that relates to FSMs that start up with unknown
inputs. If you don't know the value of a clock enable, how can you
know when or if it will capture the input signal? With FSMs it is
particularly difficult because they will eventually arrive at a known
state, but the process of getting there will not necessarily be the same
in all cases. So how could that be accommodated?


I suppose part of the
problem is that while gates are primitive elements in the math, FFs
are not elements in the language at all.  They are inferred through
the constructs of the language, but not in the language itself.  I
have always had an issue with that.  

Except for FF's, (and some state machines), I mostly write
structural verilog.  So, yes, it does seem that FF's are not
part of the language, at least not from structural verilog.
Structural coding in VHDL is a real PITA because of the verbosity.
I'm a little unclear on what you say you do. I would instantiate FFs
and use RTL for the logic. Why would you instantiate the logic and
infer the FFs? FFs can be instantiated, no? I'm more familiar with
VHDL, but I don't use instantiation for low level objects.

Rick
 
rickman <gnuarm@gmail.com> wrote:

(snip, I wrote)

Again, the problem is state machines that initialize with real
data, but not with X.  So, even though I agree with you mostly,
it would be nice to write systems that can verify the design,
and yet start up in any initial state.

I'm not sure how that relates to FSMs that start up with unknown
inputs. If you don't know the value of a clock enable, how can you
know when or if it will capture the input signal? With FSMs it is
particularly difficult because they will eventually arrive at a known
state, but the process of getting there will not necessarily be the same
in all cases. So how could that be accommodated?
OK, say you have a state machine that uses clock enable as
part of its feedback, and also is well designed such that it
works no matter the initial state. If it starts in X, then
it won't work.

Simplest I can think of is a FF with clock enable the XNOR
of its output and its output delayed by one clock cycle.
If its clock enable is low, the output won't change, the XNOR
will go high, and so will the clock enable.

I haven't thought about why one would want to do that, but
it doesn't seem so strange.
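For what it's worth, glen's construction can be sketched in a few lines of 2-state Python (the names are made up, not anyone's actual model). It also shows where an X start would wedge it: the XNOR of Q and delayed Q re-enables the FF once Q holds steady, but if either value starts as X the XNOR is X and the loop never resolves.

```python
# Toy 2-state simulation of a FF whose clock enable is the XNOR of its
# output Q and Q delayed by one clock cycle.
def step(q, q_prev, d):
    en = 1 if q == q_prev else 0   # XNOR of Q and delayed Q
    q_next = d if en == 1 else q   # enabled FF
    return q_next, q               # new (q, q_prev) pair

# Start from a disabled-looking state: one held cycle raises the enable.
q, q_prev = 0, 1
q, q_prev = step(q, q_prev, 1)     # en=0: hold, q stays 0
q, q_prev = step(q, q_prev, 1)     # en=1: capture D
print(q)   # 1
```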

-- glen
 
On Sun, 20 Mar 2011 07:04:59 +0000 (UTC), glen herrmannsfeldt wrote:

Again, the problem is state machines that initialize with real
data, but not with X.  So, even though I agree with you mostly,
it would be nice to write systems that can verify the design,
and yet start up in any initial state.

I'm not sure how that relates to FSMs that start up with unknown
inputs. If you don't know the value of a clock enable, how can you
know when or if it will capture the input signal? With FSMs it is
particularly difficult because they will eventually arrive at a known
state, but the process of getting there will not necessarily be the same
in all cases. So how could that be accommodated?

OK, say you have a state machine that uses clock enable as
part of its feedback, and also is well designed such that it
works no matter the initial state. If it starts in X, then
it won't work.
The problem is that X is only an abstraction, and
it doesn't (and doesn't claim to) model all the
real possibilities.

In Verilog, the hardware meaning of X is pretty much
"it's either a 0 or a 1 but, for some reason, I can't
decide which it is". This idea gives some obvious
inconsistencies, as we've seen. Suppose, for example,
I have an XOR gate whose inputs are both X. In effect,
that means we have four possible values for the XOR's
inputs: 00, 01, 10, 11 - either input can be 0 or 1.
So we don't know the output and we must write the
truth table as X^X=X.

But suppose, for a moment, that both inputs are wired
to the same signal. Now we have only two possible
inputs: 00, 11. The output is reliably 0. How can
we capture this? Obviously we can't describe it in
the truth table of XOR, because that can't know about
correlations between its two inputs.
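The two situations can be put side by side in a short Python sketch, with 'x' standing in for the unknown value. The truth-table view is forced to be pessimistic, while enumerating the actual possibilities for one shared wire gives the reliable 0.

```python
# XOR with X-propagation, seen as an uncorrelated truth table.
def xor4(a, b):
    if a == 'x' or b == 'x':
        return 'x'       # the table can't know the inputs are related
    return a ^ b

# The truth table alone must say X ^ X = X:
print(xor4('x', 'x'))                 # x

# But if both inputs are the SAME unknown wire, only 00 and 11 occur:
print({xor4(v, v) for v in (0, 1)})   # {0}
```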

This is precisely why I believe it is futile to try
to make the language imitate hardware reality to the
extent that Rick seems to want. It will always be
possible to find cases where the modelling doesn't
match reality (is too pessimistic or too optimistic)
because X values carry no information about their
relationship to other X values. What we have right
now may not be the perfect compromise, but it's well
defined and we know how to live with it. (Did I
mention assertions?)

Symbolic simulation, and formal analysis, are able
to deal with these questions because they understand
the functionality of the whole circuit, not just a
set of uncorrelated truth tables. Conventional
functional simulation can't be that smart.

Another way to handle it, which has been used on
real projects, is to interfere with the simulation
at time zero so that all register-like signals are
initialized to a random 0 or 1 value instead of X
(it's tricky, but not impossible, to do this using
the Verilog VPI). By running many such simulations
with different random seeds you can learn a lot about
the real start-up and "X" behaviour. There has been
a serious proposal to add such a facility to the
Verilog standard, and I think there's one commercial
simulator that already supports it, but it's not
happening in the current standards effort. There
are some papers on this in the public domain that
I'll try to hunt down and link to.

cheers
--
Jonathan Bromley
 
On 03/20/2011 07:10 AM, rickman wrote:

Structural coding in VHDL is a real PITA because of the verbosity.
I'm a little unclear on what you say you do. I would instantiate FFs
and use RTL for the logic. Why would you instantiate the logic and
infer the FFs? FFs can be instantiated, no? I'm more familiar with
VHDL, but I don't use instantiation for low level objects.
That may explain why you were a little hasty in dismissing
my suggestion to use Verilog instantiation as a cheap way to
implement a closer-to-hardware HDL. Your VHDL past is the problem.

I suggest to reconsider. First, let me point out that any
closer-to-hardware HDL will necessarily involve the use of
more "low level objects" in some form.

More importantly, as is commonly known and as I believe you
experienced yourself, Verilog is a neat and concise HDL as
opposed to super-verbose VHDL. Certainly for structural
coding. Therefore, you will probably find Verilog instantiations
a joy to work with.

And of course, this applies to any functionality, not just FFs.
For example, you could use it for the modulo power-of-2 counters
that, as I recall, you couldn't get to synthesize properly.

I suppose you now see how this proposal addresses the issues
of your concern.

Jan

--
Jan Decaluwe - Resources bvba - http://www.jandecaluwe.com
Python as a HDL: http://www.myhdl.org
VHDL development, the modern way: http://www.sigasi.com
Analog design automation: http://www.mephisto-da.com
World-class digital design: http://www.easics.com
 
On Mar 20, 3:04 am, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
rickman <gnu...@gmail.com> wrote:

(snip, I wrote)

Again, the problem is state machines that initialize with real
data, but not with X. So, even though I agree with you mostly,
it would be nice to write systems that can verify the design,
and yet start up in any initial state.
I'm not sure how that relates to FSMs that start up with unknown
inputs.  If you don't know the value of a clock enable, how can you
know when or if it will capture the input signal?  With FSMs it is
particularly difficult because they will eventually arrive at a known
state, but the process of getting there will not necessarily be the same
in all cases.  So how could that be accommodated?

OK, say you have a state machine that uses clock enable as
part of its feedback, and also is well designed such that it
works no matter the initial state.  If it starts in X, then
it won't work.

Simplest I can think of is a FF with clock enable the XNOR
of its output and its output delayed by one clock cycle.
If its clock enable is low, the output won't change, the XNOR
will go high, and so will the clock enable.

I haven't thought about why one would want to do that, but
it doesn't seem so strange.

-- glen
But if you assume the starting state is unknown, how do you prove that
the FSM will work? You can't. You have to consider all possible
starting states and test them all or something similar. By specifying
an 'X' state, you have not provided enough information for a
simulation to know this will work. Just saying that the machine will
work no matter the starting state is not enough to resolve an 'X'.
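Rick's "test them all" alternative can be made concrete in 2-state terms: instead of clocking X through the FSM, enumerate every possible start state and check that each run converges. The toy next-state table below is hypothetical, chosen only so that every state funnels to state 0.

```python
# A toy 2-bit self-synchronising FSM: every state eventually reaches 0.
def fsm_next(state):
    return {0: 0, 1: 0, 2: 1, 3: 2}[state]

def converges(start, steps=4):
    """Does the FSM reach its known state within `steps` clocks?"""
    s = start
    for _ in range(steps):
        s = fsm_next(s)
    return s == 0

# Exhaustive check over all four possible power-up states:
print(all(converges(s) for s in range(4)))   # True
```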

Tell me what transition sequence you would expect to see as the FSM is
clocked with 'X' on the inputs?

Rick
 
On Mar 20, 8:35 am, Jonathan Bromley <s...@oxfordbromley.plus.com>
wrote:
On Sun, 20 Mar 2011 07:04:59 +0000 (UTC), glen herrmannsfeldt wrote:
Again, the problem is state machines that initialize with real
data, but not with X. So, even though I agree with you mostly,
it would be nice to write systems that can verify the design,
and yet start up in any initial state.

I'm not sure how that relates to FSMs that start up with unknown
inputs.  If you don't know the value of a clock enable, how can you
know when or if it will capture the input signal?  With FSMs it is
particularly difficult because they will eventually arrive at a known
state, but the process of getting there will not necessarily the same
in all cases.  So how could that be accommodated?

OK, say you have a state machine that uses clock enable as
part of its feedback, and also is well designed such that it
works no matter the initial state.  If it starts in X, then
it won't work.

The problem is that X is only an abstraction, and
it doesn't (and doesn't claim to) model all the
real possibilities.
The issue is not that 'X' can't model all possibilities, but that 'X'
is saying that you lack information about the state. To model the
real world in more detail you need more information than 'X'
represents.


In Verilog, the hardware meaning of X is pretty much
"it's either a 0 or a 1 but, for some reason, I can't
decide which it is".  
I don't agree with this, really. It says you don't know the state, not
that the signal is at a valid and constant, though unknown, value. It
does not represent having any information about the state of the
signal, so you can't make many of the assumptions you indicate below.


This idea gives some obvious
inconsistencies, as we've seen.  Suppose, for example,
I have an XOR gate whose inputs are both X.  In effect,
that means we have four possible values for the XOR's
inputs: 00, 01, 10, 11 - either input can be 0 or 1.
So we don't know the output and we must write the
truth table as X^X=X.
So far, so good.


But suppose, for a moment, that both inputs are wired
to the same signal.  Now we have only two possible
inputs: 00, 11.  The output is reliably 0.  How can
we capture this?  Obviously we can't describe it in
the truth table of XOR, because that can't know about
correlations between its two inputs.
I don't agree that knowing the same signal is on the two inputs of an
XOR gate is enough information to know the output. In a real world
circuit this can easily generate glitches on every state change of the
input.  So you don't know the output, and an output of 'x' is valid.


This is precisely why I believe it is futile to try
to make the language imitate hardware reality to the
extent that Rick seems to want.  It will always be
possible to find cases where the modelling doesn't
match reality (is too pessimistic or too optimistic)
because X values carry no information about their
relationship to other X values.  What we have right
now may not be the perfect compromise, but it's well
defined and we know how to live with it.  (Did I
mention assertions?)
I think you are overstating what I have asked for. I am not saying
you have to model logic perfectly. My original thought was that FFs
are not modeled well by assuming a transition from 'x' to '1' is a
rising clock edge. I still think this is a very poor model. Either
ignore invalid edges or generate an invalid output from the FF. The
former makes perfect sense to me while the latter would likely be hard
to do given the current languages.
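The two posedge policies being argued about can be put side by side in a small Python sketch. The `verilog_posedge` ordering 0 < x,z < 1 follows the IEEE 1364 edge definition (0->1, 0->x, x->1, and the z equivalents all count); `strict_posedge` is the behaviour Rick would prefer, where only a clean 0->1 clocks the FF.

```python
# Verilog's posedge: any transition upward in the ordering 0 < x,z < 1.
def verilog_posedge(old, new):
    rank = {0: 0, 'x': 1, 'z': 1, 1: 2}
    return rank[new] > rank[old]

# The stricter alternative: ignore any edge involving an unknown value.
def strict_posedge(old, new):
    return old == 0 and new == 1

print(verilog_posedge('x', 1))   # True: X->1 clocks the FF in Verilog
print(strict_posedge('x', 1))    # False: invalid edges are ignored
```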


Symbolic simulation, and formal analysis, are able
to deal with these questions because they understand
the functionality of the whole circuit, not just a
set of uncorrelated truth tables.  Conventional
functional simulation can't be that smart.

Another way to handle it, which has been used on
real projects, is to interfere with the simulation
at time zero so that all register-like signals are
initialized to a random 0 or 1 value instead of X
(it's tricky, but not impossible, to do this using
the Verilog VPI).  By running many such simulations
with different random seeds you can learn a lot about
the real start-up and "X" behaviour.  There has been
a serious proposal to add such a facility to the
Verilog standard, and I think there's one commercial
simulator that already supports it, but it's not
happening in the current standards effort.  There
are some papers on this in the public domain that
I'll try to hunt down and link to.
Yes, randomization of start up states can help with the simulation,
but it doesn't seem to be the right way to deal with the issue of FSM
startup. It only takes one missed case to spoil a design. I'm not
clear though on why a part of a design that needs to start up
correctly would not use an initialization through reset or similar.
When would this randomization be needed?

Rick
 
On Sun, 20 Mar 2011 09:55:57 -0700 (PDT), rickman wrote:

The problem is that X is only an abstraction, and
it doesn't (and doesn't claim to) model all the
real possibilities.

The issue is not that 'X' can't model all possibilities, but that 'X'
is saying that you lack information about the state. To model the
real world in more detail you need more information than 'X'
represents.
Yes, I think that's pretty much what I said. You need to
know about relationships that hold between different signals,
not just the values on individual signals.

In Verilog, the hardware meaning of X is pretty much
"it's either a 0 or a 1 but, for some reason, I can't
decide which it is".  

I don't agree with this really.
You're welcome to disagree all you like, but that's how it
works for all the Verilog built-in operators. I agree that
posedge is somewhat anomalous, and if() is inevitably
unsatisfactory, but the basic behaviour is as I stated.
Check out the behaviour of the ?: conditional operator
when the selector is X, if you want further confirmation:
wire [3:0] Y = 1'bX ? 4'b1010 : 4'b0110;
gives Y=4'bXX10.
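That ?: behaviour can be mimicked per bit in Python. This is a sketch of the LRM rule, not simulator code: where the two operands agree, the bit is known; where they differ, it is X.

```python
# Model of Verilog's conditional operator with an unknown selector.
def mux_x(sel, a_bits, b_bits):
    if sel == 1:
        return a_bits
    if sel == 0:
        return b_bits
    # sel is 'x': merge bitwise, keeping only the bits both agree on.
    return [ai if ai == bi else 'x' for ai, bi in zip(a_bits, b_bits)]

# wire [3:0] Y = 1'bX ? 4'b1010 : 4'b0110;   (bits listed MSB first)
print(mux_x('x', [1, 0, 1, 0], [0, 1, 1, 0]))   # ['x', 'x', 1, 0]
```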

It says you don't know the state, not
that it is at a valid and constant, but unknown value.
I didn't say "constant", and certainly didn't intend to
imply it. "Valid", though, I did mean. If you want
X to mean "some voltage on the wire that makes it
uncertain whether I have 0 or 1" then you must go to
analog modelling of some kind.

you can't make many of the assumptions you indicate below.
I'm not sure what you mean by that. I posited some
example situations, and didn't aim to "make assumptions".

But suppose, for a moment, that both inputs are wired
to the same signal.  Now we have only two possible
inputs: 00, 11.  The output is reliably 0.  How can
we capture this?  Obviously we can't describe it in
the truth table of XOR, because that can't know about
correlations between its two inputs.

I don't agree that knowing the same signal is on the two inputs of an
XOR gate is enough information to know the output. In a real world
circuit this can easily generate glitches on every state change of the
input. So you don't know the output and the output of 'x' is valid.
That's a different issue, easily modelled by adding timing
(a specify block, in Verilog-speak) to your XOR model. The
Verilog calculation "y = a ^ a" will ALWAYS yield zero
if 'a' is either 0 or 1; you won't see a glitch when 'a'
makes a transition. Of course you may get a glitch in the
real hardware; if you really want to see that in simulation,
you'll need to add timing to your model. That's irrelevant
to my point that the (stable) value of a^a is zero
regardless of the (stable) value of 'a', but ordinary
functional simulation can't produce that correct result
when a=X. Similarly, if you do provide the necessary
timing model so that flipping the XOR's inputs from 00
to 11 gives a visible output glitch, you won't - and
could not, by any stretch of the imagination - get
such a glitch when the inputs "transition" from XX
to XX, because there's no transition. The X value
allows propagation of unknown-ness through the design
in some cases, but doesn't allow you to represent
all the possibilities that you might care about.

I still think this
[posedge responds to 0X and X1 transitions]
is a very poor model. Either
ignore invalid edges or generate an invalid output from the FF. The
former makes perfect sense to me while the latter would likely be hard
to do given the current languages.
I don't disagree that the definition of posedge is odd, and
probably somewhat inappropriate, but that's the way it is
and we need to live with it. Did I mention assertions?

Note that a suitable SystemVerilog assertion could
allow you to trash the Q value of a conventional
synthesisable FF model if there's an X on the clock,
without compromising the ability to synthesise the
code, if you think that's the right thing to do.

randomization of start up states can help with the simulation,
but it doesn't seem to be the right way to deal with the issue of FSM
startup. It only takes one missed case to spoil a design.
I hope no-one would equate "randomized" with "scattergun"
for functional verification. I can easily collect coverage
on those randomly generated initial states to check that
any specific critical case has been covered exhaustively,
while tolerating just a selection of starting values
for less critical things such as the counter in a
divide-by-N block.
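As a sketch of that coverage idea (all names and numbers invented): randomise the start state once per run, record what was covered across the whole regression, and confirm afterwards that the critical cases were actually hit.

```python
import random

CRITICAL = {0, 1}         # states we insist on seeing covered
N_STATES = 8              # size of the toy state space

covered = set()
rng = random.Random(1)    # a fixed seed stands in for one regression
for run in range(200):    # each loop iteration is one "simulation run"
    start = rng.randrange(N_STATES)
    covered.add(start)

# Coverage closure: did the random starts exercise the critical cases?
print(CRITICAL <= covered)
```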

If I'm really worried about such a thing, I can deploy
a formal assertion checker (or manual analysis) to prove
that all possible start states work properly. Pencil
and paper still has its place.

I'm not
clear though on why a part of a design that needs to start up
correctly would not use an initialization through reset or similar.
When would this randomization be needed?
Some of the systems I'm working on right now have
an audio timebase, typically 44kHz but could be even
as low as 8kHz. That's generated by dividing down
the system clock, probably 50MHz. No-one cares
about the absolute phase of this timebase relative
to reset. I can't necessarily reset the counter
at power-up (the chip may have multiple power
domains, some of which come out of power-down at
times when there's no reset happening). The
counter powers up at X; oops, no simulation.
If I force it to zero at power-up, I'm testing
only one of thousands of possibilities. Why
not randomize its state at power-up? If I allow
that random startup value to vary across the
hundreds of simulations I do for other reasons,
I'll get pretty good confidence that all is well.
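The timebase example can be caricatured in a few lines of Python (N and the clock figures are placeholders, not Jonathan's actual design): whatever the randomised power-up phase, the long-run tick count is identical, which is why no-one cares which phase a given run starts in.

```python
import random

N = 1136   # roughly 50 MHz / 44 kHz: one timebase tick per N clocks

def ticks(start, cycles):
    """Count timebase ticks over `cycles` clocks from a given phase."""
    count, out = start, 0
    for _ in range(cycles):
        count = (count + 1) % N
        if count == 0:
            out += 1
    return out

rng = random.Random(0)
phases = [rng.randrange(N) for _ in range(5)]
# Over 10*N clocks, every phase yields exactly the same tick count:
print({ticks(p, 10 * N) for p in phases})   # {10}
```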
--
Jonathan Bromley
 
rickman <gnuarm@gmail.com> wrote:
(snip)

The issue is not that 'X' can't model all possibilities, but that 'X'
is saying that you lack information about the state. To model the
real world in more detail you need more information than 'X'
represents.
(snip)
I don't agree that knowing the same signal is on the two inputs of an
XOR gate is enough information to know the output. In a real world
circuit this can easily generate glitches on every state change of the
input. So you don't know the output and the output of 'x' is valid.
Well, that would be true if there was a different delay between
the paths. As verilog does model delay (though rarely used) it
would seem fair for it to include the delay.

(snip)
Yes, randomization of start up states can help with the simulation,
but it doesn't seem to be the right way to deal with the issue of FSM
startup. It only takes one missed case to spoil a design. I'm not
clear though on why a part of a design that needs to start up
correctly would not use an initialization through reset or similar.
When would this randomization be needed?
I like the randomization idea, but, yes, that wouldn't be the
final answer for SM startup.  I have used big constants
(such as 12345) when it made sense.

There have been suggestions to supply random bits for floating
point post-normalization, to simulate the uncertainty in the result.

-- glen
 
