KJ wrote:
> > In the OP, the author has simply shown a way to provide the behavior
> > of two separate processes (one with async reset, one without), in
> > one process, a valuable technique if one wants to minimize processes
> > and maximize use of variables.
>
> I'm not sure that 'maximize use of variables' is any sort of useful
> metric (function/performance/power/code maintainability are more
> useful), but I agree with you and the original post author that what
> was posted is an improvement over using two processes and belongs in
> the bag of tricks.

Maximizing use of variables also maximizes RTL simulation performance,
since there is far less overhead in updating and accessing variables
than signals. More simulation performance means more corner cases hit,
and more bugs found.
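For readers without the OP in front of them, here is a minimal sketch
of the kind of single-process technique being discussed (the entity and
variable names are mine, not the OP's): the async reset clause is
written after the clocked code, so it overrides only the variables
assigned in it, and the remaining variables infer ordinary un-reset
flops with no gated clock. Whether a given synthesis tool accepts this
ordering is worth confirming before relying on it.

library ieee;
use ieee.std_logic_1164.all;

-- Sketch only: names are illustrative, not from the original post.
entity one_proc_example is
  port (
    clk, rst_n : in  std_logic;
    d          : in  std_logic;
    q          : out std_logic
  );
end entity one_proc_example;

architecture rtl of one_proc_example is
begin
  process (clk, rst_n)
    variable stage1 : std_logic := '0';  -- gets the async reset
    variable stage2 : std_logic := '0';  -- deliberately has no reset
  begin
    if rising_edge(clk) then
      stage2 := stage1;  -- plain flop, no reset term inferred
      stage1 := d;
    end if;
    if rst_n = '0' then
      stage1 := '0';  -- reset clause last, so it overrides the clocked
                      -- assignment for stage1 only; stage2 untouched
    end if;
    q <= stage2;
  end process;
end architecture rtl;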
> But the original post also mentioned this in the context of this
> being a good way to avoid unwanted gated clocks. And in my original
> post I simply mentioned that an asynchronously resettable shift
> register, used to generate the reset signal to everything else in the
> design, with synchronous resets throughout, avoids the situation
> entirely, and in most cases costs darn near nothing and performs
> virtually the same.

Agreed, but the devil's in the details of "most" and "virtually".
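For concreteness, a minimal sketch of the scheme KJ describes (the
entity name, port names, and four-stage depth are my assumptions, not
from his post): an asynchronously reset shift register is the one
reset in the design that asserts asynchronously; its output is then
distributed as an ordinary synchronous reset.

library ieee;
use ieee.std_logic_1164.all;

-- Sketch only: names and shift register depth are illustrative.
entity reset_gen is
  port (
    clk      : in  std_logic;
    arst_n   : in  std_logic;  -- raw asynchronous reset, active low
    sync_rst : out std_logic   -- distribute as a synchronous reset
  );
end entity reset_gen;

architecture rtl of reset_gen is
  signal sr : std_logic_vector(3 downto 0) := (others => '1');
begin
  process (clk, arst_n)
  begin
    if arst_n = '0' then
      sr <= (others => '1');       -- assert immediately, clock or not
    elsif rising_edge(clk) then
      sr <= sr(2 downto 0) & '0';  -- deassert four clocks after release
    end if;
  end process;

  sync_rst <= sr(3);
end architecture rtl;

Downstream logic then tests sync_rst inside its clocked code, which is
exactly where the limitation under debate shows up: nothing downstream
reaches its reset state until clk is actually running.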
> The only functional difference between the two is that PRIOR to that
> first rising edge of the clock the outputs are in a different state.
> AFTER the first rising edge everything is the same. The reset signal
> itself can come when the clock is shut off; it's just the result of
> that reset that doesn't show up until the clock does start.

Maybe I'm being picky, but that can be a big difference! They are NOT
the same, particularly when meeting requirements in the absence of a
clock!
Maybe "generally" and "usually" in your work, but not in mine!As a practical matter, that functional difference is generally of no
importance....for the simple reason that the reason that the clock
isn't running is usually because something has knowingly shut it off
(i.e. maybe to conserve power). In any case, that thing that controls
the clock certainly knows to ignore the outputs of a function that it
is not actively using so the fact that the outputs aren't the way you
think they should be really doesn't matter darn near all the time.
> If you think the slight functional difference is important because
> this signal is a 'really important' signal that absolutely must come
> up correctly (i.e. launch the missiles), then think again. Before any
> properly designed system would turn over control of that 'really
> important' signal in the first place, it would first test it to make
> sure that it is working correctly (i.e. no false launches... no
> missed launch commands). Only then would it allow that signal to
> control the 'really important' function... and it would only do so
> after starting the clock, because the designer realizes that the
> outputs become valid after the clock, not before.

This is not about proper initialization at startup, though those
problems are often the result of improperly handling (synchronizing)
reset inputs, no matter whether the end circuit is designed with an
async or a sync reset.
> If the clock isn't running because it is just busted, then maybe the
> slight functional difference does become important, but only if it
> prevents the system from properly diagnosing which field replaceable
> unit needs replacing, or from routing around the failing component.

This is the most common root cause for the requirement for
predictable/safe behavior in the absence of a clock: a failed clock
input. If the system is designed to shut off the clock, then not
handling that would be a design defect. When system outputs are used
to directly control other things that can destroy themselves (or
destroy something else) if not actively controlled (motor servo loops
are just one minor example), then behavior without a clock becomes
vitally important, particularly in medical, automotive, and military
applications where human lives are at stake.
<snip...>
> > There are many things that can be inferred from RTL that do not
> > have the option of an async reset,
>
> Think you meant "the option of a sync reset"

Either one: RAM and shift register primitives are a good example.

> > and mixing them with asynchronously reset logic using the OP
> > example is beneficial.
>
> Agreed... keeping in mind that using async resets requires more
> 'skill' (for lack of a better word) than sync resets.

Once the deasserted edge is handled, there is no more or less skill
involved in using async vs sync resets. Handling the deasserting edge
is no more or less difficult than properly synchronizing a reset input
in the first place. Both have to be done for each clock domain. If
there is no more skill involved, why not use the "safest" approach,
which guarantees defined behavior even in the absence of a clock? Now,
if it were for an ASIC, where flop primitives with async resets come
at a real-estate, if not performance, disadvantage, then by all means
use the sync reset (which usually gets munged in with the gates
anyway), provided there are no requirements for safing the outputs in
the absence of the clock.
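To make "handling the deasserting edge" concrete, here is a minimal
sketch (names are mine) of the usual per-clock-domain treatment: the
output reset asserts asynchronously the moment the input does, but
releases synchronously to clk, so every async-reset flop in the domain
leaves reset on the same clock edge.

library ieee;
use ieee.std_logic_1164.all;

-- Sketch only: names are illustrative.
entity arst_deassert_sync is
  port (
    clk        : in  std_logic;
    arst_n_in  : in  std_logic;  -- raw async reset, active low
    arst_n_out : out std_logic   -- asserts async, deasserts sync
  );
end entity arst_deassert_sync;

architecture rtl of arst_deassert_sync is
  signal ff : std_logic_vector(1 downto 0) := (others => '0');
begin
  process (clk, arst_n_in)
  begin
    if arst_n_in = '0' then
      ff <= (others => '0');  -- assert downstream reset immediately
    elsif rising_edge(clk) then
      ff <= ff(0) & '1';      -- release two clock edges after input
    end if;
  end process;

  arst_n_out <= ff(1);
end architecture rtl;

Note these are the same two flops you would use to synchronize a reset
input for synchronous-reset use, which is the point above: the work is
the same either way, once per clock domain.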
Even if it were more difficult, that's why they pay us the big bucks!
Andy