When are two clock domains actually considered asynchronous?

On Dec 15, 2:50 pm, Alessandro Basili <alessandro.bas...@cern.ch>
wrote:
On 12/14/2010 4:56 AM, rickman wrote:
I never said testing isn't useful.  I said that testing can't assure
that something works unless you test every possible condition and that
is not possible in an absolute sense.

I don't get that. What you call "every possible condition" I take to mean
every possible state of a particular set of logic. If your logic cannot
be described as a well-defined number of states, then I agree that
testing will never be enough (but in that case I suggest a rework of the
design).
If your states are not controllable (i.e. by an external control signal),
then there's little you can do in testing. If your states are not even
observable, then I believe you are "doomed" (as somebody said earlier...).
If you are only talking about testing logic, yes, you can do that
exhaustively, assuming a few conditions. But testing in general takes
on a lot more than that, and the condition the OP described was not a
logic issue; rather, it depends on uncontrolled variables and is
potentially very intermittent.


Besides, you snipped the OP's comment I was responding to...

"the design seems to be working fine in hardware although I haven't
made any extensive tests yet."

I snipped the OP's comment because it didn't add anything that wasn't
already said in your sentence.
It gave the context. Without the relevant context, which seems to be
the issue we are not communicating on, the points mean nothing.


The problem we are discussing is ***exactly*** the sort of thing you
may not find in testing.  The way the OP has constructed the circuit
it may work in 99.99% of the systems he builds.  Or it may work 99.99%
of the time in all of the systems.  But testing can't validate timing
since you can't control all of the variables.

This is why the way the OP constructed the circuit was suboptimal, and
that is why it was suggested not to use a generated clock, but an enable
signal (or even to clock the UART with the original 25 MHz clock).
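For readers following along, here is a minimal sketch of the enable-signal
approach being suggested: instead of dividing clk_25 down into a new
clk_250k clock, generate a one-cycle enable pulse every 100 cycles
(25 MHz / 250 kHz = 100) and keep the whole UART in the 25 MHz domain.
The entity and signal names are illustrative, not taken from the OP's code.

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity baud_enable is
  port (
    clk_25  : in  std_logic;  -- the single system clock
    rst     : in  std_logic;
    en_250k : out std_logic   -- high for one clk_25 cycle per 250 kHz period
  );
end entity;

architecture rtl of baud_enable is
  signal count : unsigned(6 downto 0) := (others => '0');  -- counts 0..99
begin
  process (clk_25)
  begin
    if rising_edge(clk_25) then
      if rst = '1' then
        count   <= (others => '0');
        en_250k <= '0';
      elsif count = 99 then
        count   <= (others => '0');
        en_250k <= '1';  -- single-cycle tick at 250 kHz rate
      else
        count   <= count + 1;
        en_250k <= '0';
      end if;
    end if;
  end process;
end architecture;
```

The UART process then guards its state updates with `if en_250k = '1'`
inside the same `rising_edge(clk_25)` clause, so the timing tools see
exactly one clock domain.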

The only thing testing can really do is to verify that your design
meets your requirements... if your requirements are testable!

If something cannot be tested, how would you accept those
requirements? And how could it then be said that the system doesn't
work? According to which requirement violation?



I believe that coding is just part of the story. Having the design
tested by a team different from the designers' will make a huge
difference.

"a huge difference"... I understand there are lots of ways to improve
testing, but that doesn't change the fundamental limitation of
testing.

The fundamental limitations of testing are most probably due to a very
poor description of the device under test. That is why documentation
should be the first step in a work flow, rather than the last one, where
the designer tends to leave out all the "unnecessary" details which
eventually lead to misunderstanding. How hard is it to test one single
flip-flop? Not very, but only because it is fully described.



http://www.designabstraction.co.uk/Articles/Common%20HDL%20Errors.PDF

I'm not sure what to say about this article.  It is actually a bit
shallow in my opinion.  But it also contains errors!  Much of it is
really just the opinion of the author.

Could you please post the errors? Since I haven't found them, most
probably your reading was deeper than mine and it would be very helpful
to me.
The author not only gives an opinion, he also lays down an approach which
is at a higher level of abstraction than the flops and gates.
I don't have time to do that now. Maybe I can come back to this over
the weekend.


I think that terms like "good practices" are some of the least useful
concepts I've ever seen.  First, where are "good practices" defined?

In many books, standards, proceedings and articles, and you may be lucky
enough that your company has already defined a set of them.
My experience is that most "good practices" are really just
experience. When they are "codified" they often lose their impact
because they are applied poorly. Yes, "good practice" is clearly
good. But it is a term that has no real definition except within some
specific context, and so it is not really useful in a conversation about
a concrete issue.


Without a clear, detailed definition of the term, it doesn't
communicate anything.  Usually it is used to mean "what I do".  It may
be defined within a company, in some limited ways it may be defined
within a sector.  But in general this is a term that has little
meaning as used by most people.  This is much like recommending to
design "carefully".  I can't say how many times I have seen that word
used in engineering without actually saying anything.

Usually it means "what most of the people do", as opposed to what you
suggested. Of course, circumstances and requirements may somehow be very
demanding, but an alternative architectural approach will surely have a
higher impact.
I see no value in doing "what most people do". I have seen the same
mistakes made over and over with people reciting their mantras.
Mostly these mistakes are just time wasters, but they are mistakes
none the less. I expect you will ask me for an example. One I see
often is the poor application of decoupling caps on devices. Many
people use all sorts of rules of thumb and claim that their rule must
not be violated or you aren't using "good practices". Meanwhile the
electric fields ignore their good practices and instead obey the laws
of physics.


I recently discussed a multiple clock design with my customer.  He
said he had more than 50 clocks in this design and wanted details on
how I deal with syncing multiple clock domains.  I explained that I do
all my work in one clock domain and use a particular logic circuit to
transport clocks, enables and data into that one domain.  I solve the
synchronization problem once at the interface and never have to worry
about it again.
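The "particular logic circuit" for crossing into the one domain is not
spelled out in the thread; the most common building block for a
single-bit control or level signal is a plain two-flop synchronizer.
A hedged sketch, with illustrative names:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity sync2 is
  port (
    clk      : in  std_logic;  -- the one internal clock domain
    async_in : in  std_logic;  -- level signal from the foreign domain
    sync_out : out std_logic   -- safe to use in the clk domain
  );
end entity;

architecture rtl of sync2 is
  -- First flop may go metastable; the second gives it a cycle to settle.
  -- In practice one would also add vendor attributes (e.g. ASYNC_REG in
  -- Xilinx tools) to keep the two flops adjacent.
  signal meta, stable : std_logic := '0';
begin
  process (clk)
  begin
    if rising_edge(clk) then
      meta   <= async_in;
      stable <= meta;
    end if;
  end process;
  sync_out <= stable;
end architecture;
```

Note this only works per bit for slowly changing levels; multi-bit data
crossing a domain boundary calls for a handshake or an asynchronous FIFO
instead, since individual bit synchronizers can skew against each other.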

I would recommend an alternative to the 50 clock domains, instead.

What alternative would that be???  Is that different than the solution
I recommended?

As in the OP example, instead of suggesting how to control the timing of
the clk_250k I would recommend not to use it at all (that means one
clock less).
I'm not sure we are communicating on this one. That is what I told my
customer, use one clock on the inside and cross the clock domain at
the interface. This often makes the design much simpler and in the
customer's design allows the tools to figure out much simpler timing
and routing constraints.

Rick
 
As in the OP example, instead of suggesting how to control the timing of
the clk_250k I would recommend not to use it at all (that means one
clock less).
What are your arguments against using the clk_250k, apart from having
one clock less? clk_25 is only used for generating the clk_250k. No
data is passing from the clk_25 domain to the clk_250k domain, and
clk_250k is put on a low-skew global clock net.

/B
 
On 12/15/2010 11:36 PM, Beppe wrote:
As in the OP example, instead of suggesting how to control the timing of
the clk_250k I would recommend not to use it at all (that means one
clock less).

What are your arguments against using the clk_250k, apart from having
one clock less? clk_25 is only used for generating the clk_250k. No
data is passing from the clk_25 domain to the clk_250k domain, and
clk_250k is put on a low-skew global clock net.
I believe the reason for having an additional clock should be motivated.
All I see is that it reduces portability and invalidates a simple
behavioral description, since you need to instantiate the low-skew clock
buffer directly in your description. What would be your gain instead?
The direct use of clk_25 in your UART suits perfectly, and the argument
of reducing the power consumption of a bunch of flops (what, like 50???)
is not worth the effort.

/B
 
Well, I can see the point of using an enable instead of a divided
clock even if this clock is on a low skew clock net. It’s good design
practice and you don’t introduce an additional clock. Fine. However, I
think you should always question good design practice and understand
why it’s better to do it the “good” way rather than the unknown,
unexplored, uncommon, etc. way. At least if you want to get some
deeper understanding of the subject. Also, what was good design
practice yesterday is not always good design practice today. E.g.
Xilinx have changed their recommended coding styles quite a lot since
the introduction of the 6-input LUTs.

I believe the reason for having an additional clock should be motivated.
All I see is that it reduces portability and invalidates a simple
behavioral description, since you need to instantiate the low-skew clock
buffer directly in your description. What would be your gain instead?
The direct use of clk_25 in your UART suits perfectly, and the argument
of reducing the power consumption of a bunch of flops (what, like 50???)
is not worth the effort.
How would it reduce portability? The DCM has already reduced the
portability, and I don’t see how the clock divider would reduce it even
more. Well, I can agree that instantiating a design element somehow
invalidates a behavioral description (if that’s what you meant), but
how do you get through a design without instantiating a single
vendor-specific component? BTW, I didn’t have to instantiate the BUFG,
the tool inferred it!

/B
 
On Dec 15, 5:36 pm, Beppe <beppe.e...@gmail.com> wrote:
As in the OP example, instead of suggesting how to control the timing of
the clk_250k I would recommend not to use it at all (that means one
clock less).

What are your arguments against using the clk_250k, apart from having
one clock less? clk_25 is only used for generating the clk_250k. No
data is passing from the clk_25 domain to the clk_250k domain, and
clk_250k is put on a low-skew global clock net.
I'm not trying to argue or convince you, so I don't have an argument.
You asked for opinions, I am offering mine. I don't know how much
logic you have in the various clock domains or if there are others. I
think you have said there is very little in the 28 MHz domain, but I'm
not at all clear on what is in the 25 MHz and 250 kHz domains.

I know little about letting the timing tools figure out how to handle
dual outputs from a DCM. Unless there is some compelling reason, I
would use clock crossing logic going between any of these clock
domains.

On the other hand, it is entirely possible to use a single clock
domain for the entire design. You have a 125 MHz clock domain which
is a superset of each of the slower clocks. A clock does not need to
be an integer multiple to use a clock enable. For that matter, it
doesn't even need to be 2x. In this design, if the Midi code uses a
clock enable to set the rate, you could easily use the 28 MHz clock
with a 25 MHz enable.
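On the mechanics of this: when the fast clock is comfortably above twice
the slow rate, one common way to derive an enable from an unrelated
slower clock is to synchronize the slow clock as if it were data and
edge-detect it (near 1:1 ratios like 28 vs 25 MHz would instead need a
handshake-style scheme, since simple sampling can miss edges). A sketch
with illustrative names:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity clk_to_enable is
  port (
    clk_fast : in  std_logic;  -- fast system clock, well above 2x clk_slow
    clk_slow : in  std_logic;  -- unrelated slower clock, treated as data
    en_slow  : out std_logic   -- one clk_fast cycle per clk_slow rising edge
  );
end entity;

architecture rtl of clk_to_enable is
  signal s0, s1, s2 : std_logic := '0';
begin
  process (clk_fast)
  begin
    if rising_edge(clk_fast) then
      s0 <= clk_slow;  -- synchronizer first stage
      s1 <= s0;        -- synchronizer second stage
      s2 <= s1;        -- delayed copy for edge detection
    end if;
  end process;
  en_slow <= s1 and not s2;  -- pulses once per slow-clock rising edge
end architecture;
```

The slow clock never drives a flip-flop clock pin here, so the whole
design stays in the clk_fast domain for timing analysis.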

But a lot depends on how much interface you have vs. the hassle of
converting code to work with enabled clocks. Clock crossing logic is
not large or complex. I mainly find it unpleasant in that it clutters
up a design somewhat. A recent design of mine expanded an existing
design running on a 12.288 MHz clock from an off-board PLL, controlled
based on an interface FIFO. So this clock had to be used to establish
interface timing, both sides in fact. The code I was adding had to
have a section that ran on a 32 MHz clock to operate a digital PLL for
a third interface. I had planned to provide a clock crossing
interface between the two new sections of logic. But at some point
this became a PITA and I ended up making the entire new design run on
the 32 MHz clock and turning the 12.288 MHz clock into an enable for
the two existing interfaces. OTOH, the previous circuit was working
fine and had little to do with the new circuit (different modes of
operation), so it was left alone, clocking off the 12.288. Yet another
interface for configuration was clocked off the interface clock (sort
of a bastard SPI) and was also not changed for the new section. So
multiple clocks are not bad, but I find it easier to live with them if
I convert at the external interface and run on one clock as much as
possible.

Rick
 
