rickman
Guest
On Dec 15, 2:50 pm, Alessandro Basili <alessandro.bas...@cern.ch>
wrote:
> On 12/14/2010 4:56 AM, rickman wrote:
>> I never said testing isn't useful. I said that testing can't assure
>> that something works unless you test every possible condition, and
>> that is not possible in an absolute sense.
>
> I don't get that. What you call "every possible condition" is intended
> as every possible state of a particular set of logic. If your logic
> cannot be described as a well-defined number of states then I agree
> that testing will never be enough (but in this case I suggest a rework
> of the design).
>
> If your states are not controllable (i.e. by an external control
> signal) then there's little you can do in testing. If your states are
> not even observable then I believe you are "doomed" (as somebody said
> earlier...).

If you are only talking about testing logic, yes, you can do that
exhaustively assuming a few conditions. But testing in general takes on
a lot more than that, and the condition the OP hit was not a logic
issue; rather it would depend on uncontrolled variables and be
potentially very intermittent.
>> Besides, you snipped the OP's comment I was responding to...
>> "the design seems to be working fine in hardware although I haven't
>> made any extensive tests yet."
>
> I snipped the OP's comment because it didn't add anything that wasn't
> already said in your sentence.

It gave the context. Without the relevant context, which seems to be
the issue we are not communicating on, the points mean nothing.
>> The problem we are discussing is ***exactly*** the sort of thing you
>> may not find in testing. The way the OP has constructed the circuit,
>> it may work in 99.99% of the systems he builds. Or it may work 99.99%
>> of the time in all of the systems. But testing can't validate timing,
>> since you can't control all of the variables.
>
> This is why the way the OP constructed the circuit was suboptimal, and
> that is why it was suggested not to use a generated clock, but an
> enable signal (or even to clock the UART with the original 25 MHz
> clock).
>
>> The only thing testing can really do is to verify that your design
>> meets your requirements... if your requirements are testable!
>
> If something cannot be tested, how would you accept those
> requirements? And how could it then be said that the system doesn't
> work? According to which requirement violation?
>
>>> I believe that coding is just part of the story. Having the design
>>> tested by a team different from the designers' one will make a huge
>>> difference.
>>
>> "a huge difference"... I understand there are lots of ways to improve
>> testing, but that doesn't change the fundamental limitation of
>> testing.
>
> The fundamental limitations of testing are most probably due to a very
> poor description of the device under test. That is why documentation
> should be the first step in a work flow, rather than the last one,
> where the designer tends to leave out all the "unnecessary" details
> which eventually lead to misunderstanding. How hard is it to test one
> single flip-flop? Not very, but only because it is fully described.
>
>>> http://www.designabstraction.co.uk/Articles/Common%20HDL%20Errors.PDF
>>
>> I'm not sure what to say about this article. It is actually a bit
>> shallow in my opinion. But it also contains errors! Much of it is
>> really just the opinion of the author.
>
> Could you please post the errors? Since I haven't found them, most
> probably your reading was deeper than mine and it would be very
> helpful to me. The author not only gives an opinion, he lays down an
> approach which is at a higher level of abstraction than the flops and
> gates.

I don't have time to do that now. Maybe I can come back to this over
the weekend.
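The enable-signal suggestion above can be made concrete with a small cycle model: everything stays on the single 25 MHz clock, and a counter produces a one-cycle enable at the 250 kHz rate instead of a derived clk_250k. A sketch in Python (the counter structure and names are illustrative, not the OP's actual code):

```python
# Cycle-accurate sketch of a divide-by-100 clock enable on one 25 MHz clock.
# Instead of generating a clk_250k (a second clock domain with unknown
# timing), the UART logic is clocked at 25 MHz and only advances when
# the single-cycle 'en_250k' pulse is high.

DIVIDE = 100  # 25 MHz / 100 = 250 kHz event rate

def simulate(cycles):
    count = 0
    ticks = 0                    # how many times the UART logic advanced
    for _ in range(cycles):      # one iteration = one 25 MHz clock edge
        en_250k = (count == DIVIDE - 1)  # single-cycle enable pulse
        count = 0 if en_250k else count + 1
        if en_250k:
            ticks += 1           # UART state machine would step here
    return ticks

print(simulate(10_000))  # -> 100 enable pulses in 10,000 clocks
```

Because every flip-flop still sees the same 25 MHz clock, the timing tools analyse one domain and there is no clock-domain crossing to get wrong.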
>> I think that terms like "good practices" are some of the least useful
>> concepts I've ever seen. First, where are "good practices" defined?
>
> In many books, standards, proceedings and articles, and you may be
> lucky enough that your company has already defined a set of them.

My experience is that most "good practices" are really just experience.
When they are "codified" they often lose their impact because of being
applied poorly. Yes, "good practices" is clearly good. But it is a term
that has no real definition except within some specific context, and so
is not really useful in a conversation about a concrete issue.
>> Without a clear, detailed definition of the term, it doesn't
>> communicate anything. Usually it is used to mean "what I do". It may
>> be defined within a company; in some limited ways it may be defined
>> within a sector. But in general this is a term that has little
>> meaning as used by most people. This is much like recommending to
>> design "carefully". I can't say how many times I have seen that word
>> used in engineering without actually saying anything.
>
> Usually it means "what most of the people do", as opposed to what you
> suggested. Of course circumstances and requirements may somehow be
> very demanding, but an alternative architectural approach will surely
> have a higher impact.

I see no value in doing "what most people do". I have seen the same
mistakes made over and over with people reciting their mantras. Mostly
these mistakes are just time wasters, but they are mistakes
nonetheless. I expect you will ask me for an example. One I see often
is the poor application of decoupling caps on devices. Many people use
all sorts of rules of thumb and claim that their rule must not be
violated or you aren't using "good practices". Meanwhile the electric
fields ignore their good practices and instead obey the laws of
physics.
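The decoupling-cap point is easy to check against the physics: above its self-resonant frequency a real capacitor is inductive, whatever the rule of thumb says. A quick calculation with assumed illustrative values of 100 nF capacitance and 1 nH mounting inductance:

```python
import math

# Impedance magnitude of a real decoupling cap modelled as a series L-C
# (ESR ignored for simplicity): |Z| = |2*pi*f*L - 1/(2*pi*f*C)|.
C = 100e-9   # 100 nF ceramic (assumed value)
L = 1e-9     # ~1 nH parasitic package + mounting inductance (assumed)

def z(f):
    w = 2 * math.pi * f
    return abs(w * L - 1 / (w * C))

f_res = 1 / (2 * math.pi * math.sqrt(L * C))
print(f"self-resonance: {f_res/1e6:.1f} MHz")     # ~15.9 MHz
print(f"|Z| at 1 MHz:   {z(1e6)*1000:.0f} mohm")  # capacitive region
print(f"|Z| at 100 MHz: {z(100e6)*1000:.0f} mohm")  # inductive region
```

With these numbers the part stops behaving as a capacitor just below 16 MHz, so a blanket "one 100 nF cap per pin" rule says nothing about decoupling at, say, 100 MHz.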
>> I recently discussed a multiple clock design with my customer. He
>> said he had more than 50 clocks in this design and wanted details on
>> how I deal with syncing multiple clock domains. I explained that I do
>> all my work in one clock domain and use a particular logic circuit to
>> transport clocks, enables and data into that one domain. I solve the
>> synchronization problem once at the interface and never have to worry
>> about it again.
>
> I would recommend an alternative to the 50 clock domains, instead.

What alternative would that be??? Is that different than the solution
I recommended?

> As in the OP example, instead of suggesting how to control the timing
> of the clk_250k, I would recommend not using it at all (that means one
> clock less).

I'm not sure we are communicating on this one. That is what I told my
customer: use one clock on the inside and cross the clock domain at
the interface. This often makes the design much simpler, and in the
customer's design it allows the tools to figure out much simpler timing
and routing constraints.
Rick