Hi everyone,
I'm trying to understand how to improve our verification flow.
At the moment we have a large number of unit-level testbenches whose job
is to make sure each unit behaves correctly.
Now, if I look honestly at that statement, I'm already doomed!
What does 'behaving correctly' even mean? In our workflow we have an FPGA
spec which comes from /above/ and a verification matrix that defines
which verification method we apply to each requirement. The problem is
that we do not have a unit-level spec, so how can we make sure a unit is
behaving correctly?
Moreover, at the system level there are a number of scenarios which do
not apply at unit level, and vice versa, but the bottom line is that the
system as a whole should be fully verified.
Should I break down each system-level requirement into unit-level ones?
That would take nearly as long as writing the RTL using only the
fabric's primitives.
Another issue is coverage collection. Imagine I have my set of units,
all individually tested, all of them happily reporting some sort of
functional coverage. First of all, I do not know why the heck we collect
coverage if we do not have a spec to compare it against, and second, how
am I supposed to collect coverage for a specific unit once it is
integrated into the overall system? Does it even make sense to do so?
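Just to make it concrete, here is roughly the kind of functional
coverage I mean. Everything below is a made-up sketch: the FIFO example,
interface, signal names and bins are invented for illustration, not
taken from our spec or our actual units:

  // Sketch only: names and bins are invented, not tied to a real spec item.
  interface fifo_cov_if (input logic clk);
    logic       wr_en, rd_en;
    logic [7:0] level;

    // A unit-level bench would instantiate this and sample away. If the
    // covergroup lives in the DUT's interface, the same instance keeps
    // sampling once the unit is integrated, but nothing guarantees the
    // system-level stimulus ever hits the interesting bins.
    covergroup fifo_cg @(posedge clk);
      option.per_instance = 1;
      cp_level : coverpoint level {
        bins empty       = {0};
        bins almost_full = {[250:254]};
        bins full        = {255};
      }
      cp_ops : cross wr_en, rd_en;  // write/read collision cases
    endgroup

    fifo_cg cg = new();
  endinterface

The question remains: which spec item do those bins trace back to, and
does the number they report mean anything once the unit sits inside the
full system?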
A purely system-level approach would probably give us too little
observability and controllability at the unit level and would not be
efficient at spotting unit-level problems, especially at the very
beginning of the coding effort, where the unit-level debug cycle is very
fast. But if I start writing unit-level testbenches, it is unlikely that
I will be able to reuse those benches at system level.
As you may have noticed I'm a bit confused, and any pointers would be
greatly appreciated.
Al
--
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?