full functional coverage

Hi everyone,

I'm trying to understand how to improve our verification flow.

As of now we have a great many unit-level testbenches which try to
make sure each unit is behaving correctly.

Now, if I analyse that statement honestly, I'm already doomed! What
does 'behaving correctly' actually mean? In our workflow we have an
FPGA spec which comes from /above/ and a verification matrix that
defines which verification method we apply to each requirement. The
problem is that we do not have a unit-level spec, so how can we make
sure the unit is behaving correctly?

Moreover, at the system level there are a number of scenarios which do
not apply at unit level and vice versa, but the bottom line is that the
system as a whole should be fully verified.

Should I break each system-level requirement down into unit-level ones?
That would take nearly as long as writing the RTL using only the
fabric's primitives.

Another issue is coverage collection. Imagine I have my set of units,
all individually tested and all happily reporting some sort of
functional coverage. First of all, I do not know why the heck we
collect coverage if we have no spec to compare it with, and second,
how should I collect coverage of a specific unit when it is integrated
in the overall system? Does it even make sense to do it?

A purely system-level approach might have too little
observability/controllability at unit level and would not be efficient
for spotting unit-level problems, especially at the very beginning of
the coding effort, where the debug cycle is very fast. But if I start
writing unit-level testbenches, it is unlikely that I will be able to
reuse those benches at system level.

As you may have noticed I'm a bit confused and any pointer would be
greatly appreciated.

Al

--
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
 
Hi Al,
> Another issue is coverage collection. Imagine I have my set of units,
> all individually tested and all happily reporting some sort of
> functional coverage. First of all, I do not know why the heck we
> collect coverage if we have no spec to compare it with, and second,
> how should I collect coverage of a specific unit when it is integrated
> in the overall system? Does it even make sense to do it?
Functional coverage (the one you write - such as with OSVVM) tells you whether you have exercised all of the items in your test plan. Your test may set out to achieve some objectives - the functional coverage observes these happening and validates that your directed test really did what it said it was going to do or that your random test did something useful. For more on this, see osvvm dot org and SynthWorks' OSVVM blog.
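As a rough sketch of what that looks like with OSVVM (the entity, signal names and bins below are invented purely for illustration), a little coverage monitor for, say, a bus arbiter could be:

  library ieee ;
    use ieee.std_logic_1164.all ;
    use ieee.numeric_std.all ;
  library osvvm ;
    use osvvm.CoveragePkg.all ;

  entity ArbMonitor is                      -- hypothetical coverage monitor
    port (
      Clk     : in std_logic ;
      Grant   : in std_logic ;
      GrantId : in std_logic_vector(1 downto 0)
    ) ;
  end entity ArbMonitor ;

  architecture CovSketch of ArbMonitor is
    shared variable GrantCov : CovPType ;   -- OSVVM coverage object
  begin
    InitProc : process
    begin
      GrantCov.AddBins(GenBin(0, 3)) ;      -- one bin per master id, 0 to 3
      wait ;
    end process ;

    SampleProc : process (Clk)
    begin
      if rising_edge(Clk) and Grant = '1' then
        GrantCov.ICover(to_integer(unsigned(GrantId))) ;  -- record what happened
      end if ;
    end process ;

    -- At the end of the test, GrantCov.IsCovered tells you this test plan item
    -- is complete, and GrantCov.WriteBin reports the bins and their counts.
  end architecture CovSketch ;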

Code coverage (what the tool captures for you) tells you that you have executed all lines of code. If your functional coverage is 100%, but your code coverage is not, it indicates there is logic there that is not covered by your test plan - either extra logic (which is bad) or your test plan is incomplete (which is also bad).

Functional coverage at 100% and Code coverage at 100% indicates that you have completed validation of the design.

Best Regards,
Jim
 
Hi Al,
Looking at your second question:

> A purely system-level approach might have too little
> observability/controllability at unit level and would not be efficient
> for spotting unit-level problems, especially at the very beginning of
> the coding effort, where the debug cycle is very fast. But if I start
> writing unit-level testbenches, it is unlikely that I will be able to
> reuse those benches at system level.
For some ideas on this, see my paper titled, "Accelerating Verification Through Pre-Use of System-Level Testbench Components" that is posted on SynthWorks' papers page.

If you would like this in a class room setting, make sure to catch our VHDL TLM + OSVVM World Tour (our class VHDL Testbenches and Verification). Next stop is in Sweden on May 5-9 and Germany in July 14-18. http://www.synthworks.com/public_vhdl_courses.htm

Best Regards,
Jim
 
Hi Jim,
Jim Lewis <usevhdl@gmail.com> wrote:
> Functional coverage (the one you write - such as with OSVVM) tells you
> whether you have exercised all of the items in your test plan. Your
> test may set out to achieve some objectives - the functional coverage
> observes these happening and validates that your directed test really
> did what it said it was going to do or that your random test did
> something useful. For more on this, see osvvm dot org and SynthWorks'
> OSVVM blog.

Let's take a bus arbiter as an example. It will provide access to the
bus to multiple 'masters' with a certain priority and/or schedule. The
problem arises when a 'slave' does not reply or does not finish at the
expected time. Let's assume the bus protocol allows the arbiter to
'put on hold' the current master and allow others to access the bus.

In my unit testing I have full controllability and I can simulate a
non-responsive slave, therefore verifying the correct behavior of the
arbiter. But when I integrate the arbiter in a system with several
slaves which have themselves been properly verified, I do not have the
freedom to force a slave into a non-responsive state, therefore at
system level I am not able to cover that functionality (nor that code).
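At unit level, by the way, my non-responsive slave can be as trivial as
this (the handshake names are just an example):

  -- unit-level stand-in for a non-responsive slave (hypothetical handshake)
  DeadSlave : process
  begin
    slave_ack   <= '0' ;              -- never acknowledge a request
    slave_rdata <= (others => '0') ;
    wait ;                            -- and never react to anything, ever
  end process ;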

What happens then to my functional coverage? Should it be a collection
of system-level and unit-level reports?

> Code coverage (what the tool captures for you) tells you that you have
> executed all lines of code. If your functional coverage is 100%, but
> your code coverage is not, it indicates there is logic there that is
> not covered by your test plan - either extra logic (which is bad) or
> your test plan is incomplete (which is also bad).

What if my code has been exercised by a unit level test? Theoretically
it has been exercised, but not in the system that it is supposed to work
in. Does that coverage count?

> Functional coverage at 100% and Code coverage at 100% indicates that
> you have completed validation of the design.

There might be functionality that is not practically verifiable in
simulation because it would require too long a simulation, whereas it
can easily be validated on the bench with hardware running at full
speed. What about those cases?
 
On 13/03/14 14:13, alb wrote:
> Hi everyone,
>
> I'm trying to understand how to improve our verification flow.
> [...]

Old engineering maxim: you can't test quality into a product.
(You have to design it into the product)

Don't confuse verification and validation. A crude distinction
is that one ensures you have the right design, the other
ensures you have implemented the design right. (I always forget
which is which, sigh!)

One designer's unit is another designer's system.

Unit tests are helpful but insufficient; you also need
system integration tests.
 
Hi Jim,
Jim Lewis <usevhdl@gmail.com> wrote:
[]
>> A purely system-level approach might have too little
>> observability/controllability at unit level and would not be efficient
>> for spotting unit-level problems, especially at the very beginning of
>> the coding effort, where the debug cycle is very fast. But if I start
>> writing unit-level testbenches, it is unlikely that I will be able to
>> reuse those benches at system level.

> For some ideas on this, see my paper titled, "Accelerating
> Verification Through Pre-Use of System-Level Testbench Components"
> that is posted on SynthWorks' papers page.

I know that paper and it served me well in the past, but I have two
issues with the proposed approach:

1. Following the example in the paper with the MemIO, the CpuIF has
'two' interfaces: one towards the CPU and another towards the inner
part of the entity. If we do not hook up the internal interface
somehow, we are not verifying the full unit. While in the example the
internal interface might be very simple (like a list of registers),
that might not always be the case.

2. As subblocks become available, together with test cases run through
the system-level testbench, a complex configuration system has to be
maintained in order to instantiate only what is needed. This overhead
might be trivial if we have 4 subblocks, but it may pose several
problems when their number increases drastically.

I do not quite understand the reason for splitting test cases into
separate architectures. I usually wrap the TbMemIO in what I call a
'harness' and instantiate it in each of my test cases. The harness
grows as BFMs become available, and it never breaks an earlier test
case since the newly inserted BFM was not used in the earlier test
cases.

> If you would like this in a class room setting, make sure to catch our
> VHDL TLM + OSVVM World Tour (our class VHDL Testbenches and
> Verification). Next stop is in Sweden on May 5-9 and Germany in July
> 14-18. http://www.synthworks.com/public_vhdl_courses.htm

Unfortunately neither the budget nor the time is available yet. I'd
love to follow that course, but for now I need to rely on my own
trials and on guidance like yours ;-)
 
Hi Al,
... Snip ...

> What happens then to my functional coverage? Should it be a collection
> of system-level and unit-level reports?
See next answer.

> What if my code has been exercised by a unit level test? Theoretically
> it has been exercised, but not in the system that it is supposed to work
> in. Does that coverage count?
Many do integrate the functional coverage of the core (unit) and system.
The risk is that the system is connected differently than the core level
testbench. However in your case you are testing for correctness and you
get this at the system level by accessing the working slaves.

However, VHDL does give you a couple of good ways to test a
non-responsive slave model in the system. First, you can use a VHDL
configuration to swap the non-responsive slave model in for one of
the correct models.
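As a sketch of that first option (the testbench, architecture, instance
and entity names here are all invented), the configuration swaps one
slave for a dead model without touching the testbench itself:

  -- Assumes a testbench TbSystem(Structural) with a component instance
  -- Slave_2 : SlaveModel, and a NonResponsiveSlave entity with the same ports.
  configuration TbSystem_DeadSlave2 of TbSystem is
    for Structural
      for Slave_2 : SlaveModel
        use entity work.NonResponsiveSlave(Behavioral) ;
      end for ;
    end for ;
  end configuration TbSystem_DeadSlave2 ;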

Alternately (not my preference), you can use VHDL-2008 external names
and VHDL-2008 force commands to drive the necessary signals to make a
correct model non-responsive.
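A sketch of that second option (the hierarchical path and signal names
are made up):

  -- VHDL-2008 only: reach into an otherwise-correct slave and stall it
  StallSlave2 : process
    alias Slave2Ready is << signal .TbSystem.DUT.Slave_2.Ready : std_logic >> ;
  begin
    wait for 50 us ;              -- let normal traffic run for a while
    Slave2Ready <= force '0' ;    -- now make the slave look dead
    wait for 10 us ;
    Slave2Ready <= release ;      -- hand control back to the real driver
    wait ;
  end process ;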


> There might be functionality that is not practically verifiable in
> simulation because it would require too long a simulation, whereas it
> can easily be validated on the bench with hardware running at full
> speed. What about those cases?
For ASICs, the general solution to this is to use an emulator or FPGA board
to test it. In most cases, your lab board is as good as an emulator. The
only case where emulators would be better is if they collect coverage and
report it back in a form that can be integrated with your other tests.

Best Regards,
Jim
 
Hi Al,
> 1. Following the example in the paper with the MemIO, the CpuIF has
> 'two' interfaces: one towards the CPU and another towards the inner
> part of the entity. If we do not hook up the internal interface
> somehow, we are not verifying the full unit. While in the example the
> internal interface might be very simple (like a list of registers),
> that might not always be the case.

It is a problem. If the interface is simple enough, you test it when
integrating the next block. If the interface is complex enough, it
could be worth writing a behavioral model to test it.

> 2. As subblocks become available, together with test cases run through
> the system-level testbench, a complex configuration system has to be
> maintained in order to instantiate only what is needed. This overhead
> might be trivial if we have 4 subblocks, but it may pose several
> problems when their number increases drastically.
The alternative of course is to instantiate them all and deal with the
run time penalty of having extra models present that are not being used.
Depending on your system, this may be ok.

The other alternative is to develop separate testbenches for each of the
different sets of blocks being tested - I suspect the configurations will
always be easier than this.

You are right though: too many configurable items result in a
proliferation of configurations, which I call a configuration explosion. :)

> I do not quite understand the reason for splitting test cases into
> separate architectures. I usually wrap the TbMemIO in what I call a
> 'harness' and instantiate it in each of my test cases. The harness
> grows as BFMs become available, and it never breaks an earlier test
> case since the newly inserted BFM was not used in the earlier test
> cases.
This separation is important for reuse at different levels of testing.
Separate the test case from the models that implement the exact
interface behavior. The models can be implemented with either a
procedure in a package (for simple behaviors) or an entity and
architecture (for more complex models - this is what the paper I
referenced shows).

Let's say you are testing a UART. Most UART tests can be done both at
the core level and the system level. If you have the test cases
separated in this manner, then by using different behavioral models in
the system, you can use the same test cases to test both.
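For the simple end of that spectrum, the interface model can be just a
procedure in a package, something like this (purely illustrative, not
code from the paper):

  library ieee ;
    use ieee.std_logic_1164.all ;

  package UartTbPkg is
    -- Send one 8N1 character on a serial line at the given baud period
    procedure UartSend (
      signal   SerialOut  : out std_logic ;
      constant Data       : in  std_logic_vector(7 downto 0) ;
      constant BaudPeriod : in  time
    ) ;
  end package UartTbPkg ;

  package body UartTbPkg is
    procedure UartSend (
      signal   SerialOut  : out std_logic ;
      constant Data       : in  std_logic_vector(7 downto 0) ;
      constant BaudPeriod : in  time
    ) is
    begin
      SerialOut <= '0' ;                    -- start bit
      wait for BaudPeriod ;
      for i in 0 to 7 loop                  -- data bits, LSB first
        SerialOut <= Data(i) ;
        wait for BaudPeriod ;
      end loop ;
      SerialOut <= '1' ;                    -- stop bit / return to idle
      wait for BaudPeriod ;
    end procedure UartSend ;
  end package body UartTbPkg ;

The same call can then drive the UART pin directly in a core-level test
or drive it through whatever path the system provides at system level.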

>> If you would like this in a class room setting, make sure to catch our
>> VHDL TLM + OSVVM World Tour (our class VHDL Testbenches and
>> Verification). Next stop is in Sweden on May 5-9 and Germany in July
>> 14-18. http://www.synthworks.com/public_vhdl_courses.htm

> Unfortunately neither the budget nor the time is available yet. I'd
> love to follow that course, but for now I need to rely on my own
> trials and on guidance like yours ;-)
Do you work for free? If not, I suspect the cost of you learning by
reading, trial and error, and searching the internet when you run
into issues is going to cost much more than a class. You seem to be
progressing ok though.

Jim
 
Hi Al,
> I've found a third option which might be quite interesting,
> especially when dealing with standard buses:
>
> klabs.org/richcontent/software_content/vhdl/force_errors.pdf
>
> The interesting thing about this technique is that you can
> randomly generate errors on a bus without the need to model them.
> You do need to model the coverage, though.

This is ok at a testbench level, but if you wanted to use it
inside your system, you will need to check and see if your synthesis
tool is tolerant of user defined resolution functions.

You will want to make sure that the error injector (on the
generation side) communicates with the checker (on the receiver
side) so that the checker knows it is expecting a particular type
of error. In addition, you will want your test case generator to
be able to initiate errors in a non-random fashion.

For my testbenches, I like each BFM to be able to generate any
type of error that the DUT can gracefully handle. Then I set
up the transactions so they can communicate this information.
To randomly generate stimulus and inject errors, I use the
OSVVM randomization methods to do this at the test case generation
level (TestCtrl).
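As a small sketch of that last point (the weights and the transaction
call are invented), the test case can decide randomly, but with
controlled probabilities, when to ask the BFM to inject an error:

  -- Rough sketch of a stimulus process in a TestCtrl architecture;
  -- DoCpuWrite stands in for whatever transaction call your harness provides.
  library osvvm ;
    use osvvm.RandomPkg.all ;

  architecture RandErrDemo of TestCtrl is
  begin
    ControlProc : process
      variable RV      : RandomPType ;
      variable ErrKind : integer ;
    begin
      RV.InitSeed(RV'instance_name) ;
      for i in 1 to 1000 loop
        -- index 0 = no error (weight 90); 1 to 3 = the three error kinds
        -- the DUT is supposed to handle gracefully (weights 4, 3, 3)
        ErrKind := RV.DistInt((90, 4, 3, 3)) ;
        -- DoCpuWrite(CpuRec, Addr, Data, InjectErr => ErrKind) ;
      end loop ;
      wait ;
    end process ;
  end architecture RandErrDemo ;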

Cheers,
Jim

P.S.
Like @Tom Gardner hinted at, I avoid the word "UNIT" as it
means different things to different organizations. I have seen
authors use UNIT to mean anywhere from Design Unit (Entity/Architecture)
to a Core to a Subsystem (a box of boards that plugs into an airplane).
As a result, I use Core/Core Tests as most people seem to relate that
to being a reusable piece of intellectual property - like a UART.
 
Hi Jim.
Jim Lewis <usevhdl@gmail.com> wrote:
[]
>> What if my code has been exercised by a unit level test? Theoretically
>> it has been exercised, but not in the system that it is supposed to work
>> in. Does that coverage count?
> Many do integrate the functional coverage of the core (unit)
> and system. The risk is that the system is connected
> differently than the core level testbench.

That is indeed yet another reason for 'fearing' the core-level
testbench approach. Thanks for pointing that out.

> However, VHDL does give you a couple of good ways to test a
> non-responsive slave model in the system. First, you can use a VHDL
> configuration to swap the non-responsive slave model in for one of
> the correct models.

As easy as it sounds, it looks to be the most elegant solution. A
simple behavioral model of a non-responsive module may trigger various
types of scenarios.

> Alternately (not my preference), you can use VHDL-2008 external names
> and VHDL-2008 force commands to drive the necessary signals to make a
> correct model non-responsive.

I've found a third option which might be quite interesting,
especially when dealing with standard buses:

klabs.org/richcontent/software_content/vhdl/force_errors.pdf

The interesting thing about this technique is that you can
randomly generate errors on a bus without the need to model them.
You do need to model the coverage, though.
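If I understood the idea correctly, the trick is roughly the following
(my own untested sketch, not the paper's exact code): give the bus a
user-defined resolution function so that an injector process can
corrupt bits simply by driving them:

  library ieee ;
    use ieee.std_logic_1164.all ;

  package ErrInjectPkg is
    -- XOR-style resolution: an injector driving '1' flips the bit the real
    -- master is driving, '0' leaves it alone. Only meant for buses that are
    -- strongly driven to '0'/'1'.
    function xor_resolve (drivers : std_ulogic_vector) return std_ulogic ;
    subtype  err_logic is xor_resolve std_ulogic ;
    type     err_logic_vector is array (natural range <>) of err_logic ;
  end package ErrInjectPkg ;

  package body ErrInjectPkg is
    function xor_resolve (drivers : std_ulogic_vector) return std_ulogic is
      variable r : std_ulogic := '0' ;
    begin
      for i in drivers'range loop
        r := r xor drivers(i) ;
      end loop ;
      return r ;
    end function xor_resolve ;
  end package body ErrInjectPkg ;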

>> There might be functionality that is not practically verifiable in
>> simulation because it would require too long a simulation, whereas
>> it can easily be validated on the bench with hardware running at
>> full speed. What about those cases?
> For ASICs, the general solution to this is to use an emulator
> or FPGA board to test it. In most cases, your lab board is as
> good as an emulator. The only case where emulators would be
> better is if they collect coverage and report it back in a
> form that can be integrated with your other tests.

In that case we end up with a pile of documents (verification reports,
compliance matrices, coverage results, ...) which fills the gap (and
the day).

Al
 
On 20/03/14 00:19, Jim Lewis wrote:
> Like @Tom Gardner hinted at, I avoid the word "UNIT" as it
> means different things to different organizations. I have seen
> authors use UNIT to mean anywhere from Design Unit (Entity/Architecture)
> to a Core to a Subsystem (a box of boards that plugs into an airplane).
> As a result, I use Core/Core Tests as most people seem to relate that
> to being a reusable piece of intellectual property - like a UART.

IMNSHO, the "unit" is the thing to which the stimulus
is applied and the response measured. Hence a "unit"
can be a capacitor, an opamp, a bandpass filter, a register,
an adder, an xor gate, an ALU, a CPU, whatever is hidden inside
an FPGA, PCBs containing one or more of the above, a crate
of PCBs, a single statement a=b+c, or a statement invoking
library functions such as anArrayOfPeople.sortByName() or aMessage.email().

What too many people don't understand is that in many
of those cases they aren't "simple unit tests", rather
they are integration tests - even if it is only testing
that your single statement works with the library functions.

Another point, which is too often missed, is to ask "what
am I trying to prove with this test". Classically in hardware
you need one type of test to prove the design is correct,
and an /entirely different/ set of tests to prove that
each and every manufactured item has been manufactured
correctly.

But I'm sure you are fully aware of all that!
 
Hi Al,
Maybe this time I will answer the right question. :)

> What I did not understand in your paper was the 'need' for a separate
> architecture instead of a separate file with a new entity/architecture.
There are two issues. The obvious one is that having multiple copies of the same entity can lead to maintenance issues if the interface changes.

The not so obvious is the impact of compile rules. Dependencies in VHDL are on primary units (entity, package declaration, configuration declaration) and not secondary design units (architecture or package body).

Let's assume the TestCtrl entity and architecture are in separate files and I am not using configurations. I can compile my design and test hierarchy and run test1. Now, to run test2, all I need to do is compile the test2 architecture, and I can restart the testbench and it runs (because the most recently compiled architecture is selected).

Going further, when I add configurations, I can compile my design, test hierarchy, test1 architecture, and configuration1, and then run configuration1. Then I can compile the test2 architecture and configuration2, and then run configuration2. To re-run configuration1, all I need to do is select and run configuration1.
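Concretely, configuration1 and configuration2 are nothing more than something like this (the testbench, architecture and instance names are placeholders):

  -- Each test case is an architecture of TestCtrl; each configuration simply
  -- picks one of them for the TestCtrl_1 instance inside TbMemIO(TestHarness).
  configuration TbMemIO_Test1 of TbMemIO is
    for TestHarness
      for TestCtrl_1 : TestCtrl
        use entity work.TestCtrl(Test1) ;
      end for ;
    end for ;
  end configuration TbMemIO_Test1 ;

  configuration TbMemIO_Test2 of TbMemIO is
    for TestHarness
      for TestCtrl_1 : TestCtrl
        use entity work.TestCtrl(Test2) ;
      end for ;
    end for ;
  end configuration TbMemIO_Test2 ;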

On the other hand, if we put the entity in the same file as the architecture, then the process gets much harder. If you compile test1, since the entity was recompiled, the testbench hierarchy above TestCtrl must also be recompiled before you can start a simulation. To run test2, again compile test2 and the testbench hierarchy above TestCtrl, and then start a simulation. Configuration declarations would require this same compile-heavy process.

I note that some simulators work around some of this with switches or automatically. With switches it works well, but it is tedious. Automatic handling works most of the time, but from time to time it does not, and that makes for debug sessions in which you are chasing a nonexistent problem.

Hope I answered the right question this time.

Cheers,
Jim
 
Hi Jim,

Jim Lewis <usevhdl@gmail.com> wrote:
[]
>> I've found a third option which might be quite interesting,
>> especially when dealing with standard buses:
[]
> This is ok at a testbench level, but if you wanted to use it
> inside your system, you will need to check and see if your synthesis
> tool is tolerant of user defined resolution functions.

I guess that's the kind of lesson learned the hard way. As of now I do
not have any working example that follows the option mentioned above,
but if I manage to give it a try I'll post some results/impressions.

> You will want to make sure that the error injector (on the
> generation side) communicates with the checker (on the receiver
> side) so that the checker knows it is expecting a particular type
> of error. In addition, you will want your test case generator to
> be able to initiate errors in a non-random fashion.

Right. My idea was to use the 'error injector' as a /slave/ but I do not
yet know how to connect it to my transaction generator (or whatever is
the appropriate name for it). I may potentially use global signals, but
I'm not a big fan of them since they tend to increase universe entropy
drastically!

In my framework I use two global records to communicate with the
'server', and I could in principle extend them to pass the information
further along to the defective module. It is not entirely clean though;
in fact, the server cannot handshake with the defective module since it
does not know it has been configured that way. Well, we can imagine
that if the global record elements are filled it is because the client
wants to inject those errors, so the server can assume the defective
module is in place and will handshake with it. Uhm, this is getting
interesting...

> For my testbenches, I like each BFM to be able to generate any
> type of error that the DUT can gracefully handle. Then I set
> up the transactions so they can communicate this information.
> To randomly generate stimulus and inject errors, I use the
> OSVVM randomization methods to do this at the test case generation
> level (TestCtrl).

And indeed the mechanism would work even if the BFMs are encapsulated
within internal components of the DUT and implement the error injection
as suggested above.

BTW I just put my hands on 'Comprehensive Functional Verification' by
Wile, Goss and Roesner; it's a 700-page volume on the verification
cycle, so I guess it'll keep me busy for some time ;-)

Al
 
Hi Jim,
Jim Lewis <usevhdl@gmail.com> wrote:
[]
>> I do not quite understand the reason for splitting test cases into
>> separate architectures. I usually wrap the TbMemIO in what I call a
>> 'harness' and instantiate it in each of my test cases. The harness
>> grows as BFMs become available, and it never breaks an earlier test
>> case since the newly inserted BFM was not used in the earlier test
>> cases.
> This separation is important for reuse at different levels of testing.
> Separate the test case from the models that implement the exact
> interface behavior. The models can be implemented with either a
> procedure in a package (for simple behaviors) or an entity and
> architecture (for more complex models - this is what the paper I
> referenced shows).

I think I did not phrase what I meant correctly. I do understand the
importance of test cases being separate. I have my 'client/server'
model in place, where the transaction happens between them and then the
server implements each call with a specific low-level transaction,
involving the BFM for each interface. In this way my client may operate
as long as the server interface is kept the same.

What I did not understand in your paper was the 'need' for a separate
architecture instead of a separate file with a new entity/architecture.

The harness I referred to earlier is what instantiates the DUT and all
the necessary BFMs; the client/server transaction happens via a pair of
global records (to_srv_ctrl/fr_srv_ctrl) which handle the 'handshake'
in the transaction. The harness grows as more BFMs become available and
more interfaces are implemented.
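For reference, the records are nothing fancy; a simplified version
looks roughly like this (the field list is trimmed down):

  -- Simplified version of the global record pair: the client drives
  -- to_srv_ctrl, the server drives fr_srv_ctrl (one driver each).
  library ieee ;
    use ieee.std_logic_1164.all ;

  package srv_ctrl_pkg is
    type to_srv_ctrl_t is record
      req   : natural ;                       -- incremented to request a transaction
      cmd   : natural ;                       -- which low-level transaction to run
      addr  : std_logic_vector(31 downto 0) ;
      wdata : std_logic_vector(31 downto 0) ;
    end record ;
    type fr_srv_ctrl_t is record
      ack   : natural ;                       -- mirrors req when the server is done
      rdata : std_logic_vector(31 downto 0) ;
    end record ;
    signal to_srv_ctrl : to_srv_ctrl_t ;
    signal fr_srv_ctrl : fr_srv_ctrl_t ;
  end package srv_ctrl_pkg ;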

>> Unfortunately neither the budget nor the time is available yet. I'd
>> love to follow that course, but for now I need to rely on my own
>> trials and on guidance like yours ;-)
> Do you work for free? If not, I suspect the cost of you learning by
> reading, trial and error, and searching the internet when you run
> into issues is going to cost much more than a class. You seem to be
> progressing ok though.

I'm used to learning by reading (and asking). On top of that, my
actual job description does not require these skills... another reason
why I cannot ask for such a course! But I'm not giving up ;-)

Al
 
