Hi everyone,
I'm designing a verification environment for our FPGA designs which
essentially allows us to test the whole system incrementally, without
the need to break the verification effort into several block-level
testcases, which are often not reusable and too often not sufficiently
debugged either.
In order to perform CI (continuous integration) it would be better to
have self-checking testbenches in place which run autonomously and
regularly (on each build, or each nth build).
When talking about self-checking testbenches I've often heard about a
'golden model' against which we compare our results, and here I'd like
to explain why I do not clearly see the need for it.
In a 'verification plan' I have to match the 'requirements
specification', therefore I need to check that a) I've covered all
requirements and b) the criteria specified in each requirement are met.
To take an example I'm currently working with: /The time between the
SYNC assertion and the REF assertion shall be less than 5 microseconds/
My self-checking testbench needs a coverage model to verify that my
transactions a) do generate a transition on the SYNC signal and b)
that the corresponding REF signal has arrived within the 5
microseconds.
So in my mind I consider a scoreboard to be a mechanism to 'store'
transactions, out of which I fill my coverage model, while a checker
is a mechanism which goes through each transaction and verifies that
the requirement is met.
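
To make the distinction concrete, here is a rough sketch in plain
Python (just to illustrate the structure I have in mind, not actual
testbench code; all the names and the binning are made up):

from dataclasses import dataclass

US = 1_000_000  # timestamps kept in picoseconds, so 1 us = 1e6 ps

@dataclass
class SyncRefTxn:
    sync_time: int  # ps, when SYNC was asserted
    ref_time: int   # ps, when REF was asserted

    @property
    def interval(self) -> int:
        return self.ref_time - self.sync_time

class Scoreboard:
    """Stores transactions; the coverage model is filled from them."""
    def __init__(self):
        self.txns = []
        self.coverage = {"lt_1us": 0, "1us_to_5us": 0, "ge_5us": 0}

    def add(self, txn: SyncRefTxn):
        self.txns.append(txn)
        if txn.interval < 1 * US:
            self.coverage["lt_1us"] += 1
        elif txn.interval < 5 * US:
            self.coverage["1us_to_5us"] += 1
        else:
            self.coverage["ge_5us"] += 1

class Checker:
    """Walks the transactions and verifies the requirement on each."""
    LIMIT = 5 * US  # SYNC-to-REF shall be less than 5 us

    def check(self, txn: SyncRefTxn) -> bool:
        return txn.interval < self.LIMIT

sb, chk = Scoreboard(), Checker()
txn = SyncRefTxn(sync_time=0, ref_time=3 * US)
sb.add(txn)
print("PASS" if chk.check(txn) else "FAIL", sb.coverage)

Note that the scoreboard only stores and bins; the requirement itself
lives in exactly one place, the checker.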
In the above example I can imagine having my BFM generate the SYNC
and sample the REF signal, storing the 'transaction' as a data
structure, possibly containing the 'time interval' between the two
events. The transaction is stored in the scoreboard, which fills a
sort of coverage db, while in the meantime the checker may,
asynchronously, examine the transaction and raise a pass/fail flag.
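
The flow above, again as a made-up Python sketch: the bfm here is
only a stub which fabricates timestamps, where the real BFM would
drive SYNC and sample REF, and the checker runs in its own thread to
mimic the asynchronous examination:

import queue
import random
import threading

US = 1_000_000  # ps

def bfm(txn_q: queue.Queue, n: int):
    """Stub BFM: 'drives' SYNC, 'samples' REF, emits timestamps."""
    now = 0
    for _ in range(n):
        sync_time = now
        ref_time = sync_time + random.randint(0, 6 * US)  # some fail
        txn_q.put((sync_time, ref_time))
        now = ref_time + 10 * US
    txn_q.put(None)  # end of test

def checker(txn_q: queue.Queue, results: list):
    """Runs asynchronously w.r.t. the BFM, flags pass/fail per txn."""
    while (txn := txn_q.get()) is not None:
        sync_time, ref_time = txn
        results.append(ref_time - sync_time < 5 * US)

txn_q: queue.Queue = queue.Queue()
results: list = []
t = threading.Thread(target=checker, args=(txn_q, results))
t.start()
bfm(txn_q, n=20)
t.join()
print(f"{results.count(True)}/{len(results)} txns met the 5 us limit")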
If all of what I said makes sense to at least some of you, then could
someone explain to me where the need for a 'golden model' comes in,
in this context? Isn't the requirements specification sufficient to
fill our needs?
Assuming I did understand something of what I said, when it is time
to write the 'verification report', how do we bind the pass/fail
criteria to the coverage db? I hinted at the possibility for the
checker to be completely out of sync w.r.t. the coverage db.
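
One naive scheme I could imagine is to make the verdict part of the
record the coverage db is filled from, so that each bin counts hits
and failures side by side (made-up Python again, just to show the
idea):

from collections import defaultdict

US = 1_000_000  # ps

class CoverageDB:
    """Each bin records how many transactions hit it AND how many
    failed, so the report can cross-reference coverage and
    pass/fail."""
    def __init__(self):
        self.bins = defaultdict(lambda: {"hits": 0, "fails": 0})

    def sample(self, interval: int, passed: bool):
        bin_name = "lt_5us" if interval < 5 * US else "ge_5us"
        self.bins[bin_name]["hits"] += 1
        if not passed:
            self.bins[bin_name]["fails"] += 1

db = CoverageDB()
for interval in (2 * US, 4 * US, 6 * US):
    db.sample(interval, passed=interval < 5 * US)
print(dict(db.bins))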
On a side note, if anyone has some source code for scoreboards and
checkers that they're willing to share as a reference for this
discussion, I'd appreciate it.
Al
--
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?