scoreboards, checkers and golden models

alb
Hi everyone,

I'm designing a verification environment for our FPGA designs which
essentially allows us to incrementally test the whole system, without
the need to break the verification effort into several block-level
testcases, which are often not reusable and too often not sufficiently
debugged either.

In order to perform CI (continuous integration) it would be better to
have self-checking testbenches in place which run autonomously and
regularly (on each build, or each nth build).

When talking about a self-checking testbench I've often heard about a
'golden model' against which we compare our results, and here I'd like
to explain why I do not clearly see the reason for it.

In a 'verification plan' I have to match the 'requirements
specification', therefore I need to check that a) I've covered all
requirements and b) the criteria specified in each requirement are met.

Taking one example I'm currently working with: /The time between the
SYNC assertion and the REF assertion shall be less than 5 microseconds/

My self-checking testbench needs to have a coverage model to verify
that my transactions a) do generate a transition on the SYNC signal and
b) the corresponding REF signal has arrived within 5 microseconds.

So in my mind I consider a scoreboard as a mechanism to 'store'
transactions out of which I fill my coverage model, while a checker is
a mechanism which goes through each transaction and verifies that the
requirement is met.

In the above example I can imagine having my BFM generate the SYNC and
sample the REF signal, storing the 'transaction' as a data structure,
possibly containing the 'time interval' between the two events. The
transaction is stored in the scoreboard, which fills a sort of coverage
DB, while in the meantime the checker may, asynchronously, examine the
transaction and raise a pass/fail flag.

If all that I said makes sense to at least some of you, then could
someone explain to me where the need for a 'golden model' is in this
context? Isn't the requirements specification sufficient to fill our
needs?

Assuming I did understand something of what I said: when it is time to
write the 'verification report', how do we bind the pass/fail criteria
to the coverage DB? I hinted at the possibility for the checker to be
completely out of sync w.r.t. the coverage DB.

On a side note, if anyone has some source code for scoreboards and
checkers that they're willing to share as a reference for this
discussion, I'd appreciate it.

Al

--
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
 
On Friday, July 11, 2014 10:40:42 AM UTC-4, alb wrote:
> If all that I said makes sense to at least some of you, then could
> someone explain to me where the need for a 'golden model' is in this
> context? Isn't the requirements specification sufficient to fill our
> needs?

In your case, it appears that your 'golden model' implementation happens with "I can imagine having my BFM generate the SYNC and sample the REF signal, storing the 'transaction' as a data structure, possibly containing the 'time interval' between the two events".

Since you do not appear to be performing any function on the data, simply making sure that what goes in comes out at the appropriate time is sufficient, but that then is your golden model. Your design may be translating between interfaces or doing other such useful things, but if at the end of the day you're simply moving data from a source to a destination with the expectation that everything sent gets received, then the golden model for that would be to take whatever is sent and post it to the expected-output queue.
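A minimal Python sketch of that expected-output queue idea (names are
invented; not from any particular framework):

from collections import deque

expected = deque()      # the "golden" expected-output queue

def on_input(data):     # call for everything sent to the DUT
    expected.append(data)

def on_output(data):    # call for everything received from the DUT
    assert expected, "output received with nothing outstanding"
    assert data == expected.popleft(), "output does not match input"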

Now consider a case where the input data gets operated on to produce the output. The specification might say something like "JPEG compression", where what comes out is radically different from what went in. Now you have to actually compute what the expected output is, and you have to use a known-good model for producing that golden output, so you have to question the source of that model.

Kevin Jennings
 
Hi Al,
The first problem is that the terminology is evolving and means different things to different people. One good source of terminology is the book, "Comprehensive Functional Verification".

A "golden model" or reference model answers the question, how do I predict what is going on in the system. Some designs need them, some don't.

A scoreboard is simply a means for correlating transactions into a DUT with responses.

A checker simply provides a means for collecting outputs of a design and validating functionality of a design.

A checker may operate alone or in conjunction with a reference model and/or a scoreboard. For example, if Sync and Ref are simple std_logic values (ie: = '1'), for every Sync there is a Ref, and there is no additional Sync until the first Ref is received, then a checker model can validate this all by itself.
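That standalone checker, sketched in Python terms over a recorded event
list (the representation is invented for illustration):

def check_alternation(events):
    # events: list of ("SYNC" | "REF", time) tuples in time order
    outstanding = False
    for kind, _t in events:
        if kind == "SYNC":
            assert not outstanding, "second SYNC before the first REF"
            outstanding = True
        else:
            assert outstanding, "REF without a preceding SYNC"
            outstanding = False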

OTOH, if Sync and Ref are actually transaction values (such as Integer) and the Ref value depends not only on Sync, but also on prior transactions, then it would be appropriate to use a reference model. If Ref depends only on Sync, then again the checker model can validate it alone.

Also if the system allows for pipelining of Sync and Ref, meaning multiple Sync transactions may be received before a Ref response, then you will need a scoreboard. The scoreboard would store both the expected Ref value (if not just a '1') and the time by which it must occur (ie: Sync Time + 5 us). This is a great example as it also demonstrates that a scoreboard must support more than just simple "=" comparisons - in this case, the actual time value of receiving Ref must be less than or equal to the expected time.
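The bookkeeping for that pipelined case, sketched in Python for brevity
(names invented; a real VHDL scoreboard would do the same bookkeeping):

from collections import deque

outstanding = deque()   # scoreboard of pending (expected value, deadline)

def on_sync(t_sync, expected_ref):
    outstanding.append((expected_ref, t_sync + 5e-6))   # value + deadline

def on_ref(t_ref, actual_ref):
    assert outstanding, "Ref with no outstanding Sync"
    expected_val, deadline = outstanding.popleft()      # in-order matching
    assert actual_ref == expected_val, "wrong Ref value"
    assert t_ref <= deadline, "Ref missed the 5 us deadline"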

BTW, the scoreboard model that we discuss in, and that comes with, SynthWorks' VHDL Testbenches and Verification class handles both in-order and out-of-order transaction responses, and also supports parametrization so that it can handle cases like the last one described above (less-than-or-equal comparisons or more complex, "ad hoc" ones).


Cheers,
Jim
 
Hi Jim,
Jim Lewis <jim@synthworks.com> wrote:
> The first problem is that the terminology is evolving and means
> different things to different people. One good source of terminology
> is the book, "Comprehensive Functional Verification".

I went through it once...maybe it's time to go through it once again ;-)

A "golden model" or reference model answers the question, how do I
predict what is going on in the system. Some designs need them, some
don't.

A golden model, as defined, does not need to be part of a test. Instead
it can be part of the 'test report' stage, where you go through your
collected data and check for each test whether it passed or failed.

> A scoreboard is simply a means for correlating transactions into a DUT
> with responses.

That is an extremely nice definition. A sort of 'dictionary' where for
each transaction there's a reply. Such a definition implies the need to
be able to associate a reply with a transaction, no matter the order of
the replies and/or transactions.

> A checker simply provides a means for collecting outputs of a design
> and validating functionality of a design.

As defined, I see this step as a separate element in my verification
effort, one that does not necessarily live in the same moment as the
result collection. It's like first picking the mushrooms and only then
separating the good ones from the bad.

> A checker may operate alone or in conjunction with a reference model
> and/or a scoreboard. For example, if Sync and Ref are simple
> std_logic values (ie: = '1'), for every Sync there is a Ref, and there
> is no additional Sync until the first Ref is received, then a checker
> model can validate this all by itself.

> OTOH, if Sync and Ref are actually transaction values (such as
> Integer) and the Ref value depends not only on Sync, but also on prior
> transactions, then it would be appropriate to use a reference model.
> If Ref depends only on Sync, then again the checker model can validate
> it alone.

If Ref 'depends' on multiple Syncs, then the real 'transaction' is the
multitude of Syncs required to generate that Ref. Indeed, there's
always a one-to-one relation between an input transaction and an output
one (unless the system is nondeterministic and its output is not
predictable from the inputs and its state alone).

At this point the transaction itself is made out of multiple packets (or
events), which need to be stored in a suitable structure in the
scoreboard.

> Also if the system allows for pipelining of Sync and Ref, meaning
> multiple Sync transactions may be received before a Ref response, then
> you will need a scoreboard. The scoreboard would store both the
> expected Ref value (if not just a '1') and the time by which it must
> occur (ie: Sync Time + 5 us). This is a great example as it also
> demonstrates that a scoreboard must support more than just simple "="
> comparisons - in this case, the actual time value of receiving Ref
> must be less than or equal to the expected time.

I would rather remove the checking from the scoreboard and leave it to
the checker (possibly outside the simulation itself), in order to keep
the scoreboard logic as simple as possible.

> BTW, the scoreboard model that we discuss in, and that comes with,
> SynthWorks' VHDL Testbenches and Verification class handles both
> in-order and out-of-order transaction responses, and also supports
> parametrization so that it can handle cases like the last one
> described above (less-than-or-equal comparisons or more complex, "ad
> hoc" ones).

If the scoreboard stores information in a file, a Python dictionary
would be a very simple and powerful structure to handle out-of-order
transactions, since it's a keyed list. That is why I'm actually
thinking about separating the two tasks.
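Something like this hypothetical sketch (the log format, field names
and the 5 us budget are invented for illustration):

# Post-processing scoreboard: transactions logged one per line as
# "SYNC|REF <id> <value> <time_in_seconds>", matched via a dict keyed by id.
expected = {}    # id -> (expected value, deadline)

with open("transactions.log") as f:
    for line in f:
        kind, tid, value, t = line.split()
        if kind == "SYNC":
            expected[tid] = (value, float(t) + 5e-6)
        else:
            exp_value, deadline = expected.pop(tid)   # out-of-order OK
            assert value == exp_value, f"txn {tid}: wrong value"
            assert float(t) <= deadline, f"txn {tid}: missed deadline"

assert not expected, "some SYNCs never received a REF"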

As for the class...one day, I hope! ;-)
 
Hi Kevin,

KJ <kkjennings@sbcglobal.net> wrote:
>> If all that I said makes sense to at least some of you, then could
>> someone explain to me where the need for a 'golden model' is in this
>> context? Isn't the requirements specification sufficient to fill our
>> needs?

> Now consider a case where the input data gets operated on to produce
> the output. The specification might say something like "JPEG
> compression", where what comes out is radically different from what
> went in. Now you have to actually compute what the expected output is,
> and you have to use a known-good model for producing that golden
> output, so you have to question the source of that model.

Ok, I think I didn't pick the right example and therefore failed to see
the need for a 'complex' model; indeed, as you said, I was using a
model anyway.

I'm also not being particularly smart here, since we do have more
complex cases that would need a reference model; I guess I was too
quick to discard the need for one.

So the checker would need to *know* what to expect for a particular
transaction and raise a flag indicating whether the test passed or
failed.

But, while I see some benefits in having a 'protocol checker' embedded
in the testbench in order to /validate/ the transactions, I have some
trouble understanding why the checker with a golden model would also
need to be embedded in my testbench. Wouldn't it be easier to record
transactions, say to a file, and then post-process them? Unless the
stimulus has to adapt to the checker's status, I don't see a particular
benefit in venturing into complex models in either VHDL or any other
language in a mixed-language simulation (too much money and too many
quirks).

A post-processing phase could instead be handled easily with any
high-level language and has the advantage that it does not need any
co-simulation environment. Moreover, it completely decouples the model
from the test: the output can be compared with simple tools, and you
keep the freedom to write your reference in whatever suits the
application best.
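For instance, the final comparison could be as small as this
hypothetical Python sketch (file names invented; any diff tool would do
as well):

# Compare the recorded DUT output against the reference model's output,
# one transaction per line in each file.
with open("dut_out.txt") as d, open("golden_out.txt") as g:
    for i, (a, b) in enumerate(zip(d, g), 1):
        if a.strip() != b.strip():
            print(f"MISMATCH at transaction {i}: {a.strip()} != {b.strip()}")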

While I see the benefit of a model when I need to emulate the
hardware/software environment *around* my DUT, I fail to see its use
in verifying the DUT behavior.

But, as already proven by my shortsightedness in the OP, I may be
missing the bigger picture again.
 
On Tuesday, July 15, 2014 7:15:09 AM UTC+1, alb wrote:
> I found, for instance, that Python is extremely powerful at producing
> relatively accurate models with very small effort. The issue is how to
> embed a Python model into a VHDL-based testbench. I've recently heard
> about 'cocotb'; maybe I should give it a try, but I've read it has
> problems with Modelsim/Questa, which is the simulator we are using.

The problem with Modelsim/Questa is that Mentor have yet to implement the full VHDL-2008 standard, specifically the VHPI C API, which Cocotb uses to communicate with VHDL simulations.

If you have a mixed-language simulator license you can wrap the toplevel in a Verilog wrapper, allowing Cocotb to use VPI to access the simulator.

The only other alternative is implementing an FLI layer for Cocotb, which due to the rather limited functionality offered by FLI is non-trivial.

You could also open a ticket with Mentor to demonstrate that there is demand for VHPI.
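For reference, a minimal cocotb test for alb's SYNC-to-REF requirement
might look like the sketch below (signal names are invented, and this
uses the yield-based coroutine style cocotb had at the time):

import cocotb
from cocotb.triggers import RisingEdge
from cocotb.utils import get_sim_time
from cocotb.result import TestFailure

@cocotb.test()
def sync_to_ref_latency(dut):
    # Fail if REF arrives 5 us or more after SYNC.
    yield RisingEdge(dut.sync)
    t_sync = get_sim_time('us')
    yield RisingEdge(dut.ref)
    if get_sim_time('us') - t_sync >= 5:
        raise TestFailure("REF arrived 5 us or more after SYNC")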

Thanks,

Chris
 
On 17/07/2014 12:18, Chris Higgs wrote:
> On Tuesday, July 15, 2014 7:15:09 AM UTC+1, alb wrote:
>> I found, for instance, that Python is extremely powerful at producing
>> relatively accurate models with very small effort. The issue is how to
>> embed a Python model into a VHDL-based testbench. I've recently heard
>> about 'cocotb'; maybe I should give it a try, but I've read it has
>> problems with Modelsim/Questa, which is the simulator we are using.
>
> The problem with Modelsim/Questa is that Mentor have yet to implement the full VHDL-2008 standard, specifically the VHPI C API, which Cocotb uses to communicate with VHDL simulations.
>
> If you have a mixed-language simulator license you can wrap the toplevel in a Verilog wrapper, allowing Cocotb to use VPI to access the simulator.
>
> The only other alternative is implementing an FLI layer for Cocotb, which due to the rather limited functionality offered by FLI is non-trivial.

FLI has "rather limited functionality", hmm? What do you base that on?

IMHO the FLI gives you more functionality than you can shake a stick at.

As per Aldec's presentation last week, nobody is willing to pay
Potential Ventures to port the code to the FLI; this is purely a
financial issue and definitely not a technical one. During the Q&A
session they mentioned that the FLI didn't have the right functionality
to create processes in memory and create signals to trigger on them,
but this is basic (core) FLI stuff!

#include <mti.h>  // ModelSim FLI API header

// Inside the foreign architecture's initialization function:
// Get a pointer to the port signal
ip->signala = mti_FindPort(ports, "signala");
// Create a process in memory, with eval_int as its callback
proc = mti_CreateProcess("myprocess", eval_int, ip);
// Trigger the process on every event on signala
mti_Sensitize(proc, ip->signala, MTI_EVENT);

Regards,
Hans.
www.ht-lab.com

 
On Thursday, July 17, 2014 2:02:07 PM UTC+1, HT-Lab wrote:
> FLI has "rather limited functionality", hmm? What do you base that on?
> IMHO the FLI gives you more functionality than you can shake a stick at.

It's likely that FLI provides all the required functionality; it's just
more awkward to use than VPI or VHPI.

Creating and tracking a process in order to generate a callback is one
example of the inconvenience, although that in itself is minor. However
if you look at the GPI layer we also need to create callbacks for various
phases in the simulation scheduler loop. While with FLI it's possible to
set a process priority referring to the scheduler phase, it's still not
obvious how you might simply register a callback for entering a given
phase, since you'd have to sensitise a process to *something*.

Ensuring that Cocotb interacts correctly with the simulation scheduling
loop was a major challenge and it doesn't look like FLI makes this any
easier.


> As per Aldec's presentation last week, nobody is willing to pay
> Potential Ventures to port the code to the FLI; this is purely a
> financial issue and definitely not a technical one. During the Q&A
> session they mentioned that the FLI didn't have the right functionality
> to create processes in memory and create signals to trigger on them,
> but this is basic (core) FLI stuff!

I'm glad you listened to the presentation. I have to take issue with
this statement though, as it is actually the opposite of what I said.

I appreciate that the sound quality of the recording is not great but if you
listen from 41:55 you'll hear the following:

"I believe it would be possible to create processes using FLI and trigger
them on signals, which is effectively the functionality we need."

But you're correct that it's more a question of incentive - it's very likely
that whatever technical issues arise are solvable. It's still a non-trivial
task.

The biggest obstacle is that it's not possible to gain access to
an FLI simulator without paying Mentor actual cash in not insignificant
amounts. If somebody would like to contribute a license to enable us to
develop an FLI interface I'm sure it would happen... or better yet if you
have the skills and access to FLI contribute some code!

Thanks,

Chris
 
Hi Chris,

On 17/07/2014 18:36, Chris Higgs wrote:
> On Thursday, July 17, 2014 2:02:07 PM UTC+1, HT-Lab wrote:
>> FLI has "rather limited functionality", hmm? What do you base that on?
>> IMHO the FLI gives you more functionality than you can shake a stick at.
>
> It's likely that FLI provides all the required functionality; it's just
> more awkward to use than VPI or VHPI.

It's all in the eye of the beholder.

> Creating and tracking a process in order to generate a callback is one
> example of the inconvenience, although that in itself is minor. However
> if you look at the GPI layer we also need to create callbacks for various
> phases in the simulation scheduler loop. While with FLI it's possible to
> set a process priority referring to the scheduler phase, it's still not
> obvious how you might simply register a callback for entering a given
> phase,

I must admit I didn't check the GPI layer but I would expect the
mti_CreateProcessWithPriority function to do the trick (which is the FLI
function I assume you are referring to). If you look at the example of
this function in the reference manual you will see it includes callbacks
for different scheduler regions.

> since you'd have to sensitise a process to *something*.

I am not sure what you mean; do you need to activate the process other
than by sensitivity signals? Perhaps mti_ScheduleWakeup is what you are
after.

> Ensuring that Cocotb interacts correctly with the simulation scheduling
> loop was a major challenge and it doesn't look like FLI makes this any
> easier.


>> As per Aldec's presentation last week, nobody is willing to pay
>> Potential Ventures to port the code to the FLI; this is purely a
>> financial issue and definitely not a technical one. During the Q&A
>> session they mentioned that the FLI didn't have the right functionality
>> to create processes in memory and create signals to trigger on them,
>> but this is basic (core) FLI stuff!

> I'm glad you listened to the presentation. I have to take issue with
> this statement though, as it is actually the opposite of what I said.
>
> I appreciate that the sound quality of the recording is not great but if you
> listen from 41:55 you'll hear the following:
>
> "I believe it would be possible to create processes using FLI and trigger
> them on signals, which is effectively the functionality we need."

I downloaded the recording and yes, you are correct; my memory is not
what it used to be.

> But you're correct that it's more a question of incentive - it's very likely
> that whatever technical issues arise are solvable. It's still a non-trivial
> task.

Yes, I can imagine this is not an easy task. However, given the
popularity of VHDL and Modelsim, I assume this is high on your to-do
list.

> The biggest obstacle is that it's not possible to gain access to
> an FLI simulator without paying Mentor actual cash in not insignificant
> amounts.

Yes, Modelsim (DE) is not particularly low-cost; however, as with most
large corporations, it is "just" a question of finding the right person.

> If somebody would like to contribute a license to enable us to
> develop an FLI interface I'm sure it would happen... or better yet if you
> have the skills and access to FLI contribute some code!

Looks like an interesting challenge; unfortunately my brain is already
overloaded with too many languages and there is no more room, not even
for a powerful language like Python.

Good luck,

Regards,
Hans.
www.ht-lab.com


 
