SystemVerilog - Randomization based on Coverage Data

Taha
Dear all,

Variables in a typical constrained-random testbench in SystemVerilog
are constrained at compile time using the "constraint" construct.

Can anyone think of advantages to a scheme that further constrains
random variables using coverage data (covergroups or SVA) obtained
during runtime? In essence, parts of the test vector space would be
eliminated at runtime as functional coverage points are encountered.

One advantage I was able to come up with is a dynamic reduction of
the test vector space, so that the SystemVerilog constraint solver
has a better chance of generating test vectors that cover the parts
of the functional specification that have not yet been exercised.
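As a rough illustration of what I have in mind (the class and names below are made up for the example, not from any standard methodology): read the covergroup's coverage back at runtime and tighten an inline constraint so the solver steers toward the uncovered region.

```systemverilog
class packet;
  rand bit [7:0] addr;

  covergroup addr_cg;
    coverpoint addr {
      bins low  = {[0:127]};
      bins high = {[128:255]};
    }
  endgroup

  function new();
    addr_cg = new;
  endfunction
endclass

module tb;
  packet p = new();
  initial begin
    repeat (20) begin
      // Once the low half is fully covered (>= 50% of this
      // covergroup), steer the solver toward the upper half.
      if (p.addr_cg.get_coverage() >= 50.0)
        void'(p.randomize() with { addr inside {[128:255]}; });
      else
        void'(p.randomize());
      p.addr_cg.sample();
    end
  end
endmodule
```

This is only a sketch of the idea, of course - a real scheme would need to query per-bin coverage rather than a single percentage.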

Are these types of schemes used in practice?

Any opinions/comments/discussions on the matter would be appreciated.

Thanks,

Sincerely,

Taha
 
On Wed, 18 Jul 2007 14:46:01 -0700, Taha <thamiral@gmail.com> wrote:

Can any one think of advantages to a scheme that further constrains
Random variables using Coverage data (covergroups or SVA) obtained
during runtime? In essence, parts of the test vector space would be
eliminated during run-time as functional coverage points are
encountered.
The "holy grail" of verification: Automatically generate just the
right stimulus to meet your coverage targets.

I presented a brief review paper on this at DVCon 2006. It wasn't
a very good paper, I don't think, and broadly speaking its conclusion
was "this is a very hard problem and I don't know how to solve it";
but it has a few references that you might find useful. Send me
a private email if you would like me to send you a copy -
replace "MYCOMPANY" with the obvious six-letter company name.

Also, take a look at what Certess are doing (www.certess.com) for
an interesting but very different take on the same problem.
--
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services

Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
jonathan.bromley@MYCOMPANY.com
http://www.MYCOMPANY.com

The contents of this message may contain personal views which
are not the views of Doulos Ltd., unless specifically stated.
 
On Thu, 19 Jul 2007 02:25:43 +0100, Jonathan Bromley
<jonathan.bromley@MYCOMPANY.com> wrote:

I presented a brief review paper on this at DVCon 2006. It wasn't
a very good paper, I don't think, and broadly speaking its conclusion
was "this is a very hard problem and I don't know how to solve it";
but it has a few references that you might find useful.
I've been keeping an eye on this for the last few years, but I only
managed to find the work at IBM Haifa. I'd be very interested to see
your paper - can you mail it to me?

Thanks -

Evan
eml@mycompany
 
On Jul 19, 2:46 am, Taha <thami...@gmail.com> wrote:
Dear all,

Can any one think of advantages to a scheme that further constrains
Random variables using Coverage data (covergroups or SVA) obtained
during runtime?

From a pure language (SystemVerilog) perspective this is doable with:
my_obj.randomize() with {...};

Having said that, as Jonathan noted this is an area of "ideal
outlook" and not fully feasible - at least not yet. Part of the
problem is the methodology: if I knew I only had to test these
"coverpoints", why not write tests directly? That would defeat the
purpose of "random testing" in some sense. In the ideal case one
should "bias the generator around these cover areas" rather than
target them outright.
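To illustrate "biasing the generator around these cover areas" rather than targeting them outright (a hedged sketch - the class and weight names here are invented for the example): dist weights can be plain class variables that a coverage monitor adjusts between randomize() calls.

```systemverilog
class biased_gen;
  rand bit [7:0] data;

  // Non-rand state variables: testbench code can raise the weight
  // of a range whose coverage is lagging before the next randomize().
  int unsigned w_low  = 1;
  int unsigned w_high = 1;

  constraint bias_c {
    data dist { [0:127]   := w_low,
                [128:255] := w_high };
  }
endclass

// Usage sketch: if the high bins are under-covered, bump w_high so
// the solver favors that range without excluding the rest:
//   gen.w_high = 10;
//   void'(gen.randomize());
```

The point is that the whole space stays legal; the coverage feedback only skews the distribution.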

Summary: in theory it works; in real life, I've heard of some
internal tools developed for this at some big companies (based on
artificial intelligence and graph theory), but not much has been
done publicly. Also, Verisity (now CDN) attempted this through one
of their tools some time back. I don't see that any longer, so
perhaps the solution was not scalable for the larger market.



Cheers
Ajeetha, CVC
www.noveldv.com
 
Thanks a lot for all the input.

Jonathan, I would be interested in reading your paper from DVCon. I
have sent you an email requesting a copy.

Ajeetha, I agree: the ideal testbench would accept a list of cover
points to be used to bias a random generator toward exercising
them. This is what is ordinarily done in conjunction with a directed
test scheme, which covers up any holes that the random testbench
can't cover.

What makes a constrained-random testbench even more difficult is
biasing the random generator at runtime given the test vectors that
have already been generated (for example, myObj.randomize() with
{...} or assert property (foo) else ...). So far, the only reason I
can think of to justify this kind of testbench is the "potential"
to reduce the number of test vectors that the random testbench has
to generate. In fact, I think this advantage can be nullified if the
randomization was constrained correctly in the first place.
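One literal reading of "biasing given the vectors already generated" would be to keep a history of past values and constrain each new call away from them. A minimal sketch (the class name and members are hypothetical), which obviously scales poorly as the history grows:

```systemverilog
class history_gen;
  rand bit [7:0] cmd;
  bit [7:0] seen[$];   // values generated so far

  // Forbid repeats; randomize() will fail once all 256
  // values have been generated.
  constraint no_repeat_c {
    !(cmd inside {seen});
  }

  function void post_randomize();
    seen.push_back(cmd);
  endfunction
endclass
```

Whether the solver-time cost of a growing exclusion list ever beats simply running more unconstrained iterations is exactly the question I am asking.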

Has anyone seen testbench schemes that bias a random generator at
runtime based on the test vectors that have already been generated?
If so, can anyone theorize about or discuss the possible advantages
of such a scheme? I am quite curious to see whether there are
testbench schemes based on this and what the justification behind
them is.

Thanks for the discussion,

Sincerely,

Taha



 
