Test Driven Design?

Tim Wescott

Anyone doing any test driven design for FPGA work?

I've gone over to doing it almost universally for C++ development,
because It Just Works -- you lengthen the time to integration a bit, but
vastly shorten the actual integration time.

I did a web search and didn't find it mentioned -- the traditional "make
a test bench" is part way there, but as presented in my textbook* doesn't
impose a comprehensive suite of tests on each module.

So is no one doing it, or does it have another name, or an equivalent
design process with a different name, or what?

* "The Verilog Hardware Description Language", Thomas & Moorby, Kluwer,
1998.

--
www.wescottdesign.com
 
Tim Wescott <tim@seemywebsite.really> wrote:
Anyone doing any test driven design for FPGA work?

I've gone over to doing it almost universally for C++ development,
because It Just Works -- you lengthen the time to integration a bit, but
vastly shorten the actual integration time.

I did a web search and didn't find it mentioned -- the traditional "make
a test bench" is part way there, but as presented in my textbook* doesn't
impose a comprehensive suite of tests on each module.

So is no one doing it, or does it have another name, or an equivalent
design process with a different name, or what?

We do it. We have an equivalence checker that fuzzes random inputs to
both the system and an executable 'golden model' of the system, looking for
discrepancies. If found, it'll then reduce down to a minimal example.

In particular this is very handy because running the test cases is then
synthesisable: so we can run the tests on FPGA rather than on a simulator.
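
A minimal sketch of the shape of it in plain Verilog (an illustration
only, not our actual code; 'dut' and 'golden_model' stand in for the
implementation and the executable model, so they're assumed rather than
defined here):

    // Drive identical pseudo-random stimulus into the implementation
    // and the golden model, and latch the first disagreement. An LFSR
    // keeps stimulus generation synthesisable.
    module equiv_check (
        input  wire clk,
        input  wire rst,
        output reg  mismatch
    );
        reg  [15:0] lfsr = 16'hACE1;   // any non-zero seed
        wire [7:0]  dut_out, model_out;

        dut          u_dut   (.clk(clk), .in(lfsr[7:0]), .out(dut_out));
        golden_model u_model (.clk(clk), .in(lfsr[7:0]), .out(model_out));

        always @(posedge clk) begin
            // Fibonacci LFSR, taps at bits 16, 14, 13, 11
            lfsr <= {lfsr[14:0], lfsr[15] ^ lfsr[13] ^ lfsr[12] ^ lfsr[10]};
            if (rst)
                mismatch <= 1'b0;
            else if (dut_out != model_out)
                mismatch <= 1'b1;      // sticky: route to an LED/register
        end
    endmodule

On the FPGA you route 'mismatch' somewhere visible; in simulation you
assert on it. The automatic shrinking to a minimal example is the part
the paper describes and isn't shown here.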

Our paper has more details and the code is open source:
https://www.cl.cam.ac.uk/research/security/ctsrd/pdfs/201509-memocode2015-bluecheck.pdf

Theo
 
I do a sloppy version of it.
Sometimes I allow myself to skip tests for simple, small modules that will be tested at a higher hierarchy level anyway (I also write tests for modules at several hierarchy levels).
Tests are randomized and launched with a different seed every time. If there is a problem, I can relaunch the failing test with a specific seed to reproduce it.
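
In Verilog the seed plumbing can be as simple as this (a hypothetical
sketch; 'adder' here is a toy stand-in for a real module):

    // Run with e.g. +seed=42 to reproduce a failing run exactly.
    module tb_adder;
        reg  [31:0] seed;
        reg  [7:0]  a, b;
        wire [8:0]  sum;
        integer     i;

        adder u_dut (.a(a), .b(b), .sum(sum));

        initial begin
            if (!$value$plusargs("seed=%d", seed))
                seed = 32'd1;               // default seed
            $display("running with seed=%0d", seed);
            for (i = 0; i < 1000; i = i + 1) begin
                a = $random(seed);          // reproducible stimulus
                b = $random(seed);
                #1;
                if (sum !== a + b) begin
                    $display("FAIL: %0d + %0d gave %0d", a, b, sum);
                    $finish;
                end
            end
            $display("PASS");
            $finish;
        end
    endmodule

    // Stand-in DUT so the sketch is self-contained.
    module adder (input [7:0] a, b, output [8:0] sum);
        assign sum = a + b;
    endmodule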
 
On 5/16/2017 4:21 PM, Tim Wescott wrote:
Anyone doing any test driven design for FPGA work?

I've gone over to doing it almost universally for C++ development,
because It Just Works -- you lengthen the time to integration a bit, but
vastly shorten the actual integration time.

I did a web search and didn't find it mentioned -- the traditional "make
a test bench" is part way there, but as presented in my textbook* doesn't
impose a comprehensive suite of tests on each module.

So is no one doing it, or does it have another name, or an equivalent
design process with a different name, or what?

* "The Verilog Hardware Description Language", Thomas & Moorby, Kluwer,
1998.

I'm not clear on all of the details of what defines "test driven
design", but I believe I've been using that all along. I've thought of
this as bottom up development where the lower level code is written
first *and thoroughly tested* before writing the next level of code.

How does "test driven design" differ from this significantly?

--

Rick C
 
On 05/16/2017 01:21 PM, Tim Wescott wrote:
Anyone doing any test driven design for FPGA work?

I've gone over to doing it almost universally for C++ development,
because It Just Works -- you lengthen the time to integration a bit, but
vastly shorten the actual integration time.

I did a web search and didn't find it mentioned -- the traditional "make
a test bench" is part way there, but as presented in my textbook* doesn't
impose a comprehensive suite of tests on each module.

So is no one doing it, or does it have another name, or an equivalent
design process with a different name, or what?

* "The Verilog Hardware Description Language", Thomas & Moorby, Kluwer,
1998.

We don't do classical "first you write a testbench and prove it fails,
then you write the code that makes it pass" TDD but we do a whole lot of
unit testing before we try to integrate submodules into the larger design.

I get a ton of mileage from OSVVM (http://osvvm.org/) for constrained
random verification.

--
Rob Gaddi, Highland Technology -- www.highlandtechnology.com
Email address domain is currently out of order. See above to fix.
 
On Wed, 17 May 2017 11:47:10 -0400, rickman wrote:

On 5/16/2017 4:21 PM, Tim Wescott wrote:
Anyone doing any test driven design for FPGA work?

I've gone over to doing it almost universally for C++ development,
because It Just Works -- you lengthen the time to integration a bit,
but vastly shorten the actual integration time.

I did a web search and didn't find it mentioned -- the traditional
"make a test bench" is part way there, but as presented in my textbook*
doesn't impose a comprehensive suite of tests on each module.

So is no one doing it, or does it have another name, or an equivalent
design process with a different name, or what?

* "The Verilog Hardware Description Language", Thomas & Moorby, Kluwer,
1998.

I'm not clear on all of the details of what defines "test driven
design", but I believe I've been using that all along. I've thought of
this as bottom up development where the lower level code is written
first *and thoroughly tested* before writing the next level of code.

How does "test driven design" differ from this significantly?

The big difference in the software world is that the tests are automated
and never retired. There are generally test suites to make the mechanics
of testing easier. Ideally, whenever you do a build you run the entire
unit-test suite fresh. This means that when you tweak some low-level
function, it still gets tested.
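
In HDL terms I'd picture each never-retired test as a self-checking
bench that a script reruns on every build. A hypothetical Verilog
sketch, with an invented counter as the DUT:

    // Each test is a task; every invocation runs the whole list and
    // fails loudly, so a build script can gate on the log.
    module tb_counter;
        reg        clk = 0, rst = 1, en = 0;
        wire [3:0] count;
        integer    errors = 0;

        counter u_dut (.clk(clk), .rst(rst), .en(en), .count(count));
        always #5 clk = ~clk;

        task check(input [3:0] expected, input [8*16:1] name);
            if (count !== expected) begin
                errors = errors + 1;
                $display("FAIL %0s: count=%0d expected=%0d",
                         name, count, expected);
            end
        endtask

        task test_reset;
            begin
                rst = 1; en = 0; @(posedge clk); #1;
                check(4'd0, "reset");
                rst = 0;
            end
        endtask

        task test_count_two;
            begin
                en = 1; repeat (2) @(posedge clk); #1;
                check(4'd2, "count_two");
                en = 0;
            end
        endtask

        initial begin
            test_reset;        // old tests keep running forever
            test_count_two;
            if (errors == 0) $display("PASS: all tests");
            $finish;
        end
    endmodule

    // Stand-in DUT: enabled counter with synchronous reset.
    module counter (input clk, rst, en, output reg [3:0] count);
        always @(posedge clk)
            if (rst)     count <= 0;
            else if (en) count <= count + 1;
    endmodule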

The other big difference, that's hard for one guy to do, is that if
you're going Full Agile you have one guy writing tests and another guy
writing "real" code. Ideally they're equally good, and they switch off.
The idea is basically that more brains on the problem is better.

If you look at the full description of TDD it looks like it'd be hard,
slow, and clunky, because the recommendation is to do things at a very
fine-grained level. However, I've done it, and the process of adding
features to a function as you add tests to the bench goes very quickly.
The actual development of the bottom layer is a bit slower, but when you
go to put the pieces together they just fall into place.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
 
On 17/05/17 16:47, rickman wrote:
On 5/16/2017 4:21 PM, Tim Wescott wrote:
Anyone doing any test driven design for FPGA work?

I've gone over to doing it almost universally for C++ development,
because It Just Works -- you lengthen the time to integration a bit, but
vastly shorten the actual integration time.

I did a web search and didn't find it mentioned -- the traditional "make
a test bench" is part way there, but as presented in my textbook* doesn't
impose a comprehensive suite of tests on each module.

So is no one doing it, or does it have another name, or an equivalent
design process with a different name, or what?

* "The Verilog Hardware Description Language", Thomas & Moorby, Kluwer,
1998.

I'm not clear on all of the details of what defines "test driven design", but I
believe I've been using that all along. I've thought of this as bottom up
development where the lower level code is written first *and thoroughly tested*
before writing the next level of code.

How does "test driven design" differ from this significantly?

In many software environments TDD - as it is
taught - more naturally fits top-down design.

That's not necessary, but that's the typical
mentality. TDD can and should be used for
"bottom-up" "integration tests".

The key point, all too often missed, is to /think/
about the benefits and disadvantages of each tool
in your armoury, and use only the most appropriate
combination for your problem at hand.
 
On 5/17/2017 1:17 PM, Tim Wescott wrote:
On Wed, 17 May 2017 11:47:10 -0400, rickman wrote:

On 5/16/2017 4:21 PM, Tim Wescott wrote:
Anyone doing any test driven design for FPGA work?

I've gone over to doing it almost universally for C++ development,
because It Just Works -- you lengthen the time to integration a bit,
but vastly shorten the actual integration time.

I did a web search and didn't find it mentioned -- the traditional
"make a test bench" is part way there, but as presented in my textbook*
doesn't impose a comprehensive suite of tests on each module.

So is no one doing it, or does it have another name, or an equivalent
design process with a different name, or what?

* "The Verilog Hardware Description Language", Thomas & Moorby, Kluwer,
1998.

I'm not clear on all of the details of what defines "test driven
design", but I believe I've been using that all along. I've thought of
this as bottom up development where the lower level code is written
first *and thoroughly tested* before writing the next level of code.

How does "test driven design" differ from this significantly?

The big difference in the software world is that the tests are automated
and never retired. There are generally test suites to make the mechanics
of testing easier. Ideally, whenever you do a build you run the entire
unit-test suite fresh. This means that when you tweak some low-level
function, it still gets tested.

The other big difference, that's hard for one guy to do, is that if
you're going Full Agile you have one guy writing tests and another guy
writing "real" code. Ideally they're equally good, and they switch off.
The idea is basically that more brains on the problem is better.

If you look at the full description of TDD it looks like it'd be hard,
slow, and clunky, because the recommendation is to do things at a very
fine-grained level. However, I've done it, and the process of adding
features to a function as you add tests to the bench goes very quickly.
The actual development of the bottom layer is a bit slower, but when you
go to put the pieces together they just fall into place.

I guess I'm still not picturing it. I think the part I don't get is
"adding features to a function". To me the features would *be*
functions that are written, tested and then added to next higher level
code. So I assume what you wrote applies to that next higher level.

I program in two languages, Forth and VHDL. In Forth functions (called
"words") are written at *very* low levels, often a word is a single line
of code and nearly all the time no more than five. Being very small a
word is much easier to write although the organization can be tough to
settle on.

In VHDL I typically don't decompose the code into such fine grains. It
is easy to write the code for the pieces, registers and logic. The hard
part is how they interconnect/interrelate. Fine decomposition tends to
obscure that rather than enhancing it. So I write large blocks of code
to be tested. I guess in those cases features would be "added" rather
than new modules being written for the new functionality.

I still write test benches for each module in VHDL. Because there is a
lot more work in writing and using a VHDL test bench than a Forth test
word this also encourages larger (and fewer) modules.

Needless to say, I don't find much synergy between the two languages.

--

Rick C
 
On Wed, 17 May 2017 13:39:55 -0400, rickman wrote:

On 5/17/2017 1:17 PM, Tim Wescott wrote:
On Wed, 17 May 2017 11:47:10 -0400, rickman wrote:

On 5/16/2017 4:21 PM, Tim Wescott wrote:
Anyone doing any test driven design for FPGA work?

I've gone over to doing it almost universally for C++ development,
because It Just Works -- you lengthen the time to integration a bit,
but vastly shorten the actual integration time.

I did a web search and didn't find it mentioned -- the traditional
"make a test bench" is part way there, but as presented in my
textbook*
doesn't impose a comprehensive suite of tests on each module.

So is no one doing it, or does it have another name, or an equivalent
design process with a different name, or what?

* "The Verilog Hardware Description Language", Thomas & Moorby,
Kluwer,
1998.

I'm not clear on all of the details of what defines "test driven
design", but I believe I've been using that all along. I've thought
of this as bottom up development where the lower level code is written
first *and thoroughly tested* before writing the next level of code.

How does "test driven design" differ from this significantly?

The big difference in the software world is that the tests are
automated and never retired. There are generally test suites to make
the mechanics of testing easier. Ideally, whenever you do a build you
run the entire unit-test suite fresh. This means that when you tweak
some low-level function, it still gets tested.

The other big difference, that's hard for one guy to do, is that if
you're going Full Agile you have one guy writing tests and another guy
writing "real" code. Ideally they're equally good, and they switch
off. The idea is basically that more brains on the problem is better.

If you look at the full description of TDD it looks like it'd be hard,
slow, and clunky, because the recommendation is to do things at a very
fine-grained level. However, I've done it, and the process of adding
features to a function as you add tests to the bench goes very quickly.
The actual development of the bottom layer is a bit slower, but when
you go to put the pieces together they just fall into place.

I guess I'm still not picturing it. I think the part I don't get is
"adding features to a function". To me the features would *be*
functions that are written, tested and then added to next higher level
code. So I assume what you wrote applies to that next higher level.

I program in two languages, Forth and VHDL. In Forth functions (called
"words") are written at *very* low levels, often a word is a single line
of code and nearly all the time no more than five. Being very small a
word is much easier to write although the organization can be tough to
settle on.

In VHDL I typically don't decompose the code into such fine grains. It
is easy to write the code for the pieces, registers and logic. The hard
part is how they interconnect/interrelate. Fine decomposition tends to
obscure that rather than enhancing it. So I write large blocks of code
to be tested. I guess in those cases features would be "added" rather
than new modules being written for the new functionality.

I still write test benches for each module in VHDL. Because there is a
lot more work in writing and using a VHDL test bench than a Forth test
word this also encourages larger (and fewer) modules.

Needless to say, I don't find much synergy between the two languages.

Part of what I'm looking for is a reading on whether it makes sense in
the context of an HDL, and if so, how it makes sense in the context of an
HDL (I'm using Verilog, because I'm slightly more familiar with it, but
that's incidental).

In Really Pure TDD for Java, C, or C++, you start by writing a test in
the absence of a function, just to see the compiler error out. Then you
write a function that does nothing. Then (for instance), you write a
test whose expected return value is "42", and an accompanying function
that just returns 42. Then you elaborate from there.

It sounds really dippy (I was about as skeptical as can be when it was
presented to me), but in a world where compilation is fast, there's very
little speed penalty.
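
In Verilog the same baby steps might look like this (hypothetical
names, just to show the granularity):

    // Step 1: the test, written before the module does anything useful.
    module tb_answer;
        wire [7:0] out;
        answer u_dut (.out(out));
        initial begin
            #1;
            if (out !== 8'd42) $display("FAIL: out=%0d", out);
            else               $display("PASS");
            $finish;
        end
    endmodule

    // Step 2: the dumbest module that makes the test pass.
    // Real behaviour is added only as further tests demand it.
    module answer (output wire [7:0] out);
        assign out = 8'd42;
    endmodule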

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
 
On 05/16/2017 01:21 PM, Tim Wescott wrote:
Anyone doing any test driven design for FPGA work?

I've gone over to doing it almost universally for C++ development,
because It Just Works -- you lengthen the time to integration a bit, but
vastly shorten the actual integration time.

I did a web search and didn't find it mentioned -- the traditional "make
a test bench" is part way there, but as presented in my textbook* doesn't
impose a comprehensive suite of tests on each module.

So is no one doing it, or does it have another name, or an equivalent
design process with a different name, or what?

* "The Verilog Hardware Description Language", Thomas & Moorby, Kluwer,
1998.

Can you elaborate on "Test Driven Design" please? Is this some
specialized design methodology, or a standard design methodology with
extensive module testing, or something else completely?

thanks,
BobH
 
On 05/17/2017 10:48 AM, Tim Wescott wrote:
On Wed, 17 May 2017 13:39:55 -0400, rickman wrote:

On 5/17/2017 1:17 PM, Tim Wescott wrote:
On Wed, 17 May 2017 11:47:10 -0400, rickman wrote:

On 5/16/2017 4:21 PM, Tim Wescott wrote:
Anyone doing any test driven design for FPGA work?

I've gone over to doing it almost universally for C++ development,
because It Just Works -- you lengthen the time to integration a bit,
but vastly shorten the actual integration time.

I did a web search and didn't find it mentioned -- the traditional
"make a test bench" is part way there, but as presented in my
textbook*
doesn't impose a comprehensive suite of tests on each module.

So is no one doing it, or does it have another name, or an equivalent
design process with a different name, or what?

* "The Verilog Hardware Description Language", Thomas & Moorby,
Kluwer,
1998.

I'm not clear on all of the details of what defines "test driven
design", but I believe I've been using that all along. I've thought
of this as bottom up development where the lower level code is written
first *and thoroughly tested* before writing the next level of code.

How does "test driven design" differ from this significantly?

The big difference in the software world is that the tests are
automated and never retired. There are generally test suites to make
the mechanics of testing easier. Ideally, whenever you do a build you
run the entire unit-test suite fresh. This means that when you tweak
some low-level function, it still gets tested.

The other big difference, that's hard for one guy to do, is that if
you're going Full Agile you have one guy writing tests and another guy
writing "real" code. Ideally they're equally good, and they switch
off. The idea is basically that more brains on the problem is better.

If you look at the full description of TDD it looks like it'd be hard,
slow, and clunky, because the recommendation is to do things at a very
fine-grained level. However, I've done it, and the process of adding
features to a function as you add tests to the bench goes very quickly.
The actual development of the bottom layer is a bit slower, but when
you go to put the pieces together they just fall into place.

I guess I'm still not picturing it. I think the part I don't get is
"adding features to a function". To me the features would *be*
functions that are written, tested and then added to next higher level
code. So I assume what you wrote applies to that next higher level.

I program in two languages, Forth and VHDL. In Forth functions (called
"words") are written at *very* low levels, often a word is a single line
of code and nearly all the time no more than five. Being very small a
word is much easier to write although the organization can be tough to
settle on.

In VHDL I typically don't decompose the code into such fine grains. It
is easy to write the code for the pieces, registers and logic. The hard
part is how they interconnect/interrelate. Fine decomposition tends to
obscure that rather than enhancing it. So I write large blocks of code
to be tested. I guess in those cases features would be "added" rather
than new modules being written for the new functionality.

I still write test benches for each module in VHDL. Because there is a
lot more work in writing and using a VHDL test bench than a Forth test
word this also encourages larger (and fewer) modules.

Needless to say, I don't find much synergy between the two languages.

Part of what I'm looking for is a reading on whether it makes sense in
the context of an HDL, and if so, how it makes sense in the context of an
HDL (I'm using Verilog, because I'm slightly more familiar with it, but
that's incidental).

In Really Pure TDD for Java, C, or C++, you start by writing a test in
the absence of a function, just to see the compiler error out. Then you
write a function that does nothing. Then (for instance), you write a
test whose expected return value is "42", and an accompanying function
that just returns 42. Then you elaborate from there.

It sounds really dippy (I was about as skeptical as can be when it was
presented to me), but in a world where compilation is fast, there's very
little speed penalty.

One project I've seen for actual full-scale TDD in an HDL context,
complete with continuous integration of regression tests, is VUnit. It
combines HDL stub code with a Python wrapper to automate the running of
lots of little tests, rather than one big monolithic test that keels
over and dies on the first error instead of reporting them all.

To be honest, my personal attempts to use it have been pretty
unsuccessful; it's non-trivial to get the environment set up and
working. But we're using it on the VHDL-2017 IEEE package sources to do
TDD there, and when someone else was willing to get everything configured
(literally the guy who wrote it), he managed to get it all up and working.

--
Rob Gaddi, Highland Technology -- www.highlandtechnology.com
Email address domain is currently out of order. See above to fix.
 
On Wed, 17 May 2017 11:05:02 -0700, BobH wrote:

On 05/16/2017 01:21 PM, Tim Wescott wrote:
Anyone doing any test driven design for FPGA work?

I've gone over to doing it almost universally for C++ development,
because It Just Works -- you lengthen the time to integration a bit,
but vastly shorten the actual integration time.

I did a web search and didn't find it mentioned -- the traditional
"make a test bench" is part way there, but as presented in my textbook*
doesn't impose a comprehensive suite of tests on each module.

So is no one doing it, or does it have another name, or an equivalent
design process with a different name, or what?

* "The Verilog Hardware Description Language", Thomas & Moorby, Kluwer,
1998.


Can you elaborate on "Test Driven Design" please? Is this some
specialized design methodology, or a standard design methodology with
extensive module testing, or something else completely?

It is a specific software design methodology under the Agile development
umbrella.

There's a Wikipedia article on it, which is probably good (I'm just
trusting them this time):

https://en.wikipedia.org/wiki/Test-driven_development

It's basically a bit of structure on top of some common-sense
methodologies (i.e., design from the top down, then code from the bottom
up, and test the hell out of each bit as you code it).

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
 
On Wed, 17 May 2017 11:29:55 -0700, Rob Gaddi wrote:

On 05/17/2017 10:48 AM, Tim Wescott wrote:
On Wed, 17 May 2017 13:39:55 -0400, rickman wrote:

On 5/17/2017 1:17 PM, Tim Wescott wrote:
On Wed, 17 May 2017 11:47:10 -0400, rickman wrote:

On 5/16/2017 4:21 PM, Tim Wescott wrote:
Anyone doing any test driven design for FPGA work?

I've gone over to doing it almost universally for C++ development,
because It Just Works -- you lengthen the time to integration a
bit, but vastly shorten the actual integration time.

I did a web search and didn't find it mentioned -- the traditional
"make a test bench" is part way there, but as presented in my
textbook*
doesn't impose a comprehensive suite of tests on each module.

So is no one doing it, or does it have another name, or an
equivalent design process with a different name, or what?

* "The Verilog Hardware Description Language", Thomas & Moorby,
Kluwer,
1998.

I'm not clear on all of the details of what defines "test driven
design", but I believe I've been using that all along. I've thought
of this as bottom up development where the lower level code is
written first *and thoroughly tested* before writing the next level
of code.

How does "test driven design" differ from this significantly?

The big difference in the software world is that the tests are
automated and never retired. There are generally test suites to make
the mechanics of testing easier. Ideally, whenever you do a build
you run the entire unit-test suite fresh. This means that when you
tweak some low-level function, it still gets tested.

The other big difference, that's hard for one guy to do, is that if
you're going Full Agile you have one guy writing tests and another
guy writing "real" code. Ideally they're equally good, and they
switch off. The idea is basically that more brains on the problem is
better.

If you look at the full description of TDD it looks like it'd be
hard, slow, and clunky, because the recommendation is to do things at
a very fine-grained level. However, I've done it, and the process of
adding features to a function as you add tests to the bench goes very
quickly.
The actual development of the bottom layer is a bit slower, but when
you go to put the pieces together they just fall into place.

I guess I'm still not picturing it. I think the part I don't get is
"adding features to a function". To me the features would *be*
functions that are written, tested and then added to next higher level
code. So I assume what you wrote applies to that next higher level.

I program in two languages, Forth and VHDL. In Forth functions
(called "words") are written at *very* low levels, often a word is a
single line of code and nearly all the time no more than five. Being
very small a word is much easier to write although the organization
can be tough to settle on.

In VHDL I typically don't decompose the code into such fine grains.
It is easy to write the code for the pieces, registers and logic. The
hard part is how they interconnect/interrelate. Fine decomposition
tends to obscure that rather than enhancing it. So I write large
blocks of code to be tested. I guess in those cases features would
be "added" rather than new modules being written for the new
functionality.

I still write test benches for each module in VHDL. Because there is
a lot more work in writing and using a VHDL test bench than a Forth test
word this also encourages larger (and fewer) modules.

Needless to say, I don't find much synergy between the two languages.

Part of what I'm looking for is a reading on whether it makes sense in
the context of an HDL, and if so, how it makes sense in the context of
an HDL (I'm using Verilog, because I'm slightly more familiar with it,
but that's incidental).

In Really Pure TDD for Java, C, or C++, you start by writing a test in
the absence of a function, just to see the compiler error out. Then
you write a function that does nothing. Then (for instance), you write
a test whose expected return value is "42", and an accompanying
function that just returns 42. Then you elaborate from there.

It sounds really dippy (I was about as skeptical as can be when it was
presented to me), but in a world where compilation is fast, there's
very little speed penalty.


One project I've seen for actual full-scale TDD in an HDL context,
complete with continuous integration of regression tests, is VUnit. It
combines HDL stub code with a Python wrapper to automate the running of
lots of little tests, rather than one big monolithic test that keels
over and dies on the first error instead of reporting them all.

To be honest, my personal attempts to use it have been pretty
unsuccessful; it's non-trivial to get the environment set up and
working. But we're using it on the VHDL-2017 IEEE package sources to do
TDD there and when someone else was willing to get everything configured
(literally the guy who wrote it) he managed to get it all up and
working.

Yup. And when the guy who wrote the software is setting it up and
configuring it, you just KNOW that it's got to be easy for ordinary
mortals.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
 
On 5/17/2017 1:48 PM, Tim Wescott wrote:
On Wed, 17 May 2017 13:39:55 -0400, rickman wrote:

On 5/17/2017 1:17 PM, Tim Wescott wrote:
On Wed, 17 May 2017 11:47:10 -0400, rickman wrote:

On 5/16/2017 4:21 PM, Tim Wescott wrote:
Anyone doing any test driven design for FPGA work?

I've gone over to doing it almost universally for C++ development,
because It Just Works -- you lengthen the time to integration a bit,
but vastly shorten the actual integration time.

I did a web search and didn't find it mentioned -- the traditional
"make a test bench" is part way there, but as presented in my
textbook*
doesn't impose a comprehensive suite of tests on each module.

So is no one doing it, or does it have another name, or an equivalent
design process with a different name, or what?

* "The Verilog Hardware Description Language", Thomas & Moorby,
Kluwer,
1998.

I'm not clear on all of the details of what defines "test driven
design", but I believe I've been using that all along. I've thought
of this as bottom up development where the lower level code is written
first *and thoroughly tested* before writing the next level of code.

How does "test driven design" differ from this significantly?

The big difference in the software world is that the tests are
automated and never retired. There are generally test suites to make
the mechanics of testing easier. Ideally, whenever you do a build you
run the entire unit-test suite fresh. This means that when you tweak
some low-level function, it still gets tested.

The other big difference, that's hard for one guy to do, is that if
you're going Full Agile you have one guy writing tests and another guy
writing "real" code. Ideally they're equally good, and they switch
off. The idea is basically that more brains on the problem is better.

If you look at the full description of TDD it looks like it'd be hard,
slow, and clunky, because the recommendation is to do things at a very
fine-grained level. However, I've done it, and the process of adding
features to a function as you add tests to the bench goes very quickly.
The actual development of the bottom layer is a bit slower, but when
you go to put the pieces together they just fall into place.

I guess I'm still not picturing it. I think the part I don't get is
"adding features to a function". To me the features would *be*
functions that are written, tested and then added to next higher level
code. So I assume what you wrote applies to that next higher level.

I program in two languages, Forth and VHDL. In Forth functions (called
"words") are written at *very* low levels, often a word is a single line
of code and nearly all the time no more than five. Being very small a
word is much easier to write although the organization can be tough to
settle on.

In VHDL I typically don't decompose the code into such fine grains. It
is easy to write the code for the pieces, registers and logic. The hard
part is how they interconnect/interrelate. Fine decomposition tends to
obscure that rather than enhancing it. So I write large blocks of code
to be tested. I guess in those cases features would be "added" rather
than new modules being written for the new functionality.

I still write test benches for each module in VHDL. Because there is a
lot more work in writing and using a VHDL test bench than a Forth test
word this also encourages larger (and fewer) modules.

Needless to say, I don't find much synergy between the two languages.

Part of what I'm looking for is a reading on whether it makes sense in
the context of an HDL, and if so, how it makes sense in the context of an
HDL (I'm using Verilog, because I'm slightly more familiar with it, but
that's incidental).

In Really Pure TDD for Java, C, or C++, you start by writing a test in
the absence of a function, just to see the compiler error out. Then you
write a function that does nothing. Then (for instance), you write a
test whose expected return value is "42", and an accompanying function
that just returns 42. Then you elaborate from there.

It sounds really dippy (I was about as skeptical as can be when it was
presented to me), but in a world where compilation is fast, there's very
little speed penalty.

I can't think of anything about HDL (VHDL in my case as I am not nearly
as familiar with Verilog) that would be a hindrance for this. The
verification would be done in a simulator. The only issue I find is
that simulators typically require you to set up a new workspace/project
for every separate simulation with a whole sub-directory tree below it.
Sometimes they want to put the source in one of the branches rather than
moving all the temporary files out of the way. So one evolving
simulation would be easier than many module simulations.

--

Rick C
 
rickman <gnuarm@gmail.com> wrote:
I can't think of anything about HDL (VHDL in my case as I am not nearly
as familiar with Verilog) that would be a hindrance for this. The
verification would be done in a simulator.

I think one awkwardness with Verilog (and I think VHDL) is the nature of an
'output'. To 'output' 42 from a module typically requires handshaking
signals, which you have to test at the same time as the data. Getting the
data right but the handshaking wrong is a serious bug. What would be a
simple test in software suddenly requires pattern matching a state machine
(and maybe fuzzing its inputs).

In C the control flow is always right - your module always returns 42, it
never returns 42,42,42 or X,42,X. HDLs like Bluespec decouple the semantic
content and take care of the control flow, but in Verilog you have to do all
of it (and test it) by hand.
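
For instance, a hypothetical valid-strobed version of the 'return 42'
test, where the bench has to police the handshake as well as the data
(all names invented):

    module tb_answer_stream;
        reg        clk = 0, rst = 1;
        wire [7:0] data;
        wire       valid;
        integer    beats = 0;

        answer_stream u_dut (.clk(clk), .rst(rst),
                             .data(data), .valid(valid));
        always #5 clk = ~clk;

        // Only sample data inside a valid beat; the handshake, not the
        // wire, defines when 42 is actually 'returned'.
        always @(posedge clk) if (!rst && valid) begin
            beats = beats + 1;
            if (data !== 8'd42)
                $display("FAIL: valid beat carried %0d", data);
        end

        initial begin
            repeat (2) @(posedge clk);
            rst = 0;
            repeat (20) @(posedge clk);
            // catches 42,42,42 (too many beats) as well as no beat at all
            if (beats !== 1)
                $display("FAIL: %0d valid beats, expected 1", beats);
            else
                $display("PASS");
            $finish;
        end
    endmodule

    // Stand-in DUT meeting the spec: one valid beat of 42 after reset.
    module answer_stream (input clk, input rst,
                          output reg [7:0] data, output reg valid);
        reg done = 0;
        always @(posedge clk)
            if (rst) begin
                valid <= 0; done <= 0;
            end else begin
                valid <= !done;
                data  <= 8'd42;
                done  <= 1;
            end
    endmodule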

Theo
 
On 5/17/2017 5:17 PM, Theo Markettos wrote:
rickman <gnuarm@gmail.com> wrote:
I can't think of anything about HDL (VHDL in my case as I am not nearly
as familiar with Verilog) that would be a hindrance for this. The
verification would be done in a simulator.

I think one awkwardness with Verilog (and I think VHDL) is the nature of an
'output'. To 'output' 42 from a module typically requires handshaking
signals, which you have to test at the same time as the data. Getting the
data right but the handshaking wrong is a serious bug. What would be a
simple test in software suddenly requires pattern matching a state machine
(and maybe fuzzing its inputs).

In C the control flow is always right - your module always returns 42, it
never returns 42,42,42 or X,42,X. HDLs like Bluespec decouple the semantic
content and take care of the control flow, but in Verilog you have to do all
of it (and test it) by hand.

I don't agree this is an issue. If the module returns specific data
timed to the inputs like a C function then it will have handshakes, but
that is part of the requirement and *must* be tested. In fact, I could
see the handshake requirement being in place before the data
requirement. Or it is not uncommon to have control and timing modules
that don't process any data.

Maybe you are describing something that is real and I'm just glossing
over it. But I think handshake checking is something that would be
solved once and then reused across modules. I know I've written
plenty of test code like that before; I just didn't think to make it
general purpose. That would be a big benefit to this sort of testing:
making the test benches modular so pieces can be reused. I tend to use
the module under test to test itself a lot. A UART transmitter tests a
UART receiver, an IRIG-B generator tests the IRIG-B receiver (from the
same design).
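
Sketched in Verilog (invented port names; 'uart_tx' and 'uart_rx' are
assumed to exist in the design with roughly these interfaces), the
loopback idea looks like:

    // The transmitter exercises the receiver from the same design:
    // push random bytes in one end, compare what falls out the other.
    module tb_uart_loopback;
        reg        clk = 0, rst = 1, go = 0;
        reg  [7:0] tx_byte;
        reg [31:0] seed = 32'd7;
        wire       serial, busy, ready;
        wire [7:0] rx_byte;
        integer    i;

        uart_tx u_tx (.clk(clk), .rst(rst), .data(tx_byte), .go(go),
                      .txd(serial), .busy(busy));
        uart_rx u_rx (.clk(clk), .rst(rst), .rxd(serial),
                      .data(rx_byte), .ready(ready));
        always #5 clk = ~clk;

        initial begin
            repeat (4) @(posedge clk);
            rst = 0;
            for (i = 0; i < 100; i = i + 1) begin
                tx_byte = $random(seed);
                go = 1; @(posedge clk); go = 0;
                @(posedge ready);          // receiver flags a byte
                if (rx_byte !== tx_byte)
                    $display("FAIL: sent %02h, got %02h",
                             tx_byte, rx_byte);
                wait (!busy);              // let the frame drain
            end
            $display("done: 100 bytes looped back");
            $finish;
        end
    endmodule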

I guess this would be a real learning experience to come up with
efficient ways to develop the test code the same way the module under
test is developed. Right now I think I spend as much time on the test
bench as I do the module.

--

Rick C
 
On Wed, 17 May 2017 00:47:39 +0100, Theo Markettos wrote:

Tim Wescott <tim@seemywebsite.really> wrote:
Anyone doing any test driven design for FPGA work?

I've gone over to doing it almost universally for C++ development,
because It Just Works -- you lengthen the time to integration a bit,
but vastly shorten the actual integration time.

I did a web search and didn't find it mentioned -- the traditional
"make a test bench" is part way there, but as presented in my textbook*
doesn't impose a comprehensive suite of tests on each module.

So is no one doing it, or does it have another name, or an equivalent
design process with a different name, or what?

We do it. We have an equivalence checker that fuzzes random inputs to
both the system and an executable 'golden model' of the system, looking
for discrepancies. If found, it'll then reduce down to a minimal
example.

In particular this is very handy because running the test cases is then
synthesisable: so we can run the tests on FPGA rather than on a
simulator.

Our paper has more details and the code is open source:
https://www.cl.cam.ac.uk/research/security/ctsrd/pdfs/201509-memocode2015-bluecheck.pdf

So, you have two separate implementations of the system -- how do you
know that they aren't both identically buggy?

Or is it that one is carefully constructed to be clear and easy to
understand (and therefore review) while the other is constructed to
optimize over whatever constraints you want (size, speed, etc.)?

--
www.wescottdesign.com
 
Tim Wescott <tim@seemywebsite.really> wrote:
So, you have two separate implementations of the system -- how do you
know that they aren't both identically buggy?

Is that the problem with any testing framework?
Quis custodiet ipsos custodes?
Who tests the tests?

Or is it that one is carefully constructed to be clear and easy to
understand (and therefore review) while the other is constructed to
optimize over whatever constraints you want (size, speed, etc.)?

Essentially that. You can write a functionally correct but slow
implementation (completely unpipelined, for instance). You can write an
implementation that relies on things that aren't available in hardware
(a+b*c is easy for the simulator to check, but the hardware implementation
in IEEE floating point is somewhat more complex). You can also write high
level checks that don't know about implementation (if I enqueue E times and
dequeue D times to this FIFO, the current fill should always be E-D).
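
That FIFO check might be bound in as a little observer module, e.g.
(hypothetical names; it assumes the DUT's fill count is registered on
the same clock):

    // Counts accepted enqueues and dequeues at the interface and
    // insists the reported fill equals E - D, knowing nothing about
    // how the FIFO is implemented inside.
    module fifo_fill_check #(parameter W = 5) (
        input wire         clk,
        input wire         rst,
        input wire         enq,    // an enqueue was accepted this cycle
        input wire         deq,    // a dequeue was accepted this cycle
        input wire [W-1:0] fill    // occupancy reported by the DUT
    );
        reg [31:0] e = 0, d = 0;
        always @(posedge clk) begin
            if (rst) begin
                e <= 0; d <= 0;
            end else begin
                if (fill !== e - d)
                    $display("FAIL: fill=%0d, expected E-D=%0d",
                             fill, e - d);
                if (enq) e <= e + 1;
                if (deq) d <= d + 1;
            end
        end
    endmodule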

It helps if they're written by different people - eg we have 3
implementations of the ISA (hardware, emulator, formal model, plus the spec
and the test suite) that are used to shake out ambiguities: specify first,
write tests, three people implement without having seen the tests, see if
they differ. Fix the problems, write tests to cover the corner cases.
Rinse and repeat.

Theo
 
On Thu, 18 May 2017 14:48:12 +0100, Theo Markettos wrote:

Tim Wescott <tim@seemywebsite.really> wrote:
So, you have two separate implementations of the system -- how do you
know that they aren't both identically buggy?

Is that the problem with any testing framework?
Quis custodiet ipsos custodes?
Who tests the tests?

Or is it that one is carefully constructed to be clear and easy to
understand (and therefore review) while the other is constructed to
optimize over whatever constraints you want (size, speed, etc.)?

Essentially that. You can write a functionally correct but slow
implementation (completely unpipelined, for instance). You can write an
implementation that relies on things that aren't available in hardware
(a+b*c is easy for the simulator to check, but the hardware
implementation in IEEE floating point is somewhat more complex). You
can also write high level checks that don't know about implementation
(if I enqueue E times and dequeue D times to this FIFO, the current fill
should always be E-D).

It helps if they're written by different people - eg we have 3
implementations of the ISA (hardware, emulator, formal model, plus the
spec and the test suite) that are used to shake out ambiguities: specify
first, write tests, three people implement without having seen the
tests, see if they differ. Fix the problems, write tests to cover the
corner cases. Rinse and repeat.

Theo

It's a bit different on the software side -- there's a lot more of "poke
it THIS way, see if it squeaks THAT way". Possibly the biggest value is
that (in software at least, but I suspect in hardware too) it encourages you
to keep any stateful information simple, just to make the tests simple --
and pure functions are, of course, the easiest.

I need to think about how this applies to my baby-steps project I'm
working on, if at all.

--
www.wescottdesign.com
 
On Thursday, 18 May 2017 at 15:48:19 UTC+2, Theo Markettos wrote:
Tim Wescott <tim@seemywebsite.really> wrote:
So, you have two separate implementations of the system -- how do you
know that they aren't both identically buggy?

Is that the problem with any testing framework?
Quis custodiet ipsos custodes?
Who tests the tests?

If two different implementations agree, that gives a bit more confidence
than a single implementation agreeing with itself.
 
