Test Driven Design?

On 18/05/17 15:22, Tim Wescott wrote:
On Thu, 18 May 2017 14:48:12 +0100, Theo Markettos wrote:

Tim Wescott <tim@seemywebsite.really> wrote:
So, you have two separate implementations of the system -- how do you
know that they aren't both identically buggy?

Is that the problem with any testing framework?
Quis custodiet ipsos custodes?
Who tests the tests?

Or is it that one is carefully constructed to be clear and easy to
understand (and therefore review) while the other is constructed to
optimize over whatever constraints you want (size, speed, etc.)?

Essentially that. You can write a functionally correct but slow
implementation (completely unpipelined, for instance). You can write an
implementation that relies on things that aren't available in hardware
(a+b*c is easy for the simulator to check, but the hardware
implementation in IEEE floating point is somewhat more complex). You
can also write high level checks that don't know about implementation
(if I enqueue E times and dequeue D times to this FIFO, the current fill
should always be E-D)
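
A minimal sketch of that last kind of check, as a plain Python reference model driven alongside the DUT (framework-agnostic; the class and names are hypothetical, purely illustrative):

class FifoFillChecker:
    """Track enqueues (E) and dequeues (D); the fill must always be E - D."""

    def __init__(self, depth):
        self.depth = depth
        self.enqueues = 0   # E
        self.dequeues = 0   # D

    def enqueue(self):
        self.enqueues += 1

    def dequeue(self):
        self.dequeues += 1

    def check(self, observed_fill):
        expected = self.enqueues - self.dequeues
        assert 0 <= expected <= self.depth, "test drove the FIFO illegally"
        assert observed_fill == expected, (
            f"fill is {observed_fill}, but E - D = {expected}")

checker = FifoFillChecker(depth=16)
checker.enqueue(); checker.enqueue(); checker.dequeue()
checker.check(observed_fill=1)   # observed_fill would come from the DUT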

It helps if they're written by different people - eg we have 3
implementations of the ISA (hardware, emulator, formal model, plus the
spec and the test suite) that are used to shake out ambiguities: specify
first, write tests, three people implement without having seen the
tests, see if they differ. Fix the problems, write tests to cover the
corner cases. Rinse and repeat.

Theo

It's a bit different on the software side -- there's a lot more of "poke
it THIS way, see if it squeaks THAT way". Possibly the biggest value is
that (in software at least, but I suspect in hardware) it encourages you
to keep any stateful information simple, just to make the tests simple --
and pure functions are, of course, the easiest.

I need to think about how this applies to my baby-steps project I'm
working on, if at all.

Interesting questions with FSMs implemented in software...

Which of the many implementation patterns should
you choose?

My preference is anything that avoids deeply nested
if/then/else/switch statements, since they rapidly
become a maintenance nightmare. (I've seen nesting
10 deep!).

Also, design patterns that enable logging of events
and states should be encouraged and left in the code
at runtime. I've found them /excellent/ techniques for
correctly deflecting blame onto the other party :)

Should you design in a proper FSM style/language
and autogenerate the executable source code, or code
directly in the source language? Difficult, but there
are very useful OOP design patterns that make it easy.
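
One pattern that avoids the nesting problem, and gives event/state logging for free, is a table-driven FSM. A minimal Python sketch (the states, events and handshake example are hypothetical):

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("fsm")

class Fsm:
    def __init__(self, initial, table):
        # table maps (state, event) -> (next_state, action or None)
        self.state = initial
        self.table = table

    def dispatch(self, event):
        key = (self.state, event)
        if key not in self.table:
            log.warning("ignored event %r in state %r", event, self.state)
            return
        next_state, action = self.table[key]
        log.info("%r --%r--> %r", self.state, event, next_state)
        if action:
            action()
        self.state = next_state

# Example: a trivial request/acknowledge handshake controller.
table = {
    ("IDLE", "request"):     ("WAIT_ACK", None),
    ("WAIT_ACK", "ack"):     ("IDLE", None),
    ("WAIT_ACK", "timeout"): ("IDLE", lambda: log.error("ack timed out")),
}

fsm = Fsm("IDLE", table)
fsm.dispatch("request")
fsm.dispatch("ack")

Each transition is one flat table entry, and the dispatch routine is the single place where logging happens, so the log stays in the code at runtime.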

And w.r.t. TDD, should your tests demonstrate the
FSM's design is correct or that the implementation
artefacts are correct?

Naive unit tests often end up testing the individual
low-level implementation artefacts, not the design.
Those are useful when refactoring, but otherwise
are not sufficient.
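
The distinction is easiest to see in test code. A hedged Python illustration with a trivial software FIFO (the class and both tests are invented for the purpose):

import unittest
from collections import deque

class Fifo:
    def __init__(self):
        self._buf = deque()          # implementation detail

    def push(self, x):
        self._buf.append(x)

    def pop(self):
        return self._buf.popleft()

class BehaviourTest(unittest.TestCase):
    def test_first_in_first_out(self):
        f = Fifo()
        for x in (1, 2, 3):
            f.push(x)
        # Checks the specified behaviour: elements come out in order.
        self.assertEqual([f.pop(), f.pop(), f.pop()], [1, 2, 3])

class ArtefactTest(unittest.TestCase):
    def test_internal_buffer_is_a_deque(self):
        f = Fifo()
        f.push(1)
        # Checks an implementation artefact: swap the deque for a ring
        # buffer and this fails even though the FIFO still works.
        self.assertEqual(list(f._buf), [1])

if __name__ == "__main__":
    unittest.main()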
 
On 5/18/2017 12:14 PM, Tom Gardner wrote:
On 18/05/17 15:22, Tim Wescott wrote:
On Thu, 18 May 2017 14:48:12 +0100, Theo Markettos wrote:

Tim Wescott <tim@seemywebsite.really> wrote:
So, you have two separate implementations of the system -- how do you
know that they aren't both identically buggy?

Is that the problem with any testing framework?
Quis custodiet ipsos custodes?
Who tests the tests?

Or is it that one is carefully constructed to be clear and easy to
understand (and therefore review) while the other is constructed to
optimize over whatever constraints you want (size, speed, etc.)?

Essentially that. You can write a functionally correct but slow
implementation (completely unpipelined, for instance). You can write an
implementation that relies on things that aren't available in hardware
(a+b*c is easy for the simulator to check, but the hardware
implementation in IEEE floating point is somewhat more complex). You
can also write high level checks that don't know about implementation
(if I enqueue E times and dequeue D times to this FIFO, the current fill
should always be E-D)

It helps if they're written by different people - eg we have 3
implementations of the ISA (hardware, emulator, formal model, plus the
spec and the test suite) that are used to shake out ambiguities: specify
first, write tests, three people implement without having seen the
tests, see if they differ. Fix the problems, write tests to cover the
corner cases. Rinse and repeat.

Theo

It's a bit different on the software side -- there's a lot more of "poke
it THIS way, see if it squeaks THAT way". Possibly the biggest value is
that (in software at least, but I suspect in hardware) it encourages you
to keep any stateful information simple, just to make the tests simple --
and pure functions are, of course, the easiest.

I need to think about how this applies to my baby-steps project I'm
working on, if at all.

Interesting questions with FSMs implemented in software...

Which of the many implementation patterns should
you choose?

Personally, I custom design FSM code without worrying about what it
would be called. There really are only two issues. The first is
whether you can afford a clock delay in the output and how that impacts
your output assignments. The second is the complexity of the code
(maintenance).


My preference is anything that avoids deeply nested
if/then/else/switch statements, since they rapidly
become a maintenance nightmare. (I've seen nesting
10 deep!).

Such deep layering likely indicates a poor problem decomposition, but it
is hard to say without looking at the code.

Normally there is a switch for the state variable and conditionals
within each case to evaluate inputs. Typically this is not so complex.


Also, design patterns that enable logging of events
and states should be encouraged and left in the code
at runtime. I've found them /excellent/ techniques for
correctly deflecting blame onto the other party :)

Should you design in a proper FSM style/language
and autogenerate the executable source code, or code
directly in the source language? Difficult, but there
are very useful OOP design patterns that make it easy.

Designing in anything other than the HDL you are using increases the
complexity of backing up your tools. In addition to source code, it can
be important to be able to restore the development environment. I don't
bother with FSM tools other than tools that help me think.


And w.r.t. TDD, should your tests demonstrate the
FSM's design is correct or that the implementation
artefacts are correct?

I'll have to say that is a new term to me, "implementation
artefacts[sic]". Can you explain?

I test behavior. Behavior is what is specified for a design, so why
would you test anything else?


Naive unit tests often end up testing the individual
low-level implementation artefacts, not the design.
Those are useful when refactoring, but otherwise
are not sufficient.

--

Rick C
 
On 5/18/2017 12:08 PM, lasselangwadtchristensen@gmail.com wrote:
On Thursday, 18 May 2017 at 15:48:19 UTC+2, Theo Markettos wrote:
Tim Wescott <tim@seemywebsite.really> wrote:
So, you have two separate implementations of the system -- how do you
know that they aren't both identically buggy?

Is that the problem with any testing framework?
Quis custodiet ipsos custodes?
Who tests the tests?

the test?

if two different implementations agree, it adds a bit more confidence than an
implementation agreeing with itself.

The point is if both designs were built with the same misunderstanding
of the requirements, they could both be wrong. While not common, this
is not unheard of. It could be caused by cultural biases (each company
is a culture) or a poorly written specification.

--

Rick C
 
On Thu, 18 May 2017 13:05:40 -0400, rickman wrote:

On 5/18/2017 12:08 PM, lasselangwadtchristensen@gmail.com wrote:
On Thursday, 18 May 2017 at 15:48:19 UTC+2, Theo Markettos wrote:
Tim Wescott <tim@seemywebsite.really> wrote:
So, you have two separate implementations of the system -- how do you
know that they aren't both identically buggy?

Is that the problem with any testing framework?
Quis custodiet ipsos custodes?
Who tests the tests?

the test?

if two different implementations agree, it adds a bit more confidence
than an implementation agreeing with itself.

The point is if both designs were built with the same misunderstanding
of the requirements, they could both be wrong. While not common, this
is not unheard of. It could be caused by cultural biases (each company
is a culture) or a poorly written specification.

Yup. Although testing the real, obscure and complicated thing against
the fake, easy to read and understand thing does sound like a viable
test, too.

Prolly should both hit the thing with known test vectors written against
the spec, and do the behavioral vs. actual sim, too.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
 
On Tue, 16 May 2017 15:21:49 -0500
Tim Wescott <tim@seemywebsite.really> wrote:

> Anyone doing any test driven design for FPGA work?

If you do hardware design with an interpretive language, then
test driven design is essential:

http://docs.myhdl.org/en/stable/manual/unittest.html
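
The discipline looks roughly like this; a plain-Python sketch against a reference model, deliberately not showing the MyHDL simulation API itself (see the linked manual for that), with a Gray-code converter as the toy design and hypothetical names:

import unittest

def bin2gray(b, width):
    """Reference model: binary to reflected Gray code."""
    return (b ^ (b >> 1)) & ((1 << width) - 1)

class TestBin2Gray(unittest.TestCase):
    def test_single_bit_changes(self):
        # Property from the spec: successive Gray codes differ in one bit.
        width = 4
        for i in range(2 ** width - 1):
            diff = bin2gray(i, width) ^ bin2gray(i + 1, width)
            self.assertEqual(bin(diff).count("1"), 1)

if __name__ == "__main__":
    unittest.main()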

My hobby project is long and slow, but I think this discipline
is slowly improving my productivity.

Jan Coombs
 
On 18/05/17 18:01, rickman wrote:
On 5/18/2017 12:14 PM, Tom Gardner wrote:
On 18/05/17 15:22, Tim Wescott wrote:
On Thu, 18 May 2017 14:48:12 +0100, Theo Markettos wrote:

Tim Wescott <tim@seemywebsite.really> wrote:
So, you have two separate implementations of the system -- how do you
know that they aren't both identically buggy?

Is that the problem with any testing framework?
Quis custodiet ipsos custodes?
Who tests the tests?

Or is it that one is carefully constructed to be clear and easy to
understand (and therefore review) while the other is constructed to
optimize over whatever constraints you want (size, speed, etc.)?

Essentially that. You can write a functionally correct but slow
implementation (completely unpipelined, for instance). You can write an
implementation that relies on things that aren't available in hardware
(a+b*c is easy for the simulator to check, but the hardware
implementation in IEEE floating point is somewhat more complex). You
can also write high level checks that don't know about implementation
(if I enqueue E times and dequeue D times to this FIFO, the current fill
should always be E-D)

It helps if they're written by different people - eg we have 3
implementations of the ISA (hardware, emulator, formal model, plus the
spec and the test suite) that are used to shake out ambiguities: specify
first, write tests, three people implement without having seen the
tests, see if they differ. Fix the problems, write tests to cover the
corner cases. Rinse and repeat.

Theo

It's a bit different on the software side -- there's a lot more of "poke
it THIS way, see if it squeaks THAT way". Possibly the biggest value is
that (in software at least, but I suspect in hardware) it encourages you
to keep any stateful information simple, just to make the tests simple --
and pure functions are, of course, the easiest.

I need to think about how this applies to my baby-steps project I'm
working on, if at all.

Interesting questions with FSMs implemented in software...

Which of the many implementation patterns should
you choose?

Personally, I custom design FSM code without worrying about what it would be
called. There really are only two issues. The first is whether you can afford
a clock delay in the output and how that impacts your output assignments. The
second is the complexity of the code (maintenance).


My preference is anything that avoids deeply nested
if/then/else/switch statements, since they rapidly
become a maintenance nightmare. (I've seen nesting
10 deep!).

Such deep layering likely indicates a poor problem decomposition, but it is hard
to say without looking at the code.

It was a combination of technical and personnel factors.
The overriding business imperative was, at each stage,
to make the smallest and /incrementally/ cheapest modification.

The road to hell is paved with good intentions.


Normally there is a switch for the state variable and conditionals within each
case to evaluate inputs. Typically this is not so complex.

This was an inherently complex task that was ineptly
implemented. I'm not going to define how ineptly,
because you wouldn't believe it. I only believe it
because I saw it, and boggled.


Also, design patterns that enable logging of events
and states should be encouraged and left in the code
at runtime. I've found them /excellent/ techniques for
correctly deflecting blame onto the other party :)

Should you design in a proper FSM style/language
and autogenerate the executable source code, or code
directly in the source language? Difficult, but there
are very useful OOP design patterns that make it easy.

Designing in anything other than the HDL you are using increases the complexity
of backing up your tools. In addition to source code, it can be important to be
able to restore the development environment. I don't bother with FSM tools
other than tools that help me think.

Very true. I use that argument, and more, to caution
people against inventing Domain Specific Languages
when they should be inventing Domain Specific Libraries.

Guess which happened in the case I alluded to above.


And w.r.t. TDD, should your tests demonstrate the
FSM's design is correct or that the implementation
artefacts are correct?

I'll have to say that is a new term to me, "implementation artefacts[sic]". Can
you explain?

Nothing non-obvious. An implementation artefact is
something that is part of /a/ specific design implementation,
as opposed to something that is an inherent part of
/the/ problem.


I test behavior. Behavior is what is specified for a design, so why would you
test anything else?

Clearly you haven't practiced XP/Agile/Lean development
practices.

You sound like a 20th century hardware engineer, rather
than a 21st century software "engineer". You must learn
to accept that all new things are, in every way, better
than the old ways.

Excuse me while I go and wash my mouth out with soap.


Naive unit tests often end up testing the individual
low-level implementation artefacts, not the design.
Those are useful when refactoring, but otherwise
are not sufficient.
 
On 18/05/17 18:05, rickman wrote:
On 5/18/2017 12:08 PM, lasselangwadtchristensen@gmail.com wrote:
On Thursday, 18 May 2017 at 15:48:19 UTC+2, Theo Markettos wrote:
Tim Wescott <tim@seemywebsite.really> wrote:
So, you have two separate implementations of the system -- how do you
know that they aren't both identically buggy?

Is that the problem with any testing framework?
Quis custodiet ipsos custodes?
Who tests the tests?

the test?

if two different implementations agree, it adds a bit more confidence than an
implementation agreeing with itself.

The point is if both designs were built with the same misunderstanding of the
requirements, they could both be wrong. While not common, this is not unheard
of. It could be caused by cultural biases (each company is a culture) or a
poorly written specification.

The prior question is whether the specification is correct.

Or more realistically, to what extent it is/isn't correct,
and the best set of techniques and processes for reducing
the imperfection.

And that leads to XP/Agile concepts, to deal with the suboptimal
aspects of Waterfall Development.

Unfortunately the zealots can't accept that what you gain
on the swings you lose on the roundabouts.
 
On 18/05/17 19:03, Jan Coombs wrote:
On Tue, 16 May 2017 15:21:49 -0500
Tim Wescott <tim@seemywebsite.really> wrote:

Anyone doing any test driven design for FPGA work?

If you do hardware design with an interpretive language, then
test driven design is essential:

http://docs.myhdl.org/en/stable/manual/unittest.html

My hobby project is long and slow, but I think this discipline
is slowly improving my productivity.

It doesn't matter in the slightest whether or not the
language is interpreted.

Consider that, for example, C is (usually) compiled to
assembler. That assembler is then interpreted by microcode
(or more modern equivalent!) into RISC operations, which
are then interpreted by hardware.
 
On 5/18/2017 6:10 PM, Tom Gardner wrote:
On 18/05/17 18:05, rickman wrote:
On 5/18/2017 12:08 PM, lasselangwadtchristensen@gmail.com wrote:
On Thursday, 18 May 2017 at 15:48:19 UTC+2, Theo Markettos wrote:
Tim Wescott <tim@seemywebsite.really> wrote:
So, you have two separate implementations of the system -- how do you
know that they aren't both identically buggy?

Is that the problem with any testing framework?
Quis custodiet ipsos custodes?
Who tests the tests?

the test?

if two different implementations agree, it adds a bit more confidence
than an implementation agreeing with itself.

The point is if both designs were built with the same misunderstanding
of the requirements, they could both be wrong. While not common, this
is not unheard of. It could be caused by cultural biases (each company
is a culture) or a poorly written specification.

The prior question is whether the specification is correct.

Or more realistically, to what extent it is/isn't correct,
and the best set of techniques and processes for reducing
the imperfection.

And that leads to XP/Agile concepts, to deal with the suboptimal
aspects of Waterfall Development.

Unfortunately the zealots can't accept that what you gain
on the swings you lose on the roundabouts.

I'm sure you know exactly what you meant. :)

--

Rick C
 
On 5/18/2017 6:06 PM, Tom Gardner wrote:
On 18/05/17 18:01, rickman wrote:
On 5/18/2017 12:14 PM, Tom Gardner wrote:

My preference is anything that avoids deeply nested
if/then/else/switch statements, since they rapidly
become a maintenance nightmare. (I've seen nesting
10 deep!).

Such deep layering likely indicates a poor problem decomposition, but it
is hard to say without looking at the code.

It was a combination of technical and personnel factors.
The overriding business imperative was, at each stage,
to make the smallest and /incrementally/ cheapest modification.

The road to hell is paved with good intentions.

If we are bandying about platitudes I will say, penny wise, pound foolish.


Normally there is a switch for the state variable and conditionals
within each case to evaluate inputs. Typically this is not so complex.

This was an inherently complex task that was ineptly
implemented. I'm not going to define how ineptly,
because you wouldn't believe it. I only believe it
because I saw it, and boggled.

Good design is about simplifying the complex. Ineptitude is a separate
issue and can ruin even simple designs.


Also, design patterns that enable logging of events
and states should be encouraged and left in the code
at runtime. I've found them /excellent/ techniques for
correctly deflecting blame onto the other party :)

Should you design in a proper FSM style/language
and autogenerate the executable source code, or code
directly in the source language? Difficult, but there
are very useful OOP design patterns that make it easy.

Designing in anything other than the HDL you are using increases the
complexity of backing up your tools. In addition to source code, it can
be important to be able to restore the development environment. I don't
bother with FSM tools other than tools that help me think.

Very true. I use that argument, and more, to caution
people against inventing Domain Specific Languages
when they should be inventing Domain Specific Libraries.

Guess which happened in the case I alluded to above.

An exception to that rule is programming in Forth. It is a language
where programming *is* extending the language. There are many
situations where the process ends up with programs written in what appears
to be a domain specific language, but working quite well. So don't
throw the baby out with the bathwater when trying to save designers from
themselves.


And w.r.t. TDD, should your tests demonstrate the
FSM's design is correct or that the implementation
artefacts are correct?

I'll have to say that is a new term to me, "implementation
artefacts[sic]". Can you explain?

Nothing non-obvious. An implementation artefact is
something that is part of /a/ specific design implementation,
as opposed to something that is an inherent part of
/the/ problem.

Why would I want to test design artifacts? The tests in TDD are
developed from the requirements, not the design, right?


I test behavior. Behavior is what is specified for a design, so why
would you test anything else?

Clearly you haven't practiced XP/Agile/Lean development
practices.

You sound like a 20th century hardware engineer, rather
than a 21st century software "engineer". You must learn
to accept that all new things are, in every way, better
than the old ways.

Excuse me while I go and wash my mouth out with soap.

Lol

--

Rick C
 
On 19/05/17 01:53, rickman wrote:
On 5/18/2017 6:06 PM, Tom Gardner wrote:
On 18/05/17 18:01, rickman wrote:
On 5/18/2017 12:14 PM, Tom Gardner wrote:
Also, design patterns that enable logging of events
and states should be encouraged and left in the code
at runtime. I've found them /excellent/ techniques for
correctly deflecting blame onto the other party :)

Should you design in a proper FSM style/language
and autogenerate the executable source code, or code
directly in the source language? Difficult, but there
are very useful OOP design patterns that make it easy.

Designing in anything other than the HDL you are using increases the
complexity of backing up your tools. In addition to source code, it can
be important to be able to restore the development environment. I don't
bother with FSM tools other than tools that help me think.

Very true. I use that argument, and more, to caution
people against inventing Domain Specific Languages
when they should be inventing Domain Specific Libraries.

Guess which happened in the case I alluded to above.

An exception to that rule is programming in Forth. It is a language where
programming *is* extending the language. There are many situations where the
process ends up with programs written in what appears to be a domain specific
language, but working quite well. So don't throw the baby out with the bathwater
when trying to save designers from themselves.

I see why you are saying that, but I disagree. The
Forth /language/ is pleasantly simple. The myriad
Forth words (e.g. cmove, catch, canonical etc) in most
Forth environments are part of the "standard library",
not the language per se.

Forth words are more-or-less equivalent to functions
in a trad language. Defining new words is therefore
like defining a new function.

Just as defining new words "looks like" defining
a DSL, so - at the "application level" - defining
new functions also looks like defining a new DSL.

Most importantly, both new functions and new words
automatically have the invaluable tools support without
having to do anything. With a new DSL, all the tools
(from parsers to browsers) also have to be built.
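
A toy Python contrast of the two routes (everything here is hypothetical): the domain specific *library* is just ordinary functions, so editors, debuggers and test runners already understand it, while the external DSL has to start by building a parser, and every other tool still has to follow:

# Domain specific library: plain functions that read like a mini-language.
def enqueue(fifo, value): fifo.append(value)
def dequeue(fifo): return fifo.pop(0)
def fill(fifo): return len(fifo)

q = []
enqueue(q, 42)
assert fill(q) == 1 and dequeue(q) == 42

# External DSL: the first tool you must write is a parser/interpreter.
def run(program, fifo):
    for line in program.strip().splitlines():
        cmd, *args = line.split()
        if cmd == "ENQUEUE":
            enqueue(fifo, int(args[0]))
        elif cmd == "DEQUEUE":
            dequeue(fifo)
        else:
            raise SyntaxError(f"unknown command: {line!r}")

run("ENQUEUE 42\nDEQUEUE", q)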



And w.r.t. TDD, should your tests demonstrate the
FSM's design is correct or that the implementation
artefacts are correct?

I'll have to say that is a new term to me, "implementation
artefacts[sic]". Can you explain?

Nothing non-obvious. An implementation artefact is
something that is part of /a/ specific design implementation,
as opposed to something that is an inherent part of
/the/ problem.

Why would I want to test design artifacts? The tests in TDD are developed from
the requirements, not the design, right?

Ideally, but only to some extent. TDD is frequently used
at a much lower level, where it is usually divorced
from specs.

TDD is also frequently used with - and implemented in
the form of - unit tests, which are definitely divorced
from the spec.

Hence, in the real world, there is bountiful opportunity
for diversion from the obvious pure sane course. And
Murphy's Law definitely applies.

Having said that, both TDD and Unit Testing are valuable
additions to the designer's toolchest. But they must
be used intelligently[1], and are merely codifications of
things most of us have been doing for decades.

No change there, then.

[1] be careful of external consultants proselytising
the teaching courses they are selling. They have a
hammer, and everything /does/ look like a nail.
 
On 5/19/2017 4:59 AM, Tom Gardner wrote:
On 19/05/17 01:53, rickman wrote:
On 5/18/2017 6:06 PM, Tom Gardner wrote:
On 18/05/17 18:01, rickman wrote:
On 5/18/2017 12:14 PM, Tom Gardner wrote:
Also, design patterns that enable logging of events
and states should be encouraged and left in the code
at runtime. I've found them /excellent/ techniques for
correctly deflecting blame onto the other party :)

Should you design in a proper FSM style/language
and autogenerate the executable source code, or code
directly in the source language? Difficult, but there
are very useful OOP design patterns that make it easy.

Designing in anything other than the HDL you are using increases the
complexity of backing up your tools. In addition to source code, it can
be important to be able to restore the development environment. I don't
bother with FSM tools other than tools that help me think.

Very true. I use that argument, and more, to caution
people against inventing Domain Specific Languages
when they should be inventing Domain Specific Libraries.

Guess which happened in the case I alluded to above.

An exception to that rule is programming in Forth. It is a language
where programming *is* extending the language. There are many
situations where the process ends up with programs written in what appears
to be a domain specific language, but working quite well. So don't
throw the baby out with the bathwater when trying to save designers from
themselves.

I see why you are saying that, but I disagree. The
Forth /language/ is pleasantly simple. The myriad
Forth words (e.g. cmove, catch, canonical etc) in most
Forth environments are part of the "standard library",
not the language per se.

Forth words are more-or-less equivalent to functions
in a trad language. Defining new words is therefore
like defining a new function.

I can't find a definition for "trad language".


Just as defining new words "looks like" defining
a DSL, so - at the "application level" - defining
new functions also looks like defining a new DSL.

Most importantly, both new functions and new words
automatically have the invaluable tools support without
having to do anything. With a new DSL, all the tools
(from parsers to browsers) also have to be built.

I have no idea what distinction you are trying to make. Why is making
new tools a necessary part of defining a domain specific language?

If it walks like a duck...

FRONT LED ON TURN

That could be the domain specific language under Forth for turning on
the front LED of some device. Sure looks like a language to me.

I have considered writing a parser for a type of XML file simply by
defining the syntax as Forth words. So rather than "process" the file
with an application program, the Forth compiler would "compile" the
file. I'd call that a domain specific language.


And w.r.t. TDD, should your tests demonstrate the
FSM's design is correct or that the implementation
artefacts are correct?

I'll have to say that is a new term to me, "implementation
artefacts[sic]". Can you explain?

Nothing non-obvious. An implementation artefact is
something that is part of /a/ specific design implementation,
as opposed to something that is an inherent part of
/the/ problem.

Why would I want to test design artifacts? The tests in TDD are
developed from the requirements, not the design, right?

Ideally, but only to some extent. TDD is frequently used
at a much lower level, where it is usually divorced
from specs.

There is a failure in the specification process. The projects I have
worked on which required a formal requirements development process
applied it to every level. So every piece of code that would be tested
had requirements which defined the tests.


TDD is also frequently used with - and implemented in
the form of - unit tests, which are definitely divorced
from the spec.

They are? How then are the tests generated?


Hence, in the real world, there is bountiful opportunity
for diversion from the obvious pure sane course. And
Murphy's Law definitely applies.

Having said that, both TDD and Unit Testing are valuable
additions to the designer's toolchest. But they must
be used intelligently[1], and are merely codifications of
things most of us have been doing for decades.

No change there, then.

[1] be careful of external consultants proselytising
the teaching courses they are selling. They have a
hammer, and everything /does/ look like a nail.

--

Rick C
 
On 05/17/2017 11:33 AM, Tim Wescott wrote:
snip
It's basically a bit of structure on top of some common-sense
methodologies (i.e., design from the top down, then code from the bottom
up, and test the hell out of each bit as you code it).

Other than occasional test fixtures, most of my FPGA work in recent
years has been FPGA verification of the digital sections of mixed signal
ASICs. Your description sounds exactly like the methodology used on both
the product ASIC side and the verification FPGA side. After the FPGA is
built and working, you test the hell out of the FPGA system and the
product ASIC with completely separate tools and techniques. When
problems are discovered, you often fall back to either the ASIC or FPGA
simulation test benches to isolate the issue.

The importance of good, detailed, self checking, top level test benches
cannot be over-stressed. For mid and low level blocks that are complex
or likely to see significant iterations (due to design spec changes),
self checking test benches are worth the effort. My experience with
manual checking test benches is that the first time you go through it
you remember to examine all the important spots, but the thoroughness of
the manual checking on subsequent runs falls off fast. Giving a manual
check test bench to someone else is a waste of time for both of you.
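
The shape of a self checking bench is simple enough; a hedged Python sketch of the idea, where dut_step stands in for whatever simulator interface is actually in use (everything here is hypothetical):

import random

def ref_model(x):
    return (x * 3) & 0xFF            # reference behaviour, from the spec

def dut_step(x):
    return (x * 3) & 0xFF            # placeholder for the real DUT call

def run_test(n_vectors=1000, seed=1):
    # Drive the same stimulus into DUT and reference model, compare
    # automatically, and end with an unambiguous PASS/FAIL instead of
    # a waveform to eyeball.
    rng = random.Random(seed)
    errors = 0
    for i in range(n_vectors):
        x = rng.randrange(256)
        expected = ref_model(x)
        got = dut_step(x)
        if got != expected:
            errors += 1
            print(f"MISMATCH at vector {i}: in={x} exp={expected} got={got}")
    print("PASS" if errors == 0 else f"FAIL: {errors} mismatches")
    return errors == 0

if __name__ == "__main__":
    run_test()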

BobH
 
I've solved the problem of setting up a new project for each testbench by not using any projects. Vivado has a non-project mode where you write a simple tcl script which tells Vivado what sources to use and what to do with them.

I have a source directory with hdl files in our repository and dozens of scripts. Each script takes sources from the same directory, creates its own temp working directory and runs its test there. I also have a script which runs all the tests at once without the GUI. I run it right before going home. When I get to work the next morning I run a script which analyses the reports looking for errors. If there is an error somewhere, I run the corresponding test script with the GUI switched on to look at the waveforms.

Non-project mode not only allows me to run different tests simultaneously for the same sources, but also allows me to run multiple synthesis jobs for them.

I have used only this mode for more than 2 years and am absolutely happy with it. Highly recommended!
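
The morning report check can be very small; a Python sketch of the idea only (the directory layout, file names and error strings are assumptions, not the actual scripts described here):

import pathlib

ERROR_MARKS = ("ERROR", "CRITICAL WARNING", "Failure")   # assumed markers

def scan_reports(root="test_runs"):
    # Walk the per-test working directories and grep their logs.
    failing = []
    for log in sorted(pathlib.Path(root).glob("*/*.log")):
        bad = [line.rstrip() for line in log.open(errors="replace")
               if any(mark in line for mark in ERROR_MARKS)]
        if bad:
            failing.append((log, bad))
    for log, lines in failing:
        print(f"--- {log}")
        for line in lines:
            print(f"    {line}")
    print("all tests clean" if not failing
          else f"{len(failing)} log(s) with errors")

if __name__ == "__main__":
    scan_reports()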
 
On 5/19/2017 6:31 PM, Ilya Kalistru wrote:
I've solved the problem of setting up a new project for each testbench by not using any projects. Vivado has a non-project mode where you write a simple tcl script which tells Vivado what sources to use and what to do with them.

I have a source directory with hdl files in our repository and dozens of scripts. Each script takes sources from the same directory, creates its own temp working directory and runs its test there. I also have a script which runs all the tests at once without the GUI. I run it right before going home. When I get to work the next morning I run a script which analyses the reports looking for errors. If there is an error somewhere, I run the corresponding test script with the GUI switched on to look at the waveforms.

Non-project mode not only allows me to run different tests simultaneously for the same sources, but also allows me to run multiple synthesis jobs for them.

I have used only this mode for more than 2 years and am absolutely happy with it. Highly recommended!

Interesting. Vivado is what, Xilinx?

--

Rick C
 
On Saturday, 20 May 2017 at 00:57:24 UTC+2, rickman wrote:
On 5/19/2017 6:31 PM, Ilya Kalistru wrote:
I've solved the problem of setting up a new project for each testbench by not using any projects. Vivado has a non-project mode where you write a simple tcl script which tells Vivado what sources to use and what to do with them.

I have a source directory with hdl files in our repository and dozens of scripts. Each script takes sources from the same directory, creates its own temp working directory and runs its test there. I also have a script which runs all the tests at once without the GUI. I run it right before going home. When I get to work the next morning I run a script which analyses the reports looking for errors. If there is an error somewhere, I run the corresponding test script with the GUI switched on to look at the waveforms.

Non-project mode not only allows me to run different tests simultaneously for the same sources, but also allows me to run multiple synthesis jobs for them.

I have used only this mode for more than 2 years and am absolutely happy with it. Highly recommended!

Interesting. Vivado is what, Xilinx?

yes
 
Yes. It is Xilinx Vivado.

Another important advantage of non-project mode is that it is fully compatible with source control systems. When you don't have projects, you don't have piles of junk files of unknown purpose that change every time you open a project or run a simulation. In non-project mode you have only hdl sources and tcl scripts. Therefore all information is stored in the source control system, and when you commit changes you commit only the changes you have made, not random changes to unknown project files.

In this situation, working with IP cores is a bit trickier, but not much. Considering that you don't change IPs very often, it's not a problem at all.

I see that a very small number of HDL designers know and use this mode. Maybe I should write an article about it. Where would it be appropriate to publish it?
 
On 5/20/2017 3:11 AM, Ilya Kalistru wrote:
Yes. It is Xilinx Vivado.

Another important advantage of non-project mode is that it is fully compatible with source control systems. When you don't have projects, you don't have piles of junk files of unknown purpose that change every time you open a project or run a simulation. In non-project mode you have only hdl sources and tcl scripts. Therefore all information is stored in the source control system, and when you commit changes you commit only the changes you have made, not random changes to unknown project files.

In this situation, working with IP cores is a bit trickier, but not much. Considering that you don't change IPs very often, it's not a problem at all.

I see that a very small number of HDL designers know and use this mode. Maybe I should write an article about it. Where would it be appropriate to publish it?

Doesn't the tool still generate all the intermediate files? The Lattice
tool (which uses Synplify for synthesis) creates a huge number of files
that only the tools look at. They aren't really project files, they are
various intermediate files. Living in the project main directory they
really get in the way.

--

Rick C
 
On 20/05/17 08:11, Ilya Kalistru wrote:
Yes. It is Xilinx Vivado.

Another important advantage of non-project mode is that it is fully compatible with source control systems. When you don't have projects, you don't have piles of junk files of unknown purpose that change every time you open a project or run a simulation. In non-project mode you have only hdl sources and tcl scripts. Therefore all information is stored in the source control system, and when you commit changes you commit only the changes you have made, not random changes to unknown project files.

In this situation, working with IP cores is a bit trickier, but not much. Considering that you don't change IPs very often, it's not a problem at all.

I see that a very small number of HDL designers know and use this mode. Maybe I should write an article about it. Where would it be appropriate to publish it?

That would be useful; the project mode is initially appealing,
but the splattered files and SCCS give me the jitters.

Publish it everywhere! Any blog and bulletin board you can find,
not limited to those dedicated to Xilinx.
 
Ilya Kalistru <stebanoid@gmail.com> wrote:
I've solved the problem of setting up a new project for each testbench
by not using any projects. Vivado has a non-project mode where you write a
simple tcl script which tells Vivado what sources to use and what to do
with them.

Something similar is possible with Intel FPGA (Altera) Quartus.
You need one tcl file for settings, and building is a few commands which we
run from a Makefile.

All our builds run in continuous integration, which extracts logs and
timing/area numbers. The bitfiles then get downloaded and booted on FPGA,
then the test suite and benchmarks are run automatically to monitor
performance. Numbers then come back to continuous integration for graphing.

Theo
 
