Tom Gardner
On 18/05/17 15:22, Tim Wescott wrote:
Interesting questions with FSMs implemented in software...
Which of the many implementation patterns should
you choose?
My preference is anything that avoids deeply nested
if/then/else/switch statements, since they rapidly
become a maintenance nightmare. (I've seen nesting
10 deep!)
Also, design patterns that enable logging of events
and states should be encouraged and left in the code
at runtime. I've found them /excellent/ techniques for
correctly deflecting blame onto the other party.
Should you design in a proper FSM style/language
and autogenerate the executable source code, or code
directly in the source language? Difficult, but there
are very useful OOP design patterns that make it easy.
And w.r.t. TDD, should your tests demonstrate that
the FSM's design is correct, or that the implementation
artefacts are correct?
Naive unit tests often end up testing the individual
low-level implementation artefacts, not the design.
Those are useful when refactoring, but otherwise
are not sufficient.
On Thu, 18 May 2017 14:48:12 +0100, Theo Markettos wrote:
Tim Wescott <tim@seemywebsite.really> wrote:
So, you have two separate implementations of the system -- how do you
know that they aren't both identically buggy?
Is that the problem with any testing framework?
Quis custodiet ipsos custodes?
Who tests the tests?
Or is it that one is carefully constructed to be clear and easy to
understand (and therefore review) while the other is constructed to
optimize over whatever constraints you want (size, speed, etc.)?
Essentially that. You can write a functionally correct but slow
implementation (completely unpipelined, for instance). You can write an
implementation that relies on things that aren't available in hardware
(a+b*c is easy for the simulator to check, but the hardware
implementation in IEEE floating point is somewhat more complex). You
can also write high-level checks that don't know about implementation
(if I enqueue E times and dequeue D times to this FIFO, the current fill
should always be E-D).
It helps if they're written by different people - e.g. we have three
implementations of the ISA (hardware, emulator, formal model, plus the
spec and the test suite) that are used to shake out ambiguities: specify
first, write tests, three people implement without having seen the
tests, see if they differ. Fix the problems, write tests to cover the
corner cases. Rinse and repeat.
Theo
It's a bit different on the software side -- there's a lot more of "poke
it THIS way, see if it squeaks THAT way". Possibly the biggest value is
that (in software at least, but I suspect in hardware) it encourages you
to keep any stateful information simple, just to make the tests simple --
and pure functions are, of course, the easiest.
I need to think about how this applies to my baby-steps project I'm
working on, if at all.