Mealy vs. Moore FSM

Rick C wrote:
I was in another discussion on FSM because of a research paper that used the two terms. I've never liked the Mealy machine because the state diagram (a directed graph) doesn't properly describe the asynchronous nature of the outputs. It shows an output changing as part of the state transition, but in reality the output is a combinational circuit of inputs and state, which can change without a corresponding change in state.

In the real world nearly all logic is actually sequential. If the inputs to an FSM are async, then the machine can go haywire if the inputs change too close to the clock edge. A race condition in the circuit can cause some state FFs to change and others not, resulting in wrong states at best and invalid states at worst. So async inputs are always buffered through FFs to prevent this, making them synchronous.

The result is that virtually all FSMs are actually Moore, or a combination of the two where the outputs are a registered version of what you would get from a Mealy FSM.

When done in software, virtually every design is Moore, since there is no opportunity for outputs to change other than when the code is run (the equivalent of a clock edge). There is no such thing as a Mealy FSM in software without greatly complicating your code.
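To make that concrete, here is a minimal C sketch of the usual software
pattern (the state names and motor output are hypothetical, not from any
particular design): the step function runs once per pass, and the output
is computed from the stored state alone, which is exactly Moore behaviour.

  #include <stdbool.h>

  typedef enum { IDLE, RUNNING } state_t;   /* hypothetical states */

  static state_t state = IDLE;
  static bool    motor_on;                  /* the Moore output */

  /* Called once per pass (the software equivalent of a clock edge).
     Inputs are only sampled here, so outputs can only change here. */
  void fsm_step(bool start_btn, bool stop_btn)
  {
      switch (state) {                      /* next-state logic */
      case IDLE:    if (start_btn) state = RUNNING; break;
      case RUNNING: if (stop_btn)  state = IDLE;    break;
      }

      motor_on = (state == RUNNING);        /* output = f(state): Moore */
  }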

Does anyone here even think about FSMs in terms of Mealy and Moore?

--

Rick C.

- Get 1,000 miles of free Supercharging
- Tesla referral code - https://ts.la/richard11209
 
Rick C <gnuarm.deletethisbit@gmail.com> writes:

> [...]
>
> Does anyone here even think about FSMs in terms of Mealy and Moore?

Not that much. I think the key in software implementations is how clear
you make the code. Even though software runs synchronously, having
multiple cores, or running the FSM in the main loop while doing some
state-dependent I/O in interrupts, peripherals or DMA, may result in
Mealy-like system behaviour.

So, when Mealy-like behaviour happens, at least make it as clear as
possible in the code.

About terms - even though the separation has always been clear to me,
I've never used the terms.

--
mikko
 
On 2019-08-14 23:38, Rick C wrote:
> [...]
>
> Does anyone here even think about FSMs in terms of Mealy and Moore?

I like to design in simple Mealy FSMs from time to time.
Many people see them as 'hairball logic' because they don't
understand them. In my view, there is no such thing as a
Mealy FSM in software, period. Everything is necessarily
synchronized to the processor clock.

Jeroen Belleman
 
On 2019-08-14, Rick C <gnuarm.deletethisbit@gmail.com> wrote:
> [...]
>
> Does anyone here even think about FSMs in terms of Mealy and Moore?

Yeah, but it's been a few years since I did an iterative FSM. Most of
the tricky coding I do these days is SQL, so it's event driven, and the
FSM's state is a database record. Sometimes I cheat and use a timestamp
as part of the state, so the state can change (future to past) without
any event happening, any code running, or any output being produced.

--
When I tried casting out nines I made a hash of it.
 
On 15/08/19 12:10, Jeroen Belleman wrote:
> On 2019-08-14 23:38, Rick C wrote:
>> [...]
>
> I like to design in simple Mealy FSMs from time to time.
> Many people see them as 'hairball logic' because they don't
> understand them. In my view, there is no such thing as a
> Mealy FSM in software, period. Everything is necessarily
> synchronized to the processor clock.

That latter is not a /useful/ distinction.

The nearest analogy to an event-driven software FSM is
an *asynchronous* hardware FSM where it is guaranteed
that only one input can change simultaneously. That
guarantee is a result of the software structure:
- an event arrives, possibly from a message coming up
  the network stack, possibly from an interrupt routine
- the event is put in a queue, length >= 1
- at a convenient time the event is taken from the
  input queue
- the event is fully processed (outputs, next state)
  before looping back to take the next event

There is no system clock at the FSM level; events are
processed when available and convenient.

So far that is effectively a Moore machine: events
and outputs change at the "same" time, not independently.
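Here is a minimal C sketch of that serialised event-loop structure (the
event type, queue API and output function are hypothetical placeholders,
not any particular system's API):

  #include <stdbool.h>

  typedef enum { EV_BUTTON, EV_TIMEOUT } event_t;  /* hypothetical events */
  typedef enum { S_IDLE, S_ACTIVE } state_t;

  /* Assumed threadsafe FIFO, filled by ISRs or the network stack;
     blocks until an event is available. */
  extern bool queue_take(event_t *ev);
  extern void set_output(bool on);    /* hypothetical output driver */

  void fsm_run(void)
  {
      state_t state = S_IDLE;
      event_t ev;

      for (;;) {
          queue_take(&ev);    /* serialisation point: one event at a time */
          switch (state) {    /* outputs and next state decided together */
          case S_IDLE:
              if (ev == EV_BUTTON)  { set_output(true);  state = S_ACTIVE; }
              break;
          case S_ACTIVE:
              if (ev == EV_TIMEOUT) { set_output(false); state = S_IDLE; }
              break;
          }
      }   /* each event is fully processed before the next one is taken */
  }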

Now it is possible to conceive of a Mealy software FSM:
- have the structure outlined above for the "main" FSM
- when an input changes it generates a hardware interrupt
- in the interrupt routine, have something like

    if (current_state == X) output = foo;
    if (current_state == Y) output = baz;

There the output can change independently of the FSM state,
i.e. effectively a Mealy FSM.

Note, I don't advocate structuring software that way, but
I would consider it in an embedded system where the latency
between the input occurring and the output changing needs
to be minimised.

(In such a situation I'd prefer to have a system without
the indeterminacy of interrupts, by dedicating /a/ core
to waiting for the input to occur. Yes, that's the xCORE
and the software is in xC, so the IDE guarantees timing
without sucking-it-and-seeing what execution times might
turn out to be :) )
 
On Thu, 15 Aug 2019 13:10:39 +0200, Jeroen Belleman
<jeroen@nospam.please> wrote:

> On 2019-08-14 23:38, Rick C wrote:
>> [...]
>
> I like to design in simple Mealy FSMs from time to time.
> Many people see them as 'hairball logic' because they don't
> understand them. In my view, there is no such thing as a
> Mealy FSM in software, period. Everything is necessarily
> synchronized to the processor clock.
>
> Jeroen Belleman

We do lots of state machines in FPGAs and in software.

The advantage of using a formal FSM in software is that it centralizes
the logic in a way that looks good on a whiteboard photo, and forces
the programmer to account for all possible states, which seems to be a
novelty in most code.

A state machine is the exact opposite of hairball code.

A careful software FSM design can eliminate the need for an RTOS and
semaphores and blocks and queues and other complicated dangerous
things.

The ultimate state machine has an N-bit state word and a lookup table
in RAM or ROM, or a dimensioned integer array, with 2^(N+I) entries of
next states and outputs. That prevents the "next state" logic from
becoming its own hairball.

Of course the I inputs must be synchronized.
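A minimal C sketch of that table-driven style, assuming N = 2 state
bits and I = 2 input bits (the packing and names are hypothetical): the
state word and the latched inputs are concatenated into an index, and a
single array lookup yields both the next state and the outputs.

  #include <stdint.h>

  #define N 2                       /* state bits */
  #define I 2                       /* input bits */
  #define ENTRIES (1u << (N + I))   /* 2^(N+I) table entries */

  /* Each entry packs the next state (low N bits) and the outputs
     (remaining bits). Filled in from the state diagram; all zero
     here as a placeholder. */
  static const uint8_t fsm_table[ENTRIES];

  static uint8_t state;             /* the N-bit state word */

  uint8_t fsm_clock(uint8_t inputs) /* inputs already synchronized */
  {
      uint8_t index = (uint8_t)((state << I) | (inputs & ((1u << I) - 1u)));
      uint8_t entry = fsm_table[index];

      state = entry & ((1u << N) - 1u);   /* next state */
      return (uint8_t)(entry >> N);       /* outputs */
  }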
 
On 15/08/19 15:20, jlarkin@highlandsniptechnology.com wrote:
> On Thu, 15 Aug 2019 13:10:39 +0200, Jeroen Belleman
> <jeroen@nospam.please> wrote:
>> [...]
>
> We do lots of state machines in FPGAs and in software.
>
> The advantage of using a formal FSM in software is that it centralizes
> the logic in a way that looks good on a whiteboard photo, and forces
> the programmer to account for all possible states, which seems to be a
> novelty in most code.
>
> A state machine is the exact opposite of hairball code.

Yes indeed.

It pains me that softies tend to think of FSMs as "something
to do with parsing in compilers".


> A careful software FSM design can eliminate the need for an RTOS and
> semaphores and blocks and queues and other complicated dangerous
> things.

Not quite. You still need semaphores and queues to get
events to the FSM. However, that is very contained/localised,
and several orders of magnitude more likely to work than
throwing together a few semaphores.

You will also frequently benefit from a few design
patterns, especially the half-sync/half-async pattern.


> The ultimate state machine has an N-bit state word, and a lookup table
> of 2^(N+I) entries of next states and outputs. [...]
>
> Of course the I inputs must be synchronized.

Synchronised to what, exactly?

I think you mean "serialised", which is entirely different
and easily achieved by putting events in a queue.
 
On Thu, 15 Aug 2019 16:59:48 +0200, Jeroen Belleman
<jeroen@nospam.please> wrote:

> On 2019-08-15 14:41, Tom Gardner wrote:
>> [...]
>
> We live in different worlds. I'm a hardware guy. You are
> evidently a software guy. The whole computer is a giant
> Moore machine. Of course you can use it to simulate an
> (asynchronous!) Mealy machine, but only if you don't look
> too closely.
>
> Jeroen Belleman

I have seen hardware guys screw up state machines!

One dangerous concept is "it can't get into that state so I don't have
to worry about it."
 
On Thu, 15 Aug 2019 15:36:27 +0100, Tom Gardner
<spamjunk@blueyonder.co.uk> wrote:

> On 15/08/19 15:20, jlarkin@highlandsniptechnology.com wrote:
>> [...]
>>
>> Of course the I inputs must be synchronized.
>
> Synchronised to what, exactly?

At the beginning of the state machine execution, hardware or software
FSM, all the inputs should be latched simultaneously, before the state
logic goes to work. In both hardware and software machines, that
prevents creating illegal states because of time skews on inputs. If a
software FSM runs with interrupts blocked (and maybe hardware
interfaces frozen) that's not a problem, but it's still good to
aggregate and latch all inputs in a visible place before computing the
next state.

A software state machine is still made out of procedural,
sequentially-executed code!

> I think you mean "serialised", which is entirely different
> and easily achieved by putting events in a queue.

I meant that inputs should be atomically latched in one place at the
start of each execution/clock. "Events in a queue" is another family
of hazards.

An FPGA full of synchronous logic, with a common clock, and that does
the state logic in a single clock, satisfies the stable-setup
requirement automatically. Of course off-chip signals, or inputs from
another clock domain, must be synchronized before use.

One software (or even hardware) trick is to sample the inputs twice
and make sure nothing has changed. If it has, try again. Sometimes
just taking a third sample set is safe.
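A minimal C sketch of that double-sample trick (the input-port read is
a hypothetical placeholder): keep sampling until two consecutive reads
agree, so nothing changed mid-read.

  #include <stdint.h>

  extern uint8_t read_inputs(void);  /* hypothetical raw input port read */

  /* Return a stable snapshot of the inputs: loop until two
     consecutive samples match. */
  uint8_t latch_inputs(void)
  {
      uint8_t a = read_inputs();
      for (;;) {
          uint8_t b = read_inputs();
          if (a == b)
              return a;   /* stable: safe to feed the state logic */
          a = b;          /* inputs moved between samples; try again */
      }
  }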
 
On 2019-08-15 14:41, Tom Gardner wrote:
> On 15/08/19 12:10, Jeroen Belleman wrote:
>> [...]
>
> That latter is not a /useful/ distinction.
>
> The nearest analogy to an event-driven software FSM is
> an *asynchronous* hardware FSM where it is guaranteed
> that only one input can change simultaneously. That
> guarantee is a result of the software structure
>
> [...]

We live in different worlds. I'm a hardware guy. You are
evidently a software guy. The whole computer is a giant
Moore machine. Of course you can use it to simulate an
(asynchronous!) Mealy machine, but only if you don't look
too closely.

Jeroen Belleman
 
On 15.8.19 18:29, jlarkin@highlandsniptechnology.com wrote:
> On Thu, 15 Aug 2019 16:59:48 +0200, Jeroen Belleman
> <jeroen@nospam.please> wrote:
>> [...]
>>
>> We live in different worlds. I'm a hardware guy. [...]
>
> I have seen hardware guys screw up state machines!
>
> One dangerous concept is "it can't get into that state so I don't have
> to worry about it."

Yes. Even if it cannot get into the state, there must be a way out.

--

-TV
 
On Thursday, August 15, 2019 at 10:59:53 AM UTC-4, Jeroen Belleman wrote:
> On 2019-08-15 14:41, Tom Gardner wrote:
>> [...]
>
> We live in different worlds. I'm a hardware guy. You are
> evidently a software guy. The whole computer is a giant
> Moore machine. Of course you can use it to simulate an
> (asynchronous!) Mealy machine, but only if you don't look
> too closely.

While what you say may be factually correct, it is not a useful view of what is happening.

--

Rick C.

-- Get 1,000 miles of free Supercharging
-- Tesla referral code - https://ts.la/richard11209
 
On Thursday, August 15, 2019 at 8:41:13 AM UTC-4, Tom Gardner wrote:
> On 15/08/19 12:10, Jeroen Belleman wrote:
>> [...]
>
> That latter is not a /useful/ distinction.
>
> The nearest analogy to an event-driven software FSM is
> an *asynchronous* hardware FSM where it is guaranteed
> that only one input can change simultaneously. That
> guarantee is a result of the software structure
> [...]
>
> So far that is effectively a Moore machine: events
> and outputs change at the "same" time, not independently.

I'm not sure that is what defines a Moore machine. You use the term "events" for the input changes. A Mealy machine changes outputs based on state and inputs, so outputs can change from input changes even if the state has not changed.

The distinction is made based on which information is used to define the outputs. If the inputs as well as the state are used, it's supposed to be a Mealy FSM, while a Moore FSM's outputs depend only on the state. In software the whole thing is a bit muddled because there are no registers and no clock, but you can still tell what the inputs are.
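In code the distinction is simply which arguments the output function
takes. A minimal hypothetical sketch in C (a turnstile-style example,
not from the thread):

  typedef enum { LOCKED, UNLOCKED } state_t;

  /* Moore: the output is a function of state alone. */
  int moore_output(state_t s)
  {
      return s == UNLOCKED;                 /* gate open iff unlocked */
  }

  /* Mealy: the output is a function of state AND current input, so it
     can change when the input changes, even with no state change. */
  int mealy_output(state_t s, int coin_present)
  {
      return s == UNLOCKED || coin_present; /* opens early on a coin */
  }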


> Now it is possible to conceive of a Mealy software FSM: [...]
>
> There the output can change independently of the FSM state, i.e.
> effectively a Mealy FSM.

Where I have trouble is seeing that this is a Mealy FSM rather than just some extra logic tacked onto a Moore FSM. In reality the entire issue is about how you analyze the FSM, not really the implementation. Moore explained how to minimize the FSM using a directed graph as the model, where outputs are defined by the state. Mealy extended this to FSMs where the outputs are also functions of inputs, and the directed graph has outputs indicated on the state transitions rather than the states.

What is really interesting is that Mealy's paper uses a type of sequential logic that can't change state other than on "clock" edges. The logic value is represented by the presence or absence of a clock pulse. So there are no inputs that change state other than timed with the clock!


> Note, I don't advocate structuring software that way, but I would consider
> it in an embedded system where the latency between the input occurring and
> the output changing needs to be minimised.
>
> (In such a situation I'd prefer to have a system without the indeterminacy
> of interrupts, by dedicating /a/ core to waiting for the input to occur.
> Yes, that's the xCORE and the software is in xC, so the IDE guarantees
> timing without sucking-it-and-seeing what execution times might turn out to
> be :) )

You have a very negative view of CPUs. I design CPUs that have a one-clock-cycle latency for interrupt handling. I don't need to dedicate a large hunk of silicon and software to handling an interrupt.

--

Rick C.

+ Get 1,000 miles of free Supercharging
+ Tesla referral code - https://ts.la/richard11209
 
On 15/08/19 16:23, jlarkin@highlandsniptechnology.com wrote:
> On Thu, 15 Aug 2019 15:36:27 +0100, Tom Gardner
> <spamjunk@blueyonder.co.uk> wrote:
>> [...]
>>
>> Synchronised to what, exactly?
>
> At the beginning of the state machine execution, hardware or software
> FSM, all the inputs should be latched simultaneously, before the state
> logic goes to work.

True for hardware, but that's not what you do in software.


> In both hardware and software machines, that
> prevents creating illegal states because of time skews on inputs.

In software you achieve the same end by serialisation. You take
an input event, process it to completion, then loop and look at
the next input event (or wait until one is available).



> If a software FSM runs with interrupts blocked (and maybe hardware
> interfaces frozen) that's not a problem, but it's still good to
> aggregate and latch all inputs in a visible place before computing the
> next state.

Blocked interrupts are bad news, of course. The best
approach is for an interrupt routine to grab the
input, stuff it in a software fifo, and return.

The main software loop looks at the fifo output,
sucks out events that have occurred, and processes
them.

The sole requirement is that the fifos are threadsafe.
The event processing is bog-standard procedural,
sequentially executed code.
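A minimal C sketch of that ISR-to-main-loop fifo, assuming a
single-core MCU where the ISR is the only writer and the main loop the
only reader (the classic way to make such a fifo threadsafe without
locks; some architectures additionally need memory barriers). All names
are hypothetical.

  #include <stdbool.h>
  #include <stdint.h>

  #define FIFO_SIZE 16u                  /* power of two */

  static volatile uint8_t buf[FIFO_SIZE];
  static volatile uint8_t head, tail;    /* ISR writes head, main reads tail */

  /* Called from the interrupt routine: grab the input and return. */
  void fifo_put(uint8_t ev)
  {
      uint8_t h = head;
      if ((uint8_t)(h - tail) < FIFO_SIZE) {  /* drop the event if full */
          buf[h % FIFO_SIZE] = ev;
          head = (uint8_t)(h + 1);            /* publish after the write */
      }
  }

  /* Called from the main loop: take one event if available. */
  bool fifo_get(uint8_t *ev)
  {
      uint8_t t = tail;
      if (t == head)
          return false;                       /* empty */
      *ev = buf[t % FIFO_SIZE];
      tail = (uint8_t)(t + 1);
      return true;
  }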


> A software state machine is still made out of procedural,
> sequentially-executed code!
>
>> I think you mean "serialised", which is entirely different
>> and easily achieved by putting events in a queue.
>
> I meant that inputs should be atomically latched in one place at the
> start of each execution/clock. "Events in a queue" is another family
> of hazards.

No, at the level of the FSM, it isn't the same. The
hazards associated with multi-threaded queues are very
well constrained and localised (think RTOS). The
solutions are common to any FSM that uses the event
queue technique.

BTW, none of that is novel. It was common in the 70s,
and no doubt earlier.


> An FPGA full of synchronous logic, with a common clock, and that does
> the state logic in a single clock, satisfies the stable-setup
> requirement automatically. Of course off-chip signals, or inputs from
> another clock domain, must be synchronized before use.
>
> One software (or even hardware) trick is to sample the inputs twice
> and make sure nothing has changed. If it has, try again. Sometimes
> just taking a third sample set is safe.

Ugh. It might be the best available technique for hardware,
but it certainly isn't for software!
 
On 15/08/19 18:07, Rick C wrote:
> On Thursday, August 15, 2019 at 8:41:13 AM UTC-4, Tom Gardner wrote:
>> [...]
>>
>> So far that is effectively a Moore machine: events and outputs change at
>> the "same" time, not independently.
>
> I'm not sure that is what defines a Moore machine. You use the term "events"
> for the input changes. A Mealy machine changes outputs based on state and
> inputs, so outputs can change from input changes even if the state has not
> changed.

Not quite.

In a hardware Mealy machine the inputs can change outputs
even when there is no possibility that the state has
changed - because there has been no clock pulse.

The software equivalent is inputs changing outputs even
though the next_state = whatever statement has not been
executed.

Or, to put it another way: in a Mealy machine, the outputs
can change "between" clock pulses (hardware) or between
evaluations of what the next state should be (software).



> The distinction is made based on which information is used to define the
> outputs. If the inputs as well as the state are used, it's supposed to be a
> Mealy FSM, while a Moore FSM's outputs depend only on the state. In software
> the whole thing is a bit muddled because there are no registers and no clock,
> but you can still tell what the inputs are.

In this context, the software equivalent of a clock is
evaluating next_state = somethingOrOther.

Now in software the next_state re-evaluation (usually)
does not occur at a regular interval, but whenever an input
event occurs. That's like a hardware asynchronous FSM.

Hardware asynchronous FSMs have a real problem, unless
you make an unrealistic assumption that inputs never
change at the same time.

Software has the strong advantage that one input change
is fully evaluated before the next is "received". That is
/serialisation/ of event processing, and it circumvents that
problem with hardware async FSMs.


>> Now it is possible to conceive of a Mealy software FSM: [...]
>>
>> There the output can change independently of the FSM state, i.e.
>> effectively a Mealy FSM.
>
> Where I have trouble is seeing that this is a Mealy FSM rather than just
> some extra logic tacked onto a Moore FSM.

It depends on how the extra logic behaves in the absence
of a clock, i.e. "between clocks".


> In reality the entire issue is about how you analyze the FSM, not really
> the implementation. Moore explained how to minimize the FSM using a
> directed graph as the model, where outputs are defined by the state. Mealy
> extended this to FSMs where the outputs are also functions of inputs, and
> the directed graph has outputs indicated on the state transitions rather
> than the states.
>
> What is really interesting is that Mealy's paper uses a type of sequential
> logic that can't change state other than on "clock" edges. The logic value
> is represented by the presence or absence of a clock pulse. So there are no
> inputs that change state other than timed with the clock!

In a Mealy machine the state can only change on clock edges
(just like a Moore machine), but outputs can change at any
time that inputs change.


>> Note, I don't advocate structuring software that way, but I would consider
>> it in an embedded system where the latency between the input occurring and
>> the output changing needs to be minimised.
>>
>> (In such a situation I'd prefer to have a system without the indeterminacy
>> of interrupts, by dedicating /a/ core to waiting for the input to occur.
>> Yes, that's the xCORE and the software is in xC, so the IDE guarantees
>> timing without sucking-it-and-seeing what execution times might turn out to
>> be :) )
>
> You have a very negative view of CPUs. I design CPUs that have a one-clock-
> cycle latency for interrupt handling. I don't need to dedicate a large hunk
> of silicon and software to handling an interrupt.

Correct, you don't need to dedicate a large hunk of
silicon to handling an interrupt.

The interesting case is how you handle multiple
inputs and significant processing simultaneously
while ensuring that timing of one process is
completely unaffected by the activities of other
processes.

In your processor, if 8 unrelated inputs occur
within one clock cycle, does the processing of
all 8 inputs start within the same clock cycle?
Does the processing of all inputs proceed
simultaneously, in parallel?

The only way I can see that happening is if you
have 8 (or more!) cores.
 
In article <5cualel5fad20fhba2j6438tjercc5j95o@4ax.com>,
<jlarkin@highlandsniptechnology.com> wrote:

> I have seen hardware guys screw up state machines!
>
> One dangerous concept is "it can't get into that state so I don't have
> to worry about it."

Yeah. That's where I like to ask "OK, which portion of your personal
bodily anatomy do you want to leave as a security deposit for that
situation?"

Glitches are sneaky, as are race conditions. I much prefer an FSM
which is guaranteed to get itself back to a sane state under all
conditions.
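One common way to get that guarantee in code is a defensive default
arm, sketched minimally below (state names hypothetical): any unknown
or corrupted state value forces the machine back to a known-safe state
instead of being assumed impossible.

  typedef enum { S_SAFE, S_ARMED, S_FIRING } state_t;

  static state_t state = S_SAFE;

  void fsm_step(void)
  {
      switch (state) {
      case S_SAFE:   /* ... normal transitions elided ... */ break;
      case S_ARMED:  /* ... */                               break;
      case S_FIRING: /* ... */                               break;
      default:
          /* "Impossible" state (glitch, radiation, corruption):
             recover to a sane state rather than worry later. */
          state = S_SAFE;
          break;
      }
  }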
 
On 15/08/19 15:59, Jeroen Belleman wrote:
> On 2019-08-15 14:41, Tom Gardner wrote:
>> [...]
>
> We live in different worlds. I'm a hardware guy. You are
> evidently a software guy. The whole computer is a giant
> Moore machine. Of course you can use it to simulate an
> (asynchronous!) Mealy machine, but only if you don't look
> too closely.

Nope, I'm not a "software guy". I've done everything
from low-noise analogue electronics, through semi-custom
and FPGA digital, through RTOS, through web-backend
order fulfilment systems, through RF propagation, through
telecoms billing, through... You get the idea.

The boundary between hardware and software is very grey
and fluid.

I noted the key points of Moore and Mealy machines, and
gave an illustration of how both can be coded in software.
Why don't you comment on that?

The "whole computer is a Moore machine" is one of those
statements that generates no light whatsoever.
 
On Thu, 15 Aug 2019 19:33:08 +0100, Tom Gardner
<spamjunk@blueyonder.co.uk> wrote:

> On 15/08/19 16:23, jlarkin@highlandsniptechnology.com wrote:
>> [...]
>>
>> At the beginning of the state machine execution, hardware or software
>> FSM, all the inputs should be latched simultaneously, before the state
>> logic goes to work.

> True for hardware, but that's not what you do in software.

It's what *I* do in software.

>> In both hardware and software machines, that
>> prevents creating illegal states because of time skews on inputs.
>
> In software you achieve the same end by serialisation. You take
> an input event, process it to completion, then loop and look at
> the next input event (or wait until one is available).

Sounds like an FSM with one state.


>> If a software FSM runs with interrupts blocked (and maybe hardware
>> interfaces frozen) that's not a problem, but it's still good to
>> aggregate and latch all inputs in a visible place before computing the
>> next state.
>
> Blocked interrupts are bad news, of course. The best approach is for
> an interrupt routine to grab the input, stuff it in a software fifo,
> and return.
>
> The main software loop looks at the fifo output, sucks out events
> that have occurred, and processes them.
>
> The sole requirement is that the fifos are threadsafe. The event
> processing is bog-standard procedural, sequentially executed code.


>> A software state machine is still made out of procedural,
>> sequentially-executed code!
>
>>> I think you mean "serialised", which is entirely different
>>> and easily achieved by putting events in a queue.
>
>> I meant that inputs should be atomically latched in one place at the
>> start of each execution/clock. "Events in a queue" is another family
>> of hazards.
>
> No, at the level of the FSM, it isn't the same. The
> hazards associated with multi-threaded queues are very
> well constrained and localised (think RTOS). The
> solutions are common to any FSM that uses the event
> queue technique.

We generally use a state-machine design instead of an RTOS for
deep-embedded controllers. We don't queue events, or multi-thread.

I've written three RTOSes, but an FSM is usually better, unless you
need sustained procedural bits, like printing long reports.
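A minimal C sketch of that RTOS-less style under the latch-then-step
discipline described above (all names hypothetical): a super-loop
latches the inputs once, in one visible place, then runs each state
machine one step, never blocking.

  #include <stdint.h>

  extern uint8_t read_all_inputs(void);       /* hypothetical atomic snapshot */
  extern void motor_fsm_step(uint8_t inputs); /* hypothetical per-subsystem FSMs */
  extern void comms_fsm_step(uint8_t inputs);

  int main(void)
  {
      for (;;) {                    /* the "clock": one pass per iteration */
          /* Latch all inputs once before any state logic runs,
             so there is no skew between the two machines. */
          uint8_t inputs = read_all_inputs();

          motor_fsm_step(inputs);   /* each FSM runs to completion: */
          comms_fsm_step(inputs);   /* no RTOS, no blocking, no threads */
      }
  }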

> BTW, none of that is novel. It was common in the 70s,
> and no doubt earlier.


>> An FPGA full of synchronous logic, with a common clock, and that does
>> the state logic in a single clock, satisfies the stable-setup
>> requirement automatically. Of course off-chip signals, or inputs from
>> another clock domain, must be synchronized before use.
>>
>> One software (or even hardware) trick is to sample the inputs twice
>> and make sure nothing has changed. If it has, try again. Sometimes
>> just taking a third sample set is safe.
>
> Ugh. It might be the best available technique for hardware,
> but it certainly isn't for software!

I've done that in software too, when an external set of inputs could
occasionally have ugly transient states, like for instance both
contacts of an SPDT switch apparently closed, or noise spikes possible
on sensors. It works fine sometimes.

The real virtue of an FSM implementation is not the hardware or the
code; it's that it forces the engineer into taking a methodical
approach, documenting it coherently, and thinking about all possible
system states. Hairballs don't do that, and most code is hairballs.

FPGA tools usually support state machines explicitly. Computer
languages don't.
 
On 2019-08-15 18:29, Tauno Voipio wrote:
On 15.8.19 18:29, jlarkin@highlandsniptechnology.com wrote:
On Thu, 15 Aug 2019 16:59:48 +0200, Jeroen Belleman
jeroen@nospam.please> wrote:

On 2019-08-15 14:41, Tom Gardner wrote:
On 15/08/19 12:10, Jeroen Belleman wrote:
On 2019-08-14 23:38, Rick C wrote:
[...]

Anyone here who even thinks about FSM in terms of Mealy and Moore?


I like to design in simple Mealy FSMs from time to time.
Many people see them as 'hairball logic' because they don't
understand them. In my view, there is no such thing as a
Mealy FSM in software, period. Everything is necessarily
synchronized to the processor clock.

The latter is not a /useful/ distinction.

The nearest analogy to an event-driven software FSM is
an *asynchronous* hardware FSM where it is guaranteed
that only one input can change simultaneously. That
guarantee is a result of the software structure.

[...]

We live in different worlds. I'm a hardware guy. You are
evidently a software guy. The whole computer is a giant
Moore machine. Of course you can use it to simulate an
(asynchronous!) Mealy machine, but only if you don't look
too closely.

Jeroen Belleman

I have seen hardware guys screw up state machines!

One dangerous concept is "it can't get into that state so I don't have
to worry about it."


Yes. Even if it cannot get into the state, there must be a way out.

I'm only too well aware of that. Radiation can do strange things to
logic!

Jeroen Belleman
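
The "way out" rule is cheap to honor in code. A sketch in C (state
names invented here): give the transition function a default arm
that routes any unexpected state value, however it arose, into a
defined recovery state.

    enum state { S_IDLE, S_RUN, S_FAULT };

    static enum state step(enum state s, uint8_t go)
    {
        switch (s) {
        case S_IDLE:  return go ? S_RUN : S_IDLE;
        case S_RUN:   return go ? S_RUN : S_IDLE;
        case S_FAULT: return S_IDLE;   /* even the fault state exits */
        default:      return S_FAULT;  /* "can't happen" states, e.g.
                                          a bit-flipped state variable,
                                          still get a defined way out */
        }
    }

The hardware analogue is making every unused state encoding decode
to a transition into a reset state.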
 
On Thursday, August 15, 2019 at 2:33:13 PM UTC-4, Tom Gardner wrote:
On 15/08/19 16:23, jlarkin@highlandsniptechnology.com wrote:
On Thu, 15 Aug 2019 15:36:27 +0100, Tom Gardner
<spamjunk@blueyonder.co.uk> wrote:

On 15/08/19 15:20, jlarkin@highlandsniptechnology.com wrote:
On Thu, 15 Aug 2019 13:10:39 +0200, Jeroen Belleman
<jeroen@nospam.please> wrote:

On 2019-08-14 23:38, Rick C wrote:
[...]

Anyone here who even thinks about FSM in terms of Mealy and Moore?


I like to design in simple Mealy FSMs from time to time.
Many people see them as 'hairball logic' because they don't
understand them. In my view, there is no such thing as a
Mealy FSM in software, period. Everything is necessarily
synchronized to the processor clock.

Jeroen Belleman

We do lots of state machines in FPGAs and in software.

The advantage of using a formal FSM in software is that it centralizes
the logic in a way that looks good on a whiteboard photo, and forces
the programmer to account for all possible states, which seems to be a
novelty in most code.

A state machine is the exact opposite of hairball code.

Yes indeed.

It pains me that softies tend to think of FSMs as "something
to do with parsing in compilers".


A careful software FSM design can eliminate the need for an RTOS and
semaphores and blocks and queues and other complicated dangerous
things.

Not quite. You still need semaphores and queues to get
events to the FSM. However that is very contained/localised,
and several orders of magnitude more likely to work than
throwing together a few semaphores.

You will also frequently benefit from a few design
patterns, especially the half-sync/half-async pattern.


The ultimate state machine has an N-bit state word and a lookup
table in RAM or ROM (or a dimensioned integer array), 2^(N+I)
entries of next states and outputs. That prevents the "next state"
logic from becoming its own hairball.

Of course the I inputs must be synchronized.
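
A C sketch of that table machine (N, I and the entry packing are
illustrative):

    #include <stdint.h>

    #define N 3                           /* state bits */
    #define I 2                           /* input bits */
    #define TABLE_SIZE (1u << (N + I))    /* 2^(N+I) entries */

    /* Each entry packs the next state in the low N bits and the
       outputs above them. Filled in from the state diagram; every
       one of the 2^(N+I) state/input combinations must be assigned,
       which is exactly what keeps the next-state logic from
       hairballing. */
    static const uint8_t table[TABLE_SIZE] = { 0 /* ... */ };

    static uint8_t state;                 /* N-bit state word */

    uint8_t fsm_clock(uint8_t inputs)     /* one "clock edge" */
    {
        uint8_t entry = table[(state << I) |
                              (inputs & ((1u << I) - 1u))];
        state = entry & ((1u << N) - 1u); /* next state */
        return (uint8_t)(entry >> N);     /* outputs */
    }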

Synchronised to what, exactly?

At the beginning of the state machine execution, hardware or software
FSM, all the inputs should be latched simultaneously, before the state
logic goes to work.

True for hardware, but that's not what you do in software.

That is exactly true for software. From the perspective of the algorithm, there cannot be a change in inputs while the code is processing. The inputs have been read and handed to the code; at that point they are all stable, as if they were registered.


In both hardware and software machines, that
prevents creating illegal states because of time skews on inputs.

In software you achieve the same end by serialisation. You take
an input event, process it to completion, then loop and look at
the next input event (or wait until one is available).

I think you are abstracting too much. Most of the code I write doesn't have "events". The inputs are typically sampled and captured for the code to process much like registered logic.


If a
software FSM runs with interrupts blocked (and maybe hardware
interfaces frozen) that's not a problem, but it's still good to
aggregate and latch all inputs in a visible place before computing the
next state.

Blocked interrupts are bad news, of course. Best
approach is for an interrupt routine to grab the
input, stuff it in a software fifo, and return.

They are only bad if your interrupt response time is not adequate. I've seen code written that locks out interrupts for many milliseconds. It worked fine because other interrupts could wait that long.


The main software loop looks at the fifo output, sucks out
the events that have occurred, and processes them.

You seem to be stuck in a single model of processing. Not all software uses events or queues. Often the software periodically polls the I/O and processes what it sees, much like clocked sequential logic.
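
A sketch of that polled style in C (tick_elapsed, read_inputs and
friends are placeholders for the target's actual I/O):

    #include <stdint.h>
    #include <stdbool.h>

    extern bool tick_elapsed(void);      /* placeholder: timer flag */
    extern uint8_t read_inputs(void);    /* placeholder: sample pins */
    extern void write_outputs(uint8_t);  /* placeholder: drive pins */
    extern uint8_t fsm_step(uint8_t in); /* the state logic */

    void control_loop(void)
    {
        for (;;) {
            if (!tick_elapsed())
                continue;                /* wait for the next "edge" */
            uint8_t in  = read_inputs(); /* sampled once per pass */
            uint8_t out = fsm_step(in);  /* can't change mid-step */
            write_outputs(out);          /* updates once per pass,
                                            like registered logic */
        }
    }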


[...]

Ugh. It might be the best available technique for hardware,
but it certainly isn't for software!

If you want to work with overly complex, large code, fine. I try to work with simpler code that I can verify works correctly by inspection if at all possible.

--

Rick C.

+- Get 1,000 miles of free Supercharging
+- Tesla referral code - https://ts.la/richard11209
 
