Mealy vs. Moore FSM

On Thursday, August 15, 2019 at 2:20:08 PM UTC-4, Tom Gardner wrote:
On 15/08/19 18:07, Rick C wrote:
On Thursday, August 15, 2019 at 8:41:13 AM UTC-4, Tom Gardner wrote:
On 15/08/19 12:10, Jeroen Belleman wrote:
On 2019-08-14 23:38, Rick C wrote:
I was in another discussion on FSM because of a research paper that
used the two terms. I've never liked the Mealy machine because the
state diagram (a directed graph) doesn't properly describe the
asynchronous nature of the outputs. It shows an output changing as
part of the state transition, but in reality the output is a
combinational circuit of inputs and state which can change without a
corresponding change in state.

In the real world nearly all logic is actually sequential. If the
inputs to a FSM are async, then the machine can go haywire if the
inputs change too close to the clock edge. A race condition in the
circuit can cause some state FFs to change and others not resulting in
wrong states at best and invalid states at worst. So async inputs are
always buffered through FFs to prevent this, making them synchronous.

The result is virtually all FSMs are actually Moore or a combination of
the two where the outputs are a registered version of what you would
get from a Mealy FSM.

When done in software, virtually every design is Moore since there is
no opportunity for outputs to change other than when the code is run
(equivalent to a clock edge). There is no such thing as a Mealy FSM in
software without greatly complicating your code.

Anyone here who even thinks about FSM in terms of Mealy and Moore?


I like to design in simple Mealy FSMs from time to time. Many people see
them as 'hairball logic' because they don't understand them. In my view,
there is no such thing as a Mealy FSM in software, period. Everything is
necessarily synchronized to the processor clock.

That latter is not a /useful/ distinction.

The nearest analogy to an event-driven software FSM is an *asynchronous*
hardware FSM where it is guaranteed that only one input can change
simultaneously. That guarantee is a result of the software structure -
event arrives, possibly from a message coming up the network stack,
possibly from an interrupt routine - event is put in a queue, length >=1 -
at a convenient time the event is taken from the input queue - the event is
fully processed (outputs, next state) before looping back to take the next
event

There is no system clock at the FSM level; events are processed when
available and convenient.

So far that is effectively a Moore machine: events and outputs change at
the "same" time, not independently.

I'm not sure that is what defines a Moore machine. You use the term "events"
for the input changes. A Mealy machine changes outputs based on state and
inputs, so outputs can change from input changes even if the state has not
changed.

Not quite.

In a hardware Mealy machine the inputs can change outputs
even when there is no possibility that the state has
changed - because there has been no clock pulse.

That is the point at issue. Wikipedia defines a Mealy FSM that way, but Mealy didn't define it so. As I mentioned, the hardware model Mealy used was not capable of having inputs changing separately from state changes (assuming there was a state change indicated on the diagram) because it was pulse based, not level based. EVERY signal was a timing pulse.

I prefer to consider the Mealy FSM described by Mealy, not Wikipedia.


The software equivalent is inputs changing outputs even
though the next_state = whatever statement has not been
executed.

Or, to put it another way, in a Mealy machine, the outputs
can change "between" clock pulses (hardware) or between
evaluating what the next state should be (software)

This was not from Mealy's paper.


The distinction is made based on which information is used to define the
outputs. If the inputs as well as the state are used, it's supposed to be a
Mealy FSM, while a Moore FSM's outputs depend only on the state. In software
the whole thing is a bit muddled because there are no registers and no clock,
but you can still tell what inputs are.

In this context, the software equivalent of a clock is
evaluating next_state = somethingOrOther.

Now in software the next_state re-evaluation (usually)
does not occur at a regular interval, but whenever an input
event occurs. That's like a hardware asynchronous FSM.

Hardware asynchronous FSMs have a real problem, unless
you make an unrealistic assumption that inputs never
change at the same time.

Software has the strong advantage that one input change
is fully evaluated before the next is "received". That is
/serialisation/ of event processing, and circumvents that
problem with hardware async FSMs.

It's not at all important to receive only one input change at the same time. In fact, if you read the papers, they don't talk about input signals changing individually, they talk about the input signals combining to form an input "alphabet" much like numbers or an ASCII code. So it is anticipated to have multiple inputs change at the same time. In hardware this can lead to race conditions if async logic is used, but that's an implementation issue, not a FSM issue.


Now it is possible to conceive of a Mealy software FSM: - have the
structure outlined above for the "main" FSM - when an input changes it
generates a hardware interrupt - in the interrupt routine, have something
like if (current_state=X) then output = foo; if (current_state=Y) then
output = baz;

There the output can change independently of the FSM state, i.e.
effectively a Mealy FSM.
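
As a sketch, and only a sketch (hypothetical names; assumes the ISR is
handed the new input level somehow):

#include <atomic>

enum State { X, Y, Z };                    // invented states
std::atomic<State> current_state{X};       // advanced only by the main loop
std::atomic<int>   output{0};

// Hypothetical pin-change interrupt: the output is recomputed from the
// current state and the new input level, but next_state is NOT evaluated
// here; that still happens only in the main event loop.
void input_changed_isr(int input)
{
    if (current_state == X)      output = input ? 1 : 0;   // "foo"
    else if (current_state == Y) output = input ? 2 : 3;   // "baz"
}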

Where I have trouble is seeing that this is a Mealy FSM rather than just some
extra logic tacked onto a Moore FSM.

It depends on how the extra logic behaves in the absence
of a clock, i.e. "between clocks"

By "logic" I mean combinational logic, not sequential.


In reality the entire issue is about
how you analyze the FSM, not really the implementation. Moore explained how
to minimize the FSM using a directed graph as the model where outputs are
defined by the state. Mealy extended this to FSM where the outputs are also
functions of inputs and the directed graph has outputs indicated on the state
transitions rather than the states.

What is really interesting is that Mealy's paper uses a type of sequential
logic that can't change state other than on "clock" edges. The logic value
is represented by the presence or absence of a clock pulse. So there are no
inputs that change state other than timed with the clock!

In a Mealy machine the state can only change on clock edges
(just like a Moore machine), but outputs can change at any
time that inputs change.

Except as I've explained, that isn't what Mealy talked about. His hardware was not capable of such changes since the signals were pulses, i.e. clocks.


Note, I don't advocate structuring software that way, but I would consider
it in an embedded system where the latency between the input occurring and
the output changing needs to be minimised.

(In such a situation I'd prefer to have a system without the indeterminacy
of interrupts, by dedicating /a/ core to waiting for the input to occur.
Yes, that's the xCORE and the software is in xC, so the IDE guarantees
timing without sucking-it-and-seeing what execution times might turn out to
be :) )

You have a very negative view of CPUs. I design CPUs that have a one clock
cycle latency for interrupt handling. I don't need to dedicate a large hunk
of silicon and software to handling an interrupt.

Correct, you don't need to dedicate a large hunk of
silicon to handling an interrupt.

The interesting case is how you handle multiple
inputs and significant processing simultaneously
while ensuring that timing of one process is
completely unaffected by the activities of other
processes.

In your processor, if 8 unrelated inputs occur
within one clock cycle, does the processing of
all 8 inputs start within the same clock cycle?
Does the processing of all inputs proceed
simultaneously in parallel?

Which processor would this be exactly? I've designed many and used them, more than one at a time. That is, I design to suit the problem.


The only way I can see that happening is if you
have 8 (or more!) cores.

Or structure your solution to suit the problem. Do all 8 inputs need to be handled in any specific way? You need to define the problem in detail before a solution can be suggested.

--

Rick C.

 
On 15/08/19 21:49, Rick C wrote:
On Thursday, August 15, 2019 at 2:20:08 PM UTC-4, Tom Gardner wrote:
On 15/08/19 18:07, Rick C wrote:
On Thursday, August 15, 2019 at 8:41:13 AM UTC-4, Tom Gardner wrote:
On 15/08/19 12:10, Jeroen Belleman wrote:
On 2019-08-14 23:38, Rick C wrote:
I was in another discussion on FSM because of a research paper
that used the two terms. I've never liked the Mealy machine
because the state diagram (a directed graph) doesn't properly
describe the asynchronous nature of the outputs. It shows an
output changing as part of the state transition, but in reality the
output is a combinational circuit of inputs and state which can
change without a corresponding change in state.

In the real world nearly all logic is actually sequential. If the
inputs to a FSM are async, then the machine can go haywire if the
inputs change too close to the clock edge. A race condition in
the circuit can cause some state FFs to change and others not
resulting in wrong states at best and invalid states at worst. So
async inputs are always buffered through FFs to prevent this, making
them synchronous.

The result is virtually all FSMs are actually Moore or a
combination of the two where the outputs are a registered version
of what you would get from a Mealy FSM.

When done in software, virtually every design is Moore since there
is no opportunity for outputs to change other than when the code is
run (equivalent to a clock edge). There is no such thing as a
Mealy FSM in software without greatly complicating your code.

Anyone here who even thinks about FSM in terms of Mealy and Moore?


I like to design in simple Mealy FSMs from time to time. Many people
see them as 'hairball logic' because they don't understand them. In
my view, there is no such thing as a Mealy FSM in software, period.
Everything is necessarily synchronized to the processor clock.

That latter is not a /useful/ distinction.

The nearest analogy to an event-driven software FSM is an
*asynchronous* hardware FSM where it is guaranteed that only one input
can change simultaneously. That guarantee is a result of the software
structure - event arrives, possibly from a message coming up the
network stack, possibly from an interrupt routine - event is put in a
queue, length >=1 - at a convenient time the event is taken from the
input queue - the event is fully processed (outputs, next state) before
looping back to take the next event

There is no system clock at the FSM level; events are processed when
available and convenient.

So far that is effectively a Moore machine: events and outputs change
at the "same" time, not independently.

I'm not sure that is what defines a Moore machine. You use the term
"events" for the input changes. A Mealy machine changes outputs based on
state and inputs, so outputs can change from input changes even if the
state has not changed.

Not quite.

In a hardware Mealy machine the inputs can change outputs even when there
is no possibility that the state has changed - because there has been no
clock pulse.

That is the point at issue. Wikipedia defines a Mealy FSM that way, but
Mealy didn't define it so. As I mentioned, the hardware model Mealy used was
not capable of having inputs changing separately from state changes (assuming
there was a state change indicated on the diagram) because it was pulse
based, not level based. EVERY signal was a timing pulse.

You are the only person I have heard who thinks that way.

That's not what I was taught at university. I've just
checked a textbook from 1980 which states "The basic
distinction of a Mealy machine is that the outputs to
the outside world are a function of two sets of
variables: (1) the present input condition and (2) the
present state of the machine"


> I prefer to consider the Mealy FSM described by Mealy, not Wikipedia.

My university course and that textbook predated Wackypedia :)

Everybody else disagrees with your definition.


The software equivalent is inputs changing outputs even though the
next_state = whatever statement has not been executed.

Or, to put it another way, in a Mealy machine, the outputs can change
"between" clock pulses (hardware) or between evaluating what the next state
should be (software)

This was not from Mealy's paper.

Shrug.


The distinction is made based on which information is used to define the
outputs. If the inputs as well as the state are used, it's supposed to be
a Mealy FSM, while a Moore FSM's outputs depend only on the state. In
software the whole thing is a bit muddled because there are no registers
and no clock, but you can still tell what inputs are.

In this context, the software equivalent of a clock is evaluating
next_state = somethingOrOther.

Now in software the next_state re-evaluation (usually) does not occur at a
regular interval, but whenever an input event occurs. That's like a
hardware asynchronous FSM.

Hardware asynchronous FSMs have a real problem, unless you make an
unrealistic assumption that inputs never change at the same time.

Software has the strong advantage that one input change is fully evaluated
before the next is "received". That is /serialisation/ of event processing,
and circumvents that problem with hardware async FSMs.

It's not at all important to receive only one input change at the same time.
In fact, if you read the papers, they don't talk about input signals changing
individually, they talk about the input signals combining to form an input
"alphabet" much like numbers or an ASCII code. So it is anticipated to have
multiple inputs change at the same time. In hardware this can lead to race
conditions if async logic is used, but that's an implementation issue, not a
FSM issue.

Async hardware FSM synthesis explicitly excludes
multiple inputs changing simultaneously, because
the trajectory through the state space becomes
impossible to predict and analyse.

Well, I ought to exclude the very simple async hardware
FSMs that are the component parts used to construct
sync hardware FSMs.



Now it is possible to conceive of a Mealy software FSM: - have the
structure outlined above for the "main" FSM - when an input changes it
generates a hardware interrupt - in the interrupt routine, have
something like if (current_state=X) then output = foo; if
(current_state=Y) then output = baz;

There the output can change independently of the FSM state, i.e.
effectively a Mealy FSM.

Where I have trouble is seeing that this is a Mealy FSM rather than just
some extra logic tacked onto a Moore FSM.

It depends on how the extra logic behaves in the absence of a clock,
i.e. "between clocks"

By "logic" I mean combinational logic, not sequential.

Agreed.


In reality the entire issue is about how you analyze the FSM, not really
the implementation. Moore explained how to minimize the FSM using a
directed graph as the model where outputs are defined by the state.
Mealy extended this to FSM where the outputs are also functions of inputs
and the directed graph has outputs indicated on the state transitions
rather than the states.

What is really interesting is that Mealy's paper uses a type of
sequential logic that can't change state other than on "clock" edges.
The logic value is represented by the presence or absence of a clock
pulse. So there are no inputs that change state other than timed with
the clock!

In a Mealy machine the state can only change on clock edges (just like a
Moore machine), but outputs can change at any time that inputs change.

Except as I've explained, that isn't what Mealy talked about. His hardware
was not capable of such changes since the signals were pulses, i.e. clocks.

Your definition contradicts all definitions I've heard over the decades.


Note, I don't advocate structuring software that way, but I would
consider it in an embedded system where the latency between the input
occurring and the output changing needs to be minimised.

(In such a situation I'd prefer to have a system without the
indeterminacy of interrupts, by dedicating /a/ core to waiting for the
input to occur. Yes, that's the xCORE and the software is in xC, so the
IDE guarantees timing without sucking-it-and-seeing what execution
times might turn out to be :) )

You have a very negative view of CPUs. I design CPUs that have a one
clock cycle latency for interrupt handling. I don't need to dedicate a
large hunk of silicon and software to handling an interrupt.

Correct, you don't need to dedicate a large hunk of silicon to handling an
interrupt.

The interesting case is how you handle multiple inputs and significant
processing simultaneously while ensuring that timing of one process is
completely unaffected by the activities of other processes.

In your processor, if 8 unrelated inputs occur within one clock cycle, does
the processing of all 8 inputs start within the same clock cycle? Does the
processing of all inputs proceed simultaneously in parallel?

Which processor would this be exactly? I've designed many and used them,
more than one at a time. That is, I design to suit the problem.

So, have you designed a processor that performed as
I mentioned above?


The only way I can see that happening is if you have 8 (or more!) cores.

Or structure your solution to suit the problem. Do all 8 inputs need to be
handled in any specific way? You need to define the problem in detail before
a solution can be suggested.

I'm not interested in single purpose problems
and processors.

I'm interested in generic processors and techniques
that can be used for many problems.

In this case I expect that 8 (to pick a number >>1)
inputs can be recognised and processing started within
a single clock cycle, and processing for all inputs
proceeds simultaneously. Thus if, to pick simple
numbers, processing each input takes 1us and the
processor clock is 10ns, then I expect all processing
to be complete 1010ns after all 8 inputs changed.

Obviously hardware has no problems with that, but
traditional processors and software simply cannot
achieve it.
 
On Thursday, August 15, 2019 at 7:04:15 PM UTC-4, Tom Gardner wrote:
On 15/08/19 21:49, Rick C wrote:
On Thursday, August 15, 2019 at 2:20:08 PM UTC-4, Tom Gardner wrote:
On 15/08/19 18:07, Rick C wrote:
On Thursday, August 15, 2019 at 8:41:13 AM UTC-4, Tom Gardner wrote:
On 15/08/19 12:10, Jeroen Belleman wrote:
On 2019-08-14 23:38, Rick C wrote:
I was in another discussion on FSM because of a research paper
that used the two terms. I've never liked the Mealy machine
because the state diagram (a directed graph) doesn't properly
describe the asynchronous nature of the outputs. It shows an
output changing as part of the state transition, but in reality the
output is a combinational circuit of inputs and state which can
change without a corresponding change in state.

In the real world nearly all logic is actually sequential. If the
inputs to a FSM are async, then the machine can go haywire if the
inputs change too close to the clock edge. A race condition in
the circuit can cause some state FFs to change and others not
resulting in wrong states at best and invalid states at worst. So
async inputs are always buffered through FFs to prevent this, making
them synchronous.

The result is virtually all FSMs are actually Moore or a
combination of the two where the outputs are a registered version
of what you would get from a Mealy FSM.

When done in software, virtually every design is Moore since there
is no opportunity for outputs to change other than when the code is
run (equivalent to a clock edge). There is no such thing as a
Mealy FSM in software without greatly complicating your code.

Anyone here who even thinks about FSM in terms of Mealy and Moore?


I like to design in simple Mealy FSMs from time to time. Many people
see them as 'hairball logic' because they don't understand them. In
my view, there is no such thing as a Mealy FSM in software, period.
Everything is necessarily synchronized to the processor clock.

That latter is not a /useful/ distinction.

The nearest analogy to an event-driven software FSM is an
*asynchronous* hardware FSM where it is guaranteed that only one input
can change simultaneously. That guarantee is a result of the software
structure - event arrives, possibly from a message coming up the
network stack, possibly from an interrupt routine - event is put in a
queue, length >=1 - at a convenient time the event is taken from the
input queue - the event is fully processed (outputs, next state) before
looping back to take the next event

There is no system clock at the FSM level; events are processed when
available and convenient.

So far that is effectively a Moore machine: events and outputs change
at the "same" time, not independently.

I'm not sure that is what defines a Moore machine. You use the term
"events" for the input changes. A Mealy machine changes outputs based on
state and inputs, so outputs can change from input changes even if the
state has not changed.

Not quite.

In a hardware Mealy machine the inputs can change outputs even when there
is no possibility that the state has changed - because there has been no
clock pulse.

That is the point at issue. Wikipedia defines a Mealy FSM that way, but
Mealy didn't define it so. As I mentioned, the hardware model Mealy used was
not capable of having inputs changing separately from state changes (assuming
there was a state change indicated on the diagram) because it was pulse
based, not level based. EVERY signal was a timing pulse.

You are the only person I have heard who thinks that way.

You mean other than Mealy??? So there's at least two of us. But then I expect Mealy is dead so I guess I'm the only one left. lol


That's not what I was taught at university. I've just
checked a textbook from 1980 which states "The basic
distinction of a Mealy machine is that the outputs to
the outside world are a function of two sets of
variables: (1) the present input condition and (2) the
present state of the machine"

That does not contradict anything I've said. The original paper by Mealy addresses logic that only looks at the input once per clock. When it does look at the input, it is the "current" or "present" input. Extrapolating that to FSMs where the input can change at times other than on the clock is the error everyone seems to make.

You only have to consider the state transition diagrams to see I am right. Mealy's state diagrams show outputs associated with the transitions. As I have stated before, if an input changes which could alter the outputs, but there is no state transition, there is no way to notate that in his state diagrams.

Given that Mealy's FSM hardware was not capable of inputs changing at any time that would not be a "clock edge", and that his notation was not capable of notating output changes that were not associated with state changes, it is clear that he never intended to describe FSMs that altered their outputs at any time other than state transitions.

The important point is that the output will reflect the manner in which a state was entered as well as the state itself. So the outputs have "memory" just like the state. Let an input that changes horses in midstream into the picture and you have lost the memory of how you got there.


I prefer to consider the Mealy FSM described by Mealy, not Wikipedia.

My university course and that textbook predated Wackypedia :)

Everybody else disagrees with your definition.

Except for Mealy. Hey, it's his name on the FSM type. I think we can listen to what he has to say about it. It wouldn't be the first time people mess up an idea.


The software equivalent is inputs changing outputs even though the
next_state = whatever statement has not been executed.

Or, to put it another way, in a Mealy machine, the outputs can change
"between" clock pulses (hardware) or between evaluating what the next state
should be (software)

This was not from Mealy's paper.

Shrug.

Shrug? Ok, I think we have enough history to know that if you don't want to believe something, it doesn't matter what the facts are. You just demonstrated that very clearly.


The distinction is made based on which information is used to define the
outputs. If the inputs as well as the state are used, it's supposed to be
a Mealy FSM, while a Moore FSM's outputs depend only on the state. In
software the whole thing is a bit muddled because there are no registers
and no clock, but you can still tell what inputs are.

In this context, the software equivalent of a clock is evaluating
next_state = somethingOrOther.

Now in software the next_state re-evaluation (usually) does not occur at a
regular interval, but whenever an input event occurs. That's like a
hardware asynchronous FSM.

Hardware asynchronous FSMs have a real problem, unless you make an
unrealistic assumption that inputs never change at the same time.

Software has the strong advantage that one input change is fully evaluated
before the next is "received". That is /serialisation/ of event processing,
and circumvents that problem with hardware async FSMs.

It's not at all important to receive only one input change at the same time.
In fact, if you read the papers, they don't talk about input signals changing
individually, they talk about the input signals combining to form an input
"alphabet" much like numbers or an ASCII code. So it is anticipated to have
multiple inputs change at the same time. In hardware this can lead to race
conditions if async logic is used, but that's an implementation issue, not a
FSM issue.

Async hardware FSM synthesis explicitly excludes
multiple inputs changing simultaneously, because
the trajectory through the state space becomes
impossible to predict and analyse.

Well, I ought to exclude the very simple async hardware
FSMs that are the component parts used to construct
sync hardware FSMs.



Now it is possible to conceive of a Mealy software FSM: - have the
structure outlined above for the "main" FSM - when an input changes it
generates a hardware interrupt - in the interrupt routine, have
something like if (current_state=X) then output = foo; if
(current_state=Y) then output = baz;

There the output can change independently of the FSM state, i.e.
effectively a Mealy FSM.

Where I have trouble is seeing that this is a Mealy FSM rather than just
some extra logic tacked onto a Moore FSM.

It depends on how the extra logic behaves in the absence of a clock,
i.e. "between clocks"

By "logic" I mean combinational logic, not sequential.

Agreed.


In reality the entire issue is about how you analyze the FSM, not really
the implementation. Moore explained how to minimize the FSM using a
directed graph as the model where outputs are defined by the state.
Mealy extended this to FSM where the outputs are also functions of inputs
and the directed graph has outputs indicated on the state transitions
rather than the states.

What is really interesting is that Mealy's paper uses a type of
sequential logic that can't change state other than on "clock" edges.
The logic value is represented by the presence or absence of a clock
pulse. So there are no inputs that change state other than timed with
the clock!

In a Mealy machine the state can only change on clock edges (just like a
Moore machine), but outputs can change at any time that inputs change.

Except as I've explained, that isn't what Mealy talked about. His hardware
was not capable of such changes since the signals were pulses, i.e. clocks.

Your definition contradicts all definitions I've heard over the decades.

You keep saying that as if I'd said the sky was green and grass was blue. Read Mealy's paper. Actually consider what his state diagram says. What Mealy presented was ideas on how to design state machines methodically using his state diagram. Anything else people attribute to his paper is not really there.


Note, I don't advocate structuring software that way, but I would
consider it in an embedded system where the latency between the input
occurring and the output changing needs to be minimised.

(In such a situation I'd prefer to have a system without the
indeterminacy of interrupts, by dedicating /a/ core to waiting for the
input to occur. Yes, that's the xCORE and the software is in xC, so the
IDE guarantees timing without sucking-it-and-seeing what execution
times might turn out to be :) )

You have a very negative view of CPUs. I design CPUs that have a one
clock cycle latency for interrupt handling. I don't need to dedicate a
large hunk of silicon and software to handling an interrupt.

Correct, you don't need to dedicate a large hunk of silicon to handling an
interrupt.

The interesting case is how you handle multiple inputs and significant
processing simultaneously while ensuring that timing of one process is
completely unaffected by the activities of other processes.

In your processor, if 8 unrelated inputs occur within one clock cycle, does
the processing of all 8 inputs start within the same clock cycle? Does the
processing of all inputs proceed simultaneously in parallel?

Which processor would this be exactly? I've designed many and used them,
more than one at a time. That is, I design to suit the problem.

So, have you designed a processor that performed as
I mentioned above?

I don't know. Your problem is not actually defined. Of course a processor can process all 8 inputs. They do that all the time with bytes received from UARTs for example.


The only way I can see that happening is if you have 8 (or more!) cores.

Or structure your solution to suit the problem. Do all 8 inputs need to be
handled in any specific way? You need to define the problem in detail before
a solution can be suggested.

I'm not interested in single purpose problems
and processors.

I'm interested in generic processors and techniques
that can be used for many problems.

In this case I expect that 8 (to pick a number >>1)
inputs can be recognised and processing started within
a single clock cycle, and processing for all inputs
proceeds simultaneously. Thus if, to pick simple
numbers, processing each input takes 1us and the
processor clock is 10ns, then I expect all processing
to be complete 1010ns after all 8 inputs changed.

Obviously hardware has no problems with that, but
traditional processors and software simply cannot
achieve it.

Ok, you like your processor. So what? Those processors aren't very widely used. Obviously there are other choices that work just as well.

The way you just described the problem, my processor would easily do this job. I said it would respond in 1 clock cycle to any interrupt. It takes one clock cycle to execute what amounts to a subroutine call and saves both the IP and the PSW (the entire CPU state).

I've worked on an idea for a processor that combines a stack with addressing like registers which saves a lot of cycles. I've not tried to implement it yet though.

--

Rick C.

 
On 16/08/19 07:57, bitrex wrote:
On 8/15/19 3:38 PM, John Larkin wrote:

An FPGA full of synchronous logic, with a common clock, and that does
the state logic in a single clock, satisfies the stable-setup
requirement automatically. Of course off-chip signals, or inputs from
another clock domain, must be synchronized before use.

One software (or even hardware) trick is to sample the inputs twice
and make sure nothing has changed. If it has, try again. Sometimes
just taking a third sample set is safe.

Ugh. It might be the best available technique for hardware,
but it certainly isn't for software!

I've done that in software too, when an external set of inputs could
occasionally have ugly transient states, like for instance both
contacts of an SPDT switch apparently closed, or noise spikes possible
on sensors. Works fine sometimes.
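
A rough sketch of that double-sample idea in C++ ('read' stands in for
whatever actually reads the raw pins; a hypothetical callable):

#include <cstdint>

// Sample twice and only accept a value that repeats, so a transient
// caught mid-change is never acted upon.
template <typename ReadFn>
uint8_t stable_inputs(ReadFn read)
{
    for (;;) {
        uint8_t a = read();
        uint8_t b = read();
        if (a == b) return a;   // nothing changed between the two samples
        // otherwise try again (a third sample set is sometimes enough)
    }
}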

The real virtue of a FSM implementation is not the hardware or the
code, it's that it forces the engineer onto taking a methodical
approach, documenting it coherently, and thinking about all possible
system states. Hairballs don't do that, and most code is hairballs.

FPGA tools usually support state machines explicitly. Computer
languages don't.


FSMs are used quite often in video game programming too, to avoid the same
problems hairball logic has in hardware.

The software equivalent of hairball logic is if/else statements.

That's one technique, and a technique that often leads to
unmaintainable spaghetti code. I've seen a commercial
product made like that with 10-deep if-then-else nested
statements! Madness.

Another technique is a 2D table, where one dimension's
index is the current state and the other dimension's
index is the event. Each table entry is a function pointer
to the action taken, plus the next state. That makes
damn sure you explicitly enumerate all possibilities,
including the "can't happen->panic" cases.

Another technique is well suited to complex FSMs that are
specified in a hierarchical fashion (e.g. Harel Statecharts).
There each state is an OOP class, and each event is a virtual
method in the classes. The abstract top-level superclass implements
each method as "can't happen, record and panic". Where a state
handles an event it re-implements the corresponding virtual method.
The current state is simply an instance of the relevant class.
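
A bare-bones sketch of the class-per-state idea, using a toy coin/push
turnstile as the example (names invented):

#include <cstdio>

// Abstract superclass: every event defaults to "can't happen".
struct StateBase {
    virtual ~StateBase() = default;
    virtual StateBase* coin() { panic("coin"); return this; }
    virtual StateBase* push() { panic("push"); return this; }
protected:
    void panic(const char* ev) { std::printf("can't happen: %s\n", ev); }
};

struct Locked   : StateBase { StateBase* coin() override; };
struct Unlocked : StateBase { StateBase* push() override; };

static Locked   locked;
static Unlocked unlocked;

StateBase* Locked::coin()   { /* unlock output */ return &unlocked; }
StateBase* Unlocked::push() { /* lock output   */ return &locked;   }

// The current state is simply (a pointer to) an instance of the class.
StateBase* current = &locked;

void on_coin() { current = current->coin(); }
void on_push() { current = current->push(); }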


if/else/if/else tangle of hairball logic. A state machine makes it a lot easier:
Mario has his running/jumping/standing little/big/firey states well-defined at
all times and there's a transition table for each set of states for the
different player button inputs.

Maintaining/enhancing if-then-elses over the years is
problematic. Each delta is sufficiently small that
it isn't worth flipping to a better implementation
strategy, but after a while it is a pig to understand
the FSM sufficiently to make the next delta without
subtly buggering up the existing operation.

For small FSMs that will never change, that isn't
a problem.
 
On 16/08/19 05:23, Rick C wrote:
On Thursday, August 15, 2019 at 7:04:15 PM UTC-4, Tom Gardner wrote:
On 15/08/19 21:49, Rick C wrote:
On Thursday, August 15, 2019 at 2:20:08 PM UTC-4, Tom Gardner wrote:
On 15/08/19 18:07, Rick C wrote:
On Thursday, August 15, 2019 at 8:41:13 AM UTC-4, Tom Gardner wrote:
On 15/08/19 12:10, Jeroen Belleman wrote:
On 2019-08-14 23:38, Rick C wrote:
I was in another discussion on FSM because of a research paper
that used the two terms. I've never liked the Mealy machine
because the state diagram (a directed graph) doesn't properly
describe the asynchronous nature of the outputs. It shows an
output changing as part of the state transition, but in reality
the output is a combinational circuit of inputs and state which
can change without a corresponding change in state.

In the real world nearly all logic is actually sequential. If
the inputs to a FSM are async, then the machine can go haywire
if the inputs change too close to the clock edge. A race
condition in the circuit can cause some state FFs to change and
others not resulting in wrong states at best and invalid states
at worst. So async inputs are always buffered through FFs to
prevent this, making them synchronous.

The result is virtually all FSMs are actually Moore or a
combination of the two where the outputs are a registered
version of what you would get from a Mealy FSM.

When done in software, virtually every design is Moore since
there is no opportunity for outputs to change other than when
the code is run (equivalent to a clock edge). There is no such
thing as a Mealy FSM in software without greatly complicating
your code.

Anyone here who even thinks about FSM in terms of Mealy and
Moore?


I like to design in simple Mealy FSMs from time to time. Many
people see them as 'hairball logic' because they don't understand
them. In my view, there is no such thing as a Mealy FSM in
software, period. Everything is necessarily synchronized to the
processor clock.

That latter is not a /useful/ distinction.

The nearest analogy to an event-driven software FSM is an
*asynchronous* hardware FSM where it is guaranteed that only one
input can change simultaneously. That guarantee is a result of the
software structure - event arrives, possibly from a message coming
up the network stack, possibly from an interrupt routine - event is
put in a queue, length >=1 - at a convenient time the event is
taken from the input queue - the event is fully processed (outputs,
next state) before looping back to take the next event

There is no system clock at the FSM level; events are processed
when available and convenient.

So far that is effectively a Moore machine: events and outputs
change at the "same" time, not independently.

I'm not sure that is what defines a Moore machine. You use the term
"events" for the input changes. A Mealy machine changes outputs
based on state and inputs, so outputs can change from input changes
even if the state has not changed.

Not quite.

In a hardware Mealy machine the inputs can change outputs even when
there is no possibility that the state has changed - because there has
been no clock pulse.

That is the point at issue. Wikipedia defines a Mealy FSM that way, but
Mealy didn't define it so. As I mentioned, the hardware model Mealy used
was not capable of having inputs changing separately from state changes
(assuming there was a state change indicated on the diagram) because it
was pulse based, not level based. EVERY signal was a timing pulse.

You are the only person I have heard who thinks that way.

You mean other than Mealy??? So there's at least two of us. But then I
expect Mealy is dead so I guess I'm the only one left. lol


That's not what I was taught at university. I've just checked a textbook
from 1980 which states "The basic distinction of a Mealy machine is that
the outputs to the outside world are a function of two sets of variables:
(1) the present input condition and (2) the present state of the machine"

That does not contradict anything I've said. The original paper by Mealy
addresses logic that only looks at the input once per clock. When it does
look at the input, it is the "current" or "present" input. Extrapolating that
to FSMs where the input can change at times other than on the clock is the
error everyone seems to make.

You only have to consider the state transition diagrams to see I am right.
Mealy's state diagrams show outputs associated with the transitions. As I
have stated before, if an input changes which could alter the outputs, but
there is no state transition, there is no way to notate that in his state
diagrams.

Given that Mealy's FSM hardware was not capable of inputs changing at any time
that would not be a "clock edge", and that his notation was not capable of
notating output changes that were not associated with state changes, it is
clear that he never intended to describe FSMs that altered their outputs at
any time other than state transitions.

The important point is that the output will reflect the manner in which a
state was entered as well as the state itself. So the outputs have "memory"
just like the state. Let an input that changes horses in midstream into the
picture and you have lost the memory of how you got there.


I prefer to consider the Mealy FSM described by Mealy, not Wikipedia.

My university course and that textbook predated Wackypedia :)

Everybody else disagrees with your definition.

Except for Mealy. Hey, it's his name on the FSM type. I think we can listen
to what he has to say about it. It wouldn't be the first time people mess up
an idea.


The software equivalent is inputs changing outputs even though the
next_state = whatever statement has not been executed.

Or, to put it another way, in a Mealy machine, the outputs can change
"between" clock pulses (hardware) or between evaluating what the next
state should be (software)

This was not from Mealy's paper.

Shrug.

Shrug? Ok, I think we have enough history to know that if you don't want to
believe something, it doesn't matter what the facts are. You just
demonstrated that very clearly.


The distinction is made based on which information is used to define
the outputs. If the inputs as well as the state are used, it's
supposed to be a Mealy FSM, while a Moore FSM's outputs depend only on
the state. In software the whole thing is a bit muddled because
there are no registers and no clock, but you can still tell what
inputs are.

In this context, the software equivalent of a clock is evaluating
next_state = somethingOrOther.

Now in software the next_state re-evaluation (usually) does not occur
at a regular interval, but whenever an input event occurs. That's like
a hardware asynchronous FSM.

Hardware asynchronous FSMs have a real problem, unless you make an
unrealistic assumption that inputs never change at the same time.

Software has the strong advantage that one input change is fully
evaluated before the next is "received". That is /serialisation/ of
event processing, and circumvents that problem with hardware async
FSMs.

It's not at all important to receive only one input change at the same
time. In fact, if you read the papers, they don't talk about input
signals changing individually, they talk about the input signals
combining to form an input "alphabet" much like numbers or an ASCII
code. So it is anticipated to have multiple inputs change at the same
time. In hardware this can lead to race conditions if async logic is
used, but that's an implementation issue, not a FSM issue.

Async hardware FSM synthesis explicitly excludes multiple inputs changing
simultaneously, because the trajectory through the state space becomes
impossible to predict and analyse.

Well, I ought to exclude the very simple async hardware FSMs that are the
component parts used to construct sync hardware FSMs.



Now it is possible to conceive of a Mealy software FSM: - have the
structure outlined above for the "main" FSM - when an input changes
it generates a hardware interrupt - in the interrupt routine, have
something like if (current_state=X) then output = foo; if
(current_state=Y) then output = baz;

There the output can change independently of the FSM state, i.e.
effectively a Mealy FSM.

Where I have trouble is seeing that this is a Mealy FSM rather than
just some extra logic tacked onto a Moore FSM.

It depends on how the extra logic behaves in the absence of a
clock, i.e. "between clocks"

By "logic" I mean combinational logic, not sequential.

Agreed.


In reality the entire issue is about how you analyze the FSM, not
really the implementation. Moore explained how to minimize the FSM
using a directed graph as the model where outputs are defined by the
state. Mealy extended this to FSM where the outputs are also
functions of inputs and the directed graph has outputs indicated on
the state transitions rather than the states.

What is really interesting is that Mealy's paper uses a type of
sequential logic that can't change state other than on "clock"
edges. The logic value is represented by the presence or absence of a
clock pulse. So there are no inputs that change state other than
timed with the clock!

In a Mealy machine the state can only change on clock edges (just like
a Moore machine), but outputs can change at any time that inputs
change.

Except as I've explained, that isn't what Mealy talked about. His
hardware was not capable of such changes since the signals were pulses,
i.e. clocks.

Your definition contradicts all definitions I've heard over the decades.

You keep saying that as if I'd said the sky was green and grass was blue.
Read Mealy's paper. Actually consider what his state diagram says. What
Mealy presented was ideas on how to design state machines methodically using
his state diagram. Anything else people attribute to his paper is not really
there.


Note, I don't advocate structuring software that way, but I would
consider it in an embedded system where the latency between the
input occurring and the output changing needs to be minimised.

(In such a situation I'd prefer to have a system without the
indeterminacy of interrupts, by dedicating /a/ core to waiting for
the input to occur. Yes, that's the xCORE and the software is in
xC, so the IDE guarantees timing without sucking-it-and-seeing what
execution times might turn out to be :) )

You have a very negative view of CPUs. I design CPUs that have a
one clock cycle latency for interrupt handling. I don't need to
dedicate a large hunk of silicon and software to handling an
interrupt.

Correct, you don't need to dedicate a large hunk of silicon to handling
an interrupt.

The interesting case is how you handle multiple inputs and significant
processing simultaneously while ensuring that timing of one process is
completely unaffected by the activities of other processes.

In your processor, if 8 unrelated inputs occur within one clock cycle,
does the processing of all 8 inputs start within the same clock cycle?
Does the processing of all inputs proceed simultaneously in parallel?

Which processor would this be exactly? I've designed many and used
them, more than one at a time. That is, I design to suit the problem.

So, have you designed a processor that performed as I mentioned above?

I don't know. Your problem is not actually defined. Of course a processor
can process all 8 inputs. They do that all the time with bytes received from
UARTs for example.

Sigh.

The key word is *independent* inputs.

UART bytes are eight bits from the same input.

I did not specify the input width. With the xCORE
processors they can be 32 bits.

To take an example, can your processor deal with
these inputs simultaneously:
- message from a USB interface
- message from an ethernet interface
- ADC input
- a serial bitstream at 250Mb/s, SERDESed into 32 bits
- two audio inputs (guaranteed latency is apparently critical!)
- and a few others I can't be bothered to dream up
plus the DSP/control algorithms associated with them



The only way I can see that happening is if you have 8 (or more!)
cores.

Or structure your solution to suit the problem. Do all 8 inputs need to
be handled in any specific way? You need to define the problem in detail
before a solution can be suggested.

I'm not interested in single purpose problems and processors.

I'm interested in generic processors and techniques that can be used for
many problems.

In this case I expect that 8 (to pick a number >>1) inputs can be
recognised and processing started within a single clock cycle, and
processing for all inputs proceeds simultaneously. Thus if, to pick simple
numbers, processing each input takes 1us and the processor clock is 10ns,
then I expect all processing to be complete 1010ns after all 8 inputs
changed.

Obviously hardware has no problems with that, but traditional processors
and software simply cannot achieve it.

Ok, you like your processor. So what? Those processors aren't very widely
used. Obviously there are other choices that work just as well.

The way you just described the problem, my processor would easily do this
job. I said it would respond in 1 clock cycle to any interrupt. It takes
one clock cycle to execute what amounts to a subroutine call and saves both
the IP and the PSW (the entire CPU state).

That's boring; even 6800s can respond in one instruction cycle!

What is interesting is if a processor can respond to many
interrupts with guaranteed latency.


I've worked on an idea for a processor that combines a stack with addressing
like registers which saves a lot of cycles. I've not tried to implement it
yet though.

Variations on a theme, not fundamentally different.
 
On 8/16/19 2:57 AM, bitrex wrote:

FSMs are used quite often in video game programming too, to avoid the
same problems hairball logic has in hardware.

The software equivalent of hairball logic is if/else statements.

Super Mario can be small mario, big mario, fireball-shooting-big mario
and you have to select the right sprite for each one. In each state he
can do different actions like jumping, and if he's fire-mario he can
shoot fireballs while he's in the air. And the player has multiple
buttons as input to control him.

So you can imagine the tangle of logic to try to figure out the player's
intent when the player hits the button on the controller to shoot a
fireball. Is mario capable of doing that right now? Yes okay. Check to
see if he's in the air. Okay load the shoot-fireball-midair sprite. If
not he's on the ground. Load a different sprite. But wait, what if he's
running? That needs a different shoot-fire sprite than standing still.
Check for that.

if/else/if/else tangle of hairball logic. A state machine makes it a lot
easier: Mario has his running/jumping/standing little/big/firey states
well-defined at all times and there's a transition table for each set of
states for the different player button inputs.

Even so, the Japanese programmers in 1983 or whatever were not perfect
at implementing that state machine for the 6502 and there are ways to
make it glitch into forbidden edge-cases that should not be allowed
according to the rules of the game like this one.

The game doesn't crash but Mario grows taller and then shrinks every
time he shoots a fireball because a sprite for him shooting a fireball
while little doesn't exist for that unintended state.

<https://www.youtube.com/watch?v=bZFsJUHj-e0>
 
On 8/15/19 3:38 PM, John Larkin wrote:

An FPGA full of synchronous logic, with a common clock, and that does
the state logic in a single clock, satisfies the stable-setup
requirement automatically. Of course off-chip signals, or inputs from
another clock domain, must be synchronized before use.

One software (or even hardware) trick is to sample the inputs twice
and make sure nothing has changed. If it has, try again. Sometimes
just taking a third sample set is safe.

Ugh. It might be the best available technique for hardware,
but it certainly isn't for software!

I've done that in software too, when an external set of inputs could
occasionally have ugly transient states, like for instance both
contacts of an SPDT switch apparently closed, or noise spikes possible
on sensors. Works fine sometimes.

The real virtue of a FSM implementation is not the hardware or the
code, it's that it forces the engineer onto taking a methodical
approach, documenting it coherently, and thinking about all possible
system states. Hairballs don't do that, and most code is hairballs.

FPGA tools usually support state machines explicitly. Computer
languages don't.

FSMs are used quite often in video game programming too, to avoid the
same problems hairball logic has in hardware.

The software equivalent of hairball logic is if/else statements.

Super Mario can be small mario, big mario, fireball-shooting-big mario
and you have to select the right sprite for each one. In each state he
can do different actions like jumping, and if he's fire-mario he can
shoot fireballs while he's in the air. And the player has multiple
buttons as input to control him.

So you can imagine the tangle of logic to try to figure out the player's
intent when the player hits the button on the controller to shoot a
fireball. Is mario capable of doing that right now? Yes okay. Check to
see if he's in the air. Okay load the shoot-fireball-midair sprite. If
not he's on the ground. Load a different sprite. But wait, what if he's
running? That needs a different shoot-fire sprite than standing still.
Check for that.

if/else/if/else tangle of hairball logic. A state machine makes it a lot
easier: Mario has his running/jumping/standing little/big/firey states
well-defined at all times and there's a transition table for each set of
states for the different player button inputs.
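
A grossly simplified sketch of what such a table can look like (made-up
states and sprite numbers, nothing like the actual game code):

enum Power  { SMALL, BIG, FIRE, NUM_POWER };            // invented
enum Motion { STANDING, RUNNING, JUMPING, NUM_MOTION };
enum Button { B_JUMP, B_FIRE, NUM_BUTTONS };

struct Result { Motion next_motion; int sprite; };

// One table per power-up state: (motion, button) -> next motion + sprite.
// "Fire button while small" is an explicit, well-defined entry instead of
// a forgotten else branch.
const Result mario[NUM_POWER][NUM_MOTION][NUM_BUTTONS] = {
    /* SMALL */ { { {JUMPING, 1}, {STANDING, 0} },   // standing: jump / no fire
                  { {JUMPING, 1}, {RUNNING,  2} },   // running
                  { {JUMPING, 1}, {JUMPING,  1} } }, // jumping
    /* BIG   */ { { {JUMPING, 4}, {STANDING, 3} },
                  { {JUMPING, 4}, {RUNNING,  5} },
                  { {JUMPING, 4}, {JUMPING,  4} } },
    /* FIRE  */ { { {JUMPING, 7}, {STANDING, 8} },   // fire picks a shooting sprite
                  { {JUMPING, 7}, {RUNNING,  9} },
                  { {JUMPING, 7}, {JUMPING, 10} } },
};

Motion motion = STANDING;
Power  power  = SMALL;

int on_button(Button b)          // returns the sprite to draw
{
    const Result& r = mario[power][motion][b];
    motion = r.next_motion;
    return r.sprite;
}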
 
On 2019-08-15 19:56, Tom Gardner wrote:
On 15/08/19 15:59, Jeroen Belleman wrote:
On 2019-08-15 14:41, Tom Gardner wrote:
On 15/08/19 12:10, Jeroen Belleman wrote:
On 2019-08-14 23:38, Rick C wrote:
I was in another discussion on FSM because of a research paper that
used the two terms. I've never liked the Mealy machine because the
state diagram (a directed graph) doesn't properly describe the
asynchronous nature of the outputs. It shows an output changing as
part of the state transition, but in reality the output is a
combinational circuit of inputs and state which can change without a
corresponding change in state.

In the real world nearly all logic is actually sequential. If the
inputs to a FSM are async, then the machine can go haywire if the
inputs change too close to the clock edge. A race condition in the
circuit can cause some state FFs to change and others not resulting
in wrong states at best and invalid states at worst. So async inputs
are always buffered through FFs to prevent this, making them
synchronous.

The result is virtually all FSMs are actually Moore or a combination
of the two where the outputs are a registered version of what you
would get from a Mealy FSM.

When done in software, virtually every design is Moore since there is
no opportunity for outputs to change other than when the code is run
(equivalent to a clock edge). There is no such thing as a Mealy FSM
in software without greatly complicating your code.

Anyone here who even thinks about FSM in terms of Mealy and Moore?


I like to design in simple Mealy FSMs from time to time.
Many people see them as 'hairball logic' because they don't
understand them. In my view, there is no such thing as a
Mealy FSM in software, period. Everything is necessarily
synchronized to the processor clock.

That latter is not a /useful/ distinction.

The nearest analogy to an event-driven software FSM is
an *asynchronous* hardware FSM where it is guaranteed
that only one input can change simultaneously. That
guarantee is a result of the software structure

[...]

We live in different worlds. I'm a hardware guy. You are
evidently a software guy. The whole computer is a giant
Moore machine. Of course you can use it to simulate an
(asynchronous!) Mealy machine, but only if you don't look
too closely.

Nope, I'm not a "software guy". I've done everything
from low-noise analogue electronics, through semi-custom
and FPGA digital, through RTOS, through web-backend
order fulfilment systems, through RF propagation, through
telecoms billing, through... You get the idea.

The boundary between hardware and software is very grey
and fluid.

I noted the key points of Moore and Mealy machines, and
gave an illustration of how both can be coded in software.
Why don't you comment on that?

The "whole computer is a Moore machine" is one of those
statements that generates no light whatsoever.

A Moore machine changes state at discrete instants in time,
it is a clocked machine. The output depends only on the
state. In a Mealy machine the output depends on the state
*and* on the input. It can also change state whenever the
input changes. There is no clock in a Mealy machine.

Software runs on clocked CPUs, so strictly speaking it
implements a Moore machine. I'll grant you that if the time
scales of input events and CPU clock instants are different
enough, the distinction may become invisible.
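
In code form the distinction is just which arguments the output function
takes (a toy example, one input bit):

enum State { S0, S1 };

// Moore: the output is a function of the state alone.
int moore_output(State s)            { return s == S1; }

// Mealy: the output is a function of the state *and* the present input,
// so it can change whenever the input changes, between state updates.
int mealy_output(State s, int input) { return s == S1 && input; }

// Both share the same clocked next-state function.
State next_state(State, int input)   { return input ? S1 : S0; }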

Jeroen Belleman
 
On 8/16/19 3:40 AM, Tom Gardner wrote:
On 16/08/19 07:57, bitrex wrote:
On 8/15/19 3:38 PM, John Larkin wrote:

An FPGA full of synchronous logic, with a common clock, and that does
the state logic in a single clock, satisfies the stable-setup
requirement automatically. Of course off-chip signals, or inputs from
another clock domain, must be synchronized before use.

One software (or even hardware) trick is to sample the inputs twice
and make sure nothing has changed. If it has, try again. Sometimes
just taking a third sample set is safe.

Ugh. It might be the best available technique for hardware,
but it certainly isn't for software!

I've done that in software too, when an external set of inputs could
occasionally have ugly transient states, like for instance both
contacts of an SPDT switch apparently closed, or noise spikes possible
on sensors. Works fine sometimes.

The real virtue of a FSM implementation is not the hardware or the
code, it's that it forces the engineer onto taking a methodical
approach, documenting it coherently, and thinking about all possible
system states. Hairballs don't do that, and most code is hairballs.

FPGA tools usually support state machines explicitly. Computer
languages don't.


FSMs are used quite often in video game programming too, to avoid the
same problems hairball logic has in hardware.

The software equivalent of hairball logic is if/else statements.

That's one technique, and a technique that often leads to
unmaintainable spaghetti code. I've seen a commercial
product made like that with 10-deep if-then-else nested
statements! Madness.

Another technique is a 2D table, where one dimension's
index is the current state and the other dimension's
index is the event. Each table entry is a function pointer
to the action taken, plus the next state. That makes
damn sure you explicitly enumerate all possibilities,
including the "can't happen->panic" cases.
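
For concreteness, a minimal C++ sketch of that table shape. The door
FSM, its events and the action names are invented for illustration;
the point is only that every (state, event) pair has to be spelled out:

#include <cstdio>
#include <cstdlib>

enum State { CLOSED, OPEN, NUM_STATES };
enum Event { EV_OPEN, EV_CLOSE, EV_LOCK, NUM_EVENTS };

typedef void (*Action)();

static void do_open()     { std::puts("open door"); }
static void do_close()    { std::puts("close door"); }
static void do_lock()     { std::puts("lock door"); }
static void ignore()      { }
static void cant_happen() { std::puts("can't happen -> panic"); std::abort(); }

struct Cell { Action action; State next; };

// One row per current state, one column per event: every combination
// is enumerated, including the impossible ones.
static const Cell table[NUM_STATES][NUM_EVENTS] = {
    /* CLOSED */ { {do_open, OPEN}, {ignore,   CLOSED}, {do_lock,     CLOSED} },
    /* OPEN   */ { {ignore,  OPEN}, {do_close, CLOSED}, {cant_happen, OPEN}   },
};

static State state = CLOSED;

void dispatch(Event e) {
    const Cell &cell = table[state][e];
    cell.action();
    state = cell.next;
}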

Another technique is well suited to complex FSMs that
are specified in a hierarchical fashion (e.g. Harel Statecharts).
There each state is an OOP class, and each event is a virtual method
in the classes. The abstract top-level superclass implements
each method as "can't happen, record and panic". Where
a state handles an event, it re-implements that virtual method.
The current state is simply an instance of the relevant class.
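
A bare-bones C++ sketch of that shape, with an invented two-state
connection FSM and invented event names, just to show the mechanics:

#include <cstdio>
#include <cstdlib>

struct State {
    virtual ~State() = default;
    // Every event is a virtual method; the superclass default is
    // "can't happen, record and panic".
    virtual State *onConnect()    { return cantHappen("connect"); }
    virtual State *onDisconnect() { return cantHappen("disconnect"); }
protected:
    static State *cantHappen(const char *ev) {
        std::printf("can't happen: %s\n", ev);
        std::abort();
    }
};

struct Connected : State {
    State *onDisconnect() override;   // only the event this state handles
};

struct Idle : State {
    State *onConnect() override;
};

static Idle      idle;
static Connected connected;

State *Idle::onConnect()         { return &connected; }
State *Connected::onDisconnect() { return &idle; }

// The current state is simply a pointer to an instance of the
// relevant class, and dispatching an event is one virtual call:
static State *current = &idle;
// e.g.  current = current->onConnect();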

Sounds interesting. For a microcontroller implementation I would avoid
multiple virtual methods and use static polymorphism/CRTP along with the
Visitor pattern to do double dispatch with at most one virtual method,
because on e.g. AVR vtables have to be loaded into your limited
SRAM at program start even though strictly they shouldn't have to be -
goddamn it!

<https://en.wikipedia.org/wiki/Visitor_pattern>

Even so it sounds a lot more elegant than most C implementations of
state machines I've seen. blech.


if/else/if/else tangle of hairball logic. A state machine makes it a
lot easier: Mario has his running/jumping/standing and little/big/fiery
states well-defined at all times and there's a transition table for
each set of states for the different player button inputs.

Maintaining/enhancing if-then-elses over the years is
problematic. Each delta is sufficiently small that
it isn't worth flipping to a better implementation
strategy, but after a while it is a pig to understand
the FSM sufficiently to make the next delta without
subtly buggering up the existing operation.

For small FSMs that will never change, that isn't
a problem.

In most modern big-budget video gamez all the game-logic is abstracted
away from the graphics and physics code (which is usually C++ and often
derived from a third-party off-the-shelf API like Unity, I've tried
playing with writing my own 3D 'game engine' and it's a nightmare
time-sink even for a simplistic one doing all that OpenGL interfacing
yourself) and is done in a scripting language like Python or Lua, or
some bespoke virtual machine.
 
On Fri, 16 Aug 2019 11:48:07 +0200, Jeroen Belleman
<jeroen@nospam.please> wrote:

[...]
A Moore machine changes state at discrete instants in time,
it is a clocked machine. The output depends only on the
state. In a Mealy machine the output depends on the state
*and* on the input. It can also change state whenever the
input changes. There is no clock in a Mealy machine.

Software runs on clocked CPUs, so strictly speaking it
implements a Moore machine. I'll grant you that if the time
scales of input events and CPU clock instants are different
enough, the distinction may become invisible.

Jeroen Belleman

If you define the value of an integer (software variable or hardware
register) as the "state" of the system, future states are entered at
clock/loop execution time, as a function of present state and some
number of synchronized inputs. That's a fully synchronous state
machine, with no race or metastability hazards.

Outputs could be purely decoded from the state word, or could be made
by a combination of state and inputs. In the latter case, if the
inputs are synchronized (and the output logic is done right) the
outputs are also synchronous. Preferably, the outputs are all d-flops
off the common clock, or the software equivalent. Those flops are just
not part of the defined state word.
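
A rough software rendering of that, with the state held in one small
integer and one pass of the loop standing in for the clock edge; the
states, bit masks and I/O stubs here are all invented for illustration:

#include <cstdint>

static uint8_t read_raw_inputs() { return 0; }  // stand-in for reading real pins
static void    write_output(bool) {}            // stand-in for driving a real pin

enum State : uint8_t { IDLE, ARMED, FIRED };

static State   state       = IDLE;
static uint8_t inputs_sync = 0;      // inputs captured once per pass ("synchronized")
static bool    output_reg  = false;  // the output "d-flop", not part of the state word

void fsm_tick() {
    inputs_sync = read_raw_inputs();              // 1. synchronize the inputs

    State next = state;                           // 2. next = f(state, inputs)
    switch (state) {
        case IDLE:  if (inputs_sync & 0x01) next = ARMED; break;
        case ARMED: if (inputs_sync & 0x02) next = FIRED; break;
        case FIRED: next = IDLE;                           break;
    }

    // 3. Output either decoded purely from the state word (the first case)...
    bool moore_out = (state == FIRED);
    // ...or from state plus the *synchronized* inputs (the second case).
    bool mealy_out = (state == ARMED) && (inputs_sync & 0x02);

    // 4. Either way it is re-registered on the same pass, so the pin
    //    only moves once per "clock". A real design would drive two pins.
    output_reg = moore_out || mealy_out;
    write_output(output_reg);

    state = next;                                 // 5. state update ends the pass
}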

I guess that's two cases of state machines.

One possible additional case, where the inputs to the state machine
are not clock synchronous, is generally too hazardous to use.

Outputs generated by logic of state + inputs could also use
unsynchronized inputs, yet another usually-pathological variation.
 
On Friday, August 16, 2019 at 5:48:13 AM UTC-4, Jeroen Belleman wrote:
[...]
A Moore machine changes state at discrete instants in time,
it is a clocked machine. The output depends only on the
state. In a Mealy machine the output depends on the state
*and* on the input. It can also change state whenever the
input changes. There is no clock in a Mealy machine.

That is plain wrong. Mealy vs. Moore has nothing to do with clocked vs. async FSM.


Software runs on clocked CPUs, so strictly speaking it
implements a Moore machine. I'll grant you that if the time
scales of input events and CPU clock instants are different
enough, the distinction may become invisible.

Incorrect premise, so incorrect conclusion.

--

Rick C.

--+ Get 1,000 miles of free Supercharging
--+ Tesla referral code - https://ts.la/richard11209
 
On Friday, August 16, 2019 at 3:25:56 AM UTC-4, Tom Gardner wrote:
On 16/08/19 05:23, Rick C wrote:

I don't know. Your problem is not actually defined. Of course a processor
can process all 8 inputs. They do that all the time with bytes received from
UARTs for example.

Sigh.

The key word is *independent* inputs.

Nothing "key" about the word independent.


> UART bytes are eight bits from the same input.

Yes, and they are all independent. Otherwise we couldn't represent 256 characters.


I did not specify the input width. With the xCORE
processors they can be 32 bits.

To take an example, can your processor deal with
these inputs simultaneously:
- message from a USB interface
- message from an ethernet interface
- ADC input
- a serial bitstream at 250Mb/s, SERDESed into 32 bits
- two audio inputs (guaranteed latency is apparently critical!)
- and a few others I can't be bothered to dream up
plus the DSP/control algorithms associated with them

Of course. Either one processor can handle all the events, or you use 8 processors like the xCORE does. One large advantage of soft cores is the ease with which 9 cores can be used to process 9 independent inputs/tasks. Or 33 inputs and 33 tasks.


Ok, you like your processor. So what? Those processors aren't very widely
used. Obviously there are other choices that work just as well.

The way you just described the problem, my processor would easily do this
job. I said it would respond in 1 clock cycle to any interrupt. It takes
one clock cycle to execute what amounts to a subroutine call and saves both
the IP and the PSW (the entire CPU state).

That's boring; even 6800s can respond in one instruction cycle!

Lol! "Boring" is what makes it highly functional in the presence of interrupts.


What is interesting is if a processor can respond to many
interrupts with guaranteed latency.

What is not guaranteed about 1 clock cycle??? You seem to be confused about something.


I've worked on an idea for a processor that combines a stack with
register-like addressing, which saves a lot of cycles. I've not tried
to implement it yet though.

Variations on a theme, not fundamentally different.

Actually, yes, it is fundamentally different. But when you are stuck in the mindset that only the xCORE is a different idea, you can't see it.

This is very off topic now that you have completely stopped talking about FSM. I'm going back to the topic now.

--

Rick C.

--- Get 1,000 miles of free Supercharging
--- Tesla referral code - https://ts.la/richard11209
 
On 2019-08-16 18:05, Rick C wrote:
On Friday, August 16, 2019 at 5:48:13 AM UTC-4, Jeroen Belleman
wrote:
[...]
A Moore machine changes state at discrete instants in time, it is a
clocked machine. The output depends only on the state. In a Mealy
machine the output depends on the state *and* on the input. It can
also change state whenever the input changes. There is no clock in
a Mealy machine.

That is plain wrong. Mealy vs. Moore has nothing to do with clocked
vs. async FSM.

G.H. Mealy, "A method for synthesizing sequential circuits",
fig.2, fig.5. No clock or FF in sight.

What, pray tell, is the distinction?

Jeroen Belleman
 
On 16/08/19 16:02, bitrex wrote:
[...]

Sounds interesting. For a microcontroller implementation I would avoid multiple
virtual methods and use static polymorphism/CRTP along with the Visitor pattern
to do double dispatch with at most one virtual method, because on e.g. AVR
vtables have to be loaded into your limited SRAM at program start even though
strictly they shouldn't have to be - goddamn it!

https://en.wikipedia.org/wiki/Visitor_pattern

Even so it sounds a lot more elegant than most C implementations of state
machines I've seen. blech.

I haven't used OOP on a tiny MCU, and doubt I ever will.
That v-table in RAM seems like an anti-pattern!

But the mechanism I briefly outlined works well. It is
*easy* to instrument so you can keep a record of the
event/state trajectory to find out the sequence that
led to a problem. And for telecom systems handling
thousands of calls (identical FSM, but per-call state
and transitions) it is sufficiently economical to keep
a per-call trajectory around.
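
The trace can be as cheap as a small ring buffer written on every
transition; a sketch, with the sizes and field names picked arbitrarily:

#include <cstdint>
#include <cstddef>

// One trace entry per transition: enough to replay the trajectory later.
struct TraceEntry {
    uint8_t  state_before;
    uint8_t  event;
    uint8_t  state_after;
    uint32_t timestamp;     // whatever cheap tick source is available
};

template <size_t N = 64>
struct Trajectory {
    TraceEntry entries[N] = {};
    size_t     next = 0;    // wraps: the last N transitions are always kept

    void record(uint8_t before, uint8_t ev, uint8_t after, uint32_t t) {
        entries[next % N] = {before, ev, after, t};
        ++next;
    }
};

// In a per-call telecom FSM, each call object would own one Trajectory<>,
// and dispatch() would call trajectory.record(...) on every transition.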

Such instrumentation is /great/ when it comes to
integration with other companies' products - it makes
it trivial to be able to point a finger and say "this
is what happened; I got it right, you got it wrong
at /this/ point". Squabbles and lawsuits are avoided;
what's not to like.


if/else/if/else tangle of hairball logic. A state machine makes it a lot
easier: Mario has his running/jumping/standing and little/big/fiery states
well-defined at all times and there's a transition table for each set of
states for the different player button inputs.

Maintaining/enhancing if-then-elses over the years is
problematic. Each delta is sufficiently small that
it isn't worth flipping to a better implementation
strategy, but after a while it is a pig to understand
the FSM sufficiently to make the next delta without
subtly buggering up the existing operation.

For small FSMs that will never change, that isn't
a problem.

In most modern big-budget video gamez all the game-logic is abstracted away from
the graphics and physics code (which is usually C++ and often derived from a
third-party off-the-shelf API like Unity, I've tried playing with writing my own
3D 'game engine' and it's a nightmare time-sink even for a simplistic one doing
all that OpenGL interfacing yourself) and is done in a scripting language like
Python or Lua, or some bespoke virtual machine.

So far my tried and trusted techniques are
- if-then-else for up to two to four states and two events
- jump table for simple dense FSMs that are unlikely
to change over time (e.g. parsing characters into
floating point numbers; see the sketch after this list)
- classes/methods for everything else
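
For the jump-table case, a toy C++ sketch of the "characters into a
number" example, deliberately cut down to [+-]?digits(.digits)? so it
shows the shape rather than pretending to be a real float parser:

#include <cctype>

enum State { START, SIGN, INT, DOT, FRAC, REJECT, NUM_STATES };
enum Cls   { C_SIGN, C_DIGIT, C_DOT, C_OTHER, NUM_CLS };

static Cls classify(char c) {
    if (c == '+' || c == '-')           return C_SIGN;
    if (std::isdigit((unsigned char)c)) return C_DIGIT;
    if (c == '.')                       return C_DOT;
    return C_OTHER;
}

// Dense next-state table indexed by (state, character class).
static const State next_state[NUM_STATES][NUM_CLS] = {
    /*            sign     digit   dot     other  */
    /* START  */ { SIGN,   INT,    REJECT, REJECT },
    /* SIGN   */ { REJECT, INT,    REJECT, REJECT },
    /* INT    */ { REJECT, INT,    DOT,    REJECT },
    /* DOT    */ { REJECT, FRAC,   REJECT, REJECT },
    /* FRAC   */ { REJECT, FRAC,   REJECT, REJECT },
    /* REJECT */ { REJECT, REJECT, REJECT, REJECT },
};

bool looks_like_number(const char *s) {
    State st = START;
    for (; *s; ++s) st = next_state[st][classify(*s)];
    return st == INT || st == FRAC;   // accept only if digits were seen
}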
 
On Friday, August 16, 2019 at 12:33:51 PM UTC-4, jla...@highlandsniptechnology.com wrote:
[...]


If you define the value of an integer (software variable or hardware
register) as the "state" of the system, future states are entered at
clock/loop execution time, as a function of present state and some
number of synchronized inputs. That's a fully synchronous state
machine, with no race or metastability hazards.

Outputs could be purely decoded from the state word, or could be made
by a combination of state and inputs. In the latter case, if the
inputs are synchronized (and the output logic is done right) the
outputs are also synchronous. Preferably, the outputs are all d-flops
off the common clock, or the software equivalent. Those flops are just
not part of the defined state word.

I guess that's two cases of state machines.

One possible additional case, where the inputs to the state machine
are not clock synchronous, is generally too hazardous to use.

Outputs generated by logic of state + inputs could also use
unsynchronized inputs, yet another usually-pathological variation.

If your FSM is in software, then by definition all outputs are synchronous. If a Mealy FSM has synchronous outputs, the only difference from a Moore machine is that you are calling the outputs... outputs, rather than part of the state. If you lump the outputs in with the state and call it all state, then there is no difference. It's just how you look at it.

Even in hardware I often define my state variables to optimize my output logic; the most beneficial case is a one-to-one correspondence between individual state variables and outputs, which is what you get in the case described above.

--

Rick C.

-+- Get 1,000 miles of free Supercharging
-+- Tesla referral code - https://ts.la/richard11209
 
On Friday, August 16, 2019 at 2:09:54 PM UTC-4, Jeroen Belleman wrote:
On 2019-08-16 18:05, Rick C wrote:

That is plain wrong. Mealy vs. Moore has nothing to do with clocked
vs. async FSM.

G.H. Mealy, "A method for synthesizing sequential circuits",
fig.2, fig.5. No clock or FF in sight.

What, pray tell, is the distinction?

Sorry, I don't know what you are asking. I believe your thinking is so different from mine that we have few common assumptions, so you need to be more explicit in your questions and points.

I'm saying the theory that Mealy is describing has nothing to do with using async vs. sync logic. Are you trying to say his theory only applies to async sequential logic, because that seems to be what he is using in his examples?

--

Rick C.

-++ Get 1,000 miles of free Supercharging
-++ Tesla referral code - https://ts.la/richard11209
 
On 16/08/19 23:44, Rick C wrote:
> If your FSM is in software, then by definition all outputs are synchronous.

In some senses that is true - but it is unhelpful
and avoids interesting and useful cases. That's
probably because your viewpoint appears to be
limited to working in one small part of the
"protocol stack", i.e. hardware and embedded
systems.

Firstly, the granularity of "synchronism" (the computer
clock) is far too fine compared to the timescale of many
FSM changes and propagation delays to be of any real
utility.

Secondly, many FSMs are defined such that they can
be and are split across multiple computers with
no common clock whatsoever.

There are many, many such FSMs in telecom systems;
they are good examples of both of those.

I defy you to define a clock that helps design
an FSM running in a user-space application in
a NUMA SMP computer.

Yes, I have designed and implemented such realtime
systems, and FSMs were at the core. A key design
pattern's name gives a hint as to the level at which
system design and modelling and implementation occurs:
"half-sync half-async".

In such systems any processor clock is completely
irrelevant. Hell's teeth, most of the implementers
are only dimly aware of L1/L2/L3/etc caches in
the processor, and for most purposes they don't
need to be.
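
For what it's worth, the half-sync/half-async shape can be sketched in a
few lines of C++ (names invented, standard-library threading assumed):
the asynchronous half is whatever I/O or callback context posts events,
the synchronous half is a worker that takes one event at a time and
steps the FSM, and no processor clock appears anywhere in the design.

#include <condition_variable>
#include <mutex>
#include <queue>

enum class Event { MsgFromPeer, Timeout, OperatorCmd };  // invented

// Asynchronous half: network callbacks, timers, etc. just post events.
class EventQueue {
    std::queue<Event>       q_;
    std::mutex              m_;
    std::condition_variable cv_;
public:
    void post(Event e) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(e); }
        cv_.notify_one();
    }
    Event take() {   // blocks until an event is available
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        Event e = q_.front();
        q_.pop();
        return e;
    }
};

struct Fsm {                       // stand-in for any of the FSM styles above
    void dispatch(Event) { /* action, then next state */ }
};

// Synchronous half: the FSM advances when an event arrives, not on a clock.
void run(EventQueue &q, Fsm &fsm) {
    for (;;) fsm.dispatch(q.take());
}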
 
On Saturday, August 17, 2019 at 3:00:58 AM UTC-4, Tom Gardner wrote:
On 16/08/19 23:44, Rick C wrote:
If your FSM is in software, then by definition all outputs are synchronous.

[...]

At this point you seem to be discussing things with yourself. I don't see how your comments have much relation to what I've said.

If you think FSMs are split between multiple computers, we are not discussing the same thing.

--

Rick C.

+-- Get 1,000 miles of free Supercharging
+-- Tesla referral code - https://ts.la/richard11209
 
On 17/08/19 08:10, Rick C wrote:
On Saturday, August 17, 2019 at 3:00:58 AM UTC-4, Tom Gardner wrote:
On 16/08/19 23:44, Rick C wrote:
If your FSM is in software, then by definition all outputs are synchronous.

[...]

At this point you seem to be discussing things with yourself. I don't see how your comments have much relation to what I've said.

That's because you have a limited viewpoint of
how FSMs can be designed and implemented.

Since you don't even attempt to define a "clock"
for such software FSMs, I'm glad you recognise
that your statement that "If your FSM is in software,
then by definition all outputs are synchronous."
is largely misleading. Except in some special
circumstances, of course.

Now, if you are only discussing a limited subset of
FSM implementations, then it would be polite to
indicate that in your posts. If you don't then
it makes you look ignorant.


> If you think FSMs are split between multiple computers, we are not discussing the same thing.

Correct.

That's because you have a limited view of what
constitutes an FSM, and how they can be designed
and implemented.

Such limitations in your experience have manifested
themselves in other ways.
 
On Saturday, August 17, 2019 at 3:31:03 AM UTC-4, Tom Gardner wrote:
On 17/08/19 08:10, Rick C wrote:
On Saturday, August 17, 2019 at 3:00:58 AM UTC-4, Tom Gardner wrote:
On 16/08/19 23:44, Rick C wrote:
If your FSM is in software, then by definition all outputs are synchronous.

[...]

At this point you seem to be discussing things with yourself. I don't see how your comments have much relation to what I've said.

That's because you have a limited viewpoint of
how FSMs can be designed and implemented.

You keep twisting things. I'm not trying to limit anything. I'm talking about what the definition of a Mealy FSM is. Mealy is the final word on that. Others seem to have read into his paper things that aren't there. It's that simple. I don't know why you can't seem to grasp the fact that Mealy was never talking about the sort of FSM that is now attributed to him.


Since you don't even attempt to define a "clock"
for such software FSMs, I'm glad you recognise
that your statement that "If your FSM is in software,
then by definition all outputs are synchronous."
is largely misleading. Except in some special
circumstances, of course.

I've already given you the clock for a software FSM. All you need to do is read what I wrote.


Now, if you are only discussing a limited subset of
FSM implementations, then it would be polite to
indicate that in your posts. If you don't then
it makes you look ignorant.

I am discussing exactly what I've been saying I'm talking about all along. You seem to be losing it here.


If you think FSMs are split between multiple computers, we are not discussing the same thing.

Correct.

That's because you have a limited view of what
constitutes an FSM, and how they can be designed
and implemented.

Such limitations in your experience have manifested
themselves in other ways.

No, I think a FSM is just that: a machine. If you want to call multiple FSMs connected together a FSM, fine. Of course multiple FSMs can be combined into one. I have no idea why you decided to branch out to this area other than to have something to argue about.

It's pretty clear you are no longer interested in discussing the original topic, Mealy FSMs. Ok, I guess we are through then.

--

Rick C.

+-+ Get 1,000 miles of free Supercharging
+-+ Tesla referral code - https://ts.la/richard11209
 
