On Thursday, August 15, 2019 at 2:20:08 PM UTC-4, Tom Gardner wrote:
On 15/08/19 18:07, Rick C wrote:
On Thursday, August 15, 2019 at 8:41:13 AM UTC-4, Tom Gardner wrote:
On 15/08/19 12:10, Jeroen Belleman wrote:
On 2019-08-14 23:38, Rick C wrote:
I was in another discussion on FSM because of a research paper that
used the two terms. I've never liked the Mealy machine because the
state diagram (a directed graph) doesn't properly describe the
asynchronous nature of the outputs. It shows an output changing as
part of the state transition, but in reality the output is a
combinational circuit of inputs and state which can change without a
corresponding change in state.
In the real world nearly all logic is actually sequential. If the
inputs to an FSM are async, then the machine can go haywire if the
inputs change too close to the clock edge. A race condition in the
circuit can cause some state FFs to change and others not, resulting in
wrong states at best and invalid states at worst. So async inputs are
always buffered through FFs to prevent this, making them synchronous.
The result is that virtually all FSMs are actually Moore, or a combination
of the two where the outputs are a registered version of what you would
get from a Mealy FSM.
When done in software, virtually every design is Moore since there is
no opportunity for outputs to change other than when the code is run
(equivalent to a clock edge). There is no such thing as a Mealy FSM in
software without greatly complicating your code.
Does anyone here even think about FSMs in terms of Mealy and Moore?
I like to design in simple Mealy FSMs from time to time. Many people see
them as 'hairball logic' because they don't understand them. In my view,
there is no such thing as a Mealy FSM in software, period. Everything is
necessarily synchronized to the processor clock.
The latter is not a /useful/ distinction.
The nearest analogy to an event-driven software FSM is an *asynchronous*
hardware FSM where it is guaranteed that only one input can change
at a time. That guarantee is a result of the software structure:
- an event arrives, possibly from a message coming up the network stack,
  possibly from an interrupt routine
- the event is put in a queue, length >= 1
- at a convenient time the event is taken from the input queue
- the event is fully processed (outputs, next state) before looping back
  to take the next event
There is no system clock at the FSM level; events are processed when
available and convenient.
So far that is effectively a Moore machine: events and outputs change at
the "same" time, not independently.
I'm not sure that is what defines a Moore machine. You use the term "events"
for the input changes. A Mealy machine changes outputs based on state and
inputs, so outputs can change from input changes even if the state has not
changed.
Not quite.
In a hardware Mealy machine the inputs can change outputs
even when there is no possibility that the state has
changed - because there has been no clock pulse.
That is the point at issue. Wikipedia defines a Mealy FSM that way, but Mealy didn't define it so. As I mentioned, the hardware model Mealy used was not capable of having inputs changing separately from state changes (assuming there was a state change indicated on the diagram) because it was pulse based, not level based. EVERY signal was a timing pulse.
I prefer to consider the Mealy FSM described by Mealy, not Wikipedia.
The software equivalent is inputs changing outputs even
though the next_state = whatever statement has not been
executed.
Or, to put it another way, in a Mealy machine, the outputs
can change "between" clock pulses (hardware) or between
evaluating what the next state should be (software).
This was not from Mealy's paper.
The distinction is made based on which information is used to define the
outputs. If the inputs as well as the state are used, it's supposed to be a
Mealy FSM, while a Moore FSM's outputs depend only on the state. In software
the whole thing is a bit muddled because there are no registers and no clock,
but you can still tell what the inputs are.
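(Put as code, the distinction is just the signature of the output
function; reusing the hypothetical state_t from the sketch above:)

  /* Moore: output depends on the state alone. */
  int output_moore(state_t s)
  {
      return s == ST_RUNNING;
  }

  /* Mealy: output depends on the state AND the live inputs, so it can
     change even though the state has not. */
  int output_mealy(state_t s, int input)
  {
      return (s == ST_ARMED) && input;
  }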
In this context, the software equivalent of a clock is
evaluating next_state = somethingOrOther.
Now in software the next_state re-evaluation (usually)
does not occur at a regular interval, but whenever an input
event occurs. That's like a hardware asynchronous FSM.
Hardware asynchronous FSMs have a real problem, unless
you make an unrealistic assumption that inputs never
change at the same time.
Software has the strong advantage that one input change
is fully evaluated before the next is "received". That is
/serialisation/ of event processing, and circumvents that
problem with hardware async FSMs.
It's not at all important to receive only one input change at a time. In fact, if you read the papers, they don't talk about input signals changing individually; they talk about the input signals combining to form an input "alphabet", much like numbers or an ASCII code. So it is anticipated to have multiple inputs change at the same time. In hardware this can lead to race conditions if async logic is used, but that's an implementation issue, not an FSM issue.
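(One way to picture the "alphabet" in software: pack the individual
signals into a single input symbol, so several signals changing together
simply produces a different symbol. A sketch with made-up signal names:)

  /* Hypothetical: three physical inputs packed into a 3-bit symbol,
     giving an input alphabet of 8 possible symbols. */
  extern int start_pin(void);
  extern int stop_pin(void);
  extern int fault_pin(void);

  enum { IN_START = 1 << 0, IN_STOP = 1 << 1, IN_FAULT = 1 << 2 };

  unsigned read_input_symbol(void)
  {
      unsigned sym = 0;
      if (start_pin()) sym |= IN_START;
      if (stop_pin())  sym |= IN_STOP;
      if (fault_pin()) sym |= IN_FAULT;
      return sym;  /* simultaneous changes just yield a different symbol */
  }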
Now it is possible to conceive of a Mealy software FSM:
- have the structure outlined above for the "main" FSM
- when an input changes it generates a hardware interrupt
- in the interrupt routine, have something like
  if (current_state == X) then output = foo;
  if (current_state == Y) then output = baz;

There the output can change independently of the FSM state, i.e.
effectively a Mealy FSM.
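(A rough C sketch of that idea, reusing the state names from the earlier
sketch. It assumes the main loop publishes its state in a volatile
variable and that the platform can run an interrupt routine on an input
edge; the names are hypothetical:)

  /* The main loop still owns the next-state logic, but the ISR drives
     the output directly from the current state and the fresh input, so
     the output can change between next-state evaluations. */
  extern int  read_input_pin(void);   /* level of the input that interrupted */
  extern void set_output(int level);

  volatile state_t current_state = ST_IDLE;  /* written by the main loop */

  void input_change_isr(void)
  {
      int input = read_input_pin();

      if (current_state == ST_ARMED)
          set_output(input);          /* output follows the input in this state */
      else
          set_output(0);
  }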
Where I have trouble is seeing that this is a Mealy FSM rather than just some
extra logic tacked onto a Moore FSM.
It depends on how the extra logic behaves in the absence
of a clock, i.e. "between clocks"
By "logic" I mean combinational logic, not sequential.
In reality the entire issue is about
how you analyze the FSM, not really the implementation. Moore explained how
to minimize the FSM using a directed graph as the model where outputs are
defined by the state. Mealy extended this to FSMs where the outputs are also
functions of inputs and the directed graph has outputs indicated on the state
transitions rather than the states.
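(For a concrete picture, take the textbook rising-edge detector with
input x and output z: as a Mealy graph two states suffice, with the
one-cycle '1' attached to a transition, while the Moore version needs an
extra state just to carry that output.)

  Mealy (output on the transitions):
    S0 "x was 0":  x=0 -> S0, z=0    x=1 -> S1, z=1
    S1 "x was 1":  x=0 -> S0, z=0    x=1 -> S1, z=0

  Moore (output on the states):
    S0, z=0:  x=0 -> S0    x=1 -> S1
    S1, z=1:  x=0 -> S0    x=1 -> S2
    S2, z=0:  x=0 -> S0    x=1 -> S2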
What is really interesting is that Mealy's paper uses a type of sequential
logic that can't change state other than on "clock" edges. The logic value
is represented by the presence or absence of a clock pulse. So there are no
inputs that change state other than timed with the clock!
In a Mealy machine the state can only change on clock edges
(just like a Moore machine), but outputs can change at any
time that inputs change.
Except as I've explained, that isn't what Mealy talked about. His hardware was not capable of such changes since the signals were pulses, i.e. clocks.
Note, I don't advocate structuring software that way, but I would consider
it in an embedded system where the latency between the input occurring and
the output changing needs to be minimised.
(In such a situation I'd prefer to have a system without the indeterminacy
of interrupts, by dedicating /a/ core to waiting for the input to occur.
Yes, that's the xCORE and the software is in xC, so the IDE guarantees
timing without sucking-it-and-seeing what execution times might turn out to
be.)
You have a very negative view of CPUs. I design CPUs that have a one clock
cycle latency for interrupt handling. I don't need to dedicate a large hunk
of silicon and software to handling an interrupt.
Correct, you don't need to dedicate a large hunk of
silicon to handling an interrupt.
The interesting case is how you handle multiple
inputs and significant processing simultaneously
while ensuring that timing of one process is
completely unaffected by the activities of other
processes.
In your processor, if 8 unrelated inputs occur
within one clock cycle, does the processing of
all 8 inputs start within the same clock cycle?
Does the processing of all inputs proceed
simultaneously in parallel?
Which processor would this be exactly? I've designed many and used them, more than one at a time. That is, I design to suit the problem.
The only way I can see that happening is if you
have 8 (or more!) cores.
Or structure your solution to suit the problem. Do all 8 inputs need to be handled in any specific way? You need to define the problem in detail before a solution can be suggested.
--
Rick C.
-+ Get 1,000 miles of free Supercharging
-+ Tesla referral code - https://ts.la/richard11209