On 8/13/2022 4:32 PM, Les Cargill wrote:
<snip>
Performance of a kernel affects every job running on the machine.
You write a sloppy app? <shrug> YOUR app sucks -- but no one else's.
If you have a simple kernel, then it need not be efficient as it
doesn't "do much".
A kernel is "swap()" ( register exchange ) plus libraries. swap()
is irreducible; libraries less so.
Looks like big voodoo but it's not. Now, throw in an MMU and life
gets interesting...
But, the more you do, the more your implementation
affects overall performance. If faulting in a page is expensive,
then apps won't want to incur that cost and will try to wire down
everything at the start. Of course, it's not possible for everyone to
do so without running out of resources. And, the poor folks who
either didn't know they *could* do that (or felt responsible enough
NOT to impose their greed on others) end up taking the bigger hits.
When you look at big projects, productivity falls dramatically
as the complexity increases (app->driver->os, etc.)
But that's because human communication only goes so far.
Communication also happens *in* the "system". What happens if THIS
ftn invocation doesn't happen (client/server is offline, crashed,
busy, etc.)? How do we handle that case locally? Is there a way
to recover? Or, do we just abend?
But people like things they can see, even if that's an illusion.
The best computer is the one you don't even know is there until it
stops working.
And, why so many folks sit down and write code without having any formal
documents to describe WHAT the code must do and the criteria against
which it will be tested/qualified! <rolls eyes>
If you start with the test harness you get more done. Once you have
the prototype up, then write the documents. You'll simply know more
that way.
I approach it from the top, down. Figure out what the *requirements*
are (how can you design a test harness if you don't know what you'll be
testing or the criteria that will be important?).
\"I\'m making an <x>. It uses interfaces <y,z...>.\" That\'s how you know.
How do you know it will use those i/fs? What if there are no existing
APIs to draw upon (you're making a motor controller and have never made one
before; you're measuring positions of an LVDT-instrumented actuator;
you're...)
For a motor controller, chances are really good it'll be PWM. etc, etc.
The rest is serialization.
When you're making *things*, you are often dealing with sensors and
mechanisms that are novel to a particular application...
But you can usually sketch that out in one paragraph. Not always. IMO,
when you use "big process", chances are you'll overdo it because your
risk perception is not well calibrated. Plus you have to break
things up...
Maybe you need message sequence charts for use cases. No problem.
<snip>
That's a dangerous place.
It's a THRILLING place! It forces you to really put forth your best
effort. And, causes you to think of what you *really* want to do --
instead of just hammering out a schematic and a bunch of code.
It is fun but not the good sort of fun
<snip>
Nicely put - at least you could quantify the costs.
Don't you quantify the load you're going to put on a power supply before
you design the power supply? It's called *design* not "hacking".
Well of course. Might be off but you do what you can. Or you use a
bench supply and measure it.
And hacking can be quite legit.
[of *course* I had already done the math, that's called engineering!
a "programmer" would have just written the code and wondered why
the system couldn't meet its performance goals]
The final product often ran at 100% of real-time as it had to react
to the actions of the user; you can't limit how quickly a user drags
a barcode label across a photodiode (no, you can't put a "barcode
reader" in the design as that costs recurring dollars!)
I'm familiar.
It got to be a favorite exercise to see how quickly and *continuously* you
could swipe a barcode label across the sensor to see if the code would
crash, misread, etc. The specification was for 100 inches per second
(which really isn't that fast if you are deliberately trying to *be*
fast) which would generate edges/events at ~15KHz. When an opcode fetch
takes O(1us), you quickly run out of instructions between events!
Seems like a peripheral might have been in order. Of course, if you're
the peripheral...
[Of course, everything ground to a halt during such abuse -- but, picked up
where it left off as soon as your arm got tired! :> ]
--
Les Cargill