Don Y
Guest
On 6/27/2023 1:58 AM, Martin Brown wrote:
On 26/06/2023 14:51, John Larkin wrote:
On Mon, 26 Jun 2023 04:08:42 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:
On 6/26/2023 2:45 AM, Martin Brown wrote:
Most computer languages look somewhat like Basic apart from APL & Forth.
(and a few exotic modern CompSci languages like Haskell)
I guess that depends on how you define "like".
Coding in any of the LISP dialects is likely a rude awakening for
the uninitiated. Ladder logic?
Remiss of me not to mention LISP as one of the earliest languages entirely
different to Basic (aka Lots of Irritating Single Parentheses). I once, long
ago, worked on a Lisp compiler.
It's a great example because it requires an entirely different mindset;
much like OOPS vs. procedural coding.
And, if you've been working in resource-starved environments (e.g.,
deeply embedded systems with hardware of that era), the least of your
problems is getting used to the inefficiency of such representations!
[I can recall writing a service for a PROM programmer -- in Pascal.
Of course, you write a routine to convert a nybble to ASCII; then
use that to convert a byte; then that to convert an "address"; etc.
Because that's how you would do it in ASM on a *tiny* processor! In
Pascal, it just looks stupid and unnecessarily complex! You have to
be able to map your approach to the environment/tools that you're
using to address the problem space.]
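For flavor, a minimal sketch of that bottom-up layering in C (the original was
Pascal; the names and the 16-bit address width are just illustrative):

#include <stdio.h>
#include <stdint.h>

/* Convert one nybble (0..15) to its ASCII hex digit. */
static char nybble_to_ascii(uint8_t n)
{
    n &= 0x0F;
    return (n < 10) ? ('0' + n) : ('A' + (n - 10));
}

/* Convert a byte by reusing the nybble routine twice. */
static void byte_to_ascii(uint8_t b, char out[2])
{
    out[0] = nybble_to_ascii(b >> 4);
    out[1] = nybble_to_ascii(b & 0x0F);
}

/* Convert a 16-bit "address" by reusing the byte routine twice. */
static void addr_to_ascii(uint16_t a, char out[4])
{
    byte_to_ascii((uint8_t)(a >> 8), &out[0]);
    byte_to_ascii((uint8_t)(a & 0xFF), &out[2]);
}

int main(void)
{
    char buf[5] = { 0 };
    addr_to_ascii(0x1F3C, buf);
    printf("%s\n", buf);            /* prints 1F3C */
    return 0;
}

In a high-level language you would just reach for formatted output
(printf("%04X", addr)) -- which is exactly why the layered version looks
so needlessly baroque there.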
Much of the similarity is a consequence (IMO) of the serial
way that humans tend to think -- esp when it comes to algorithms...
it\'s almost always a set of *steps* instead of a network.
So do all mathematical proofs and for that matter proofs of correctness of
software systems - one step at a time built on solid foundations. I had a play
with Z and VDM a few decades ago but found them unwieldy (and distinct
overkill for the reliability we needed).
But, there are tools/technologies that let you express problems
with full parallelism. Granted, as you work on each subproblem
you think serially. But, the tool/technology lets those individual
subproblems come together *correctly* -- if you've embedded the
right dependencies in the expression!
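A minimal sketch of the idea (the task names and graph below are invented):
you state the subproblems and what each depends on; a trivial "tool" -- here
a ready-list scheduler -- assembles them in a correct order, and anything
ready at the same moment is, by construction, safe to run in parallel:

#include <stdio.h>

#define NTASKS 4

typedef struct {
    const char *name;
    void      (*work)(void);
    int         deps[NTASKS];    /* indices of prerequisite tasks */
    int         ndeps;
    int         done;
} Task;

static void acquire(void) { puts("acquire raw data");    }
static void filt_lo(void) { puts("filter low channel");  }
static void filt_hi(void) { puts("filter high channel"); }
static void merge(void)   { puts("merge results");       }

/* The "expression" of the problem: subproblems plus their dependencies. */
static Task tasks[NTASKS] = {
    { "acquire", acquire, { 0 },    0, 0 },
    { "filt_lo", filt_lo, { 0 },    1, 0 },   /* needs acquire       */
    { "filt_hi", filt_hi, { 0 },    1, 0 },   /* needs acquire       */
    { "merge",   merge,   { 1, 2 }, 2, 0 },   /* needs both filters  */
};

static int ready(const Task *t)
{
    for (int i = 0; i < t->ndeps; i++)
        if (!tasks[t->deps[i]].done)
            return 0;
    return 1;
}

int main(void)
{
    int remaining = NTASKS;

    while (remaining > 0) {
        int batch[NTASKS], nbatch = 0;

        /* Everything ready *now* is mutually independent: a smarter
         * scheduler could dispatch the whole batch concurrently.    */
        for (int i = 0; i < NTASKS; i++)
            if (!tasks[i].done && ready(&tasks[i]))
                batch[nbatch++] = i;

        for (int i = 0; i < nbatch; i++) {
            tasks[batch[i]].work();
            tasks[batch[i]].done = 1;
            remaining--;
        }
    }
    return 0;
}

make, HDLs and dataflow languages are the industrial-strength versions of the
same principle: express the dependencies, let the tool find the parallelism.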
Computer programming is almost always procedural. When parallel things
need to be done, it's usually broken into threads or processes with
semaphores, locks, blocks, interrupts, flags, FIFOs, things like that.
Most programmers never use state machines.
You have some very funny ideas. Computer science uses all of the methods
available to it and more besides.
Dunning-Kruger. He's obviously only looked at toy applications...
likely written in simple languages (e.g., BASIC). And, thinks
you solve performance problems by buying faster hardware.
FPGA design is done in synchronous clocked logic in nonprocedural
languages; everything happens everywhere all at once. Crossing a clock
boundary is recognized as something to avoid or handle very carefully.
Computer programming is a lot like old-style hairball async logic
design and has correspondingly many bugs.
And the FPGA program is designed and implemented in the software that you so
despise. How can you possibly trust it to do the right thing?
Ditto the simulations. And, likely heavily relied upon in the
design of the silicon/discretes that's used! I can recall doing
full customs and having to model the effects of temperature,
supply and process variations in all my performance models.
Won't work to design something that runs ONLY at "STP"!
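A toy illustration of that kind of corner sweep (the delay model and every
coefficient below are invented for the sketch, not taken from any real process):

#include <stdio.h>

/* Hypothetical gate-delay model: nominal delay scaled by process corner,
 * supply voltage and temperature sensitivities.  Coefficients invented.  */
static double gate_delay_ns(double k_process, double vdd, double temp_c)
{
    const double d_nom = 1.00;        /* ns at nominal conditions    */
    const double v_nom = 5.0;         /* volts                       */
    const double t_nom = 25.0;        /* degrees C                   */
    const double tc    = 0.002;       /* fractional change per degC  */

    double v_scale = v_nom / vdd;                 /* slower at low supply */
    double t_scale = 1.0 + tc * (temp_c - t_nom);

    return d_nom * k_process * v_scale * t_scale;
}

int main(void)
{
    const double process[] = { 0.8, 1.0, 1.3 };      /* fast/typ/slow     */
    const double supply[]  = { 4.5, 5.0, 5.5 };      /* +/-10% rails      */
    const double temp[]    = { -40.0, 25.0, 85.0 };  /* industrial range  */

    double worst = 0.0, best = 1e9;

    /* Sweep every corner; the design has to meet timing at ALL of them. */
    for (int p = 0; p < 3; p++)
        for (int v = 0; v < 3; v++)
            for (int t = 0; t < 3; t++) {
                double d = gate_delay_ns(process[p], supply[v], temp[t]);
                if (d > worst) worst = d;
                if (d < best)  best  = d;
            }

    printf("delay spread: %.3f ns (best) to %.3f ns (worst)\n", best, worst);
    return 0;
}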
You should be hand coding it manually single bit by bit since you have made the
case so cogently that no software can ever be trusted to work.
I'd be happy for a power supply that didn't shit the bed. I watch
countless devices headed to the tip simply because their designers
couldn't/wouldn't design a supply that "ran forever" (isn't my
software expected to do so??)
Computing languages are fad driven, and that drives good things out of
circulation, a sort of Gresham's Law of computing.
I don't think that is true at all. The older computer languages were
limited by
computing power and hardware available at the time. Modern languages harness
the huge computing resources available today to take some of the tedious grunt
work out of coding and detecting errors.
I think the BSPs, HALs, OSs, etc. are more guilty of that. Folks don't code
on bare metal anymore -- just as they don't put a CPU on a schematic any
longer. They are "sold" the notion that they can treat this API as
a well-defined abstraction -- without ever defining the abstraction well!
They don't know what their implementations "cost" or how to even *measure*
performance -- because they don't know what's involved.
OTOH, a lot of \"coding\" is taught targeting folks who will be building
web pages or web apps where there is no concern for resource management
(it works or it doesn't).
Coding has no theory, no math, and usually little testing. Comments
Software development has a hell of a lot of maths and provably correct software
is essentially just a branch of applied mathematics. It is also expensive and
very difficult to do, and so most practitioners don't do it.
And most employers neither want to hire qualified people nor take their
EXPERT ADVICE on how to tackle particular jobs.
I've had employers/clients treat projects as "time-limited": "So,
what SUBSET of the product do you want to implement?" (clearly,
if you only have X man-hours to throw at a project that requires
Y > X, something just isn't going to get done. Would you like to
make that decision now? Or, live with whatever the outcome happens
to be? Or, get smart and no-bid the job??!)
Ask a coder how long a particular piece of code takes to execute.
(particularly amusing for folks who *claim* their application is HRT, i.e.,
hard real-time; "what guarantees do you have as to meeting your deadline(s)?")
Or, to guesstimate *percentage* of time in each portion of the code.
(do you know how the compiler is likely going to render your code?
do you know what the hardware will do with it? If you *measure*
it, how sure are you that it will perform similarly in all possible
cases?)
Or, how deep the stack penetration. (how can you know how much space to
allocate for the stack if you don't know what worst-case penetration
will be? what do you mean, you're relying on libraries to which you
don't have sources??? how have their needs been quantified?)
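One common embedded answer to the stack question -- offered only as a sketch,
with the region name and size assumed, since real targets define the stack in
the linker script -- is to paint the stack with a sentinel at startup and read
back the high-water mark later:

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

#define STACK_WORDS  1024u
#define STACK_PAINT  0xDEADBEEFu

static uint32_t task_stack[STACK_WORDS];

/* Call once, before the task that uses this stack is started. */
static void stack_paint(void)
{
    for (size_t i = 0; i < STACK_WORDS; i++)
        task_stack[i] = STACK_PAINT;
}

/* Count how many words were ever overwritten (stack assumed to grow
 * downward, from the high end of the region toward the low end).     */
static size_t stack_high_water_words(void)
{
    size_t untouched = 0;

    while (untouched < STACK_WORDS &&
           task_stack[untouched] == STACK_PAINT)
        untouched++;

    return STACK_WORDS - untouched;
}

int main(void)
{
    stack_paint();

    /* Pretend a task ran and consumed 200 words of its stack. */
    for (size_t i = STACK_WORDS - 200; i < STACK_WORDS; i++)
        task_stack[i] = 0;

    printf("high water: %zu of %u words\n",
           stack_high_water_words(), STACK_WORDS);
    return 0;
}

Of course, that only tells you how deep you got in the cases you happened to
exercise; the worst case still has to come from analysis -- which is the point.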
My university's computing department grew out of the maths laboratory and were
exiled to a computer tower when their big machines started to require insane
amounts of power and acolytes to tend to their needs.
Our \"CS\" department was a subset of the EE curriculum. So, you learned
how to design a CPU as well as WHY you wanted it to have a particular set
of features.
On the CS side, you understood why call-by-value and call-by-reference
semantics differed -- and the advantages/consequences of each. And, how
to convert one to another (imagine how to implement by-value syntax
for an argument that was many KB -- to avoid the downside of by-reference
semantics!) What can you do *in* the processor to make these things possible?
What are the costs? Liabilities?
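In C terms (C passes everything by value; the "reference" flavor is a pointer),
the trade looks roughly like this -- the struct size and the names are made up
for the illustration:

#include <stdio.h>
#include <string.h>

/* A "many KB" argument, sized arbitrarily for the illustration. */
typedef struct {
    char samples[8 * 1024];
} Block;

/* By value: the caller's copy can never be disturbed, but every call
 * pays for an 8 KB copy (stack space and time).                      */
static int checksum_byval(Block b)
{
    int sum = 0;
    for (size_t i = 0; i < sizeof b.samples; i++)
        sum += (unsigned char)b.samples[i];
    return sum;
}

/* By reference: cheap to pass, but now the callee *could* scribble on
 * the caller's data; 'const' restores part of the by-value guarantee. */
static int checksum_byref(const Block *b)
{
    int sum = 0;
    for (size_t i = 0; i < sizeof b->samples; i++)
        sum += (unsigned char)b->samples[i];
    return sum;
}

int main(void)
{
    static Block blk;                  /* static: keep it off the stack */
    memset(blk.samples, 1, sizeof blk.samples);

    printf("%d %d\n", checksum_byval(blk), checksum_byref(&blk));
    return 0;
}

The processor/ABI question is then where that copy lives, who makes it, and
whether hardware (or a copy-on-write scheme) can defer it.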
are rare and usually illiterate. Bugs are normal because we can always
fix them in the next weekly or daily release. Billion dollar projects
literally crash from dumb bugs. We are in the Dark Ages of
programming.
And what's the excuse for power supplies that fail?
I reckon more like Medieval cathedral building - if it's still standing after 5
years then it was a good 'un. If it falls down or the tower goes wonky, next
time make the foundations and lower walls a bit thicker.
Why do you derate components instead of using them at their rated
limits? Ans: because experience has TAUGHT you to do so.
The same sorts of practices exist in software engineering -- for folks
who are aware of them. And, they provide the same sorts of reliability
(robustness).
I built a bar-code reader into a product many decades ago. As cost was
ALWAYS an issue, it was little more than an optical (reflective) sensor
conditioned by a comparator that noticed black/white levels and AGC'd
the signal into a single digital "level".
That directly fed an interrupt.
That ran continuously (because it would be a crappy UI if the
user had to push a button to say "I want to scan a barcode, now!")
The design targeted a maximum scan rate of 100 ips. Bar transitions
could occur at (worst case) ~7 microsecond intervals. (40 year
old processors!)
And, nothing to prevent a malicious user from rubbing a label across
the detector -- back and forth -- as fast as humanly possible (just to
piss off the software and/or "prove" it to be defective: "If it doesn't
handle 300 ips, how do we know it is correctly handling 100 ips?")
Yup. You could consume 100% of real-time by doing so. But, the
processor wouldn't crash. Data wouldn't be corrupted. And, when
your arm eventually got tired, you'd look up to see the correct
barcode value displayed!
Because the system was *designed* to handle overload *gracefully*.
Ever see a PC handle a hung disk?
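The shape of such an ISR, as a hedged sketch (the names, sizes and timer below
are invented stand-ins; the real hardware was far cruder): capture edge
timestamps into a fixed ring and, when overloaded, shed the excess instead of
corrupting anything:

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define RING_SIZE 64u

static volatile uint16_t edge_time[RING_SIZE];
static volatile uint8_t  head, tail;   /* ISR advances head, task advances tail */
static volatile uint16_t dropped;      /* edges shed while overloaded           */

/* Stub standing in for a free-running hardware microsecond timer. */
static uint16_t read_timer_us(void)
{
    static uint16_t fake;
    return fake += 7;                  /* pretend edges ~7 us apart */
}

/* Called on every black/white transition from the comparator. */
static void edge_isr(void)
{
    uint8_t next = (uint8_t)((head + 1u) % RING_SIZE);

    if (next == tail) {
        /* Ring full: past 100% of the budgeted rate.  Shed this edge and
         * note it -- the decoder sees an implausible bar sequence and
         * restarts, but nothing is corrupted and nothing else is starved. */
        dropped++;
        return;
    }

    edge_time[head] = read_timer_us();
    head = next;
}

/* Background task: pull timestamps and feed the bar/space decoder. */
static bool next_edge(uint16_t *t_us)
{
    if (tail == head)
        return false;                  /* nothing pending */

    *t_us = edge_time[tail];
    tail = (uint8_t)((tail + 1u) % RING_SIZE);
    return true;
}

int main(void)
{
    /* Simulate a burst far beyond the ring's capacity (malicious scanning). */
    for (int i = 0; i < 500; i++)
        edge_isr();

    uint16_t t;
    unsigned kept = 0;
    while (next_edge(&t))
        kept++;

    printf("kept %u edges, shed %u\n", kept, (unsigned)dropped);
    return 0;
}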
UK emergency 999 system went down on Sunday morning (almost certainly a
software update gone wrong) and, guess what, the backup system didn't work
properly either. It took them ~3 hours to inform the government too!
https://www.publictechnology.net/articles/news/nhs-launches-‘full-investigation’-90-minute-999-outage
It affected all the UK emergency services not just NHS.
Same happened with passport control a couple of weeks ago - a fault deemed too
\"sensitive\" (ie embarrassing) to disclose how it happened.
https://www.bbc.co.uk/news/uk-65731795
Most often, these are \"people failures\". Someone failed to perform a
step in a procedure that was indicated/mandated.
Who said \"Anybody can learn to code\" ?
It is true that anybody can learn to code, but there are about three orders of
magnitude difference between the best professional coders (as you disparagingly
choose to call them) and the worst ones. I prefer the description software
engineer, although I am conscious that many journeyman coders are definitely not
doing engineering or anything like it!
How many technicians design custom silicon?
I have known individuals who quite literally had to be kept away from important
projects because their ham-fisted "style" of hack it and be damned would break
the whole project resulting in negative progress.
We had a guy who was perpetually *RE*bugging the floating point libraries
in our products (we treated software modules as components -- with specific
part numbers catalogued and entered into "inventory". Why reinvent the
wheel for every project?) It got to the point that we would track the
\"most recent KNOWN GOOD release and always avoid the \"latest\".
One of the snags is that at university level anyone who has any aptitude for
the subject at all can hack their assessment projects out of solid code in no
time flat - i.e., ignore all the development processes they are supposed to have
been taught. You can get away with murder on something that requires less than
3 man-months of work and no collaboration.
And, no *followup*!
Conversely, writing a piece of code that can stand for years/decades
and be understood by those that follow is a *skill*. When your product
life is measured in a few years, you're never really "out of development".
*Designing* a solution that can stand the test of time is a considerable
effort. FAT12, FAT16, FAT32, exFAT, NTFS, etc. Each an embarrassing
admission that the designers had no imagination to foretell the inevitable!
[How many gazillions of man-hours have developers AND USERS wasted
on short-sighted implementation decisions? Incl. those that have some
"rationale" behind them?]