dead programming languages...

On a sunny day (Fri, 24 Feb 2023 09:19:39 -0800) it happened John Larkin
<jlarkin@highlandSNIPMEtechnology.com> wrote in
<41shvhtp8m6edsp9ilhrfjqbp5uikoltc0@4ax.com>:

On Fri, 24 Feb 2023 06:07:48 GMT, Jan Panteltje <alien@comet.invalid
wrote:

On a sunny day (Thu, 23 Feb 2023 08:54:20 -0800) it happened John Larkin
jlarkin@highlandSNIPMEtechnology.com> wrote in
266fvhl8gae2sdj0ecp7n511phphmkg47i@4ax.com>:

On Thu, 23 Feb 2023 06:34:25 GMT, Jan Panteltje <alien@comet.invalid
wrote:

On a sunny day (Wed, 22 Feb 2023 11:05:30 -0800) it happened John Larkin
jlarkin@highlandSNIPMEtechnology.com> wrote in
3opcvh111k7igirlsm6anc8eekalofvtcj@4ax.com>:

https://en.wikipedia.org/wiki/Timeline_of_programming_languages


Now I'm told that we should be coding hard embedded products in C++ or
Rust.

Cplushplush is a crime against humanity
C will do better
But asm is the thing, it will always be there
and gives you full control.
It is not that hard to write an integer math library in asm..

I did that for the 68K. The format was signed 32.32. That worked great
for control systems. Macros made it look like native instructions.
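For illustration, a minimal sketch of what signed 32.32 fixed-point multiplication looks like in C-style C++ (not the original 68K macros); the fix64_* names are made up, and the 128-bit intermediate relies on the GCC/Clang __int128 extension:

#include <cstdint>
#include <cstdio>

// Minimal signed 32.32 fixed-point sketch: value = raw / 2^32.
// fix64_from_int / fix64_mul are illustrative names, not from the thread.
typedef int64_t fix64;                          // Q32.32

static inline fix64 fix64_from_int(int32_t i)   { return (fix64)i << 32; }
static inline double fix64_to_double(fix64 a)   { return (double)a / 4294967296.0; }

static inline fix64 fix64_mul(fix64 a, fix64 b)
{
    // 128-bit intermediate keeps the full product, then drop 32 fraction bits.
    __int128 p = (__int128)a * (__int128)b;     // GCC/Clang extension
    return (fix64)(p >> 32);
}

int main(void)
{
    fix64 x = fix64_from_int(3) + (fix64_from_int(1) >> 1);      // 3.5
    fix64 y = fix64_from_int(2);
    printf("3.5 * 2 = %f\n", fix64_to_double(fix64_mul(x, y)));  // 7.000000
    return 0;
}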

But asm is clumsy for RISC CPUs. Plain bare-metal C makes sense for
small instruments.

True, I have no experience with asm on my Raspberries, for example.
But lots of C; gcc is a nice compiler.
Most (all?) things I wrote for x86 in C also compile and run on the Raspberries.
Some minor makefile changes were required for the latest gcc version.

I'm planning a series of PoE aerospace instruments based on Pi Pico
and the WizNet ethernet chip. You could help if you're interested.

The dev system will be a Pi4.

Yes, we discussed your Amazon Pi-4 before..
Sure, if I can help I will; after all, you inspired me to use some Minicircuits RF stuff :)
Just ask here.
I do not have a Pi Pico though... no experience with that.
I have 2 Pi4's, one 4 GB and one 8 GB version.
The latter I use for web browsing; the former records security cameras, weather, air-traffic,
radiation, GPS position, etc.
The 8 GB one is also used as internet router for the LAN.
And I have a whole lot of older Raspberries...
Most Pis are on 24/7 on a UPS; the Pi4s each have a 4 TB hard disk connected via USB.


https://www.amazon.com/MARSTUDY-Raspberry-Model-Ultimate-Pre-Installed/dp/B0BB912MV1/ref=sr_1_3?crid=3FKMZETXZJ1B9&keywords=marstudy+raspberry+pi+4+model+b+ultimate+starter+kit&qid=1677259015&sprefix=MARSTUDY+Raspberry+Pi+4+Model+B%2Caps%2C143&sr=8-3

I ordered one and it was up and running their dev software in 10
minutes.

My website is back up, now at
www.panteltje.nl
and
www.panteltje.online
Some projects can be downloaded from:
https://panteltje.nl/panteltje/newsflex/download.html

Still some work needed on the new site.
 
On 25.02.23 at 05:57, John Larkin wrote:
On Fri, 24 Feb 2023 21:06:13 -0700, Don Y

I designed a CPU from MSI TTL logic, for a marine data logger. It had
a 20 KHz 4-phase clock and three opcodes.

We did a stack CPU patterned after Andrew Tanenbaum's
"Experimental Machine", slightly simplified and only 16 bit,
in HP's dynamic N-MOS process, as a group project.
Spice on a CDC Cyber 276.

On the Multi-project wafer, we inherited a metal bar from
a neighbour project, so we did not have to debug it.
Design rule checkers were not up to the task, then.

Gerhard
 
On 2/24/2023 11:13 PM, Gerhard Hoffmann wrote:
i4004...that brings back memories/nightmares of all-night sessions... :-(
toggle switch code entry!

We burned small ROMs and plugged into prototype... and hoped!  :

In my case, i2708. A friend and I wrote a floating point package
for the 8080 this way. Later, it was replaced by an AMD Am9511.
That IC took a lot of iterations, they were not bug compatible.
What healed one version provoked errors in another one.

We used 1702s. IIRC, 256x8? The notion that you could *erase*
it was mind-blowing!

Later we got in-circuit emulators from tek, intel and HP.
The Pascal compiler for the HP64000 was the crappiest piece of
it that I have ever seen.

I thought the HP64000 was the crappiest piece of "it"... :>

I've only had to "toggle" code into minicomputers.  You quickly
learn to make your programs REALLY TINY as each "STORE" operation
is a potential for a screwup!  (and, reading back the code as
a string of lights isn't very convenient!)

I am just amazed (and chagrined!) at how slow the processor
was -- yet how *fast* we considered it to be!  <frown>  I
guess everything truly *is* relative!

At the univ we had a PDP11/40E. Someone wrote a p-code machine
for it in microprogram. It was blazing fast for the time, at
least when it wasn\'t too hot.

We had a 6180 that served the majority of student needs.
It was surprisingly responsive given the number of interactive
terminals that could be in use at a given time. I always liked
the fact that I could hammer on the ESC key and get a new shell,
stacking the existing *paused* shell in the background. I'm
sure the sysadmins groaned about how many hundreds of (would be)
zombie processes were typically paged out (consuming resources).

Manufacturers were eager to get their machines into the schools'
hands (IBM was annoyed that they initially went with the 645
and boxed them out of the business!). So, damn near every class
was taught on a different machine, with different tools, under
a different OS, etc. Talk about needless duplication of effort!

There were only 5 PDP11/40Es in the world and we had 2 of them,
both CPUs on the same Unibus; apart from tests, only one was working.
E means microprogrammable.
Usually it ran an obscure operating system called Unix V6,
the tape directly from K&R.

The sources for the sixth edition of UNIX have been "around" for
quite some time (dating back to its original use). IIRC,
it has even been "formally published".
 
On Fri, 24 Feb 2023 10:47:01 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:

This leads to people rejiggering their code. Or, rejiggering
"priorities" on naive RTOSs (a task's priority should be inherently
designed, not something that needs to be "tuned" based on observations
of how things perform).

What is so hard about assigning RTOS thread priorities?

Start by finding the least time-critical threads and assign the
lowest priorities to them (or move their work into the NULL task). After this,
there will usually be only one or two threads requiring high priority.

The higher a thread's priority, the shorter the time it should run. If a
thread needs high priority and executes for a long time, split the
thread into two or more parts: one part that can run longer at a lower
priority and one that executes quickly at a higher priority.
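As a rough sketch of that split, assuming a FreeRTOS-style API (the thread does not name an RTOS); read_adc(), log_and_filter(), the priorities and the queue depth are all invented for illustration:

#include <stdint.h>
#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"

// The short, high-priority task takes the sample and hands it off;
// the longer, low-priority task does the slow work on its own time.
static QueueHandle_t sampleQueue;

extern uint16_t read_adc(void);            // assumed board-specific helper
extern void log_and_filter(uint16_t raw);  // the "long" part of the work

static void FastSampleTask(void *arg)      // high priority, short execution
{
    (void)arg;
    for (;;) {
        uint16_t raw = read_adc();
        xQueueSend(sampleQueue, &raw, 0);  // never block at high priority
        vTaskDelay(pdMS_TO_TICKS(1));      // ~1 kHz (vTaskDelayUntil would be strictly periodic)
    }
}

static void SlowWorkTask(void *arg)        // low priority, may run long
{
    (void)arg;
    uint16_t raw;
    for (;;) {
        if (xQueueReceive(sampleQueue, &raw, portMAX_DELAY) == pdTRUE)
            log_and_filter(raw);
    }
}

void start_tasks(void)
{
    sampleQueue = xQueueCreate(32, sizeof(uint16_t));
    xTaskCreate(FastSampleTask, "fast", 256, NULL, configMAX_PRIORITIES - 1, NULL);
    xTaskCreate(SlowWorkTask,   "slow", 512, NULL, 1, NULL);
}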

In most cases, you can have only the ISR and one or two highest
priority threads running in hard RT; the rest of the threads are more or less
soft RT. Calculating the latencies for the second (and third) highest
priority threads is quite demanding, since you must count the ISR
worst case execution time as well as the top priority thread worst
case execution time and the thread switching times. The sum of worst
case execution times quickly becomes so large that the lower priority
threads can only be soft RT, while still providing reasonable average
performance.

Monitoring how long the RTOS spends in the NULL task gives a quick
view of how much time is spent in the various higher priority threads. Spending
less than 50 % in the null task should prompt a study of how the high
priority threads are running.
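A rough sketch of that quick check, assuming FreeRTOS's idle hook (configUSE_IDLE_HOOK = 1); the calibration constant is a placeholder to be measured on the actual board:

#include <stdint.h>
#include <stdio.h>
#include "FreeRTOS.h"
#include "task.h"

// Count idle-hook passes per second and compare against a figure measured
// on an otherwise idle system. The calibration number below is a placeholder.
static volatile uint32_t idleCount;

extern "C" void vApplicationIdleHook(void)   // requires configUSE_IDLE_HOOK = 1
{
    idleCount++;
}

static void LoadMonitorTask(void *arg)
{
    (void)arg;
    const uint32_t idleCountWhenUnloaded = 1000000;   // calibrate on your board
    for (;;) {
        idleCount = 0;
        vTaskDelay(pdMS_TO_TICKS(1000));
        uint32_t pctIdle = (uint32_t)((100ull * idleCount) / idleCountWhenUnloaded);
        if (pctIdle < 50)
            printf("warning: only %u%% idle\n", (unsigned)pctIdle);
    }
}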
 
On Fri, 24 Feb 2023 12:43:59 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:

On 2/24/2023 3:25 AM, upsidedown@downunder.com wrote:
In industrial control systems in addition to the actual measured value
(e.g. from an ADC) you also have a separate data quality variable and
often also a time stamp (sometimes with a time quality field,
resolution, synched etc.).

In the 8/16/32 bit data quality variable, you can report e.g.
overflow, underflow, frozen (old) values, faulty values (e.g. ADC
reference missing or input cable grounded), manually forced values, etc.
The actual value, data quality and time stamp are handled as a unit
through the application.

So, you pass structs/rudimentary objects instead of "data values".

If some internal calculation causes overflow, the overflow bit can be
set in the data quality variable, or some special value substituted and a
derived-value data quality bit set.

When the result is to be used for control, the data quality bits can
be analyzed to determine whether the value can be used for control or whether
some other action must be taken. Having a data quality word with every
variable makes it possible to have special handling of some faulty
signals without shutting down the whole system after the first error in
some input or calculation. Such systems can run for years without
restarts.
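A minimal C++ sketch of such a value-plus-quality-plus-timestamp record; the flag bits and field names are invented for illustration, real standards define their own encodings:

#include <cstdint>

// Sketch of the "measurement travels with its quality" idea described above.
enum QualityBits : uint32_t {
    Q_OVERFLOW   = 1u << 0,
    Q_UNDERFLOW  = 1u << 1,
    Q_FROZEN     = 1u << 2,   // stale/old value
    Q_SENSOR_BAD = 1u << 3,   // reference missing, cable grounded, ...
    Q_FORCED     = 1u << 4,   // manually substituted value
    Q_DERIVED    = 1u << 5,   // computed from an already-suspect input
};

struct Measurement {
    double   value;
    uint32_t quality;         // OR of QualityBits, 0 = good
    uint64_t timestamp_ns;    // some systems add a time-quality field too
};

// Propagate quality through a calculation instead of stopping the system.
inline Measurement scale(const Measurement &in, double k)
{
    Measurement out{in.value * k, in.quality, in.timestamp_ns};
    if (out.quality != 0)
        out.quality |= Q_DERIVED;
    return out;
}

// The consumer decides: use the value, hold the last output, or alarm.
inline bool usable_for_control(const Measurement &m)
{
    return (m.quality & (Q_SENSOR_BAD | Q_FROZEN)) == 0;
}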

I've always worked with a strict definition of the accuracy and precision
at which the system must be calibrated. So, at the point of data acquisition,
you could determine whether the measurement "made sense" or not; if not,
then something is broken.

In a multi-vendor environment in which standards allow all kinds of
variations, you must be able to handle whatever representation might
be received. A field device replacement might have different
characteristics. It is not acceptable to restart a whole system just
because a single peripheral device was replaced.
 
On Fri, 24 Feb 2023 10:58:06 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:

Folks in the desktop world had a tough time adjusting to multitasking
(as most were writing monolithic programs, not designing "systems").
Esp to preemptive multitasking ("you mean the time between statements
22 and 23 can be, conceivably, HOURS??")

The same applies also to programmers with mainframe / batch
background. When assigned to a multitasking project, you had to keep
an eye on them for months, so that they would not e.g. use busy loops.

Programming on PDP-11 systems (64 KiB application address space)
seemed to cause a lot of problems to them, trying to squeeze a large
program into that address space with huge overlay loading trees. It
was much easier to split the program into multiple tasks, each much
smaller than 64 KiB :).
 
In article <sHAJL.1581433$GNG9.839316@fx18.iad>,
bitrex <user@example.net> wrote:
On 2/22/2023 2:05 PM, John Larkin wrote:
https://en.wikipedia.org/wiki/Timeline_of_programming_languages


Now I'm told that we should be coding hard embedded products in C++ or
Rust.

User-defined strong types that enforce their own usage are probably
worth the price of admission alone; e.g. quantities in newton-meters
should be of type NewtonMeters and foot-pounds should be FootPounds, and
casually performing operations with the two causes a compile error.

Internal units must be consistent. When entering a measurement of energy
you can have the choice between input in newton-meters or foot-pounds,
but you should always be able to trust that the energy is in N·m (J) in internal
calculations.

It is not a problem to accept inputs that are given in newton-meters
or foot-pounds. A fatal error is adding newton-meters to meters/second.
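A small C++ sketch of the strong-type idea, converting foot-pounds to N·m once at the input boundary; the class names echo the earlier post, the rest is invented:

// Energy/torque is stored in N·m internally; foot-pounds only exist at the
// input boundary. Mixing in a bare double (or a meters/second type) fails
// to compile.
struct NewtonMeters {
    double value;                       // always N·m (J) internally
    explicit NewtonMeters(double v) : value(v) {}
};

struct FootPounds {
    double value;
    explicit FootPounds(double v) : value(v) {}
};

// Conversion happens exactly once, at the edge of the system.
inline NewtonMeters to_internal(FootPounds fp)
{
    return NewtonMeters(fp.value * 1.3558179483314004);   // 1 ft·lbf in N·m
}

inline NewtonMeters operator+(NewtonMeters a, NewtonMeters b)
{
    return NewtonMeters(a.value + b.value);
}

int main()
{
    NewtonMeters total = NewtonMeters(10.0) + to_internal(FootPounds(5.0));
    // NewtonMeters bad  = NewtonMeters(10.0) + 3.0;              // does not compile
    // NewtonMeters bad2 = NewtonMeters(10.0) + FootPounds(5.0);  // nor does this
    (void)total;
    return 0;
}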

I was the architect of a program (1980) that optimised the total output
possible of the Brent drilling islands.
The outcome was in m3/sec, actually.
The output was presented in kilobarrels/day.
The input was rife with imperial stuff.

(I encountered an exception to the rule, once. You can comfortably
calculate with an electron charge (10E-16 C). But you run into
trouble if you try to interpolate with a 6th-degree polynomial
on some machines with fp that doesn't go lower than 10E-80.
The solution is to use pC (picocoulomb) and document that.)

Groetjes Albert
--
Don't praise the day before the evening. One swallow doesn't make spring.
You must not say "hey" before you have crossed the bridge. Don't sell the
hide of the bear until you shot it. Better one bird in the hand than ten in
the air. First gain is a cat spinning. - the Wise from Antrim -
 
On Sat, 25 Feb 2023 07:33:11 +0100, Gerhard Hoffmann <dk4xp@arcor.de>
wrote:

On 25.02.23 at 05:57, John Larkin wrote:
On Fri, 24 Feb 2023 21:06:13 -0700, Don Y

I designed a CPU from MSI TTL logic, for a marine data logger. It had
a 20 KHz 4-phase clock and three opcodes.

We did a stack CPU patterned after Andrew Tanenbaum's
"Experimental Machine", slightly simplified and only 16 bit,
in HP's dynamic N-MOS process, as a group project.
Spice on a CDC Cyber 276.

On the Multi-project wafer, we inherited a metal bar from
a neighbour project, so we did not have to debug it.
Design rule checkers were not up to the task, then.

Gerhard

I didn't simulate mine. I just checked it carefully and it worked.

It had no ALU. The ADC output was ASCII so one could just move bytes
to the printer. It grounded the wait line until it was ready for
another character.

Next-gen data loggers used an MC6800 (slow depletion-load) uP and ran
MIDGET, my tiny RTOS. The RTOS and the application were a single
monolithic assembly. Not a lot of abstraction.
 
In article <ttamag$28fr8$1@dont-email.me>,
Martin Brown <'''newspam'''@nonad.co.uk> wrote:
On 23/02/2023 16:43, John Larkin wrote:
On Wed, 22 Feb 2023 20:33:57 -0700, Don Y
blockedofcourse@foo.invalid> wrote:

On 2/22/2023 8:15 PM, Sylvia Else wrote:
But can you afford the memory and time overheads inherent in run-time range
checks of things like array accesses?

That's a small cost. Modern tools can often make (some) of those
tests at compile time.

The bigger problem with many newer languages is that they rely heavily
on dynamic memory allocation, garbage collection, etc.

And, most of the folks I've met can't look at a line of arbitrary
code and tell you -- with *confidence* -- that they know what it
costs to execute, regardless of how comfortable they are with
the language in question.

Programmers typically can't estimate run times for chunks of their
code. They typically guess pessimistically, by roughly 10:1.

Anyone who is serious about timing code knows how to read the free
running system clock. RDTSC in Intel CPUs is very handy (even if they
warn against using it for this purpose it works very well).

Other CPUs have equivalent performance monitoring registers although
they may be hidden in some fine print in dark recesses of the manual.

On at least the Raspberry Pi (1) and on the Raspberry One Plus
they are prominently present in the hardware manual. Basically
you read from an address. The Raspberry Pi Pico system development
kit devotes a whole chapter to clocks.
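For reference, a minimal sketch of reading the free-running counter mentioned above (RDTSC) with GCC/Clang on x86; no serialization (CPUID/LFENCE) and no cycles-to-time conversion is shown:

#include <cstdint>
#include <cstdio>
#include <x86intrin.h>   // __rdtsc(), GCC/Clang on x86

// Time a code fragment with the TSC; this only shows the basic read.
static inline uint64_t cycles_now(void) { return __rdtsc(); }

int main(void)
{
    volatile double x = 1.0;
    uint64_t t0 = cycles_now();
    for (int i = 0; i < 1000; ++i)
        x = x * 1.000001 + 0.5;          // the code under test
    uint64_t t1 = cycles_now();
    printf("approx %llu cycles\n", (unsigned long long)(t1 - t0));
    return 0;
}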

I had success bit-banging mechanical instruments (2 metallophones
and an organ) using the RDTSC, but these were synchronous, i.e.
there was a separate clock signal. I attempted to generate a MIDI signal for
a keyboard (on the same parallel port) and the periods were
a clean 32 us duration, measured to the precision of a 20 MHz
logic analyser, using one of 8 cores.
However the signal was spoiled by a periodic 1 ms interruption.
I tried to restrict booting to 7 processors, mapped all the
hardware interrupts away from the 8th processor and used the
'taskset' utility .. to no avail.
A while ago I succeeded in flashing the MSP430 with bit-banging.
This now fails also.
The jury is still out but I suspect systemd is the culprit.

--
Martin Brown
Groetjes Albert
--
Don't praise the day before the evening. One swallow doesn't make spring.
You must not say "hey" before you have crossed the bridge. Don't sell the
hide of the bear until you shot it. Better one bird in the hand than ten in
the air. First gain is a cat spinning. - the Wise from Antrim -
 
On Sat, 25 Feb 2023 00:59:20 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:

On 2/24/2023 11:13 PM, Gerhard Hoffmann wrote:
i4004...that brings back memories/nightmares of all-night sessions... :-(
toggle switch code entry!

We burned small ROMs and plugged into prototype... and hoped!  :

In my case, i2708. A friend and I wrote a floating point package
for the 8080 this way. Later, it was replaced by an AMD Am9511.
That IC took a lot of iterations, they were not bug compatible.
What healed one version provoked errors in another one.

We used 1702s. IIRC, 256x8? The notion that you could *erase*
it was mind-blowing!

Later we got in-circuit emulators from tek, intel and HP.
The Pascal compiler for the HP64000 was the crappiest piece of
it that I have ever seen.

I thought the HP64000 was the crappiest piece of "it"... :

HP, Intel, and other people designed super-CISC microcoded computers
that were awful. Their horribleness is probably what inspired RISC.


I've only had to "toggle" code into minicomputers.  You quickly
learn to make your programs REALLY TINY as each "STORE" operation
is a potential for a screwup!  (and, reading back the code as
a string of lights isn't very convenient!)

I am just amazed (and chagrined!) at how slow the processor
was -- yet how *fast* we considered it to be!  <frown>  I
guess everything truly *is* relative!

At the univ we had a PDP11/40E. Someone wrote a p-code machine
for it in microprogram. It was blazing fast for the time, at
least when it wasn\'t too hot.

We had a 6180 that served the majority of student needs.
It was surprisingly responsive given the number of interactive
terminals that could be in use at a given time. I always liked
the fact that I could hammer on the ESC key and get a new shell,
stacking the existing *paused* shell in the background. I'm
sure the sysadmins groaned about how many hundreds of (would be)
zombie processes were typically paged out (consuming resources).

Manufacturers were eager to get their machines into the schools'
hands (IBM was annoyed that they initially went with the 645
and boxed them out of the business!). So, damn near every class
was taught on a different machine, with different tools, under
a different OS, etc. Talk about needless duplication of effort!

There were only 5 PDP11/40Es in the world and we had 2 of them,
both CPUs on the same Unibus; apart from tests, only one was working.
E means microprogrammable.
Usually it ran an obscure operating system called Unix V6,
the tape directly from K&R.

The sources for the sixth edition of UNIX have been "around" for
quite some time (dating back to its original use). IIRC,
it has even been "formally published".

The first PDP-11 was the 11/20, and I had something like serial number
11. We ordered the standard 4K words of core but they shipped 8K by
mistake. I ran steamship throttle control simulations in Focal, which
was an amazing language.

We later ran the RSTS time-share system on the 11/20. It would run for
months between power fails. DEC should have dominated computing but
screwed it up, so we got the Intel+Microsoft bag-of-worms that we now
have.

What's crazy is that a computer, including the first IBM PCs, would
power up and be ready to go in a couple of seconds. Now, with 4000x
the compute power, it takes minutes.

One thing about these old stories is that they remind us of what might
have been. Sigh.
 
In article <acjjvhl0gbmmuuafensbc5h81c5fumb1k6@4ax.com>,
<upsidedown@downunder.com> wrote:
On Fri, 24 Feb 2023 10:47:01 -0700, Don Y
blockedofcourse@foo.invalid> wrote:


This leads to people rejiggering their code. Or, rejiggering
"priorities" on naive RTOSs (a task's priority should be inherently
designed, not something that needs to be "tuned" based on observations
of how things perform).

What is so hard about assigning RTOS thread priorities?

Start by searching for least time critical timing and assign the
lowest priorities to them (or move it into the NULL task). After this,
there will usually be only one or two threads requiring high priority.

The higher a thread priority is, the shorter time it should run. If a

A dogma that is repeated without proof.

I programmed the delay line of the Paranal telescopes (ESO project).
It spent at least 30 % of its time at the highest priority
level doing floating point calculations for the position of the mirror,
and sent that message out swiftly.
The point is, high priority is exactly what it is: high priority.
Receiving messages, calculating the next base position, user interaction --
all that can wait.

thread needs high priority and executes for a long time, split the
thread in two or more parts, one part that can run longer on a lower
priority and one that executes quickly at a higher priority.

Makes no sense. There is no reason it can be split in this way.
In this case the interrupt was a message from the metrology system
saying where the mirror currently was. All calculations depended on this
value. No reason to be distracted by disk reading or whatever.

Groetjes Albert
--
Don't praise the day before the evening. One swallow doesn't make spring.
You must not say "hey" before you have crossed the bridge. Don't sell the
hide of the bear until you shot it. Better one bird in the hand than ten in
the air. First gain is a cat spinning. - the Wise from Antrim -
 
On a sunny day (Sat, 25 Feb 2023 16:28:45 +0100) it happened
albert@cherry.(none) (albert) wrote in
<nnd$7c04bc09$6aa2525f@c37458a0375e8e8e>:

In article <ttamag$28fr8$1@dont-email.me>,
Martin Brown <'''newspam'''@nonad.co.uk> wrote:
On 23/02/2023 16:43, John Larkin wrote:
On Wed, 22 Feb 2023 20:33:57 -0700, Don Y
blockedofcourse@foo.invalid> wrote:

On 2/22/2023 8:15 PM, Sylvia Else wrote:
But can you afford the memory and time overheads inherent in run-time range
checks of things like array accesses?

That's a small cost. Modern tools can often make (some) of those
tests at compile time.

The bigger problem with many newer languages is that they rely heavily
on dynamic memory allocation, garbage collection, etc.

And, most of the folks I've met can't look at a line of arbitrary
code and tell you -- with *confidence* -- that they know what it
costs to execute, regardless of how comfortable they are with
the language in question.

Programmers typically can't estimate run times for chunks of their
code. They typically guess pessimistically, by roughly 10:1.

Anyone who is serious about timing code knows how to read the free
running system clock. RDTSC in Intel CPUs is very handy (even if they
warn against using it for this purpose it works very well).

Other CPUs have equivalent performance monitoring registers although
they may be hidden in some fine print in dark recesses of the manual.

On at least the Raspberry Pi (1) and on the Raspberry One Plus
they are prominently present in the hardware manual. Basically
you read from an address. The Raspberry Pi Pico system development
kit devotes a whole chapter to clocks.

I had success bit-banging mechanical instruments (2 metallophones
and an organ) using the RDTSC, but these were synchronous, i.e.
there was a separate clock signal. I attempted to generate a MIDI signal for
a keyboard (on the same parallel port) and the periods were
a clean 32 us duration, measured to the precision of a 20 MHz
logic analyser, using one of 8 cores.
However the signal was spoiled by a periodic 1 ms interruption.
I tried to restrict booting to 7 processors, mapped all the
hardware interrupts away from the 8th processor and used the
'taskset' utility .. to no avail.
A while ago I succeeded in flashing the MSP430 with bit-banging.
This now fails also.
The jury is still out but I suspect systemd is the culprit.

The simple way around irregular output is a FIFO, like I do here:
https://panteltje.nl/panteltje/raspberry_pi_dvb-s_transmitter/
All the Raspberry does is keep the FIFO buffer filled.
That filling is interrupted on a regular basis by the task switch, allowing anything
else to run.
If you do have an FPGA and have coded a processor in the FPGA, then you basically can also
use part of it as a FIFO, so you need fewer external chips and glue logic.
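A minimal single-producer/single-consumer ring-buffer sketch of that FIFO idea in C++; the size and names are arbitrary, and in the transmitter above the real FIFO sits in external hardware:

#include <atomic>
#include <cstdint>
#include <cstddef>

// Lock-free single-producer/single-consumer FIFO: the filler thread tops it
// up whenever it gets scheduled, the consumer drains it at a steady rate,
// and short scheduling gaps are absorbed by the buffered bytes.
template <size_t N>                       // N must be a power of two
class SpscFifo {
    uint8_t buf[N];
    std::atomic<size_t> head{0};          // written by producer only
    std::atomic<size_t> tail{0};          // written by consumer only
public:
    bool push(uint8_t b) {
        size_t h = head.load(std::memory_order_relaxed);
        if (h - tail.load(std::memory_order_acquire) == N)
            return false;                 // full
        buf[h & (N - 1)] = b;
        head.store(h + 1, std::memory_order_release);
        return true;
    }
    bool pop(uint8_t &b) {
        size_t t = tail.load(std::memory_order_relaxed);
        if (head.load(std::memory_order_acquire) == t)
            return false;                 // empty
        b = buf[t & (N - 1)];
        tail.store(t + 1, std::memory_order_release);
        return true;
    }
};

// Usage sketch: the producer calls fifo.push() in bursts, the consumer pops
// one byte per output tick, so producer jitter never reaches the output.
static SpscFifo<4096> fifo;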
 
On Sat, 25 Feb 2023 15:02:48 +0200, upsidedown@downunder.com wrote:

On Fri, 24 Feb 2023 10:58:06 -0700, Don Y
blockedofcourse@foo.invalid> wrote:

Folks in the desktop world had a tough time adjusting to multitasking
(as most were writing monolithic programs, not designing "systems").
Esp to preemptive multitasking ("you mean the time between statements
22 and 23 can be, conceivably, HOURS??")

The same applies also to programmers with mainframe / batch
background. When assigned to a multitasking project, you had to keep
an eye on them for months, so that they would not e.g. use busy loops.

Programming on PDP-11 systems (64 KiB application address space)
seemed to cause a lot of problems to them, trying to squeeze a large
program into that address space with huge overlay loading trees. It
was much easier to split the program into multiple tasks, each much
smaller than 64 KiB :).

A lot of overlay thrashing was audible. Things would rattle in the
racks as disc heads flailed.

For some people, complexity becomes a fun game, which is why we have
about 6000 increasingly-abstract computer languages.

A better game is to brainstorm for simplicity. Not many people enjoy
that.
 
On 2/25/2023 2:29 AM, upsidedown@downunder.com wrote:
On Fri, 24 Feb 2023 10:47:01 -0700, Don Y
blockedofcourse@foo.invalid> wrote:


This leads to people rejiggering their code. Or, rejiggering
"priorities" on naive RTOSs (a task's priority should be inherently
designed, not something that needs to be "tuned" based on observations
of how things perform).

What is so hard about assigning RTOS thread priorities?

Start by searching for least time critical timing and assign the
lowest priorities to them (or move it into the NULL task). After this,
there will usually be only one or two threads requiring high priority.

Let's assume a system with only periodic tasks.
Further, assume each task's deadline is generous: finish
before you are next expecting to be "made ready".
(I.e., a 1 kHz task has a whole millisecond to do its work.
Note that this may not always be the case; a task could be
made ready -- released -- every millisecond but only allowed
20 us to meet its deadline!)

To save words, have a look at Koopman's example, here:

<https://betterembsw.blogspot.com/2014/05/real-time-scheduling-analysis-for.html>

He addresses RMA but there are similar problems for all of the
scheduling algorithms. How much they \"waste\" resources (i.e.,
potential computing power) varies.
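For a concrete taste of the RMA math Koopman discusses, a small sketch of the classic Liu & Layland utilization test (sufficient, not necessary; it assumes independent periodic tasks with deadline = period and known WCETs). The task numbers are made up:

#include <cmath>
#include <cstdio>

// Liu & Layland test for rate-monotonic scheduling: n periodic tasks are
// schedulable if sum(C_i/T_i) <= n*(2^(1/n) - 1). An exact answer needs
// response-time analysis.
struct Task { double wcet_ms, period_ms; };

int main(void)
{
    Task tasks[] = { {0.2, 1.0}, {1.5, 10.0}, {5.0, 50.0} };
    const int n = sizeof(tasks) / sizeof(tasks[0]);

    double u = 0.0;
    for (int i = 0; i < n; ++i)
        u += tasks[i].wcet_ms / tasks[i].period_ms;

    double bound = n * (std::pow(2.0, 1.0 / n) - 1.0);   // ~0.7798 for n = 3
    printf("utilization %.3f, RMA bound %.3f -> %s\n", u, bound,
           u <= bound ? "schedulable" : "needs exact analysis");
    return 0;
}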

The higher a thread priority is, the shorter time it should run. If a
thread needs high priority and executes for a long time, split the
thread in two or more parts, one part that can run longer on a lower
priority and one that executes quickly at a higher priority.

Do you know how quickly a task will execute BEFORE you've finished
implementing it? And, what if the "lower priority" part gets starved
out -- so that it fails to complete before the deadline of the
task from which it was "split out"? Is it now deemed acceptable to
NOT do that bit of work, because it was "split out"? Or, was it still
essential to the original task and has now been sacrificed?

In most cases, you can only have the ISR and one or two highest
priority threads running in hard RT, the rest threads are more or less
soft RT. Calculating the latencies for the second (and third) highest
priority threads is quite demanding, since you must count the ISR
worst case execution time as well as the top priority thread worst
case execution time and the thread switching times. The sum of worst
case execution times becomes quickly so large that the lower priority
threads can be only soft RT, while still providing reasonable average
performances.

I object to the SRT & HRT classifications commonly (mis)used.
They lead to people making naive design decisions -- like implementing
keyclick in an ISR!

RE-think of HRT as: "if I don't meet my deadline, then I may
as well give up!" Isn't that what you are effectively saying
when you design *to* meet your HRT task deadlines? You keep
mangling the system to ensure they are met.

SRT is then: "deadlines are just nice goals but not drop-dead
points, in time."

These can be reinterpreted more abstractly: what is the value of
meeting a deadline? or, the *cost* of missing it?

This is how you *really* make decisions in a design. You don't
give the "keyboard-feedback/keyclick" task incredibly high
priority because it has such a short deadline relative to the
readying event (a keypress). In the grand scheme of things,
it is NOT IMPORTANT. The value obtained by meeting that deadline
(and the cost of missing it) can readily be dismissed, esp if
some other "slower" task (like stopping the incoming missile)
is attainable because of the resources freed up by abandoning
that silly keyclick!

What you want, when making scheduling decisions, is a tuple:

{deadline, execution time, value of meeting, cost of missing}

And, your scheduler wants to be able to eject a task that
it considers not worthy of consideration AT THE PRESENT TIME
so that it doesn't waste any time TRYING, futilely, to fulfill
*its* goals -- because that would negatively impact other tasks
(possibly causing them to miss their deadlines which can cascade).

[This is *hard* and resource intensive!]
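A toy C++ sketch of that tuple, with an invented "shed the least valuable task under overload" rule; none of this is a real scheduler, it only shows where the value/cost fields would plug in:

#include <vector>
#include <cstdint>

// {deadline, execution time, value of meeting, cost of missing} per task,
// plus a toy overload policy: if the remaining work cannot all fit, drop the
// task whose (value of meeting + cost of missing) is smallest.
struct TaskInfo {
    uint64_t deadline_us;       // absolute deadline
    uint64_t wcet_us;           // worst-case execution time
    double   value_of_meeting;  // benefit if the deadline is met
    double   cost_of_missing;   // penalty if it is not
};

inline double worth(const TaskInfo &t)
{
    return t.value_of_meeting + t.cost_of_missing;
}

// Returns the index of the task to shed, or -1 if everything fits.
inline int pick_task_to_shed(const std::vector<TaskInfo> &ready,
                             uint64_t now_us, uint64_t latest_deadline_us)
{
    if (ready.empty())
        return -1;

    uint64_t demand = 0;
    for (const TaskInfo &t : ready)
        demand += t.wcet_us;
    if (now_us + demand <= latest_deadline_us)
        return -1;                               // no overload

    int victim = 0;
    for (int i = 1; i < (int)ready.size(); ++i)
        if (worth(ready[i]) < worth(ready[victim]))
            victim = i;
    return victim;                               // least valuable task
}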

Yet, if you look at scheduling algorithms (which is what gets codified),
you never hear these sorts of issues factored into the scheduling
decision. Instead, it's deadlines, slack time, "priority" (and
what, exactly, is "priority"? important-ness? Or, position in
the scheduling decision selection queue??)

Monitoring how long the RTOS spends in the NULL task gives a quick
view how much is spent in various higher priority threads. Spending
less than 50 % in the null task should alert studying how the high
priority threads are running.

From the above:
"A specifically bad practice is basing real time performance decisions
solely on spare capacity (e.g., “CPU is only 80% loaded on average”)
in the absence of mathematical scheduling analysis, because it does
not guarantee safety critical tasks will meet their deadlines. Similarly,
monitoring spare CPU capacity as the only way to infer whether deadlines
are being met is a specifically bad practice, because it does not actually
tell you whether high frequency deadlines are being met or not."
 
On 2/25/2023 5:20 AM, upsidedown@downunder.com wrote:
On Fri, 24 Feb 2023 12:43:59 -0700, Don Y
blockedofcourse@foo.invalid> wrote:

On 2/24/2023 3:25 AM, upsidedown@downunder.com wrote:
In industrial control systems in addition to the actual measured value
(e.g. from an ADC) you also have a separate data quality variable and
often also a time stamp (sometimes with a time quality field,
resolution, synched etc.).

In the 8/16/32 bit data quality variable, you can report e.g.
overflow, underflow, frozen (old) values, faulty value (e.g. ADC
reference missing or input cable grounded, manually forced values etc.
The actual value, data quality and time stamp are handled as a unit
through the application.

So, you pass structs/rudimentary objects instead of "data values".

If some internal calculation causes overflow, the overflow bit can be
set in the data quality variable or some special value replaced and a
derived value data quality bit can be set.

When the result is to be used for control, the data quality bits can
be analyzed and determined, if the value can be used for control or if
some other action must be taken. Having a data quality word with every
variable makes it possible have special handling of some faulty
signals without shutting down the whole system after first error in
some input or calculations. Such systems can run for years without
restarts.

I've always worked with a strict definition of the accuracy and precision
at which the system must be calibrated. So, at the point of data acquisition,
you could determine whether the measurement "made sense" or not; if not,
then something is broken.

In a multi vendor environment in which standards allow all kinds of
variations, you must be able to handle whatever representation might
be received. A field device replacement might have different
characteristics. It is not acceptable to restart a whole system just
because a single peripheral device replacement.

We pieced together systems from different vendors' products.
But, assumed the responsibility of *qualifying* each choice
(so we could continue to claim the specified performance).
The customer could freely change those decisions -- but at
his peril.
 
On 2/25/2023 6:02 AM, upsidedown@downunder.com wrote:
On Fri, 24 Feb 2023 10:58:06 -0700, Don Y
blockedofcourse@foo.invalid> wrote:

Folks in the desktop world had a tough time adjusting to multitasking
(as most were writing monolithic programs, not designing "systems").
Esp to preemptive multitasking ("you mean the time between statements
22 and 23 can be, conceivably, HOURS??")

The same applies also to programmers with mainframe / batch
background. When assigned to a multitasking project, you had to keep
an eye on them for months, so that they would not e.g. use busy loops.

Or, treating memory as "infinite"!

[The reverse is now becoming a problem; folks accustomed to "working
in the small" suddenly faced with systems that have mechanisms that
dramatically affect performance but that they're simply not accustomed
to (nor understand!). This is particularly true of hardware types.]

Programming on a PDP-11 systems (64 KiB application address space)
seemed to cause a lot of problems to them, trying to squeeze a large
program into that address space with huge overlay loading trees. It
was much easier to split the program into multiple tasks, each much
smaller than 64 KiB :).
 
On 24-Feb-23 10:43 am, Don Y wrote:
Every language is dangerous when the practitioners don't understand the
tools of their trade, sufficiently.  Would you hand a soldering iron to
an accountant and expect him to make a good joint?

And would you be able to smoke it afterwards?

Sylvia.
 
On Sat, 25 Feb 2023 08:57:27 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:

On 2/25/2023 2:29 AM, upsidedown@downunder.com wrote:
On Fri, 24 Feb 2023 10:47:01 -0700, Don Y
blockedofcourse@foo.invalid> wrote:


This leads to people rejiggering their code. Or, rejiggering
"priorities" on naive RTOSs (a task's priority should be inherently
designed, not something that needs to be "tuned" based on observations
of how things perform).

What is so hard about assigning RTOS thread priorities?

Start by searching for least time critical timing and assign the
lowest priorities to them (or move it into the NULL task). After this,
there will usually be only one or two threads requiring high priority.

Let's assume a system with only periodic tasks.
Further, assume each task's deadline is generous: finish
before you are next expecting to be "made ready".
(I.e., a 1 kHz task has a whole millisecond to do its work.
Note that this may not always be the case; a task could be
made ready -- released -- every millisecond but only allowed
20 us to meet its deadline!)

To save words, have a look at Koopman's example, here:

https://betterembsw.blogspot.com/2014/05/real-time-scheduling-analysis-for.html

He addresses RMA but there are similar problems for all of the
scheduling algorithms. How much they \"waste\" resources (i.e.,
potential computing power) varies.

The higher a thread priority is, the shorter time it should run. If a
thread needs high priority and executes for a long time, split the
thread in two or more parts, one part that can run longer on a lower
priority and one that executes quickly at a higher priority.

Do you know how quickly a task will execute BEFORE you've finished
implementing it?

You should have a quite good idea of how long an ISR (and the highest
priority tasks) execute, depending on the amount of work to be done.
With processors with caches, assume 100 % cache misses. Running with
actual hardware and fewer cache misses will give more time for lower
priority tasks.


And, what if the "lower priority" part gets starved
out -- so that it fails to complete before the deadline of the
task from which it was "split out"? Is it now deemed acceptable to
NOT do that bit of work, because it was "split out"? Or, was it still
essential to the original task and has now been sacrificed?

In soft RT you have to know what you can sacrifice. Splitting a task
makes it possible to have one HRT task that NEVER misses a deadline
and a lower priority SRT task that might miss a deadline once a
minute or once a week.

In most cases, you can only have the ISR and one or two highest
priority threads running in hard RT, the rest threads are more or less
soft RT. Calculating the latencies for the second (and third) highest
priority threads is quite demanding, since you must count the ISR
worst case execution time as well as the top priority thread worst
case execution time and the thread switching times. The sum of worst
case execution times becomes quickly so large that the lower priority
threads can be only soft RT, while still providing reasonable average
performances.

I object to the SRT & HRT classifications commonly (mis)used.
They lead to people making naive design decisions -- like implementing
keyclick in an ISR!

The NULL task is a good place for the (l)user interface !

RE-think of HRT as: "if I don't meet my deadline, then I may
as well give up!" Isn't that what you are effectively saying
when you design *to* meet your HRT task deadlines? You keep
mangling the system to ensure they are met.

SRT is then: "deadlines are just nice goals but not drop-dead
points, in time."

In some cases and some cultures missing the HRT deadline might be a
reason for the designer to shoot oneself.


These can be reinterpreted more abstractly: what is the value of
meeting a deadline? or, the *cost* of missing it?

This is how you *really* make decisions in a design. You don't
give the "keyboard-feedback/keyclick" task incredibly high
priority because it has such a short deadline relative to the
readying event (a keypress). In the grand scheme of things,
it is NOT IMPORTANT. The value obtained by meeting that deadline
(and the cost of missing it) can readily be dismissed, esp if
some other "slower" task (like stopping the incoming missile)
is attainable because of the resources freed up by abandoning
that silly keyclick!

What you want, when making scheduling decisions, is a tuple:

{deadline, execution time, value of meeting, cost of missing}

And, your scheduler wants to be able to eject a task that
it considers not worthy of consideration AT THE PRESENT TIME
so that it doesn't waste any time TRYING, futilely, to fulfill
*its* goals -- because that would negatively impact other tasks
(possibly causing them to miss their deadlines which can cascade).

[This is *hard* and resource intensive!]

Yet, if you look at scheduling algorithms (which is what gets codified),
you never hear these sorts of issues factored into the scheduling
decision. Instead, it's deadlines, slack time, "priority" (and
what, exactly, is "priority"? important-ness? Or, position in
the scheduling decision selection queue??)

Monitoring how long the RTOS spends in the NULL task gives a quick
view how much is spent in various higher priority threads. Spending
less than 50 % in the null task should alert studying how the high
priority threads are running.

I said "quick view", not that it is the only method of characterizing the
system.

From the above:
"A specifically bad practice is basing real time performance decisions
solely on spare capacity (e.g., “CPU is only 80% loaded on average”)
in the absence of mathematical scheduling analysis, because it does
not guarantee safety critical tasks will meet their deadlines. Similarly,
monitoring spare CPU capacity as the only way to infer whether deadlines
are being met is a specifically bad practice, because it does not actually
tell you whether high frequency deadlines are being met or not."

With 80 % average CPU load, SRT tasks may miss their deadlines quite
often, but this doesn't necessarily harm the HRT tasks, unless the high
CPU load also means extra page faulting etc.
 
On Thursday, February 23, 2023 at 11:32:45 AM UTC-4, Dimiter_Popoff wrote:
On 2/23/2023 17:12, Ricky wrote:
On Thursday, February 23, 2023 at 9:04:31 AM UTC-5, Dimiter_Popoff wrote:
On 2/23/2023 13:00, Sylvia Else wrote:
On 23-Feb-23 8:22 pm, Ricky wrote:
On Thursday, February 23, 2023 at 3:51:26 AM UTC-5, Sylvia Else wrote:
On 23-Feb-23 4:01 pm, Ricky wrote:
On Wednesday, February 22, 2023 at 10:15:36 PM UTC-5, Sylvia Else
wrote:
On 23-Feb-23 6:05 am, John Larkin wrote:
https://en.wikipedia.org/wiki/Timeline_of_programming_languages


Now I'm told that we should be coding hard embedded products in
C++ or
Rust.

But can you afford the memory and time overheads inherent in run-time
range checks of things like array accesses?

You mean as opposed to programs randomly failing in the field?

It's not as if the checks make a program work properly, they only make
the failure mode clearer. If an improper access occurs frequently, then
this would likely show up during development/testing. If it occurs
rarely, then you'll still see what look like random failures in the
field.

If you have checks in place, you will know something about what failed
and where to look in the code.


Whether it's worth the extra cost of hardware will depend on how many
incarnations there are going to be.

Extra hardware cost???

Faster processor, more memory.

Sylvia.
It is not just about the cost of hardware. It is more about doing
*the same* thing done before - with bloated resources.
Which is sort of forgivable; however, if it were not for the bloat,
so much more could be done using today's CPU/memory resources than
virtually everyone (well, except me :) does.
Talk about *gigabytes* of RAM and getting a video player complaining
about having insufficient memory... (yes, I had that not long ago,
on Windows 10 with 8G RAM playing a 2G mkv file....). And if it
were only that; this is just the tip of the iceberg. Everyone is
hasty to just sell something; as long as people can't see to
what extent it is not even half baked, they just go ahead.

Anytime someone talks about "bloat" in software, I realize they don't program.

It's like electric cars. The only people who complain about them are the people who don't drive them.
LOL, you should work on your realization abilities.
You have never communicated with a person who has programmed more
than I have.
Checking who you are conversing with is also a good idea before
saying something stupid again.

Then why are you babbling about software bloat? Why don't you babble about hardware bloat? How many billions of transistors on today's top end CPUs? How many in an 8051? No reason to claim the current top end CPUs are bloated. Every transistor has been added as part of some specific function. Software is the same way.

Maybe you are not ignorant, but I find it is the ignorant who talk about software bloat.

--

Rick C.

--- Get 1,000 miles of free Supercharging
--- Tesla referral code - https://ts.la/richard11209
 
On Thursday, February 23, 2023 at 7:11:31 PM UTC-4, Sylvia Else wrote:
On 24-Feb-23 2:10 am, Ricky wrote:
On Thursday, February 23, 2023 at 6:00:51 AM UTC-5, Sylvia Else wrote:
On 23-Feb-23 8:22 pm, Ricky wrote:
On Thursday, February 23, 2023 at 3:51:26 AM UTC-5, Sylvia Else wrote:
On 23-Feb-23 4:01 pm, Ricky wrote:
On Wednesday, February 22, 2023 at 10:15:36 PM UTC-5, Sylvia Else wrote:
On 23-Feb-23 6:05 am, John Larkin wrote:
https://en.wikipedia.org/wiki/Timeline_of_programming_languages


Now I'm told that we should be coding hard embedded products in C++ or
Rust.

But can you afford the memory and time overheads inherent in run-time
range checks of things like array accesses?

You mean as opposed to programs randomly failing in the field?

It's not as if the checks make a program work properly, they only make
the failure mode clearer. If an improper access occurs frequently, then
this would likely show up during development/testing. If it occurs
rarely, then you'll still see what look like random failures in the field.

If you have checks in place, you will know something about what failed and where to look in the code.


Whether it's worth the extra cost of hardware will depend on how many
incarnations there are going to be.

Extra hardware cost???

Faster processor, more memory.

Sylvia.

0.01% faster... 10 bytes more memory. WTF???


If every non-constant array index is bounds checked, and every pointer
access is implemented via code that checks the pointer for validity
first, then it will be neither 0.01% nor 10 bytes more.

Compilers may reduce this by proving that certain accesses are always
valid, but I believe the overhead will still be significant.
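For concreteness, what a run-time range check looks like in C++: std::array::at() checks the index and throws, operator[] does not, and a compiler that can prove the index in range may drop the check; this is only an illustration, not a measurement of the overhead being debated:

#include <array>
#include <cstdio>
#include <stdexcept>

// at() performs a run-time range check and throws std::out_of_range;
// operator[] performs none. A compile-time-provable index usually lets
// the optimizer remove the check entirely.
int main(void)
{
    std::array<int, 4> a{1, 2, 3, 4};

    int sum = 0;
    for (std::size_t i = 0; i < 8; ++i) {     // deliberately walks off the end
        try {
            sum += a.at(i);                   // checked: throws at i == 4
        } catch (const std::out_of_range &) {
            printf("caught bad index %zu\n", i);
            break;
        }
        // sum += a[i];                        // unchecked: quietly reads junk
    }
    printf("sum = %d\n", sum);
    return 0;
}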

Woosh! That's the sound of the point under discussion rushing over your head.

Now you are trying to argue details of how much CPU time or memory, yet offer zero data. Ok, you win. The cost is only zero in 99.999% of designs. Somewhere, there's a design where adding bounds checking pushed a specific design into a slightly larger CPU chip with some fractional amount more memory.

Most of the people posting here are happy to show they are idiots. You usually refrain from such posts. But once you've made a poor statement, you are inclined to double down, dig your heels in and stick with your guns, in spite of having zero data to support your point.

--

Rick C.

--+ Get 1,000 miles of free Supercharging
--+ Tesla referral code - https://ts.la/richard11209
 
