dead programming languages...

On Friday, February 24, 2023 at 2:34:07 AM UTC-4, Jan Panteltje wrote:
On a sunny day (Thu, 23 Feb 2023 07:12:17 -0800 (PST)) it happened Ricky
gnuarm.del...@gmail.com> wrote in
db92411e-80c3-4bfc...@googlegroups.com>:

Anytime someone talks about "bloat" in software, I realize they don't program.

You talk crap, show us some code you wrote.

Wow! I didn't realize I was attacking the foundations of the church. Even Jan is talking like Larkin. Bring out your code! Bring out your code!


It's like electric cars. The only people who complain about them are the people
who don't drive them.

Right and the grid is full here.
People with 'lectric cars in north-west US now without power because of the ice cold weather
ARE STUCK.

Yes, they are stuck because of the snow and ice, same as every other car type. Only a moron would try to blame the weather on electric cars.


> Do you get anything for your repeated sig?

Do you get anything for your repeated idiocy?

--

Rick C.

-+- Get 1,000 miles of free Supercharging
-+- Tesla referral code - https://ts.la/richard11209
 
On 2/25/2023 3:24 PM, Sylvia Else wrote:
On 24-Feb-23 10:43 am, Don Y wrote:


Every language is dangerous when the practitioners don't understand the
tools of their trade, sufficiently.  Would you hand a soldering iron to
an accountant and expect him to make a good joint?

And would you be able to smoke it afterwards?

I would suspect (prejudice?) that the sort of person who would
be an accountant probably wouldn't be interested!
 
On 2/25/2023 4:12 PM, upsidedown@downunder.com wrote:
To save words, have a look at Koopman\'s example, here:

https://betterembsw.blogspot.com/2014/05/real-time-scheduling-analysis-for.html

He addresses RMA but there are similar problems for all of the
scheduling algorithms. How much they "waste" resources (i.e.,
potential computing power) varies.

The higher a thread's priority, the shorter the time it should run. If a
thread needs high priority but executes for a long time, split the
thread into two or more parts: one that can run longer at a lower
priority, and one that executes quickly at a higher priority.

Do you know how quickly a task will execute BEFORE you've finished
implementing it?

You should have a quite good idea how long an ISR (and the highest
priority tasks) execute, depending on the amount of work to be done.

ISRs should do very *little* work. They complicate the system design
because they are, essentially, uncontrollable; you can't dynamically
opt to "miss" them.

With processors that have caches, assume 100 % cache misses. Running on
actual hardware with fewer cache misses will then leave more time for lower
priority tasks.

Until the system is done, you can't know how other tasks will
affect *your* execution. And, how things will evolve, in the future.

With the criteria I proposed, it doesn't matter what else is going
on or how the code evolves. The deadline remains unchanged. The
system (scheduler) will sort out what it can do to maximize the "value"
of your workload. *It* will decide whether or not that keyclick
will get serviced, based on how "valuable" you claim it to be
(relative to the other competing activities) or how "costly" you
claim *missing* it will be. There's no need for you to try to
assign (arbitrary) priorities to tasks.

And, what if the "lower priority" part gets starved
out -- so that it fails to complete before the deadline of the
task from which it was "split out"? Is it now deemed acceptable to
NOT do that bit of work, because it was "split out"? Or, was it still
essential to the original task and has now been sacrificed?

In soft RT you have to know what you can sacrifice. Splitting a task
makes it possible to have one HRT task that NEVER misses a deadline
and a lower priority SRT task that might miss a deadline once a
minute or once a week.

If you treat EVERY task as having "soft" deadlines, then it
forces you to think about what you will do in the event of
a missed deadline; should I keep working at it? if so, for
how long? how can I recover from this "failure"?

By contrast, if you treat any of them as *hard*, then you know
what to do when you miss the deadline: give up (or panic).

Unfortunately, most designs provide no feedback to the
application as to whether or not deadlines are being missed.
To do so, you need some sort of "time feedback" -- either
explicit or implicit (e.g., a FIFO overflowing).

In most cases, you can have only the ISR and one or two highest
priority threads running in hard RT; the rest of the threads are more or less
soft RT. Calculating the latencies for the second (and third) highest
priority threads is quite demanding, since you must count the ISR
worst case execution time as well as the top priority thread worst
case execution time and the thread switching times. The sum of worst
case execution times quickly becomes so large that the lower priority
threads can only be soft RT, while still providing reasonable average
performance.

I object to the SRT & HRT classifications commonly (mis)used.
They lead to people making naive design decisions -- like implementing
keyclick in an ISR!

The NULL task is a good place for the (l)user interface !

That assumes you think it to be unimportant. How does the user
signal "abort", if you're not looking at him to see when he presses
that button? Do you move that into an ISR (as a bandage to
work around the fact that you've deprecated the UI to a "non-task")?

RE-think HRT as: "if I don't meet my deadline, then I may
as well give up!" Isn't that what you are effectively saying
when you design *to* meet your HRT task deadlines? You keep
mangling the system to ensure they are met.

SRT is then: "deadlines are just nice goals but not drop-dead
points, in time."

In some cases and some cultures missing the HRT deadline might be a
reason for the designer to shoot oneself.

And, in most applications, it's an example of resources needlessly dedicated
to something that wasn't *all* that important -- but, the designer
couldn't figure out how to *value* it!

Monitoring how long the RTOS spends in the NULL task gives a quick
view of how much is spent in the various higher priority threads. Spending
less than 50 % in the null task should prompt a closer look at how the high
priority threads are running.

I said "quick view", not that it is the only method of characterizing the
system.

From the above:
"A specifically bad practice is basing real time performance decisions
solely on spare capacity (e.g., “CPU is only 80% loaded on average”)
in the absence of mathematical scheduling analysis, because it does
not guarantee safety critical tasks will meet their deadlines. Similarly,
monitoring spare CPU capacity as the only way to infer whether deadlines
are being met is a specifically bad practice, because it does not actually
tell you whether high frequency deadlines are being met or not."

With 80 % average CPU load, SRT tasks may miss their deadlines quite
often, but this doesn't necessarily harm the HRT tasks, unless the high
CPU load also means extra page faulting etc.

CPU load doesn't tell you anything about how many deadlines are being
missed, nor who is missing them, or the "cost" of doing so.

Measure.

If you don't have a way of seeing these overruns, how can you determine
an appropriate remedy?
 
søndag den 26. februar 2023 kl. 04.07.43 UTC+1 skrev Don Y:
On 2/25/2023 4:12 PM, upsid...@downunder.com wrote:
To save words, have a look at Koopman\'s example, here:

https://betterembsw.blogspot.com/2014/05/real-time-scheduling-analysis-for.html

He addresses RMA but there are similar problems for all of the
scheduling algorithms. How much they \"waste\" resources (i.e.,
potential computing power) varies.

The higher a thread priority is, the shorter time it should run. If a
thread needs high priority and executes for a long time, split the
thread in two or more parts, one part that can run longer on a lower
priority and one that executes quickly at a higher priority.

Do you know how quickly a task will execute BEFORE you\'ve finished
implementing it?

You should have a quite good idea how long an ISR (and highest
priority tasks) execute depending of the amount of work to be done.
ISRs should do very *little* work. They complicate the system design
because they are, essentially, uncontrollable; you can\'t dynamically
opt to \"miss\" them.

I'd say that depends; the work needs to be done either way, and if it must not be
missed why not do it in the ISR?

and depending on how you view the world, isn't everything in a multitasking OS an
ISR in some sense?
 
On 2/25/2023 8:45 PM, Lasse Langwadt Christensen wrote:
søndag den 26. februar 2023 kl. 04.07.43 UTC+1 skrev Don Y:
On 2/25/2023 4:12 PM, upsid...@downunder.com wrote:
To save words, have a look at Koopman\'s example, here:

https://betterembsw.blogspot.com/2014/05/real-time-scheduling-analysis-for.html

He addresses RMA but there are similar problems for all of the
scheduling algorithms. How much they \"waste\" resources (i.e.,
potential computing power) varies.

The higher a thread priority is, the shorter time it should run. If a
thread needs high priority and executes for a long time, split the
thread in two or more parts, one part that can run longer on a lower
priority and one that executes quickly at a higher priority.

Do you know how quickly a task will execute BEFORE you\'ve finished
implementing it?

You should have a quite good idea how long an ISR (and highest
priority tasks) execute depending of the amount of work to be done.
ISRs should do very *little* work. They complicate the system design
because they are, essentially, uncontrollable; you can\'t dynamically
opt to \"miss\" them.

I\'d say that depends, the work needs to be done either way and if it must not be
missed why not do it in the ISR?

But that's the point. People treat HRT as "can not be missed".
But, are you sure it *can't*? If you put it in an ISR, then
it likely will always be addressed, even if not essential
(what if the task that feeds the ISR or consumes its data
fails to run because of processor overload? you've dedicated
resources in that ISR for something that is going to be
discarded!)

and depending on how you view the world isn't everything in a multitasking OS an
ISR in some sense?

Yes. But you have more control over those non-ISR activities.
How often do you turn *off* an ISR (to conserve resources)?
How often does a task not run as an implicit resource economy?
 
On a sunny day (Sat, 25 Feb 2023 16:18:08 -0800 (PST)) it happened Ricky
<gnuarm.deletethisbit@gmail.com> wrote in
<93b422c9-7b72-46cf-8d55-af6a95b34fdan@googlegroups.com>:

On Friday, February 24, 2023 at 2:34:07=E2=80=AFAM UTC-4, Jan Panteltje wrote:

On a sunny day (Thu, 23 Feb 2023 07:12:17 -0800 (PST)) it happened Ricky

gnuarm.del...@gmail.com> wrote in
db92411e-80c3-4bfc...@googlegroups.com>:

Anytime someone talks about \"bloat\" in software, I realize they don\'t program.


You talk crap, show us some code you wrote.

Wow! I didn\'t realize I was attacking the foundations of the church. Even
Jan is talking like Larkin. Bring out your code! Bring out your code!



It\'s like electric cars. The only people who complain about them are the
people
who don\'t drive them.

Right and the grid is full here.
People with \'lectric cars in north-west US now without power because of the
ice cold weather
ARE STUCK.

Yes, they are stuck because of the snow and ice, same as every other car type.
Only a moron would try to blame the weather on electric cars.

One million there without electricity was in the news yesterday.
How are you going to charge, and where?

Old petrol or diesel will get you out of there to a sunny warm place :)

Do you get anything for your repeated sig?

Do you get anything for your repeated idiocy?

Evading the question, no code shown, you lose :)
 
On a sunny day (Sat, 25 Feb 2023 16:05:22 -0800 (PST)) it happened Ricky
<gnuarm.deletethisbit@gmail.com> wrote in
<f70d9454-84e6-4bb2-9efd-b7415d089c03n@googlegroups.com>:

Then why are you babbling about software bloat? Why don't you babble about
hardware bloat? How many billions of transistors on today's top end CPUs?
How many in an 8051? No reason to claim the current top end CPUs are bloated.
Every transistor has been added as part of some specific function.
Software is the same way.

Maybe you are not ignorant, but I find it is the ignorant who talk about software
bloat.

My apologies, of COURSE there is hardware bloat, and even hard+software bloat.
It becomes almost impossible here to pay with cash: banks, online banking,
smartphones, CARS (LOL).
Now the high altitude nuke will come, or just a decent strength solar storm, and society as we know it comes to an end.
Or a good hack attack.

Remember Apollo to the moon?
In 1968 I was doing the TV head control room here to make sure people in my country could see the
guys walking on the moon.
The equipment in that studio used mostly tubes.
During the day I had to keep all that running (repairs) technically, and in the evening the control room
to make all those connections, shifts... And all that with seconds to do it; black screens do cause an uproar in the country.
No millions of transistors, no ICs, NO computers!!!!
Look at the HARDWARE in the Apollo flights, its COMPUTER, it can be found online.
Now with all the tera-hertz, tera-bytes, millions of transistors, we still have not put humans on Mars.

(and that would have united humanity and pushed the image of the US like the moon shots did).
All that hardware bloat FOR WHAT? To view popeye on your smartphone?

Or CNN adds on your TV?

Or see ByeThen make silly mistakes?
BITCONS????
Now that sure sucks energy and hardware....

Hardware bloat you see for the strangest things.

I do many things with simple hardware and code,
like for example controlling my drone with a simple Microchip PIC with code written in asm:
https://panteltje.nl/panteltje/quadcopter/index.html
Has not failed yet, even testing the code did not kill the drone :)
Such a first test flight is fascinating, and you have to have good reactions to go back to manual IF you can..
and then safely land...
Different from the one hundred tries you coded before it worked to say 'hello world' or something ;-)
taking 12 MB binary made by a compiler from ?language, using many bloated libraries linked in on the 10 core
you need and 1 GB memory space and 32 GB SDcard...
Sure GPS, that will fail too after the high altitude nuke.
Or if jammed, or the things simply shot out of the air, or sats collided with space debris or other sats..
Get a life, bloaters
People can no longer find their way without GPS.
If you really know your stuff then things can be done with very simple means.
But people no longer know the basics, it is all about top down, few know bottom, they were not born then yet :).
And that creates vulnerabilities too, attacks become easy for those who DO know the basics.
 
On Sat, 25 Feb 2023 06:20:30 GMT, Jan Panteltje <alien@comet.invalid>
wrote:

On a sunny day (Fri, 24 Feb 2023 09:19:39 -0800) it happened John Larkin
jlarkin@highlandSNIPMEtechnology.com> wrote in
41shvhtp8m6edsp9ilhrfjqbp5uikoltc0@4ax.com>:

On Fri, 24 Feb 2023 06:07:48 GMT, Jan Panteltje <alien@comet.invalid
wrote:

On a sunny day (Thu, 23 Feb 2023 08:54:20 -0800) it happened John Larkin
jlarkin@highlandSNIPMEtechnology.com> wrote in
266fvhl8gae2sdj0ecp7n511phphmkg47i@4ax.com>:

On Thu, 23 Feb 2023 06:34:25 GMT, Jan Panteltje <alien@comet.invalid
wrote:

On a sunny day (Wed, 22 Feb 2023 11:05:30 -0800) it happened John Larkin
jlarkin@highlandSNIPMEtechnology.com> wrote in
3opcvh111k7igirlsm6anc8eekalofvtcj@4ax.com>:

https://en.wikipedia.org/wiki/Timeline_of_programming_languages


Now I\'m told that we should be coding hard embedded products in C++ or
Rust.

Cplushplush is a crime against humanity
C will do better
But asm is the thing, it will always be there
and gives you full control.
It is not that hard to write an integer math library in asm..

I did that for the 68K. The format was signed 32.32. That worked great
for control systems. Macros made it look like native instructions.

But asm is clumsy for risc CPUs. Plain bare-metal c makes sense for
small instruments.

True, I have no experience with asm on my Raspberries for example.
But lots of C, gcc is a nice compiler.
Most (all?) things I wrote for x86 in C also compile and run on the Raspberries.
Some minor make file changes were required for the latest gcc version..

I\'m planning a series of PoE aerospace instruments based on Pi Pico
and the WizNet ethernet chip. You could help if you\'re interested.

The dev system will be a Pi4.

Yes we discussed your Amazon Pi-4 before..
Sure if I can help I will, after all you inspired me to use some Minicircuits RF stuff :)
Just ask here,
I do not have a Pi pico though... no experience with that.
I have 2 Pi4\'s one 4 GB and one 8 GB version.
The latest I use for web browsing, the former records security cameras, weather, air-traffic,
radiation, GPS position, etc..
The 8 GB also is used as internet router for the LAN.
And I have a whole lot of older Raspberries...
Most Pis are on 24/7 on a UPS, the Pi4 each have a 4 TB harddisk connected via USB.


https://www.amazon.com/MARSTUDY-Raspberry-Model-Ultimate-Pre-Installed/dp/B0BB912MV1/ref=sr_1_3?crid=3FKMZETXZJ1B9&keywords=marstudy+raspberry+pi+4+model+b+ultimate+starter+kit&qid=1677259015&sprefix=MARSTUDY+Raspberry+Pi+4+Model+B%2Caps%2C143&sr=8-3

I ordered one and it was up and running their dev software in 10
minutes.

My website is back up, now at
www.panteltje.nl
and
www.panteltje.online
Some projects can be downloaded from:
https://panteltje.nl/panteltje/newsflex/download.html

Still some work needed on the new site.

Cool so far!


Email me
jjlarkin
roundthing
highlandtechnology
pointything
com

I\'m working on a product-line definition document. I\'ll send it to you
(in a few weeks maybe?) and get your opinion and see if you might want
to be involved as a casual or formal consultant. We have some
big-picture architectural decisions to make, and that\'s always a fun
and scary part of a project. Like, for example, do we want to try to
phase-lock the clock of the Pico to an external 10 MHz source?

In parallel, we\'ll be trying to find customers who might buy it.
Sometimes we have wonderful ideas that nobody wants.

That little RP2040 chip is awesome but I\'ll never understand all of it
myself.
 
On Sunday, 26 February 2023 at 15:39:43 UTC, John Larkin wrote:
On Sat, 25 Feb 2023 06:20:30 GMT, Jan Panteltje <al...@comet.invalid
wrote:

On a sunny day (Fri, 24 Feb 2023 09:19:39 -0800) it happened John Larkin
jla...@highlandSNIPMEtechnology.com> wrote in
41shvhtp8m6edsp9i...@4ax.com>:

On Fri, 24 Feb 2023 06:07:48 GMT, Jan Panteltje <al...@comet.invalid
wrote:

On a sunny day (Thu, 23 Feb 2023 08:54:20 -0800) it happened John Larkin
jla...@highlandSNIPMEtechnology.com> wrote in
266fvhl8gae2sdj0e...@4ax.com>:

On Thu, 23 Feb 2023 06:34:25 GMT, Jan Panteltje <al...@comet.invalid
wrote:

On a sunny day (Wed, 22 Feb 2023 11:05:30 -0800) it happened John Larkin
jla...@highlandSNIPMEtechnology.com> wrote in
3opcvh111k7igirls...@4ax.com>:

https://en.wikipedia.org/wiki/Timeline_of_programming_languages


Now I\'m told that we should be coding hard embedded products in C++ or
Rust.

Cplushplush is a crime against humanity
C will do better
But asm is the thing, it will always be there
and gives you full control.
It is not that hard to write an integer math library in asm..

I did that for the 68K. The format was signed 32.32. That worked great
for control systems. Macros made it look like native instructions.

But asm is clumsy for risc CPUs. Plain bare-metal c makes sense for
small instruments.

True, I have no experience with asm on my Raspberries for example.
But lots of C, gcc is a nice compiler.
Most (all?) things I wrote for x86 in C also compile and run on the Raspberries.
Some minor make file changes were required for the latest gcc version..

I\'m planning a series of PoE aerospace instruments based on Pi Pico
and the WizNet ethernet chip. You could help if you\'re interested.

The dev system will be a Pi4.

Yes we discussed your Amazon Pi-4 before..
Sure if I can help I will, after all you inspired me to use some Minicircuits RF stuff :)
Just ask here,
I do not have a Pi pico though... no experience with that.
I have 2 Pi4\'s one 4 GB and one 8 GB version.
The latest I use for web browsing, the former records security cameras, weather, air-traffic,
radiation, GPS position, etc..
The 8 GB also is used as internet router for the LAN.
And I have a whole lot of older Raspberries...
Most Pis are on 24/7 on a UPS, the Pi4 each have a 4 TB harddisk connected via USB.


https://www.amazon.com/MARSTUDY-Raspberry-Model-Ultimate-Pre-Installed/dp/B0BB912MV1/ref=sr_1_3?crid=3FKMZETXZJ1B9&keywords=marst
udy+raspberry+pi+4+model+b+ultimate+starter+kit&qid=1677259015&sprefix=MARSTUDY+Raspberry+Pi+4+Model+B%2Caps%2C143&sr=8-3

I ordered one and it was up and running their dev software in 10
minutes.

My website is back up, now at
www.panteltje.nl
and
www.panteltje.online
Some projects can be downloaded from:
https://panteltje.nl/panteltje/newsflex/download.html

Still some work needed on the new site.

Cool so far!




Email me
jjlarkin
roundthing
highlandtechnology
pointything
com

I\'m working on a product-line definition document. I\'ll send it to you
(in a few weeks maybe?) and get your opinion and see if you might want
to be involved as a casual or formal consultant. We have some
big-picture architectural decisions to make, and that\'s always a fun
and scary part of a project. Like, for example, do we want to try to
phase-lock the clock of the Pico to an external 10 MHz source?
I've been thinking about that (but not for 10 MHz). It's probably easier to use the
RP2040 chip rather than a ready-made Pico, because then it is easy to have
an external clock input. The tricky bit is allowing boot-up to take place at
a clock frequency which is compatible with the USB programming code and
then switching clock frequency to the one needed for the actual application.
It might be easiest to go via an intermediate clock from one of the internal
sources such as the ring oscillator, because glitch-free clock switching is
then available.
The sequence (sketched in code below) would be:
Startup using standard 12 MHz clock
Do things with USB like programming the device
Switch glitchlessly to internal ring oscillator
Reconfigure external clock input to work from my special oscillator
Switch glitchlessly to my special oscillator
Run final code
John

In parallel, we\'ll be trying to find customers who might buy it.
Sometimes we have wonderful ideas that nobody wants.

That little RP2040 chip is awesome but I\'ll never understand all of it
myself.
 
On Sun, 26 Feb 2023 10:23:30 -0800 (PST), John Walliker
<jrwalliker@gmail.com> wrote:

On Sunday, 26 February 2023 at 15:39:43 UTC, John Larkin wrote:
On Sat, 25 Feb 2023 06:20:30 GMT, Jan Panteltje <al...@comet.invalid
wrote:

On a sunny day (Fri, 24 Feb 2023 09:19:39 -0800) it happened John Larkin
jla...@highlandSNIPMEtechnology.com> wrote in
41shvhtp8m6edsp9i...@4ax.com>:

On Fri, 24 Feb 2023 06:07:48 GMT, Jan Panteltje <al...@comet.invalid
wrote:

On a sunny day (Thu, 23 Feb 2023 08:54:20 -0800) it happened John Larkin
jla...@highlandSNIPMEtechnology.com> wrote in
266fvhl8gae2sdj0e...@4ax.com>:

On Thu, 23 Feb 2023 06:34:25 GMT, Jan Panteltje <al...@comet.invalid
wrote:

On a sunny day (Wed, 22 Feb 2023 11:05:30 -0800) it happened John Larkin
jla...@highlandSNIPMEtechnology.com> wrote in
3opcvh111k7igirls...@4ax.com>:

https://en.wikipedia.org/wiki/Timeline_of_programming_languages


Now I\'m told that we should be coding hard embedded products in C++ or
Rust.

Cplushplush is a crime against humanity
C will do better
But asm is the thing, it will always be there
and gives you full control.
It is not that hard to write an integer math library in asm..

I did that for the 68K. The format was signed 32.32. That worked great
for control systems. Macros made it look like native instructions.

But asm is clumsy for risc CPUs. Plain bare-metal c makes sense for
small instruments.

True, I have no experience with asm on my Raspberries for example.
But lots of C, gcc is a nice compiler.
Most (all?) things I wrote for x86 in C also compile and run on the Raspberries.
Some minor make file changes were required for the latest gcc version..

I\'m planning a series of PoE aerospace instruments based on Pi Pico
and the WizNet ethernet chip. You could help if you\'re interested.

The dev system will be a Pi4.

Yes we discussed your Amazon Pi-4 before..
Sure if I can help I will, after all you inspired me to use some Minicircuits RF stuff :)
Just ask here,
I do not have a Pi pico though... no experience with that.
I have 2 Pi4\'s one 4 GB and one 8 GB version.
The latest I use for web browsing, the former records security cameras, weather, air-traffic,
radiation, GPS position, etc..
The 8 GB also is used as internet router for the LAN.
And I have a whole lot of older Raspberries...
Most Pis are on 24/7 on a UPS, the Pi4 each have a 4 TB harddisk connected via USB.


https://www.amazon.com/MARSTUDY-Raspberry-Model-Ultimate-Pre-Installed/dp/B0BB912MV1/ref=sr_1_3?crid=3FKMZETXZJ1B9&keywords=marst
udy+raspberry+pi+4+model+b+ultimate+starter+kit&qid=1677259015&sprefix=MARSTUDY+Raspberry+Pi+4+Model+B%2Caps%2C143&sr=8-3

I ordered one and it was up and running their dev software in 10
minutes.

My website is back up, now at
www.panteltje.nl
and
www.panteltje.online
Some projects can be downloaded from:
https://panteltje.nl/panteltje/newsflex/download.html

Still some work needed on the new site.

Cool so far!




Email me
jjlarkin
roundthing
highlandtechnology
pointything
com

I\'m working on a product-line definition document. I\'ll send it to you
(in a few weeks maybe?) and get your opinion and see if you might want
to be involved as a casual or formal consultant. We have some
big-picture architectural decisions to make, and that\'s always a fun
and scary part of a project. Like, for example, do we want to try to
phase-lock the clock of the Pico to an external 10 MHz source?

I've been thinking about that (but not for 10 MHz). It's probably easier to use the
RP2040 chip rather than a ready made Pico because then it is easy to have
an external clock input. The tricky bit is allowing bootup to take place at
a clock frequency which is compatible with the USB programming code and
then switching clock frequency to the one needed for the actual application.
It might be easiest to go via an intermediate clock from one of the internal
sources such as the ring oscillator because glitch-free clock switching is
then available.
The sequence would be:
Startup using standard 12MHz clock
Do things with USB like programming the device
Switch glitchlessly to internal ring oscillator
Reconfigure external clock input to work from my special oscillator
Switch glitchlessly to my special oscillator.
Run final code

Yes, we could have a 10 MHz PLL, with a VCXO, and let it clock the Pi
eventually. Absent an external reference, the VCXO would approximately
center itself, good enough. We also have products that detect loss of
an external reference input and, in that case, drive the VCXO to a
factory-calibrated 10 MHz, within a few PPM. One PWM pin can do that.

John

In parallel, we\'ll be trying to find customers who might buy it.
Sometimes we have wonderful ideas that nobody wants.

That little RP2040 chip is awesome but I\'ll never understand all of it
myself.

The product line would be a lot of little blue boxes that do aerospace
sorts of i/o, with a Pico inside each one. A key feature would be
realtime sync between boxes and time sync compatibility with another
product line.

Some things like DC power supplies and dummy loads can make/accept a
single trigger to do things now, so the Pi clock can just be the usual
one. But in some cases we want to make polyphase AC outputs and have
phase sync between boxes, so we\'d need a 10 MHz reference input for
long-term coherence.

One way is to have such boxes use an FPGA for the waveform generation
and include a 10 MHz PLL off to the side, and let the RPi use its
crummy XO clock and drift as much as it wants. But it's at least worth
thinking about using one of the ARM cores to do brute-force waveform
generation.

I am looking for Pi and FPGA people to help. I plan to hang out in
some local maker spaces, trolling for talent.
 
 
On 2023-02-24, Jan Panteltje <alien@comet.invalid> wrote:
On a sunny day (Thu, 23 Feb 2023 07:12:17 -0800 (PST)) it happened Ricky
gnuarm.deletethisbit@gmail.com> wrote in
db92411e-80c3-4bfc-ba26-670e877a8cbdn@googlegroups.com>:

Anytime someone talks about \"bloat\" in software, I realize they don\'t program.

You talk crap, show us some code you wrote.


It\'s like electric cars. The only people who complain about them are the people
who don\'t drive them.

Right and the grid is full here.
People with \'lectric cars in north-west US now without power because of the ice cold weather
ARE STUCK.

When there's no electricity you can't refuel a petroleum fueled
vehicle in the normal way either.

--
Jasen.
pǝsɹǝʌǝɹ sʇɥƃᴉɹ ll∀
 
On 2/27/2023 3:03 AM, Jasen Betts wrote:
On 2023-02-24, Jan Panteltje <alien@comet.invalid> wrote:

Right and the grid is full here.
People with \'lectric cars in north-west US now without power because of the ice cold weather
ARE STUCK.

When there's no electricity you can't refuel a petroleum fueled
vehicle in the normal way either.

I suspect a 1 kW genset would be enough to power a single fuel pump,
operating it at its nominal flow rate.

So, a gallon of fuel siphoned out of a volunteer vehicle (to power
the pump) lets you pump enough fuel to run the genset for "some time".
This fuel then lets the genset pump fuel to refill the volunteer
vehicle and the line of cars queued up behind it.

I don't see how you get the equivalent sort of fallback for an
electric vehicle, though...
 
On 26-Feb-23 11:10 am, Ricky wrote:
On Thursday, February 23, 2023 at 7:11:31 PM UTC-4, Sylvia Else wrote:
On 24-Feb-23 2:10 am, Ricky wrote:
On Thursday, February 23, 2023 at 6:00:51 AM UTC-5, Sylvia Else wrote:
On 23-Feb-23 8:22 pm, Ricky wrote:
On Thursday, February 23, 2023 at 3:51:26 AM UTC-5, Sylvia Else wrote:
On 23-Feb-23 4:01 pm, Ricky wrote:
On Wednesday, February 22, 2023 at 10:15:36 PM UTC-5, Sylvia Else wrote:
On 23-Feb-23 6:05 am, John Larkin wrote:
https://en.wikipedia.org/wiki/Timeline_of_programming_languages


Now I\'m told that we should be coding hard embedded products in C++ or
Rust.

But can you afford the memory and time overheads inherent in run-time
range checks of things like array accesses?

You mean as opposed to programs randomly failing in the field?

It\'s not as if the checks make a program work properly, they only make
the failure mode clearer. If an improper access occurs frequently, then
this would likely show up during development/testing. If it occurs
rarely, then you\'ll still see what look like random failures in the field.

If you have checks in place, you will know something about what failed and where to look in the code.


Whether it\'s worth the extra cost of hardware will depend on how many
incarnations there are going to be.

Extra hardware cost???

Faster processor, more memory.

Sylvia.

0.01% faster... 10 bytes more memory. WTF???


If every non-constant array index is bounds checked, and every pointer
access is implemented via code that checks the pointer for validity
first, then it will be neither 0.01% faster nor 10 bytes more.

Compilers may reduce this by proving that certain accesses are always
valid, but I believe the overhead will still be significant.

Woosh! That's the sound of the point under discussion rushing over your head.

Now you are trying to argue details of how much CPU time or memory, yet offer zero data. Ok, you win. The cost is essentially zero in 99.999% of designs. Somewhere, there's a design where adding bounds checking pushed it into a slightly larger CPU chip with some fractional amount more memory.

Most of the people posting here are happy to show they are idiots. You usually refrain from such posts. But once you've made a poor statement, you are inclined to double down, dig your heels in and stick to your guns, in spite of having zero data to support your point.

Programs tend to do a lot of array access, and pointer references. Even
if you may think you're just crunching data, that has to come from
somewhere, and the results have to be stored somewhere. So the time and
memory consequences of array bound and pointer validation are not trivial.

Sylvia.
 
On a sunny day (Mon, 27 Feb 2023 10:03:10 -0000 (UTC)) it happened Jasen Betts
<usenet@revmaps.no-ip.org> wrote in <tthv4u$kq4$1@gonzo.revmaps.no-ip.org>:

On 2023-02-24, Jan Panteltje <alien@comet.invalid> wrote:
On a sunny day (Thu, 23 Feb 2023 07:12:17 -0800 (PST)) it happened Ricky
gnuarm.deletethisbit@gmail.com> wrote in
db92411e-80c3-4bfc-ba26-670e877a8cbdn@googlegroups.com>:

Anytime someone talks about \"bloat\" in software, I realize they don\'t program.

You talk crap, show us some code you wrote.


It\'s like electric cars. The only people who complain about them are the people
who don\'t drive them.

Right and the grid is full here.
People with \'lectric cars in north-west US now without power because of the ice cold weather
ARE STUCK.

When there's no electricity you can't refuel a petroleum fueled
vehicle in the normal way either.

But you can have a jerrycan or 2 at hand.
 
On Monday, February 27, 2023 at 9:50:18 PM UTC+11, Don Y wrote:
On 2/27/2023 3:03 AM, Jasen Betts wrote:
On 2023-02-24, Jan Panteltje <al...@comet.invalid> wrote:

Right and the grid is full here.
People with \'lectric cars in north-west US now without power because of the ice cold weather
ARE STUCK.

When there's no electricity you can't refuel a petroleum fueled
vehicle in the normal way either.
I suspect a 1KW genset would be enough to power a single fuel pump,
operating it at its nominal flow rate.

So, a gallon of fuel siphoned out of a volunteer vehicle (to power
the pump) lets you pump enough fuel to run the genset for \"some time\".
This fuel then lets the genset pump fuel to refill the volunteer
vehicle and the line of cars queued up behind it.

I don\'t see how you get the equivalent sort of fallback for an electric vehicle, though...

It's called a grid-scale battery. Pumped storage is another solution. Hydroelectric dams don't tend to freeze up and the compressed air equivalent is even less likely to get messed up.

Emergency situations call for stored energy, and tanks of fossil fuel aren't the only option these days, though half-wits like Jan Panteltje can't imagine anything they haven't actually used.

--
Bill Sloman, Sydney
 
On 2/27/2023 4:37 AM, Sylvia Else wrote:
Programs tend to do a lot of array access, and pointer references. Even if you
may think you\'re just crunching data, that has to come from somewhere, and the
results have to be stored somewhere. So the time and memory consequences of
array bound and pointer validation are not trivial.

The language will dictate what is checked/enforced and, more importantly,
*who* will be responsible for that. This is true of many potential coder
f*ckups (is it OK for me to divide by zero? scribble on program TEXT? etc.)

In ASM, I decide what to check and I am responsible for doing the checking.
If you were paranoid, you could use the VAX's INDEX opcode to qualify
each reference. The 68K did it with the CHK opcode. (IIRC, intel
has some x86 hooks to do this as well, in hardware).

In C, the same largely applies. There's nothing preventing me from
indexing off a pointer (akin to accessing an array subscript) into
la-la-land.

\"Old\" languages (and interpreted languages) tended to perform these
checks -- along with some of the newer /langues du jour/ (Java, Rust,
Haskell, etc.)

C++, notably, doesn\'t (in the language proper). The developer can implement
iterators and accessors for particular classes (and templates) and decide
what a \"bad reference\" really means, in those contexts. These can be
potentially VERY expensive, depending on the contract(s) that the developer
wants to enforce.

In the desktop world, memory protection mechanisms will more often
catch these errors that the compiler \"blessed\". As these mechanisms
become increasingly more ubiquitous, we\'ll likely see new run-time
mechanisms evolve to handle them (in my design, a SIGSEGV means your
app gets blacklisted; it\'s broke, why should I waste resources running
it, AGAIN? What *other* faults does it have that I might not catch??)

Note there are other means of ensuring your code doesn\'t (intentionally)
walk off (either) end of an object\'s domain. So, coding practices can
go a long way to eliminate the need for run-time checks.

By far, the biggest runtime \"cost\" of any programming effort is that
of instrumenting the code for runtime tests as well as acquiring
performance metrics. But, you\'re not just looking for bounds checks
but, also, for logical inconsistencies, etc. (e.g., how did
*Can\'t*, happen?)
 
On 24/02/2023 19:32, Don Y wrote:
On 2/24/2023 8:49 AM, Martin Brown wrote:
Anyone who is serious about timing code knows how to read the free
running system clock. RDTSC in Intel CPUs is very handy (even if they
warn against using it for this purpose it works very well).

But, if you are running code on anything other than bare metal, you are
seeing a composite effect from the OS, other coexecuting tasks, etc.

Agreed you don't want to do it on a heavily loaded machine, but if you
take an average over 10^6 function calls the OS noise is greatly
diminished. Roughly a second is good enough for 3-4 sig figs.

As most of the code I\'ve written is RT (all of the products I\'ve designed
but none of the desktop tools I\'ve written), I have to have a good handle
on how an algorithm will perform, in the time domain, over the range of
different operating conditions.  Hence, the *design* of the algorithm
becomes keenly important.

Design of algorithms is always important and knowing when to break the
rules is too. Fancy sorts on very short lists don\'t work too well.

And, while there are hardware mechanisms (on SOME processors and to varying
degrees) that act as performance enhancers, there are also things that
work to defeat these mechanisms (e.g., different tasks competing for cache
lines).

Cache line behaviour is very difficult to understand. I have some code
which flips between two different benchmark speeds depending on exactly
where it loads in relation to the 64 byte cache line boundary.
In a desktop environment, the jiffy is an eternity; not so in embedded
designs.  So, while a desktop app can run \"long enough\" to make effective
use of some of these mechanisms, an embedded one may actually just be
perpetually \"muddying the waters\".  (if a line of YOUR task\'s code gets
installed in the cache, but pieces of 30 other tasks run before you get
another opportunity, how likely is the cache to still be \"warm\", from
YOUR point of view?)

That is always a problem, as is memory fragmentation in kit that runs for
extended periods of time and allocates buffers of widely varying sizes.
Many domestic routers have this problem after a month or so of up time.
Other CPUs have equivalent performance monitoring registers although
they may be hidden in some fine print in dark recesses of the manual.

You often need to understand more than the CPU to be able to guesstimate
(or measure) performance.  E.g., at the most abstract implementation
levels in my design, users write \"scripts\" in a JITed language (Limbo).
As it does a lot of dynamic object allocation, behind the scenes, the
GC has to run periodically.

GC is the enemy of hard realtime work. By Murphy's Law it will always
get invoked just when you need every last µs you can glean.

So, if you happen to measure performance of an algorithm -- but the GC ran
during some number of those iterations -- your results can be difficult
to interpret; what's the performance of the algorithm vs. that of the
supporting *system*?

These days most binary operations are single cycle and potentially
less if there are sub-expressions that have no interdependencies.
Divides are still a lot slower. This makes Padé 5,4 a good choice for
rational approximations on current Intel CPUs: the numerator and
denominator evaluate in parallel (the physical hardware is that good)
and some of the time for the divide is lost along the way.

But, that\'s still only for \"bigger\" processors.  Folks living in 8051-land
are still looking at subroutines to perform math operations (in anything
larger than a byte).

I remember those days somewhat unfondly.

In the old days we were warned to avoid conditional branches but today
you can get away with them and sometimes active loop unrolling will
work against you. If it has to be as fast as possible then you need to
understand every last quirk of the target CPU.

And, then you are intimately bound to THAT target.  You have to rethink
your
implementation when you want (or need!) to move to another (economic
reasons,
parts availability, etc.).

You generally end up bound to one family of CPUs. We didn't always pick
wisely and I variously backed the Zilog Z8000 (remember them?) and the
brilliant-for-its-time NEC 7210 graphics processor. OTOH we did OK with
TI 99xx/99k, Motorola 68xx/68k and Intel x86, so I can't really complain.

I find it best (embedded systems) to just design with good algorithms,
understand how their performance can vary (and try to avoid application
in those cases where it will!) and rely on the RTOS to ensure the
important stuff gets done in a timely manner.

My time doing that sort of embedded realtime stuff the battle was mostly
to get the entire thing to fit in a 4k or 8k ROM somehow! To some extent
it would be whatever speed it could manage and any glaring deficiencies
in performance would be dealt with once it was all functional.

If your notion of how your "box" works is largely static, then you will
likely end up with more hardware than you need.  If you are cost conscious,
then you're at a disadvantage if you can't decouple things that MUST
get done, now, from things that you would LIKE to get done, now.

[This is the folly that most folks fall into with HRT designs; they
overspecify the hardware because they haven't considered how to
address missed deadlines.  Because many things that they want to think
of as being "hard" deadlines really aren't.]

How fast things will go in practice can only be determined today by
putting all of the pieces together and seeing how fast it will run.

Exactly.  Then, wonder what might happen if something gets *added* to the
mix (either a future product enhancement -- but same old hardware -- or
interactions with external agencies in ways that you hadn't anticipated).

[IMO, this is where the future of embedded systems lies -- especially with
IoT.  Folks are going to think of all sorts of ways to leverage functionality
in devices B and F to enhance the performance of a system neither the
B nor F designers had envisioned at their respective design times!]

It is getting harder to predict this.

[[E.g., I use an EXISTING security camera to determine when the mail has
been delivered.  Why try to "wire" the mailbox with some sort of "mail
sensor"??]]

Benchmarks can be misleading too. It doesn\'t tell you how the
component will behave in the environment where you will actually use it.

Yes.  So, "published" benchmarks always have to be taken with a grain
of salt.

Even when you are controlling the benchmarks you can still get anomalous
results. The fastest cbrt in a benchmarking loop is not always the same
as the fastest one in a real-world solve-a-cubic-equation program!

In the 70/80's (heyday of MPU diversity), vendors all had their own
favorite benchmarks that they would use to tout how great THEIR
product was, vs. their competitor(s) -- cuz they all wanted design-ins.
We'd run our own benchmarks and make OUR decisions based on how
the product performed running the sorts of code *we* wanted to run
on the sorts of hardware COSTS that we wanted to afford.

So did compilers. I knew one lot who recognised certain of the common
benchmarks and specifically optimised to be top of the table in some.

Even C is becoming difficult, in some cases, to 'second guess'.
And, ASM isn't immune, as the hardware is evolving to provide
performance enhancing features that can't often be quantified
at design/compile time.

I have a little puzzle on that one too. I have some verified correct
code for cube root running on x87 and SSE/AVX hardware and when
benchmarked aggressively for blocks of 10^7 cycles it gets progressively
faster -- it can be by as much as a factor of two. I know that others
have seen this effect sometimes too, but it only happens sometimes --
usually on dense, frequently executed x87 code. These are cube root
benchmarks:

How have you eliminated the effects of the rest of the host system
from your measurements?  Are you sure other tasks aren't muddying
the cache?  Or, forcing pages out of RAM?  (Or, *allowing* you better
access to cache and physical memory?)

It is quite reproducible and others see it too. The results are
consistent and depend only on the number of consecutive x87
instructions. The results for non-x87 code are rock solid +/- 1 cycle.

System cbrt on GNU and MS are best avoided entirely - both are amongst
the slowest and least accurate of all those I have ever tested. The
best public algorithm for general use is by Sun for their Unix
library. It is a variant of Kahan's magic constant hack. BSD is slower
but OK.

Most applications aren't scraping around in the mud, trying to eke out a
few processor cycles.  Instead, their developers are more worried about
correctness and long term "support"; I have algorithms that start with
large walls of commentary that essentially say:  "Here There Be Dragons.
Don't muck with this unless you completely understand all of the following
documentation and the reasons behind each of the decisions made in THIS
implementation."

In this case correctness is taken as a given - there are other
performance tests against known difficult edge cases and random input
with a distribution that focusses down on the most difficult region.

[So, when some idiot DOES muck with it, I can just grin and say, "Well, I
guess YOU'LL have to figure out what you did wrong, eh?  Or, figure out how
to restore the original implementation and salvage any other hacks you
may have needed."]

I recall someone left an undocumented booby trap of that sort in a 68k
Unix port, knowing full well that when they left the company (which they
intended to do fairly shortly) someone would fall into it.

We don't need no stinkin' OS!

You may not think you need one but you do need a way to share the
physical resources between the tasks that want to use them.

Single-threaded and trivial designs can usually live without an OS.
I still see people using "superloops" (which should have gone away
in the 70's as they are brittle to maintain).

But, you've really constrained a product's design if it doesn't
exploit the abstractions that an OS supplies.

I have done simple things on PICs with everything in a loop and the odd
ISR like a bare metal LCD display - 16877 has just enough pins for that.

I built a box, some years ago, and recreated a UNIX-like API
for the application.  Client was blown away when I was able to
provide the same functionality that was available on the \"console
port\" (RS232) on ALL the ports.  Simultaneously.  By adding *one*
line of code for each port!

Cuz, you know, sooner or later something like that would be on
the wish list and a more naive implementation would lead to
a costly refactoring.

Cooperative multitasking can be done with interrupts for the realtime
IO but life is a lot easier with a bit of help from an RTOS.

Especially if you truly are operating with timeliness constraints.

How do you know that each task met its deadline?  Can you even quantify
them?  What do you do when a task *misses* its deadline?  How do you
schedule your tasks\' execution to optimize resource utilization?
What happens when you add a task?  Or, change a task\'s workload?

What used to annoy me (and still does) are programmers that think their
task should always have the highest priority over everything else.

In any serious multiprocessor you quickly learn that the task that keeps
everything else loaded with work to do is critical to performance.

--
Martin Brown
 
On 28/02/2023 02:20, Don Y wrote:
On 2/27/2023 4:37 AM, Sylvia Else wrote:
Programs tend to do a lot of array access, and pointer references.
Even if you may think you\'re just crunching data, that has to come
from somewhere, and the results have to be stored somewhere. So the
time and memory consequences of array bound and pointer validation are
not trivial.

The language will dictate what is checked/enforced and, more importantly,
*who* will be responsible for that.  This is true of many potential coder
f*ckups (is it OK for me to divide by zero?  scribble on program TEXT?
etc.)

You can have different interpretations of the language that either do or
don't make stack overflow, array bounds violations and numeric overflow
failures like divide by zero or sqrt(-1) a hard fault.

I recall one teaching compiler that by default would generate a hard
trap code for using an uninitialised variable. It was easy enough to
have it as a compile time warning but you had to RTFM to find out how to
do that. Few of the students ever did - only the brighter ones...
In ASM, I decide what to check and I am responsible for doing the checking.
If you were paranoid, you could use the VAX\'s INDEX opcode to qualify
each reference.  The 68K did it with the CHECK opcode.  (IIRC, intel
has some x86 hooks to do this as well, in hardware).

BOUND dates back to the iAPX 186 (~1982), although I have never seen it used
that way in anger. OS/2's page descriptor tables gave very robust
control of what you could do to a chunk of memory. Any bad access was
terminal for the offender and unable to affect any other processes.
In C, the same largely applies.  There\'s nothing preventing me from
indexing off a pointer (akin to accessing an array subscript) into
la-la-land.

The old smash along strings copying everything until you finally hit a
null terminator is now deprecated. Most compilers warn against it...

\"Old\" languages (and interpreted languages) tended to perform these
checks -- along with some of the newer /langues du jour/ (Java, Rust,
Haskell, etc.)

Quite a few have debugging modes where protective filler is used to make
uninitialised variables visible to the runtime system and so that
overwriting the end of allocated memory by modest amounts harmlessly
corrupts some padding added there for catching fence post errors.

C++, notably, doesn\'t (in the language proper).  The developer can
implement
iterators and accessors for particular classes (and templates) and decide
what a \"bad reference\" really means, in those contexts.  These can be
potentially VERY expensive, depending on the contract(s) that the developer
wants to enforce.

There is no reason why C/C++ could not range check static arrays or even
dynamic ones. It knows at runtime how big they actually are.

The big difficulty in C is with raw pointers to a God alone knows what.
MS delighted in such horrid constructs as a part of the Windows code.

In the desktop world, memory protection mechanisms will more often
catch these errors that the compiler \"blessed\".  As these mechanisms
become increasingly more ubiquitous, we\'ll likely see new run-time
mechanisms evolve to handle them (in my design, a SIGSEGV means your
app gets blacklisted; it\'s broke, why should I waste resources running
it, AGAIN? What *other* faults does it have that I might not catch??)

Flat memory architecture is the enemy of robust behaviour here. If you
can construct a pointer to someone else\'s task and have write access
then there is scope for doing a lot of unintentional damage.

Preventing user data from being executed makes things a lot safer too.

Note there are other means of ensuring your code doesn't (intentionally)
walk off (either) end of an object's domain.  So, coding practices can
go a long way to eliminate the need for run-time checks.

The best of both worlds is a debugging environment that protects the
innocent from the most common mistakes we humans make either by compile
time static code analysis (which has come on a long way from Lint) or by
adding sufficient runtime checks to trap a decent fraction of them.

By far, the biggest runtime "cost" of any programming effort is that
of instrumenting the code for runtime tests as well as acquiring
performance metrics.  But, you're not just looking for bounds checks
but, also, for logical inconsistencies, etc.  (e.g., how did
*Can't* happen?)

Fence post errors with binary logic are particularly bad news.

Like the raise-undercarriage command that wasn't prevented from being
used when the plane was on the ground, or the plane that flipped itself
over when crossing the equator. It is traditional for the Coriolis-force
correction in UK gunnery tables to be wrong in the Southern hemisphere
too - a failure that continued as late as the Falklands war (at least at
the outset). They realised PDQ that the "correction" was doubling the
error!

Comp.risks is littered with coding mistakes that really should never
have happened but the pointy-haired boss said "ship it and be damned".


--
Martin Brown
 
On 3/1/2023 3:29 AM, Martin Brown wrote:
On 28/02/2023 02:20, Don Y wrote:
On 2/27/2023 4:37 AM, Sylvia Else wrote:
Programs tend to do a lot of array access, and pointer references. Even if
you think you're just crunching data, that has to come from somewhere,
and the results have to be stored somewhere. So the time and memory
consequences of array bound and pointer validation are not trivial.

The language will dictate what is checked/enforced and, more importantly,
*who* will be responsible for that.  This is true of many potential coder
f*ckups (is it OK for me to divide by zero?  scribble on program TEXT? etc.)

You can have different implementations of the language that either treat
stack overflow, array-bounds violations and numeric failures like divide
by zero or sqrt(-1) as hard faults, or let them pass silently.

But the language, itself, doesn't (or does) specify these things.
And, the consequences thereof. E.g., for an N-element array I can
form &array[N] (one past the end) but can't *DEreference* it. A
compiler vendor can opt to help me by catching my "misbehaving" but
the language doesn't (or does) mandate that.

How a piece of code behaves also is largely defined by the environment
in which it operates. How *should* strlen(&END_OF_MEMORY) behave
(assuming that location isn't NUL)?
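
A minimal illustration of that form-versus-dereference distinction (the
standard draws the line at one-past-the-end; forming a pointer *before*
the start is already undefined):

    int main()
    {
        int a[10];
        int *end = a + 10;          // forming a one-past-the-end pointer: fine
        // int x = *end;            // dereferencing it: undefined behaviour
        // int *pre = a - 1;        // even forming this one is undefined

        for (int *p = a; p != end; ++p)   // the legitimate use: a loop bound
            *p = 0;

        return int(end - a);        // arithmetic on it is also fine (== 10)
    }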

I recall one teaching compiler that by default would generate a hard trap code
for using an uninitialised variable. It was easy enough to have it as a compile
time warning but you had to RTFM to find out how to do that. Few of the
students ever did - only the brighter ones...

In ASM, I decide what to check and I am responsible for doing the checking.
If you were paranoid, you could use the VAX's INDEX opcode to qualify
each reference.  The 68K did it with the CHK opcode.  (IIRC, Intel
has some x86 hooks to do this as well, in hardware).

BOUND dates back to the iAPX 186 (~1982), although I have never seen it used
that way in anger. OS/2's page descriptor tables gave very robust control of what you
could do to a chunk of memory. Any bad access was terminal for the offender and
unable to affect any other processes.

But an object need not fill an entire page. The appeal of (generally)
segmented architectures is that you could, in theory (with enough
hardware resources), create fine-grained domains for individual
objects. E.g., a 7 byte array.

The \"managed environments\" that many new languages try to create
attempt to mimic the behavior of such hypothetical hardware at
compile time (cuz the hardware to do so doesn\'t exist).
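
e.g., what a descriptor for that 7-byte object might look like when the
checking is done in software instead of by segment hardware (names
invented for illustration):

    #include <cstddef>
    #include <cstdlib>

    struct fat_ptr {
        unsigned char *base;
        std::size_t    length;      // the object's exact extent, e.g. 7 bytes

        unsigned char &operator[](std::size_t i)
        {
            if (i >= length)
                std::abort();       // the software analogue of a segment fault
            return base[i];
        }
    };

    // usage:  fat_ptr p{buffer, 7};   p[6] is fine, p[7] aborts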

In C, the same largely applies.  There's nothing preventing me from
indexing off a pointer (akin to accessing an array subscript) into
la-la-land.

The old smash-along-the-string approach, copying everything until you finally
hit a null terminator, is now deprecated. Most compilers warn against it...

But you can't always constrain an object's size in any meaningful way.
Just like you can't constrain ASM to not go chasing pointers
indefinitely.

\"Old\" languages (and interpreted languages) tended to perform these
checks -- along with some of the newer /langues du jour/ (Java, Rust,
Haskell, etc.)

Quite a few have debugging modes where protective filler is used to make
uninitialised variables visible to the runtime system and so that overwriting
the end of allocated memory by modest amounts harmlessly corrupts some padding
added there for catching fence post errors.

So, release DEBUG code? :> If the language called for those rules to
be in place, then you'd not need to have different DEBUG and RELEASE
images.

These are all "hacks" to work around permissions that the language
affords and that are often misused.
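
The most familiar of those DEBUG-only hacks is assert(): the same source
really does produce two different images:

    #include <cassert>

    int divide(int num, int den)
    {
        assert(den != 0);    // checked in the DEBUG build; compiled out
        return num / den;    // entirely when RELEASE is built with -DNDEBUG
    }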

C++, notably, doesn't (in the language proper).  The developer can implement
iterators and accessors for particular classes (and templates) and decide
what a "bad reference" really means, in those contexts.  These can be
potentially VERY expensive, depending on the contract(s) that the developer
wants to enforce.

There is no reason why C/C++ could not range check static arrays or even
dynamic ones. It knows at runtime how big they actually are.

But the language doesn't afford that guarantee. Else we wouldn't have
buffer overrun errors.

The big difficulty in C is with raw pointers to a God alone knows what.
MS delighted in such horrid constructs as a part of the Windows code.

The advantage, there, is that one can build designs that can't
conveniently be pigeon-holed into the rules of the language.
But, you do so without the language protecting you from your
"transgressions".

In the desktop world, memory protection mechanisms will more often
catch these errors that the compiler "blessed".  As these mechanisms
become increasingly ubiquitous, we'll likely see new run-time
mechanisms evolve to handle them (in my design, a SIGSEGV means your
app gets blacklisted; it's broke, why should I waste resources running
it, AGAIN? What *other* faults does it have that I might not catch??)

Flat memory architecture is the enemy of robust behaviour here. If you can
construct a pointer to someone else's task and have write access then there is
scope for doing a lot of unintentional damage.

One could still layer a protection system atop a flat address space.
I see that as the inevitable direction for hardware to follow. The
silicon implementation(s) is well understood -- as are the mechanisms
to build and maintain TLBs. When folks start looking at eliminating
"bad behaviors" by spending a bit on hardware, this will be one of the
standout conclusions reached. Esp as codebases get larger and more complex
(complex: doesn't fit in a single brain)
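
A POSIX-flavoured sketch of that layering (page granularity for now, so
far coarser than per-object, but the same principle):

    #include <sys/mman.h>
    #include <unistd.h>
    #include <cstring>

    int main()
    {
        size_t page = (size_t)sysconf(_SC_PAGESIZE);

        void *p = mmap(nullptr, page, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) return 1;

        std::memset(p, 0, page);        // populate while writable
        mprotect(p, page, PROT_READ);   // seal it: a stray write now faults

        // *(char *)p = 1;              // would raise SIGSEGV here, by design

        munmap(p, page);
        return 0;
    }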

> Preventing user data from being executed makes things a lot safer too.

Thankfully, you can address encapsulation with code instead of completely
relying on hardware. E.g., I can publish a list of function entry
points and still prevent *you* from accessing any to which I think you
*shouldn't* have access. And, *dynamically* change those ACLs.

So, instead of archaic notions of privilege, I can dole it out
in a very fine-grained manner. E.g., *you* can invoke shutdown()
even though you don't have any other privileges (but someone else
can't do that even though they *could* do all sorts of other privileged
operations)
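
A toy sketch of that per-operation doling-out (every name here is
invented for illustration): the right to shut down is attached to the
operation itself, checked at the point of use, and the set of rights can
change at runtime:

    #include <set>
    #include <stdexcept>
    #include <string>

    struct acl {
        std::set<std::string> allowed;        // operations this caller may invoke

        void require(const std::string &op) const
        {
            if (!allowed.count(op))
                throw std::runtime_error("permission denied: " + op);
        }
    };

    void shutdown_system(const acl &caller)
    {
        caller.require("shutdown");           // may be the caller's *only* right
        // ... perform the actual shutdown ...
    }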

Note there are other means of ensuring your code doesn't (intentionally)
walk off (either) end of an object's domain.  So, coding practices can
go a long way to eliminate the need for run-time checks.

The best of both worlds is a debugging environment that protects the innocent
from the most common mistakes we humans make either by compile time static code
analysis (which has come on a long way from Lint) or by adding sufficient
runtime checks to trap a decent fraction of them.

I think that only exists (in the languages described here) as a matter
of discipline. If you are pedantic, you can sprinkle lots of invariants
through your code -- both to serve as documentation and to
*prove* that the code *is* behaving per contract.

But, there's nothing FORCING you to do so. People being lazy, they
opt to rationalize how the invariants are "so obvious" that they need
not be formally specified.

Then, they wonder why their code doesn't work! (isn't it OBVIOUS?)
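
A sketch of the pedantic version -- a contract check that stays in the
shipped image (unlike assert()), so "Can't happen" leaves a trace when
it does:

    #include <cstdio>
    #include <cstdlib>

    // An always-on invariant check; not compiled out by NDEBUG.
    #define INVARIANT(cond)                                            \
        do {                                                           \
            if (!(cond)) {                                             \
                std::fprintf(stderr, "invariant failed: %s (%s:%d)\n", \
                             #cond, __FILE__, __LINE__);               \
                std::abort();                                          \
            }                                                          \
        } while (0)

    int lookup(const int *table, int n, int key)
    {
        INVARIANT(table != nullptr && n > 0);   // precondition, documented and enforced
        for (int i = 0; i < n; i++)
            if (table[i] == key)
                return i;
        return -1;                              // contract: -1 means "not present"
    }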

By far, the biggest runtime "cost" of any programming effort is that
of instrumenting the code for runtime tests as well as acquiring
performance metrics.  But, you're not just looking for bounds checks
but, also, for logical inconsistencies, etc.  (e.g., how did
*Can't* happen?)

Fence post errors with binary logic are particularly bad news.

Like the raise-undercarriage command that wasn't prevented from being used when
the plane was on the ground, or the plane that flipped itself over when crossing
the equator. It is traditional for the Coriolis-force correction in UK gunnery
tables to be wrong in the Southern hemisphere too - a failure that continued as
late as the Falklands war (at least at the outset). They realised PDQ that the
"correction" was doubling the error!

Too funny!

Comp.risks is littered with coding mistakes that really should never have
happened but the pointy-haired boss said "ship it and be damned".

I think many "coders" just treat it like a job. How fussy do you get
about waxing your car or mowing your lawn? Surely you can tell when
you've "skimped"... why?

I see a lot of folks blaming their PHBs for their bugs. But, I see
a similar number of bugs in FOSS software. Where was your "boss"
when you were writing those? (Or, do you now resort to a different
excuse: "I wasn't being PAID to do that work..."?)

Testing is tedious and, to many, boring. Often, all they do is
*verify* known good cases without looking at their code to
hypothesize cases that will misfire!

"What happens if I unplug this communication cable... NOW?
What about *now*? What if I plug this one back in just as
I'm unplugging this other?"

"You're not supposed to DO that!"

"Then why did your code LET ME?!"
 
