dead programming languages...

On 23/02/2023 03:15, Sylvia Else wrote:
On 23-Feb-23 6:05 am, John Larkin wrote:
https://en.wikipedia.org/wiki/Timeline_of_programming_languages


Now I'm told that we should be coding hard embedded products in C++ or
Rust.


But can you afford the memory and time overheads inherent in run-time
range checks of things like array accesses?

Today CPU time is cheap and most embedded controllers are way faster
than they need to be to do the job (this was not always true).
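
For concreteness about what the checks under discussion look like (a minimal
sketch, standard C++ only, nobody's actual product code): std::array offers
both flavours, and the cost of .at() is a compare-and-branch per access plus
the failure path.

    #include <array>
    #include <cstdio>
    #include <stdexcept>

    int main()
    {
        std::array<int, 4> regs{1, 2, 3, 4};
        int i = 7;  // deliberately out of range

        // Unchecked: compiles to a bare load; no overhead, no safety net.
        // int v = regs[i];          // undefined behaviour here

        // Checked: a compare-and-branch on every access; throws on failure.
        try {
            std::printf("%d\n", regs.at(i));
        } catch (const std::out_of_range&) {
            std::printf("range check tripped\n");
        }
        return 0;
    }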

Checks and asserts can help in debugging code, but if any of them have
side effects then it can make for unwelcome interesting behaviour when
the final optimised version is created.

The standard trick is to develop it with all the range checking on and
some form of postmortem call stack dump if it ever crashes; then disable
all the checking in production code, but leave the postmortem stack
traceback and keep a copy of the map file and production code.

That way, with a bit of luck, you can identify and eliminate any in-field
failures reliably. This presupposes you have a way to communicate with
the embedded firmware and do a soft reset to regain control.
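
As a hedged illustration of that trick on a Cortex-M class part (the handler
shape is the well-known pattern; the section name, magic value and reporting
details are assumptions, not anyone's actual code):

    // Hardware pushes r0-r3, r12, lr, pc, xpsr on exception entry.
    #include <cstdint>

    extern "C" void NVIC_SystemReset(void);   // CMSIS soft reset

    struct FaultFrame {
        std::uint32_t r0, r1, r2, r3, r12, lr, pc, xpsr;
    };

    // Keep the record in RAM the startup code does not zero, so it
    // survives the soft reset used to regain control.
    __attribute__((section(".noinit"))) FaultFrame    last_fault;
    __attribute__((section(".noinit"))) std::uint32_t fault_magic;

    extern "C" void HardFault_Handler_C(const FaultFrame* frame)
    {
        last_fault  = *frame;       // pc/lr plus the kept map file locate the crash
        fault_magic = 0xDEADFA11u;  // checked after reboot before reporting
        NVIC_SystemReset();
    }

    // Naked shim: work out which stack holds the frame, pass it along.
    extern "C" __attribute__((naked)) void HardFault_Handler(void)
    {
        __asm volatile(
            "tst lr, #4            \n"
            "ite eq                \n"
            "mrseq r0, msp         \n"
            "mrsne r0, psp         \n"
            "b HardFault_Handler_C \n");
    }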

Unlike hardware, which wears out with time, software should become more
reliable with accumulated runtime in different environments.

--
Martin Brown
 
On 2/23/2023 2:18 AM, Martin Brown wrote:

Maybe C++ provided that you only use a restricted subset of the language much
like the Spark dialect of High Integrity Ada.

The problem with C++ -- and many OOP languages -- is there is a lot that goes on
"between the lines" (or, in the whitespace between operators!). You
*really* have to be vigilant if you are trying to work in a fixed-resource
environment.

And, it is inherently harder to guesstimate the resources you will need
for a design (which will be reified in the form of real chips!).

Modula2 came close to being ideal
(particularly back in its brief heyday) but never took off.

Blech.

Close enough to
bare metal to do device drivers but with strong typing.

The one that used to drive me crazy was hardware engineers who mapped DACs and
ADCs onto the least significant bits of a register - requiring significant code
changes when a new DAC/ADC with more bits came along.

You've obviously never encountered a brain-damaged hardware design
where the designer -- knowing that the coder can do shifts in
software -- puts a latch in a design and requires a tight
loop of shift-store-repeat where just letting the hardware clock
a parallel-loaded register out would be infinitely more efficient
(for essentially the same cost!)

[Hardware types tend to really misunderstand software]

User-defined strong types that enforce their own usage are probably worth the
price of admission alone; e.g. quantities in newton/meters should be of type
NewtonMeters and foot/pounds should be FootPounds, and casually performing
operations with the two causes a compile error.
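
A minimal sketch of what that looks like in C++ (the type names and conversion
factor are invented for illustration): the wrapper adds nothing at runtime,
and mixing the units without an explicit conversion refuses to compile.

    // Illustrative only; names and factor invented for the example.
    struct NewtonMeters { double v; };
    struct FootPounds   { double v; };

    constexpr NewtonMeters operator+(NewtonMeters a, NewtonMeters b)
    {
        return {a.v + b.v};           // same-unit arithmetic is allowed
    }

    constexpr NewtonMeters to_newton_meters(FootPounds x)
    {
        return {x.v * 1.355818};      // mixing units must be spelled out
    }

    int main()
    {
        NewtonMeters torque{12.0};
        FootPounds   spec{9.0};

        NewtonMeters ok = torque + to_newton_meters(spec);  // fine
        // NewtonMeters bad = torque + spec;  // compile error, as desired
        return ok.v > 0 ? 0 : 1;
    }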

Unfortunately average to bad programmers will get around such strong typing
restrictions by the random application of casts :(
(until it compiles)

However, this quickly starts to 'smell'; a prudent coder would
realize that he's doing something wrong. A maintenance person
would likely take a hatchet to it in a major refactoring.

By making all operations visible (incl. casts), it makes you think
"why is all of this 'work' being done for such a 'simple' problem?"
(Unless, of course, you're just a coder -- how do I get it to
*run* and LOOK like it is performing correctly?)

Programs should where possible use one consistent set of units throughout
unless their task is converting between different units.

If you can design EVERYTHING in a system (no libraries or third-party
components), then you *may* be able to achieve such a consistent
environment. I've found it an interesting challenge to sort out the
"base methods" that every object should implement. I suspect
third-party components wouldn't even address those issues as they
can/do extend beyond their domains.

But, most developers have to act as "intellectual glue" between systems
and components that were designed by folks with no awareness of the others
who *might* be involved (at some later date/project). How much effort
is expended trying to massage data into a form that "component X"
expects, even if it is not "natural" for the application?

Just think of how much code has had to be written and rewritten because
of nonsensical "efficiency" concerns, over the years. I'm sure everyone
is looking forward to 2038! (Not) Make your reservations now... seats are
selling fast! :>

Think about how many times folks have imposed (completely!) arbitrary
constraints on things -- simply because the examples that came to mind,
to them, at the time, *seemed* to "fit". Instead of asking why such a limit
*should* exist...

Look at software that was tailored to older iron and see how porting
it to modern architectures is essentially impossible; the designers
shoehorned things into the existing architectural model instead of
thinking about how they logically should be "packed".

[It's not unique to software development. How often have you filled out a
paper form only to discover that they've given you 1.5 inches to write in
your entire address? And, when you look at the form, carefully, you realize
this decision was made just so the next field could line up with some
arbitrary feature on the page! Or, 25 nicely printed "boxes" to fit a
character at a time -- only to discover that your street name is
26 characters in length? (which character should I elide? maybe skip the
last one? or, the space between house number and street name? or...)]

I recall someone once ordered a gross of grosses due to their odd
misunderstanding of the ordering system and their mistake only became apparent
when a 40T trailer arrived instead of the usual van.

Too funny. OTOH, one would assume they would notice the *price*
(weight, etc.) was not what they expected and wonder why...!
 
On 2/23/2023 2:28 AM, Martin Brown wrote:
On 23/02/2023 03:15, Sylvia Else wrote:
But can you afford the memory and time overheads inherent in run-time range
checks of things like array accesses?

Today CPU time is cheap and most embedded controllers are way faster than they
need to be to do the job (this was not always true).

Actually, it's not *always* the case, today. If you've got a product that
has to operate off a *tiny* (8mm) lithium coin cell for 8+ hours, you really
start to think about how many opcode fetches you can do and whether or not you
might be able to lower the system clock frequency to get a bit more out of
that fixed "power" resource.

Checks and asserts can help in debugging code, but if any of them have side
effects then it can make for unwelcome interesting behaviour when the final
optimised version is created.

In some markets, the "debug" code (assertions, invariants, etc.) can't
be present in the production code. The reasoning is that they are
"not supposed to happen". Including them is a risk. Effectively:

    if (FALSE) {
        // do a bunch of stuff
    }

If FALSE truly *is* FALSE, then why is the code there? If it's
*not* FALSE, then how will the product react when "a bunch of stuff"
actually happens at some future runtime?
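
For what it's worth, the standard assert() macro has exactly this contract;
a small sketch (read_status and READY are hypothetical names) of checks that
exist in development builds and vanish, side effects and all, when NDEBUG is
defined for production:

    #include <assert.h>

    int get_sample(const int *buf, int len, int i)
    {
        /* Present in development builds; with -DNDEBUG the macro expands
           to nothing, so no code -- and no side effects -- remain. */
        assert(buf != 0);
        assert(i >= 0 && i < len);
        return buf[i];
    }

    /* The corollary to Martin's warning: never put work inside the check.
       assert(read_status() == READY) vanishes in production, and the
       read (hypothetical name) vanishes with it. */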

The standard trick is to develop it with all the range checking on and some
form of postmortem call stack dump if it ever crashes; then disable all the
checking in production code, but leave the postmortem stack traceback and keep
a copy of the map file and production code.

Or, use \"black boxes\" to log event streams. The code *always* runs
so the above problem is not an issue. And, if the shit DOES hit
the fan, you can extract information from the various BB\'s tobetter
understand what happened.
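
A hedged sketch of such a black box (all names invented here): a fixed-size
ring of timestamped event codes, cheap enough to leave enabled in production.

    #include <stdint.h>

    #define BB_SLOTS 64                /* power of two, for cheap wrap */

    struct bb_event { uint32_t when; uint16_t code; uint16_t arg; };

    static struct bb_event bb_ring[BB_SLOTS];
    static uint32_t        bb_head;    /* total events ever logged */

    /* Always compiled in; one store per event, no side effects elsewhere. */
    void bb_log(uint32_t now, uint16_t code, uint16_t arg)
    {
        struct bb_event *e = &bb_ring[bb_head++ & (BB_SLOTS - 1)];
        e->when = now;
        e->code = code;
        e->arg  = arg;
    }

    /* After a field failure, dump bb_ring (oldest entry at bb_head & mask)
       over whatever link the product has and read the tale backwards. */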

That way, with a bit of luck, you can identify and eliminate any in-field
failures reliably. This presupposes you have a way to communicate with the
embedded firmware and do a soft reset to regain control.

Unlike hardware, which wears out with time, software should become more reliable
with accumulated runtime in different environments.

The difference (besides the obvious *complexity* differences) is that software
doesn't maintain a fixed feature set. Imagine redesigning your hardware
for continually changing operating conditions and wondering why you have
such a high failure rate! "Gee, the 2A version worked just fine! Why are
all of the 5A versions blowing up??"
 
On 23-Feb-23 8:22 pm, Ricky wrote:
On Thursday, February 23, 2023 at 3:51:26 AM UTC-5, Sylvia Else wrote:
On 23-Feb-23 4:01 pm, Ricky wrote:
On Wednesday, February 22, 2023 at 10:15:36 PM UTC-5, Sylvia Else wrote:
On 23-Feb-23 6:05 am, John Larkin wrote:
https://en.wikipedia.org/wiki/Timeline_of_programming_languages


Now I'm told that we should be coding hard embedded products in C++ or
Rust.

But can you afford the memory and time overheads inherent in run-time
range checks of things like array accesses?

You mean as opposed to programs randomly failing in the field?

It's not as if the checks make a program work properly, they only make
the failure mode clearer. If an improper access occurs frequently, then
this would likely show up during development/testing. If it occurs
rarely, then you'll still see what look like random failures in the field.

If you have checks in place, you will know something about what failed and where to look in the code.


Whether it\'s worth the extra cost of hardware will depend on how many
incarnations there are going to be.

Extra hardware cost???

Faster processor, more memory.

Sylvia.
 
On 2/23/2023 13:00, Sylvia Else wrote:
On 23-Feb-23 8:22 pm, Ricky wrote:
On Thursday, February 23, 2023 at 3:51:26 AM UTC-5, Sylvia Else wrote:
On 23-Feb-23 4:01 pm, Ricky wrote:
On Wednesday, February 22, 2023 at 10:15:36 PM UTC-5, Sylvia Else
wrote:
On 23-Feb-23 6:05 am, John Larkin wrote:
https://en.wikipedia.org/wiki/Timeline_of_programming_languages


Now I'm told that we should be coding hard embedded products in
C++ or
Rust.

But can you afford the memory and time overheads inherent in run-time
range checks of things like array accesses?

You mean as opposed to programs randomly failing in the field?

It's not as if the checks make a program work properly, they only make
the failure mode clearer. If an improper access occurs frequently, then
this would likely show up during development/testing. If it occurs
rarely, then you'll still see what look like random failures in the
field.

If you have checks in place, you will know something about what failed
and where to look in the code.


Whether it\'s worth the extra cost of hardware will depend on how many
incarnations there are going to be.

Extra hardware cost???

Faster processor, more memory.

Sylvia.

It is not just about the cost of hardware. It is more about doing
*the same* thing that was done before - with bloated resources.
Which is sort of forgivable; however, were it not for the bloat,
so much more could be done using today's CPU/memory resources than
virtually everyone (well, except me :) does.
Talk about *gigabytes* of RAM and getting a video player complaining
about having insufficient memory... (yes, I had that not long ago,
on Windows 10 with 8G RAM playing a 2G mkv file....). And if it
were only that; this is just the tip of the iceberg. Everyone is
hasty to just sell something; as long as people can't see to
what extent it is not even half baked, they just go ahead.

------------------------------------------------------
Dimiter Popoff, TGI http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/
 
On Thursday, February 23, 2023 at 6:00:51 AM UTC-5, Sylvia Else wrote:
On 23-Feb-23 8:22 pm, Ricky wrote:
On Thursday, February 23, 2023 at 3:51:26 AM UTC-5, Sylvia Else wrote:
On 23-Feb-23 4:01 pm, Ricky wrote:
On Wednesday, February 22, 2023 at 10:15:36 PM UTC-5, Sylvia Else wrote:
On 23-Feb-23 6:05 am, John Larkin wrote:
https://en.wikipedia.org/wiki/Timeline_of_programming_languages


Now I'm told that we should be coding hard embedded products in C++ or
Rust.

But can you afford the memory and time overheads inherent in run-time
range checks of things like array accesses?

You mean as opposed to programs randomly failing in the field?

It's not as if the checks make a program work properly, they only make
the failure mode clearer. If an improper access occurs frequently, then
this would likely show up during development/testing. If it occurs
rarely, then you'll still see what look like random failures in the field.

If you have checks in place, you will know something about what failed and where to look in the code.


Whether it\'s worth the extra cost of hardware will depend on how many
incarnations there are going to be.

Extra hardware cost???

Faster processor, more memory.

Sylvia.

0.01% faster... 10 bytes more memory. WTF???

--

Rick C.

+- Get 1,000 miles of free Supercharging
+- Tesla referral code - https://ts.la/richard11209
 
On Thursday, February 23, 2023 at 9:04:31 AM UTC-5, Dimiter_Popoff wrote:
On 2/23/2023 13:00, Sylvia Else wrote:
On 23-Feb-23 8:22 pm, Ricky wrote:
On Thursday, February 23, 2023 at 3:51:26 AM UTC-5, Sylvia Else wrote:
On 23-Feb-23 4:01 pm, Ricky wrote:
On Wednesday, February 22, 2023 at 10:15:36 PM UTC-5, Sylvia Else
wrote:
On 23-Feb-23 6:05 am, John Larkin wrote:
https://en.wikipedia.org/wiki/Timeline_of_programming_languages


Now I'm told that we should be coding hard embedded products in
C++ or
Rust.

But can you afford the memory and time overheads inherent in run-time
range checks of things like array accesses?

You mean as opposed to programs randomly failing in the field?

It's not as if the checks make a program work properly, they only make
the failure mode clearer. If an improper access occurs frequently, then
this would likely show up during development/testing. If it occurs
rarely, then you'll still see what look like random failures in the
field.

If you have checks in place, you will know something about what failed
and where to look in the code.


Whether it\'s worth the extra cost of hardware will depend on how many
incarnations there are going to be.

Extra hardware cost???

Faster processor, more memory.

Sylvia.
It is not just about the cost of hardware. It is more about doing
*the same* thing that was done before - with bloated resources.
Which is sort of forgivable; however, were it not for the bloat,
so much more could be done using today's CPU/memory resources than
virtually everyone (well, except me :) does.
Talk about *gigabytes* of RAM and getting a video player complaining
about having insufficient memory... (yes, I had that not long ago,
on Windows 10 with 8G RAM playing a 2G mkv file....). And if it
were only that; this is just the tip of the iceberg. Everyone is
hasty to just sell something; as long as people can't see to
what extent it is not even half baked, they just go ahead.

Anytime someone talks about "bloat" in software, I realize they don't program.

It's like electric cars. The only people who complain about them are the people who don't drive them.

--

Rick C.

++ Get 1,000 miles of free Supercharging
++ Tesla referral code - https://ts.la/richard11209
 
On 2/23/2023 17:12, Ricky wrote:
On Thursday, February 23, 2023 at 9:04:31 AM UTC-5, Dimiter_Popoff wrote:
On 2/23/2023 13:00, Sylvia Else wrote:
On 23-Feb-23 8:22 pm, Ricky wrote:
On Thursday, February 23, 2023 at 3:51:26 AM UTC-5, Sylvia Else wrote:
On 23-Feb-23 4:01 pm, Ricky wrote:
On Wednesday, February 22, 2023 at 10:15:36 PM UTC-5, Sylvia Else
wrote:
On 23-Feb-23 6:05 am, John Larkin wrote:
https://en.wikipedia.org/wiki/Timeline_of_programming_languages


Now I'm told that we should be coding hard embedded products in
C++ or
Rust.

But can you afford the memory and time overheads inherent in run-time
range checks of things like array accesses?

You mean as opposed to programs randomly failing in the field?

It's not as if the checks make a program work properly, they only make
the failure mode clearer. If an improper access occurs frequently, then
this would likely show up during development/testing. If it occurs
rarely, then you'll still see what look like random failures in the
field.

If you have checks in place, you will know something about what failed
and where to look in the code.


Whether it\'s worth the extra cost of hardware will depend on how many
incarnations there are going to be.

Extra hardware cost???

Faster processor, more memory.

Sylvia.
It is not just about the cost of hardware. It is more about doing
*the same* thing that was done before - with bloated resources.
Which is sort of forgivable; however, were it not for the bloat,
so much more could be done using today's CPU/memory resources than
virtually everyone (well, except me :) does.
Talk about *gigabytes* of RAM and getting a video player complaining
about having insufficient memory... (yes, I had that not long ago,
on Windows 10 with 8G RAM playing a 2G mkv file....). And if it
were only that; this is just the tip of the iceberg. Everyone is
hasty to just sell something; as long as people can't see to
what extent it is not even half baked, they just go ahead.

Anytime someone talks about "bloat" in software, I realize they don't program.

It's like electric cars. The only people who complain about them are the people who don't drive them.

LOL, you should work on your realization abilities.
You have never communicated with a person who has programmed more
than I have.
Checking who you are conversing with is also a good idea before
saying something stupid again.
 
On Wed, 22 Feb 2023 22:02:47 -0500, bitrex <user@example.net> wrote:

On 2/22/2023 2:05 PM, John Larkin wrote:
https://en.wikipedia.org/wiki/Timeline_of_programming_languages


Now I'm told that we should be coding hard embedded products in C++ or
Rust.

User-defined strong types that enforce their own usage are probably
worth the price of admission alone; e.g. quantities in newton/meters
should be of type NewtonMeters and foot/pounds should be FootPounds, and
casually performing operations with the two causes a compile error.

Yikes. Most of the embedded products that I coded were in assembly.

Most type conversions took zero nanoseconds.
 
For embedded programming? What choice do you have? Unless you're planning to write your own compiler, you use the available compilers for the IC and you learn the embedded IC's dialect for that language.


https://www.st.com/en/development-tools/stm32-software-development-tools.html

https://www.analog.com/en/design-center/evaluation-hardware-and-software/software/adswt-cces.html

https://www.microchip.com/en-us/tools-resources/develop/mplab-xc-compilers

https://www.ti.com/design-resources/embedded-development/ccs-development-tools/compilers.html

https://www.nec.com/en/global/solutions/hpc/sx/tools.html

https://www.iar.com/ewarm

https://www.keil.com/
 
On Wed, 22 Feb 2023 22:02:47 -0500, bitrex <user@example.net> wrote:

On 2/22/2023 2:05 PM, John Larkin wrote:
https://en.wikipedia.org/wiki/Timeline_of_programming_languages


Now I'm told that we should be coding hard embedded products in C++ or
Rust.

User-defined strong types that enforce their own usage are probably
worth the price of admission alone; e.g. quantities in newton/meters
should be of type NewtonMeters and foot/pounds should be FootPounds, and
casually performing operations with the two causes a compile error.

What did you use the unit newton/meter for? I guess it could define
the stiffness of a spring.

How about foot/pound? That could be softness.
 
On Thu, 23 Feb 2023 09:18:02 +0000, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 23/02/2023 03:02, bitrex wrote:
On 2/22/2023 2:05 PM, John Larkin wrote:
https://en.wikipedia.org/wiki/Timeline_of_programming_languages


Now I'm told that we should be coding hard embedded products in C++ or
Rust.

Maybe C++ provided that you only use a restricted subset of the language
much like the Spark dialect of High Integrity Ada. Modula2 came close to
being ideal (particularly back in its brief heyday) but never took off.
Close enough to bare metal to do device drivers but with strong typing.

The one that used to drive me crazy was hardware engineers who mapped
DACs and ADCs onto the least significant bits of a register - requiring
significant code changes when a new DAC/ADC with more bits came along.

That's the way some chips come. I'd expect that some code changes will
be necessary when a new ADC or DAC is installed.

One ADC that we use has a bit in a register that sets whether the SPI
interface clocks on the rising or falling edge.
 
On 2/23/2023 10:59 AM, John Larkin wrote:
On Wed, 22 Feb 2023 22:02:47 -0500, bitrex <user@example.net> wrote:

On 2/22/2023 2:05 PM, John Larkin wrote:
https://en.wikipedia.org/wiki/Timeline_of_programming_languages


Now I'm told that we should be coding hard embedded products in C++ or
Rust.

User-defined strong types that enforce their own usage are probably
worth the price of admission alone; e.g. quantities in newton/meters
should be of type NewtonMeters and foot/pounds should be FootPounds, and
casually performing operations with the two causes a compile error.

Yikes. Most of the embedded products that I coded were in assembly.

Most type conversions took zero nanoseconds.

These kinds of abstractions rarely have any runtime costs associated with
them; modern compilers are very intelligent. If all that's really
happening is you're doing elementary operations with the literal types,
then that's what it'll compile to, even on the lowest optimization
settings.
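
As a sanity check of that claim, a sketch (Millivolts is invented for the
example): a one-word wrapper type gives the compiler nothing to emit beyond
the underlying arithmetic.

    // Illustrative only: a one-word wrapper with a trivial operator.
    struct Millivolts { int v; };

    inline Millivolts operator+(Millivolts a, Millivolts b)
    {
        return {a.v + b.v};
    }

    int sum_mv(Millivolts a, Millivolts b)
    {
        // Typically compiles to the same single add as plain ints;
        // the type exists only at compile time.
        return (a + b).v;
    }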
 
On 2/23/2023 18:08, John Larkin wrote:
On Thu, 23 Feb 2023 09:18:02 +0000, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 23/02/2023 03:02, bitrex wrote:
On 2/22/2023 2:05 PM, John Larkin wrote:
https://en.wikipedia.org/wiki/Timeline_of_programming_languages


Now I'm told that we should be coding hard embedded products in C++ or
Rust.

Maybe C++ provided that you only use a restricted subset of the language
much like the Spark dialect of High Integrity Ada. Modula2 came close to
being ideal (particularly back in its brief heyday) but never took off.
Close enough to bare metal to do device drivers but with strong typing.

The one that used to drive me crazy was hardware engineers who mapped
DACs and ADCs onto the least significant bits of a register - requiring
significant code changes when a new DAC/ADC with more bits came along.

That's the way some chips come. I'd expect that some code changes will
be necessary when a new ADC or DAC is installed.

Indeed, that's how most I have seen come, too. But he is right, of
course: put the ADC data left-justified (MSB-aligned) and you can just
use the register as an integer of its full width without even knowing
the number of real bits.
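
A sketch of why that helps (the register address and names are made up):
against a left-justified result, the same code keeps working when a 10-bit
converter is swapped for a 12- or 14-bit one, because the extra resolution
just fills in low-order bits.

    #include <stdint.h>

    /* Hypothetical memory-mapped ADC result register, MSB-aligned. */
    #define ADC_RESULT (*(volatile uint16_t *)0x40012400u)

    /* Full-scale reading in "16-bit" units, independent of how many bits
       the converter really has: a 10-bit part reads N << 6, a 12-bit
       part N << 4 -- same code either way. */
    uint16_t read_adc(void)
    {
        return ADC_RESULT;
    }

    /* Contrast: right-justified at the LSB end, the scale factor changes
       with every new converter, so the code changes with it. */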
 
On 2/22/2023 11:14 PM, Don Y wrote:
On 2/22/2023 9:00 PM, Clifford Heath wrote:
On 23/02/23 14:02, bitrex wrote:
On 2/22/2023 2:05 PM, John Larkin wrote:
https://en.wikipedia.org/wiki/Timeline_of_programming_languages

Now I'm told that we should be coding hard embedded products in C++ or
Rust.

User-defined strong types that enforce their own usage are probably
worth the price of admission alone; e.g. quantities in newton/meters
should be of type NewtonMeters and foot/pounds should be FootPounds,
and casually performing operations with the two causes a compile error.

On the contrary, the language should automatically provide the
appropriate conversions.

I disagree, esp. for anything beyond simple types (i.e., promote a
char to an int and hope the developer truly understands how signedness
is handled, etc.)

Pascal?

Requiring an explicit cast reassures me (the NEXT guy looking at
the code) that the developer actually intended to do what
he's doing in the way he's doing it -- even if it "makes sense".
This is what I liked most about C++ (overloading operators
so the *syntax* was clearer by hiding the machinery -- but not
eliminating it!)

How many times do you see an int being used as a pointer?
Is it *really* intended to be a pointer in THIS context?
I get tired of having to chase down compiler warnings
of this sort of thing in inherited code: "If you WANT
it to be a pointer, then explicitly cast it as such!
Don't just count on the compiler to *use* it as one!"

C++11 and later include an optional type in <cstdint> called "uintptr_t",
which can hold a data pointer, and is what you'd reinterpret_cast a data
pointer to if you needed to do something unusual with it.
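
A small sketch of the round trip (standard C++, nothing vendor-specific);
the casts are loud on purpose, which is the "visible intent" argued for above.

    #include <cstdint>

    int main()
    {
        int  value = 42;
        int* p     = &value;

        // Pointer -> integer: explicit, greppable, obviously deliberate.
        std::uintptr_t bits = reinterpret_cast<std::uintptr_t>(p);

        // ...do something unusual (tag bits, logging, hashing)...

        // Integer -> pointer: round-tripping the same value is defined.
        int* q = reinterpret_cast<int*>(bits);

        return (*q == 42) ? 0 : 1;
    }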
 
On 2/23/2023 9:12 AM, bitrex wrote:
These kinds of abstractions rarely have any runtime costs associated with them;
modern compilers are very intelligent.

The point of a type system (instead of just the underlying mechanisms
of the architecture) is to ensure what you are doing "makes sense".
Inches and seconds can both be expressed as integers. So, the
hardware will gladly add one to the other. But, does it make
sense to be doing so, in the application domain?

If all that's really happening is you're
doing elementary operations with the literal types then that's what it'll
compile to, even on the lowest optimization settings.

OTOH, converting an int to a float, or a float to a double (or, vice versa)
can be costly. And, they both represent the same numerical value
(at least over part of the domain) so you don't look like you've gained
anything; you've just made it easier to add two NOW-compatible types together.

One thing that HLLs hide from coders is these costs. I've back-ported
algorithms to really tiny machines and you quickly discover that dealing
with "longs" has a huge cost increase for each operation. (my TCP/IP
stack on 8b processors was very carefully crafted so it didn't waste gobs of
cycles processing longs -- IP addresses -- unnecessarily)
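
An illustrative sketch of that kind of crafting (invented example, not the
stack in question): on an 8-bit CPU, every 32-bit operation expands into a
multi-instruction sequence, so keeping IP addresses as four octets and bailing
at the first mismatch is the cheaper shape.

    #include <stdint.h>

    /* On an 8-bit CPU a 32-bit compare means multi-byte loads and
       carries, whether you see them in the source or not. */
    int ip_match_long(uint32_t a, uint32_t b)
    {
        return a == b;
    }

    /* Keeping the address as four octets makes the cost visible, needs
       no 32-bit temporaries, and stops at the first mismatched byte --
       usually the very first one when scanning for a match. */
    int ip_match_bytes(const uint8_t a[4], const uint8_t b[4])
    {
        uint8_t i;
        for (i = 0; i < 4; i++) {
            if (a[i] != b[i])
                return 0;
        }
        return 1;
    }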

And, of course, if interacting with another architecture, then there are
marshalling and type conversion costs that make function calls considerably
more costly than simply letting the compiler build a stack frame for you!
 
On 2/23/2023 9:04 AM, Dimiter_Popoff wrote:
On 2/23/2023 13:00, Sylvia Else wrote:
On 23-Feb-23 8:22 pm, Ricky wrote:
On Thursday, February 23, 2023 at 3:51:26 AM UTC-5, Sylvia Else wrote:
On 23-Feb-23 4:01 pm, Ricky wrote:
On Wednesday, February 22, 2023 at 10:15:36 PM UTC-5, Sylvia Else
wrote:
On 23-Feb-23 6:05 am, John Larkin wrote:
https://en.wikipedia.org/wiki/Timeline_of_programming_languages


Now I'm told that we should be coding hard embedded products in
C++ or
Rust.

But can you afford the memory and time overheads inherent in run-time
range checks of things like array accesses?

You mean as opposed to programs randomly failing in the field?

It's not as if the checks make a program work properly, they only make
the failure mode clearer. If an improper access occurs frequently, then
this would likely show up during development/testing. If it occurs
rarely, then you'll still see what look like random failures in the
field.

If you have checks in place, you will know something about what
failed and where to look in the code.


Whether it\'s worth the extra cost of hardware will depend on how many
incarnations there are going to be.

Extra hardware cost???

Faster processor, more memory.

Sylvia.

It is not just about the cost of hardware. It is more about doing
*the same* thing that was done before - with bloated resources.
Which is sort of forgivable; however, were it not for the bloat,
so much more could be done using today's CPU/memory resources than
virtually everyone (well, except me :) does.
Talk about *gigabytes* of RAM and getting a video player complaining
about having insufficient memory... (yes, I had that not long ago,
on Windows 10 with 8G RAM playing a 2G mkv file....). And if it
were only that; this is just the tip of the iceberg. Everyone is
hasty to just sell something; as long as people can't see to
what extent it is not even half baked, they just go ahead.

------------------------------------------------------
Dimiter Popoff, TGI             http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/
It's not super-uncommon for games that are otherwise decent to get
released to Steam pretty poorly optimized.

Then some professional gamer with absolute top-of-the-line hardware and
a thousand people watching is laughing as your game stutters and jerks.
It's not good advertising...
 
On Thu, 23 Feb 2023 14:15:28 +1100, Sylvia Else <sylvia@email.invalid>
wrote:

On 23-Feb-23 6:05 am, John Larkin wrote:
https://en.wikipedia.org/wiki/Timeline_of_programming_languages


Now I'm told that we should be coding hard embedded products in C++ or
Rust.


But can you afford the memory and time overheads inherent in run-time
range checks of things like array accesses?

Sylvia.

On a Raspberry Pi Pico? I think not.

In a hard embedded product, what do you do when a range check fails?
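
One hedged answer, tying together the postmortem ideas upthread (bb_log is
the ring-buffer sketch above; soft_reset and now_ticks are hypothetical board
support): record the event and regain control, since there is rarely a user
to show an error to.

    #include <stdint.h>

    extern void     bb_log(uint32_t now, uint16_t code, uint16_t arg);
    extern void     soft_reset(void);    /* hypothetical board support */
    extern uint32_t now_ticks(void);     /* hypothetical timebase */

    #define EV_RANGE_FAULT 0x0BADu

    /* Called wherever a production range check trips. */
    void range_check_failed(uint16_t where)
    {
        bb_log(now_ticks(), EV_RANGE_FAULT, where);  /* leave the tale behind */
        soft_reset();                                /* regain control */
    }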
 
On 2/23/2023 1:51 AM, Sylvia Else wrote:
It's not as if the checks make a program work properly, they only make the
failure mode clearer. If an improper access occurs frequently, then this would
likely show up during development/testing. If it occurs rarely, then you'll
still see what look like random failures in the field.

Whether it\'s worth the extra cost of hardware will depend on how many
incarnations there are going to be.

Newer \"programming environments\" often restrict what you can do to
eliminate/minimize these possibilities.

And, hardware costs keep falling. People used to write single-threaded
applications. Then, they gradually adopted multitasking environments.
Boxes used to be little islands connected (if at all) with serial
ports. Now networking is second nature (in terms of the hardware and
software modules available). We'll be seeing more use of virtual
memory and more sophisticated OSs running on bare metal.

Each step (advance?) makes it easier for a developer to build more complex
designs. There are costs associated with each improvement. But, the
overall trajectory is downward so these costs get easier to absorb.

E.g., I can *prevent* a task from "calling" a particular function,
unless the system has been configured to allow it. I can ensure you
can *only* access certain entry points of routines. And, hide all
of their internal data and workings. Each of these has previously
been an opportunity for bugs to creep into code.
 
On Wed, 22 Feb 2023 20:33:57 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:

On 2/22/2023 8:15 PM, Sylvia Else wrote:
But can you afford the memory and time overheads inherent in run-time range
checks of things like array accesses?

That's a small cost. Modern tools can often make (some) of those
tests at compile time.

The bigger problem with many newer languages is that they rely heavily
on dynamic memory allocation, garbage collection, etc.

And, most of the folks I've met can't look at a line of arbitrary
code and tell you -- with *confidence* -- that they know what it
costs to execute, regardless of how comfortable they are with
the language in question.

Programmers typically can't estimate run times for chunks of their
code. They typically guess pessimistically, by roughly 10:1.

There's nothing unreasonable about an IRQ doing a decent amount of I/O
and signal processing on a small ARM at 100 kHz, if programmed in bare
C.

Even C is becoming difficult, in some cases, to 'second guess'.
And, ASM isn't immune as the hardware is evolving to provide
performance-enhancing features that can't often be quantified
at design/compile time.

E.g., I rely on RPCs extensively in my current design. But, what
will it cost to marshal a given set of arguments, schedule the
packet for delivery, accept it at the service end, run the stub,
pack up results -- and ship them back to the caller/client?
How will this change as the workload seen by a service increases?
Or, the number of clients -- of potentially different services -- on
a particular node increases?

And, while the interface looks like a traditional function call,
the developer now has to consider the possibility that the
service may be unavailable -- even if it WAS available on the
previous line of code! (developers have a tough time thinking
in pseudo-parallel, let alone *true* parallelism)

Lots of CPUs help.

So, the language becomes less of an issue, but the system design
and OS features/mechanisms become more of one (the days of toy RTOSs
are rapidly coming to an end).

We don't need no stinkin' OS!
 
