dead programming languages...

On 2/23/2023 11:00 AM, Wanderer wrote:
For embedded programming? What choice do you have? Unless you're planning to
write your own compiler, you use the available compilers for the IC and you
learn the embedded IC's dialect for that language.

Some languages are interpreted; one can port the interpreter to
a new architecture relatively easily.

Even compiled and JIT'ed languages tend to support most *popular*
processors (and the number of processor variants seems to be
DEcreasing, over time). There are other costs associated with
"fringe" processors!

IME, you want to avoid (or wrap in some abstraction) any processor/vendor
specific hooks esp if you may want to reuse the code on some other
platform. This, of course, is the biggest argument against ASM
(I have the "same" code running on SPARC, x86 and ARM; had much of
it been written in ASM, that would have been a herculean task!)
 
On 2/23/2023 9:24 AM, bitrex wrote:
C++11 and later include an optional type in <cstdint> called "uintptr_t" which
can hold a data pointer, and is what you'd reinterpret_cast a data pointer to
if you needed to do something unusual with it

But, you should be asking yourself, "Why is this 'unusual'?"

Are you sure you are understanding the relationships of the data involved?
Is your interpretation (for efficiency?) bending that relationship in
ways that you may later discover are inappropriate?

E.g., I rely on (*ptr).method in my design, a lot. But, "ptr" is just an
abstraction (in my case) and expecting it to be a real (local) pointer will
lead to things breaking. (for example, passing it to another process that
shares a memory space with you won't work -- it only "makes sense" to YOUR
process)

Instead, it's more akin to a fd so has no "value" (worth) in and
of itself; something must *interpret* the "value" (numeric) to
make use of it.
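
A minimal sketch of the mechanics being discussed (not from the thread): round-tripping a data pointer through uintptr_t with reinterpret_cast. The round trip is only defined within one address space, which is exactly why the "value" means nothing to another process.

#include <cassert>
#include <cstdint>

int main() {
    int value = 42;
    int *p = &value;

    // pointer -> integer; uintptr_t is optional in the standard but
    // present on essentially every hosted implementation
    std::uintptr_t bits = reinterpret_cast<std::uintptr_t>(p);

    // integer -> pointer; guaranteed to compare equal to the original
    int *q = reinterpret_cast<int *>(bits);
    assert(q == p && *q == 42);
}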
 
On Thu, 23 Feb 2023 06:34:25 GMT, Jan Panteltje <alien@comet.invalid>
wrote:

On a sunny day (Wed, 22 Feb 2023 11:05:30 -0800) it happened John Larkin
<jlarkin@highlandSNIPMEtechnology.com> wrote in
<3opcvh111k7igirlsm6anc8eekalofvtcj@4ax.com>:

https://en.wikipedia.org/wiki/Timeline_of_programming_languages


Now I\'m told that we should be coding hard embedded products in C++ or
Rust.

Cplushplush is a crime against humanity
C will do better
But asm is the thing, it will always be there
and gives you full control.
It is not that hard to write an integer math library in asm..

I did that for the 68K. The format was signed 32.32. That worked great
for control systems. Macros made it look like native instructions.

But asm is clumsy for risc CPUs. Plain bare-metal c makes sense for
small instruments.
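
For readers who haven't met the format: a hedged C-style sketch of what a signed 32.32 fixed-point type looks like (not the original 68K macros). The value is a 64-bit integer holding value * 2^32; the multiply needs a 128-bit intermediate, and __int128 here is a GCC/Clang extension, so this is illustrative rather than portable.

#include <stdint.h>

typedef int64_t fix32_32;                       /* Q32.32: value * 2^32 */

#define FIX_ONE ((fix32_32)1 << 32)             /* 1.0 */

static inline fix32_32 fix_from_int(int32_t i)  { return (fix32_32)i << 32; }

static inline fix32_32 fix_add(fix32_32 a, fix32_32 b) { return a + b; }

static inline fix32_32 fix_mul(fix32_32 a, fix32_32 b)
{
    /* (a * b) / 2^32, done with a 128-bit intermediate */
    return (fix32_32)(((__int128)a * b) >> 32);
}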
 
On 2/23/2023 9:49 AM, Don Y wrote:
On 2/23/2023 9:24 AM, bitrex wrote:
C++11 and later include an optional type in <cstdint> called "uintptr_t"
which can hold a data pointer, and is what you'd reinterpret_cast a data
pointer to if you needed to do something unusual with it

But, you should be asking yourself, "Why is this 'unusual'?"

Are you sure you are understanding the relationships of the data involved?
Is your interpretation (for efficiency?) bending that relationship in
ways that you may later discover are inappropriate?

E.g., I rely on (*ptr).method in my design, a lot.  But, "ptr" is just an
abstraction (in my case) and expecting it to be a real (local) pointer will
lead to things breaking.  (for example, passing it to another process that
shares a memory space with you won't work -- it only "makes sense" to YOUR
process)

Instead, it's more akin to a fd so has no "value" (worth) in and
of itself; something must *interpret* the "value" (numeric) to
make use of it.

Said another way, you can't arbitrarily cast it to some other
pointer type, even if the stated method is supported on that type!
 
On 2/23/2023 10:12 AM, bitrex wrote:
I feel like once a "hard embedded" product has enough degrees of freedom in its
inputs to make range check failures a likely occurrence, one isn't really
designing a "hard embedded" thing anymore.

I'm curious what kind of designs these are where people have to do so many
range checks, like I've rarely been concerned a 10 bit i2c ADC is ever going to
accidentally return a value of 1024 or something and then need to accommodate
that possibility in my code.

There are (at least) two different scenarios.

The first (by far largest) is catching latent mistakes that the developer
made in his assumptions and implementation. We still have folks who
don't know how to guard against buffer overruns. Or, access "released"
memory ("No, that's no longer on the stack!"). Or, fencepost errors, etc.

And, in most products, there are no protections against <something>
scribbling <somewhere> that it shouldn't! Without appropriate hardware
protections, how do you know that this hasn\'t happened?

What if you have a malevolent actor involved? You wouldn't let "just any"
piece of code run on your PC/phone/process controller, would you?

And, there are also "unexpected" situations that come up. E.g., what
do you do if someone connects the I/O's (cables) incorrectly? Do you
allow the mechanism to destroy itself or cause harm? Or, do you
*notice* that something is remiss and take steps to protect the
device, operator, data, etc.?

And, things also *break*. What do you do if your accelerometer reports
a signal that represents 400g's? Do you mindlessly believe it? Or,
do you start thinking: "Hey, that *may* actually be the signal
coming from the sensor. Or, it may be a hardware fault. But, in either
case, it doesn't make sense in the context in which I created this
application. Let's panic(). Or, refuse to act on it and continue
our other functions."

[A "check engine" light doesn't always result in the vehicle shutting down!]

What do you do if you get a bad flash and start seeing write (or read!)
induced errors? (Why do disk drives have ECC?)

Or, if the memory access pattern necessitated by the way the *compiler*
organized the code causes similar disturb errors in RAM?

Or, if you've stumbled on a yet-to-be-reported bug in the "chip"?

Bounds checks, invariants, assertions, etc. act to reassure the
developer (and those that follow) that his notion of reality in
the code, at a given point, agrees with the machine\'s notion at
that same point.
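
A hypothetical sketch of that kind of check; the helper names (read_accel_mg, fault_handler, process_motion) and the 10 g limit are made up for illustration, not taken from anyone's product:

#include <cstdint>

enum Fault { FAULT_ACCEL_RANGE };

int32_t read_accel_mg();                      // raw reading, in milli-g
void    process_motion(int32_t a_mg);
void    fault_handler(Fault f);               // log, degrade, or panic()

constexpr int32_t kMaxPlausible_mg = 10000;   // 10 g: beyond what this mechanism can produce

void sample_accelerometer()
{
    const int32_t a_mg = read_accel_mg();

    if (a_mg > kMaxPlausible_mg || a_mg < -kMaxPlausible_mg) {
        // real shock, broken sensor, or bus fault -- either way it is
        // outside the model this code was built on, so don't act on it
        fault_handler(FAULT_ACCEL_RANGE);
        return;
    }

    process_motion(a_mg);
}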

In a desktop/mainframe application, you just abort the program and
rerun it. If the problem persists, you call the vendor.

In an embedded device, it's likely *doing* something "productive".
So, how you detect and handle errors is important. If your
only remedy is a "blue screen", then you've likely not thought
out the consequences of each error, thoroughly.
 
On Wednesday, February 22, 2023 at 6:48:04 PM UTC-8, Phil Hobbs wrote:
On 2023-02-22 19:06, Don Y wrote:
...
Knuth's tAoCP (et al.) don't use any "modern" language that sees
use outside of his texts.
So you program everything in TeX and MIX assembler? ;)

(Don L used to code a whole lot of stuff directly in Postscript, iirc.)

He certainly did. He was a huge proponent. He used to post here occasionally, as I am sure you recall.

I use Tim Edwards' XCircuit to do drawings for documents, including schematics. It is 100% PostScript; I never use the SPICE aspect of it. Once the documents are finally output to PDF, text search works on the drawings too, not just the body text. I mean, I can search for "C66" and it will find it in the body *and* in the schematic drawing. Plus, the drawings are all vector graphics, where zooming never causes pixelation.

ok. Enough of my OT words.
 
On 2/23/2023 12:35 PM, Don Y wrote:
On 2/23/2023 10:12 AM, bitrex wrote:
I feel like once a "hard embedded" product has enough degrees of
freedom in its inputs to make range check failures a likely
occurrence, one isn't really designing a "hard embedded" thing anymore.

I'm curious what kind of designs these are where people have to do so
many range checks, like I've rarely been concerned a 10 bit i2c ADC is
ever going to accidentally return a value of 1024 or something and
then need to accommodate that possibility in my code.

There are (at least) two different scenarios.

The first (by far largest) is catching latent mistakes that the developer
made in his assumptions and implementation.  We still have folks who
don't know how to guard against buffer overruns.  Or, access "released"
memory ("No, that's no longer on the stack!").  Or, fencepost errors, etc.

Using modern C++ sure helps a lot with crap like fencepost errors.
Direct access to raw arrays is a super-privileged operation!
Need-to-know basis. Read-only consumers get iterators; they don't get to
muck with the actual data. And iterators are easy to do loops, etc. over;
they know their own size.

Let's keep dark-ages stuff like using raw pointers to structures as
function arguments to a minimum, please. Not entirely avoidable in
bare-metal programming, but it can at least largely be delegated to the
internal logic of containers and allocators.

Relevant Dr. McCoy quote:

<https://youtu.be/_R_WbAhKyAk?t=26>
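
A minimal sketch of the point above (assumes C++20 for std::span; not from the thread): read-only consumers get a view that carries its own length, so there is no index arithmetic to get wrong and no way to scribble on the data.

#include <array>
#include <cstdio>
#include <span>

void dump_samples(std::span<const int> samples)   // read-only view, knows its size
{
    for (int s : samples) {
        std::printf("%d\n", s);
    }
}

int main()
{
    std::array<int, 4> adc_readings{512, 513, 511, 514};
    dump_samples(adc_readings);                   // converts implicitly to the span
}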

And, in most products, there are no protections against <something>
scribbling <somewhere> that it shouldn't!  Without appropriate hardware
protections, how do you know that this hasn\'t happened?

What if you have a malevolent actor involved?  You wouldn't let "just any"
piece of code run on your PC/phone/process controller, would you?

And, there are also "unexpected" situations that come up.  E.g., what
do you do if someone connects the I/O's (cables) incorrectly?  Do you
allow the mechanism to destroy itself or cause harm?  Or, do you
*notice* that something is remiss and take steps to protect the
device, operator, data, etc.?

And, things also *break*.  What do you do if your accelerometer reports
a signal that represents 400g's?  Do you mindlessly believe it?  Or,
do you start thinking:  "Hey, that *may* actually be the signal
coming from the sensor.  Or, it may be a hardware fault.  But, in either
case, it doesn't make sense in the context in which I created this
application.  Let's panic().  Or, refuse to act on it and continue
our other functions."

[A "check engine" light doesn't always result in the vehicle shutting down!]

What do you do if you get a bad flash and start seeing write (or read!)
induced errors?  (Why do disk drives have ECC?)

As I say, a requirement for more complex IO (especially where there
could be some kind of malevolent actor) tends to be the point where I
figure some kind of OS or thread-manager is required, even a lightweight
one, so there\'s someplace to fail to.

Like if I wanted to build a device that had some hard realtime
requirement but was also supposed to take user input over HTTP I
wouldn't try to write my own secure API/input sanitizer, you know? Run
embedded Linux and leave that stuff to the professionals who write
libraries for it.

Or, if the memory access pattern necessitated by the way the *compiler*
organized the code causes similar disturb errors in RAM?

Or, if you've stumbled on a yet-to-be-reported bug in the "chip"?

Bounds checks, invariants, assertions, etc. act to reassure the
developer (and those that follow) that his notion of reality in
the code, at a given point, agrees with the machine\'s notion at
that same point.

Static assertions are great, I don't find much use for runtime checks in
e.g. single-threaded 8 bit applications, so I don't tend to miss that
they're not available in the first place much.
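
For what it's worth, a small example of the compile-time flavour: configuration mistakes caught before the firmware ever runs (the constants here are invented for illustration).

#include <cstddef>
#include <cstdint>

constexpr std::uint32_t kTickHz    = 1000;     // scheduler tick
constexpr std::size_t   kRxBufSize = 64;       // UART ring buffer

static_assert(kTickHz > 0 && 1000 % kTickHz == 0,
              "tick rate must divide 1 ms evenly");
static_assert((kRxBufSize & (kRxBufSize - 1)) == 0,
              "RX buffer size must be a power of two for the ring-buffer mask");
static_assert(sizeof(void *) == 4 || sizeof(void *) == 8,
              "unexpected pointer width for this target");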

Guess I'm fortunate that I haven't had to design anything
mission-critical enough that it has to compensate for flaws in the
compiler, or bugs/damage to the hardware itself.

I'm sure there's software that does checksums and comparisons of the
stack and every other bit of data it seems productive to monitor, on
every main loop iteration, just to be on the safe side.

In a desktop/mainframe application, you just abort the program and
rerun it.  If the problem persists, you call the vendor.

In an embedded device, it's likely *doing* something "productive".
So, how you detect and handle errors is important.  If your
only remedy is a "blue screen", then you've likely not thought
out the consequences of each error, thoroughly.

What to do when a sensor is broken and is returning unexpected values
seems like a design problem, but the context of "range checking" in the
original post was about array access.

I think a program that behaved _incorrectly_ when, e.g., an accelerometer
was broken might be a common bug, but who writes code such that a faulty
accelerometer can smash the stack?!
 
On 2/23/2023 2:11 PM, bitrex wrote:
On 2/23/2023 12:35 PM, Don Y wrote:
On 2/23/2023 10:12 AM, bitrex wrote:
I feel like once a "hard embedded" product has enough degrees of
freedom in its inputs to make range check failures a likely
occurrence, one isn't really designing a "hard embedded" thing anymore.

I'm curious what kind of designs these are where people have to do so
many range checks, like I\'ve rarely been concerned a 10 bit i2c ADC
is ever going to accidentally return a value of 1024 or something and
then need to accommodate that possibility in my code.

There are (at least) two different scenarios.

The first (by far largest) is catching latent mistakes that the developer
made in his assumptions and implementation.  We still have folks who
don't know how to guard against buffer overruns.  Or, access "released"
memory ("No, that's no longer on the stack!").  Or, fencepost errors,
etc.


Using modern C++ sure helps a lot with crap like fencepost errors.
Direct access to raw arrays is a super-privileged operation!
Need-to-know basis. Read-only consumers get iterators; they don't get to
muck with the actual data. And iterators are easy to do loops, etc. over;
they know their own size.

Let's keep dark-ages stuff like using raw pointers to structures

Using raw pointers to arrays, rather
 
On Wed, 22 Feb 2023 11:05:30 -0800, John Larkin
<jlarkin@highlandSNIPMEtechnology.com> wrote:

<https://en.wikipedia.org/wiki/Timeline_of_programming_languages>


Now I\'m told that we should be coding hard embedded products in C++ or
Rust.

New programming languages are invented every day, claiming to solve
the many problems of all prior languages. The vast majority of these
languages vanish without a trace within a few years, and some hang on
for years in academia, as they are really vehicles for trying out
various theories then under consideration. (Likewise, operating
systems.)

There are about 9,000 programming languages in existence.

<https://hopl.info/>

Mostly forgotten:

<https://www.tiobe.com/tiobe-index/>

Filtering the TIOBE list to remove languages unsuited to embedded
hardware uses, we end up with C and C++, and assembler.

Notable that C and C++ have been around, and have gone through many
revisions and updates, yielding mature standards with large
ecosystems.

Turned around, any language that is not decades old and now in wide
use is unlikely to endure. The first C version was released in 1972,
and the first C++ in 1983.

More generally, if your intent is to develop a product, and maintain
it through its product lifecycle, the issues are more practical than
theoretical.

The most important issue for development is the general availability
of the entire needed ecosystem, including toolchain (compilers,
debuggers, et al), operating-system interfaces, tracers, kernel
debuggers, et al, a community of interest, and customer support.

The next issue is availability of programmers for the chosen language
and ecosystem, covering not just the initial team but also to cover
the usual rates of employee turnover to maintain full staffing over
time. Which brings us to the next issue:

How widely is this toolchain supported? Is it just one company, so
there will be a forced redesign, recode in a different language, and
reimplementation of hardware when that company triples their prices
and ultimately fails, or decides to leave this business for greener
pastures, or whatever? Which happens all the time.

So, there must be multiple entities supporting the chosen ecosystem.
Historically, open-source ecosystems have fared better here.

If one does need to change ecosystems, how hard will it be? If the
programming language is widely supported, then while a lot of work,
adapting existing code to the new toolchain is practical, while
rewriting an entire codebase in a different language is usually
totally impractical - so that existing code is a dead loss.

For embedded realtime uses, the list of suitable languages and
ecosystems is relatively short. Assembly code is excluded, because
going from one processor type to another to another is basically a
full rewrite.

So we are basically left with C and C++ in their various dialects and
forms.

The basic difference is that C is smaller, simpler, and faster than
C++, and far more suited to direct control of hardware.

In the large radars of my experience, we use both. Stuff close to
hardware is in C (in turn controlling hardware using VHDL of some
kind), and the millions of lines of application code are in C++.

So, my vote would be plain ANSI C. Most C++ compilers can handle C as
well, and there are also C-specific compilers and toolchains.


Joe Gwinn
 
On Wednesday, February 22, 2023 at 6:00:09 PM UTC-5, John Walliker wrote:
On Wednesday, 22 February 2023 at 19:05:37 UTC, John Larkin wrote:
https://en.wikipedia.org/wiki/Timeline_of_programming_languages


Now I\'m told that we should be coding hard embedded products in C++ or
Rust.
Or maybe Ada, but definitely not C++.

John
In my world (safety critical sw systems, e.g. flight control, medical), Ada is still used - probably on life support tho. More usage in Europe than USA.
C is out and out dangerous in this environment even when standards such as MISRA-C are used.

What do you mean by 'coding hard embedded products'? Do you mean 'hard real-time embedded systems', e.g. where system timing and thread scheduling must be completely deterministic?
 
On 2/23/2023 12:11 PM, bitrex wrote:

The first (by far largest) is catching latent mistakes that the developer
made in his assumptions and implementation.  We still have folks who
don't know how to guard against buffer overruns.  Or, access "released"
memory ("No, that's no longer on the stack!").  Or, fencepost errors, etc.

Using modern C++ sure helps a lot with crap like fencepost errors. Direct
access to raw arrays is a super-privileged operation! Need-to-know basis.
Read-only consumers get iterators; they don't get to muck with the actual data.
And iterators are easy to do loops, etc. over; they know their own size.

Let's keep dark-ages stuff like using raw pointers to structures as function
arguments to a minimum, please.

Do you plan on passing the struct BY VALUE, instead?
BY REFERENCE offers huge performance advantages -- esp
if the argument is an in-out.

Pointers let you do things like pass the tail end of
an array (or a single element).

[I am *keenly* aware of the cost of passing by value
as passing references -- for most things -- in RPCs
isn't really practical. It really mucks up your sense of
what you can make available to another function!]
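
The trade-off being described, as a sketch (the struct and function names are invented): a large struct passed by value is copied on every call; by reference (const for input-only, non-const for in-out) it is not, and a pointer also lets you hand over a single element or the tail of an array.

#include <cstddef>

struct SampleBlock {
    int   channel;
    float data[256];        // large enough that copying it matters
};

// copies ~1 KB per call:
float mean_by_value(SampleBlock blk);

// no copy; the caller's object is read directly:
float mean_by_ref(const SampleBlock &blk);

// in-out: the callee updates the caller's block in place:
void remove_offset(SampleBlock &blk, float offset);

// pointer form: a single element, or the tail end of an array:
float mean_of_tail(const float *start, std::size_t count);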

Not entirely avoidable in bare-metal
programming, but it can at least largely be delegated to the internal logic of
containers and allocators.

But, esp in embedded projects, the form of the object is entirely
private and likely not going to change. So, you can exploit
your knowledge of its internals in your (private) code without
trying to stick to abstractions.

I built a grapheme-to-phoneme translator some time back.
Instead of a nice hierarchy of pointers into a set of
translation \"rules\", I opted to just treat the \"rule set\"
as a contiguous block of memory. The code *walked* through
it looking for matches.

But, the time required to do so was unimportant -- because
the follow-on speech synthesizer was likely still busy uttering
the last word! (speech is incredibly low bandwidth in the
information channel) So, if I could save *memory*, it was worth
the *time*.
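
Not the poster's actual code, just a sketch of that memory-for-time trade: rules packed end to end in one block ("pattern\0replacement\0" pairs) and walked linearly until a match is found, instead of an indexed structure of pointers.

#include <cstdio>
#include <cstring>

// grapheme -> phoneme rules, packed back to back; an empty pattern terminates
static const char rule_blob[] =
    "ph\0" "f\0"
    "th\0" "T\0"
    "a\0"  "@\0"
    "\0";

const char *lookup(const char *grapheme)
{
    const char *p = rule_blob;
    while (*p) {                                           // walk rule by rule
        const char *pattern     = p;
        const char *replacement = pattern + std::strlen(pattern) + 1;
        if (std::strcmp(pattern, grapheme) == 0)
            return replacement;
        p = replacement + std::strlen(replacement) + 1;    // skip to next rule
    }
    return nullptr;                                        // no rule matched
}

int main()
{
    std::printf("%s\n", lookup("th"));                     // prints "T"
}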

[A "check engine" light doesn't always result in the vehicle shutting down!]

What do you do if you get a bad flash and start seeing write (or read!)
induced errors?  (Why do disk drives have ECC?)

As I say, a requirement for more complex IO (especially where there could be
some kind of malevolent actor) tends to be the point where I figure some kind
of OS or thread-manager is required, even a lightweight one, so there's
someplace to fail to.

But the I/O doesn't (in itself) need to be particularly complex.

if (*limit_switches) {
    if (*limit_switches & LEFT_SWITCH) {
        motors.left.stop();
    } else {
        motors.right.stop();
    }
}

is inherently flawed. A more correct implementation would be:

sensed = *limit_switches;
if (sensed) {
    // at least one limit switch was detected as active
    if (sensed & LEFT_SWITCH) {
        motors.left.stop();
    }
    if (sensed & RIGHT_SWITCH) {
        motors.right.stop();
    }
}

In a miserly environment, a developer may have thought he could
save a tiny bit of RAM by eliminating the "sensed" variable.

[I see this more often in ASM-coded products where the
"*limit_switches" is just a memory reference (memory
mapped I/O) so why bother adding some OTHER "memory
reference" ("sensed")?]

OTOH, it's possible that *limit_switches has changed between
the assignment and the next statement -- and the code won't "see"
it, now (recall, the code may have been preempted -- at any point)

Like if I wanted to build a device that had some hard realtime requirement but
was also supposed to take user input over HTTP I wouldn't try to write my own
secure API/input sanitizer, you know? Run embedded Linux and leave that stuff
to the professionals who write libraries for it.

What if you can't afford a Linux runtime environment?
You can build a web-enabled device with very few resources...
as long as you restrict the type of pages you serve (or
interact with)

Or, if the memory access pattern necessitated by the way the *compiler*
organized the code causes similar disturb errors in RAM?

Or, if you've stumbled on a yet-to-be-reported bug in the "chip"?

Bounds checks, invariants, assertions, etc. act to reassure the
developer (and those that follow) that his notion of reality in
the code, at a given point, agrees with the machine\'s notion at
that same point.

Static assertions are great, I don't find much use for runtime checks in e.g.
single-threaded 8 bit applications, so I don't tend to miss that they're not
available in the first place much.

If you never make mistakes, you don't need any! :>

Guess I'm fortunate that I haven't had to design anything mission-critical
enough that it has to compensate for flaws in the compiler, or bugs/damage to
the hardware itself.

I'm sure there's software that does checksums and comparisons of the stack and
every other bit of data it seems productive to monitor, on every main loop
iteration, just to be on the safe side.

It's also common to detect counterfeited/corrupted software.

If you have a power switch and a "user", you can always (often?)
resort to the user getting frustrated enough to power cycle the
device to "fix stuff".

But, when your device has to run unattended and/or for prolonged
periods, you can't rely on that remedy.

In a desktop/mainframe application, you just abort the program and
rerun it.  If the problem persists, you call the vendor.

In an embedded device, it's likely *doing* something "productive".
So, how you detect and handle errors is important.  If your
only remedy is a "blue screen", then you've likely not thought
out the consequences of each error, thoroughly.

What to do when a sensor is broken and is returning unexpected values seems
like a design problem, but the context of "range checking" in the original post
was about array access.

It all boils down to ensuring you and reality are in sync.
If you *think* you are accessing an array element -- but
the index you are using is out of range -- then you
clearly have a different idea of the current reality than
the machine does!

I think a program that behaved _incorrectly_ when, e.g., an accelerometer was
broken might be a common bug, but who writes code such that a faulty
accelerometer can smash the stack?!

You don't know what it will do as you don't know how that code
interacts with the rest of the application.

Perhaps it builds a dynamic structure that has a number of
elements proportional to this "peak" force in which to store
some other observation (correlated with that force observation
that it expects to, perhaps, be monotonically decreasing).
As it's part of a mechanism with known characteristics,
the developer may have said, "Maximum force encountered will be
10g's so I'll assume no more than 100 elements in the struct
that I will need to allocate" (assuming he allocates one for
every 0.1g reading).
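
A hypothetical illustration of how that assumption turns into a memory bug (names and sizes are invented): a table sized for the assumed 10 g maximum, indexed by the reading. With no check, a faulty 400 g reading indexes far past the end; rejecting or clamping it keeps the developer's assumption and the machine in agreement.

#include <cstddef>

constexpr std::size_t kMaxForce_dg = 100;        // 10.0 g in 0.1 g steps -- the design assumption
static unsigned observation_count[kMaxForce_dg + 1];

void record_observation(std::size_t force_dg)    // reading in 0.1 g units
{
    if (force_dg > kMaxForce_dg) {
        // outside the design envelope: clamp (or flag a fault),
        // but never index the table with it
        force_dg = kMaxForce_dg;
    }
    ++observation_count[force_dg];
}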

I have a wrist-mounted accelerometer that I use to "watch"
the user "gesture" with his hands/arms. If I saw such a
high force, I'd know the user had just whacked his arm into
something... it wouldn\'t make sense as part of a gesture
because it would be an *uncontrolled* action.
 
On Wed, 22 Feb 2023 21:47:58 -0500, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

On 2023-02-22 19:06, Don Y wrote:
On 2/22/2023 4:00 PM, John Walliker wrote:
Or maybe Ada, but definitely not C++.

Ada will sorely limit the talent you can draw.  Who cares
about the language's features if you can never get a
finished product from it!

[Ditto the many "fad" languages that may be "gone"
before the product's completely implemented!]

I use a C++ like syntax for code written entirely in C
(some syntax enhancements to increase comprehension
that are strictly NOT portable!) as it helps others
relate to what the code is trying to express.

ANYONE can learn practically ANY programming language.
And, in most projects, you'll likely have to know "a few"
to completely implement the product (unless it's a
relatively limited application) because different languages
express different concepts better or worse than others.

The important thing is getting a broad-based understanding
in algorithms and their consequences.  And, realizing that,
for most problems, there are cleverer ways of implementing
them than you've considered or will likely consider!  Not
because of "language tricks" but, rather, because of the
inherent nature of the problem that you'll often miss. If
you've not thought along those lines, you're likely
Just Another Coder (JAC?  should I coin the term?  :> )

Having a good understanding of the architecture of the (likely)
hosting processor also separates the men from the boys.

Knuth's tAoCP (et al.) don't use any "modern" language that sees
use outside of his texts.

So you program everything in TeX and MIX assembler? ;)

(Don L used to code a whole lot of stuff directly in Postscript, iirc.)

Cheers

Phil Hobbs

My people document FPGA registers in HTML. I think that's weird.
 
On Thu, 23 Feb 2023 11:42:27 -0800 (PST), three_jeeps
<jjhudak@gmail.com> wrote:

On Wednesday, February 22, 2023 at 6:00:09 PM UTC-5, John Walliker wrote:
On Wednesday, 22 February 2023 at 19:05:37 UTC, John Larkin wrote:
https://en.wikipedia.org/wiki/Timeline_of_programming_languages


Now I\'m told that we should be coding hard embedded products in C++ or
Rust.
Or maybe Ada, but definitely not C++.

John
In my world (safety critical sw systems, e.g. flight control, medical), Ada is still used - probably on life support tho. More usage in Europe than USA.
C is out and out dangerous in this environment even when standards such as MISRA-C are used.

What do you mean by 'coding hard embedded products'? Do you mean 'hard real-time embedded systems', e.g. where system timing and thread scheduling must be completely deterministic?

I mean an electronic instrument or controller that has a uP inside. It
will typically have an i/o process, hard real-time, and a user
interface and communications side that is a lot softer.

Things like this:

http://www.highlandtechnology.com/categories/measurement_simulation.shtml

A few run Linux, but a typical small box doesn't have an OS or threads
as such. The ones we do lately have a dual-core ARM, one for the
process i/o and control loops, and one for the softer side, the user
interface and communications stuff. Really intense stuff gets done in
the FPGA. Usually no OS at all, just some state machines and maybe one
periodic IRQ thing.

Ultimately we could have a CPU per process and not task switch or even
interrupt.
 
On 23/02/23 15:14, Don Y wrote:
On 2/22/2023 9:00 PM, Clifford Heath wrote:
On 23/02/23 14:02, bitrex wrote:
On 2/22/2023 2:05 PM, John Larkin wrote:
https://en.wikipedia.org/wiki/Timeline_of_programming_languages

Now I\'m told that we should be coding hard embedded products in C++ or
Rust.

User-defined strong types that enforce their own usage are probably
worth the price of admission alone; e.g. quantities in newton/meters
should be of type NewtonMeters and foot/pounds should be FootPounds,
and casually performing operations with the two causes a compile error.

On the contrary, the language should automatically provide the
appropriate conversions.

I disagree, esp for anything beyond simple types.  (i.e., promote a
char to an int and hope the developer truly understands how signedness
is handled, etc.)

Pascal?

Requiring an explicit cast reassures me (the NEXT guy looking at
the code) that the developer actually intended to do what
he's doing in the way he's doing it -- even if it "makes sense".
This is what I liked most about C++ (overloading operators
so the *syntax* was clearer by hiding the machinery -- but not
eliminating it!)

How many times do you see an int being used as a pointer?
Is it *really* intended to be a pointer in THIS context?
I get tired of having to chase down compiler warnings
of this sort of thing in inherited code:  "If you WANT
it to be a pointer, then explicitly cast it as such!
Don't just count on the compiler to *use* it as one!"

Why on earth are you answering my statement about *units conversion*
with a rant about integer representations and pointer casts.

Don, sometimes you\'re a master of irrelevance.

CH
 
On 2023/02/22 11:05 a.m., John Larkin wrote:
https://en.wikipedia.org/wiki/Timeline_of_programming_languages


Now I\'m told that we should be coding hard embedded products in C++ or
Rust.

If you know COBOL then the US IRS department may have work for you.
Apparently that is the language for their tax system...

Is 2036 going to be a problem? They want to phase COBOL out by 2030.

John :-#)#
--
(Please post followups or tech inquiries to the USENET newsgroup)
John's Jukes Ltd.
#7 - 3979 Marine Way, Burnaby, BC, Canada V5J 5E3
(604)872-5757 (Pinballs, Jukes, Video Games)
www.flippers.com
"Old pinballers never die, they just flip out."
 
On 2/23/2023 3:08 PM, Clifford Heath wrote:
On 23/02/23 15:14, Don Y wrote:
On 2/22/2023 9:00 PM, Clifford Heath wrote:
On 23/02/23 14:02, bitrex wrote:
On 2/22/2023 2:05 PM, John Larkin wrote:
https://en.wikipedia.org/wiki/Timeline_of_programming_languages

Now I\'m told that we should be coding hard embedded products in C++ or
Rust.

User-defined strong types that enforce their own usage are probably worth
the price of admission alone; e.g. quantities in newton/meters should be of
type NewtonMeters and foot/pounds should be FootPounds, and casually
performing operations with the two causes a compile error.

On the contrary, the language should automatically provide the appropriate
conversions.

I disagree, esp for anything beyond simple types.  (i.e., promote a
char to an int and hope the developer truly understands how signedness
is handled, etc.)

Pascal?

Requiring an explicit cast reassures me (the NEXT guy looking at
the code) that the developer actually intended to do what
he's doing in the way he's doing it -- even if it "makes sense".
This is what I liked most about C++ (overloading operators
so the *syntax* was clearer by hiding the machinery -- but not
eliminating it!)

How many times do you see an int being used as a pointer?
Is it *really* intended to be a pointer in THIS context?
I get tired of having to chase down compiler warnings
of this sort of thing in inherited code:  "If you WANT
it to be a pointer, then explicitly cast it as such!
Don't just count on the compiler to *use* it as one!"

Why on earth are you answering my statement about *units conversion* with a
rant about integer representations and pointer casts.

Because it's not *units* that bitrex was addressing,
rather, *types*. Did you miss:

------------------------vvvvv
"User-defined strong types that enforce their own usage
are probably worth the price of admission alone;"

He could, perhaps, have come up with a different example.

Don, sometimes you\'re a master of irrelevance.

CH
 
On 2/23/2023 3:10 PM, John Robertson wrote:
If you know COBOL then the US IRS department may have work for you. Apparently
that is the language for their tax system...

Is 2036 going to be a problem? They want to phase COBOL out by 2030.

2038 is the next Y2K...
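
(Why 2038: a signed 32-bit time_t tops out at 2^31 - 1 seconds past 1970-01-01, i.e. 2038-01-19 03:14:07 UTC. A throwaway check, added here for illustration:)

#include <cstdint>
#include <cstdio>
#include <ctime>

int main()
{
    std::time_t last = static_cast<std::time_t>(INT32_MAX);  // 2147483647 seconds
    std::printf("%s", std::asctime(std::gmtime(&last)));     // Tue Jan 19 03:14:07 2038
}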
 
On 2023/02/23 2:30 p.m., Don Y wrote:
On 2/23/2023 3:10 PM, John Robertson wrote:
If you know COBOL then the US IRS department may have work for you.
Apparently that is the language for their tax system...

Is 2036 going to be a problem? They want to phase COBOL out by 2030.

2038 is the next Y2K...

what is two years between friends...(thanks!)

John :-#)#

--
(Please post followups or tech inquiries to the USENET newsgroup)
John's Jukes Ltd.
#7 - 3979 Marine Way, Burnaby, BC, Canada V5J 5E3
(604)872-5757 (Pinballs, Jukes, Video Games)
www.flippers.com
"Old pinballers never die, they just flip out."
 
On 2/23/2023 5:28 PM, Don Y wrote:
On 2/23/2023 3:08 PM, Clifford Heath wrote:
On 23/02/23 15:14, Don Y wrote:
On 2/22/2023 9:00 PM, Clifford Heath wrote:
On 23/02/23 14:02, bitrex wrote:
On 2/22/2023 2:05 PM, John Larkin wrote:
https://en.wikipedia.org/wiki/Timeline_of_programming_languages

Now I\'m told that we should be coding hard embedded products in
C++ or
Rust.

User-defined strong types that enforce their own usage are probably
worth the price of admission alone; e.g. quantities in
newton/meters should be of type NewtonMeters and foot/pounds should
be FootPounds, and casually performing operations with the two
causes a compile error.

On the contrary, the language should automatically provide the
appropriate conversions.

I disagree, esp for anything beyond simple types.  (i.e., promote a
char to an int and hope the developer truly understands how signedness
is handled, etc.)

Pascal?

Requiring an explicit cast reassures me (the NEXT guy looking at
the code) that the developer actually intended to do what
he's doing in the way he's doing it -- even if it "makes sense".
This is what I liked most about C++ (overloading operators
so the *syntax* was clearer by hiding the machinery -- but not
eliminating it!)

How many times do you see an int being used as a pointer?
Is it *really* intended to be a pointer in THIS context?
I get tired of having to chase down compiler warnings
of this sort of thing in inherited code:  "If you WANT
it to be a pointer, then explicitly cast it as such!
Don't just count on the compiler to *use* it as one!"

Why on earth are you answering my statement about *units conversion*
with a rant about integer representations and pointer casts.

Because it's not *units* that bitrex was addressing,
rather, *types*.  Did you miss:

------------------------vvvvv
   "User-defined strong types that enforce their own usage
   are probably worth the price of admission alone;"

He could, perhaps, have come up with a different example.

Sure, there are all sorts of good reasons to use strong types beyond
just enforcing unit conversions, it also makes for self-documenting
code. A contrived example is:

class Rectangle
{
public:
    Rectangle(float width, float height);
    // ...
};

But then at the call site one coder writes:

auto r = Rectangle(4, 5);

Someone else who looks at that will have to go back to the .h file to see
what order the parameters are. You could instead have

class Rectangle
{
public:
    Rectangle(Width width, Height height);
    // ...
};

and then at the call site is written:

auto rectangle = Rectangle{Width{4}, Height{5}};

So it's clearer what's going on.
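
One way the Width and Height types used above might be defined (a minimal sketch, not a full units library): a thin wrapper per quantity with an explicit constructor, so the two can't be swapped silently at the call site.

struct Width  { float value; explicit Width(float v)  : value(v) {} };
struct Height { float value; explicit Height(float v) : value(v) {} };

class Rectangle
{
public:
    Rectangle(Width w, Height h) : width_(w.value), height_(h.value) {}
    float area() const { return width_ * height_; }
private:
    float width_;
    float height_;
};

// auto r   = Rectangle{Width{4}, Height{5}};   // reads unambiguously
// auto bad = Rectangle{Height{5}, Width{4}};   // does not compile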
 
On 24-Feb-23 2:10 am, Ricky wrote:
On Thursday, February 23, 2023 at 6:00:51 AM UTC-5, Sylvia Else wrote:
On 23-Feb-23 8:22 pm, Ricky wrote:
On Thursday, February 23, 2023 at 3:51:26 AM UTC-5, Sylvia Else wrote:
On 23-Feb-23 4:01 pm, Ricky wrote:
On Wednesday, February 22, 2023 at 10:15:36 PM UTC-5, Sylvia Else wrote:
On 23-Feb-23 6:05 am, John Larkin wrote:
https://en.wikipedia.org/wiki/Timeline_of_programming_languages


Now I\'m told that we should be coding hard embedded products in C++ or
Rust.

But can you afford the memory and time overheads inherent in run-time
range checks of things like array accesses?

You mean as opposed to programs randomly failing in the field?

It's not as if the checks make a program work properly, they only make
the failure mode clearer. If an improper access occurs frequently, then
this would likely show up during development/testing. If it occurs
rarely, then you'll still see what look like random failures in the field.

If you have checks in place, you will know something about what failed and where to look in the code.


Whether it's worth the extra cost of hardware will depend on how many
incarnations there are going to be.

Extra hardware cost???

Faster processor, more memory.

Sylvia.

0.01% faster... 10 bytes more memory. WTF???

If every non-constant array index is bounds checked, and every pointer
access is implemented via code that checks the pointer for validity
first, then it will be neither 0.01% nor 10 bytes more.

Compilers may reduce this by proving that certain accesses are always
valid, but I believe the overhead will still be significant.

Sylvia.
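
What the per-access cost looks like at the source level (a small C++ illustration, not a measurement): at() checks the index and throws on failure, operator[] does not; how significant the difference is depends on how many of those checks the optimizer can prove away or hoist out of loops.

#include <array>
#include <cstdio>
#include <stdexcept>

int main()
{
    std::array<int, 8> buf{};

    buf[3]    = 1;        // unchecked: a plain store
    buf.at(3) = 2;        // checked: compare-and-branch before the store

    try {
        buf.at(8) = 3;    // out of range: caught at run time
    } catch (const std::out_of_range &) {
        std::puts("caught out-of-range access");
    }
}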
 
