dead programming languages...

On 2/23/2023 9:29 PM, Sylvia Else wrote:
On 24-Feb-23 2:00 pm, Don Y wrote:


That was my first (actually, second) experience writing code.
On Hollerith cards, of course.  Amusing to think your proficiency in
operating the punch was as much a factor in your "productivity"
as was your programming skills!

Using a punch? Sheer luxury. We were using coding sheets that managed to be
garbled by punch operators.

Ah. I always found it annoying as the keypunch machines were always mounted
in such a way that you had to use them *standing up*. I think the theory
was that it would deter folks from monopolizing a scarce resource.

A savvy user would set up a program card to make the tedium a little more
manageable!

Fortunately, this was before the time when most people could type, and the few
machines available to students were not much used, other than by me.

Also, the place I was working during the holidays was a time-sharing service
(remember those?), so I did most of the COBOL work there.

Yes, we had to submit decks at an "input window". Then, wander around to
pick up our output whenever the job managed to get run. Really annoying
to discover (an hour later) that the job abended because of a bad JCL card!

Moving to a teletypewriter (Trendata 1200s) was a huge step up in
efficiency. Then, DECwriters. The only glass TTY that I used in
school was the Imlac (PDS-1). So, it was painfully difficult to be
productive (and you ended up carting reams of 132 column paper around
with you!)

By contrast, at work, we had glass TTYs -- VT100s on the '11 and
<something> on the development systems (an MDS800 and some other
one -- maybe from Tek?). The i4004 development was done on the
11 (and with "pocket assemblers" -- index cards with opcode maps
that you carried in your pocket/wallet). To think that we've gone
from instruction times of 10 *microseconds* to *nanoseconds*!
 
On 24-Feb-23 3:53 pm, Don Y wrote:

Fortunately, this was before the time when most people could type, and
the few machines available to students were not much used, other than
by me.

Also, the place I was working during the holidays was a time-sharing
service (remember those?), so I did most of the COBOL work there.

Yes, we had to submit decks at an "input window".  Then, wander around to
pick up our output whenever the job managed to get run.  Really annoying
to discover (an hour later) that the job abended because of a bad JCL card!

The time-sharing service I worked at provided dial-up access, at 110
baud. Even in-house, we were mostly limited to 110 baud teletypes, which
were in a separate room because they were so noisy.

Hard to imagine now.

Sylvia.
 
On 2/23/2023 10:09 PM, Sylvia Else wrote:
Yes, we had to submit decks at an "input window".  Then, wander around to
pick up our output whenever the job managed to get run.  Really annoying
to discover (an hour later) that the job abended because of a bad JCL card!

The time-sharing service I worked at provided dial-up access, at 110 baud. Even
in-house, we were mostly limited to 110 baud teletypes, which were in a
separate room because they were so noisy.

Yes. My first experience with a computer was with an ASR-33
and 110/300 baud acoustical coupler. You "saved" your files to
punched paper tape -- else you'd have to type them in, again,
tomorrow!

You learn how to be as efficient as *practical* with the
tools you are given. E.g., when I shared a development
system with two other engineers, I had to spend a lot of time
reviewing *listings* so I knew what I wanted to change when
I had an opportunity to access my files (my *floppy*).

And, how to organize your code so you didn't have to burn a
complete set of EPROMs if your changes could, instead, be
localized to a single device (i.e., constrain the linkage editor).
Finally, how to get the hardware to tell you where the software
was executing without the benefit of a logic analyzer or ICE.

Still, you were lucky if you could turn the crank *twice* in
an 8-hour day!

> Hard to imagine now.

I have an ASR-33, here (but no acoustical coupler). In use,
the sound is unmistakably familiar -- like a blast from the past.
The sort of familiarity that an old electro-mechanical pinball
machine elicits.
 
On a sunny day (Thu, 23 Feb 2023 09:18:02 +0000) it happened Martin Brown
<'''newspam'''@nonad.co.uk> wrote in <tt7b0a$1r163$1@dont-email.me>:

I recall someone once ordered a gross of grosses due to their odd
misunderstanding of the ordering system and their mistake only became
apparent when a 40T trailer arrived instead of the usual van.

That happened in the Philips service center here a long time ago.
They needed 2 of some spare part, and the new guy entered 2 in the then-new terminal.
A few days later a big truck appeared, full of those parts.
You needed to type 000002 in that terminal..
 
On a sunny day (Thu, 23 Feb 2023 08:54:20 -0800) it happened John Larkin
<jlarkin@highlandSNIPMEtechnology.com> wrote in
<266fvhl8gae2sdj0ecp7n511phphmkg47i@4ax.com>:

On Thu, 23 Feb 2023 06:34:25 GMT, Jan Panteltje <alien@comet.invalid>
wrote:

On a sunny day (Wed, 22 Feb 2023 11:05:30 -0800) it happened John Larkin
<jlarkin@highlandSNIPMEtechnology.com> wrote in
<3opcvh111k7igirlsm6anc8eekalofvtcj@4ax.com>:

https://en.wikipedia.org/wiki/Timeline_of_programming_languages


Now I'm told that we should be coding hard embedded products in C++ or
Rust.

Cplushplush is a crime against humanity
C will do better
But asm is the thing, it will always be there
and gives you full control.
It is not that hard to write an integer math library in asm..

I did that for the 68K. The format was signed 32.32. That worked great
for control systems. Macros made it look like native instructions.

But asm is clumsy for risc CPUs. Plain bare-metal c makes sense for
small instruments.

True, I have no experience with asm on my Raspberries for example.
But lots of C, gcc is a nice compiler.
Most (all?) things I wrote for x86 in C also compile and run on the Raspberries.
Some minor make file changes were required for the latest gcc version..
 
On a sunny day (Thu, 23 Feb 2023 07:12:17 -0800 (PST)) it happened Ricky
<gnuarm.deletethisbit@gmail.com> wrote in
<db92411e-80c3-4bfc-ba26-670e877a8cbdn@googlegroups.com>:

>Anytime someone talks about "bloat" in software, I realize they don't program.

You talk crap, show us some code you wrote.


It's like electric cars. The only people who complain about them are the people
who don\'t drive them.

Right and the grid is full here.
People with 'lectric cars in north-west US now without power because of the ice cold weather
ARE STUCK.

Do you get anything for your repeated sig?
 
On a sunny day (Thu, 23 Feb 2023 11:00:00) it happened
Wanderer<dont@emailme.com> wrote in <961117@dontemail.com>:

For embedded programming? What choice do you have? Unless you're planning to write your own compiler, you use the available
compilers for the IC and you learn the embedded IC\'s dialect for that language.


https://www.microchip.com/en-us/tools-resources/develop/mplab-xc-compilers

I program my PICs in asm, using gpasm in Linux
Wrote my own PIC programmer too.

How long did it take? few hours?
 
On a sunny day (Thu, 23 Feb 2023 08:08:15 -0800) it happened John Larkin
<jlarkin@highlandSNIPMEtechnology.com> wrote in
<rl3fvh100tsdcd3drmm0shqt82rtbjj4ou@4ax.com>:

On Thu, 23 Feb 2023 09:18:02 +0000, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 23/02/2023 03:02, bitrex wrote:
On 2/22/2023 2:05 PM, John Larkin wrote:
https://en.wikipedia.org/wiki/Timeline_of_programming_languages


Now I'm told that we should be coding hard embedded products in C++ or
Rust.

Maybe C++ provided that you only use a restricted subset of the language
much like the Spark dialect of High Integrity Ada. Modula2 came close to
being ideal (particularly back in its brief heyday) but never took off.
Close enough to bare metal to do device drivers but with strong typing.

The one that used to drive me crazy was hardware engineers who mapped
DACs and ADCs onto the least significant bits of a register - requiring
significant code changes when a new DAC/ADC with more bits came along.

That's the way some chips come. I'd expect that some code changes will
be necessary when a new ADC or DAC is installed.

One ADC that we use has a bit in a register that sets whether the SPI
interface clocks on the rising or falling edge.

Where it will go is that the 'high level' language becomes sort of standard
So any person can say:
AI build me a house with ... and ....
And AI will generate the house, the 3D printer will print it..
Programs? Who needs Programmers? :)
AI write a faster mediaplayer for me.
'here it is, click icon to start, or should I click it for you?'
well...
I really should try 3D (or is it 4D nowadays??) printing some day...

AI build me a fast electric? no, a fusion powered car
'..ready, access password is hoopy'
 
On a sunny day (Thu, 23 Feb 2023 08:43:27 -0800) it happened John Larkin
<jlarkin@highlandSNIPMEtechnology.com> wrote in
<kc5fvht67v712ru25erfme0a9qu36gb4nr@4ax.com>:

>We don't need no stinkin' OS!

And not even a file system.
When I wrote the logging code for the position of my drone (to SD card)
I simply wrote one 512-byte data record per sector.
GPS time, position, speed and altitude all fitted in that, 1 block per second or so.
Same with my GPS based radiation logger.
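A hypothetical layout for that one-record-per-sector scheme (the field names and scalings below are guesses, not Jan's actual format; the point is the fixed 512-byte sector, so no file system is needed):

```c
#include <stdint.h>

/* One log record == one SD-card sector. Fields are illustrative only. */
typedef struct {
    uint32_t gps_time;             /* e.g. GPS seconds of week */
    int32_t  lat_1e7;              /* latitude,  degrees * 1e7 */
    int32_t  lon_1e7;              /* longitude, degrees * 1e7 */
    int32_t  alt_cm;               /* altitude in centimetres */
    int32_t  speed_cms;            /* speed in cm/s */
    uint8_t  pad[512 - 5 * 4];     /* pad out to exactly one 512-byte sector */
} log_record_t;

/* Compile-time guarantee that one record fills exactly one sector. */
_Static_assert(sizeof(log_record_t) == 512, "one record per sector");
```

Writing record N to raw sector N (no FAT, no directory) means a crash can never corrupt a filesystem structure, and recovery is just reading sectors until an empty one appears.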

BTW panteltje.com no longer exists (else I would show a link to the code here)
but I just bought panteltje.nl and panteltje.online
Both now just show a test page,
Will need some time rewriting links before putting the site back up.
 
On a sunny day (Thu, 23 Feb 2023 10:59:17 -0800 (PST)) it happened Simon S
Aysdie <gwhite@ti.com> wrote in
<6f0dca8e-6a72-4107-9823-ee500c0a055dn@googlegroups.com>:

On Wednesday, February 22, 2023 at 6:48:04=E2=80=AFPM UTC-8, Phil Hobbs wrote:

On 2023-02-22 19:06, Don Y wrote:
...
Knuth's tAoCP (et al.) don't use any "modern" language that sees
use outside of his texts.
So you program everything in TeX and MIX assembler? ;)

(Don L used to code a whole lot of stuff directly in Postscript, iirc.)

He
certainly did. He was a huge proponent. He used to post here occasionally,
as I am sure you recall.

I use Tim Edward\'s XCircuit to do drawings for documents, including schematics.
It is 100% postscript; I never use the spice aspect of it. Once the documents
are finally outputted to pdf, text search works on the drawings too,
not just the body text. I mean, I can search for "C66" and it will find it
in the body *and* in the schematic drawing. Plus, the drawings are all vector
graphics, where zooming never causes pixelation.

ok. Enough of my OT words.

Ha, I used xcircuit in the past
-rwxr-xr-x 1 root root 1342667 Feb 4 2007 /root/compile/xcircuit/xcircuit-3.1.4/xcircuit
Been a while...
 
On 2023/02/23 10:06 p.m., Don Y wrote:
On 2/23/2023 10:09 PM, Sylvia Else wrote:
Yes, we had to submit decks at an "input window".  Then, wander
...

I have an ASR-33, here (but no acoustical coupler).  In use,
the sound is unmistakably familiar -- like a blast from the past.
The sort of familiarity that an old electro-mechanical pinball
machine elicits.

Poketa, poketa, poketa...

Nothing like the sound of an EM machine running happily!

John ;-#)#
--
(Please post followups or tech inquiries to the USENET newsgroup)
John's Jukes Ltd.
#7 - 3979 Marine Way, Burnaby, BC, Canada V5J 5E3
(604)872-5757 (Pinballs, Jukes, Video Games)
www.flippers.com
\"Old pinballers never die, they just flip out.\"


 
On 24/02/2023 06:07, Jan Panteltje wrote:
On a sunny day (Thu, 23 Feb 2023 09:18:02 +0000) it happened Martin Brown
<'''newspam'''@nonad.co.uk> wrote in <tt7b0a$1r163$1@dont-email.me>:

I recall someone once ordered a gross of grosses due to their odd
misunderstanding of the ordering system and their mistake only became
apparent when a 40T trailer arrived instead of the usual van.

That happened in the Philips service center here a long time ago.
They needed 2 of some spare part, and the new guy entered 2 in the then-new terminal.
A few days later a big truck appeared, full of those parts.
You needed to type 000002 in that terminal..

We had that with our village hall electricity bill.

Meter had 5 digits but central computer believed it had six - meter
reading muppet zero padded the reading at the wrong end and we got a
quarterly bill that was 9x our total usage to date (about £200k).

Amazingly the electricity company wanted this paid ASAP and got a court
order to cut off supply on the basis that it hadn't been paid and we
were in dispute with them. Fortunately I spotted the guys arriving,
unlocked for them, showed them the meter and explained the nature of the
dispute. They saw sense and rang up their HQ to report the cock-up.

We changed electricity suppliers shortly afterwards.

--
Martin Brown
 
On Thu, 23 Feb 2023 10:35:49 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:

And, there are also "unexpected" situations that come up. E.g., what
do you do if someone connects the I/Os (cables) incorrectly? Do you
allow the mechanism to destroy itself or cause harm? Or, do you
*notice* that something is remiss and take steps to protect the
device, operator, data, etc.?

And, things also *break*. What do you do if your accelerometer reports
a signal that represents 400g's? Do you mindlessly believe it? Or,
do you start thinking: "Hey, that *may* actually be the signal
coming from the sensor. Or, it may be a hardware fault. But, in either
case, it doesn't make sense in the context in which I created this
application. Let's panic(). Or, refuse to act on it and continue
our other functions."

In industrial control systems in addition to the actual measured value
(e.g. from an ADC) you also have a separate data quality variable and
often also a time stamp (sometimes with a time quality field,
resolution, synched etc.).

In the 8/16/32 bit data quality variable, you can report e.g.
overflow, underflow, frozen (old) values, faulty values (e.g. ADC
reference missing or input cable grounded), manually forced values, etc.
The actual value, data quality and time stamp are handled as a unit
through the application.

If some internal calculation causes overflow, the overflow bit can be
set in the data quality variable, or some special value substituted and a
derived-value data quality bit set.

When the result is to be used for control, the data quality bits can
be analyzed to determine whether the value can be used for control or
some other action must be taken. Having a data quality word with every
variable makes it possible to have special handling of some faulty
signals without shutting down the whole system after the first error in
some input or calculation. Such systems can run for years without
restarts.

IEEE floats offer some rudimentary possibilities to signal
overflows (+/-infinity) or Not a Number (NaN), but a separate
data quality variable with every signal allows a much wider selection
of special cases to be monitored.
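A minimal sketch of such a value/quality/timestamp bundle in C. The flag names and field widths here are illustrative, not from any particular standard; real systems (IEC 61850 etc.) define their own encodings:

```c
#include <stdint.h>

/* Quality bits -- names are made up for illustration. */
#define DQ_OVERFLOW   (1u << 0)
#define DQ_UNDERFLOW  (1u << 1)
#define DQ_STALE      (1u << 2)   /* frozen/old value */
#define DQ_FAULT      (1u << 3)   /* e.g. ADC reference missing, input grounded */
#define DQ_FORCED     (1u << 4)   /* manually substituted value */
#define DQ_DERIVED    (1u << 5)   /* computed from other signals */

/* Value, quality and time stamp travel together as one unit. */
typedef struct {
    float    value;
    uint32_t quality;     /* bitmask of DQ_* flags */
    uint64_t timestamp;   /* e.g. microseconds since some epoch */
} signal_t;

/* A derived calculation ORs together the operands' quality bits,
   so a fault anywhere upstream is visible in the result. */
static signal_t dq_add(signal_t a, signal_t b) {
    signal_t r;
    r.value     = a.value + b.value;
    r.quality   = a.quality | b.quality | DQ_DERIVED;
    r.timestamp = (a.timestamp > b.timestamp) ? a.timestamp : b.timestamp;
    return r;
}

/* Control logic can then refuse to act on bad data instead of crashing. */
static int dq_usable_for_control(signal_t s) {
    return (s.quality & (DQ_FAULT | DQ_STALE)) == 0;
}
```

The point is that quality propagates automatically through the calculation chain, so a single bad input degrades one control loop rather than shutting down the whole system.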
 
On 23/02/2023 16:43, Don Y wrote:
On 2/23/2023 11:00 AM, Wanderer wrote:
For embedded programming? What choice do you have? Unless you're
planning to
write your own compiler, you use the available compilers for the IC
and you
learn the embedded IC\'s dialect for that language.

Some languages are interpreted; one can port the interpreter to
a new architecture relatively easily.

Back in the day BCPL was an example of a compiled language that was
fairly trivial to bootstrap quickly onto a novel processor.

FORTH was another that was quite good on embedded stuff but it tended to
be very much a write only language and maintenance was a nightmare. It
was an instant hit in some observatories for telescope control.

Even compiled and JIT'ed languages tend to support most *popular*
processors (and the number of processor variants seems to be
DEcreasing, over time).  There are other costs associated with
"fringe" processors!

Modern languages seem to have pretty good back end cross compilers.

IME, you want to avoid (or wrap in some abstraction) any processor/vendor
specific hooks esp if you may want to reuse the code on some other
platform.  This, of course, is the biggest argument against ASM
(I have the \"same\" code running on SPARC, x86 and ARM; had much of
it been written in ASM, that would have been a herculean task!)

I have all the very processor specific bits in a module with the name of
the processor included. Flags in the main program include file adjust
what is hidden from compilers that won't understand it.

Intel no longer defining __INTEL_COMPILER in its latest offering caught
me out. It is now __INTEL_LLVM_COMPILER :( How daft can you get!

BTW does anyone know of a #define in the MS C++ compiler that is set
when the advanced vector linkage mode is enabled? I haven't found one!

--
Martin Brown
 
On 2/24/2023 1:23 AM, John Robertson wrote:
Poketa, poketa, poketa...

Nothing like the sound of an EM machine running happily!

Yeah, but the maintenance is a killer! Burnishing and regapping all
those frigging contacts! Particularly if you've got someone in the
family "addicted" to it :< Or, if you get obsessive about keeping
all the rubber clean, targets working properly, replacing burned bulbs,
etc.

(I only have one, here, and dismantled it "temporarily" many years ago.
SWMBO has given up asking me to set it back up realizing that it really
eats up a lot of space for the entertainment it provides! :> )

By comparison, videos are so much less headache -- but almost as big!
 
On Thu, 23 Feb 2023 14:10:28 -0800, John Robertson <jrr@flippers.com>
wrote:

On 2023/02/22 11:05 a.m., John Larkin wrote:
https://en.wikipedia.org/wiki/Timeline_of_programming_languages


Now I'm told that we should be coding hard embedded products in C++ or
Rust.


If you know COBOL then the US IRS department may have work for you.
Apparently that is the language for their tax system...

Is 2036 going to be a problem? They want to phase COBOL out by 2030.

Depending on how the Y2k fix was done.

If 2 digit fields are retained, it might work for 1950 to 2049 (or
1980 to 2079). With a 4 digit year, some systems will fail after Feb 28
2100, since that year is not a leap year.

Year 2038 is an issue on Unix/Linux/C/C++ based systems using a seconds
count since Jan 1 1970.
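Both failure modes are easy to sketch in C. The pivot value 50 below is one common Y2K "windowing" choice (giving the 1950-2049 range mentioned above), not from any particular system:

```c
#include <stdint.h>

/* Y2K windowing fix: a retained two-digit year is pivoted into 1950-2049. */
static int expand_year(int yy) {
    return (yy >= 50) ? 1900 + yy : 2000 + yy;
}

/* Gregorian leap-year rule: 2000 is a leap year, 2100 is not -- which is
   why a naive "divisible by 4" shortcut breaks after Feb 28, 2100. */
static int is_leap(int y) {
    return (y % 4 == 0 && y % 100 != 0) || (y % 400 == 0);
}

/* The 2038 problem: a signed 32-bit seconds count from Jan 1 1970 hits
   INT32_MAX at 2038-01-19 03:14:07 UTC. One second later it wraps
   negative (on the two's-complement targets everything actually uses),
   landing back in December 1901. */
static int32_t wrap_2038(void) {
    int64_t next = (int64_t)INT32_MAX + 1;
    return (int32_t)next;
}
```

Systems that moved to a 64-bit time_t have pushed the wraparound out by about 292 billion years, which should suffice.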
 
On 2/24/2023 3:27 AM, Martin Brown wrote:
Even compiled and JIT'ed languages tend to support most *popular*
processors (and the number of processor variants seems to be
DEcreasing, over time).  There are other costs associated with
"fringe" processors!

Modern languages seem to have pretty good back end cross compilers.

Assuming the target is reasonably "popular". And, the language isn't in its
infancy (with a small "following").

IME, you want to avoid (or wrap in some abstraction) any processor/vendor
specific hooks esp if you may want to reuse the code on some other
platform.  This, of course, is the biggest argument against ASM
(I have the \"same\" code running on SPARC, x86 and ARM; had much of
it been written in ASM, that would have been a herculean task!)

I have all the very processor specific bits in a module with the name of the
processor included. Flags in the main program include file adjust what is
hidden from compilers that won't understand it.

I have a \"shim\" header that lets me define \"configurations\".

<commentary defining what configuration 1 is all about>
#ifdef CONFIGURATION1
...
#endif

<commentary defining what configuration 2 is all about>
#ifdef CONFIGURATION2
...
#endif

So, I can put whatever manifest constants for the target compiler
as well as other hooks that one-or-more source files would need.

I don't want to have to edit source files for any changes in the
toolchain; fold that into the "shim".

Most of my processor specific things are in "locore" or other parts
of the abstraction layer. The code that runs atop that is largely
portable and processor agnostic.

The IDL compiler has to deal with type conversions between targets
in the client- and server-side stubs it creates. So, it's a bit of
a hodge-podge (not just for the gazintas and cumzoutas but, also,
to allow an object that is currently being served by a processor of
one particular architecture to be migrated to a processor of another
architecture, at runtime). So, the internal state associated with the
object has to be convert-able -- and the servers written with that in
mind!

Intel no longer defining __INTEL_COMPILER in its latest offering caught me out.
It is now __INTEL_LLVM_COMPILER :( How daft can you get!

An homage to its roots?

BTW does anyone know of a #define in the MS C++ compiler that is set when the
advanced vector linkage mode is enabled? I haven't found one!
 
On 2/24/2023 3:08 AM, Martin Brown wrote:
We had that with our village hall electricity bill.

Meter had 5 digits but central computer believed it had six - meter reading
muppet zero padded the reading at the wrong end and we got a quarterly bill
that was 9x our total usage to date (about £200k).

Amazingly the electricity company wanted this paid ASAP and got a court order
to cut off supply on the basis that it hadn't been paid and we were in dispute
with them. Fortunately I spotted the guys arriving, unlocked for them, showed
them the meter and explained the nature of the dispute. They saw sense and rang
up their HQ to report the cock-up.

We changed electricity suppliers shortly afterwards.

Many years ago, I got a "terse" letter from one of my banks threatening
to withhold 10% of my interest income -- because they didn\'t have my
social security number (to report to gummit) on file.

There, at the top of the letter, beneath my name/address, was my social
security number!

Annoyed that this was *obviously* a cock-up that was going to take time
out of my day to resolve, I called the bank.

Appears any SSN that *begins* with a '0' was treated as '000-00-0000'
(or some other marker for "not available").

In the US (at the time I got my SSN, no idea what current practice
is), the SSN number-space was partitioned into geographical regions
(to minimize the need for a national on-line registry, no doubt).
So, everyone in my part of the country had a SSN that began with
'0'. The fact that most of the remaining digits were NOT '0'
was obviously overlooked by the cretin who wrote the code!
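A guess at the failure mode, sketched in C. The function names and the exact check are my reconstruction for illustration, not the bank's actual code:

```c
#include <string.h>

/* Buggy version: flags the SSN as "not available" whenever the FIRST
   digit is '0' -- confusing a legitimate leading zero (common in the
   Northeast-US number range) with the 000-00-0000 sentinel. */
static int ssn_missing_buggy(const char *ssn) {
    return ssn[0] == '0';               /* false positive for "045678901" */
}

/* Fixed version: only the all-zero sentinel means "not available". */
static int ssn_missing_fixed(const char *ssn) {
    return strcmp(ssn, "000000000") == 0;
}
```

The same class of bug appears whenever a fixed-width digit string gets round-tripped through an integer, which silently drops leading zeros.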
 
On 2023-02-23 04:28, Martin Brown wrote:
On 23/02/2023 03:15, Sylvia Else wrote:
On 23-Feb-23 6:05 am, John Larkin wrote:
https://en.wikipedia.org/wiki/Timeline_of_programming_languages


Now I'm told that we should be coding hard embedded products in C++ or
Rust.


But can you afford the memory and time overheads inherent in run-time
range checks of things like array accesses?

Today CPU time is cheap and most embedded controllers are way faster
than they need to be to do the job (this was not always true).

But program memory is still at a significant premium--more expensive and
harder to get. (We went through a lot of pain due to the LPC845
becoming unobtainium for 18 months--porting to the 825, going to two
processors instead of just one....)

Checks and asserts can help in debugging code  but if any of them have
side effects then it can make for unwelcome interesting behaviour when
the final optimised version is created.

The standard trick is to develop it with all the range checking on and
some form of postmortem call stack dump if it ever crashes and then
disable all the checking in production code but leave the post mortem
stack traceback and keep a copy of the map file and production code.

That way with a bit of luck you can identify and eliminate any in field
failures reliably. This presupposes you have a way to communicate with
the embedded firmware and do a soft reset to regain control.
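The "checks with side effects" hazard mentioned above takes only a few lines of C to demonstrate (a minimal sketch; `guarded_step` and `calls` are made-up names):

```c
#include <assert.h>

static int calls = 0;

/* A check with a side effect buried inside it: the increment lives in
   the assert() expression, so compiling the production build with
   -DNDEBUG removes the increment along with the check -- the "unwelcome
   interesting behaviour" when the final optimised version is created. */
static int guarded_step(void) {
    assert(++calls > 0);    /* WRONG: side effect inside the assertion */
    return calls;
}
```

The safe pattern is to perform the side effect unconditionally and assert only on the result: `int n = ++calls; assert(n > 0);`.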

Once again, this pleasant strategy assumes that you have enough flash to
hold the full debug version of the code, which is far from universally true.

Unlike hardware which wears out with time software should become more
reliable with accumulated runtime in different environments.

Well-aged shelfware is always the best. ;)

Cheers

Phil Hobbs


--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On 2/24/2023 15:13, Phil Hobbs wrote:
On 2023-02-23 04:28, Martin Brown wrote:
.......
Unlike hardware which wears out with time software should become more
reliable with accumulated runtime in different environments.


Well-aged shelfware is always the best. ;)

Of course, but... sometimes even 20 years can be less than
"well aged", as I found out not so long ago.
A customer of a new product line (the tld readers,
http://tgi-sci.com//tgi/tld/index.htm ) brought two units for
maintenance (they had managed to injure the HV cable so it
would occasionally trigger the overcurrent protection etc.)
and I set about installing the latest version of dps, nuvi and all,
as I had the units here and would not need to call and ask for
a repower if I messed something up doing all that online.

So I left them running in a loop to check that they would do 2000 measurements
with no issues, and found out they would crash somewhere between
500 and 1500 depending on luck (it took hours to get there, some 10
seconds per measurement). I could see which task was *usually* doing
an access to unallocated memory, but things were messed up enough
that the unit needed a reboot.
It took me perhaps *a week* to find out what was happening.
It turned out the visualization task for a tld spectrum - very
similar to that for a pha spectrum for gamma, alpha etc., but new
and obviously MCS, not PHA - was using floating point a lot, and
someone had forgotten to turn on the "save the FP regs" option for that
task properly. Wait, "someone" could only be the person who had written
all that...
Then, while converting an FP number to decimal so it could be shown, the system
call would divide by 10 in a loop and stack the ASCII numeric
characters (so they could be unstacked in the wanted order),
relying on never getting more than 32 positions.
Even with more, they would just go below the stack
pointer, which usually has (and had in this case) a lot of spare room,
and it would be just a wrong number being shown.
However, if the task doing that conversion happened to be preempted
while doing it - and not have its FP regs saved - and got the divisor,
the 10, switched to say 1... the number of positions would become
indefinite and all the stack would fill with ASCII zeroes, $30.
Which is why another task, usually the HV control
task - which shares a common area with the visualization one - went
for address $3030303x-something, this after plenty of other crucial
stuff had been smashed.
Now, perhaps 20 years later, I have added a limiting counter to that
stacking loop... Might save me some day if I forget to save
the FP regs again :).
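The divide-by-10 digit-stacking conversion described above, with the limiting counter added, might look like this in C (a sketch of the technique, not the actual system call):

```c
#include <string.h>

/* Convert v to decimal by pushing digits least-significant first, then
   unstacking them into the output most-significant first. Returns the
   length written, or -1 if the digit stack would overflow -- the runaway
   case that occurs if the divisor gets corrupted (e.g. 10 becoming 1)
   and the loop never terminates normally. */
static int format_uint(unsigned long v, char *out, int cap) {
    char stack[32];
    int n = 0;

    do {
        if (n >= (int)sizeof stack)
            return -1;                    /* limiter: bail, don't smash memory */
        stack[n++] = (char)('0' + (v % 10));
        v /= 10;
    } while (v != 0);

    if (n + 1 > cap)
        return -1;                        /* caller's buffer too small */

    int len = 0;
    while (n > 0)
        out[len++] = stack[--n];          /* unstack in the wanted order */
    out[len] = '\0';
    return len;
}
```

With the bound in place, a corrupted divisor produces an error return instead of a stack full of $30 bytes and a crash in some unrelated task.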

------------------------------------------------------
Dimiter Popoff, TGI http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/
 
