ping john Larkin Raspberry Pi Pico gets BASIC interpreter...

On 6/27/2023 1:58 AM, Martin Brown wrote:
On 26/06/2023 14:51, John Larkin wrote:
On Mon, 26 Jun 2023 04:08:42 -0700, Don Y
blockedofcourse@foo.invalid> wrote:

On 6/26/2023 2:45 AM, Martin Brown wrote:

Most computer languages look somewhat like Basic apart from APL & Forth.
(and a few exotic modern CompSci languages like Haskell)

I guess that depends on how you define "like".

Coding in any of the LISP dialects is likely a rude awakening for
the uninitiated.  Ladder logic?

Remiss of me not to mention LISP as one of the earliest languages entirely different
from Basic (aka Lots of Irritating Single Parentheses). I once, long ago,
worked on a Lisp compiler.

It's a great example because it requires an entirely different mindset;
much like OOPS vs. procedural coding.

And, if you've been working in resource starved environments (e.g.,
deeply embedded systems with hardware of that era), the least of which
problems is getting used to the inefficiency of such representations!

[I can recall writing a service for a PROM programmer -- in Pascal.
Of course, you write a routine to convert a nybble to ASCII; then
use that to convert a byte; then that to convert an "address"; etc.
Because that's how you would do it in ASM on a *tiny* processor! In
Pascal, it just looks stupid and unnecessarily complex! You have to
be able to map your approach to the environment/tools that you're
using to address the problem space.]

Much of the similarity is a consequence (IMO) of the serial
way that humans tend to think -- esp when it comes to algorithms...
it\'s almost always a set of *steps* instead of a network.

So do all mathematical proofs and, for that matter, proofs of correctness of
software systems - one step at a time built on solid foundations. I had a play
with Z and VDM a few decades ago but found them unwieldy (and distinctly
overkill for the reliability we needed).

But, there are tools/technologies that let you express problems
with full parallelism. Granted, as you work on each subproblem
you think serially. But, the tool/technology lets those individual
subproblems come together *correctly* -- if you've embedded the
right dependencies in the expression!

Computer programming is almost always procedural. When parallel things
need to be done, it's usually broken into threads or processes with
semaphores, locks, blocks, interrupts, flags, FIFOs, things like that.
Most programmers never use state machines.
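
[As an aside, a minimal sketch of what "using a state machine" looks like in C --
the states and events here are purely illustrative, not from any product:]

enum event { EV_ARM, EV_START, EV_STOP, EV_FAULT, EV_RESET };
enum state { IDLE, ARMED, RUNNING, FAULT };

static enum state st = IDLE;

/* Called from the main loop or an ISR; all "parallelism" is reduced to
   one explicit, inspectable table of transitions.                      */
void on_event(enum event ev)
{
    switch (st) {
    case IDLE:    if (ev == EV_ARM)        st = ARMED;   break;
    case ARMED:   if (ev == EV_START)      st = RUNNING;
                  else if (ev == EV_FAULT) st = FAULT;   break;
    case RUNNING: if (ev == EV_STOP)       st = IDLE;
                  else if (ev == EV_FAULT) st = FAULT;   break;
    case FAULT:   if (ev == EV_RESET)      st = IDLE;    break;
    }
}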

You have some very funny ideas. Computer science uses all of the methods
available to it and more besides.

Dunning-Kruger. He's obviously only looked at toy applications...
likely written in simple languages (e.g., BASIC). And, thinks
you solve performance problems by buying faster hardware.

FPGA design is done in synchronous clocked logic in nonprocedural
languages; everything happens everywhere all at once. Crossing a clock
boundary is recognized as something to avoid or handle very carefully.
Computer programming is a lot like old-style hairball async logic
design and has correspondingly many bugs.

And the FPGA program is designed and implemented in the software that you so
despise. How can you possibly trust it to do the right thing?

Ditto the simulations. And, likely heavily relied upon in the
design of the silicon/discretes that's used! I can recall doing
full customs and having to model the effects of temperature,
supply and process variations in all my performance models.
Won't work to design something that runs ONLY at "STP"!

You should be hand coding it manually single bit by bit since you have made the
case so cogently that no software can ever be trusted to work.

I'd be happy for a power supply that didn't shit the bed. I watch
countless devices headed to the tip simply because their designers
couldn't/wouldn't design a supply that "ran forever" (isn't my
software expected to do so??)

Computing languages are fad driven, and that drives good things out of
circulation, a sort of Gresham's Law of computing.

I don't think that is true at all. The older computer languages were
limited by the computing power and hardware available at the time. Modern
languages harness the huge computing resources available today to take some
of the tedious grunt work out of coding and detecting errors.

I think the BSPs, HALs, OSs, etc. are more guilty of that.  Folks don't code
on bare metal anymore -- just as they don't put a CPU on a schematic any
longer.  They are "sold" the notion that they can treat this API as
a well-defined abstraction -- without ever defining the abstraction well!
They don't know what their implementations "cost" or how to even *measure*
performance -- because they don't know what's involved.

OTOH, a lot of "coding" is taught targeting folks who will be building
web pages or web apps where there is no concern for resource management
(it works or it doesn't).

Coding has no theory, no math, and usually little testing. Comments

Software development has a hell of a lot of maths and provably correct software
is essentially just a branch of applied mathematics. It is also expensive and very
difficult to do, and so most practitioners don't do it.

And most employers neither want to hire qualified people nor take their
EXPERT ADVICE on how to tackle particular jobs.

I've had employers/clients treat projects as "time-limited": "So,
what SUBSET of the product do you want to implement?" (clearly,
if you only have X manhours to throw at a project that requires
Y > X, something just isn't going to get done. Would you like to
make that decision now? Or, live with whatever the outcome happens
to be? Or, get smart and no-bid the job??!)

Ask a coder how long a particular piece of code takes to execute.
(particularly amusing for folks who *claim* their application is HRT;
"what guarantees do you have as to meeting your deadline(s)?")

Or, to guesstimate *percentage* of time in each portion of the code.
(do you know how the compiler is likely going to render your code?
do you know what the hardware will do with it? If you *measure*
it, how sure are you that it will perform similarly in all possible
cases?)

Or, how deep the stack penetration goes. (how can you know how much space to
allocate for the stack if you don't know what worst-case penetration
will be? what do you mean, you're relying on libraries to which you
don't have sources??? how have their needs been quantified?)

My university's computing department grew out of the maths laboratory and was
exiled to a computer tower when their big machines started to require insane
amounts of power and acolytes to tend to their needs.

Our "CS" department was a subset of the EE curriculum. So, you learned
how to design a CPU as well as WHY you wanted it to have a particular set
of features.

On the CS side, you understood why call-by-value and call-by-reference
semantics differed -- and the advantages/consequences of each. And, how
to convert one to another (imagine how to implement by-value syntax
for an argument that was many KB -- to avoid the downside of by-reference
semantics!) What can you do *in* the processor to make these things possible?
What are the costs? Liabilities?
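
[A minimal C illustration of that trade-off -- the struct and names here are
made up for the example: by-value copies the whole object onto the stack,
by-reference passes one pointer but exposes the caller's copy.]

#include <stddef.h>

struct image { unsigned char pixels[64 * 1024]; };   /* "many KB" */

/* By value: the compiler copies all 64 KB at every call -- safe but costly. */
unsigned long sum_by_value(struct image img)
{
    unsigned long s = 0;
    for (size_t i = 0; i < sizeof img.pixels; i++) s += img.pixels[i];
    return s;
}

/* By reference: only a pointer is passed; 'const' documents that the
   callee promises not to modify the caller's data.                    */
unsigned long sum_by_ref(const struct image *img)
{
    unsigned long s = 0;
    for (size_t i = 0; i < sizeof img->pixels; i++) s += img->pixels[i];
    return s;
}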

are rare and usually illiterate. Bugs are normal because we can always
fix them in the next weekly or daily release. Billion dollar projects
literally crash from dumb bugs. We are in the Dark Ages of
programming.

And what excuse for power supplies that fail?

I reckon more like Medieval cathedral building - if it's still standing after 5
years then it was a good 'un. If it falls down or the tower goes wonky, next
time make the foundations and lower walls a bit thicker.

Why do you derate components instead of using them at their rated
limits? Ans: because experience has TAUGHT you to do so.

The same sorts of practices exist in software engineering -- for folks
who are aware of them. And, they provide the same sorts of reliability
(robustness).

I built a bar-code reader into a product many decades ago. As cost was
ALWAYS an issue, it was little more than an optical (reflective) sensor
conditioned by a comparator that noticed black/white levels and AGC'd
the signal into a single digital "level".

That directly fed an interrupt.

That ran continuously (because it would be a crappy UI if the
user had to push a button to say "I want to scan a barcode, now!")
The design targeted a maximum scan rate of 100 ips. Bar transitions
could occur at (worst case) ~7 microsecond intervals. (40 year
old processors!)

And, nothing to prevent a malicious user from rubbing a label across
the detector -- back and forth -- as fast as humanly possible (just to
piss off the software and/or "prove" it to be defective: "If it doesn't
handle 300ips, how do we know it is correctly handling 100ips?")

Yup. You could consume 100% of real-time by doing so. But, the
processor wouldn't crash. Data wouldn't be corrupted. And, when
your arm eventually got tired, you'd look up to see the correct
barcode value displayed!

Because the system was *designed* to handle overload *gracefully*.
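
[A sketch of the general idea -- NOT the actual product code, and
read_and_restart_timer() is a hypothetical helper: the ISR does the bare
minimum and, when overloaded, drops *work* rather than state.]

#define QLEN 64                              /* power of two; sized for one label */

extern unsigned short read_and_restart_timer(void);   /* hypothetical timer helper */

static volatile unsigned short q[QLEN];      /* edge-to-edge intervals, timer ticks */
static volatile unsigned char  head, tail;
static volatile unsigned char  overrun;      /* set when edges had to be dropped    */

void edge_isr(void)                          /* one interrupt per bar/space edge    */
{
    unsigned short dt = read_and_restart_timer();
    unsigned char next = (head + 1u) & (QLEN - 1u);
    if (next == tail) {                      /* queue full: overloaded              */
        overrun = 1;                         /* flag it and drop the edge --        */
        return;                              /* don't block, don't corrupt anything */
    }
    q[head] = dt;
    head = next;
}

/* The background task drains the queue, throws away any partial scan when
   'overrun' was set, and only reports a code that decoded cleanly.        */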

Ever see a PC handle a hung disk?

UK emergency 999 system went down on Sunday morning (almost certainly a
software update gone wrong) and, guess what, the backup system didn't work
properly either. It took them ~3 hours to inform the government too!

https://www.publictechnology.net/articles/news/nhs-launches-‘full-investigation’-90-minute-999-outage

It affected all the UK emergency services not just NHS.

Same happened with passport control a couple of weeks ago - a fault deemed too
"sensitive" (i.e. embarrassing) to disclose how it happened.

https://www.bbc.co.uk/news/uk-65731795

Most often, these are \"people failures\". Someone failed to perform a
step in a procedure that was indicated/mandated.

Who said "Anybody can learn to code"?

It is true that anybody can learn to code but there are about three orders of
magnitude difference between the best professional coders (as you disparagingly
choose to call them) and the worst ones. I prefer the description software
engineer, although I am conscious that many journeyman coders are definitely not
doing engineering or anything like it!

How many technicians design custom silicon?

I have known individuals who quite literally had to be kept away from important
projects because their ham fisted \"style\" of hack it and be damned would break
the whole project resulting in negative progress.

We had a guy who was perpetually *RE*bugging the floating point libraries
in our products (we treated software modules as components -- with specific
part numbers catalogued and entered into "inventory". Why reinvent the
wheel for every project?) It got to the point that we would track the
"most recent KNOWN GOOD" release and always avoid the "latest".

One of the snags is that at university level anyone who has any aptitude for
the subject at all can hack their assessment projects out of solid code in no
time flat - i.e. ignore all the development processes they are supposed to have
been taught. You can get away with murder on something that requires less than
3 man-months of work and no collaboration.

And, no *followup*!

Conversely, writing a piece of code that can stand for years/decades
and be understood by those that follow is a *skill*. When your product
life is measured in a few years, you're never really "out of development".

*Designing* a solution that can stand the test of time is a considerable
effort. FAT12, FAT16, FAT32, exFAT, NTFS, etc. Each an embarrassing
admission that the designers had no imagination to foretell the inevitable!

[How many gazillions of man-hours have developers AND USERS wasted
to short-sighted implementation decisions? Incl those that have some
"rationale" behind them?]
 
On Tue, 27 Jun 2023 05:32:58 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:

On 6/27/2023 2:10 AM, Martin Brown wrote:
On 26/06/2023 18:17, Don Y wrote:
On 6/26/2023 9:18 AM, Don wrote:
John Larkin wrote:

Who said "Anybody can learn to code"?

Someone who needs software and wants someone else to write it?

ANYONE can learn to code.  Coding is a largely mechanical skill.
Do this to get that.

Knowing which THIS to do is the issue.

What\'s the difference between:

for (i = 0; i < MAXI; i++)
    for (j = 0; j < MAXJ; j++)
        array[i][j] = 17

and

for (j = 0; j < MAXJ; j++)
    for (i = 0; i < MAXI; i++)
        array[i][j] = 17

(this is CompSci 101 material)

Transposing an array is a better example for this purpose.

But this is *obvious*! And, something that is frequently done
(though the assignment may be some expression instead of a
constant and other actions may exist in the loops).

What happens when MAX{I,J} is MAXINT? Will you (eventually) "lift"
a piece of this code from an app where it *works* and misapply it
to another where it *won't*? The *code* is correct in each of these
cases...

Someone would invariably bring our Starlink VAX to its knees by doing it the
naive way in a noddy style loop. It was annoying because everybody was handling
very large (for the time) images and highly optimised rectangular-array
transpose subroutines were in the library.

for (i = 0; i < MAXI; i++)
    for (j = 0; j < MAXJ; j++)
        array[i][j] = array[j][i]

Generates an insane number of page faults once MAXI*MAXJ > pagesize.
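
[To make the locality point concrete, a hedged sketch -- the array size is
illustrative only: both fills produce the same array, but one walks memory
sequentially and the other strides a whole row per store.]

#include <stddef.h>

#define MAXI 1024
#define MAXJ 1024

static int array[MAXI][MAXJ];      /* C is row-major: array[i][j+1] is adjacent */

void fill_fast(void)               /* sequential: cache- and page-friendly       */
{
    for (size_t i = 0; i < MAXI; i++)
        for (size_t j = 0; j < MAXJ; j++)
            array[i][j] = 17;
}

void fill_slow(void)               /* strides MAXJ*sizeof(int) bytes per store:  */
{                                  /* a fresh cache line (or page) every time    */
    for (size_t j = 0; j < MAXJ; j++)
        for (size_t i = 0; i < MAXI; i++)
            array[i][j] = 17;
}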

Cache misses are a more common issue as many folks don't use a PMMU
(in an embedded product) -- yet caches abound! "It's just code" -- as
if all implementations are equivalent.

[The same sorts of folks likely don\'t understand cancellation]

Wait until embedded systems start having to deal with runtime thrashing!
(The fact that the cache is being abused is largely hidden from the
coder because he doesn't understand HOW performance is defined)


Raspberry Pi Pico runs code out of a serial flash chip with 16 KB of
code cache, shared between two CPUs. That can get interesting.

Well, I used to do entire systems running the code out of that much
eprom.
 
On Tue, 27 Jun 2023 09:58:23 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 26/06/2023 14:51, John Larkin wrote:
On Mon, 26 Jun 2023 04:08:42 -0700, Don Y
blockedofcourse@foo.invalid> wrote:

On 6/26/2023 2:45 AM, Martin Brown wrote:

Most computer languages look somewhat like Basic apart from APL & Forth.
(and a few exotic modern CompSci languages like Haskell)

I guess that depends on how you define "like".

Coding in any of the LISP dialects is likely a rude awakening for
the uninitiated. Ladder logic?

Remiss of me not to mention LISP as one of the earliest languages entirely
different from Basic (aka Lots of Irritating Single
Parentheses). I once, long ago, worked on a Lisp compiler.

Much of the similarity is a consequence (IMO) of the serial
way that humans tend to think -- esp when it comes to algorithms...
it\'s almost always a set of *steps* instead of a network.

So do all mathematical proofs and, for that matter, proofs of correctness
of software systems - one step at a time built on solid foundations. I
had a play with Z and VDM a few decades ago but found them unwieldy
(and distinctly overkill for the reliability we needed).

Computer programming is almost always procedural. When parallel things
need to be done, it's usually broken into threads or processes with
semaphores, locks, blocks, interrupts, flags, FIFOs, things like that.
Most programmers never use state machines.

You have some very funny ideas. Computer science uses all of the methods
available to it and more besides.

All the computers that I know of execute instructions sequentially, so
their compilers assume procedural programming.

Are there any parallel-execution computer languages, like Verilog?
That would execute very inefficiently on any "computer" architecture.

FPGA programs have far fewer bugs than computer code. The "state" of a
procedural program is the number in the program counter(s), which is
mostly an uncontrolled mess. A state machine with 2^192 states, for
example, is hard to map on a whiteboard.


FPGA design is done in synchronous clocked logic in nonprocedural
languages; everything happens everywhere all at once. Crossing a clock
boundary is recognized as something to avoid or handle very carefully.
Computer programming is a lot like old-style hairball async logic
design and has correspondingly many bugs.

And the FPGA program is designed and implemented in the software that
you so despise. How can you possibly trust it to do the right thing?

Because I define the states and the tools implement them correctly.
The tools do optimize my logic equations for speed and to fit the
actual architecture but they always do that right.

Sometimes we have to force the tools to *not* optimize logic
expressions, especially if we want some delays to not be removed.
Those cases are obvious and there are simple tricks. But that just
works without the scores of latent bugs common to software.

We know the worst-case timing path in FPGAs. uP programmers can't
usually guess execution time of an interrupt within 10:1. In my
experience, they tend to be very pessimistic, and I have to force them
to measure it.
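
[One low-tech way to make that measurement -- a sketch only; gpio_set(),
gpio_clear(), TEST_PIN and handle_rx_byte() are placeholders for whatever
the real target provides: bracket the ISR with a pin toggle and read the
pulse width, worst case included, on a scope.]

#define TEST_PIN 5                     /* placeholder pin number            */

extern void gpio_set(int pin);         /* placeholder: one register write   */
extern void gpio_clear(int pin);
extern void handle_rx_byte(void);      /* the real work of the ISR          */

void uart_rx_isr(void)
{
    gpio_set(TEST_PIN);                /* pin high for the duration of ISR  */
    handle_rx_byte();
    gpio_clear(TEST_PIN);              /* scope shows min/typ/max width and */
}                                      /* the jitter, not a 10:1 guess      */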
 
On Tue, 27 Jun 2023 06:15:01 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:

On 6/27/2023 1:58 AM, Martin Brown wrote:
On 26/06/2023 14:51, John Larkin wrote:
On Mon, 26 Jun 2023 04:08:42 -0700, Don Y
blockedofcourse@foo.invalid> wrote:

On 6/26/2023 2:45 AM, Martin Brown wrote:

Most computer languages look somewhat like Basic apart from APL & Forth.
(and a few exotic modern CompSci languages like Haskell)

I guess that depends on how you define \"like\".

Coding in any of the LISP dialects is likely a rude awakening for
the uninitiated.  Ladder logic?

Remiss of me not to mention LISP as one of the earliest languages entirely different from
Basic (aka Lots of Irritating Single Parentheses). I once, long ago,
worked on a Lisp compiler.

It's a great example because it requires an entirely different mindset;
much like OOPS vs. procedural coding.

And, if you\'ve been working in resource starved environments (e.g.,
deeply embedded systems with hardware of that era), the least of which
problems is getting used to the inefficiency of such representations!

[I can recall writing a service for a PROM programmer -- in Pascal.
Of course, you write a routine to convert a nybble to ASCII; then
use that to convert a byte; then that to convert an \"address\"; etc.
Because that\'s how you would do it in ASM on a *tiny* processor! In
Pascal, it just looks stupid and unnecessarily complex! You have to
be able to map your approach to the environment/tools that you\'re
using to address the problem space.]

Much of the similarity is a consequence (IMO) of the serial
way that humans tend to think -- esp when it comes to algorithms...
it\'s almost always a set of *steps* instead of a network.

So do all mathematical proofs and, for that matter, proofs of correctness of
software systems - one step at a time built on solid foundations. I had a play
with Z and VDM a few decades ago but found them unwieldy (and distinctly
overkill for the reliability we needed).

But, there are tools/technologies that let you express problems
with full parallelism. Granted, as you work on each subproblem
you think serially. But, the tool/technology lets those individual
subproblems come together *correctly* -- if you\'ve embedded the
right dependencies in the expression!

Computer programming is almost always procedural. When parallel things
need to be done, it's usually broken into threads or processes with
semaphores, locks, blocks, interrupts, flags, FIFOs, things like that.
Most programmers never use state machines.

You have some very funny ideas. Computer science uses all of the methods
available to it and more besides.

Dunning-Kruger. He's obviously only looked at toy applications...
likely written in simple languages (e.g., BASIC). And, thinks
you solve performance problems by buying faster hardware.

You can look at my web site. All that stuff works, and people buy it,
and that\'s the bottom line.

We usually shift "performance problems" into FPGAs and let uPs do the
grunt work like web pages and displays and pushbuttons. We sell
electronic instruments and this is S.E.D.

FPGA design is done in synchronous clocked logic in nonprocedural
languages; everything happens everywhere all at once. Crossing a clock
boundary is recognized as something to avoid or handle very carefully.
Computer programming is a lot like old-style hairball async logic
design and has correspondingly many bugs.

And the FPGA program is designed and implemented in the software that you so
despise. How can you possibly trust it to do the right thing?

Because it routinely does.

Ditto the simulations. And, likely heavily relied upon in the
design of the silicon/discretes that\'s used! I can recall doing
full customs and having to model the effects of temperature,
supply and process variations in all my performance models.
Won\'t work to design something that runs ONLY at \"STP\"!

The Actel FPGAs were interesting. One-time programmable, soldered to
the board, with no simulation tools. We got some hairy digital delay
generators, with exotic PLLs, to work first time.
 
On a sunny day (Tue, 27 Jun 2023 05:32:58 -0700) it happened Don Y
<blockedofcourse@foo.invalid> wrote in <u7ektu$1d3hv$2@dont-email.me>:

On 6/27/2023 2:10 AM, Martin Brown wrote:
On 26/06/2023 18:17, Don Y wrote:
On 6/26/2023 9:18 AM, Don wrote:
John Larkin wrote:

Who said \"Anybody can learn to code\" ?

Someone who needs software and wants someone else to write it?

ANYONE can learn to code.  Coding is a largely mechanical skill.
Do this to get that.

Knowing which THIS to do is the issue.

What\'s the difference between:

for (i = 0; i < MAXI; i++)
    for (j = 0; j < MAXJ; j++)
        array[i][j] = 17

and

for (j = 0; j < MAXJ; j++)
    for (i = 0; i < MAXI; i++)
        array[i][j] = 17

(this is CompSci 101 material)

Transposing an array is a better example for this purpose.

But this is *obvious*! And, something that is frequently done
(though the assignment may be some expression instead of a
constant and other actions may exist in the loops).

What happens when MAX{I,J} is MAXINT? Will you (eventually) \"lift\"
a piece of this code from an app where it *works* and misapply it
to another where it *won\'t*? The *code* is correct in each of these
cases...

Someone would invariably bring our Starlink VAX to its knees by doing it the
naive way in a noddy style loop. It was annoying because everybody was handling
very large (for the time) images and highly optimised rectangular-array
transpose subroutines were in the library.

for (i = 0; i < MAXI; i++)
    for (j = 0; j < MAXJ; j++)
        array[i][j] = array[j][i]

Generates an insane number of page faults once MAXI*MAXJ > pagesize.

Cache misses are a more common issue as many folks don\'t use a PMMU
(in an embedded product) -- yet caches abound! \"It\'s just code\" -- as
if all implementations are equivalent.

[The same sorts of folks likely don\'t understand cancellation]

Wait until embedded systems start having to deal with runtime thrashing!
(The fact that the cache is being abused is largely hidden from the
coder because he doesn't understand HOW performance is defined)

It even does in Fortran which knows how to do large multi dimensional arrays
properly in contiguous memory (unlike C).

And,
     memset( &array[0][0], 17, MAXI*MAXJ )

And, more importantly, why/when would you use each approach?
When would each *fail* -- silently or otherwise??
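
[For what it's worth, a sketch of exactly that failure mode: memset() only
"works" here because each element is a byte, and the same idiom silently
produces garbage for wider elements.]

#include <string.h>

unsigned char bytes[16];
int           words[16];

void demo(void)
{
    memset(bytes, 17, sizeof bytes);   /* OK: every element becomes 17             */
    memset(words, 17, sizeof words);   /* NOT 17: every *byte* becomes 0x11, so    */
                                       /* each int ends up 0x11111111 (286331153)  */
}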

Coders are technicians.  They don't understand the "science"
that goes into the *design* of algorithms.  Never formally
looked at concurrency design methodologies, race/hazard
avoidance, non-locking protocols, etc.

I think it varies a lot with institution. The deskilling of coding software has
resulted in an underclass of semi-literate journeyman coders who have no real
mathematical knowledge to underpin what they do.

There has been a shift towards "teaching what employers seek" -- creating
employees with limited, short-term skillsets to address TODAY's need(s)
at the expense of knowing about those things that will be available, tomorrow.

Increasingly, processor architectures are becoming more "minicomputer-like"
than microcomputer. Yet, the folks using them are oblivious to the
mechanisms available to the practitioner -- because they just see a
"module with BSP/runtime", often created by a company with similarly
limited focus.

"Why implement a VMM system -- there's no disk!"

(Hint: that's not the only use!)

In my day it was pretty common to test out Knuth's algorithms - we even found a
bug in one of the prime testing codes. Though not in time to get a $2^N reward
for finding it. Did get a nice postcard from him though. I didn't do computer
science but sneaked along to some of their lectures when they didn't conflict
with my actual subject.

We spent a lot of time on theory because most of the equipment
was ... "unique". A different language, OS, hardware, focus, etc.
in each class. A "coder" would quickly be lost: in this class,
you'll be using LISP; this other, Algol; another, Pascal;
C in a fourth; etc. All in the course of a *day*. The coder
largely just thinks about syntax and not the "why" behind
language features.

For us, the *language* was insignificant -- the focus was on the
algorithms and the mechanisms that a particular language made
possible or the supporting hardware (e.g., B5000). E.g., lists
are inconvenient mechanisms in most procedural languages yet
delightfully effective in others.

"Lambda calculus? Which *machine* does THAT run on??"
"DFA? What class of problems do *they* solve?"
"Recursion? How do I *know* I won't overrun the stack?"
"What should the objects in this application be?"
"How is object-BASED different from object-ORIENTED?"
"What does a language need to support for the latter?"

A coder would *pick* one of the above -- likely without even
considering that there are alternatives (or the criteria for
successfully choosing between them).

The scariest thing I see all too often with clueless C/C++ coders is the method
of getting it to compile by the random application of casts. Such code almost
never does what the author expects or intended.

Exactly. "How do I silence this compiler's WARNINGS?" (Hint: they are
called warnings FOR A REASON!)

Some languages make it harder to "appease" the compiler with things
that are "wrong". E.g., Limbo doesn't support pointers, is much
more strongly typed, etc. Of course, it relies on GC to give the
coder that "freedom". Does the coder *know* what that will cost him
in any particular application? Or, is it a case of "overprovision to
be on the safe side" -- much like "make everything HRT cuz SRT is *so*
much harder!"

Modern compilers and runtimes have got a lot better at warning idiots that they
have uninitialised variables and/or unreachable code. It should lead to better
software in the future (or so I hope).

I'm not sure that follows. The advent of faster tools seems not to have
led to smarter coders but, rather, more opportunities to GUESS what the
problem MIGHT be. When it took four hours to "turn the crank" (i.e.,
two builds per day), you were REALLY certain about the fixes you would
try cuz you didn't have much time to "play". Add to that, having to SHARE
access to tools and you learned to be very methodical about getting
SOMETHING from each build.

Perhaps the future is deep-AI based, where the computer prompts the domain
expert to say what they want done down each of the less explored branches in
the tree. It has been my experience that it is invariably the obscure, rare
failure of something on the critical paths that goes untested (until that failure
actually happens).

The problem that has to be solved is imagining the entire extent of the
application. I would routinely interview clients to define the
extent of a job:
"What do you want to happen in THIS case?"
Folks often don't know. Do we let an AI *suggest* possible constraints?
Will everything just "throw an error" instead of doing something useful?

Our microwave oven lets you type in a time interval for the maggie.
0-59 makes sense as "number of seconds". What about "60+"?
Should those values throw errors (:59 being the maximum before 1:00)?
If we let 60 be a valid entry, what about 70? 80? 90? 100??
In the latter case, how do we indicate 1:00 -- add a semicolon
or other delimiter?
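
[One common resolution, sketched here purely as an illustration rather than
a claim about any particular oven: treat the last two digits as seconds and
whatever precedes them as minutes, so "90" means 90 s and "130" means 1:30.]

/* entry is the raw digit string the user typed, e.g. "90" or "130".
   Returns total seconds; the "last two digits are seconds" convention
   means 60..99 are accepted as-is rather than rejected.               */
int entry_to_seconds(const char *entry)
{
    int value = 0;
    while (*entry >= '0' && *entry <= '9')
        value = value * 10 + (*entry++ - '0');

    int minutes = value / 100;        /* digits left of the last two   */
    int seconds = value % 100;        /* last two digits, may be > 59  */
    return minutes * 60 + seconds;
}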

We're already seeing the shifting of focus in coders as they "rely"
on more bloated systems on which to build their applications.
Amusing that a 50 year old OS is the basis for many apps, today.
Really? You think that's the way to solve every problem?
Most engineers learn about the limitations of implementations
and seek/try new ideas -- instead of being wed to an obsolescent one.

And, that -- despite MILLIONS of manhours -- it's still loaded with
bugs! (because the developers are enamored with themselves and
haven't learned to "shoot the engineer" -- nor does the idea even seem
to be in their psyche!)

"Everything should be as simple as it can be, but not simpler"

https://unix.stackexchange.com/questions/223746/why-is-the-linux-kernel-15-million-lines-of-code

Instead of thinking about what they *need*, they think about what they
can do with what they THINK they *have* -- as if it can't possibly have
bugs despite the extra complexity/bloat that they're employing.
Imagine having a box full of discretes and feeling obligated to find
a use for them in your hardware design (WTF???)

That's what happens when you throw coders at a project. And, let
inertia govern your design choices!

OTOH, when you let software engineers design a system, they have a deeper
well to draw on for experience as well as exposure to broader ideas
that may -- only now -- be becoming practical \"in the small\".

E.g., MULTICS was designed for infinite up-time -- you replace components
WHILE the system is still running. Just like an electric utility
replaces equipment while folks are using it! Why all this "reboot
required" nonsense? Why can't I replace a library while applications
are being launched and binding (or bound!) to it? Ans: because the
idea is anathema to you because you've a coder's mentality ("That's
just how it's done...")


You talk crap.
Show us some code you wrote.
 
On Tue, 27 Jun 2023 07:31:14 -0700, John Larkin
<jlarkin@highlandSNIPMEtechnology.com> wrote:

On Tue, 27 Jun 2023 05:32:58 -0700, Don Y
blockedofcourse@foo.invalid> wrote:

On 6/27/2023 2:10 AM, Martin Brown wrote:
On 26/06/2023 18:17, Don Y wrote:
On 6/26/2023 9:18 AM, Don wrote:
John Larkin wrote:

Who said \"Anybody can learn to code\" ?

Someone who needs software and wants someone else to write it?

ANYONE can learn to code.  Coding is a largely mechanical skill.
Do this to get that.

Knowing which THIS to do is the issue.

What\'s the difference between:

for (i = 0; i < MAXI; i++)
    for (j = 0; j < MAXJ; j++)
        array[i][j] = 17

and

for (j = 0; j < MAXJ; j++)
    for (i = 0; i < MAXI; i++)
        array[i][j] = 17

(this is CompSci 101 material)

Transposing an array is a better example for this purpose.

But this is *obvious*! And, something that is frequently done
(though the assignment may be some expression instead of a
constant and other actions may exist in the loops).

What happens when MAX{I,J} is MAXINT? Will you (eventually) \"lift\"
a piece of this code from an app where it *works* and misapply it
to another where it *won\'t*? The *code* is correct in each of these
cases...

Someone would invariably bring our Starlink VAX to its knees by doing it the
naive way in a noddy style loop. It was annoying because everybody was handling
very large (for the time) images and highly optimised rectangular-array
transpose subroutines were in the library.

for (i = 0; i < MAXI; i++)
    for (j = 0; j < MAXJ; j++)
        array[i][j] = array[j][i]

Generates an insane number of page faults once MAXI*MAXJ > pagesize.

Cache misses are a more common issue as many folks don\'t use a PMMU
(in an embedded product) -- yet caches abound! \"It\'s just code\" -- as
if all implementations are equivalent.

[The same sorts of folks likely don\'t understand cancellation]

Wait until embedded systems start having to deal with runtime thrashing :
(The fact that the cache is being abused is largely hidden from the
coder because he doesn\'t understand HOW performance is defined)

Raspberry Pi Pico runs code out of a serial flash chip with 16 KB of
code cache, shared between two CPUs. That can get interesting.

Well, I used to do entire systems running the code out of that much
eprom.


Actually, I had a successful CAMAC module with a 6802 CPU and a 16
kbit eprom, 2 Kbyte of code, with room to spare. No FPGA.

The modern equivalent is a $100 dual-core 600 MHz ZYNQ with gigabytes
of SD card running Linux, at 40x the development cost.
 
On 6/27/2023 6:15 AM, Don Y wrote:
My university's computing department grew out of the maths laboratory and
was exiled to a computer tower when their big machines started to require
insane amounts of power and acolytes to tend to their needs.

Our "CS" department was a subset of the EE curriculum.  So, you learned
how to design a CPU as well as WHY you wanted it to have a particular set
of features.

On the CS side, you understood why call-by-value and call-by-reference
semantics differed -- and the advantages/consequences of each.  And, how
to convert one to another (imagine how to implement by-value syntax
for an argument that was many KB -- to avoid the downside of by-reference
semantics!)  What can you do *in* the processor to make these things possible?
What are the costs?  Liabilities?

And, you looked at prior art. Why did this work/not-work? Why
are things no longer done this way but this, instead?

It's embarrassing to see how much prior art has fallen by the wayside
simply because developers don't have an appreciation for what has
come before.

[This falls into the comment below]

In the late 80's, I designed a small system that was intended to
run unattended, 24/7/365. It had an internal 1200BPS modem (an
oddity, for the time) so that it could (literally) "phone home"
when it identified a fault in the kit it was monitoring.

(predating the ubiquity of The Internet)

Similar products had crude text interfaces: "type 1 for vanilla,
2 for chocolate, 3 for strawberry", etc.

I built a layered (popup) menu system so each additional level
(partially) overlayed the screen of the previous interface level.
And, tabs to switch between fields -- or hotkeys to directly
specify options on THIS menu/popup.

This requires lots of characters to be pushed to the display
device... at 120 per second! (a glass titty is ~2000 character
positions).

If you had to repaint the display for each subsequent popup,
the user would forever be waiting for I/O.

So, I built the menu/windowing system atop curses (which I
ported to the Z80 that hosted this application). Let the
CPU sort out how much/little *needs* to be changed to
go from "current display contents" to "desired display
contents" and send JUST those characters over the link.

Check a box? Only need to send a handful of characters
to position the cursor at that box's location and paint
an 'X' between the "[ ]".

Tab to a new field? Another few characters in a cursor
position sequence.

Close the window and expose the portion of the underlying
window that had been overlayed? Count the characters
that need to be changed on the display and divide by 120.

Want to use a different tty? Fine, that's why we have termcap!
(why should *I* reinvent that wheel?)
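
[The curses calls involved are roughly these -- a sketch, not the dialog code
from the attachment: you draw into window buffers and let the refresh step
work out the minimal character stream to send.]

#include <curses.h>

int main(void)
{
    initscr();                              /* take over the terminal via termcap/terminfo */
    WINDOW *form = newwin(10, 40, 5, 20);   /* pop-up overlaying part of the screen        */
    box(form, 0, 0);
    mvwprintw(form, 2, 2, "Baud rate: [ ] 300  [X] 1200");
    wrefresh(form);                         /* only the cells that changed go down the line */

    mvwaddch(form, 2, 14, 'X');             /* "check a box": a cursor motion plus one cell */
    mvwaddch(form, 2, 25, ' ');
    wrefresh(form);

    delwin(form);                           /* closing the pop-up: mark the underlying      */
    touchwin(stdscr);                       /* screen dirty; curses transmits only what     */
    refresh();                              /* actually differs from the physical display   */
    endwin();
    return 0;
}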

Why poke manifest constants into hardware registers AT THE APPLICATION
LAYER? Why not use an ioctl? So the next guy isn't wondering where
else you've been dicking with the hardware in the application code!

No brainer. *If* you are aware of prior art! (and that work
has been created to stand the tests of time)

Compare product to that of competitors with clunky "type 1 for
serial port configuration, 2 to set time, 3 to reset, 4 for
diagnostics" style interface. Simple choice as to which is
perceived as better/slicker/more well-thought-out!

[[attached is the routine that builds the dialog to configure
a serial port -- assuming attachments are handled, here. Remember,
this is running on a Z80 40 years ago -- recognize the UN*X
manifest constants and ftn names? :>]]

Conversely, writing a piece of code that can stand for years/decades
and be understood by those that follow is a *skill*.  When your product
life is measured in a few years, you're never really "out of development".

*Designing* a solution that can stand the test of time is a considerable
effort.  FAT12, FAT16, FAT32, exFAT, NTFS, etc.  Each an embarrassing
admission that the designers had no imagination to foretell the inevitable!

[How many gazillions of man-hours have developers AND USERS wasted
to short-sighted implementation decisions?  Incl those that have some
"rationale" behind them?]
 
On Tuesday, June 27, 2023 at 6:15:13 AM UTC-7, Don Y wrote:
On 6/27/2023 1:58 AM, Martin Brown wrote:
On 26/06/2023 14:51, John Larkin wrote:
On Mon, 26 Jun 2023 04:08:42 -0700, Don Y
blocked...@foo.invalid> wrote:

...Billion dollar projects
literally crash from dumb bugs. We are in the Dark Ages of
programming.
And what excuse for power supplies that fail?
I reckon more like Medieval cathedral building - if it's still standing after 5
years then it was a good 'un. If it falls down or the tower goes wonky, next
time make the foundations and lower walls a bit thicker.
Why do you derate components instead of using them at their rated
limits? Ans: because experience has TAUGHT you to do so.

This rings a bell; a fine stereo component on my AV shelf has needed
my diagnostic attention three times, once for bad power-component
solder joints, once for a failed filter capacitor, and once for a high-voltage
switch element. All three faults (and it has DSP, DVD, remote controls, and
extensive audio and digital circuitry to support that) were in power
supplies.

PCs used to be well-designed; a power supply failure meant you just
unplugged the silver box and bought a replacement. Now, though, motherboard
POL supplies are the big failure mode I see. Unidentifiable crispy things, not repair-friendly.
 
On 6/27/2023 1:55 PM, whit3rd wrote:
On Tuesday, June 27, 2023 at 6:15:13 AM UTC-7, Don Y wrote:
On 6/27/2023 1:58 AM, Martin Brown wrote:
On 26/06/2023 14:51, John Larkin wrote:
On Mon, 26 Jun 2023 04:08:42 -0700, Don Y
blocked...@foo.invalid> wrote:

...Billion dollar projects
literally crash from dumb bugs. We are in the Dark Ages of
programming.
And what excuse for power supplies that fail?
I reckon more like Medieval cathedral building - if it's still standing after 5
years then it was a good 'un. If it falls down or the tower goes wonky, next
time make the foundations and lower walls a bit thicker.
Why do you derate components instead of using them at their rated
limits? Ans: because experience has TAUGHT you to do so.

This rings a bell; a fine stereo component on my AV shelf has needed
my diagnostic attention three times, once for bad power-component
solder joints, once for a failed filter capacitor, and once for a high-voltage
switch element. All three faults (and it has DSP, DVD, remote controls, and
extensive audio and digital circuitry to support that) were in power
supplies.

Is there something revolutionarily advanced about the design of power supplies?
You'd think after all these years, it would be an "established science"! :>

[I see 10-12 tons of kit discarded annually because of hardware failures,
many of which are easy to fix but costly to *pay* someone to fix -- which
explains why I have 32? monitors, 105? disk drives, 22 computers, etc. <grin>]

PCs used to be well-designed, a power supply failure meant you just
unplugged the silver box and bought a replacement.

I rescued a NAS with a power supply problem. Turns out to be
an intermittent cable connection. In a *sealed* unit (so, how
did the connector fail with no one dicking with it??)

Now, though, motherboard
POL supplies are the big failure mode I see. Unidentifiable crispy things, not repair-friendly.

Software is "easy" to fix (download an update) yet hardware failures
prove to be the thing that obsoletes devices. Seems like attention is
focused in the wrong place! :> We need better HARDWARE quality!
 
On Tue, 27 Jun 2023 14:05:44 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:

On 6/27/2023 1:55 PM, whit3rd wrote:
On Tuesday, June 27, 2023 at 6:15:13 AM UTC-7, Don Y wrote:
On 6/27/2023 1:58 AM, Martin Brown wrote:
On 26/06/2023 14:51, John Larkin wrote:
On Mon, 26 Jun 2023 04:08:42 -0700, Don Y
blocked...@foo.invalid> wrote:

...Billion dollar projects
literally crash from dumb bugs. We are in the Dark Ages of
programming.
And what excuse for power supplies that fail?
I reckon more like Medieval cathedral building - if it's still standing after 5
years then it was a good 'un. If it falls down or the tower goes wonky, next
time make the foundations and lower walls a bit thicker.
Why do you derate components instead of using them at their rated
limits? Ans: because experience has TAUGHT you to do so.

This rings a bell; a fine stereo component on my AV shelf has needed
my diagnostic attention three times, once for bad power-component
solder joints, once for a failed filter capacitor, and once for a high-voltage
switch element. All three faults (and it has DSP, DVD, remote controls, and
extensive audio and digital circuitry to support that) were in power
supplies.

Is there something revolutionarily advanced about the design of power supplies?
You'd think after all these years, it would be an "established science"! :>

No big line frequency transformers and, recently, megahertz GaN
switchers.
 
On Tue, 27 Jun 2023 10:10:46 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

Different languages use different multidimensional array layouts. With
Fortran it makes sense to use the innermost loop to access the _first_
index, while in C the "virtual memory friendly" way is to use the
innermost loop for the _last_ index.

An application originally written in Fortran will behave really badly
if simply converted to C to avoid compiler syntax errors :).
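
[In code, the "friendly" nesting differs by language -- a sketch, with the
Fortran form shown in a comment for comparison:]

#define N 1024
double a[N][N];

void scale_c(double k)
{
    /* C is row-major: a[i][0], a[i][1], ... are contiguous,
       so the LAST index belongs in the innermost loop.       */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            a[i][j] *= k;
}

/* The Fortran equivalent is column-major, so the FIRST index goes innermost:
 *
 *     do j = 1, n
 *        do i = 1, n
 *           a(i, j) = a(i, j) * k
 *        end do
 *     end do
 */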

>Transposing an array is a better example for this purpose.

This is a bad example, since in transposing an array, either the input
or output array is processed optimally, but not both. If the input is
stored optimally, the output is scattered all over and so the dirty
pages must be saved to disk, even if there is only a single "dirty"
array element in that page.

Someone would invariably bring our Starlink VAX to its knees by doing it
the naive way in a noddy style loop. It was annoying because everybody
was handling very large (for the time) images and highly optimised
rectangular-array transpose subroutines were in the library.

In the 1970's people believed in virtual memory and some systems were
originally delivered with less than 1 MB of physical memory :)


for (i = 0; i < MAXI; i++)
    for (j = 0; j < MAXJ; j++)
        array[i][j] = array[j][i]

Generates an insane number of page faults once MAXI*MAXJ > pagesize.


The working set size limits how big arrays can be readily transposed,
that is how many "dirty" pages can be in physical memory at one time.
If the output is suboptimal the same page may have to be written out
multiple times. For this reason it makes more sense to store the
output optimally, since a completely dirty page needs to be written
out only once.
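
[A common way to keep the number of live dirty pages bounded is a blocked
(tiled) transpose -- a sketch, with sizes chosen only for illustration and
the tile picked to fit the working set:]

#include <stddef.h>

#define N    4096
#define TILE 64                  /* a row-slice of 'in' and a column-slice of
                                    'out' for one tile both stay resident     */
static double in[N][N], out[N][N];

void transpose_blocked(void)
{
    for (size_t ib = 0; ib < N; ib += TILE)
        for (size_t jb = 0; jb < N; jb += TILE)
            /* all the faults for this TILE x TILE block happen once; when the
               block is finished its pages can be written out / evicted once   */
            for (size_t i = ib; i < ib + TILE; i++)
                for (size_t j = jb; j < jb + TILE; j++)
                    out[j][i] = in[i][j];
}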


It even does in Fortran which knows how to do large multi dimensional
arrays properly in contiguous memory (unlike C).

You can write "virtual memory friendly" code in C if you start from
the beginning. Using some existing Fortran library routines and just
changing the syntax to C will be a disaster if you do not alter the
innermost/outermost loop order.
 
On Wed, 28 Jun 2023 02:40:23 +0300, upsidedown@downunder.com wrote:

On Tue, 27 Jun 2023 10:10:46 +0100, Martin Brown
\'\'\'newspam\'\'\'@nonad.co.uk> wrote:

Different languages use different multidimensional array layouts. With
Fortran it makes sense to use the innermost loop to access the _first_
index, while in C the "virtual memory friendly" way is to use the
innermost loop for the _last_ index.

An application originally written in Fortran will behave really badly
if simply converted to C to avoid compiler syntax errors :).

Transposing an array is a better example for this purpose.

This is a bad example, since in transposing an array, either the input
or output array is processed optimally, but not both. If the input is
stored optimally, the output is scattered all over and so the dirty
pages must be saved to disk, even if there is only a single "dirty"
array element in that page.


Someone would invariably bring our Starlink VAX to its knees by doing it
the naive way in a noddy style loop. It was annoying because everybody
was handling very large (for the time) images and highly optimised
rectangular-array transpose subroutines were in the library.

In the 1970's people believed in virtual memory and some systems were
originally delivered with less than 1 MB of physical memory :)

PDP-8, 4K 12-bit words. I simulated a steamship propulsion system,
graphed step response on a teletype, showed it to some owners and a
shipyard, and sold stuff. Bunch of LASH ships.

https://en.wikipedia.org/wiki/Lighter_aboard_ship

San Francisco has an inlet on the bay called Lash Lighter Basin.
 
On 6/27/2023 4:40 PM, upsidedown@downunder.com wrote:
On Tue, 27 Jun 2023 10:10:46 +0100, Martin Brown
\'\'\'newspam\'\'\'@nonad.co.uk> wrote:

Different languages use different multidimensional array layouts. With
Fortran it makes sense to use the innermost loop to access the _first_
index, while in C the "virtual memory friendly" way is to use the
innermost loop for the _last_ index.

The point of the example is to show that, to a coder, the fragments
seem identical -- at the end of the outermost loop, the array is
identical in both cases (and we're ignoring the possibility that
the operation might want to be atomic so the end results are all
that matters)

An application originally written in Fortran will behave really badly
if simply converted to C to avoid compiler syntax errors :).

Transposing an array is a better example for this purpose.

This is a bad example, since in transposing an array, either the input
or output array is processed optimally, but not both. If the input is
stored optimally, the output is scattered all over and so the dirty
pages must be saved to disk, even if there is only a single "dirty"
array element in that page.

Someone would invariably bring our Starlink VAX to its knees by doing it
the naive way in a noddy style loop. It was annoying because everybody
was handling very large (for the time) images and highly optimised
rectangular-array transpose subroutines were in the library.

In the 1970's people believed in virtual memory and some systems were
originally delivered with less than 1 MB of physical memory :)

Even with "resource flush" systems, VMM has other benefits that can
be leveraged to advantage. But, if you aren't aware of how to use it,
you're likely unaware of how to "abuse" it!

E.g., you can implement call-by-value semantics for large objects
(e.g., arrays) instead of being constrained by call-by-reference
semantics imposed by the language (who wants to push a 10KB object?)

Or, CoW technology. Or, just memory protection hacks.

for (i = 0; i < MAXI; i++)
    for (j = 0; j < MAXJ; j++)
        array[i][j] = array[j][i]

Generates an insane number of page faults once MAXI*MAXJ > pagesize.

The working set size limits how big arrays can be readily transposed,
that is how many \"dirty\" pages can be in physical memory at one time.
If the output is suboptimal the same page may have to be written out
multiple times. For this reason it makes more sense to store the
output optimally, since a completely dirty page needs to be written
out only once.


It even does in Fortran which knows how to do large multi dimensional
arrays properly in contiguous memory (unlike C).

You can write "virtual memory friendly" code in C if you start from
the beginning. Using some existing Fortran library routines and just
changing the syntax to C will be a disaster if you do not alter the
innermost/outermost loop order.


But few embedded designs rely on VMM -- if used, it\'s not in
the legacy sense of using a larger backing store than the
resident store.

[Though VMM is now becoming more acceptable as the hardware
supporting it is on a par with the cost of more banal architectures]

Regardless, most (all?) modern processors rely on cache.
And, this has a similar impact, there. (not this specific example, as each
location is only referenced once -- though if byte-sized entities,
a cache line can store many... only if the others are actually used!)

If you touch one cache line, then another, then another, etc. you will
eventually lose the benefits of those touched lines, rendering the cache
effectively useless... in exactly the sorts of cases where it can be
of most use!

It also plays a big role in how you organize data structures
for a similar reason. What may *seem* like an intuitive
organization can be really counterproductive in terms of
runtime performance.

E.g., I wire down the pages that my kernel occupies so there are
no faults, there. But, have to be very methodical about how I
lay out data structures to ensure I can benefit from locality of
reference.
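
[A concrete example of that organizing-for-locality point -- a sketch with
made-up structures: if a hot loop only touches one field, splitting the
structure keeps every fetched cache line full of useful data.]

#include <stddef.h>

#define N 100000

/* Array-of-structures: each 'active' flag drags ~60 bytes of cold data
   into the cache line alongside it.                                     */
struct conn_aos { int active; char name[32]; long bytes_in, bytes_out, t_last; };
struct conn_aos table_aos[N];

/* Structure-of-arrays: the scan below walks a dense array of ints, so
   every cache line it pulls in is nothing but 'active' flags.           */
struct conn_soa {
    int  active[N];
    char name[N][32];
    long bytes_in[N], bytes_out[N], t_last[N];
} table_soa;

size_t count_active(void)
{
    size_t n = 0;
    for (size_t i = 0; i < N; i++)
        n += (table_soa.active[i] != 0);
    return n;
}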

The difference is noticeable, if the processor keeps discarding
cache contents and taking a \"long trip\" to memory!

[Of course, if you "buy" someone else's solution, you can HOPE
they took these same actions...]
 
On Wed, 28 Jun 2023 02:40:23 +0300, upsidedown@downunder.com wrote:

The working set size limits how big arrays can be readily transposed,
that is how many \"dirty\" pages can be in physical memory at one time.

Another example of virtual memory misuse.

In most MS Windows file programming examples the file is opened, the
size is determined, a virtual memory table as big as the file is
created and then the file is read into the table in one operation,
before any processing is done to the file contents.

This works well as long as the file is much smaller than the working
set (and ultimately the physical memory size).

Things get ugly when the file is multiple times the working set
size. Determining the size of the file and allocating it in virtual memory
works OK regardless of file size. However, starting to read into the table
will cause those pages to become "dirty". When the working set becomes
full with dirty pages, those dirty pages must be written to the page
file.

Only after this can more pages be loaded from the input file. If the
input file is on the same disk as the page file, the disk head
constantly needs to jump between these two files (seek time). This is
done multiple times with a few pages at a time.

Ultimately the whole file has been copied into page file and is read
from the page file when actual file processing starts.

Decades ago I observed a simple video display program that took half
an hour just to load a huge file into the page file :).

A better way of handling big files is memory mapped files. Open the
memory mapped file and you get a virtual address of the table.
Referencing a byte in the table will load that page into physical memory
from the input file and the contents can be used. For input files,
such pages are clean, thus there is no need to write the page into the page
file. If the working set becomes full, simply discard such pages and load
new pages into physical memory. If the original page is needed again,
it is reloaded from the input file.

If the application only needs parts of the data in the file, it can
load only those pages, reducing disk I/O. Compare this to the MS
method, in which the file is fully loaded from input file (and written
back to page file :).
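
[On the POSIX side that approach is only a few lines -- a sketch with error
handling trimmed; the Windows analogue is CreateFileMapping/MapViewOfFile:]

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

long count_newlines(const char *path)
{
    int fd = open(path, O_RDONLY);
    struct stat st;
    fstat(fd, &st);

    /* Nothing is read yet: pages fault in from 'path' only when touched,
       stay clean (PROT_READ), and can simply be discarded under pressure. */
    const char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);

    long n = 0;
    for (off_t i = 0; i < st.st_size; i++)
        n += (p[i] == '\n');

    munmap((void *)p, st.st_size);
    close(fd);
    return n;
}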
 
On 6/28/2023 4:37 AM, upsidedown@downunder.com wrote:
On Wed, 28 Jun 2023 02:40:23 +0300, upsidedown@downunder.com wrote:

The working set size limits how big arrays can be readily transposed,
that is how many \"dirty\" pages can be in physical memory at one time.

Another example of virtual memory misuse.

In most MS Windows file programming examples the file is opened, the
size is determined, a virtual memory table as big as the file is
created, and then the file is read into the table in one operation, before any processing is done to the file contents.

That's just plain stupid. Is that "current practice"?

I create a "memory object" for each such object and associate
a "pager" of an appropriate type to manage it. Then, let
the user's accesses fault pages in -- via the associated
pager -- so only the portions of the file that the user
"touches" ever appear as physical memory.

In addition to letting me define how I want the faults *in*
that object to be handled, it also lets me put resource limits
on it. So, it can't arbitrarily consume physical memory
beyond what I've decided is appropriate for it. (if it
exceeds that limit, then dirtied pages are paged out to
the backing store)

This works well as long as the file is much smaller than the working
set (and ultimately the physical memory size).

Things get ugly when the file is multiple times the working set
size. Determining the size of the file and allocating it in virtual memory
works OK regardless of file size.

But you still need physical memory for the TLBs?

However, starting to read into the table
will cause those pages to become "dirty". When the working set becomes
full with dirty pages, those dirty pages must be written to the page
file.

Only after this can more pages be loaded from the input file. If the
input file is on the same disk as the page file, the disk head
constantly needs to jump between these two files (seek time). This is
done multiple times with a few pages at a time.

Ultimately the whole file has been copied into page file and is read
from the page file when actual file processing starts.

Decades ago I observed a simple video display program that took half
an hour just to load a huge file into the page file :).

A better way of handling big files is memory mapped files. Open the
memory mapped file and you get a virtual address of the table.
Referencing a byte in the table will load that page into physical memory
from the input file and the contents can be used. For input files,

Yes.

> such pages are clean,

...if they have been scrubbed of their past contents (else they can leak
those contents)

thus no need to write the page into page file.
If the working set becomes full, simply discard such pages and load
new pages into physical memory.

... but only if the \"read\" page (portion of file) hasn\'t been altered.

If the original page is needed again,
it is reloaded from the input file.

If the application only needs parts of the data in the file, it can
load only those pages, reducing disk I/O. Compare this to the MS
method, in which the file is fully loaded from input file (and written
back to page file :).
 
On Tuesday, June 27, 2023 at 10:50:13 AM UTC-4, John Larkin wrote:
On Tue, 27 Jun 2023 09:58:23 +0100, Martin Brown
snip

Are there any parallel-execution computer languages, like Verilog?
There have been a number of them developed at universities. Few have seen mainstream acceptance.
OCCAM is a programming language based on Hoare's Communicating Sequential Processes (CSP) - a delightful read with an abundance of logic and mathematical proofs.
The target machine was the transputer (INMOS) - which, when I had to learn it (and subsequently do tutorials on it), was a weird beast when compared to the PDP11's, MC68xxx and 8086 platforms of the day.
OCCAM lives on as OCCAM3 and occam-pi but I have no clue where its usage is at this point.

If one is citing Verilog, an event-driven hardware description language, VHDL can be included.
 
On Thu, 29 Jun 2023 13:41:17 -0700 (PDT), three_jeeps
<jjhudak@gmail.com> wrote:

On Tuesday, June 27, 2023 at 10:50:13?AM UTC-4, John Larkin wrote:
On Tue, 27 Jun 2023 09:58:23 +0100, Martin Brown
snip

Are there any parallel-execution computer languages, like Verilog?
There have been a number of them developed at universities. Few have seen mainstream acceptance.
OCCAM is a programming language based on Hoare's Communicating Sequential Processes (CSP) - a delightful read with an abundance of logic and mathematical proofs.

I'm just an electronics designer. I want bare-metal C programs and the
proof is that it works.


The target machine was the INMOS transputer, which, when I had to learn it (and subsequently give tutorials on it), was a weird beast compared to the PDP-11, MC68xxx and 8086 platforms of the day.
OCCAM lives on as OCCAM 3 and OCCAM-pi, but I have no clue where its usage stands at this point.

If one is citing Verilog, an event-driven hardware description language, VHDL can be included.

My guys prefer VHDL, but sometimes inherit some Verilog from somewhere
and do mixed projects. That seems to work.
 
On a sunny day (Mon, 3 Jul 2023 03:45:00 -0700 (PDT)) it happened Phil Allison
<pallison49@gmail.com> wrote in
<dbfd6633-167a-4461-bb03-5d52c01bd5b0n@googlegroups.com>:

John Larkin wrote:
----------------------------

Basically, if you buy two equal-power-rated transformers, one sold as
120:240 and one sold as 240:120, they are the same transformer.

** But not exactly - see below.

You may have to seriously de-rate the transformer in order to use it that way;
mains-frequency transformers under 100 VA are the most affected.

A 100 VA transformer is happy moving 100 VA in either direction.

** JL has such simple faith in overly simple models.

I wonder why ?

If you are concerned about reversing a transformer, just try it.


** LOL - of *course* I have and that is why I know about the pitfalls.

I haven\'t heard any real pitfalls so far. What problems did you have?

** Same as everyone else\'s !!

When the roles are reversed, the winding that was the 240 VAC supply winding delivers less voltage under load.
The shortfall is about twice the regulation percentage, so 20 to 30 % less in some cases.
Magnetising current (previously easily supplied by the mains) goes up by the turns ratio, maybe 40 times,
in a low-voltage winding.

Applying more than the rated voltage to the secondary in order to fix this results in excessive current and
overheating of the transformer.

This all derives from how transformer makers engineer *real* transformers and rely on specifying which winding
is the primary, etc., so that all specs are met.
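
A rough worked example of the "about twice the regulation" point
above, with assumed figures (a 230 V : 12 V mains transformer with
10 % regulation, run in reverse as a step-up); the numbers are
illustrative, not measurements from anyone's transformer:

#include <stdio.h>

int main(void)
{
    /* Assumed figures: 230 V : 12 V transformer, 10 % regulation,
       used in reverse as a step-up.  */
    double v_pri_rated = 230.0;   /* winding meant to see the mains       */
    double v_sec_rated = 12.0;    /* rated secondary voltage AT FULL LOAD */
    double regulation  = 0.10;    /* no-load to full-load droop           */

    /* Makers pick the turns ratio so the secondary hits 12 V at full
       load, so off load it sits about 10 % high.  */
    double v_sec_noload = v_sec_rated * (1.0 + regulation);    /* ~13.2 V */
    double turns_ratio  = v_pri_rated / v_sec_noload;          /* ~17.4   */

    /* Reverse it: drive the 12 V winding from a 12 V source...  */
    double v_out_noload = v_sec_rated * turns_ratio;           /* ~209 V  */
    /* ...and under full load it droops by roughly the regulation again. */
    double v_out_load   = v_out_noload * (1.0 - regulation);   /* ~188 V  */

    printf("reversed output: %.0f V no load, %.0f V full load\n",
           v_out_noload, v_out_load);
    printf("shortfall vs. %.0f V: %.0f %%\n",
           v_pri_rated, 100.0 * (1.0 - v_out_load / v_pri_rated));
    return 0;
}

With those assumed numbers the reversed transformer delivers roughly
188 V instead of 230 V at full load, an 18 % shortfall, i.e. about
twice the 10 % regulation.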


That\'s the part I don\'t see. \"Primary\" is an application decision, or
a data sheet convenience.

** It is far more than that, but you will never admit it.

I have wound maybe hundreds of transformers...
reverse use should be no problem with these.
Turns ratio rules.
I also use transformers a lot for things they were not intended for:
here as an HV generator for a Geiger tube, a standard 1:10 audio transformer making 400 V HV:
https://panteltje.nl/panteltje/pic/gm_pic2/
https://panteltje.nl/pub/conrad_audio_transformer_second_resonance_img_3085.jpg
been on 24/7 now for nine years...
more HV:
https://panteltje.nl/pub/home_made_1_to_33_hv_transformer_img_3096.jpg
https://panteltje.nl/pub/new_transformer_test_setup_img_3153.jpg
https://panteltje.nl/pub/ultrasonic_anti_fouling_test_transformer_IMG_5142.JPG
https://panteltje.nl/pub/ultra_sonic_anti_fouling_circuit_diagram_0.6_IMG_5163.JPG

Resonances:
https://panteltje.nl/pub/drone_power_small_core_test_IMG_6114.JPG
This is actually also a transformer, tuned at that:
https://panteltje.nl/pub/testing_the_20_meter_inductive_loop_antenna_IMG_4536.JPG
https://panteltje.nl/pub/testing_the_20_meter_inductive_loop_antenna_dunno_IMG_4537.JPG

Most RF stuff contains transformers, often tuned.
And power, RF heating:
https://panteltje.nl/pub/melting_solder_in_an_metal_olive_bottle_cap_IMG_5191.JPG
https://panteltje.nl/pub/crucible_with_molten_solder_IMG_5439.JPG

Remember winding one for my all-transistor TV HV...

Never a problem.
Some people are afraid of inductors and transformers.
I know that fear from my first job; among other things we made transformers for power stations,
so big you needed a ladder to climb on them.
Big test room, safety lock, big insulators... many kV.
Almost got killed in that job, on a Navy vessel flight deck adjusting a transductor.
TV studio was more fun,
more transformers there than you can imagine, audio, video, tape, film, all synchronous...
motors...
So: turns, capacitance, L, C, flyback, saturation, core material, it's fun.
Without transformers there is so little you can do.
It's easy! Just wind them ;-) drive them up, rawhide!
 
On Mon, 03 Jul 2023 11:53:40 GMT, Jan Panteltje <alien@comet.invalid>
wrote:

snip

We make our own transmission-line transformers and, rarely, an exotic
power inductor.

This is easy:

https://www.dropbox.com/s/pmecggbi463ipes/TX_1.jpg?dl=0

Just buy the windings already made. Sub-nanosecond edges and 50-ohm
matched.
 
On a sunny day (Mon, 03 Jul 2023 08:31:06 -0700) it happened John Larkin
<jlarkin@highlandSNIPMEtechnology.com> wrote in
<45q5ai9gs9caoh3e823eq7c1r9q3mp9ef8@4ax.com>:

snip

We make our own transmission-line transformers and, rarely, an exotic
power inductor.

This is easy:

https://www.dropbox.com/s/pmecggbi463ipes/TX_1.jpg?dl=0

Just buy the windings already made. Sub-nanosecond edges and 50-ohm
matched.

Yes I noticed those in your postings before.
How reliable are those connectors?
I had some problems with those on one board.
I like potcores :)
This was fun too:
https://panteltje.nl/pub/PMT_HV_supply_with_regulator_img_3175.jpg
https://panteltje.nl/pub/PMT_HV_supply_componet_side_img_3180.jpg
https://panteltje.nl/pub/PMT_regulated_power_supply_diagram_img_3182.jpg
 
