ping john Larkin Raspberry Pi Pico gets BASIC interpreter...

Jan Panteltje
Raspberry Pi Pico Gets Basic Interpreter Called PiccoloBASIC
https://www.tomshardware.com/news/raspberry-pi-pico-basic-interpreter-piccolobasic
have not tried it...
 
On Sun, 25 Jun 2023 08:24:14 GMT, Jan Panteltje <alien@comet.invalid>
wrote:

Raspberry Pi Pico Gets Basic Interpreter Called PiccoloBASIC
https://www.tomshardware.com/news/raspberry-pi-pico-basic-interpreter-piccolobasic
have not tried it...

Pi is taking over the world. The Foundation is planning to ship a
million units a month to squash the scalpers.

A Basic interpreter would probably be big and slow. Piccolo sounds too
primitive to be useful. We'll program our Picos in bare-metal c.

The Python runtime is huge too (about half the available flash) and
probably slow too. Maybe that's not too bad if it runs on one core and
the realtime stuff in c, on the other.

I still use PowerBasic on PCs, for engineering calcs.
 
On 25/06/2023 11:59, John Larkin wrote:
On Sun, 25 Jun 2023 08:24:14 GMT, Jan Panteltje <alien@comet.invalid
wrote:


Raspberry Pi Pico Gets Basic Interpreter Called PiccoloBASIC
https://www.tomshardware.com/news/raspberry-pi-pico-basic-interpreter-piccolobasic
have not tried it...

Pi is taking over the world. The Foundation is planning to ship a
million units a month to squash the scalpers.

A Basic interpreter would probably be big and slow. Piccolo sounds too
primitive to be useful. We'll program our Picos in bare-metal c.

The pedigree of some of the people involved hints that it might be quite
fast and small for an interpreter. The 6502-based BBC micro & its BASIC
interpreter that inspired ARM to design their first CPU was no slouch.
The Python runtime is huge too (about half the available flash) and
probably slow too. Maybe that's not too bad if it runs on one core and
the realtime stuff in c, on the other.

Python is huge but also portable across many platforms.
It is handy if you need huge numbers or very extended precision.

> I still use PowerBasic on PCs, for engineering calcs.

So you keep telling us.

Intel's 2023 compiler is about the best of the bunch now. I confess that
I still mostly use the MickeySoft offering and its IDE though.

GCC can be slow and sometimes inaccurate for some important floating
point stuff. MS has problems there too. Almost none of the major C
compilers has a fully accurate cbrt() library function. The only ones
that do use Sun's implementation of Kahan's algorithm (BSD comes a
distant second) and GCC doesn't even make the running.

Greedy optimisers will swap sin, cos for sincos without a second thought
but sin(x), 1-2*sin(x/2)^2 can actually run faster in some instances.
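For reference, the identity in play there is cos(x) = 1 - 2*sin(x/2)^2. A minimal C sketch of the idea (illustrative only, not Martin's actual code):

#include <math.h>

/* Get sin(x) and cos(x) from two sin() calls, via the half-angle
   identity cos(x) = 1 - 2*sin(x/2)^2, instead of calling sincos(). */
static void sin_cos_pair(double x, double *s, double *c)
{
    double h = sin(0.5 * x);    /* sin(x/2) */
    *s = sin(x);
    *c = 1.0 - 2.0 * h * h;
}

Whether this beats a library sincos() depends on the platform, which is the point being made above.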


--
Martin Brown
 
On a sunny day (Sun, 25 Jun 2023 03:59:08 -0700) it happened John Larkin
<jlarkin@highlandSNIPMEtechnology.com> wrote in
<iq6g9i1thp3gs61bsvnst6k2vhojgddjti@4ax.com>:

On Sun, 25 Jun 2023 08:24:14 GMT, Jan Panteltje <alien@comet.invalid
wrote:


Raspberry Pi Pico Gets Basic Interpreter Called PiccoloBASIC
https://www.tomshardware.com/news/raspberry-pi-pico-basic-interpreter-piccolobasic
have not tried it...

Pi is taking over the world. The Foundation is planning to ship a
million units a month to squash the scalpers.

A Basic interpreter would probably be big and slow. Piccolo sounds too
primitive to be useful. We'll program our Picos in bare-metal c.

The Python runtime is huge too (about half the available flash) and
probably slow too. Maybe that's not too bad if it runs on one core and
the realtime stuff in c, on the other.

I still use PowerBasic on PCs, for engineering calcs.

Yes, C is nice on Raspberries.
As is apple pie with cream I just had :)

But I did some nice thing in 8052AH BASIC long ago, eighties:
https://panteltje.nl/pub/8052AH_BASIC_computer/8052AH_BASIC_computer_inside2_img_1757.jpg
controlled much in the house in BASIC. i2c code in BASIC controlled sensor chips.
Still working, but not used now.
Things get ever smaller and simpler.
 
On 6/25/2023 4:13 AM, Martin Brown wrote:
On 25/06/2023 11:59, John Larkin wrote:
On Sun, 25 Jun 2023 08:24:14 GMT, Jan Panteltje <alien@comet.invalid
wrote:


Raspberry Pi Pico Gets Basic Interpreter Called PiccoloBASIC
https://www.tomshardware.com/news/raspberry-pi-pico-basic-interpreter-piccolobasic
have not tried it...

Pi is taking over the world. The Foundation is planning to ship a
million units a month to squash the scalpers.

A Basic interpreter would probably be big and slow. Piccolo sounds too
primitive to be useful. We'll program our Picos in bare-metal c.

The pedigree of some of the people involved hints that it might be quite fast
and small for an interpreter. The 6502-based BBC micro & its BASIC interpreter
that inspired ARM to design their first CPU was no slouch.

If you do the lexical analysis/parsing ahead of time, you
can make big steps towards truly compiled code -- esp
on a language as banal as BASIC. I had a multitasking
BASIC pseudo-compiler running entirely *in* a 647180X
using a similar trick (also, to cut down on the memory
required to store the "source" as the 7180X only has
500? bytes of RAM)
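A rough C sketch of that pre-tokenising idea (my own illustration, not the actual 647180X code): keywords get crunched to one-byte tokens when a line is stored, so the interpreter's inner loop dispatches on bytes instead of re-parsing text, and the stored program shrinks at the same time.

#include <string.h>
#include <stddef.h>

/* Hypothetical one-byte token values, >= 0x80 so they cannot collide
   with plain ASCII program text. */
enum { TOK_PRINT = 0x80, TOK_GOTO, TOK_IF, TOK_THEN, TOK_LET };

static const struct { const char *kw; unsigned char tok; } keywords[] = {
    { "PRINT", TOK_PRINT }, { "GOTO", TOK_GOTO },
    { "IF", TOK_IF }, { "THEN", TOK_THEN }, { "LET", TOK_LET },
};

/* Crunch one source line into out[]: keywords become single tokens,
   everything else is copied through.  Returns the stored length. */
size_t tokenize_line(const char *src, unsigned char *out)
{
    size_t n = 0;
    while (*src) {
        int matched = 0;
        for (size_t k = 0; k < sizeof keywords / sizeof keywords[0]; k++) {
            size_t len = strlen(keywords[k].kw);
            if (strncmp(src, keywords[k].kw, len) == 0) {
                out[n++] = keywords[k].tok;   /* one byte instead of the keyword */
                src += len;
                matched = 1;
                break;
            }
        }
        if (!matched)
            out[n++] = (unsigned char)*src++; /* names, numbers, strings pass through */
    }
    out[n] = '\0';
    return n;
}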

Python is huge but also portable across many platforms.
It is handy if you need huge numbers or very extended precision.

And if you want to cull recent engineering grads who've been
taught /la langue du jour/. Sadly, each newer generation of languages
seems to rob its practitioners of ever-increasing amounts of
knowledge of the underlying machine.

And, "prepackaged boards" just exacerbate that -- do you have
anyone on hand who can patch the binary in such a "module"?
 
On 25/06/2023 13:05, Jan Panteltje wrote:
On a sunny day (Sun, 25 Jun 2023 03:59:08 -0700) it happened John Larkin
jlarkin@highlandSNIPMEtechnology.com> wrote in
iq6g9i1thp3gs61bsvnst6k2vhojgddjti@4ax.com>:

On Sun, 25 Jun 2023 08:24:14 GMT, Jan Panteltje <alien@comet.invalid
wrote:


Raspberry Pi Pico Gets Basic Interpreter Called PiccoloBASIC
https://www.tomshardware.com/news/raspberry-pi-pico-basic-interpreter-piccolobasic
have not tried it...

Pi is taking over the world. The Foundation is planning to ship a
million units a month to squash the scalpers.

A Basic interpreter would probably be big and slow. Piccolo sounds too
primitive to be useful. We'll program our Picos in bare-metal c.

The Python runtime is huge too (about half the available flash) and
probably slow too. Maybe that's not too bad if it runs on one core and
the realtime stuff in c, on the other.

I still use PowerBasic on PCs, for engineering calcs.

Yes, C is nice on Raspberries.
As is apple pie with cream I just had :)

But I did some nice thing in 8052AH BASIC long ago, eighties:
https://panteltje.nl/pub/8052AH_BASIC_computer/8052AH_BASIC_computer_inside2_img_1757.jpg
controlled much in the house in BASIC. i2c code in BASIC controlled sensor chips.
Still working, but not used now.
Things get ever smaller and simpler.
Could have done with that in the NSC800 hand-held terminal I designed
back in the early 80s... First all-CMOS hand-held terminal?


 
On Sun, 25 Jun 2023 12:13:07 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 25/06/2023 11:59, John Larkin wrote:
On Sun, 25 Jun 2023 08:24:14 GMT, Jan Panteltje <alien@comet.invalid
wrote:


Raspberry Pi Pico Gets Basic Interpreter Called PiccoloBASIC
https://www.tomshardware.com/news/raspberry-pi-pico-basic-interpreter-piccolobasic
have not tried it...

Pi is taking over the world. The Foundation is planning to ship a
million units a month to squash the scalpers.

A Basic interpreter would probably be big and slow. Piccolo sounds too
primitive to be useful. We'll program our Picos in bare-metal c.

The pedigree of some of the people involved hints that it might be quite
fast and small for an interpreter. The 6502-based BBC micro & its BASIC
interpreter that inspired ARM to design their first CPU was no slouch.

The Python runtime is huge too (about half the available flash) and
probably slow too. Maybe that's not too bad if it runs on one core and
the realtime stuff in c, on the other.

Python is huge but also portable across many platforms.
It is handy if you need huge numbers or very extended precision.

I still use PowerBasic on PCs, for engineering calcs.

So you keep telling us.

Intel's 2023 compiler is about the best of the bunch now. I confess that
I still mostly use the MickeySoft offering and its IDE though.

GCC can be slow and sometimes inaccurate for some important floating
point stuff. MS has problems there too. Almost none of the major C
compilers has a fully accurate cbrt() library function. The only ones
that do use Sun's implementation of Kahan's algorithm (BSD comes a
distant second) and GCC doesn't even make the running.

I don't need that sort of thing to compute resistor values. I rarely
need integers past 64 bits or floats past 80.

Greedy optimisers will swap sin, cos for sincos without a second thought
but sin(x), 1-2*sin(x/2)^2 can actually run faster in some instances.

Python looks a lot like Basic to me, but with some unnecessary
weirdnesses and a giant runtime overhead. Big and slow.

Computing languages are fad driven, and that drives good things out of
circulation, a sort of Gresham's Law of computing.
 
On a sunny day (Sun, 25 Jun 2023 08:12:54 -0700) it happened John Larkin
<jlarkin@highlandSNIPMEtechnology.com> wrote in
<nslg9itjkhvja1esabgve9198vhal551fe@4ax.com>:

Computing languages are fad driven, and that drives good things out of
circulation, a sort of Gresham's Law of computing.

Yes, education should start at the hardware level...

OTOH I was surprised by this:
BirdNET-Pi
recognizes bird sounds...
https://www.tomshardware.com/news/raspberry-pi-bird-calls-with-birdnet-pi
https://github.com/mcguirepr89/BirdNET-Pi (downloaded 25-6-2023)


Downloaded it, not tried yet...
I remember you talking about, what was it, alligator noises? Messing up some stuff long ago.
But there is a lot of power in a Raspberry Pi; that would have taken a lot of computing power not so long ago.
 
On 25/06/2023 16:12, John Larkin wrote:
On Sun, 25 Jun 2023 12:13:07 +0100, Martin Brown
'''newspam'''@nonad.co.uk> wrote:

Intel's 2023 compiler is about the best of the bunch now. I confess that
I still mostly use the MickeySoft offering and its IDE though.

GCC can be slow and sometimes inaccurate for some important floating
point stuff. MS has problems there too. Almost none of the major C
compilers has a fully accurate cbrt() library function. The only ones
that do use Sun's implementation of Kahan's algorithm (BSD comes a
distant second) and GCC doesn't even make the running.

I don't need that sort of thing to compute resistor values. I rarely
need integers past 64 bits or floats past 80.

On this we are agreed (although I do sometimes need quad FP precision to
verify double precision calculations and feed into Remez optimisation).
It was very annoying to me when MS discontinued 80bit FP support.

Greedy optimisers will swap sin, cos for sincos without a second thought
but sin(x), 1-2*sin(x/2)^2 can actually run faster in some instances.

Python looks a lot like Basic to me, but with some unnecessary
weirdnesses and a giant runtime overhead. Big and slow.

Most computer languages look somewhat like Basic apart from APL & Forth.
(and a few exotic modern CompSci languages like Haskell)
Computing languages are fad driven, and that drives good things out of
circulation, a sort of Gresham's Law of computing.

I don't think that is true at all. The older computer languages were
limited by computing power and hardware available at the time. Modern
languages harness the huge computing resources available today to take
some of the tedious grunt work out of coding and detecting errors.

There are always magic silver bullet salesmen offering the next greatest
thing since sliced bread that will halve your development costs. You
just have to take everything they say with a *big* pinch of salt and
count your fingers before and after every encounter.

--
Martin Brown
 
On 6/26/2023 2:45 AM, Martin Brown wrote:

Most computer languages look somewhat like Basic apart from APL & Forth.
(and a few exotic modern CompSci languages like Haskell)

I guess that depends on how you define "like".

Coding in any of the LISP dialects is likely a rude awakening for
the uninitiated. Ladder logic?

Much of the similarity is a consequence (IMO) of the serial
way that humans tend to think -- esp when it comes to algorithms...
it's almost always a set of *steps* instead of a network.

Computing languages are fad driven, and that drives good things out of
circulation, a sort of Gresham's Law of computing.

I don't think that is true at all. The older computer languages were limited by
computing power and hardware available at the time. Modern languages harness
the huge computing resources available today to take some of the tedious grunt
work out of coding and detecting errors.

I think the BSPs, HALs, OSs, etc. are more guilty of that. Folks don't code
on bare metal anymore -- just as they don't put a CPU on a schematic any
longer. They are "sold" the notion that they can treat this API as
a well-defined abstraction -- without ever defining the abstraction well! :>
They don't know what their implementations "cost" or how to even *measure*
performance -- because they don't know what's involved.

OTOH, a lot of "coding" is taught targeting folks who will be building
web pages or web apps where there is no concern for resource management
(it works or it doesn't).

How often do you see devices coded in Java?

There are always magic silver bullet salesmen offering the next greatest thing
since sliced bread that will halve your development costs. You just have to
take everything they say with a *big* pinch of salt and count your fingers
before and after every encounter.
 
On Sun, 25 Jun 2023 12:13:07 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

GCC can be slow and sometimes inaccurate for some important floating
point stuff. MS has problems there too. Almost none of the major C
compilers has a fully accurate cbrt() library function. The only ones
that do use Sun's implementation of Kahan's algorithm (BSD comes a
distant second) and GCC doesn't even make the running.

These days, with a lot of memory usually available, it shouldn't be too
hard to get the required accuracy in a reasonable time. Many functions are
implemented by first reducing the argument range into small segments
and then approximating each small segment with a low-degree polynomial.

To take the cube root:
- handle the sign
- slightly denormalize so that the exponent can be divided by 3
- divide the signed exponent by 3 to get the signed exponent of the result
- the mantissa is now between 1 and 8
- split that argument range into e.g. 8 sub-ranges, pick the polynomial for
the sub-range the mantissa falls in, and evaluate it with coefficients
fitted to approximate the function there.

By splitting the argument range (1..8) into smaller sub-ranges (say 64 or
256), only a lower-order polynomial is needed and the calculation is faster.

Using too-small argument ranges will increase the coefficient table
size, causing more cache misses and page faults. Page faults especially
are costly, so try to keep the coefficient table within one or
a few pages, otherwise the advantage of the lower polynomial degree is lost
to page-fault overhead.
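A rough C sketch of that range-reduction recipe (purely illustrative, nobody's library code; the linear starting guess and its 0.857/0.143 coefficients are my own crude stand-in for a real table of per-sub-range fitted polynomials):

#include <math.h>

double cbrt_sketch(double x)
{
    if (x == 0.0 || isnan(x) || isinf(x))
        return x;                       /* 0, NaN and +/-inf map to themselves */

    double sign = (x < 0.0) ? -1.0 : 1.0;
    x = fabs(x);

    int e;
    double m = frexp(x, &e);            /* x = m * 2^e, m in [0.5, 1) */
    m *= 2.0;                           /* shift to m in [1, 2) ... */
    e -= 1;
    while (e % 3 != 0) {                /* ... and make the exponent a multiple of 3 */
        m *= 2.0;
        e -= 1;
    }
    /* m is now in [1, 8): the "mantissa between 1 and 8" step above. */

    double y = 0.857 + 0.143 * m;       /* crude guess for m^(1/3), about 10% off */
    for (int i = 0; i < 4; i++)         /* Newton: each step roughly squares the error */
        y -= (y * y * y - m) / (3.0 * y * y);

    return sign * ldexp(y, e / 3);      /* reattach sign and the divided exponent */
}

This lands within a few ulps; getting the last bit right, the point of the cbrt discussion above, is the hard part.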
 
On Mon, 26 Jun 2023 10:45:13 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 25/06/2023 16:12, John Larkin wrote:
On Sun, 25 Jun 2023 12:13:07 +0100, Martin Brown
'''newspam'''@nonad.co.uk> wrote:

Intel's 2023 compiler is about the best of the bunch now. I confess that
I still mostly use the MickeySoft offering and its IDE though.

GCC can be slow and sometimes inaccurate for some important floating
point stuff. MS has problems there too. Almost none of the major C
compilers has a fully accurate cbrt() library function. The only ones
that do use Sun's implementation of Kahan's algorithm (BSD comes a
distant second) and GCC doesn't even make the running.

I don't need that sort of thing to compute resistor values. I rarely
need integers past 64 bits or floats past 80.

On this we are agreed (although I do sometimes need quad FP precision to
verify double precision calculations and feed into Remez optimisation).
It was very annoying to me when MS discontinued 80bit FP support.

PowerBasic has an 80 bit float variable type, but I've never needed to
use it. It also has 64-bit signed and unsigned ints, ditto. Some
integer-ratio frequency synth math needs long stuff, but that\'s rare
nowadays.

Greedy optimisers will swap sin, cos for sincos without a second thought
but sin(x), 1-2*sin(x/2)^2 can actually run faster in some instances.

Python looks a lot like Basic to me, but with some unnecessary
weirdnesses and a giant runtime overhead. Big and slow.

Most computer languages look somewhat like Basic apart from APL & Forth.
(and a few exotic modern CompSci languages like Haskell)

Computing languages are fad driven, and that drives good things out of
circulation, a sort of Gresham's Law of computing.

I don't think that is true at all. The older computer languages were
limited by computing power and hardware available at the time. Modern
languages harness the huge computing resources available today to take
some of the tedious grunt work out of coding and detecting errors.

If you ask google how many programming languages there are, the
answers range from 50 to 500 to 7000. There is a web site that
actually lists dead computer languages and variants and claims 6000.

People love to invent computer languages. And kill them.

What's interesting is that Python is mostly English words, readable
like Basic, and Rust looks like a cat walking on the upper rows of a
keyboard. Two different mindsets at work.

Some people adore abstraction and some people hate it.

c is 51 years old now.
 
On Mon, 26 Jun 2023 04:08:42 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:

On 6/26/2023 2:45 AM, Martin Brown wrote:

Most computer languages look somewhat like Basic apart from APL & Forth.
(and a few exotic modern CompSci languages like Haskell)

I guess that depends on how you define "like".

Coding in any of the LISP dialects is likely a rude awakening for
the uninitiated. Ladder logic?

Much of the similarity is a consequence (IMO) of the serial
way that humans tend to think -- esp when it comes to algorithms...
it's almost always a set of *steps* instead of a network.

Computer programming is almost always procedural. When parallel things
need to be done, it's usually broken into threads or processes with
semaphores, locks, blocks, interrupts, flags, FIFOs, things like that.
Most programmers never use state machines.

FPGA design is done in synchronous clocked logic in nonprocedural
languages; everything happens everywhere all at once. Crossing a clock
boundary is recognized as something to avoid or handle very carefully.
Computer programming is a lot like old-style hairball async logic
design and has correspondingly many bugs.



Computing languages are fad driven, and that drives good things out of
circulation, a sort of Gresham's Law of computing.

I don't think that is true at all. The older computer languages were limited by
computing power and hardware available at the time. Modern languages harness
the huge computing resources available today to take some of the tedious grunt
work out of coding and detecting errors.

I think the BSPs, HALs, OSs, etc. are more guilty of that. Folks don't code
on bare metal anymore -- just as they don't put a CPU on a schematic any
longer. They are "sold" the notion that they can treat this API as
a well-defined abstraction -- without ever defining the abstraction well! :>
They don't know what their implementations "cost" or how to even *measure*
performance -- because they don't know what's involved.

OTOH, a lot of "coding" is taught targeting folks who will be building
web pages or web apps where there is no concern for resource management
(it works or it doesn't).

Coding has no theory, no math, and usually little testing. Comments
are rare and usually illiterate. Bugs are normal because we can always
fix them in the next weekly or daily release. Billion dollar projects
literally crash from dumb bugs. We are in the Dark Ages of
programming.

Who said \"Anybody can learn to code\" ?
 
On a sunny day (Mon, 26 Jun 2023 06:51:27 -0700) it happened John Larkin
<jlarkin@highlandSNIPMEtechnology.com> wrote in
<885j9il2ri1dq8gosm5rr3fs5j1ffb8oti@4ax.com>:

On Mon, 26 Jun 2023 04:08:42 -0700, Don Y
blockedofcourse@foo.invalid> wrote:

On 6/26/2023 2:45 AM, Martin Brown wrote:

Most computer languages look somewhat like Basic apart from APL & Forth.
(and a few exotic modern CompSci languages like Haskell)

I guess that depends on how you define "like".

Coding in any of the LISP dialects is likely a rude awakening for
the uninitiated. Ladder logic?

Much of the similarity is a consequence (IMO) of the serial
way that humans tend to think -- esp when it comes to algorithms...
it's almost always a set of *steps* instead of a network.

Computer programming is almost always procedural. When parallel things
need to be done, it's usually broken into threads or processes with
semaphores, locks, blocks, interrupts, flags, FIFOs, things like that.
Most programmers never use state machines.

FPGA design is done in synchronous clocked logic in nonprocedural
languages; everything happens everywhere all at once. Crossing a clock
boundary is recognized as something to avoid or handle very carefully.
Computer programming is a lot like old-style hairball async logic
design and has correspondingly many bugs.




Computing languages are fad driven, and that drives good things out of
circulation, a sort of Gresham's Law of computing.

I don't think that is true at all. The older computer languages were limited by
computing power and hardware available at the time. Modern languages harness
the huge computing resources available today to take some of the tedious grunt
work out of coding and detecting errors.

I think the BSPs, HALs, OSs, etc. are more guilty of that. Folks don't code
on bare metal anymore -- just as they don't put a CPU on a schematic any
longer. They are "sold" the notion that they can treat this API as
a well-defined abstraction -- without ever defining the abstraction well! :>
They don't know what their implementations "cost" or how to even *measure*
performance -- because they don't know what's involved.

OTOH, a lot of "coding" is taught targeting folks who will be building
web pages or web apps where there is no concern for resource management
(it works or it doesn't).

Coding has no theory, no math, and usually little testing. Comments
are rare and usually illiterate. Bugs are normal because we can always
fix them in the next weekly or daily release. Billion dollar projects
literally crash from dumb bugs. We are in the Dark Ages of
programming.

Who said \"Anybody can learn to code\" ?

I like to code in ASM, and it is all very logical and close to the hardware (PIC asm for example)
But you must know about the hardware.
Very few bugs arize..
But if you look at current high level languages there is a layer of bloat
isolating you from the hardware, it is a totally different thing.
How many bytes does it take to say \'Hello World?\'
And testing and debugging, I use the serial port for PICs or a terminal in C.
It is simple.

As to all that math, for normal human things integer math will do just fine.
Look at scope_pic
https://panteltje.nl/panteltje/pic/scope_pic/
Only limited by LCD resolution... Had to right-shift the Fourier transform output to get fewer points...
For output to Usenet with fixed font:
https://panteltje.nl/panteltje/pic/scope_pic/screen_dump2.txt

What can you do if you do not know the hardware and the transmission standards?
https://panteltje.nl/panteltje/raspberry_pi_dvb-s_transmitter/

And in C, sky is the limit:
https://panteltje.nl/panteltje/subtitles/index.html


Web designers, browsers, my foot.
The user interface I experience with Chromium and Firefox is unbelievably bad.

For that 8052 AH BASIC I wrote an inline assembler
https://panteltje.nl/panteltje/newsflex/a52-2.0.lsm

FPGAs are not much of a problem, just more integrated hardware; less glue logic needed ;-)
 
John Larkin wrote:

<snip>

> Who said "Anybody can learn to code"?

Someone who needs software and wants someone else to write it?

Danke,

--
Don, KB7RPU, https://www.qsl.net/kb7rpu
There was a young lady named Bright Whose speed was far faster than light;
She set out one day In a relative way And returned on the previous night.
 
On 6/26/2023 9:18 AM, Don wrote:
John Larkin wrote:

snip

Who said \"Anybody can learn to code\" ?

Someone who needs software and wants someone else to write it?

ANYONE can learn to code. Coding is a largely mechanical skill.
Do this to get that.

Knowing which THIS to do is the issue.

What's the difference between:

for (i = 0; i < MAXI; i++)
    for (j = 0; j < MAXJ; j++)
        array[i][j] = 17;

and

for (j = 0; j < MAXJ; j++)
    for (i = 0; i < MAXI; i++)
        array[i][j] = 17;

(this is CompSci 101 material)

And,
memset( &array[0][0], 17, MAXI*MAXJ )

And, more importantly, why/when would you use each approach?
When would each *fail* -- silently or otherwise??
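One way to make the quiz concrete (my own sketch, not from the post): declare the array and time both traversal orders. In C the array is row-major, so the j-outer version strides through memory MAXJ ints at a time, which is what the cache and paging discussion further down the thread is about.

#include <stdio.h>
#include <time.h>

#define MAXI 2048
#define MAXJ 2048

static int array[MAXI][MAXJ];          /* row-major: array[i][j+1] is adjacent in memory */

int main(void)
{
    clock_t t0 = clock();
    for (int i = 0; i < MAXI; i++)      /* unit stride: walks each row in order */
        for (int j = 0; j < MAXJ; j++)
            array[i][j] = 17;

    clock_t t1 = clock();
    for (int j = 0; j < MAXJ; j++)      /* stride of MAXJ ints: jumps a row per store */
        for (int i = 0; i < MAXI; i++)
            array[i][j] = 17;

    clock_t t2 = clock();
    printf("i-outer: %.3fs  j-outer: %.3fs\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC);
    return 0;
}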

Coders are technicians. They don't understand the "science"
that goes into the *design* of algorithms. Never formally
looked at concurrency design methodologies, race/hazard
avoidance, non-locking protocols, etc.

A coder would *pick* one of the above -- likely without even
considering that there are alternatives (or the criteria for
successfully choosing between them).

I've seen production code where three copies of a variable
were stored -- for redundancy. Without realizing what a colossal
waste of bits that was.

Or, loops that read (and discarded) bytes from a file one at
a time -- just to determine the file's size.

D'uh...

Give them a datasheet (dataBOOK, nowadays) and tell them to
build a BSP and they'll be as a deer in headlights: "Where
(how!) do I start?"

"Can't we use a PC (or some other "module") for that?"
 
On 26/06/2023 14:51, John Larkin wrote:
On Mon, 26 Jun 2023 04:08:42 -0700, Don Y
blockedofcourse@foo.invalid> wrote:

On 6/26/2023 2:45 AM, Martin Brown wrote:

Most computer languages look somewhat like Basic apart from APL & Forth.
(and a few exotic modern CompSci languages like Haskell)

I guess that depends on how you define "like".

Coding in any of the LISP dialects is likely a rude awakening for
the uninitiated. Ladder logic?

Remiss of me not to mention LISP as one of the earliest languages entirely
different from Basic (aka Lots of Irritating Single
Parentheses). I once long ago worked on a Lisp compiler.
Much of the similarity is a consequence (IMO) of the serial
way that humans tend to think -- esp when it comes to algorithms...
it's almost always a set of *steps* instead of a network.

So do all mathematical proofs and, for that matter, proofs of correctness
of software systems - one step at a time built on solid foundations. I
had a play with Z and VDM a few decades ago but found them unwieldy
(and distinctly overkill for the reliability we needed).

Computer programming is almost always procedural. When parallel things
need to be done, it's usually broken into threads or processes with
semaphores, locks, blocks, interrupts, flags, FIFOs, things like that.
Most programmers never use state machines.

You have some very funny ideas. Computer science uses all of the methods
available to it and more besides.
FPGA design is done in synchronous clocked logic in nonprocedural
languages; everything happens everywhere all at once. Crossing a clock
boundary is recognized as something to avoid or handle very carefully.
Computer programming is a lot like old-style hairball async logic
design and has correspondingly many bugs.

And the FPGA program is designed and implemented in the software that
you so despise. How can you possibly trust it to do the right thing?

You should be hand coding it manually single bit by bit since you have
made the case so cogently that no software can ever be trusted to work.

Computing languages are fad driven, and that drives good things out of
circulation, a sort of Gresham's Law of computing.

I don't think that is true at all. The older computer languages were limited by
computing power and hardware available at the time. Modern languages harness
the huge computing resources available today to take some of the tedious grunt
work out of coding and detecting errors.

I think the BSPs, HALs, OSs, etc. are more guilty of that. Folks don't code
on bare metal anymore -- just as they don't put a CPU on a schematic any
longer. They are "sold" the notion that they can treat this API as
a well-defined abstraction -- without ever defining the abstraction well! :>
They don't know what their implementations "cost" or how to even *measure*
performance -- because they don't know what's involved.

OTOH, a lot of "coding" is taught targeting folks who will be building
web pages or web apps where there is no concern for resource management
(it works or it doesn't).

Coding has no theory, no math, and usually little testing. Comments

Software development has a hell of a lot of maths, and provably correct
software is essentially just a branch of applied mathematics. It is also
expensive and very difficult to do, and so most practitioners don't do it.

My university's computing department grew out of the maths laboratory
and was exiled to a computer tower when their big machines started to
require insane amounts of power and acolytes to tend to their needs.

are rare and usually illiterate. Bugs are normal because we can always
fix them in the next weekly or daily release. Billion dollar projects
literally crash from dumb bugs. We are in the Dark Ages of
programming.

I reckon it is more like Medieval cathedral building - if it is still standing
after 5 years then it was a good 'un. If it falls down or the tower goes
wonky, next time make the foundations and lower walls a bit thicker.

The UK emergency 999 system went down on Sunday morning (almost certainly a
software update gone wrong) and, guess what, the backup system didn't work
properly either. It took them ~3 hours to inform the government too!

https://www.publictechnology.net/articles/news/nhs-launches-‘full-investigation’-90-minute-999-outage

It affected all the UK emergency services not just NHS.

The same happened with passport control a couple of weeks ago - a fault
deemed too "sensitive" (i.e. embarrassing) to disclose how it happened.

https://www.bbc.co.uk/news/uk-65731795

> Who said "Anybody can learn to code"?

It is true that anybody can learn to code, but there are about three
orders of magnitude difference between the best professional coders (as
you disparagingly choose to call them) and the worst ones. I prefer the
description software engineer, although I am conscious that many
journeyman coders are definitely not doing engineering or anything like it!

I have known individuals who quite literally had to be kept away from
important projects because their ham-fisted "style" of hack it and be
damned would break the whole project, resulting in negative progress.

One of the snags is that at university level anyone who has any aptitude
for the subject at all can hack their assessment projects out of solid
code in no time flat - ie. ignore all the development processes they are
supposed to have been taught. You can get away with murder on something
that requires less than 3 man months work and no collaboration.

--
Martin Brown
 
On 26/06/2023 12:33, upsidedown@downunder.com wrote:
On Sun, 25 Jun 2023 12:13:07 +0100, Martin Brown
'''newspam'''@nonad.co.uk> wrote:


GCC can be slow and sometimes inaccurate for some important floating
point stuff. MS has problems there too. Almost none of the major C
compilers has a fully accurate cbrt() library function. The only ones
that do use Sun's implementation of Kahan's algorithm (BSD comes a
distant second) and GCC doesn't even make the running.

These days, with a lot of memory usually available, it shouldn't be too
hard to get the required accuracy in a reasonable time. Many functions are
implemented by first reducing the argument range into small segments
and then approximating each small segment with a low-degree polynomial.

Almost all of the best ones today are using some form of optimised
folded rational polynomial rather than a straight polynomial. The
hardware can evaluate numerator and denominator polys in parallel and
you take a hit for a single divide at the end of it. The reward is
greatly enhanced convergence for the correct choice of P(x)/Q(x).
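To make the shape of that concrete (my own sketch; coefficient tables are deliberately left as parameters because real ones come from the fitting process described below, not from here): each half is a plain Horner chain, the two chains are independent so the hardware can overlap them, and the single divide is paid once at the end.

/* Evaluate a rational approximation R(x) = P(x)/Q(x), coefficients
   given highest power first.  The two Horner loops have no data
   dependence on each other, so they can run in parallel. */
static double rational_eval(double x,
                            const double *p, int np,
                            const double *q, int nq)
{
    double pn = p[0], qn = q[0];
    for (int i = 1; i < np; i++) pn = pn * x + p[i];
    for (int i = 1; i < nq; i++) qn = qn * x + q[i];
    return pn / qn;
}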

Intel claim to be using the LLL algorithm in their trig approximations
now to get nearly five orders of magnitude more precision than with
naive rounding of Remez exchange algorithm equal ripple approximations.
It is mentioned in passing in one of the floating point textbooks.

https://doc.lagout.org/science/0_Computer%20Science/3_Theory/Handbook%20of%20Floating%20Point%20Arithmetic.pdf

Section 11.4.3 p393

LLL allows the difference between mathematicians' continuous real numbers
and IEEE754 fixed-length mantissa representations to be exploited. It is
much quicker than a brute-force search or simulated annealing.

To take the cube root:
- handle the sign
- slightly denormalize so that the exponent can be divided by 3
- divide the signed exponent by 3 to get the signed exponent of the result
- the mantissa is now between 1 and 8
- split that argument range into e.g. 8 sub-ranges, pick the polynomial for
the sub-range the mantissa falls in, and evaluate it with coefficients
fitted to approximate the function there.

By splitting the argument range (1..8) into smaller sub-ranges (say 64 or
256), only a lower-order polynomial is needed and the calculation is faster.

Using too-small argument ranges will increase the coefficient table
size, causing more cache misses and page faults. Page faults especially
are costly, so try to keep the coefficient table within one or
a few pages, otherwise the advantage of the lower polynomial degree is lost
to page-fault overhead.

Kahan's magic constant trick is specific to cbrt. I have a new solution
to the same problem that gets single precision FP accuracy in one step.

Sun's correct implementation:
https://rust-random.github.io/rand/src/libm/math/cbrt.rs.html

BSD version:
https://github.com/weiss/original-bsd/blob/master/lib/libm/ieee/cbrt.c

GCC uses a polynomial and is worst of the lot...

https://codebrowser.dev/glibc/glibc/sysdeps/ieee754/dbl-64/s_cbrt.c.html

It also uses the wrong form of Halley's algorithm at the end, which
cannot deliver anything like full precision, solving y = x^(1/3), i.e.

y = y*(2*x+y^3)/(2*y^3+x) // rather poor performance, ~4 ulps

vs

y = y + y*(x-y^3)/(2*y^3+x) // better than 0.667 ulps

Although algebraically equivalent the latter form is about 6x more
accurate implemented in IEEE754 FP arithmetic. If fused multiply and add
is available then y^2 can be computed exactly and it is even better.
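A small C rendering of that final refinement, using the better of the two forms quoted above (my own illustration of the update formula, not Sun's or glibc's actual code); y is assumed to already be a reasonable approximation to cbrt(x):

/* One Halley step for y ~= cbrt(x), written in the "y plus a small
   correction" form, which loses far less precision in IEEE754
   arithmetic than the algebraically equivalent folded form. */
static double cbrt_halley_step(double x, double y)
{
    double y3 = y * y * y;
    return y + y * (x - y3) / (2.0 * y3 + x);
}

Halley's method converges cubically, so if the incoming estimate is already good to a handful of digits, a single step of this form is what takes it down to the sub-ulp figure quoted above.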

This isn't a bad introduction to the realities of real-world use of
floating point and although titled what every computer scientist ought
to know I think it would be better addressed to *all* scientists (and
engineers).

https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html


--
Martin Brown
 
On 26/06/2023 18:17, Don Y wrote:
On 6/26/2023 9:18 AM, Don wrote:
John Larkin wrote:

snip

Who said \"Anybody can learn to code\" ?

Someone who needs software and wants someone else to write it?

ANYONE can learn to code.  Coding is a largely mechanical skill.
Do this to get that.

Knowing which THIS to do is the issue.

What's the difference between:

for (i = 0; i < MAXI; i++)
    for (j = 0; j < MAXJ; j++)
        array[i][j] = 17;

and

for (j = 0; j < MAXJ; j++)
    for (i = 0; i < MAXI; i++)
        array[i][j] = 17;

(this is CompSci 101 material)


Transposing an array is a better example for this purpose.

Someone would invariably bring our Starlink VAX to its knees by doing it
the naive way in a noddy style loop. It was annoying because everybody
was handling very large (for the time) images and highly optimised
transpose a rectangular array subroutines were in the library.

for (i = 0; i < MAXI; i++)
    for (j = 0; j < MAXJ; j++)
        array[i][j] = array[j][i];

Generates an insane number of page faults once MAXI*MAXJ > pagesize.

It even does in Fortran which knows how to do large multi dimensional
arrays properly in contiguous memory (unlike C).
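For comparison, a rough sketch of the kind of blocked transpose such library routines use (my own illustration, not the actual Starlink code): working tile by tile keeps both the source rows and the destination columns inside the cache/working set, so the pathological page-fault pattern of the naive loop above goes away.

#define BLOCK 64                       /* tile edge, tuned to cache/page size */

/* Transpose the n x n matrix src into dst, one BLOCK x BLOCK tile at a time. */
void transpose_blocked(const double *src, double *dst, int n)
{
    for (int ii = 0; ii < n; ii += BLOCK)
        for (int jj = 0; jj < n; jj += BLOCK)
            for (int i = ii; i < ii + BLOCK && i < n; i++)
                for (int j = jj; j < jj + BLOCK && j < n; j++)
                    dst[(long)j * n + i] = src[(long)i * n + j];
}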
And,
    memset( &array[0][0], 17, MAXI*MAXJ )

And, more importantly, why/when would you use each approach?
When would each *fail* -- silently or otherwise??

Coders are technicians.  They don't understand the "science"
that goes into the *design* of algorithms.  Never formally
looked at concurrency design methodologies, race/hazard
avoidance, non-locking protocols, etc.

I think it varies a lot with institution. The deskilling of coding
software has resulted in an underclass of semi-literate journeyman
coders who have no real mathematical knowledge to underpin what they do.

In my day it was pretty common to test out Knuth's algorithms - we even
found a bug in one of the prime testing codes. Though not in time to get
a $2^N reward for finding it. Did get a nice postcard from him though. I
didn't do computer science but sneaked along to some of their lectures
when they didn't conflict with my actual subject.
A coder would *pick* one of the above -- likely without even
considering that there are alternatives (or the criteria for
successfully choosing between them).

The scariest thing I see all too often with clueless C/C++ coders is the
method of getting it to compile by the random application of casts. Such
code almost never does what the author expects or intended.

Modern compilers and runtimes have got a lot better at warning idiots
that they have uninitialised variables and/or unreachable code. It
should lead to better software in the future (or so I hope).

Perhaps the future is deep-AI based, where the computer prompts the
domain expert to say what they want done down each of the less explored
branches in the tree. It has been my experience that it is invariably
the obscure, rare failure paths of something critical that go
untested (until that failure actually happens).

--
Martin Brown
 
On 6/27/2023 2:10 AM, Martin Brown wrote:
On 26/06/2023 18:17, Don Y wrote:
On 6/26/2023 9:18 AM, Don wrote:
John Larkin wrote:

Who said \"Anybody can learn to code\" ?

Someone who needs software and wants someone else to write it?

ANYONE can learn to code.  Coding is a largely mechanical skill.
Do this to get that.

Knowing which THIS to do is the issue.

What's the difference between:

for (i = 0; i < MAXI; i++)
    for (j = 0; j < MAXJ; j++)
        array[i][j] = 17;

and

for (j = 0; j < MAXJ; j++)
    for (i = 0; i < MAXI; i++)
        array[i][j] = 17;

(this is CompSci 101 material)

Transposing an array is a better example for this purpose.


But this is *obvious*! And, something that is frequently done
(though the assignment may be some expression instead of a
constant and other actions may exist in the loops).

What happens when MAX{I,J} is MAXINT? Will you (eventually) "lift"
a piece of this code from an app where it *works* and misapply it
to another where it *won't*? The *code* is correct in each of these
cases...

Someone would invariably bring our Starlink VAX to its knees by doing it the
naive way in a noddy style loop. It was annoying because everybody was handling
very large (for the time) images and highly optimised transpose a rectangular
array subroutines were in the library.

for (i = 0; i < MAXI; i++)
    for (j = 0; j < MAXJ; j++)
        array[i][j] = array[j][i];

Generates an insane number of page faults once MAXI*MAXJ > pagesize.


Cache misses are a more common issue as many folks don't use a PMMU
(in an embedded product) -- yet caches abound! "It's just code" -- as
if all implementations are equivalent.

[The same sorts of folks likely don't understand cancellation]

Wait until embedded systems start having to deal with runtime thrashing :>
(The fact that the cache is being abused is largely hidden from the
coder because he doesn't understand HOW performance is defined)

It even does in Fortran which knows how to do large multi dimensional arrays
properly in contiguous memory (unlike C).

And,
     memset( &array[0][0], 17, MAXI*MAXJ )

And, more importantly, why/when would you use each approach?
When would each *fail* -- silently or otherwise??

Coders are technicians.  They don't understand the "science"
that goes into the *design* of algorithms.  Never formally
looked at concurrency design methodologies, race/hazard
avoidance, non-locking protocols, etc.

I think it varies a lot with institution. The deskilling of coding software has
resulted in an underclass of semi-literate journeyman coders who have no real
mathematical knowledge to underpin what they do.

There has been a shift towards "teaching what employers seek" -- creating
employees with limited, short-term skillsets to address TODAY's need(s)
at the expense of knowing about those things that will be available, tomorrow.

Increasingly, processor architectures are becoming more "minicomputer-like"
than microcomputer. Yet, the folks using them are oblivious to the
mechanisms available to the practitioner -- because they just see a
"module with BSP/runtime", often created by a company with similarly
limited focus.

"Why implement a VMM system -- there's no disk!"

(Hint: that's not the only use!)

In my day it was pretty common to test out Knuth\'s algorithms - we even found a
bug in one of the prime testing codes. Though not in time to get a $2^N reward
for finding it. Did get a nice postcard from him though. I didn't do computer
science but sneaked along to some of their lectures when they didn't conflict
with my actual subject.

We spent a lot of time on theory because most of the equipment
was ... \"unique\". A different language, OS, hardware, focus, etc.
in each class. A \"coder\" would quickly be lost: in this class,
you\'ll be using LISP; this other, Algol; another, Pascal;
C in a fourth; etc. All in the course of a *day*. The coder
largely just thinks about syntax and not the \"why\" behind
language features.

For us, the *language* was insignificant -- the focus was on the
algorithms and the mechanisms that a particular language made
possible or the supporting hardware (e.g., B5000). E.g., lists
are inconvenient mechanisms in most procedural languages yet
delightfully effective in others.

\"Lambda calculus? Which *machine* does THAT run on??\"
\"DFA? What class of problems do *they* solve?\"
\"Recursion? How do I *know* I won\'t overrun the stack?\"
\"What should the objects in this application be?\"
\"How is object-BASED different from object-ORIENTED?\"
\"What does a language need to support for the latter?\"

A coder would *pick* one of the above -- likely without even
considering that there are alternatives (or the criteria for
successfully choosing between them).

The scariest thing I see all too often with clueless C/C++ coders is the method
of getting it to compile by the random application of casts. Such code almost
never does what the author expects or intended.

Exactly. \"How do I silence this compiler\'s WARNINGS?\" (Hint: they are
called warnings FOR A REASON!)

Some languages make it harder to \"appease\" the compiler with things
that are \"wrong\". E.g., Limbo doesn\'t support pointers, is much
more strongly typed, etc. Of course, it relies on GC to give the
coder that \"freedom\". Does the coder *know* what that will cost him
in any particular application? Or, is it a case of \"overprovision to
be on the safe side\" -- much like \"make everything HRT cuz SRT is *so*
much harder!\"

Modern compilers and runtimes have got a lot better at warning idiots that they
have uninitialised variables and/or unreachable code. It should lead to better
software in the future (or so I hope).

I'm not sure that follows. The advent of faster tools seems not to have
led to smarter coders but, rather, more opportunities to GUESS what the
problem MIGHT be. When it took four hours to "turn the crank" (i.e.,
two builds per day), you were REALLY certain about the fixes you would
try cuz you didn't have much time to "play". Add to that, having to SHARE
access to tools and you learned to be very methodical about getting
SOMETHING from each build.

Perhaps the future is deep-AI based, where the computer prompts the domain
expert to say what they want done down each of the less explored branches in
the tree. It has been my experience that it is invariably the obscure, rare
failure paths of something critical that go untested (until that failure
actually happens).

The problem that has to be solved is imagining the entire extent of the
application. I would routinely interview clients to define the
extent of a job:
\"What do you want to happen in THIS case?\"
Folks often don\'t know. Do we let an AI *suggest* possible constraints?
Will everything just \"throw an error\" instead of doing something useful?

Our microwave oven lets you type in a time interval for the maggie.
0-59 makes sense as "number of seconds". What about "60+"?
Should those values throw errors (59 seconds is the maximum before 1:00)?
If we let 60 be a valid entry, what about 70? 80? 90? 100??
In the latter case, how do we indicate 1:00 -- add a colon
or other delimiter?
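One concrete, entirely hypothetical policy for that last case is the common minutes-and-seconds keypad convention: the final two digits are seconds (and are allowed to exceed 59), anything in front is minutes. A sketch, my own illustration rather than a description of any particular oven:

/* "90" -> 90 s, "130" -> 1 min 30 s = 90 s, "100" -> 60 s. */
int keypad_to_seconds(int entry)
{
    int minutes = entry / 100;         /* digits above the last two */
    int seconds = entry % 100;         /* last two digits, may be 60..99 */
    return minutes * 60 + seconds;
}

The open question in the post remains: whether silently accepting 60-99 as "seconds" is friendlier than rejecting them is exactly the kind of decision somebody has to make on purpose.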

We're already seeing the shifting of focus in coders as they "rely"
on more bloated systems on which to build their applications.
Amusing that a 50 year old OS is the basis for many apps, today.
Really? You think that\'s the way to solve every problem?
Most engineers learn about the limitations of implementations
and seek/try new ideas -- instead of being wed to an obsolescent one.

And, that -- despite MILLIONS of manhours -- it's still loaded with
bugs! (because the developers are enamored with themselves and
haven't learned to "shoot the engineer" -- nor does the idea even seem
to be in their psyche!)

"Everything should be as simple as it can be, but not simpler"

<https://unix.stackexchange.com/questions/223746/why-is-the-linux-kernel-15-million-lines-of-code>

Instead of thinking about what they *need*, they think about what they
can do with what they THINK they *have* -- as if it can't possibly have
bugs despite the extra complexity/bloat that they're employing.
Imagine having a box full of discretes and feeling obligated to find
a use for them in your hardware design (WTF???)

That's what happens when you throw coders at a project. And, let
inertia govern your design choices! :<

OTOH, when you let software engineers design a system, they have a deeper
well to draw on for experience as well as exposure to broader ideas
that may -- only now -- be becoming practical \"in the small\".

E.g., MULTICS was designed for infinite up-time -- you replace components
WHILE the system is still running. Just like an electric utility
replaces equipment while folks are using it! Why all this "reboot
required" nonsense? Why can't I replace a library while applications
are being launched and binding (or bound!) to it? Ans: because the
idea is anathema to you because you've a coder's mentality ("That's
just how it's done...")
 
