Lasse Langwadt Christensen
Guest
On Thursday, 23 July 2020 at 19:06:48 UTC+2, John Larkin wrote:
On Thu, 23 Jul 2020 17:39:57 +0100, Tom Gardner
<spamjunk@blueyonder.co.uk> wrote:
On 23/07/20 16:13, jlarkin@highlandsniptechnology.com wrote:
On Thu, 23 Jul 2020 10:36:08 -0400, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:
On 2020-07-22 20:14, John Larkin wrote:
I actually designed a CPU with all TTL logic. It had three
instructions and a 20 kHz 4-phase clock. It was actually produced, for
a shipboard data logger. MACRO-11 had great macro tools, so we used
that to make a cross assembler.
When I was at Tulane, the EE department acquired a gigantic (basically
a room full) military surplus computer that used a drum memory for
program and data. The logic modules were big gold-plated hermetic cans
that plugged in. The programmer had to distribute the opcodes at
optimal angular positions on the spinning drum.
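The trick on such machines is that each instruction names the drum address of its successor, so the "optimal angular position" is the slot passing under the read head just as the current instruction finishes; miss it and you wait most of a revolution. Machines like the IBM 650 worked this way, and its SOAP assembler automated the placement. Here is a minimal sketch of that placement rule in Go, assuming a 200-word drum that advances one word position per word-time and per-instruction execution times given in word-times (all of that is made up for illustration):

package main

import "fmt"

const drumSize = 200 // assumed number of word positions around the drum

// bestSlot picks the drum address for the next instruction. The drum
// advances one word position per word-time, so the ideal slot is the one
// under the read head the instant the current instruction finishes; if
// that slot is already occupied, fall back to the next free one and eat
// the extra rotational latency.
func bestSlot(current, execTime int, used []bool) int {
	ideal := (current + 1 + execTime) % drumSize // head position when ready to fetch again
	for i := 0; i < drumSize; i++ {
		slot := (ideal + i) % drumSize
		if !used[slot] {
			used[slot] = true
			return slot
		}
	}
	return -1 // drum full
}

func main() {
	used := make([]bool, drumSize)
	pc := 0
	used[pc] = true
	// Lay out a short chain of instructions with assumed execution times.
	for _, execTime := range []int{4, 4, 12, 4} {
		next := bestSlot(pc, execTime, used)
		fmt.Printf("instr at %3d, runs %2d word-times -> successor at %3d\n", pc, execTime, next)
		pc = next
	}
}

Doing that scattering by hand, instruction by instruction, is what the programmers on that surplus machine were stuck with.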
I have a book, IBM's Early Computers. In the early days, nobody was
entirely sure what a computer was.
It's a fun book, and does a lot to deflate the Harvard spin, which is
always good.
The sequel on the 360 and early 370s is a good read too, as is "The
Mythical Man-Month" by Fred Brooks, who was in charge of OS/360, at the
time by far the largest programming project in the world. As he says,
"How does a software project go a year late? One day at a time."
Obligatory Real Programmer reference:
http://www.cs.utah.edu/~elb/folklore/mel.html
Cheers
Phil Hobbs
Burroughs programmed their computers in Algol. There was never any
other assembler or compiler. I was told that, after the Algol compiler
was written in Algol, two guys hand-compiled it to machine code,
working side-by-side and checking every opcode. That was the bootstrap
compiler.
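That hand-compilation story is the classic bootstrap, and once the first binary exists it can be checked mechanically: feed the compiler its own source for a couple of generations and insist that successive outputs match byte for byte. A rough sketch of that fixed-point check in Go; the binary names (stage0 for the hand-compiled compiler), the source name compiler.alg, and the -o flag are all invented for illustration:

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
	"os/exec"
)

// compile runs an existing compiler binary on a source file and returns the
// bytes of the output it produced. Names and flags are hypothetical.
func compile(compiler, source, output string) []byte {
	cmd := exec.Command(compiler, "-o", output, source)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s failed on %s: %v", compiler, source, err)
	}
	data, err := os.ReadFile(output)
	if err != nil {
		log.Fatal(err)
	}
	return data
}

func main() {
	// stage0 is the hand-compiled bootstrap (the two-person effort); each
	// later stage is the compiler source compiled by the stage before it.
	compile("./stage0", "compiler.alg", "stage1")
	stage2 := compile("./stage1", "compiler.alg", "stage2")
	stage3 := compile("./stage2", "compiler.alg", "stage3")
	// stage2 and stage3 were both produced by compilers built from the same
	// source, so they should be byte-identical if the bootstrap is sound.
	if bytes.Equal(stage2, stage3) {
		fmt.Println("fixed point reached: the compiler reproduces itself")
	} else {
		fmt.Println("stages differ: the hand translation or the compiler still has a bug")
	}
}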
Isn't our ancient and settled idea of what a computer is, and what an
OS and languages are, overdue for the next revolution?
The trick will be to get a revolution which starts from
where we are. There is no chance of completely throwing
out all that has been achieved until now, however appealing
that might be.
I know of two plausible starting points...
1) The Mill Processor, as described by Ivan Godard over
on comp.arch. This has many innovative techniques that,
in effect, bring DSP-like parallelism to executing
standard languages such as C. It appears that there's an
order of magnitude to be gained.
Incidentally, Godard's background is the Burroughs/Unisys
Algol machines, plus /much/ more.
2) xCORE processors are commercially available (unlike the
Mill). They start from presuming that embedded programs can
be highly parallel /iff/ the hardware and software allow
programmers to express it cleanly. They merge Hoare's CSP
with innovative hardware to /guarantee/ *hard* realtime
performance. In effect they have occupied a niche that is
halfway between conventional processors and FPGAs.
I've used them, and they are *easy* and fun to use.
(Cf C on a conventional processor!)
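For anyone who hasn't met the model: the idiom is many small tasks, each on its own logical core, sharing nothing and synchronising only over channels. Go's channels come from the same CSP lineage, so here is a rough stand-in sketch of the style (not the xC API, and without the hard timing guarantees the xCORE hardware gives you), assuming a producer sampling at 100 Hz and a consumer logging the values:

package main

import (
	"fmt"
	"time"
)

// sampler stands in for a task pinned to one logical core: it produces a
// reading on a fixed period and hands it over a channel, sharing no memory
// with the consumer.
func sampler(out chan<- int) {
	tick := time.NewTicker(10 * time.Millisecond) // assumed 100 Hz sample rate
	defer tick.Stop()
	for i := 0; ; i++ {
		<-tick.C
		out <- i // rendezvous-style hand-off, as in CSP
	}
}

// logger is the consuming task; it blocks until a sample arrives.
func logger(in <-chan int, done chan<- struct{}) {
	for n := 0; n < 10; n++ {
		fmt.Println("sample", <-in)
	}
	close(done)
}

func main() {
	samples := make(chan int) // unbuffered: sender and receiver synchronise
	done := make(chan struct{})
	go sampler(samples)
	go logger(samples, done)
	<-done
}

The difference is that on xCORE each such task maps to a hardware thread whose timing the tools can bound statically; a general-purpose runtime like Go's only gives best-effort scheduling.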
We don't need more compute power. We need reliability and user
friendliness.
Executing buggy C faster won't help. Historically, adding resources
(virtual memory, big DRAM, threads, more MIPS) makes things worse.
For Pete's sake, we still have buffer overrun exploits. We still have
image files with trojans. We still have malicious web pages.
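For what it's worth, the overrun itself is a language property rather than a speed property: with unchecked writes the excess bytes land on whatever follows the buffer, while a bounds-checked language turns the same bug into a contained failure. A small Go sketch of the checked case (copyInto and the 8-byte buffer are made up for illustration):

package main

import "fmt"

// copyInto writes src into a fixed-size buffer, the classic overrun setup.
// With unchecked pointer arithmetic the excess bytes would silently land on
// whatever follows the buffer; here the out-of-range index is caught at run
// time and turned into an error instead.
func copyInto(buf []byte, src string) (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("rejected: %v", r)
		}
	}()
	for i := 0; i < len(src); i++ {
		buf[i] = src[i] // panics the moment i reaches len(buf)
	}
	return nil
}

func main() {
	buf := make([]byte, 8)
	fmt.Println(copyInto(buf, "short"))                        // fits
	fmt.Println(copyInto(buf, "much too long for the buffer")) // caught, not exploited
}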
A tool that can cut wood can cut your hand; the only way to totally prevent
that is to add safety features until it can't cut anything anymore.