Tom Gardner
On 12/08/20 15:30, jlarkin@highlandsniptechnology.com wrote:
On Wed, 12 Aug 2020 08:33:20 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:
On 11/08/2020 17:10, jlarkin@highlandsniptechnology.com wrote:
On Tue, 11 Aug 2020 08:46:38 -0700 (PDT), Lasse Langwadt Christensen
<langwadt@fonz.dk> wrote:
tirsdag den 11. august 2020 kl. 16.50.28 UTC+2 skrev jla...@highlandsniptechnology.com:
On Tue, 11 Aug 2020 10:02:32 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:
On 23/07/2020 19:34, John Larkin wrote:
On Thu, 23 Jul 2020 10:36:20 -0700 (PDT), Lasse Langwadt Christensen
<langwadt@fonz.dk> wrote:
torsdag den 23. juli 2020 kl. 19.06.48 UTC+2 skrev John Larkin:
We don't need more compute power. We need reliability and user
friendliness.
Executing buggy C faster won't help. Historically, adding resources
(virtual memory, big DRAM, threads, more MIPS) makes things worse.
For Pete's sake, we still have buffer overrun exploits. We still have
image files with trojans. We still have malicious web pages.
a tool that can cut wood can cut your hand; the only way to totally
prevent that is to add safety features until it cannot cut anything anymore
Why not design a compute architecture that is fundamentally safe?
Instead of endlessly creating and patching bugs.
It has been tried and it all ended in tears. Viper was supposed to be a
correct-by-design CPU, but it all ended in recrimination and litigation.
Humans make mistakes, and the least bad solution is to design tools that
can find the most commonly made mistakes as rapidly as possible. Various
dataflow methods can catch a whole host of classic bugs before the code
is even run, but industry seems reluctant to invest, so we have the status
quo. C isn't a great language for proofs of correctness, but the languages
that tried to force good programmer behaviour have never made any
serious penetration into the commercial market. I know this to my cost,
as I have in the past been involved with compilers.
No language will ever force good programmer behavior. No software can
ever prove that other software is correct, or even point at most of
the bugs.
Proper hardware protections can absolutely firewall a heap of bad
code. In fact, make it un-runnable.
what's the definition of "bad code"?
Code that can contain or allow viruses, trojans, spyware, or
ransomware, or can modify the OS, or use excess resources. That should
be obvious.
It is very obvious that you have no understanding of the basics of
computing. The halting problem shows that what you want is impossible.
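For the record, the argument fits in a dozen lines. A sketch in C terms,
with halts() as a hypothetical oracle; the names are illustrative, and
halts() is deliberately left undefined, because the whole point is that
it cannot be implemented:

typedef void (*program_t)(void *);

/* Hypothetical oracle: 1 if prog(input) eventually halts, 0 if it
   loops forever. The argument below shows it cannot exist. */
int halts(program_t prog, void *input);

void paradox(void *self)
{
    if (halts((program_t)self, self))
        for (;;) ;     /* oracle said "halts": loop forever instead */
    /* oracle said "loops forever": halt immediately instead */
}

/* Now ask halts(paradox, paradox). Whichever answer the oracle gives,
   paradox does the opposite, so no such halts() can be written. Any
   tool claiming to decide in advance what arbitrary code will do hits
   this wall. */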
I've written maybe a million lines of code, mostly realtime stuff, and
three RTOSs and two or three compilers, and actually designed one CPU
from MSI TTL chips, that went into production. I contributed code to
FOCAL (I'm named in the source) and met with some of the guys that
invented the PDP-11 architecture, before they did it. Got slightly
involved in the dreadful HP 2114 thing too.
Have you done anything like that?
I'm sure you have done that, but Martin is correct.
Bulletproof memory management is certainly not impossible. It's just
that not enough people care.
The core problem is fundamental: data is numbers and
programs are numbers. The only difference is in how
the numbers are interpreted by the hardware.
So, to ensure bulletproof memory management, you have
to ensure data cannot be executed. That rules out things
like JITters and general purpose compilers.
I've never used them, but I /believe/ the
only computers that achieve that are the Unisys/Burroughs
machines, by ensuring only their compilers can generate
code that can be executed - and keeping the compilers
under lock and key.
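To make the tension concrete: on an ordinary POSIX machine the same rule
shows up as W^X page permissions - a page may be writable or executable,
never both. A minimal hypothetical sketch (x86-64 Linux assumed; the
names and the single ret byte are mine) of why a JIT is exactly the case
that has to punch through the rule:

#define _DEFAULT_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 4096;

    /* Writable but NOT executable: a safe home for data. */
    unsigned char *page = mmap(NULL, len, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) { perror("mmap"); return 1; }

    page[0] = 0xC3;   /* x86-64 "ret": just a number until it's run */

    /* Calling into the page now would fault: no PROT_EXEC. A JIT must
       flip the permission before jumping, which is exactly the hole a
       strict "data is never executable" machine has to close. */
    if (mprotect(page, len, PROT_READ | PROT_EXEC) != 0) {
        perror("mprotect");   /* hardened systems may refuse outright */
        return 1;
    }

    ((void (*)(void))page)();   /* executes the "data" we wrote */
    puts("executed a byte that started life as data");
    return 0;
}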
\"Computer Science\" theory has almost nothing to do with computers.
I\'ve told that story before.
It does, however, put solid limits on what computers can
and cannot achieve.
One hardware analogy is Shannon's law, but there
are others.
People that blunder into electronics and make statements
equivalent to breaking Shannon's law are correctly
regarded as ignorant cranks.
You cannot tell reliably what code will do until it gets executed.
You can stop it from ransoming all the data on all of your servers
because some nurse opened an email attachment.
That's what anti-virus packages *attempt* to do. And my,
don't they work well.
A less severe class of "bad" is code that doesn't perform its intended
function properly, or crashes. If that annoys people, they can stop
using it.
Most decent software does what it is supposed to most of the time. Bugs
typically reside for a long time in seldom-trodden paths that should
never normally be taken, like error recovery in weird situations.
The real dollar cost of bad software is gigantic. There should be no
reason for a small or mid-size company to continuously pay IT security
consultants, or to run AV software.
Yup.
C invites certain dangerous practices that attackers ruthlessly exploit,
like loops that copy until they hit a null byte.
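The canonical instance, as a hypothetical sketch (the buffer size and
function names are mine):

#include <stdio.h>
#include <string.h>

/* What Martin describes: copy until a NUL byte, with no idea how big
   the destination is. */
void risky(const char *attacker_controlled)
{
    char buf[16];
    strcpy(buf, attacker_controlled);   /* 16th byte onward smashes the
                                           stack, return address and all */
    printf("%s\n", buf);
}

/* Bounded alternative: snprintf never writes past sizeof buf and
   always NUL-terminates. */
void safer(const char *attacker_controlled)
{
    char buf[16];
    snprintf(buf, sizeof buf, "%s", attacker_controlled);
    printf("%s\n", buf);
}

int main(void)
{
    safer("this string is far too long for a 16-byte buffer");
    return 0;
}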
Let bad programs malfunction or crash. But don\'t allow a stack or
buffer overflow to poke exploits into code space. The idea of
separating data, code, and stack isn't hard to understand, or even
hard to implement.
We probably need to go to pseudocode-only programs.
Then you need to protect the pseudocode interpreter, and you
are back where you began.
The machine needs
to be protected from programmers and from bad architectures. Most
programmers never learn about machine-level processes.
Good luck with that AI project.
Or push everything into the cloud and not actually run application
programs on a flakey box or phone.
\"Oooh goodie\" say the malefactors. A single attack surface
The "ship it and be damned" software development culture persists, and it
existed long before there were online updates over the internet.
If a piece of code violates the rules, it should be killed and never
allowed to run again. Software vendors would notice that pretty quick.
what are the rules?
Don't access outside your assigned memory map. Don't execute anything
but what's in read-only code space. Don't overflow stacks or buffers.
That is motherhood and apple pie. It allows other programs and tasks to
keep running, and it was one of the strengths of IBM's OS/2, but apart
from bank machines and air traffic control hardly anyone adopted it.
My point. Why do you call me ignorant for wanting hardware-based
security?
Your desire is understandable.
Your proposed implementation cannot work as you wish.
Whether it would be better than current standards
is a different question.
IBM soured the pitch by delivering it late and not quite working, and by
conflating it with the horrible PS/2 hardware lock-in that forced their
competitors to collaborate and design the EISA bus; the rest is history.
Don't access any system resources that you are not specifically
assigned access to (which includes devices and IP addresses.) Don't
modify drivers or the OS. The penalty for violation is instant death.
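Linux already ships a narrow version of exactly that policy: seccomp
strict mode, under which any syscall beyond read(), write(), _exit()
and sigreturn() gets the process killed outright. A minimal sketch,
Linux-specific; the file path is only an example:

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/prctl.h>
#include <linux/seccomp.h>

int main(void)
{
    printf("entering strict sandbox\n");
    fflush(stdout);    /* flush while buffered I/O is still safe */

    /* From here on, only read(), write(), _exit() and sigreturn()
       are permitted; anything else is answered with SIGKILL. */
    if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0) {
        perror("prctl");
        return 1;
    }

    write(1, "still alive: write() is allowed\n", 32);

    open("/etc/passwd", O_RDONLY);   /* forbidden: killed right here */
    write(1, "never reached\n", 14);
    return 0;
}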
You are going to waste a lot of time checking against all these
rules, which will themselves contain inconsistencies after a while.
Let's get rid of virtual memory too.
Why? Disk is so much cheaper than RAM, and plentiful. SSDs are fast too.
Some of those rules just make programmers pay more attention, which is
nice but not critical. What really matters is that the hardware and OS
detect violations and kill the offending process.
One that you can do either in hardware or software is to catch any
attempt to fetch an undefined value from memory. These days there are a
few sophisticated compilers that can do this at *compile* time.
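The kind of latent fault meant here, as a hypothetical sketch: gcc and
clang can flag the bad path at compile time (-Wall with optimisation
enabled, or gcc's -fanalyzer), and clang's MemorySanitizer traps it at
run time. The names are mine:

#include <stdio.h>

int scale(int flag)
{
    int factor;                /* never given an initial value */
    if (flag)
        factor = 10;
    return factor * 2;         /* flag == 0: fetches an undefined value */
}

int main(void)
{
    printf("%d\n", scale(0));  /* the seldom-trodden path */
    return 0;
}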
The problem circles back: the compilers are written, and run, the same
way as the application programs. The software bad guys will always be
more creative than the software defenders.
Yup.
Plus the malefactors are highly incentivised, whereas the
capitalist business imperative doesn't incentivise the good guys.
Good luck fixing that
One I know (Russian as it happens) by default compiles a hard runtime
trap at the location of the latent fault. I have mine set to warning.
Hardware designers usually get things right, which is why FPGAs seldom
have bugs, but procedural code is littered with errors. Programmers
can't control states, if they understand the concept at all.
Oh rubbish. You should stop using simulators and see how far you get -
since all software is so buggy that you can't trust it, can you?
I've done nontrivial OTP (antifuse) CPLDs and FPGAs that worked first
pass, without simulation. First pass. You just need to use state
machines and think before you compile. People who build dams
understand the concept. Usually.
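The same discipline translates to software. A hypothetical sketch of
what "controlling states" looks like in C terms, with every state named
and every transition explicit (the states and inputs are mine):

#include <stdio.h>
#include <stdlib.h>

enum state { IDLE, ARMED, FIRING };

static enum state step(enum state s, int trigger)
{
    switch (s) {
    case IDLE:   return trigger ? ARMED : IDLE;
    case ARMED:  return trigger ? FIRING : IDLE;
    case FIRING: return IDLE;            /* one-shot, then re-arm */
    }
    /* No default case inside the switch, so the compiler can warn if
       a state goes unhandled. Falling through to here is a bug. */
    fprintf(stderr, "impossible state %d\n", (int)s);
    abort();
}

int main(void)
{
    enum state s = IDLE;
    int inputs[] = { 1, 1, 0, 1 };
    for (int i = 0; i < 4; i++) {
        s = step(s, inputs[i]);
        printf("state -> %d\n", (int)s);
    }
    return 0;
}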
Martin is correct.
I've created a semi-custom IC design with a three-month
fabrication turnaround, which worked first time.
Have you ever written any code past "Hello, World!" that compiled
error-free and ran correctly the very first time? That's unheard of.
Yes, I have.