Driver to drive?

On 12/04/14 16:48, John Larkin wrote:
On Sat, 12 Apr 2014 15:40:04 +0200, David Brown <david.brown@hesbynett.no
wrote:

On 12/04/14 04:58, John Larkin wrote:
On Fri, 11 Apr 2014 20:24:01 -0700, josephkk
joseph_barrett@sbcglobal.net> wrote:


See Link:

http://arstechnica.com/security/2014/04/critical-crypto-bug-exposes-yahoo-mail-passwords-russian-roulette-style/



?;..((


Here is the technical analysis:

http://xkcd.com/1354/


This is the best illustration of the flaw I have seen - thanks for that
link.


And some details:

http://www.theregister.co.uk/2014/04/09/heartbleed_explained

which reinforces what an astonishingly bad programming language c
is.


That just reinforces what an astonishingly poor understanding you - and
many others - have about programming languages, and about bugs in software.

This was a bug in the implementation of the response to "heartbeat"
telegrams in OpenSSL, which is a commonly used library for SSL. The bug
was caused by the programmer using data in the incoming telegram without
double-checking it. It is totally independent of the programming
language used, and totally independent of the SSL algorithms and encryption.


Unchecked buffers and stack overflows have been chronic security lapses for
decades now, thousands and thousands of times. Wandering around data structures
with autoincrement pointers is like stumbling in a minefield, blindfolded. With
various-sized mines.

The c language and, more significantly, the c language culture, will make this
sort of thing keep happening.

That's true to a fair extent, though less so now than it used to be -
people are more aware of the problem, and use safer alternative functions.

However, the bug in heartbleed has nothing to do with this - either in
terms of "C culture" or programming language.

I don't disagree that C programs often have security risks that are easy
to make due to C's lack of resource management and proper strings - but
I strongly disagree with the implication that other languages are /safe/
or /secure/ solely because they don't have these issues.

Data should be stored in declared buffers, and runtime errors thrown if attempts
are made to address outside the buffer. Items should be addressed by named
indexes, not by wandering around with pointers.

That solves /some/ security issues - but there is nothing in C that
stops you doing this if you understand how to program secure software.
But it is a serious mistake to think that such issues are actually the
most important factors in secure programming - or that other languages
with garbage collection, no pointers, and safe arrays are actually more
secure. Insecure software is just as common with Python, PHP, Perl, and
other higher level languages.
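For what it's worth, nothing stops you building exactly that in C - a
minimal sketch, with invented names, not from any existing library:

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct buf {
    uint8_t *data;
    size_t len;
};

static inline uint8_t buf_get(const struct buf *b, size_t i)
{
    assert(i < b->len);    /* runtime bounds check: abort instead of leaking */
    return b->data[i];
}

static inline void buf_set(struct buf *b, size_t i, uint8_t v)
{
    assert(i < b->len);
    b->data[i] = v;
}

Access by named index through a wrapper like that gives much the same
guarantee a managed language would - provided the programmer actually
uses it.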

You are taking one class of errors which occur more often in C (because
checks have to be made manually in the code, rather than automatically
in the language) and assuming this is a major security issue. But that
is simply not the case. Buffer overflows and similar errors usually
result in crashes - such programs are therefore susceptible to
denial-of-service attacks, but they seldom (but not never, of course)
lead to information leaks or privilege escalation. And the alternative
- using a language with managed buffers and runtime errors - will give
the same effect when the unexpected runtime error leads the program to end.


Writing secure software is about thinking securely - the language of
implementation is a minor issue, partly because the coding itself should
be a small part of the total workload.

The heartbleed bug did not come from an issue in the implementation
language - it came from not /thinking/ enough about where information
came from. Arguably it came from poor design of the heartbeat part of
the protocol - it is never a good idea for the same information (the
length of the test data) to be included twice in the telegram, as it can
lead to confusion and mistakes.
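To make the bug class concrete, here is a rough sketch in C - this is
not the actual OpenSSL code, and the struct and function names are
invented for illustration:

#include <stdlib.h>
#include <string.h>

struct heartbeat {
    unsigned char *data;    /* bytes actually received from the peer       */
    size_t recvd;           /* how many bytes really arrived               */
    size_t claimed;         /* payload length field taken from inside data */
};

/* Vulnerable pattern: trusts the claimed length from the telegram. */
unsigned char *reply_bad(const struct heartbeat *hb, size_t *outlen)
{
    unsigned char *out = malloc(hb->claimed);
    if (!out) return NULL;
    memcpy(out, hb->data, hb->claimed);   /* can read far past the real payload */
    *outlen = hb->claimed;
    return out;
}

/* Checked pattern: a claimed length larger than what arrived is rejected. */
unsigned char *reply_ok(const struct heartbeat *hb, size_t *outlen)
{
    if (hb->claimed > hb->recvd)          /* sanity-check the external data */
        return NULL;                      /* discard silently, as RFC 6520 requires */
    unsigned char *out = malloc(hb->claimed);
    if (!out) return NULL;
    memcpy(out, hb->data, hb->claimed);
    *outlen = hb->claimed;
    return out;
}

The only difference is one comparison against the number of bytes that
actually arrived - exactly the check that was missing.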

And it's crazy for compilers to not use MMUs to prevent data and stacks and code
from being all mixed up.

Compilers do not and should not manipulate the MMU.

I think what you mean to say is that stacks and data segments should be
marked non-executable. This is true in general, but not always - there
are some types of code feature that require run-time generation of code
(such as "trampolines" on the stack) to work efficiently. If you can
live without such features, then stacks and data segments can be marked
non-executable - and that is typically done on most systems. (It is the
OS that controls the executability of memory segments, not the compiler.)

Note that most high-level languages, with the sort of run-time control
and limitations that you are advocating, are byte-compiled and run by a
virtual machine. In a very real sense, the data section of the VM
contains the program they are executing.

Given the compute horsepower around these days, most programmers should be
running interpreters, Python-type things, that can protect the world from the
programmers.

Again, you are showing that you have very little idea of the issues
involved, and are merely repeating the popular myths. And one of these
myths is that we have so much computing horsepower that the efficiency
of the programs doesn't matter. Tell that to people running farms of
servers, and the people paying for the electricity.

Python, and languages like it, protect against /some/ kinds of errors
that are common in C. But they are far from the sort of magic bullet
you seem to believe in - they are just a tool. The ease and speed of
development with Python can also lead to a quick-and-dirty attitude to
development where proof-of-concept and prototype code ends up shipping -
there are pros and cons to any choice of language.

It is up to programmers and program designers to understand secure
programming, and to code with an appropriately paranoid mindset,
regardless of the language.

ADA has better protections than c, but requires discipline that most programmers
don't have time for.

Again, Ada is just an alternative tool, with its pros and cons.

For the record, I use Python for most PC programming - because it makes
it easier and faster to deal with strings and with more complex data
structures, and because I often find its interactivity useful in
development. I use C for almost all my embedded programming - and to my
knowledge, I have never written C code with a buffer overflow.
 
On 12/04/14 18:28, John Larkin wrote:
On Sat, 12 Apr 2014 17:36:12 +0200, Klaus Bahner <Klaus.Bahner@ieee.org> wrote:

On 12-04-2014 16:48, John Larkin wrote:
On Sat, 12 Apr 2014 15:40:04 +0200, David Brown <david.brown@hesbynett.no
wrote:


[snip]


That just reinforces what an astonishingly poor understanding you - and
many others - have about programming languages, and about bugs in software.

This was a bug in the implementation of the response to "heartbeat"
telegrams in OpenSSL, which is a commonly used library for SSL. The bug
was caused by the programmer using data in the incoming telegram without
double-checking it. It is totally independent of the programming
language used, and totally independent of the SSL algorithms and encryption.


Unchecked buffers and stack overflows have been chronic security lapses for
decades now, thousands and thousands of times. Wandering around data structures
with autoincrement pointers is like stumbling in a minefield, blindfolded. With
various-sized mines.

But the heartbeat problem has nothing to do with buffer overflows etc.
The programmer simply did not check (for whatever reason - there are a lot
of rumors spreading about how this could happen) whether the information
coming in makes sense.

It was an unchecked buffer bug. Just past the assumed but unchecked buffer was
whatever else happened to be in memory. That sort of blunder has been chronic
for decades, should be impossible, and keeps happening.

It was not an issue with buffer overflows; the problem was that the
programmer accepted the incoming data at face value. This would have
caused trouble regardless of the language used, and it would have been
avoided if the programmer had followed good secure development practices.

The exact effects of the bug might have varied depending on the language
- a language with enough run-time checks could have thrown an exception
(leading to a crash, as the exception would be unexpected) rather than
returning other data.

There was, in the DOS days, a joke circulating that a certain JPEG file
contained a virus. A few years later, in Windows, Microsoft actually made that
possible. Same problem, trusting that a declared size was the actual size of a
structure, and data and code and stack mixed in the same memory spaces.

MS has always had a habit of trusting too much in their coding, and not
paying attention to security. The same is still true of their code
today, though they are not as bad as they were. Such habits and poor
development culture are independent of the language used.

This has nothing to do with C or any other programming language. Rather
with bad design. Worse, it seems to be a blow to the open source
community, which always claimed their code is better/safer than
commercially developed software, because everyone can and will check the
code. This obviously did not happen.

A strongly typed language without pointers, and with runtime bounds checking,
and compilers that use proper protections, would prevent sloppy errors like this
one.

No, that is a complete misunderstanding of the /real/ problem.

Language choice can prevent /some/ mistakes, but certainly not all. And
good use of tools can prevent many others - if the openssl developers
had tested the code using appropriate C debugging tools, they would have
spotted the error as quickly as with a managed programming language.
What was missing is that no one tried sending malicious packets to the
code - no one thought about the possible security hole, no one noticed
that the code trusted external data, no one tested it. The
implementation language was irrelevant.

The c language and, more significantly, the c language culture, will make this
sort of thing keep happening.

Actually the opposite is true. In order to become a capable C programmer
you learn that it is your responsibility to make sure that your code
behaves well when fed with garbage. Other languages tend to foster
programmers less aware of what could happen and how an attacker might
exploit weaknesses.

You're joking, right? The trickier the language, the better the programmers?

There is a strong correlation here. You mentioned Ada in another post -
if I were hiring someone to write secure or reliable software and I had
a candidate that knew Ada and a candidate that knew Python, I'd pick the
Ada guy every time. If I wanted quickly developed code for use in a
closed network, I'd pick the Python guy.

Given the compute horsepower around these days, most programmers should be
running interpreters, Python-type things, that can protect the world from the
programmers.

Actually I would claim that the majority of security problems are caused
by those interpreters. Do I have to mention Java(-script)?



ADA has better protections than c, but requires discipline that most programmers
don't have time for.

ADA has been responsible for the "most costly software failure of all
times" - the Ariane 5 disaster.



ADA wasn't responsible. If someone converts a float to a 16-bit integer, to do
something like load a DAC maybe, he is obligated to consider the consequences.
That's true in any language.

At least nobody died.

http://www.ieee.li/pdf/viewgraphs/trustworthy_software.pdf

http://en.wikipedia.org/wiki/Therac-25

Ada was not responsible - it was a design error in the software, and
could have occurred in any language. But this demonstrates the point
that it is not the implementation language that is critical, it is the
software design and the development and testing process that is vital.
 
On 13/04/14 04:52, John Larkin wrote:
On Sat, 12 Apr 2014 20:28:33 -0400, "Maynard A. Philbrook Jr."
jamie_ka1lpa@charter.net> wrote:

In article <ooahk91t47mi7a99g5pnjf2b9njusfbnh7@4ax.com>,
jjlarkin@highNOTlandTHIStechnologyPART.com says...

On Fri, 11 Apr 2014 20:24:01 -0700, josephkk <joseph_barrett@sbcglobal.net
wrote:


See Link:

http://arstechnica.com/security/2014/04/critical-crypto-bug-exposes-yahoo-mail-passwords-russian-roulette-style/

?;..((


Here is the technical analysis:

http://xkcd.com/1354/


And some details:

http://www.theregister.co.uk/2014/04/09/heartbleed_explained

which reinforces what an astonishingly bad programming language c is.

After what I've seen happening with the security agencies that we are
supposed to trust, I don't discount foul play.

I, too, do C/C++ programming, and that sort of bug to me is not
accidental.

I can think of only one reason to have an additional buffer length in
the message package and have the software ignore the primary buffer
length.

The problem here is, the OpenSSL developers should have tested for that from day one,
or totally ignored any data in the buffer for size parameters.

Sorry, sounds a little fishy to me.

Jamie

The heartbeat query could have had a fixed-length payload; 4 or maybe 8 bytes
would work fine. Heck, one byte would work fine, just a local variable, no
malloc at all.

On this point, I fully agree - the protocol was the first problem, and
the implementation was the secondary problem.

malloc is evil.

malloc must be handled appropriately to be safe. But the same applies
to other forms of dynamic memory allocation - in reliable systems, all
forms of dynamic memory should be minimised or avoided where possible.
 
On 14/04/14 12:47, Martin Brown wrote:
On 13/04/2014 19:47, Cursitor Doom wrote:
On Sat, 12 Apr 2014 10:39:03 -0700, John Larkin
jjlarkin@highNOTlandTHIStechnologyPART.com> wrote:


Working in c, always check every buffer for size errors. Study every
pointer.
Don't just write the code, READ it.

It does you no good in a language where the default is a null-terminated
string. It is always possible for a malevolent external data source to
send a string that will overwrite the end of any finite buffer.

The problem is that C programmers are *taught* to write in a style that
is concise but cryptic, and dangerous when abused by malevolent source data:

while (*d++=*s++);

It wouldn't matter in a world where the data sources could be trusted.

The key to secure programming - regardless of language - is to take your
untrusted data and sanitise and check it. /Then/ your data is trusted,
and you can take advantage of that.
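As a minimal sketch of that in C (the limit and names here are invented
for illustration, not taken from any real code):

#include <stdbool.h>
#include <string.h>

#define NAME_MAX_LEN 64

/* Copy untrusted, possibly unterminated input into a fixed buffer.
   Returns false (and leaves out empty) if the input does not fit or
   contains control bytes we never expect in a name. */
bool sanitise_name(char out[NAME_MAX_LEN], const char *in, size_t in_len)
{
    out[0] = '\0';
    if (in_len >= NAME_MAX_LEN)
        return false;                 /* too long: reject rather than truncate */
    for (size_t i = 0; i < in_len; i++) {
        unsigned char c = (unsigned char)in[i];
        if (c < 0x20 || c == 0x7f)
            return false;             /* unexpected control byte */
    }
    memcpy(out, in, in_len);
    out[in_len] = '\0';               /* now a well-formed, trusted C string */
    return true;
}

Everything downstream of a check like this can treat the buffer as
trusted; everything upstream cannot.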
 
On 14/04/2014 12:48, David Brown wrote:
On 14/04/14 12:47, Martin Brown wrote:
On 13/04/2014 19:47, Cursitor Doom wrote:
On Sat, 12 Apr 2014 10:39:03 -0700, John Larkin
jjlarkin@highNOTlandTHIStechnologyPART.com> wrote:


Working in c, always check every buffer for size errors. Study every
pointer.
Don't just write the code, READ it.

It does you no good in a language where the default is a null-terminated
string. It is always possible for a malevolent external data source to
send a string that will overwrite the end of any finite buffer.

The problem is that C programmers are *taught* to write in a style that
is concise but cryptic, and dangerous when abused by malevolent source data:

while (*d++=*s++);

It wouldn't matter in a world where the data sources could be trusted.


The key to secure programming - regardless of language - is to take your
untrusted data and sanitise and check it. /Then/ your data is trusted,
and you can take advantage of that.

Unfortunately we both know that that doesn't happen in the real world.
(at least it fails to occur in far too many software development shops)

--
Regards,
Martin Brown
 
On 12/04/2014 17:28, John Larkin wrote:
On Sat, 12 Apr 2014 17:36:12 +0200, Klaus Bahner <Klaus.Bahner@ieee.org> wrote:

On 12-04-2014 16:48, John Larkin wrote:
On Sat, 12 Apr 2014 15:40:04 +0200, David Brown <david.brown@hesbynett.no
wrote:


[snip]


That just reinforces what an astonishingly poor understanding you - and
many others - have about programming languages, and about bugs in software.

This was a bug in the implementation of the response to "heartbeat"
telegrams in OpenSSL, which is a commonly used library for SSL. The bug
was caused by the programmer using data in the incoming telegram without
double-checking it. It is totally independent of the programming
language used, and totally independent of the SSL algorithms and encryption.

Unchecked buffers and stack overflows have been chronic security lapses for
decades now, thousands and thousands of times. Wandering around data structures
with autoincrement pointers is like stumbling in a minefield, blindfolded. With
various-sized mines.

But the heartbeat problem has nothing to do with buffer overflows etc.
The programmer simply did not check (for whatever reason - there are a lot
of rumors spreading about how this could happen) whether the information
coming in makes sense.

But that happens all too often.

It was an unchecked buffer bug. Just past the assumed but unchecked buffer was
whatever else happened to be in memory. That sort of blunder has been chronic
for decades, should be impossible, and keeps happening.

But it was the opposite of the usual overwrite-the-end-and-execute
attack. This one wrote a tiny amount to the start and then grabbed a
huge chunk back to peek into memory it was never entitled to see.

There was, in the DOS days, a joke circulating that a certain JPEG file
contained a virus. A few years later, in Windows, Microsoft actually made that
possible. Same problem, trusting that a declared size was the actual size of a
structure, and data and code and stack mixed in the same memory spaces.

Not quite. An MS implementation of JPEG had always contained a flaw that
would permit an image file that lied about the true length of a particular
marker to overwrite code. Once you can do that it is just a case of
working out how to execute the inserted hostile code.

This has nothing to do with C or any other programming language. Rather
with bad design. Worse, it seems to be a blow to the open source
community, which always claimed their code is better/safer than
commercially developed software, because everyone can and will check the
code. This obviously did not happen.

It is a bit of an embarrassment for the open source claim of many eyes
finding all possible bugs before they can do any harm. Wisdom
of crowds is all very well but it cuts both ways and the bad guys might
very well be more strongly motivated to find any vulnerabilities first.

A strongly typed language without pointers, and with runtime bounds checking,
and compilers that use proper protections, would prevent sloppy errors like this
one.

Removing pointers entirely is probably too restrictive. Requiring that
pointers to objects must check that the object they are pointing at is
genuine would go a long way to preventing these problems.

The c language and, more significantly, the c language culture, will make this
sort of thing keep happening.

Actually the opposite is true. In order to become a capable C programmer
you learn that it is your responsibility to make sure that your code
behaves well when fed with garbage. Other languages tend to foster
programmers less aware of what could happen and how an attacker might
exploit weaknesses.

You're joking, right? The trickier the language, the better the programmers?

Well, it is sort of true. If you try juggling bean bags the worst that
can happen is you drop one on the floor, whereas if you try juggling
petrol-driven chain saws things can get very messy indeed.

You clearly need to be a much better juggler to survive in the latter
case, but I would hardly recommend it!

Given the compute horsepower around these days, most programmers should be
running interpreters, Python-type things, that can protect the world from the
programmers.

Actually I would claim that the majority of security problems are caused
by those interpreters. Do I have to mention Java(-script)?

ADA has better protections than c, but requires discipline that most programmers
don't have time for.

ADA has been responsible for the "most costly software failure of all
times" - the Ariane 5 disaster.

Not true. The hardware guys had improved the external solid fuel
boosters on Ariane 5 to produce an acceleration and top speed that was
well beyond what the software had been specified to handle for Ariane 4.
Unfortunately no one spotted that this would lead to a runtime numeric
overflow on the faster takeoff with the improved SRBs.

http://www.around.com/ariane.html

That is a reasonable description. It would not have mattered what language
the code had been written in: a floating-point to 16-bit integer
conversion would have generated an exception.

It added insult to injury that the data had no meaning once the vehicle
had left the launch pad, but *engineers* had decided to leave it running
for 40s into the launch. It was a bad design decision to allow any
unnecessary processes to run after they are no longer needed.

ADA wasn't responsible. If someone converts a float to a 16-bit integer, to do
something like load a DAC maybe, he is obligated to consider the consequences.
That's true in any language.

At least nobody died.

http://www.ieee.li/pdf/viewgraphs/trustworthy_software.pdf

I like the one with the military aircraft that allowed you to retract
the undercarriage when it was still on the ground, and the apocryphal
one that flipped over when it crossed the equator.

Gunnery table compensation for Coriolis forces in artillery was still
incorrect in NATO weapons as recently as the Falklands conflict. And
famously the Type 42 destroyer HMS Sheffield's ADACS thought that the
Super Étendard-launched Exocet missile was friendly (i.e. not Russian).

The trouble with a fencepost error in a binary logic 0/1 situation is
that the result is always the opposite of what you intended.

http://en.wikipedia.org/wiki/Therac-25

A nasty example of what happens when the wrong sort of engineers get
involved in design decisions. A mechanical engineer once managed to get
the handedness of a press-to-break emergency stop switch reversed to a
press-to-make emergency stop switch, with a relay somewhere else.

A faulty wiring harness damn near killed somebody because pressing the
emergency stop, which was no longer properly connected, did nothing!

As a software engineer I insist on hardware interlocks on anything that
poses a lethal hazard like EHT, high power RF or hazardous mechanical
parts. It might be my hand that is in the way when the software fails.

Most entertaining software failure I ever saw was on an embedded TI9900
where all registers are memory mapped. Unfortunately after a system
glitch the registers *including* the program counter were in ROM!

I didn't appreciate quite how good it was at context switching until
later when we implemented the same sort of system on a Motorola 68k.

--
Regards,
Martin Brown
 
On 2014-04-05, Sylvia Else <sylvia@not.at.this.address> wrote:

Convection in the water seems inevitable, since water is a poor
conductor of heat. The effects are quite visible in the temperature
measurements, though the variability is less than a degree in the setup
I have (admittedly, I'm only measuring the temperature at one point).
Things quieten down once the heater is off.

Measure at different positions; convection will cause warm and cool
currents, so finding the right spot to measure at may be critical.

--
umop apisdn
 
On 4/14/2014 6:47 AM, Martin Brown wrote:
On 13/04/2014 19:47, Cursitor Doom wrote:
On Sat, 12 Apr 2014 10:39:03 -0700, John Larkin
jjlarkin@highNOTlandTHIStechnologyPART.com> wrote:


Working in c, always check every buffer for size errors. Study every
pointer.
Don't just write the code, READ it.

It does you no good in a language where the default is a null-terminated
string. It is always possible for a malevolent external data source to
send a string that will overwrite the end of any finite buffer.

The problem is that C programmers are *taught* to write in a style that
is concise but cryptic, and dangerous when abused by malevolent source data:

while (*d++=*s++);

That works fine if you put a sentinel null in the right place first.
The idea of widely-used cryptographic software being put out there
without a zillion unit- and regression-tests is really scary.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics

160 North State Road #203
Briarcliff Manor NY 10510

hobbs at electrooptical dot net
http://electrooptical.net
 
On Mon, 14 Apr 2014 11:34:08 +0100, Martin Brown
<|||newspam|||@nezumi.demon.co.uk> wrote:

On 12/04/2014 16:15, edward.ming.lee@gmail.com wrote:

That just reinforces what an astonishingly poor understanding you - and
many others - have about programming languages, and about bugs in software.

This was a bug in the implementation of the response to "heartbeat"
telegrams in OpenSSL, which is a commonly used library for SSL. The bug
was caused by the programmer using data in the incoming telegram without
double-checking it. It is totally independent of the programming
language used, and totally independent of the SSL algorithms and encryption.

Unchecked buffers and stack overflows have been chronic security lapses for
decades now, thousands and thousands of times. Wandering around data structures
with autoincrement pointers is like stumbling in a minefield, blindfolded. With
various-sized mines.

The c language and, more significantly, the c language culture, will make this
sort of thing keep happening.

Unfortunately I have to agree with you, but it isn't strictly down
either to programmers or computer scientists; it is because businesses
prefer to ship first and be damned later.

There has been a lot of progress in static analysis to catch at compile
time the sorts of errors that humans are likely to make and options to
defend at runtime against likely exploits. However, the tools needed are
not easily available for teaching and are only available, overpriced, in
the environments least likely to use them - business and enterprise!

That's another part of the c culture: hack fast, don't review, and use some
automated tool to find your coding errors.


--

John Larkin Highland Technology Inc
www.highlandtechnology.com jlarkin at highlandtechnology dot com

Precision electronic instrumentation
 
On 14/04/2014 15:02, Phil Hobbs wrote:
On 4/14/2014 6:47 AM, Martin Brown wrote:
On 13/04/2014 19:47, Cursitor Doom wrote:
On Sat, 12 Apr 2014 10:39:03 -0700, John Larkin
jjlarkin@highNOTlandTHIStechnologyPART.com> wrote:


Working in c, always check every buffer for size errors. Study every
pointer.
Don't just write the code, READ it.

It does you no good in a language where the default is a null-terminated
string. It is always possible for a malevolent external data source to
send a string that will overwrite the end of any finite buffer.

The problem is that C programmers are *taught* to write in a style that
is concise but cryptic, and dangerous when abused by malevolent source data:

while (*d++=*s++);

That works fine if you put a sentinel null in the right place first. The

But if s is longer than d then you have corrupted one byte in the source
string, which may also have consequences.

idea of widely-used cryptographic software being put out there without a
zillion unit- and regression-tests is really scary.

Agreed. It isn't like financial institutions are naive practitioners.

Particularly odd that no-one ran any deep static analysis tools against
the code base that might have spotted these sorts of vulnerabilities.
This was public code used unchecked in a critical security setting.

--
Regards,
Martin Brown
 
On Mon, 14 Apr 2014 13:39:06 +0200, David Brown <david.brown@hesbynett.no>
wrote:

On 12/04/14 18:28, John Larkin wrote:
On Sat, 12 Apr 2014 17:36:12 +0200, Klaus Bahner <Klaus.Bahner@ieee.org> wrote:

On 12-04-2014 16:48, John Larkin wrote:
On Sat, 12 Apr 2014 15:40:04 +0200, David Brown <david.brown@hesbynett.no
wrote:


[snip]


That just reinforces what an astonishingly poor understanding you - and
many others - have about programming languages, and about bugs in software.

This was a bug in the implementation of the response to "heartbeat"
telegrams in OpenSSL, which is a commonly used library for SSL. The bug
was caused by the programmer using data in the incoming telegram without
double-checking it. It is totally independent of the programming
language used, and totally independent of the SSL algorithms and encryption.


Unchecked buffers and stack overflows have been chronic security lapses for
decades now, thousands and thousands of times. Wandering around data structures
with autoincrement pointers is like stumbling in a minefield, blindfolded. With
various-sized mines.

But the heartbeat problem has nothing to do with buffer overflows etc.
The programmer simply did not check (for whatever reason - there are a lot
of rumors spreading about how this could happen) whether the information
coming in makes sense.

It was an unchecked buffer bug. Just past the assumed but unchecked buffer was
whatever else happened to be in memory. That sort of blunder has been chronic
for decades, should be impossible, and keeps happening.

It was not an issue with buffer overflows; the problem was that the
programmer accepted the incoming data at face value. This would have
caused trouble regardless of the language used, and it would have been
avoided if the programmer had followed good secure development practices.

The exact effects of the bug might have varied depending on the language
- a language with enough run-time checks could have thrown an exception
(leading to a crash, as the exception would be unexpected) rather than
returning other data.


There was, in the DOS days, a joke circulating that a certain JPEG file
contained a virus. A few years later, in Windows, Microsoft actually made that
possible. Same problem, trusting that a declared size was the actual size of a
structure, and data and code and stack mixed in the same memory spaces.


MS has always had a habit of trusting too much in their coding, and not
paying attention to security. The same is still true of their code
today, though they are not as bad as they were. Such habits and poor
development culture are independent of the language used.

Disagree. A language with indexed arrays and formal, controlled strings can have
hard bounds checking. A pointer-oriented language, with null-terminated strings,
can't.




This has nothing to do with C or any other programming language. Rather
with bad design. Worse, it seems to be a blow to the open source
community, which always claimed their code is better/safer than
commercially developed software, because everyone can and will check the
code. This obviously did not happen.

A strongly typed language without pointers, and with runtime bounds checking,
and compilers that use proper protections, would prevent sloppy errors like this
one.


No, that is a complete misunderstanding of the /real/ problem.

Language choice can prevent /some/ mistakes, but certainly not all. And
good use of tools can prevent many others - if the openssl developers
had tested the code using appropriate C debugging tools, they would have
spotted the error as quickly as with a managed programming language.
What was missing is that no one tried sending malicious packets to the
code - no one thought about the possible security hole, no one noticed
that the code trusted external data, no one tested it. The
implementation language was irrelevant.

Disagree again. There are languages where that sort of error couldn't happen.

And languages that don't oblige days of testing to stress a few lines of code.





The c language and, more significantly, the c language culture, will make this
sort of thing keep happening.

Actually the opposite is true. In order to become a capable C programmer
you learn that it is your responsibility to make sure that your code
behaves well when fed with garbage. Other languages tend to foster
programmers less aware of what could happen and how an attacker might
exploit weaknesses.

You're joking, right? The trickier the language, the better the programmers?


There is a strong correlation here. You mentioned Ada in another post -
if I were hiring someone to write secure or reliable software and I had
a candidate that knew Ada and a candidate that knew Python, I'd pick the
Ada guy every time. If I wanted quickly developed code for use in a
closed network, I'd pick the Python guy.





Given the compute horsepower around these days, most programmers should be
running interpreters, Python-type things, that can protect the world from the
programmers.

Actually I would claim that the majority of security problems are caused
by those interpreters. Do I have to mention Java(-script)?



ADA has better protections than c, but requires discipline that most programmers
don't have time for.

ADA has been responsible for the "most costly software failure of all
times" - the Ariane 5 disaster.



ADA wasn't responsible. If someone converts a float to a 16-bit integer, to do
something like load a DAC maybe, he is obligated to consider the consequences.
That's true in any language.

At least nobody died.

http://www.ieee.li/pdf/viewgraphs/trustworthy_software.pdf

http://en.wikipedia.org/wiki/Therac-25


Ada was not responsible - it was a design error in the software, and
could have occurred in any language. But this demonstrates the point
that it is not the implementation language that is critical, it is the
software design and the development and testing process that is vital.

Buffer overruns have been a major source of security lapses. A language that
prevented them would, well, prevent them.


--

John Larkin Highland Technology Inc
www.highlandtechnology.com jlarkin at highlandtechnology dot com

Precision electronic instrumentation
 
On 14/04/14 14:28, Martin Brown wrote:
On 14/04/2014 12:48, David Brown wrote:
On 14/04/14 12:47, Martin Brown wrote:
On 13/04/2014 19:47, Cursitor Doom wrote:
On Sat, 12 Apr 2014 10:39:03 -0700, John Larkin
jjlarkin@highNOTlandTHIStechnologyPART.com> wrote:


Working in c, always check every buffer for size errors. Study every
pointer.
Don't just write the code, READ it.

It does you no good in a language where the default is a null-terminated
string. It is always possible for a malevolent external data source to
send a string that will overwrite the end of any finite buffer.

The problem is that C programmers are *taught* to write in a style that
is concise but cryptic, and dangerous when abused by malevolent source data:

while (*d++=*s++);

It wouldn't matter in a world where the data sources could be trusted.


The key to secure programming - regardless of language - is to take your
untrusted data and sanitise and check it. /Then/ your data is trusted,
and you can take advantage of that.

Unfortunately we both know that that doesn't happen in the real world.
(at least it fails to occur in far too many software development shops)

There is a method that works quite well in order to keep trusted and
untrusted data separate - the Hungarian notation. Simonyi (the
Hungarian in question, working for MS) first used it to make
distinctions about data that could not easily be checked and enforced by
the compiler - in particular, incoming data strings would have a "us"
prefix for "unsafe string" and sanitised versions would get the prefix
"ss" for "safe string". If you stick rigidly to this convention, you
will not mix up your safe and unsafe data. This "Apps Hungarian"
notation is independent of programming language.


Unfortunately, some halfwit (also at MS) thought "Hungarian notation"
meant prefixing names in C with letters indicating the type - so-called
"Systems Hungarian" which just makes code a mess, makes it easy to be
inconsistent, adds little information that is not already easily
available to the compiler and IDE, and means you can't use "Apps
Hungarian" to improve code safety. It's a fine example of snatching
defeat from the jaws of victory - and of MS having a strong group of
theoretical computer scientists with no communication with or influence
over the mass of numpties doing their real coding.


There are, of course, many other ways to ensure that your untrusted data
does not mix with the trusted data, and there are ways that can be
enforced by a C compiler (or at least by additional checking tools).
But it has to be part of the design process, and has to be implemented
consistently.
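A small sketch of the convention in C - the "us"/"ss" prefixes are as
described above, while the functions around them are invented:

#include <stdbool.h>
#include <stddef.h>

/* sanitise() copies unsafe input into a checked, trusted buffer. */
bool sanitise(char *ssOut, size_t outsize, const char *usIn, size_t inlen)
{
    if (inlen >= outsize)
        return false;              /* reject anything that does not fit */
    for (size_t i = 0; i < inlen; i++)
        ssOut[i] = usIn[i];
    ssOut[inlen] = '\0';
    return true;
}

void handle_request(const char *usQuery, size_t queryLen)
{
    char ssQuery[128];
    if (!sanitise(ssQuery, sizeof ssQuery, usQuery, queryLen))
        return;                    /* unsafe data never crosses this line */
    /* From here on only "ss" names are passed to code that trusts its
       input - a reviewer can spot a stray "us" variable at a glance. */
}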
 
On 4/14/2014 10:11 AM, Martin Brown wrote:
On 14/04/2014 15:02, Phil Hobbs wrote:
On 4/14/2014 6:47 AM, Martin Brown wrote:
On 13/04/2014 19:47, Cursitor Doom wrote:
On Sat, 12 Apr 2014 10:39:03 -0700, John Larkin
jjlarkin@highNOTlandTHIStechnologyPART.com> wrote:


Working in c, always check every buffer for size errors.
Study every pointer. Don't just write the code, READ it.

It does you no good in a language where the default is a null-terminated
string. It is always possible for a malevolent external data source to
send a string that will overwrite the end of any finite buffer.

The problem is that C programmers are *taught* to write in a style that
is concise but cryptic, and dangerous when abused by malevolent source data:

while (*d++=*s++);

That works fine if you put a sentinel null in the right place
first. The

But if s is longer than d then you have corrupted one byte in the
source string, which may also have consequences.

You take the sentinel back out when you're done. I'm not defending that
as a universal practice, especially when things like memcpy exist that
are more efficient still, but it does save a loop counter increment and
test, at the cost of a single local variable to store the previous value
of the sentinel.

The main issue is in multithreaded code, where some other thread may be
wanting to read s during the time you've got the sentinel in there. It
also isn't async-safe and all that.
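Something like this, to be explicit (names invented; it assumes the
source buffer is at least dstsize bytes, and it is not thread- or
async-safe for the reason above):

#include <stddef.h>

void bounded_copy(char *dst, char *src, size_t dstsize)
{
    if (dstsize == 0)
        return;
    char saved = src[dstsize - 1];   /* remember the byte we overwrite  */
    src[dstsize - 1] = '\0';         /* sentinel guarantees termination */

    char *d = dst, *s = src;
    while ((*d++ = *s++))            /* the idiom from upthread         */
        ;

    src[dstsize - 1] = saved;        /* take the sentinel back out      */
}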

But the larger issue IMO is the lack of unit testing, and the
unwarranted confidence in the "all bugs are shallow in the bazaar"
mantra, which has repeatedly been shown to be ludicrously false.

idea of widely-used cryptographic software being put out there
without a zillion unit- and regression-tests is really scary.

Agreed. It isn't like financial institutions are naive
practitioners.

Particularly odd that no-one ran any deep static analysis tools
against the code base that might have spotted these sorts of
vulnerabilities. This was public code used unchecked in a critical
security setting.

Yup. What I'd be very interested to hear from anyone knowledgeable
about the process is whether that's SOP or an outlier. And of course
money may have changed hands, as with those schlemiels at RSA.


Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics

160 North State Road #203
Briarcliff Manor NY 10510

hobbs at electrooptical dot net
http://electrooptical.net
 
On 14/04/14 16:10, John Larkin wrote:
On Mon, 14 Apr 2014 11:34:08 +0100, Martin Brown
|||newspam|||@nezumi.demon.co.uk> wrote:

On 12/04/2014 16:15, edward.ming.lee@gmail.com wrote:

That just reinforces what an astonishingly poor understanding you - and
many others - have about programming languages, and about bugs in software.

This was a bug in the implementation of the response to "heartbeat"
telegrams in OpenSSL, which is a commonly used library for SSL. The bug
was caused by the programmer using data in the incoming telegram without
double-checking it. It is totally independent of the programming
language used, and totally independent of the SSL algorithms and encryption.

Unchecked buffers and stack overflows have been chronic security lapses for
decades now, thousands and thousands of times. Wandering around data structures
with autoincrement pointers is like stumbling in a minefield, blindfolded. With
various-sized mines.

The c language and, more significantly, the c language culture, will make this
sort of thing keep happening.

Unfortunately I have to agree with you, but it isn't strictly down
either to programmers or computer scientists; it is because businesses
prefer to ship first and be damned later.

There has been a lot of progress in static analysis to catch at compile
time the sorts of errors that humans are likely to make and options to
defend at runtime against likely exploits. However, the tools needed are
not easily available for teaching and are only available, overpriced, in
the environments least likely to use them - business and enterprise!

That's another part of the c culture: hack fast, don't review, and use some
automated tool to find your coding errors.

The alternative, which is used for most non-C coding, is even worse -
hack fast, don't review, and don't use automated tools to find coding
errors because there are no such tools, and because your run-time typing
and dynamic behaviour means such tools won't work anyway.

What you seem to think of as "C culture" is general programming culture.
Apart from a few niche areas where there are stronger rules and more
control over the development process, it applies to /all/ programming
regardless of the language.

Ironically, openssl development is an example of a normally solid
development process - this is the first security bug in openssl since
2003, which is an impressive record for such widely used software.
Human error in the development process was at fault here (unless you
believe it was the NSA all along...)
 
On 14/04/2014 15:10, John Larkin wrote:
On Mon, 14 Apr 2014 11:34:08 +0100, Martin Brown
|||newspam|||@nezumi.demon.co.uk> wrote:

On 12/04/2014 16:15, edward.ming.lee@gmail.com wrote:

That just reinforces what an astonishingly poor understanding you - and
many others - have about programming languages, and about bugs in software.

This was a bug in the implementation of the response to "heartbeat"
telegrams in OpenSSL, which is a commonly used library for SSL. The bug
was caused by the programmer using data in the incoming telegram without
double-checking it. It is totally independent of the programming
language used, and totally independent of the SSL algorithms and encryption.

Unchecked buffers and stack overflows have been chronic security lapses for
decades now, thousands and thousands of times. Wandering around data structures
with autoincrement pointers is like stumbling in a minefield, blindfolded. With
various-sized mines.

The c language and, more significantly, the c language culture, will make this
sort of thing keep happening.

Unfortunately I have to agree with you, but it isn't strictly down
either to programmers or computer scientists; it is because businesses
prefer to ship first and be damned later.

There has been a lot of progress in static analysis to catch at compile
time the sorts of errors that humans are likely to make and options to
defend at runtime against likely exploits. However, the tools needed are
not easily available for teaching and are only available, overpriced, in
the environments least likely to use them - business and enterprise!

That's another part of the c culture: hack fast, don't review, and use some
automated tool to find your coding errors.

Actually it isn't. I wish it was. People always look hurt when I run
aggressive static analysis against a C codebase and ask if they want the
bugs it finds fixed as well as the ones I am supposed to look at.

Once I found a whole chunk of modules where every variable had been
altered to a soap opera character name. That took a while to undo.

I always do a before and after scan to demonstrate what is there and
avoid any unpleasant surprises later on. Another metric I consider very
reliable at finding code likely to contain bugs is McCabe's CCI, which is
a measure of the minimum number of test cases to exercise every path
through the code. If this number is too high then the code will almost
certainly be buggy and may still contain paths that have never executed.
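As a toy illustration of the metric (for a single-entry, single-exit
function, the complexity is the number of decisions plus one, which is
the number of linearly independent paths through it):

int classify(int x, int y)
{
    int r = 0;
    if (x > 0)  r += 1;     /* decision 1 */
    if (y > 0)  r += 2;     /* decision 2 */
    if (x == y) r += 4;     /* decision 3 */
    return r;               /* complexity = 3 + 1 = 4 */
}

A function with a complexity in the tens needs that many independent
test cases just to cover its basis paths, and in practice those tests
rarely get written.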

CPU cycles are cheap and getting cheaper, whereas people cycles are
expensive and getting more so. It makes sense to offload as much of the
automatable grunt work onto the compiler and toolset as you can.

One compiler I know with a sense of humour will compile access to an
uninitialised variable as a hard trap with a warning message by default.
I have it promoted to a hard error (language is Modula-2).

--
Regards,
Martin Brown
 
On 14 Apr 2014 10:05:05 GMT, Jasen Betts <jasen@xnet.co.nz> wrote:

On 2014-04-12, John Larkin <jjlarkin@highNOTlandTHIStechnologyPART.com> wrote:

Unchecked buffers and stack overflows have been chronic security lapses for
decades now, thousands and thousands of times. Wandering around data structures
with autoincrement pointers is like stumbling in a minefield, blindfolded. With
various-sized mines.

The c language and, more significantly, the c language culture, will make this
sort of thing keep happening.

Data should be stored in declared buffers, and runtime errors thrown if attempts
are made to address outside the buffer. Items should be addressed by named
indexes, not by wandering around with pointers.

Declared buffers make for inflexible software.

Yeah, it makes critical security bugs much harder to code.

I want all of my pointers to be able to address anything, anywhere! Even if I
don't know what's there!

Imagine if your email program could only handle 20 attachments of
300K each because that was the buffer size. If someone wanted to send
you an email with a 2M attachment you'd have to exit that program and
run the version that supported 200k text and a 6M attachment,

and if you upgraded your RAM you'd need to install a new operating
system.

So declare a gigabyte buffer. It's virtual memory anyhow.

The system should trap if you address outside of the declared buffer. The
concept was once known as "memory management."


--

John Larkin Highland Technology Inc
www.highlandtechnology.com jlarkin at highlandtechnology dot com

Precision electronic instrumentation
 
On Mon, 14 Apr 2014 13:17:07 +0200, David Brown <david.brown@hesbynett.no>
wrote:

On 12/04/14 16:48, John Larkin wrote:
On Sat, 12 Apr 2014 15:40:04 +0200, David Brown <david.brown@hesbynett.no
wrote:

On 12/04/14 04:58, John Larkin wrote:
On Fri, 11 Apr 2014 20:24:01 -0700, josephkk
joseph_barrett@sbcglobal.net> wrote:


See Link:

http://arstechnica.com/security/2014/04/critical-crypto-bug-exposes-yahoo-mail-passwords-russian-roulette-style/



?;..((


Here is the technical analysis:

http://xkcd.com/1354/


This is the best illustration of the flaw I have seen - thanks for that
link.


And some details:

http://www.theregister.co.uk/2014/04/09/heartbleed_explained

which reinforces what an astonishingly bad programming language c
is.


That just reinforces what an astonishingly poor understanding you - and
many others - have about programming languages, and about bugs in software.

This was a bug in the implementation of the response to "heartbeat"
telegrams in OpenSSL, which is a commonly used library for SSL. The bug
was caused by the programmer using data in the incoming telegram without
double-checking it. It is totally independent of the programming
language used, and totally independent of the SSL algorithms and encryption.


Unchecked buffers and stack overflows have been chronic security lapses for
decades now, thousands and thousands of times. Wandering around data structures
with autoincrement pointers is like stumbling in a minefield, blindfolded. With
various-sized mines.

The c language and, more significantly, the c language culture, will make this
sort of thing keep happening.

That's true to a fair extent, though less so now than it used to be -
people are more aware of the problem, and use safer alternative functions.

However, the bug in heartbleed has nothing to do with this - either in
terms of "C culture" or programming language.

Of course it does. The coder used an autoincrement pointer to pick up a buffer
size and dumped that amount of memory, without bounds checks, addressing memory
whose content was entirely unknown. No programming language should allow that.

The cultural part is that buffer overrun errors have been chronic security
hazards for decades, and neither the coder nor the open-source peer reviewers
checked for it! Some unchecked buffer bugs can be very subtle and hard to
analyze, but this one was blatant, simple and in plain sight.






I don't disagree that C programs often have security risks that are easy
to make due to C's lack of resource management and proper strings - but
I strongly disagree with the implication that other languages are /safe/
or /secure/ solely because they don't have these issues.

If strong checking eliminated half of the security bugs, that would be a big
improvement. In most engineering situations, 2:1 is considered to be a big deal.





Data should be stored in declared buffers, and runtime errors thrown if attempts
are made to address outside the buffer. Items should be addressed by named
indexes, not by wandering around with pointers.

That solves /some/ security issues - but there is nothing in C that
stops you doing this if you understand how to program secure software.
But it is a serious mistake to think that such issues are actually the
most important factors in secure programming - or that other languages
with garbage collection, no pointers, and safe arrays are actually more
secure. Insecure software is just as common with Python, PHP, Perl, and
other higher level languages.

You are taking one class of errors which occur more often in C (because
checks have to be made manually in the code, rather than automatically
in the language) and assuming this is a major security issue. But that
is simply not the case. Buffer overflows and similar errors usually
result in crashes - such programs are therefore susceptible to
denial-of-service attacks, but they seldom (but not never, of course)
lead to information leaks or privilege escalation. And the alternative
- using a language with managed buffers and runtime errors - will give
the same effect when the unexpected runtime error leads the program to end.

Buffer overflows are common. If you're lucky, they crash the OS and you find out
and fix it. If you're not lucky, some bad-hatter finds them before you do.

The worst thing that technology does is program. Critical systems are programmed
in a 45-year-old, fundamentally hazardous language that requires exceptional,
K&R-level skills to get right. Will we be doing it the same way, in c, 45 years
from now?

Writing secure software is about thinking securely - the language of
implementation is a minor issue, partly because the coding itself should
be a small part of the total workload.

The heartbleed bug did not come from an issue in the implementation
language - it came from not /thinking/ enough about where information
came from. Arguably it came from poor design of the heartbeat part of
the protocol - it is never a good idea for the same information (the
length of the test data) to be included twice in the telegram, as it can
lead to confusion and mistakes.


And it's crazy for compilers to not use MMUs to prevent data and stacks and code
from being all mixed up.

Compilers do not and should not manipulate the MMU.

I think what you mean to say is that stacks and data segments should be
marked non-executable. This is true in general, but not always - there
are some types of code feature that require run-time generation of code
(such as "trampolines" on the stack) to work efficiently. If you can
live without such features, then stacks and data segments can be marked
non-executable - and that is typically done on most systems. (It is the
OS that controls the executability of memory segments, not the compiler.)

Note that most high-level languages, with the sort of run-time control
and limitations that you are advocating, are byte-compiled and run by a
virtual machine. In a very real sense, the data section of the VM
contains the program they are executing.


Given the compute horsepower around these days, most programmers should be
running interpreters, Python-type things, that can protect the world from the
programmers.

Again, you are showing that you have very little idea of the issues
involved, and are merely repeating the popular myths. And one of these
myths is that we have so much computing horsepower that the efficiency
of the programs doesn't matter. Tell that to people running farms of
servers, and the people paying for the electricity.

Python, and languages like it, protect against /some/ kinds of errors
that are common in C. But they are far from the sort of magic bullet
you seem to believe in - they are just a tool. The ease and speed of
development with Python can also lead to a quick-and-dirty attitude to
development where proof-of-concept and prototype code ends up shipping -
there are pros and cons to any choice of language.

It is up to programmers and program designers to understand secure
programming, and to code with an appropriately paranoid mindset,
regardless of the language.


ADA has better protections than c, but requires discipline that most programmers
don't have time for.

Again, Ada is just an alternative tool, with its pros and cons.

For the record, I use Python for most PC programming - because it makes
it easier and faster to deal with strings and with more complex data
structures, and because I often find its interactivity useful in
development. I use C for almost all my embedded programming - and to my
knowledge, I have never written C code with a buffer overflow.

c is fine for small bare-metal uP apps, where a single programmer does the
entire app, and the consequences of a bug are small. In bigger contexts, you get
heartbleed, and literally thousands of similar hazards.

So, are we in for another 50 years of the same? You seem to be arguing "yes."




--

John Larkin Highland Technology Inc
www.highlandtechnology.com jlarkin at highlandtechnology dot com

Precision electronic instrumentation
 
On 04/14/2014 10:39 AM, Martin Brown wrote:
On 14/04/2014 15:10, John Larkin wrote:
On Mon, 14 Apr 2014 11:34:08 +0100, Martin Brown
|||newspam|||@nezumi.demon.co.uk> wrote:

On 12/04/2014 16:15, edward.ming.lee@gmail.com wrote:

That just reinforces what an astonishingly poor understanding you
- and
many others - have about programming languages, and about bugs in
software.

This was a bug in the implementation of the response to "heartbeat"
telegrams in OpenSSL, which is a commonly used library for SSL.
The bug
was caused by the programmer using data in the incoming telegram
without
double-checking it. It is totally independent of the programming
language used, and totally independent of the SSL algorithms and
encryption.

Unchecked buffers and stack overflows have been chronic security
lapses for
decades now, thousands and thousands of times. Wandering around
data structures
with autoincrement pointers is like stumbling in a minefield,
blindfolded. With
various-sized mines.

The c language and, more significantly, the c language culture,
will make this
sort of thing keep happening.

Unfortunately I have to agree with you, but it isn't strictly down
either to programmers or computer scientists; it is because businesses
prefer to ship first and be damned later.

There has been a lot of progress in static analysis to catch at compile
time the sorts of errors that humans are likely to make and options to
defend at runtime against likely exploits. However, the tools needed are
not easily available for teaching and are only available, overpriced, in
the environments least likely to use them - business and enterprise!

That's another part of the c culture: hack fast, don't review, and use
some
automated tool to find your coding errors.

Actually it isn't. I wish it was. People always look hurt when I run
aggressive static analysis against a C codebase and ask if they want the
bugs it finds fixed as well as the ones I am supposed to look at.

Once I found a whole chunk of modules where every variable had been
altered to a soap opera character name. That took a while to undo.

I always do a before and after scan to demonstrate what is there and
avoid any unpleasant surprises later on. Another metric I consider very
reliable at finding code likely to contain bugs is McCabe's CCI, which is
a measure of the minimum number of test cases to exercise every path
through the code. If this number is too high then the code will almost
certainly be buggy and may still contain paths that have never executed.

CPU cycles are cheap and getting cheaper, whereas people cycles are
expensive and getting more so. It makes sense to offload as much of the
automatable grunt work onto the compiler and toolset as you can.

One compiler I know with a sense of humour will compile access to an
uninitialised variable as a hard trap with a warning message by default.
I have it promoted to a hard error (the language is Modula-2).
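The C toolchains can do something similar if you ask them; a toy example
(my own, not from any real codebase) of the kind of slip that gets
caught:

/* Classic uninitialised-variable slip.  Building with warnings on
 * (e.g. gcc -Wall -O2) will typically report that 'scale' may be used
 * uninitialised, and run-time tools such as valgrind can flag the
 * read of the garbage value. */
int scale_reading(int raw, int range)
{
    int scale;                  /* never assigned when range <= 0 */

    if (range > 0)
        scale = 4096 / range;

    return raw * scale;         /* garbage on the unassigned path */
}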

What are your favourite static analysis tools, Martin? I mostly use
PCLint, but I don't have an uplevel copy. I'm a big fan of mudflap for
debugging.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics

160 North State Road #203
Briarcliff Manor NY 10510

hobbs at electrooptical dot net
http://electrooptical.net
 
On 14 Apr 2014 10:34:55 GMT, Jasen Betts <jasen@xnet.co.nz> wrote:

On 2014-04-14, Tim Williams <tmoranwms@charter.net> wrote:
"Sylvia Else" <sylvia@not.at.this.address> wrote in message
news:br0tf2Fs854U1@mid.individual.net...
At the time of its creation, both memory and CPU time were expensive. It
wasn't practical to specify the language in a way that ensured bounds
checking because of the memory and time costs involved.

Have they never been "inexpensive"?

This sort of malady has been known since the 60s at least, and bounds
checking has been around since about the same time (e.g., when was BASIC
introduced -- which AFAIK has always had bounds checking... at least when
interpreted?). For God's sake, the 80186 even brought the BOUND
instruction to x86. It goes unused to this day!

In DOS it's a good way to kill trees.

It doesn't help that C's [ASCII] "strings" are zero-terminated. They're
just stupid arrays, and the standard libraries decided they should be
treated that way. There was absolutely no necessity of doing it that way.

Just use the mem* functions instead of the str* functions if you want
known-length strings.
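Something along these lines, as a rough sketch (the names are made up):
carry the length next to the bytes and let memcpy do the work, so
nothing ever depends on finding a terminating zero.

#include <stddef.h>
#include <string.h>

/* A counted "string": the length travels with the data, and the copy
 * refuses anything that won't fit instead of overflowing. */
struct lstring {
    size_t len;
    char   data[256];           /* fixed capacity, just for the sketch */
};

static int lstr_set(struct lstring *dst, const char *src, size_t n)
{
    if (n > sizeof dst->data)
        return -1;              /* too big - reject, don't truncate */
    memcpy(dst->data, src, n);  /* explicit length, no strcpy in sight */
    dst->len = n;
    return 0;
}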

Of all the dubious aspects of the language, that's the one that should
receive the most ire. What a stupid idea. Other languages (I'm only
familiar with QuickBasic offhand) store strings with a length prefix. And
do bounds checking besides.

yeah, but doesn't it put some stupid arbitrary limit on string length?

PowerBasic doesn't put a limit on string length, allows embedded nulls, and has
groovy inherent string functions. Without hazards. Ask for a substring out of
the range of a string and you get the null string. Append to a string and it
just works.

I've written PB programs that manipulate huge data arrays, using subscripts,
that run 4x as fast as the obvious c pointer equivalents. With an afternoon of
playing with code and compiler optimizations, the c got close.
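For reference, the two loop styles in question look like this in C (a
trivial sketch; actual timings depend entirely on the compiler and the
optimization settings used):

#include <stddef.h>

/* Subscript version - the index stays visible and is easy to bounds-check. */
long sum_indexed(const int *a, size_t n)
{
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Pointer version - the "obvious" C idiom with an autoincrement pointer. */
long sum_pointer(const int *a, size_t n)
{
    long s = 0;
    const int *end = a + n;
    while (a < end)
        s += *a++;
    return s;
}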

--

John Larkin Highland Technology Inc
www.highlandtechnology.com jlarkin at highlandtechnology dot com

Precision electronic instrumentation
 
On 14/04/2014 16:38, John Larkin wrote:
On Mon, 14 Apr 2014 13:17:07 +0200, David Brown <david.brown@hesbynett.no
wrote:

On 12/04/14 16:48, John Larkin wrote:
On Sat, 12 Apr 2014 15:40:04 +0200, David Brown <david.brown@hesbynett.no

This was a bug in the implementation of the response to "heartbeat"
telegrams in OpenSSL, which is a commonly used library for SSL. The bug
was caused by the programmer using data in the incoming telegram without
double-checking it. It is totally independent of the programming
language used, and totally independent of the SSL algorithms and encryption.

Unchecked buffers and stack overflows have been chronic security lapses for
decades now, thousands and thousands of times. Wandering around data structures
with autoincrement pointers is like stumbling in a minefield, blindfolded. With
various-sized mines.

The c language and, more significantly, the c language culture, will make this
sort of thing keep happening.

That's true to a fair extent, though less so now than it used to be -
people are more aware of the problem, and use safer alternative functions.

However, the bug in heartbleed has nothing to do with this - either in
terms of "C culture" or programming language.

Of course it does. The coder used an autoincrement pointer to pick up a buffer
size and dumped that amount of memory, without bounds checks, addressing memory
whose content was entirely unknown. No programming language should allow that.

Actually he didn't; that part was hidden inside the memcpy routine.

The problem was not sanity-checking the parameters in the message for
validity. This is all too common :(
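In outline the broken handler did something of this shape (a paraphrase
of the idea, not the actual OpenSSL source; send_reply is a stand-in
name for the transmit routine):

#include <stdlib.h>
#include <string.h>

void send_reply(const unsigned char *buf, size_t len);   /* hypothetical */

/* Paraphrase of the heartbeat flaw.  'msg'/'msg_len' are the bytes
 * that actually arrived on the wire. */
static void heartbeat_reply(const unsigned char *msg, size_t msg_len)
{
    /* length field taken from inside the attacker's own message */
    size_t payload = ((size_t)msg[1] << 8) | msg[2];

    (void)msg_len;              /* the bug in one line: never consulted */

    unsigned char *reply = malloc(3 + payload);
    if (reply == NULL)
        return;

    reply[0] = 2;                               /* "response" type */
    reply[1] = (unsigned char)(payload >> 8);
    reply[2] = (unsigned char)(payload & 0xff);

    /* Nothing compares 'payload' against 'msg_len', so this copy reads
     * far past the received data into neighbouring memory, and the
     * reply leaks whatever happened to be there. */
    memcpy(reply + 3, msg + 3, payload);

    send_reply(reply, 3 + payload);
    free(reply);
}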

The cultural part is that buffer overrun errors have been chronic security
hazards for decades, and neither the coder nor the open-source peer reviewers
checked for it! Some unchecked buffer bugs can be very subtle and hard to
analyze, but this one was blatant, simple and in plain sight.

It was only in plain sight if you think like a black hat. You can hide a
lot of things in plain sight. Most people don't consider what would
happen if someone were to deliberately seek to find hostile parameters.

I don't disagree that C programs often have security risks that are easy
to make due to C's lack of resource management and proper strings - but
I strongly disagree with the implication that other languages are /safe/
or /secure/ solely because they don't have these issues.

If strong checking eliminated half of the security bugs, that would be a big
improvement. In most engineering situations, 2:1 is considered to be a big deal.

Strong typing would catch some of the faults. *Using* the available
static analysis tools for C/C++ would also improve things, but the most
important thing by far would be making sure that the next generation are
taught from the outset to use these tools.

It is unfortunately the case that the best tools are only available to a
select few with very deep pockets or on a roll-your-own basis.

Data should be stored in declared buffers, and runtime errors thrown if attempts
are made to address outside the buffer. Items should be addressed by named
indexes, not by wandering around with pointers.

That solves /some/ security issues - but there is nothing in C that
stops you doing this if you understand how to program secure software.
But it is a serious mistake to think that such issues are actually the
most important factors in secure programming - or that other languages
with garbage collection, no pointers, and safe arrays are actually more
secure. Insecure software is just as common with Python, PHP, Perl, and
other higher level languages.

You are taking one class of errors which occur more often in C (because
checks have to be made manually in the code, rather than automatically
in the language) and assuming this is a major security issue. But that
is simply not the case. Buffer overflows and similar errors usually
result in crashes - such programs are therefore susceptible to
denial-of-service attacks, but they seldom (but not never, of course)
lead to information leaks or privilege escalation. And the alternative
- using a language with managed buffers and runtime errors - will give
the same effect when the unexpected runtime error leads the program to end.

Buffer overflows are common. If you're lucky, they crash the OS and you find out
and fix it. If you're not lucky, some bad-hatter finds them before you do.

They are not as common as they used to be. Steve Maguire in his classic
"Writing Solid Code" described how it should be done back in the 1990s.
Industry still hasn't fully adopted his approach.

The worst thing that technology does is program. Critical systems are programmed
in a 45-year-old, fundamentally hazardous language that requires exceptional,
K&R-level skills to get right. Will we be doing it the same way, in c, 45 years
from now?

Most of the realistic alternatives are of a similar vintage. New silver
bullets promise everything but then deliver little or no improvement.

Writing secure software is about thinking securely - the language of
implementation is a minor issue, partly because the coding itself should
be a small part of the total workload.

The heartbleed bug did not come from an issue in the implementation
language - it came from not /thinking/ enough about where information
came from. Arguably it came from poor design of the heartbeat part of
the protocol - it is never a good idea for the same information (the
length of the test data) to be included twice in the telegram, as it can
lead to confusion and mistakes.

Crucially, if you already know how long the heartbeat datagram should be,
then you should drop it on the floor if it is clearly malformed on
arrival. Data corruption can sometimes happen, so there is no excuse for
not checking.
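The missing test is tiny - something of this shape (again a sketch,
matching the hypothetical handler above rather than the real code):

#include <stddef.h>

/* The validation that was missing: refuse the heartbeat up front if
 * the claimed payload cannot fit inside the record that actually
 * arrived.  A malformed request gets no reply at all. */
static int heartbeat_is_sane(const unsigned char *msg, size_t msg_len)
{
    if (msg_len < 3)
        return 0;                       /* too short even for the header */

    size_t payload = ((size_t)msg[1] << 8) | msg[2];
    return payload + 3 <= msg_len;      /* claimed length must fit */
}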

Ada has better protections than c, but requires discipline that most programmers
don't have time for.

Again, Ada is just an alternative tool, with its pros and cons.

For the record, I use Python for most PC programming - because it makes
it easier and faster to deal with strings and with more complex data
structures, and because I often find its interactivity useful in
development. I use C for almost all my embedded programming - and to my
knowledge, I have never written C code with a buffer overflow.

c is fine for small bare-metal uP apps, where a single programmer does the
entire app, and the consequences of a bug are small. In bigger contexts, you get
heartbleed, and literally thousands of similar hazards.

I hate to defend C, but in this instance it wasn't so much the language
as the failure of the handler to sanity-check that the parameters it was
passed were correct before copying the data and sending it back.

The problem arose from trusting the value of "payload". Most attacks
against computers arise from misdirection or social engineering of the
humans using the computer rather than direct technical attacks.

> So, are we in for another 50 years of the same? You seem to be arguing "yes."

Unless and until someone comes up with a new paradigm that makes as big
a difference as the step from machine code to compilers - basically yes :(

You can have formally verified correct software if you need it, but the
price is astronomical for any non-trivial application. Formally proved
hardware like VIPER has all ended in tears and lawsuits.

It will happen eventually that someone devises a graphical programming
interface that allows domain experts to codify their knowledge reliably,
but until then we have to live with the tools that we have available.

In some ways electronics simulators and silicon chip design software are
amongst the most powerful tools for avoiding human error we have. They
are not perfect but they are a lot better than doing it manually.

--
Regards,
Martin Brown
 
