David Brown
Guest
On 12/04/14 16:48, John Larkin wrote:
On Sat, 12 Apr 2014 15:40:04 +0200, David Brown <david.brown@hesbynett.no> wrote:
On 12/04/14 04:58, John Larkin wrote:
On Fri, 11 Apr 2014 20:24:01 -0700, josephkk <joseph_barrett@sbcglobal.net> wrote:
See Link:
http://arstechnica.com/security/2014/04/critical-crypto-bug-exposes-yahoo-mail-passwords-russian-roulette-style/
?;..((
Here is the technical analysis:
http://xkcd.com/1354/
This is the best illustration of the flaw I have seen - thanks for that
link.
And some details:
http://www.theregister.co.uk/2014/04/09/heartbleed_explained
which reinforces what an astonishingly bad programming language c
is.
That just reinforces what an astonishingly poor understanding you - and
many others - have of programming languages, and of bugs in software.
This was a bug in the implementation of the response to "heartbeat"
telegrams in OpenSSL, which is a commonly used SSL library. The bug was
caused by the programmer using the length field from the incoming
telegram without validating it against the amount of data that actually
arrived. It is totally independent of the programming language used, and
totally independent of the SSL algorithms and encryption.
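To make that concrete, here is a rough sketch of the failure pattern -
invented names and a simplified layout, not the actual OpenSSL code:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Rough sketch of a heartbeat responder.  "req" is the raw heartbeat
   record as received; "req_len" is the number of bytes that actually
   arrived from the network. */
static size_t build_reply(const uint8_t *req, size_t req_len,
                          uint8_t *reply)
{
    if (req_len < 2)
        return 0;                 /* too short to hold a length field */

    /* The first two bytes of the request claim the payload length. */
    size_t claimed = ((size_t)req[0] << 8) | req[1];

    /* The heartbleed pattern: omit this check, trust "claimed", and
       the memcpy below reads past the request buffer, echoing adjacent
       heap contents back to the sender. */
    if (claimed + 2 > req_len)
        return 0;                 /* silently drop the bogus request */

    memcpy(reply, req + 2, claimed);   /* echo the payload back */
    return claimed;
}

The entire bug was the absence of that one comparison between the
claimed length and the received length. No special language feature is
needed to write the check - though, to be fair, a bounds-checked
language would have turned the omission into a crash rather than a
silent information leak.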
Unchecked buffers and stack overflows have been chronic security lapses for
decades now, thousands and thousands of times. Wandering around data structures
with autoincrement pointers is like stumbling in a minefield, blindfolded. With
various-sized mines.
The c language and, more significantly, the c language culture, will make this
sort of thing keep happening.
That's true to a fair extent, though less so now than it used to be -
people are more aware of the problem, and use safer alternative functions.
However, the bug in heartbleed has nothing to do with this - either in
terms of "C culture" or programming language.
I don't disagree that C programs often have security flaws that are easy
to introduce due to C's lack of resource management and proper strings - but
I strongly disagree with the implication that other languages are /safe/
or /secure/ solely because they don't have these issues.
Data should be stored in declared buffers, and runtime errors thrown if attempts
are made to address outside the buffer. Items should be addressed by named
indexes, not by wandering around with pointers.
That solves /some/ security issues - but there is nothing in C that
stops you doing this if you understand how to program secure software.
But it is a serious mistake to think that such issues are actually the
most important factors in secure programming - or that other languages
with garbage collection, no pointers, and safe arrays are actually more
secure. Insecure software is just as common with Python, PHP, Perl, and
other higher level languages.
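For what it's worth, the discipline you describe is perfectly
expressible in plain C. A minimal sketch, with invented names:

#include <stdio.h>
#include <stdlib.h>

/* A declared buffer that carries its size with it, accessed only
   through a checked function.  The names are invented for
   illustration. */
typedef struct {
    unsigned char *data;
    size_t len;
} buffer_t;

/* Checked read: a runtime error is raised on any out-of-range index. */
static unsigned char buffer_get(const buffer_t *b, size_t index)
{
    if (index >= b->len) {
        fprintf(stderr, "buffer_get: index %zu out of range (len %zu)\n",
                index, b->len);
        abort();    /* the "runtime error" - fail immediately and loudly */
    }
    return b->data[index];
}

Nothing in the language forces you to use such accessors, of course -
that is exactly the point about discipline rather than language.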
You are taking one class of errors which occur more often in C (because
checks have to be made manually in the code, rather than automatically
in the language) and assuming this is a major security issue. But that
is simply not the case. Buffer overflows and similar errors usually
result in crashes - such programs are therefore susceptible to
denial-of-service attacks, but they seldom (though not never, of course)
lead to information leaks or privilege escalation. And the alternative -
using a language with managed buffers and runtime errors - gives much
the same effect when an unexpected runtime error terminates the program.
Writing secure software is about thinking securely - the language of
implementation is a minor issue, partly because the coding itself should
be a small part of the total workload.
The heartbleed bug did not come from an issue in the implementation
language - it came from not /thinking/ enough about where information
came from. Arguably it came from poor design of the heartbeat part of
the protocol - it is never a good idea for the same information (the
length of the test data) to be included twice in the telegram, as it can
lead to confusion and mistakes.
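Roughly, the redundancy looks like this (the field names here are
invented - RFC 6520 and the TLS record format define the real layouts):

#include <stdint.h>

/* The record layer already says how many bytes arrived, yet the
   heartbeat message inside it carries its own copy of the payload
   length. */
struct tls_record_header {
    uint8_t  content_type;
    uint16_t version;
    uint16_t length;           /* how long the record really is */
    /* ...the heartbeat message follows... */
};

struct heartbeat_message {
    uint8_t  type;
    uint16_t payload_length;   /* the second copy - attacker controlled */
    /* ...payload and padding follow... */
};

/* Every implementation must remember to cross-check the two lengths;
   forgetting that single check was the whole of the heartbleed bug. */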
And it's crazy for compilers to not use MMUs to prevent data and stacks and code
from being all mixed up.
Compilers do not and should not manipulate the MMU.
I think what you mean to say is that stacks and data segments should be
marked non-executable. This is true in general, but not always - there
are some types of code feature that require run-time generation of code
(such as "trampolines" on the stack) to work efficiently. If you can
live without such features, then stacks and data segments can be marked
non-executable - and that is typically done on most systems. (It is the
OS that controls the executability of memory segments, not the compiler.)
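As an illustration of the exception, here is a minimal sketch using
GCC's nested-function extension (a GCC-specific feature, not standard
C) - taking the nested function's address is what forces the run-time
code generation:

#include <stdio.h>

/* An ordinary callback interface: no context pointer is passed. */
static void apply(void (*f)(int))
{
    f(32);
}

int main(void)
{
    int base = 10;

    /* GCC extension: a nested function that captures "base".  Taking
       its address below forces GCC to build a "trampoline" - a few
       instructions written onto the stack at run time that load the
       context pointer and jump to the real code.  That is why such
       code traditionally needs an executable stack. */
    void add_and_print(int x)
    {
        printf("%d\n", base + x);   /* prints 42 */
    }

    apply(add_and_print);
    return 0;
}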
Note that most high-level languages, with the sort of run-time control
and limitations that you are advocating, are byte-compiled and run by a
virtual machine. In a very real sense, the data section of the VM
contains the program it is executing.
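As a toy illustration of that point, here are a few bytes of "program"
stored in an ordinary data array and executed by a loop - a deliberately
minimal sketch, not modelled on any real VM:

#include <stdio.h>

/* Opcodes for a toy stack machine. */
enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

int main(void)
{
    /* The "program" is plain data in the VM's data section: it
       computes and prints 2 + 3. */
    const unsigned char program[] = {
        OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT
    };

    int stack[16], sp = 0;

    for (size_t pc = 0; ; ) {
        switch (program[pc++]) {
        case OP_PUSH:  stack[sp++] = program[pc++]; break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
        case OP_PRINT: printf("%d\n", stack[sp - 1]); break;
        case OP_HALT:  return 0;
        }
    }
}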
Given the compute horsepower around these days, most programmers should be
running interpreters, Python-type things, that can protect the world from the
programmers.
Again, you are showing that you have very little idea of the issues
involved, and are merely repeating the popular myths. And one of these
myths is that we have so much computing horsepower that the efficiency
of the programs doesn't matter. Tell that to people running farms of
servers, and the people paying for the electricity.
Python, and languages like it, protect against /some/ kinds of errors
that are common in C. But they are far from the sort of magic bullet
you seem to believe in - they are just a tool. The ease and speed of
development with Python can also lead to a quick-and-dirty attitude to
development where proof-of-concept and prototype code ends up shipping -
there are pros and cons to any choice of language.
It is up to programmers and program designers to understand secure
programming, and to code with an appropriately paranoid mindset,
regardless of the language.
Ada has better protections than c, but requires discipline that most programmers
don't have time for.
Again, Ada is just an alternative tool, with its pros and cons.
For the record, I use Python for most PC programming - because it makes
it easier and faster to deal with strings and with more complex data
structures, and because I often find its interactivity useful in
development. I use C for almost all my embedded programming - and to my
knowledge, I have never written C code with a buffer overflow.