Driver to drive?

In article <m4mlk9llghcqpij2dcr2slrru04qcsf7fr@4ax.com>,
cd@spamfreezone.net says...
On Fri, 11 Apr 2014 21:21:27 -0700 (PDT), dakupoto@gmail.com wrote:


There is absolutely NOTHING wrong with C. The original
designers - Ritchie and Kernighan - created a language, often
called "souped-up assembly", to do system-level tasks.
Incompetent programmers cannot blame their stupidity
on some computer language. Please try finding out why
Yahoo, and later Bing, could not come remotely close to
Google in terms of search speed performance.


K&R wrote the best book on a computer language ever: the C Programming
Language. It's beautifully written; so clear and precise in every
respect - a rare achievement in a technical work.

Sure it is, and I bet the coder that worked on that section knew
exactly what they were doing!

Jamie
 
On Sun, 13 Apr 2014 17:20:37 -0500, the renowned "Tim Williams"
<tmoranwms@charter.net> wrote:

"Cursitor Doom" <cd@spamfreezone.net> wrote in message
news:j6nlk95u5hfai3tpu7ab3errt1bpdv17oo@4ax.com...
At least in a bar you can see for yourself if the other person is of
the age, sex and race they claim to be.

Even a bar in Thailand?...

Tim

Okay, age and race.


Best regards,
Spehro Pefhany
--
"it's the network..." "The Journey is the reward"
speff@interlog.com Info for manufacturers: http://www.trexon.com
Embedded software/hardware/analog Info for designers: http://www.speff.com
 
On Sun, 13 Apr 2014 15:37:02 -0700 (PDT), Klaus Kragelund
<klauskvik@hotmail.com> wrote:

On Monday, April 14, 2014 12:30:45 AM UTC+2, k...@attt.bizz wrote:
On Sun, 13 Apr 2014 13:22:28 -0700 (PDT), Klaus Kragelund
<klauskvik@hotmail.com> wrote:

On Sunday, April 13, 2014 5:14:50 PM UTC+2, k...@attt.bizz wrote:
On Sat, 12 Apr 2014 23:56:10 -0700 (PDT), Klaus Kragelund
<klauskvik@hotmail.com> wrote:

Quoted:

On Friday, April 11, 2014 12:37:39 AM UTC+2, k...@attt.bizz wrote:
On Thu, 10 Apr 2014 15:07:38 -0700, John Larkin
<jlarkin@highlandtechnology.com> wrote:
On Thu, 10 Apr 2014 18:50:31 GMT, Jan Panteltje
<pNaonStpealmtje@yahoo.com> wrote:
On a sunny day (Thu, 10 Apr 2014 09:16:12 -0700) it happened John Larkin
<jjlarkin@highNOTlandTHIStechnologyPART.com> wrote in
<asgdk9d29q9ds74218i3r5me51loi12pm5@4ax.com>:

I avoid battery-powered tools. They are wimpy, and the batteries will die in a
year or two.

You have a cellphone?

Sure, a simple one. I charge it about every other week, and I've
replaced the battery once. But it's not a power tool.

You're not going to get a horsepower or so out of a battery for long,
especially when the battery is two years old.

You're not going to get a "horsepower or so" out of a hand tool.
You're in the stationary tool realm at a HP (Craftsman HPs don't
count).

Sure you will.

I swear by Festool tools. I am dreaming about this one:

I like Festools, too. Great stuff. You are dreaming about your
knowledge of motors.

https://www.festool.com/Microsite/Pages/TSC.aspx

I own one. I can *guarantee* you that it does *NOT* develop anywhere
close to 2HP. It's actually a rather wimpy saw (my DeWalt is far more
powerful) but also quite useful. Its purpose is to cut sheet goods;
not an incredibly demanding job.

You bought one? Ha, imagining stuff now, eh?

Yes, I've owned a TS55 for three or four years, with several of the
attachments (two 55" rails, a 106", and a parallel guide
w/extensions). I've said as much. I also own a 1400EQ router and a
500Q Domino. Unlike your blathering, I do know a little about these
things. You can resume your lies now.

...and right on cue...

> Can't you read? The "TSC" was the one you replied to,

Illiterate much?

> but I guess you will only read what you want

More lies. I read what you (try) to write.
 
On Mon, 14 Apr 2014 00:51:16 +0200, Cursitor Doom
<cd@spamfreezone.net> wrote:

On Sun, 13 Apr 2014 18:43:03 -0400, krw@attt.bizz wrote:

On Sun, 13 Apr 2014 20:51:37 +0200, Cursitor Doom
<cd@spamfreezone.net> wrote:

On Sun, 13 Apr 2014 01:46:50 -0700 (PDT), haiticare2011@gmail.com
wrote:


I don't either - but Facebook is really "Fakebook." Fakebook likes and Fakebook
"friends" are a falsity. Similar to people one meets in a bar.

I don't go near any of it. The people who have the most to say about
themselves are the most vacuous, self-obsessed, narcissistic, boring
and uninspiring of individuals; well worth avoiding any contact
with.

You don't do any of it but you know what everyone is saying. Well...

That is correct. Throughout my life I've noticed the most interesting
people are the ones who are highly reluctant about discussing what
they've done and what they plan to do. Those are precisely the kind of
people I'd like to get to know, but they obviously won't go near
Faecesbook, and so that's why I myself don't bother with it, never
have and never will.

Let me get this straight. You've never so much as talked to "those
people", you have no interest in talking to "those people", yet in
your mind you still know everything about "those people". There is a
word for that attitude. "Bigot" comes to mind.

No, I don't want anything to do with Facebook, either, but not because
of some scary "those people" who inhabit the "swamp".
 
"Sylvia Else" <sylvia@not.at.this.address> wrote in message
news:br0tf2Fs854U1@mid.individual.net...
At the time of its creation, both memory and CPU time were expensive. It
wasn't practical to specify the language in a way that ensured bounds
checking because of the memory and time costs involved.

Have they never been "inexpensive"?

This sort of malady has been known since the 60s at least, and bounds
checking has been around since about the same time (e.g., when was BASIC
introduced -- which AFAIK, always has bounds checking... at least when
interpreted?). For God's sake, the 80186 even brought the BOUND
instruction to x86. It goes unused to this day!

It doesn't help that C's [ASCII] "strings" are zero-terminated. They're
just stupid arrays, and the standard libraries decided they should be
treated in that way. There was absolutely no necessity of doing it that
way. Of all the dubious aspects of the language, that's the one that
should receive the most ire. What a stupid idea. Other languages (I'm
only familiar with QuickBasic offhand) store strings with a length
prefix, and do bounds checking besides.
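As a rough illustration of the length-prefix alternative, here is a minimal counted-string type in C (the type name, function name, and fixed capacity are inventions for this sketch, not any particular language's actual representation):

```c
#include <stddef.h>
#include <string.h>

/* A minimal counted string in the QuickBasic/Pascal spirit: the length
   is stored alongside the bytes, so no terminator scan is needed and
   every append can be bounds-checked against the capacity. */
#define PSTR_CAP 64

struct pstr {
    size_t len;             /* bytes currently in use */
    char   buf[PSTR_CAP];   /* may contain embedded NULs; no terminator */
};

/* Append at most as many bytes as fit; returns the number actually
   copied, so the caller can detect truncation. */
size_t pstr_append(struct pstr *p, const char *src, size_t n) {
    size_t room = PSTR_CAP - p->len;
    if (n > room)
        n = room;                      /* truncate instead of overflowing */
    memcpy(p->buf + p->len, src, n);
    p->len += n;
    return n;
}
```

The point of the sketch is that the overflow check lives in one place, inside the type's own operations, instead of being re-derived at every call site.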

Here's another thought:
Bad programmers want to be like good programmers. So they want their code
to be fast. So they write ugly, terse code. They prematurely optimize.
They settle for whatever algorithms they can understand. And they leave
out security and sanity features, like bounds checking. They completely
miss the fact that good programmers achieve all of these goals (or at
least strive to).

The same sort of logic that, say, a racecar driver might apply to remove
the seatbelts -- saving a few pounds and eking out that last 0.02 second
on the quarter mile or whatever.

The same focus on short-term gains that's destroying the rest of the
world, not just software...

Tim

--
Seven Transistor Labs
Electrical Engineering Consultation
Website: http://seventransistorlabs.com
 
On 14/04/2014 12:49 PM, Tim Williams wrote:
"Sylvia Else" <sylvia@not.at.this.address> wrote in message
news:br0tf2Fs854U1@mid.individual.net...
At the time of its creation, both memory and CPU time were expensive. It
wasn't practical to specify the language in a way that ensured bounds
checking because of the memory and time costs involved.

Have they never been "inexpensive"?

This sort of malady has been known since the 60s at least, and bounds
checking has been around since about the same time (e.g., when was BASIC
introduced -- which AFAIK, always has bounds checking... at least when
interpreted?). For God's sake, the 80186 even brought the BOUND
instruction to x86. It goes unused to this day!

If the instruction is used, then it occupies memory space, and consumes
CPU time. In the vast majority of cases, and indeed, in all correct
programs, this is space and time wasted.

Bounds checking also implies that the compiler can determine the limits
of the memory area being used. The simplest case, where the memory is an
array whose declaration is visible to the compiler is easy to handle,
but once parameters are involved, the implementation has to start
passing descriptors around, and the bounds checking becomes even more
expensive.
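A sketch of what such a descriptor might look like in C - essentially a pointer that travels with its bounds, so a callee can check indices that a bare pointer parameter could not (the names here are illustrative, not any compiler's actual scheme):

```c
#include <stddef.h>

/* A "fat pointer" descriptor: data plus its extent. Passing this
   instead of a bare int* is exactly the extra cost the text describes. */
struct slice {
    int   *data;
    size_t len;
};

/* Checked element access: returns 1 on success, 0 if the index is out
   of bounds, instead of silently reading past the end. */
int slice_get(struct slice s, size_t i, int *out) {
    if (i >= s.len)
        return 0;          /* out of bounds: report instead of reading */
    *out = s.data[i];
    return 1;
}
```

Every access pays a compare and branch, which is the space/time cost being weighed in the paragraph above.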

Back in the days of 640K PCs, this was an unaffordable luxury. These
days, both the cost situation and the security environment have changed,
so one would prefer all programming to be done in languages that do
actually bear these costs.

It doesn't help that C's [ASCII] "strings" are zero-terminated. Because
they're just stupid arrays, and the standard libraries decided they should
be treated in that way. Absolutely no necessity of doing it that way. Of
all the dubious aspects of the language, that's one that should receive
the most ire. What a stupid idea. Other languages (I'm only familiar
with QuickBasic offhand) store strings with a length prefix. And do
bounds checking besides.

It's not inherently unsafe, though anyone who's doing much string
manipulation will tend to write functions for the purpose, if only to
avoid mindlessly repetitive code.

However, 'C' was never designed for use by people who do not consider
the security ramifications of what they're writing.

Here's another thought:
Bad programmers want to be like good programmers. So they want their code
to be fast. So they write ugly, terse code. They prematurely optimize.
They settle for whatever algorithms they can understand. And they leave
out security and sanity features, like bounds checking. They completely
miss the fact that good programmers achieve all of these goals (or at
least strive to).

I think you're misstating that. The problem is not that bad programmers
are trying to be like good programmers; it's that they don't see the
possible ramifications of what they're doing. In the ranks of employed
programmers, they may not give a stuff anyway - it's just a job.

This is not confined to the kind of mistake that can be made in 'C'. I've
lost count of the number of times I've seen people construct SQL
statements from input data without considering the possibility that the
input data may contain syntactically significant characters (an SQL
injection vulnerability).
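A small C sketch of why that matters (the function names are invented for illustration; the real fix is parameterized statements, e.g. sqlite3_prepare_v2/sqlite3_bind_text in SQLite's C API, rather than string building or character filtering):

```c
#include <ctype.h>
#include <stddef.h>
#include <stdio.h>

/* Naive query building: a "name" like  x' OR '1'='1  changes the
   statement's meaning entirely, because the quote closes the literal. */
void build_query(char *out, size_t cap, const char *name) {
    snprintf(out, cap, "SELECT * FROM users WHERE name = '%s';", name);
}

/* One crude defense for this sketch: reject input containing anything
   but alphanumerics and underscore. Filtering is fragile in general;
   prepared statements keep data and syntax separate by construction. */
int name_is_safe(const char *name) {
    for (; *name; name++)
        if (!isalnum((unsigned char)*name) && *name != '_')
            return 0;
    return 1;
}
```

With a prepared statement, the input above would simply match (or fail to match) a user literally named `x' OR '1'='1`, instead of rewriting the query.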

So, yes, we should now be using safe languages, but we still need to
manage the various levels of competence. This implies proper reviews of
work, but it is expensive, and people frequently don't really understand
the review process either. Even in environments where code review is
part of the formal process, what comes out is frequently just nitpicking
over variable naming and spelling.

Sylvia.
 
On Sun, 13 Apr 2014 21:48:15 +0200, Cursitor Doom wrote:

On Fri, 11 Apr 2014 20:24:01 -0700, josephkk
joseph_barrett@sbcglobal.net> wrote:


See Link:

http://arstechnica.com/security/2014/04/critical-crypto-bug-exposes-
yahoo-mail-passwords-russian-roulette-style/

?;..((



This scare seems wonderfully well-timed to coincide with Windows XP
support stopping. Some ruse to panic people into upgrading to Win8,
possibly? Just a guess.

Corrected here with the recently released, fixed OpenSSL.
The bank checks out as well, so it's all good.
Linux; compiled the corrected software this A.M.
 
On Mon, 14 Apr 2014 11:54:39 +1000, Sylvia Else
<sylvia@not.at.this.address> wrote:

At the time of its creation, both memory and CPU time were expensive. It
wasn't practical to specify the language in a way that ensured bounds
checking because of the memory and time costs involved.

In the 1970s I wrote a lot of programs for 16-bit minicomputers
using FORTRAN IV, which only had arrays and indexes - no pointers or
character strings. At least some compilers had the option to generate
run-time index checks. This was usually employed during product
development, but turned off in the shipped product.

FORTRAN IV did not have any string data type, so you had to write your
own string library using byte arrays (or, in the worst case, integer
arrays). It was as primitive as C. The only difference is that C
provides a ready-made string subroutine library (strcpy etc.).

Fortran-77 integrated strings into the language. A similar integration
was available only in C++, not in C.
 
On 12/04/2014 12:58 PM, John Larkin wrote:
On Fri, 11 Apr 2014 20:24:01 -0700, josephkk
<joseph_barrett@sbcglobal.net> wrote:


See Link:

http://arstechnica.com/security/2014/04/critical-crypto-bug-exposes-yahoo-mail-passwords-russian-roulette-style/

?;..((


Here is the technical analysis:

http://xkcd.com/1354/


And some details:

http://www.theregister.co.uk/2014/04/09/heartbleed_explained

which reinforces what an astonishingly bad programming language c is.

At the time of its creation, both memory and CPU time were expensive. It
wasn't practical to specify the language in a way that ensured bounds
checking because of the memory and time costs involved.

Another common mistake - continuing to use freed memory - was even
harder to address.

It would be nice if all the application software exposed to the web were
rewritten in a safe language, but that's not going to happen any time soon.

That said, anyone creating a new web-exposed application would have to
have their head examined if they wrote it in C or C++.

Sylvia.
 
On 12/04/2014 03:22, Jim Thompson wrote:
On Fri, 11 Apr 2014 20:24:01 -0700, josephkk
<joseph_barrett@sbcglobal.net> wrote:


See Link:

http://arstechnica.com/security/2014/04/critical-crypto-bug-exposes-yahoo-mail-passwords-russian-roulette-style/

?;..((


Only if you're dumb enough to use Yahoo, gmail, or any Micro$hit
product.

...Jim Thompson

Actually, no, you are wrong. The fault in this case lies in the Unix-based
OpenSSL, and it was used by many banks and other corporate secure
websites. Tools are available now to check whether the dozy b*stards have
fixed their sites, and the only sensible thing to do is change your
password(s) on all affected sites once they are secure again.

http://www.independent.co.uk/life-style/gadgets-and-tech/heartbleed-bug-undermines-the-safety-of-nearly-two-thirds-of-the-web-9247918.html?origin=internalSearch

For once MickeySoft is not guilty. This was an entirely open source MFU!

A fair number of banks and other financial institutions have used this
OpenSSL code to implement their "secure" https transactions.

--
Regards,
Martin Brown
 
On 12/04/14 16:36, edward.ming.lee@gmail.com wrote:
On Saturday, April 12, 2014 7:30:59 AM UTC-7, edward....@gmail.com
wrote:
http://www.theregister.co.uk/2014/04/09/heartbleed_explained
which reinforces what an astonishingly bad programming language
c is.
That just reinforces what an astonishingly poor understanding you
- and many others - have about programming languages, and about
bugs in software.

This was a bug in the implementation of the response to
"heartbeat" telegrams in OpenSSL, which is a commonly used
library for SSL. The bug was caused by the programmer using data
in the incoming telegram without double-checking it. It is
totally independent of the programming language used, and totally
independent of the SSL algorithms and encryption.
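A minimal C sketch of the pattern just described (heavily simplified; the struct and function names are illustrative, not OpenSSL's actual API): the peer supplies both a payload and a claimed payload length, and the missing step was simply refusing a claim larger than what actually arrived on the wire.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* One heartbeat request as seen by the server (simplified sketch). */
struct heartbeat {
    uint16_t claimed_len;          /* attacker-controlled length field */
    const unsigned char *payload;  /* bytes that actually arrived */
    size_t actual_len;             /* how many of them there really are */
};

/* The buggy shape trusted claimed_len and memcpy'd that many bytes,
   echoing back whatever memory happened to follow the real payload.
   The fixed shape checks the claim against reality first. */
int echo_payload(const struct heartbeat *hb, unsigned char *out, size_t cap) {
    if (hb->claimed_len > hb->actual_len)   /* the missing check */
        return -1;                          /* drop malformed request */
    if (hb->claimed_len > cap)              /* don't overrun our buffer */
        return -1;
    memcpy(out, hb->payload, hb->claimed_len);
    return (int)hb->claimed_len;
}
```

As the surrounding posts argue, the check itself is language-independent; what C contributes is that omitting it silently reads adjacent memory instead of faulting.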

This is really a problem of the go
(ghost key pressed)

This is really a problem of the Government/Industrial leadership (or
lack of it). This could have been fixed quietly with much less damage.
But now everybody (including criminals) knows how, and how easy it
is, to hack into servers.

If someone discovers a bug, who are you going to call? The NSA, CIA,
FBI, etc.?

Security flaws like this are usually handled discreetly at first.
People who find a flaw report it to the developers (openssl in this
case), and/or to Linux/BSD distributors (since it is used on such
systems). Big sites such as google, facebook, banks, etc., have
dedicated people who will track such information and get the fix in
place in their systems. Then there will be public disclosure so that
all the "small" people can see the problem and fix their servers, and so
that end users can take appropriate precautions.

There is /no/ way to get the information to the multitudes of small site
admins without also giving the information to the bad guys. But usually
the big site admins get the information first, as do the upstream people
- so that when the small guys hear of the problem, they can just
"apt-get update" to get a fixed version of the libraries.


Of course, sometimes people who find such flaws think it is more
profitable to sell the information to the bad guys rather than report it
to the developers. And sometimes people think it is better to be open
about everything as soon as possible. But usually the bugs are reported
to developers, and only revealed to the masses when the fix is in place
(or when the vendors have failed to release a fix in a timely fashion).
 
On 2014-04-12, John Larkin <jjlarkin@highNOTlandTHIStechnologyPART.com> wrote:
Unchecked buffers and stack overflows have been chronic security lapses for
decades now, thousands and thousands of times. Wandering around data structures
with autoincrement pointers is like stumbling through a minefield, blindfolded. With
various-sized mines.

The c language and, more significantly, the c language culture, will make this
sort of thing keep happening.

Data should be stored in declared buffers, and runtime errors thrown if attempts
are made to address outside the buffer. Items should be addressed by named
indexes, not by wandering around with pointers.

Declared buffers make for inflexible software.

Imagine if your email program could only handle 20 attachments of
300K each because that was the buffer size. If someone wanted to send
you an email with a 2M attachment, you'd have to exit that program and
run the version that supported 200K text and a 6M attachment.

And if you upgraded your RAM you'd need to install a new operating
system.

--
umop apisdn


--- news://freenews.netfront.net/ - complaints: news@netfront.net ---
 
On 2014-04-12, haiticare2011@gmail.com <haiticare2011@gmail.com> wrote:
So how to protect against this threat?
I just got an email from a friend's yahoo address, saying that this person
had been in an accident and needed me to send $1300 to a western union in Rome,
Italy. All false of course.
What to do? - practical steps?

since it's yahoo: DMARC


--
umop apisdn


--- news://freenews.netfront.net/ - complaints: news@netfront.net ---
 
On 2014-04-13, Cursitor Doom <cd@spamfreezone.net> wrote:
On Sun, 13 Apr 2014 14:44:33 -0700, John Larkin
<jjlarkin@highNOTlandTHIStechnologyPART.com> wrote:

On Sun, 13 Apr 2014 20:47:34 +0200, Cursitor Doom <cd@spamfreezone.net> wrote:

On Sat, 12 Apr 2014 10:39:03 -0700, John Larkin
<jjlarkin@highNOTlandTHIStechnologyPART.com> wrote:


Working in c, always check every buffer for size errors. Study every pointer.
Don't just write the code, READ it.

I don't think that helps. You can study and study and study a chunk of
code over and over and over again and still not see where a possible
problem lies. Computer Guru extraordinaire Steve Gibson (grc.com) has
warned programmers about this phenomenon many times. Our thought
processes simply don't work that way.

So, write code, compile and run, ship it, but don't bother to check it?

Step-thru debugger.

A memory access checker like "valgrind" would have found this, but only
while it was being exploited. The problem is that no one tested it with
bad data.




--
umop apisdn


--- news://freenews.netfront.net/ - complaints: news@netfront.net ---
 
On 12/04/2014 16:15, edward.ming.lee@gmail.com wrote:
That just reinforces what an astonishingly poor understanding you - and
many others - have about programming languages, and about bugs in software.

This was a bug in the implementation of the response to "heartbeat"
telegrams in OpenSSL, which is a commonly used library for SSL. The bug
was caused by the programmer using data in the incoming telegram without
double-checking it. It is totally independent of the programming
language used, and totally independent of the SSL algorithms and encryption.

Unchecked buffers and stack overflows have been chronic security lapses for
decades now, thousands and thousands of times. Wandering around data structures
with autoincrement pointers is like stumbling through a minefield, blindfolded. With
various-sized mines.

The c language and, more significantly, the c language culture, will make this
sort of thing keep happening.

Unfortunately I have to agree with you, but it isn't strictly down to
either programmers or computer scientists; it is because businesses
prefer to ship first and be damned later.

There has been a lot of progress in static analysis to catch at compile
time the sorts of errors that humans are likely to make and options to
defend at runtime against likely exploits. However, the tools needed are
not easily available for teaching and are only available overpriced in
the environments least likely to use them - business and enterprise!

Data should be stored in declared buffers, and runtime errors thrown if attempts
are made to address outside the buffer. Items should be addressed by named
indexes, not by wandering around with pointers.

It is possible to design languages that are fully range checked and not
beyond the wit of man to make it so that a memory fetch from a location
that has never previously been written to and is not declared as memory
mapped IO will generate a page fault. High Integrity Ada and Modula2
lend themselves to this sort of approach when you want security.

C pointers are not *necessarily* evil but you do have to be defensive in
their use and very untrusting of external data sources that may
masquerade as something else or fib about their true length.

The C construct

while (*d++ = *s++);

has a lot to answer for.
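For contrast, here is that idiom next to a bounded variant (a sketch in the spirit of BSD's strlcpy; the function names are inventions for this example). The unchecked loop copies until it finds the source's NUL, however far past the destination that takes it:

```c
#include <stddef.h>

/* The classic idiom: copies until the source's terminating NUL,
   with no knowledge of how big the destination actually is. */
char *copy_unchecked(char *d, const char *s) {
    char *ret = d;
    while ((*d++ = *s++))
        ;
    return ret;
}

/* A bounded variant: never writes more than dstsize bytes, always
   NUL-terminates (when dstsize > 0), and returns the full source
   length so the caller can detect truncation. */
size_t copy_bounded(char *dst, const char *src, size_t dstsize) {
    size_t srclen = 0;
    while (src[srclen] != '\0')
        srclen++;
    if (dstsize > 0) {
        size_t n = (srclen < dstsize - 1) ? srclen : dstsize - 1;
        for (size_t i = 0; i < n; i++)
            dst[i] = src[i];
        dst[n] = '\0';                 /* guaranteed terminator */
    }
    return srclen;
}
```

The bounded form costs one extra parameter and a comparison, which is the trade-off the thread keeps circling around.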

> There is already something like that: server-side Java. But think for a moment how that would impact performance of servers with hundreds and thousands of clients. For servers, every bit of performance counts.

Nothing compels Java to be interpreted. It could be compiled to native
code with some advantage. Compilers have come a long way.
And it's crazy for compilers to not use MMUs to prevent data and stacks and code
from being all mixed up.

Remapping the MMU hundreds or thousands of times for every program? Impractical!

OS/2 actually did keep data and code entirely separate. The shift back
to a single flat memory space containing everything came later.

On balance I think I prefer it to the flawed Intel noexecute bit bodge.

--
Regards,
Martin Brown
 
On 2014-04-14, Tim Williams <tmoranwms@charter.net> wrote:
"Sylvia Else" <sylvia@not.at.this.address> wrote in message
news:br0tf2Fs854U1@mid.individual.net...
At the time of its creation, both memory and CPU time were expensive. It
wasn't practical to specify the language in a way that ensured bounds
checking because of the memory and time costs involved.

Have they never been "inexpensive"?

This sort of malady has been known since the 60s at least, and bounds
checking has been around since about the same time (e.g., when was BASIC
introduced -- which AFAIK, always has bounds checking... at least when
interpreted?). For God's sake, the 80186 even brought the BOUND
instruction to x86. It goes unused to this day!

In DOS it's a good way to kill trees.

"strings" are zero-terminated. Because
they're just stupid arrays, and the standard libraries decided they should
be treated in that way. Absolutely no necessity of doing it that way.

Just use the mem* functions instead of the str* functions if you want
known-length strings.
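A brief sketch of that suggestion: carry explicit lengths and use the mem* family throughout, which also copes with embedded NUL bytes that the str* functions would stop at (the helper names here are invented for illustration):

```c
#include <stddef.h>
#include <string.h>

/* Compare two counted byte ranges; unlike strcmp, embedded NULs are
   just ordinary data. */
int counted_equal(const char *a, size_t alen, const char *b, size_t blen) {
    return alen == blen && memcmp(a, b, alen) == 0;
}

/* Find the first occurrence of byte c within a counted range; the
   search never runs past len, terminator or no terminator. */
const char *counted_find(const char *s, size_t len, char c) {
    return memchr(s, (unsigned char)c, len);
}
```

This is effectively the length-prefix discipline bolted onto standard C: the length travels in a separate variable instead of a prefix byte.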

Of all the dubious aspects of the language, that's one that should receive
the most ire. What a stupid idea. Other languages (I'm only familiar
with QuickBasic offhand) store strings with a length prefix. And do
bounds checking besides.

yeah, but doesn't it put some stupid arbitrary limit on string length?

The same focus on short-term gains that's destroying the rest of the
world, not just software...


--
umop apisdn
 
On 12/04/2014 18:39, John Larkin wrote:
The CS departments of the world should have Manhattan-project, Man-on-the-moon
scale projects to make computing reliable. They have other priorities.

CS departments have made a lot of progress in this field for high
reliability code. It is *INDUSTRY* that isn't listening to them.

--
Regards,
Martin Brown
 
On 13/04/2014 19:47, Cursitor Doom wrote:
On Sat, 12 Apr 2014 10:39:03 -0700, John Larkin
<jjlarkin@highNOTlandTHIStechnologyPART.com> wrote:


Working in c, always check every buffer for size errors. Study every pointer.
Don't just write the code, READ it.

It does you no good in a language where the default is a null-terminated
string. It is always possible for a malevolent external data source to
send a string that will overwrite the end of any finite buffer.

The problem is that C programmers are *taught* to write in a style that
is concise and cryptic, but dangerous when abused by malevolent source data:

while (*d++ = *s++);

It wouldn't matter in a world where the data sources could be trusted.

I don't think that helps. You can study and study and study a chunk of
code over and over and over again and still not see where a possible
problem lies. Computer Guru extraordinaire Steve Gibson (grc.com) has
warned programmers about this phenomenon many times. Our thought
processes simply don't work that way.

You read what you think you have written. The only way to find certain
types of fault is to explain it to another person. Verbalising it uses
other brain pathways and allows another viewpoint. The chances are that
a second pair of eyes will not miss the same error as the author.

It isn't uncommon when style checking large ordinary documents like
reports to find "the the " or "a a " occurring spontaneously. Similar
unconscious repetition errors in software can be much more serious.

--
Regards,
Martin Brown
 
On Mon, Apr 14, 2014 at 11:37:40AM +0100, Martin Brown wrote:
On 12/04/2014 18:39, John Larkin wrote:

The CS departments of the world should have Manhattan-project,
Man-on-the-moon
scale projects to make computing reliable. They have other priorities.

CS departments have made a lot of progress in this field for high
reliability code. It is *INDUSTRY* that isn't listening to them.

Tinfoil hat mode:

And don't forget that spy agencies don't want people writing secure
code. The obstruction I've experienced over the last several years is
almost certainly partially related to this fact. My own secure
application framework is now more than six years behind schedule, and
that is no accident even if you discount the 'TLA sabotaging good code
theory'.


Regards,

Uncle Steve

--
Always talking, lurking behind my back; surveying a place to jam the
knife in. Why don't you all fuck off and learn how to become
productive humans? Too lazy.
 
