Driver to drive?

On 14/04/2014 16:47, John Larkin wrote:
On 14 Apr 2014 10:34:55 GMT, Jasen Betts <jasen@xnet.co.nz> wrote:

On 2014-04-14, Tim Williams <tmoranwms@charter.net> wrote:
"Sylvia Else" <sylvia@not.at.this.address> wrote in message
news:br0tf2Fs854U1@mid.individual.net...

Of all the dubious aspects of the language, that's one that should receive
the most ire. What a stupid idea. Other languages (I'm only familiar
with QuickBasic offhand) store strings with a length prefix. And do
bounds checking besides.

yeah, but doesn't it put some stupid arbitrary limit on string length?

PowerBasic doesn't put a limit on string length, allows embedded nulls, and has
groovy inherent string functions. Without hazards. Ask for a substring out of
the range of a string and you get the null string. Append to a string and it
just works.

I think you will find it limits maximum string lengths at 2^31-1 or
possibly 2^32-1. Older basics tend to limit it at 2^16-1 = 65535.

Memory was a rare expensive commodity when these languages were born.

I've written PB programs that manipulate huge data arrays, using subscripts,
that run 4x as fast as the obvious c pointer equivalents. With an afternoon of
playing with code and compiler optimizations, the c got close.

Only because you don't know what you are doing.

--
Regards,
Martin Brown
 
"Martin Brown" <|||newspam|||@nezumi.demon.co.uk> wrote in message
news:GaU2v.138614$vj2.122202@fx06.am4...
I think you will find it limits maximum string lengths at 2^31-1 or
possibly 2^32-1. Older basics tend to limit it at 2^16-1 = 65535.

Memory was a rare expensive commodity when these languages were born.

Of QuickBasic specifically, it was 32ki, because of architecture (8086,
real mode), not because of memory limitations. The string is limited to
one segment and has a signed 16 bit integer length parameter, compatible
with the INTEGER data type; QB has no unsigned integers.

There were a lot of bad compromises made in the language: having only
signed data types, it wouldn't make sense to offer 64k strings when you
can't even return the value of a LEN(string$) function call half the time.
I wouldn't call that an 'arbitrary limitation', it's partly practical
(saves having to specify an unsigned data type) and partly architectural.

I don't remember if the STRING's base address was segment (16 byte)
aligned, but allocations of that sort were common. Every STRING was a
WORD for length, followed by the value (including NUL, which is just
another character). I guess some people don't like the idea, maybe
because it makes all the offsets off by 2 (or 4 for a uint32_t*)? An
extra ADD or SUB instruction, BFD.
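
(For illustration only: a minimal C sketch of the layout described above - a
length word followed by the raw bytes. The struct and function names are made
up for the example, not QB internals.)

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical layout mirroring the scheme described above:
       a signed 16-bit length word followed by the bytes themselves. */
    typedef struct {
        int16_t len;        /* signed, hence the 32767-byte ceiling    */
        char    data[];     /* bytes follow; NUL is just another byte  */
    } PString;

    /* Build a length-prefixed string from a C buffer (NULL on overflow). */
    static PString *pstr_make(const char *src, size_t n)
    {
        if (n > INT16_MAX) return NULL;   /* the "architectural" limit */
        PString *p = malloc(sizeof *p + n);
        if (!p) return NULL;
        p->len = (int16_t)n;
        memcpy(p->data, src, n);          /* embedded NULs are preserved */
        return p;
    }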

*As opposed to int, which might've been 16 bits on an 8086, 32 on a 386
(protected mode). Depending on compiler. I don't even know. Always
better to use a known size. Yet another bizarre C anti-pattern.

One thing that was perversely valuable: STRINGs are the only dynamic
sort-of-object-oriented primitive in QB. So you'd often use strings as
dynamic arrays, lists and so on. All horribly inefficient of course
(every time you work on a string, it has to be re-allocated and copied),
but it was the only way.

As I recall, you can tell QB to allocate much more heap and thus use more
than 64k total of strings. That might be wrong. I do recall there's a
"long" mode which allows you to allocate and handle arrays over 64k in
length (of arbitrary indices, dimension and data type); they warn you it's
slower (of course, because of the segment calculations required).

Tim

--
Seven Transistor Labs
Electrical Engineering Consultation
Website: http://seventransistorlabs.com
 
On 04/14/2014 12:33 PM, Martin Brown wrote:
On 14/04/2014 16:38, John Larkin wrote:
On Mon, 14 Apr 2014 13:17:07 +0200, David Brown
david.brown@hesbynett.no
wrote:

On 12/04/14 16:48, John Larkin wrote:
On Sat, 12 Apr 2014 15:40:04 +0200, David Brown
david.brown@hesbynett.no

This was a bug in the implementation of the response to "heartbeat"
telegrams in OpenSSL, which is a commonly used library for SSL.
The bug
was caused by the programmer using data in the incoming telegram
without
double-checking it. It is totally independent of the programming
language used, and totally independent of the SSL algorithms and
encryption.

Unchecked buffers and stack overflows have been chronic security
lapses for
decades now, thousands and thousands of times. Wandering around data
structures
with autoincrement pointers is like stumbling in a minefield,
blindfolded. With
various-sized mines.

The c language and, more significantly, the c language culture, will
make this
sort of thing keep happening.

That's true to a fair extent, though less so now than it used to be -
people are more aware of the problem, and use safer alternative
functions.

However, the bug in heartbleed has nothing to do with this - either in
terms of "C culture" or programming language.

Of course it does. The coder used an autoincrement pointer to pick up
a buffer
size and dumped that amount of memory, without bounds checks,
addressing memory
whose content was entirely unknown. No programming language should
allow that.

Actually he didn't; that part was hidden inside the memcpy routine.

The problem was not sanity checking the parameters in the message for
validity. This is all too common :(

See e.g. http://xkcd.com/327/


Cheers

Phil Hobbs


--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics

160 North State Road #203
Briarcliff Manor NY 10510

hobbs at electrooptical dot net
http://electrooptical.net
 
On Monday, April 14, 2014 7:39:59 PM UTC-4, Tim Wescott wrote:
On Mon, 14 Apr 2014 16:12:09 -0700, haiticare2011 wrote:

I can't give an exhaustive list, due to time constraints, but here is
the first. It is:

The Human Genome Project

This has been a dismal failure. A worm has similar DNA to ours. Here is
just one brief view of it: I have heard supposedly intelligent PhD's say
that "everything about us is determined by DNA." And VC's went along
with this hoax.

But even a high school student in Biology 1A knows that the genotype and
a phenotype both interact to produce the organism.

Without going into details, the perpetrators of the HGP lured investors
by promising that patentable DNA sequences would predict cancer, and the
like.
But any cancer researcher knows that only a small percentage are linked
to genetics.

===========

I'm open to any suggestions of further scientific flim-flam. On my list
are AI, medical research, much nano-technology, alternative energy,
climate change,
SSRI drugs, medical treatments, IPhone, IOT...

Oh jeeze, don't stop there -- you left out evolution!

--
Tim Wescott
Control system and signal processing consulting
www.wescottdesign.com

About evolution - the discovery of "jumping genes" (1) was suppressed, and as well, it doesn't explain "jumps" in progress of complexity. In the main I agree with it, but I should mention that Darwin despised the term "survival of the
fittest." By fit, he meant that which "fit into the environmental forces," not
"fittest and strongest."
And...I should also mention the theory is all after the fact - it does not
explain "what" happened, just the reason for it to happen. (survival and
reproduction)

1. Barbara McClintock
 
On 2014-04-15 03:54, Lasse Langwadt Christensen wrote:
Den mandag den 14. april 2014 19.27.27 UTC+2 skrev Phil Hobbs:
On 04/14/2014 12:33 PM, Martin Brown wrote:
The problem was not sanity checking the parameters in the message for
validity. This is all too common :(

See e.g. http://xkcd.com/327/
that is obviously because the database was written in c ;)

-Lasse

I don't see what C's got to do with this. This concerns
some form of SQL.

You're supposed to sanitize all input data, no matter
what language.
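
(One common way to do that for SQL in particular is to bind parameters rather
than paste untrusted text into the statement. A minimal sketch using SQLite's
C API; the table and column names are hypothetical.)

    #include <sqlite3.h>

    /* Insert an untrusted name using a bound parameter instead of
       building the SQL text by concatenation. */
    int add_student(sqlite3 *db, const char *untrusted_name)
    {
        sqlite3_stmt *stmt;
        int rc = sqlite3_prepare_v2(db,
                     "INSERT INTO students(name) VALUES (?1);",
                     -1, &stmt, NULL);
        if (rc != SQLITE_OK) return rc;

        /* The name travels as data, never as SQL text. */
        sqlite3_bind_text(stmt, 1, untrusted_name, -1, SQLITE_TRANSIENT);
        rc = sqlite3_step(stmt);
        sqlite3_finalize(stmt);
        return rc == SQLITE_DONE ? SQLITE_OK : rc;
    }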

Jeroen Belleman

P.S. Clean up your posts, or better yet: Lose Google.
 
On Monday, April 14, 2014 5:57:39 PM UTC-5, jurb...@gmail.com wrote:
GM ain't got a leg to stand on anyway. They put the start control in the ignition, or kill switch. Fukum.

In the old days you had the switch, and the starter (control) was a pedal on the floor.

So if they did say anything tell them to STFU. What's more, I heard that in certain models the blinkers are controlled somehow through the computer. What kind of a brainiac puts the...

And, it's been found that hackers can access the PCM or ECM and shut it down, even affect the brakes via the ABS system, or if the thing has that totally stupidest idea in the world of electric power steering, it can affect the steering.

Really, I want like a 1965 car.

I also want an ashtray in it even though I do not smoke. I don't care, for that kind of money put it in. It's like next they'll be asking if you want tires with that, and then would you like air put in them.

Fuck all this less is more shit, this isn't 1984.

Hey! You have some great ideas. Let's all ditch our broadband Internet connections and go back to dial-up. Heck, let's screw that and go back to BBSs over coupler 300bps modems. Whoever thought of putting data through a coax line or high speed data through a telephone line can STFU.

Also, I really don't like starting the car with the key. I kind of like the idea of getting out and sticking the crank in the front and turning it by hand. Those (apparently) were the days.
 
On 15/04/2014 05:15, John Larkin wrote:
On Mon, 14 Apr 2014 13:27:27 -0400, Phil Hobbs
pcdhSpamMeSenseless@electrooptical.net> wrote:

On 04/14/2014 12:33 PM, Martin Brown wrote:

Actually he didn't; that part was hidden inside the memcpy routine.

The problem was not sanity checking the parameters in the message for
validity. This is all too common :(

See e.g. http://xkcd.com/327/

That's what you get when you can't tell data from code.

It is *nothing* to do with that at all. It is a failure to check that
the incoming message is correctly formed and then acting on it blindly.
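
(For illustration, roughly the shape of that failure and of the missing check,
in C. This is a simplified sketch with made-up names, not the actual OpenSSL
code.)

    #include <stdlib.h>
    #include <string.h>
    #include <stdint.h>

    /* 'claimed' comes straight from the attacker's message; 'received'
       is how much payload actually arrived on the wire. */
    unsigned char *echo_heartbeat(const unsigned char *payload,
                                  uint16_t claimed, size_t received)
    {
        /* The missing precondition: never trust the length field carried
           in the message itself. Without this check, memcpy walks off the
           end of 'payload' and leaks whatever sits beyond it. */
        if (claimed > received)
            return NULL;                 /* malformed request: drop it */

        unsigned char *reply = malloc(claimed);
        if (!reply)
            return NULL;
        memcpy(reply, payload, claimed); /* now provably within bounds */
        return reply;
    }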

Harvard architecture has made some inroads in embedded PICs. Data and
code are in separate memory spaces and completely separate.

Intel provided the means to have separate CODE and DATA but the flat,
no-distinction memory model of the early Motorola CPUs is in vogue now.

OS/2 actually used Intel's segmentation to keep code and data apart. It
was incredibly robust and if some process failed the rest all kept
running. It was fast enough to simulate a 16550 UART in software.

Windows' flat model won out due to IBM's crap marketing department.

--
Regards,
Martin Brown
 
On 2014-04-14, Tim Williams <tmoranwms@charter.net> wrote:
"Martin Brown" <|||newspam|||@nezumi.demon.co.uk> wrote in message
news:GaU2v.138614$vj2.122202@fx06.am4...
I think you will find it limits maximum string lengths at 2^31-1 or
possibly 2^32-1. Older basics tend to limit it at 2^16-1 = 65535.

Memory was a rare expensive commodity when these languages were born.

Of QuickBasic specifically, it was 32ki, because of architecture (8086,
real mode), not because of memory limitations. The string is limited to
one segment and has a signed 16 bit integer length parameter, compatible
with the INTEGER data type; QB has no unsigned integers.

Turbo C didn't have that string length limit on that architecture ...
you could certainly have a string over 512Ki and possibly over 700Ki if you
pulled out all the stops.

*As opposed to int, which might've been 16 bits on an 8086, 32 on a 386
(protected mode). Depending on compiler. I don't even know. Always
better to use a known size. Yet another bizarre C anti-pattern.

c has got known sizes now.
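
(For example, the fixed-width types C99 added in <stdint.h>/<inttypes.h> - a
trivial sketch:)

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        int16_t  qb_len  = 32767;        /* same width on every conforming compiler */
        uint32_t big_len = 4294967295u;  /* 2^32 - 1 */
        printf("%" PRId16 " %" PRIu32 "\n", qb_len, big_len);
        return 0;
    }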

--
umop apisdn


--- news://freenews.netfront.net/ - complaints: news@netfront.net ---
 
On 14/04/2014 15:24, David Brown wrote:
On 14/04/14 14:28, Martin Brown wrote:
On 14/04/2014 12:48, David Brown wrote:
On 14/04/14 12:47, Martin Brown wrote:
On 13/04/2014 19:47, Cursitor Doom wrote:
On Sat, 12 Apr 2014 10:39:03 -0700, John Larkin
jjlarkin@highNOTlandTHIStechnologyPART.com> wrote:


Working in c, always check every buffer for size errors. Study every
pointer.
Don't just write the code, READ it.

Does you no good in a language where the default is a null terminated
string. It is always possible for a malevolent external data source to
send a string that will overwrite the end of any finite buffer.

The problem is that C programmers are *taught* to write in a style that
is concise and cryptic, but dangerous when abused by malevolent source data:

while (*d++=*s++);

It wouldn't matter in a world where the data sources could be trusted.
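
(A bounded version of the same idiom, for comparison - a sketch in the spirit
of the BSD strlcpy, not a drop-in replacement for it.)

    #include <stddef.h>

    /* Copies at most dsize-1 bytes and always NUL-terminates. */
    size_t copy_bounded(char *d, const char *s, size_t dsize)
    {
        size_t n = 0;
        if (dsize == 0)
            return 0;
        while (n + 1 < dsize && (d[n] = s[n]) != '\0')
            n++;
        d[n] = '\0';    /* terminate even if the source was longer */
        return n;       /* bytes copied, excluding the NUL */
    }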


The key to secure programming - regardless of language - is to take your
untrusted data and sanitise and check it. /Then/ your data is trusted,
and you can take advantage of that.

Unfortunately we both know that that doesn't happen in the real world.
(at least it fails to occur in far too many software development shops)


There is a method that works quite well in order to keep trusted and
untrusted data separate - the Hungarian notation. Simonyi (the
Hungarian in question, working for MS) first used it to make
distinctions about data that could not easily be checked and enforced by
the compiler - in particular, incoming data strings would have a "us"
prefix for "unsafe string" and sanitised versions would get the prefix
"ss" for "safe string". If you stick rigidly to this convention, you
will not mix up your safe and unsafe data. This "Apps Hungarian"
notation is independent of programming language.
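
(A minimal sketch of the convention in C - the prefix records trust, not type.
The helper functions here are hypothetical.)

    #include <stddef.h>

    /* Hypothetical helpers -- the point is the naming, not these. */
    int  sanitise_name(const char *us_in, char *ss_out, size_t out_size);
    void store_record(const char *ss_name);

    void handle_request(const char *usName)   /* us = unsafe (raw input) */
    {
        char ssName[64];                      /* ss = sanitised copy     */
        if (!sanitise_name(usName, ssName, sizeof ssName))
            return;                           /* reject, don't guess     */

        store_record(ssName);                 /* only ss data goes on    */
    }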

Agreed although it is still subject to human error.

Unfortunately, some halfwit (also at MS) thought "Hungarian notation"
meant prefixing names in C with letters indicating the type - so-called
"Systems Hungarian" which just makes code a mess, makes it easy to be
inconsistent, adds little information that is not already easily
available to the compiler and IDE, and means you can't use "Apps
Hungarian" to improve code safety. It's a fine example of snatching
defeat from the jaws of victory - and of MS having a strong group of
theoretical computer scientists with no communication with or influence
over the mass of numpties doing their real coding.

+1

There are, of course, many other ways to ensure that your untrusted data
does not mix with the trusted data, and there are ways that can be
enforced by a C compiler (or at least by additional checking tools).
But it has to be part of the design process, and has to be implemented
consistently.

It seems to me that programmers do not think about preconditions, post
conditions and invariants that can be used to detect faults earlier.

This latest fault was in effect a failure to check that the preconditions for
correct operation of the heartbeat code were met, and the attack consists
of basically violating the rules with a malformed message.

In a sense programmers are too trusting and testers not wild enough.

Murphy's Law applies in spades to software in the field.

--
Regards,
Martin Brown
 
On 14/04/14 16:18, John Larkin wrote:
Disagree. A language with indexed arrays and formal, controlled strings can have
hard bounds checking. A pointer-oriented language, with null-terminated strings,
can't.

snip

Language choice can prevent /some/ mistakes, but certainly not all. And
good use of tools can prevent many others - if the openssl developers
had tested the code using appropriate C debugging tools, they would have
spotted the error as quickly as with a managed programming language.
What was missing is that no one tried sending malicious packets to the
code - no one thought about the possible security hole, no one noticed
that the code trusted external data, no one tested it. The
implementation language was irrelevant.

Disagree again. There are languages where that sort of error couldn't happen.

And languages that don't oblige days of testing to stress a few lines of code.

snip


Buffer overruns have been a major source of security lapses. A language that
prevented them would, well, prevent them.

snip

There is little doubt that a language that has no pointers (or
strongly discourages pointers) and has run-time checks on array and
buffer access will have far fewer problems with buffer overruns and
similar issues than a language that allows free and rampant access.

But let me give you a few specific points here, to help avoid
going round in circles:

1. There is /nothing/ in C that stops you checking your arrays and
buffers. People who are experienced in reliable and secure programming
write their C code carefully in order to avoid any risk of overflows.

2. With C, there is a lot more specified at compile-time than with
dynamic languages. So if you have written your C code well, and use
appropriate static error checkers (there are many such tools for C and
C++), a great many potential bugs are caught at compile time. With
dynamic languages, bugs often do not appear until your code is running -
and if you don't have good tests covering all code paths, you will not
see the bugs until after the system is in use.

3. High-level languages make it much easier to avoid memory leaks and
issues due to unclear resource ownership. But they don't avoid such
problems entirely, and they use far more resources in order to achieve
this automation.

4. High-level languages make it much easier and safer to work with
strings. C is crap at strings.

5. With C, when you get buffer overflows and similar problems, the
result is usually a hard crash. With dynamic languages, the result is
usually a run-time error. People often write error-handling code for
the errors they expect, but fail to do so for errors they don't expect.
So a run-time error or exception when you try to go out of bounds in
your dynamic language will lead to improperly handled errors - you'll
get weird error messages, program halts, silent incorrect operation,
etc. It is unlikely that you will get the same kind of read or write of
random memory that you can get with C, but injection attacks (popular
with SQL) can be easier to exploit, and unexpected errors can easily
lead to skipping security checks and other protection.

6. Regardless of the language, you have to /think/ securely if you want
to keep your system secure and reliable. You have to check /every/
assumption about the incoming data, and sanitise everything. No
programming language does that for you - you always have to think about
it. But /frameworks/ and libraries can help, and make sure that the
data delivered to your code is safe. Choosing a good framework is far
more important than choosing the programming language.

7. Regardless of the language, you need to test /everything/. And you
need a development process in place to ensure everything is tested, that
code is reviewed, that test procedures are reviewed, etc. - all by
different people.


As you can see, there are pros and cons to high and low level languages.
You can write secure and insecure code in either. I don't know of any
statistics showing some languages to be "safe" and others "unsafe",
taking into account the amount of times the code is run, the number of
attempted attacks, the level of expertise of the people writing the
code, and the amount of time and effort spent writing and testing the code.
 
The Human Genome Project


This has been a dismal failure. A worm has similar DNA to ours. Here is just one brief view of it: I have heard supposedly intelligent PhD's say that "everything about us is determined by DNA." And VC's went along with this hoax.

snip

The Human Genome Project has been a spectacular success. Nobody with any sense ever said that everything about us is determined by our DNA.
snip

Many flaming liberals may not realize they are being used as a tool of moneyed
interests. Getting rich and having grand ideas, however wrong, seem melded in
the modern liberal mind like chocolate sauce and vanilla ice cream.
The backers of the HGP thought that they could patent the gene sequences
responsible for various diseases. Then there was the cockamamie vision put
forward that each of us would have a genetic profile on file, and that the
doctor would then just squirt you full of an expensive drug to counteract the
problem. This is just Progressive eugenics in a modern guise. Total idiocy.
Yet the progressive scientists will hold on to their ideas like a political
campaign or a dog eating foul meat with a growl.


Here is a contrast list between traditional "liberals" and modern Progressive
scientists.

Liberal scientists:
develop science, but have an open mind about it
don't enrich themselves from it unless it comes from performance of it.

Progressive scientists:
science as a political campaign - closed, bigoted mind
enrich their egos and pocket-books through moneyed interests
stick to positions as politics, not science

Various moneyed interests behind "science" today:
Human Genome Project - VC's, Big Pharma, crony capitalists of government
medicine
global warming - government-sponsored scientists, the Saudi oil interests
....more

So, last year, $327 billion was spent on global warming research. That's BIG
money, and the recipients will do anything to keep that money coming. Some of
the engineers I used to work with - really vacuum system technicians - were
installed in local colleges (Santa Clara College in Silicon Valley) as
"Professors of Climate Change." The reason: thin film coatings are used for
solar energy, as in "Solyndra," the fraudulent alternative energy project.
These "Professors of Climate Change" don't know a thing about climatology, nor
do they care. This is just the barest "ice above the water" of the massive
fraudulent scam of climate change, formerly global warming. (1)

And addressed to the myriad phony scientists filling their pockets in these frauds:
"And you, like a labor union thugs, will defend it to the death. Shame on you.
At least let's not pretend this is science. This is crony capital politics,
many are being used as their tools. "

=================
(1) Al Gore filled his pockets with 200m of crony money. He pretended to start
a TV channel about global warming, which would be a front for politics, but
then sold it to Al Jazeera. An agile guy, that Gore.
 
On 15/04/2014 00:39, Tim Wescott wrote:
On Mon, 14 Apr 2014 16:12:09 -0700, haiticare2011 wrote:

I can't give an exhaustive list, due to time constraints, but here is
the first. It is:

The Human Genome Project

This has been a dismal failure. A worm has similar DNA to ours. Here is
just one brief view of it: I have heard supposedly intelligent PhD's say
that "everything about us is determined by DNA." And VC's went along
with this hoax.

But even a high school student in Biology 1A knows that the genotype and
a phenotype both interact to produce the organism.

Without going into details, the perpetrators of the HGP lured investors
by promising that patentable DNA sequences would predict cancer, and the
like.
But any cancer researcher knows that only a small percentage are linked
to genetics.

===========

I'm open to any suggestions of further scientific flim-flam. On my list
are AI, medical research, much nano-technology, alternative energy,
climate change,
SSRI drugs, medical treatments, IPhone, IOT...

Oh jeeze, don't stop there -- you left out evolution!

And Copernicanism.

Cheers
--
Syd
 
On 15/04/14 09:27, Martin Brown wrote:
On 15/04/2014 05:15, John Larkin wrote:
On Mon, 14 Apr 2014 13:27:27 -0400, Phil Hobbs
pcdhSpamMeSenseless@electrooptical.net> wrote:

On 04/14/2014 12:33 PM, Martin Brown wrote:

Actually he didn't; that part was hidden inside the memcpy routine.

The problem was not sanity checking the parameters in the message for
validity. This is all too common :(

See e.g. http://xkcd.com/327/

That's what you get when you can't tell data from code.

And the language here was SQL, not C. Probably the underlying
application was in Perl or Python - it's highly unlikely it was in C.

It turns out that C does not have a monopoly on insecure code.

It is *nothing* to do with that at all. It is a failure to check that
the incoming message is correctly formed and then acting on it blindly.

Correct.

Harvard architecture has made some inroads in embedded PICs. Data and
code are in separate memory spaces and completely separate.

And the PICs are famous for their programming friendliness...

In the real world, separate memory spaces for data and code is not /too/
bad, as long as read-only data is in the same memory space as read-write
data. (I don't mean you should be able to write over the read-only data
- it's fine for it to be protected in some way.) Harvard architecture
micros like the PICs and the AVRs are a serious pain to work with, and
the separate memory spaces means you have to jump through hoops to make
read-only data work. Slow, inefficient, and error-prone.
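
(The kind of hoop meant here, sketched with avr-libc's pgmspace interface;
this compiles only with avr-gcc.)

    /* On AVR, constant data kept in flash (the code space) needs the
       explicit pgmspace accessors to be read back at run time. */
    #include <avr/pgmspace.h>

    static const char greeting[] PROGMEM = "hello";

    char read_greeting_char(unsigned i)
    {
        /* A plain greeting[i] would read from RAM at the same numeric
           address and return garbage; the accessor is required. */
        return (char)pgm_read_byte(&greeting[i]);
    }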

Intel provided the means to have separate CODE and DATA but the flat,
no-distinction memory model of the early Motorola CPUs is in vogue now.

Intel's segmented memory was widely considered to be crap. It was a
painful hack on a limited architecture that was out of date before the
first 8086 designs were made. Flat memory models are a /far/ more
efficient design.

Note that the memory model here (segmented or flat) has /nothing/ to do
with memory protection or virtual memory mapping. There are lots of
advantages in having memory areas with different access rights
(read-only, no-execute, etc.) and having flexible virtual-to-physical
address mapping.

But there are /no/ advantages to a system where you have lots of real
memory, but you can only access it in small bits (such as 64K lumps in
older x86 chips).

OS/2 actually used Intel's segmentation to keep code and data apart. It
was incredibly robust and if some process failed the rest all kept
running. It was fast enough to simulate a 16550 UART in software.

There are many reasons why OS/2 was a good system - good memory
management and process separation (especially compared to Windows at the
time) was part of it. But segmentation, and the segmentation registers,
were not an essential issue - they were only used because that was the only
way the 80386 had of getting the protection needed. Alternative good
processor designs (and later x86 chips) had proper memory management
units that gave protection without needing messy segments.

Windows' flat model won out due to IBM's crap marketing department.

There are all sorts of reasons why OS/2 lost out (including, but not
limited to, a crap marketing department). Windows did not have a flat
memory model at the time - Win9x had no proper memory model at all. It
used Intel's segments but without any decent protection between processes.


Fortunately, the world has moved on and stabilised on flat memory models
with protection handled by the MMU.
 
On 14/04/2014 16:47, Phil Hobbs wrote:
On 04/14/2014 10:39 AM, Martin Brown wrote:
On 14/04/2014 15:10, John Larkin wrote:
On Mon, 14 Apr 2014 11:34:08 +0100, Martin Brown
|||newspam|||@nezumi.demon.co.uk> wrote:

On 12/04/2014 16:15, edward.ming.lee@gmail.com wrote:

That just reinforces what an astonishingly poor understanding you
- and
many others - have about programming languages, and about bugs in
software.

This was a bug in the implementation of the response to "heartbeat"
telegrams in OpenSSL, which is a commonly used library for SSL.
The bug
was caused by the programmer using data in the incoming telegram
without
double-checking it. It is totally independent of the programming
language used, and totally independent of the SSL algorithms and
encryption.

Unchecked buffers and stack overflows have been chronic security
lapses for
decades now, thousands and thousands of times. Wandering around
data structures
with autoincrement pointers is like stumbling in a minefield,
blindfolded. With
various-sized mines.

The c language and, more significantly, the c language culture,
will make this
sort of thing keep happening.

Unfortunately I have to agree with you, but it isn't strictly down
either to programmers or computer scientists; it is because businesses
prefer to ship first and be damned later.

There has been a lot of progress in static analysis to catch at compile
time the sorts of errors that humans are likely to make and options to
defend at runtime against likely exploits. However, the tools needed
are
not easily available for teaching and are only available overpriced in
the environments least likely to use them - business and enterprise!

That's another part of the c culture: hack fast, don't review, and use
some
automated tool to find your coding errors.

Actually it isn't. I wish it was. People always look hurt when I run
aggressive static analysis against a C codebase and ask if they want the
bugs it finds fixed as well as the ones I am supposed to look at.

Once I found a whole chunk of modules where every variable had been
altered to a soap opera character name. That took a while to undo.

I always do a before and after scan to demonstrate what is there and
avoid any unpleasant surprises later on. Another metric I consider very
reliable at finding code likely to contain bugs is McCabe's CCI, which is
a measure of the minimum number of test cases needed to exercise every path
through the code. If this number is too high then the code will almost
certainly be buggy and may still contain paths that have never executed.

CPU cycles are cheap and getting cheaper whereas people cycles are
expensive and getting more so. It makes sense to offload as much of the
automateable grunt work onto the compiler and toolset as you can.

One compiler I know with a sense of humour will compile access to an
uninitialised variable as a hard trap with a warning message by default.
I have it promoted to a hard error (language is Modula 2).


What are your favourite static analysis tools, Martin? I mostly use
PCLint, but I don't have an uplevel copy. I'm a big fan of mudflap for
debugging.

A mixture of home grown over the years and compiler tools for Modula 2
(some translated/transmuted to work on C). I may yet get around to
making a version of McCabe's CCI for C available publicly either free or
for a nominal charge. Trouble is a chronic shortage of roundtuits.

I find it is a very good heuristic for legacy code that if the CCI
complexity of a procedure exceeds certain bounds then it will almost
certainly contain bugs and it is just a case of finding them!

The latest version of PCLint apparently supports MISRA C restrictions (I
don't have that version myself either). I do have an older copy.

This paper (sadly no longer online) describes some of the philosophy
behind this new generation of static dataflow analysis checkers.

http://web.archive.org/web/20060130025540/http://www.iis.nsk.su/wasp/papers.html

(link is to wayback machine M2 code analysis stuff was integrated into
the XDS M2 compiler - no idea how good their Java checker is at all).

I don't actually have this tool but if I was in the market today I think
I would be looking at something like Red Lizards Goanna.

http://redlizards.com/

Simple way to find out if it is for you is to take a fairly sizeable
codebase you think is OK. Download the evaluation version and see how
many things it can find. Mostly they will be fence post errors at
extremes of possible input data or paths where a variable manages not to
be initialised (but might work most of the time anyway). Dataflow
analysis across whole programs is one of the big steps forward.

Any bug you can remove without running the software is worth killing.
(it is very often in the seldom traversed error recovery paths that
serious flaws lurk - the routine frequently traversed code is mostly OK)

I am also a great fan of linking production code so that it will save a
traceback that shows the stack at the time of failure - who called who
with what. Actually it saves a bunch of hex numbers and you need the
right MAP files for the production build to get back to code.

I grew up with tools that provided a fabulous post mortem debugger that
pretty much guaranteed that you could find and fix any in service bug
after a single incident. These days it is a bit harder since optimised
production code gets reordered so you have to hunt about a bit more.

Still worth doing if your compiler/linker provides such an option. Less
useful in embedded but you can still have the exception handler save
registers & trap address to read back later from an external computer.

--
Regards,
Martin Brown
 
On 14/04/2014 06:56, upsidedown@downunder.com wrote:
On Mon, 14 Apr 2014 11:54:39 +1000, Sylvia Else
sylvia@not.at.this.address> wrote:

At the time of its creation, both memory and CPU time were expensive. It
wasn't practical to specify the language in a way that ensured bounds
checking because of the memory and time costs involved.

In the 1970's i wrote a lot of programs for 16 bit mini computers
using FORTRAN IV, which only had tables and indexes, no pointers or
character strings. At least some compilers had the option to generate
run time index checks. This was usually employed during product
development, but turned off in the shipped product.

Actually it did have pointers but FORTRAN programmers tended not to be
aware of them. The lack of any strong typing meant that an inordinate
amount of time was wasted by physicists calling NAGLIB routines that
expected 8 byte DOUBLE PRECISION REAL arrays with 4 byte REAL ones.

      SUBROUTINE SWAP(I,J)
      K=I
      I=J
      J=K
      RETURN
      END

When called with arguments like SIN and COS it could have very interesting
side effects on subsequent use of trig functions. A pointer to an array
of unknown length was declared by convention as length 1, e.g.

      INTEGER TRICKY(1)

FORTRAN IV did not have any string data type, so you had to write your
own string library using byte arrays (or in the worst case integer
arrays). It was as primitive as C. The only difference is that C
provides ready made string subroutine library (strcpy etc.).

It did have character arrays but only a handful of custom dialects
allowed easy string manifest constants in quotes. 6HSTRING was always
portable but heaven help you if you miscounted the string length.

It would after F66 let you assign Hollerith character constants to
arrays in a DATA statement. I think this illustrates my point perfectly.

      PROGRAM HELLO
C
      INTEGER IHWSTR(3)
      DATA IHWSTR/4HHELL,4HO WO,3HRLD/
C
      WRITE (6,100) IHWSTR
      STOP
  100 FORMAT (3A4)
      END

Believe it or not that was an improvement on what went before!

The lack of reserved words made the language interesting in the
Chinese-curse sense of the word. It wasn't hard to break compilers back then. Indeed
FORTRAN G was so unsure of itself that all successful compilations ended
with the message "NO DIAGNOSTICS GENERATED?".

Fortran-77 integrated strings into the language. A similar integration
was available only in C++, not in C.

Ironically they were integrated too late. C had its nul terminated
strings but they were just that and so intrinsically dangerous.

Had there been a string length at the start (as occurred in some other
languages of the era) the world would be a very different place. That was
probably one of the most destructive peephole optimisations of all time.

--
Regards,
Martin Brown
 
On Tuesday, April 15, 2014 4:19:24 AM UTC-4, Syd Rumpo wrote:
On 15/04/2014 00:39, Tim Wescott wrote:
On Mon, 14 Apr 2014 16:12:09 -0700, haiticare2011 wrote:

I can't give an exhaustive list, due to time constraints, but here is
the first. It is:

The Human Genome Project

This has been a dismal failure. A worm has similar DNA to ours. Here is
just one brief view of it: I have heard supposedly intelligent PhD's say
that "everything about us is determined by DNA." And VC's went along
with this hoax.

But even a high school student in Biology 1A knows that the genotype and
a phenotype both interact to produce the organism.

Without going into details, the perpetrators of the HGP lured investors
by promising that patentable DNA sequences would predict cancer, and the
like.
But any cancer researcher knows that only a small percentage are linked
to genetics.

===========

I'm open to any suggestions of further scientific flim-flam. On my list
are AI, medical research, much nano-technology, alternative energy,
climate change,
SSRI drugs, medical treatments, IPhone, IOT...

Oh jeeze, don't stop there -- you left out evolution!

And Copernicanism.

Cheers
--
Syd

On a deep level, yes, you could say that Copernicus had another side. As a
scientist, of course he was "right." The Christian Bible put forward a belief
that the earth is the center of the universe. Leonardo Da Vinci even said we
were celestial beings living on a star, but I doubt he meant it literally.


But here's the rub: Is the Bible really a "scientific" document? Of course not,
because science is mutable, and the Christian Church is mainly fundamentalist -
i.e. scripture-preserving. This is good and bad - it does preserve an original
vision (you could argue), but it also inculcates a closed-mindedness.


Around the year 1500, we had an amazing number of social forces in play:
-invention of the printing press
-Protestantism
-the Renaissance (of Pagan Greece, to be precise)
-beginning of the scientific method (Bacon 1620)
-and...rebellion of lower classes, new world conquest, higher education, etc.

To cut to the chase, Copernicus was an example of comparing apples to oranges.
The Bible is a purported document dealing with spiritual and ontological areas
of consciousness. (along with its social rigidity) Copernicus was a proponent
of the rise of scientific knowledge about matter (along with its rigidity and
ignorance of the ontological nature of consciousness.)

So, like many "revolutions," the Copernican one was largely about something
else: the rise of independent thought in the 1500's. Because of the social
rigidity of the Catholic Church, the Copernican theory made a splash in 1540
due to its disproving of a Bible picture thousands of years old. I believe the
Greeks had had a heliocentric theory as well.

But the problem with Copernicus is that it does not take account of the baby
that is thrown out with the bath-water: the extra-corporeal nature of human
beings. What the atheists are really saying about materialist science is "See,
man is just a piece of matter without consequence, the science proves it.
There is nothing beyond man that he is connected to or a part of. Life is
without meaning."

And this is the Progressive view of things that is taught in schools today.
Depressed? Take our SSRI's! What a mess! The rise of materialist science has
created a false and destructive view of man.

So, in a very real sense, Copernicanism is wrong: human kind, and the concerns
of the earth are at the center of our universe. To peer through a telescope
and not develop a true connection with a higher power, is the plight of modern
man.
 
On Tuesday, 15 April 2014 21:05:23 UTC+10, haitic...@gmail.com wrote:
On Monday, April 14, 2014 7:39:59 PM UTC-4, Tim Wescott wrote:
On Mon, 14 Apr 2014 16:12:09 -0700, haiticare2011 wrote:

<snipped first round of twaddle>

Oh jeeze, don't stop there -- you left out evolution!

About evolution - the discovery of "jumping genes" (1) was suppressed, and as well, it doesn't explain "jumps" in progress of complexity.

It wasn't suppressed, just ignored. In 1951 nobody had much of an idea how genes worked, and making sense out of what Barbara McClintock had found wasn't easy.

After Watson and Crick, and a whole lot of subsequent work, it became easier to recognise exactly what she had found, and she got a Nobel Prize for it in 1983.

http://en.wikipedia.org/wiki/Transposable_element

It was never expected to explain "jumps" in the process of evolving more complicated organs, because biological orthodoxy is that there aren't any - if you look hard enough you always seem to be able to find a sequence of progressively more complicated organs, each of which makes sense in the creature that had it.

http://en.wikipedia.org/wiki/Gradualism

http://en.wikipedia.org/wiki/Uniformitarianism_%28science%29

> In the main I agree with it, but I should mention that Darwin despised the term "survival of the fittest." By fit, he meant that which "fit into the environmental forces," not "fittest and strongest."

The "social Darwinists" were a pretty unpleasant bunch. They added Darwin's theory to an existing bunch of nasty and brutish ideas and tried to use it to justify the sort of political choices that the Tea Party would like.

http://en.wikipedia.org/wiki/Social_Darwinism

And...I should also mention the theory is all after the fact - it does not
explain "what" happened, just the reason for it to happen (survival and
reproduction).

At the time, biology was an observational science, and in all observational sciences, all theories are "after the fact". Stellar evolution and heavy element synthesis in supernovae are equally "after the fact", but that doesn't make them bad science.

You aren't a sceptic but rather an ignoramus, trying to put your own spin on stuff that you don't actually understand.

--
Bill Sloman, Sydney
 
On Tuesday, April 15, 2014 7:40:31 AM UTC-4, Kennedy wrote:
snip
SSRI drugs, medical treatments, IPhone, IOT...

Homeopathy - this is hilarious stuff:

http://www.newsbiscuit.com/2011/09/09/homeopathic-leak-threatens-catastrophe/

Yes, homeopathy is an obvious fraud, in the view of traditional science. There
was a fellow in Europe who claimed to have scientific proof for it, but he was
faking data, as I remember.
I had an acquaintance, a woman who lived to the age of 100. When she was alive,
I looked at her vitamin shelf, and it was all homeopathy!
Homeopathy may belong in the area of "higher placebo effect." The nurses study,
a landmark lifestyle-health study, showed that friends are more important than
even smoking! And other studies show church-going has a big effect.

Friends are more important than diet or anything else!
Explain that, materialist scientists!

When I see things like that, I am hesitant to dismiss homeopathy completely.
Freud said that the power of suggestion was the strongest psychological force.
I am at a loss to explain the effect of friends.
 
On Tuesday, 15 April 2014 22:15:18 UTC+10, haitic...@gmail.com wrote:
The Human Genome Project

This has been a dismal failure. A worm has similar DNA to ours. Here is just one brief view of it: I have heard supposedly intelligent PhD's say that "everything about us is determined by DNA." And VC's went along with this hoax.

snip

The Human Genome Project has been a spectacular success. Nobody with any sense ever said that everything about us is determined by our DNA.

snip

Many flaming liberals may not realize they are being used as a tool of moneyed interests. Getting rich and having grand ideas, however wrong, seem melded in the modern liberal mind like chocolate sauce and vanilla ice cream.

You clearly haven't got a clue about the history of the Human Genome Project.

http://en.wikipedia.org/wiki/Human_Genome_Project

It seems to have come from the US Department of Energy.

The backers of the HGP thought that they could patent the gene sequences
responsible for various diseases. Then there was the cockamamie vision put
forward that each of us would have a genetic profile on file, and that the
doctor would then just squirt you full of an expensive drug to counteract
the problem. This is just Progressive eugenics in a modern guise. Total
idiocy.

Some of the backers of the HGP may have thought that. James Watson (of Watson and Crick), who led the US NIH branch of the project from 1990 to 1992, famously resigned because he thought that patenting existing gene sequences was a really bad idea.

Craig Venter and his firm Celera Genomics jumped on the bandwagon a lot later with this idea in mind. When Clinton said they couldn't, in March 2000, the biotechnology sector lost $50 billion in market capitalisation in two days.

Yet the progressive scientists will hold on to their ideas like a political
campaign or a dog eating foul meat with a growl.

Most of the "progressive scientists" involved thought that patenting existing gene sequences was a very bad idea.

Here is a contrast list between traditional "liberals" and modern Progressive
scientists.

<snipped nonsensical twaddle>

You really don't have a clue about the stuff you are talking about.

--
Bill Sloman, Sydney
 
On Tuesday, April 15, 2014 9:02:17 AM UTC-4, Bill Sloman wrote:

You aren't a sceptic but rather an ignoramus, trying to put your own spin on stuff that you don't actually understand.

--
Bill Sloman, Sydney

Your case that I am an ignoramus is undercut by your frequent use of Wikipedia!
j
 
