design rigor: electronics vs. software

On Saturday, January 11, 2020 at 11:06:53 AM UTC-5, Winfield Hill wrote:
DecadentLinuxUserNumeroUno@decadence.org wrote...

Well, it WAS the finished product that failed, but
the true failure was their inability to ensure proper,
robust, failsafe coding.

To me your operative word is 'proper'. I'm sure the
code was robust in doing what it was spec'd to do,
and likely included failsafe coding as well. It was
improper specs that created a non-failsafe system.

No doubt the coding was broken up into pieces, each of
which acted in specified ways for its variable inputs,
and which may well have obscured the overall task.

In fact, the output code that implemented the minor
"augmentation" function may not have been revisited
for changes, after the systems-level decision was
made to expand the use of the augmentation system,
to add anti-stall.


--
Thanks,
- Win

Some code is so complicated that it cannot be adequately tested.
Ten years ago I read an article about how some Canadian warships were designed to "re-route" critical systems after sustaining battle damage, by using whatever hardware was then available.

A daunting task, for sure.

Just developing a test plan for something like that is amazingly complex.
 
On 2020-01-13 19:35, mpm wrote:
On Saturday, January 11, 2020 at 1:10:56 AM UTC-5, Rick C wrote:
I think that is a load. Hardware often fouls up. The two space shuttle disasters were both hardware problems and both were preventable, but there was a clear lack of rigor in the design and execution. The Apollo 13 accident was hardware. The list goes on and on.

Then your very example of the Boeing plane is wrong because no one has said the cause of the accident was improperly coded software.

Technically, one of those shuttle disasters was due to management not listening to their engineers, including those at Morton Thiokol, who warned that the booster rocket O-rings were unsafe to launch at cold temperatures.

I don't consider that to be a "hardware problem" so much as an arrogantly stupid decision to launch under known, unsafe conditions.

Diane Vaughan's "The Challenger Launch Decision" is an amazingly good
read on how they got to that point. She's a sociologist, of course, but
she took great pains to understand the culture and the issues, which led
her to completely re-evaluate her initial cultural-Marxist take on it.

She has my complete respect for her willingness to follow where the
facts led--a rare and valuable trait in our diminished, ideology-driven
days.

Cheers

Phil Hobbs


--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On 2020-01-13 11:22, Tom Gardner wrote:
On 13/01/20 14:01, Phil Hobbs wrote:
On 2020-01-13 04:04, Tom Gardner wrote:
On 13/01/20 01:07, John Larkin wrote:
On Sun, 12 Jan 2020 16:58:40 +0000, Martin Brown
'''newspam'''@nezumi.demon.co.uk> wrote:

On 11/01/2020 14:57, jlarkin@highlandsniptechnology.com wrote:
On 11 Jan 2020 05:57:59 -0800, Winfield Hill <winfieldhill@yahoo.com
wrote:

Rick C wrote...

Then your very example of the Boeing plane is wrong
because no one has said the cause of the accident
was improperly coded software.

Yes, it was an improper spec, with dangerous reliance
on poor hardware.

If code kills people, it was improperly coded.

Not necessarily. The code written may well have exactly implemented the
algorithm(s) that the clowns supervised by monkeys specified. It isn't
the job of programmers to double check the workings of the people who do
the detailed calculations of aerodynamic force vectors and torques.

It is not the programmers' fault if the systems engineering, failure
analysis and aerodynamics calculations are incorrect in some way!

The management of two AOA sensors was insane. Fatal, actually. A
programmer should understand simple stuff like that.

It is unrealistic to expect programmers to understand sensor
reliability. That is the job of the people specifying the
system design and encoding that in the system specification
and the software specification.

Programmers would have zero ability to deviate from implementing
the software spec, full stop. If they did knowingly deviate, it
would be a career ending decision - at best.

Gee, Mr. Gardner, you're so manly--can I have your autograph? ;)

Nobody's talking about coders doing jazz on the spec AFAICT.  Systems
folks do need to listen to them, is all.  If they can't do that
because they don't understand the issues, that's a serious
organizational problem, on a level with the flawed spec.

Well, by all accounts there were/are serious organisational
problems in Boeing. Those are probably a significant
contributor to there being a flawed spec.


Aerospace engineers have lost their pension for far less
serious deviations, even though they had zero consequences.

Fortunately that's illegal over here, even for cause.

I was gobsmacked when I heard that, and don't understand it.
But then I don't even understand the concept of pension
"vesting".

The company's contributions towards your pension are part of your
compensation year by year. Taking that away is no different from trying
to claw back 20 years worth of salary.

Cheers

Phil Hobbs
 
On Mon, 13 Jan 2020 19:26:26 +0000, Tom Gardner
<spamjunk@blueyonder.co.uk> wrote:

On 13/01/20 17:41, John Larkin wrote:
On Mon, 13 Jan 2020 16:40:55 +0000, Tom Gardner
spamjunk@blueyonder.co.uk> wrote:

On 13/01/20 15:58, John Larkin wrote:
On Mon, 13 Jan 2020 09:04:20 +0000, Tom Gardner
spamjunk@blueyonder.co.uk> wrote:

On 13/01/20 01:07, John Larkin wrote:
On Sun, 12 Jan 2020 16:58:40 +0000, Martin Brown
'''newspam'''@nezumi.demon.co.uk> wrote:

On 11/01/2020 14:57, jlarkin@highlandsniptechnology.com wrote:
On 11 Jan 2020 05:57:59 -0800, Winfield Hill <winfieldhill@yahoo.com
wrote:

Rick C wrote...

Then your very example of the Boeing plane is wrong
because no one has said the cause of the accident
was improperly coded software.

Yes, it was an improper spec, with dangerous reliance
on poor hardware.

If code kills people, it was improperly coded.

Not necessarily. The code written may well have exactly implemented the
algorithm(s) that the clowns supervised by monkeys specified. It isn't
the job of programmers to double check the workings of the people who do
the detailed calculations of aerodynamic force vectors and torques.

It is not the programmers' fault if the systems engineering, failure
analysis and aerodynamics calculations are incorrect in some way!

The management of two AOA sensors was insane. Fatal, actually. A
programmer should understand simple stuff like that.

It is unrealistic to expect programmers to understand sensor
reliability. That is the job of the people specifying the
system design and encoding that in the system specification
and the software specification.

Programmers would have zero ability to deviate from implementing
the software spec, full stop. If they did knowingly deviate, it
would be a career ending decision - at best.

Aerospace engineers have lost their pension for far less
serious deviations, even though they had zero consequences.


https://philip.greenspun.com/blog/2019/03/21/optional-angle-of-attack-sensors-on-the-boeing-737-max/

Given dual sensors, why would any sane person decide to alternate
using one per flight?

Agreed. Especially given the poor reliability of AoA sensors.

The people that wrote and signed off on that spec
bear a lot of responsibility


A programmer would have to be awfully thick to not object to that.

The programmer's job is to implement the spec, not to write it

They may have objected, and may have been overruled.

Have you worked in large software organisations?

Not in, but with. Most "just do our jobs", which means that they don't
care to learn much about the process that they are implementing.

Seen that, and it even occurs within software world:
-analysts lob spec over wall to developers
-developers lob code over wall to testers
-developers lob tested code over wall to operations
-rinse and repeat, slowly

"Devops" tries to avoid that inefficiency.


And the hardware guys don't have much insight or visibility into the
software. Often, not much control either, in a large organization
where things are very firewalled.

I've turned down job offers where the HR droids couldn't
deal with someone that successfully straddles both
hardware and software worlds.

I interviewed with HP once. The guy looked at my resume and said "The
first thing you need to do is decide whether you're an engineer or a
programmer", so I walked out.

One big company that we work with has, I've heard, 12 levels of
engineering management. If an EE group wants a hole drilled in a
chassis, the request has to propagate up five or six management levels,
and then back down, to get to a mechanical engineer. Software is
similarly insulated. Any change fires off an enormous volume of
paperwork and customer "copy exact" notices, so most things just never
get done.


Recipe for disaster.

Yup, as we've seen.

--

John Larkin Highland Technology, Inc

The cork popped merrily, and Lord Peter rose to his feet.
"Bunter", he said, "I give you a toast. The triumph of Instinct over Reason"
 
On 11 Jan 2020 07:27:05 -0800, Winfield Hill <winfieldhill@yahoo.com>
wrote:

DecadentLinuxUserNumeroUno@decadence.org wrote...

Winfield Hill wrote:

Rick C wrote...

Then your very example of the Boeing plane is wrong
because no one has said the cause of the accident
was improperly coded software.

Yes, it was an improper spec, with dangerous reliance
on poor hardware.

Thanks Win. That guy is nuts. Boeing most certainly
did announce just a few months ago, that it was a
software fault.

That's the opposite of my position. I'm sure the coders
made the software do exactly what they were told to make
it do.

But nobody ever writes a requirement document at the level of detail
that the programmers will work to. And few requirement docs are
all-correct and all-inclusive.

It sure helps if the programmers understand, and take responsibility
for, the actual system.



 
On Sat, 11 Jan 2020 15:44:23 +0000 (UTC),
DecadentLinuxUserNumeroUno@decadence.org wrote:

Winfield Hill <winfieldhill@yahoo.com> wrote in news:qvcpg901bm2@drn.newsguy.com:

DecadentLinuxUserNumeroUno@decadence.org wrote...

Winfield Hill wrote:

Rick C wrote...

Then your very example of the Boeing plane is wrong
because no one has said the cause of the accident
was improperly coded software.

Yes, it was an improper spec, with dangerous reliance
on poor hardware.

Thanks Win. That guy is nuts. Boeing most certainly
did announce just a few months ago, that it was a
software fault.

That's the opposite of my position. I'm sure the coders
made the software do exactly what they were told to make
it do. It was system engineers and their managers, who
made the decisions and wrote the software specs. They
should not be allowed to simply blame "the software".



Well, it WAS the finished product that failed, but the true failure
was their inability to ensure proper, robust, failsafe coding.

Is there such a thing? Electronic design is based on physics and
corollary principles. I don't know of any hard principles that
programming applies. It's more of a craft than a science.

I think that electronics is also easier to design review than
software.


 
On Sat, 11 Jan 2020 17:31:26 -0500, bitrex <user@example.net> wrote:

On 1/11/20 9:47 AM, jlarkin@highlandsniptechnology.com wrote:
On Fri, 10 Jan 2020 21:46:19 -0800 (PST), omnilobe@gmail.com wrote:

Hardware designs are more rigorously done than
software designs. A large company had problems with a 737
and a rocket to the space station...

https://www.bloomberg.com/news/articles/2019-06-28/boeing-s-737-max-software-outsourced-to-9-an-hour-engineers

I know programmers who do not care for rigor at home or at work.
I did hardware design with rigor and featuring reviews by caring
electronics design engineers and marketing engineers.

Software gets sloppy with OOPs.
Object Oriented Programming.
Windows 10 on a rocket to ISS space station.
C++ mud.

The easier it is to change things, the less careful people are about
doing them. Software, which includes FPGA code, seldom works the first
time. Almost never. The average hunk of fresh code has a mistake
roughly every 10 lines. Or was that three?

FPGAs are usually better than procedural code, but are still mostly
done as hack-and-fix cycles, with software test benches. When we did
OTP (fuse based) FPGAs without test benching, we often got them right
first try. If compiles took longer, people would be more careful.

PCBs usually work the first time, because they are checked and
reviewed, and that is because mistakes are slow and expensive to fix,
and very visible to everyone. Bridges and buildings are almost always
right the first time. They are even more expensive and slow and
visible.

Besides, electronics and structures have established theory, but
software doesn't. Various people just sort of do it.

My Spice sims are often wrong initially, precisely because there are
basically no consequences to running the first try without much
checking. That is of course dangerous; we don't want to base a
hardware design on a sim that runs and makes pretty graphs but is
fundamentally wrong.


Don't know why C++ is getting the rap here. Modern C++ design is
rigorous, there are books about what to do and what not to do, and the
language has built-in facilities to ensure that e.g. memory is never
leaked, pointers always refer to an object that exists, and the user
can't ever add feet to meters if they're not supposed to.

Pointers are evil.



 
On Mon, 13 Jan 2020 09:27:19 -0000, RBlack <news@rblack01.plus.com>
wrote:

In article <d0nj1f50mabot5tnfooihn6o50up57n22b@4ax.com>,
jlarkin@highlandsniptechnology.com says...

[snip]

My Spice sims are often wrong initially, precisely because there are
basically no consequences to running the first try without much
checking. That is of course dangerous; we don't want to base a
hardware design on a sim that runs and makes pretty graphs but is
fundamentally wrong.

I just got bitten by a 'feature' of LTSpice XVII; I don't remember IV
having this behaviour, but I don't have it installed any more:

If you make a tweak to a previously working circuit, which makes the
netlister fail (in my case it was an inductor shorted to ground at both
ends), it will pop up a warning to this effect, and then *run the sim
using the old netlist*.

Well, don't ignore the warning.

It will then allow you to probe around on the new schematic, but the
schematic nodes are mapped onto the old netlist, so depending on what
you tweaked, what is displayed can range from slightly wrong to flat-out
impossible.

Anyone else seen this?

LT4 would complain about, say, one end of a cap floating, or your
shorted inductor. The new one doesn't. I prefer it the new way.

I haven't seen the old/new netlist thing that you describe.




 
On 14/1/20 1:46 pm, jlarkin@highlandsniptechnology.com wrote:
On 11 Jan 2020 07:27:05 -0800, Winfield Hill <winfieldhill@yahoo.com
wrote:

DecadentLinuxUserNumeroUno@decadence.org wrote...

Winfield Hill wrote:

Rick C wrote...

Then your very example of the Boeing plane is wrong
because no one has said the cause of the accident
was improperly coded software.

Yes, it was an improper spec, with dangerous reliance
on poor hardware.

Thanks Win. That guy is nuts. Boeing most certainly
did announce just a few months ago, that it was a
software fault.

That's the opposite of my position. I'm sure the coders
made the software do exactly what they were told to make
it do.

But nobody ever writes a requirement document at the level of detail
that the programmers will work to. And few requirement docs are
all-correct and all-inclusive.

Your comments lack nuance.

The definition of "all-correct" can only be made with reference to a
Turing machine that implements it.

Thus, the finished code is the (first and only) finished specification.

Corollary: If a specification is all-correct and all-inclusive, a
compiler can be written that implements it precisely.

The trouble is, no-one can tell whether the specification meets the
high-level goals of the system - not even the programmer usually.

The reason for "formal methods" is to be able to state the "high level
goals" in a precise way, and to show that the code cannot fail to meet
those goals.

CH.
 
On Monday, January 13, 2020 at 9:46:30 PM UTC-5, jla...@highlandsniptechnology.com wrote:
But nobody ever writes a requirement document at the level of detail
that the programmers will work to. And few requirement docs are
all-correct and all-inclusive.

It sure helps if the programmers understand, and take responsibility
for, the actual system.

What Larkman doesn't understand is that the sort of formal requirements documents he is talking about are written for large, complex systems that he knows literally nothing about. Having never participated in such a design process, he doesn't even understand that the programmers can't always know much about the things they are writing code for, because they can't be expert in every part of the system.

So instead of expecting the coders to sanity check systems they don't and literally can't understand, just as no one in the company understands the entire airplane, they use the documents they are provided to define the software they are writing and then test according to the requirements that apply to that software. They don't try to analyze the requirements in the context of the rest of the system because that has already been done.

Larkman also doesn't understand that the requirements documents are written at every level of decomposition, so that each requirement can be traced to the modules responsible for implementing it. It's a large process, but it is essential to making sure the airplane does what you want it to. Can the process fail? Yes, it's a human process after all. But it's a whole lot better than the Larkman method of having one guy in charge of everything who does the hard part for everyone and lets them finish the work he started. I guess we could design bicycles that way, but not airplanes.

I remember dealing with a layout guy who was pretty good, but was used to thinking in terms of absolute rules without always understanding them. He had a big power pour running across the board to reach a resistor in a part of the circuit that existed only to measure the voltage on the power plane. I told him he didn't need to make that run so fat; it could be a thin trace like any other signal, and I explained what it was for. He refused to change it, saying that was how you route power planes. Rather than fight that idea, I had him move the resistor to the area of the power plane and run a thin trace over to the rest of the circuit. He didn't like the idea, but couldn't argue, so did it my way.

This shows why programmers don't get to change low level requirements on their own. They either go through the process of pushing back on the high level requirements while they are being defined, or they code what needs to be coded as the requirements state. If the decision makers say the MCAS needs to work this way, the coders are not in a position to make changes once the requirements have been decomposed to the module level. It's not like the people doing the design work didn't give it a lot of thought. Having coders change the requirements would be like cops changing the laws they have to enforce.

I guess it's a good thing Larkman isn't a cop either.

--

Rick C.

+-- Get 1,000 miles of free Supercharging
+-- Tesla referral code - https://ts.la/richard11209
 
On Tuesday, January 14, 2020 at 1:11:15 AM UTC-5, Clifford Heath wrote:
Your comments lack nuance.

The definition of "all-correct" can only be made with reference to a
Turing machine that implements it.

Thus, the finished code is the (first and only) finished specification.

Corollary: If a specification is all-correct and all-inclusive, a
compiler can be written that implements it precisely.

Sorry, that is simply wrong. You can specify the behavior of a module without enough detail for a compiler to spit out code, unless that compiler had a vast array of tools and libraries at its disposal. So I guess in theory a compiler could be written, but it would be a ginormous task, akin to compiling the English language to computer code.

So, in either case, possible or not, your statement is of no practical value.


The trouble is, no-one can tell whether the specification meets the
high-level goals of the system - not even the programmer usually.

Huh???


The reason for "formal methods" is to be able to state the "high level
goals" in a precise way, and to show that the code cannot fail to meet
those goals.

What does that have to do with your compiler statement? First you say specifications can't be fully complete, and then you say they can be written "in a precise way". Are you saying "precise" as in easy to code, but not necessarily complete?

 
On Monday, January 13, 2020 at 7:35:20 PM UTC-5, mpm wrote:
On Saturday, January 11, 2020 at 1:10:56 AM UTC-5, Rick C wrote:
I think that is a load. Hardware often fouls up. The two space shuttle disasters were both hardware problems and both were preventable, but there was a clear lack of rigor in the design and execution. The Apollo 13 accident was hardware. The list goes on and on.

Then your very example of the Boeing plane is wrong because no one has said the cause of the accident was improperly coded software.

Technically, one of those shuttle disasters was due to management not listening to their engineers, including those at Morton Thiokol, who warned that the booster rocket O-rings were unsafe to launch at cold temperatures.

I don't consider that to be a "hardware problem" so much as an arrogantly stupid decision to launch under known, unsafe conditions.

I can't believe you are nit-picking this. Even if it isn't your definition of a hardware problem, it certainly isn't a software problem and that was the issue being discussed, software vs. hardware. There's no reason to discuss wetware issues other than how they impact software and hardware and in this case it was hardware that failed from the abuse by the wetware.

I guess what I'm really saying is, so what?


As for the tiles (2nd shuttle loss), I am weirdly reminded of the Siegfried & Roy Vegas act with the white lions and tigers. They insured against every conceivable possibility (including the performance animals jumping into the crowd and causing a panic!). Everything, that is, except the tiger viciously attacking Roy Horn on-stage.

Except that's not what happened. Go read about it. I get tired of educating you.


You'd think you could see that coming, or at least have a plan (however remote the possibility)?

With the shuttle heat tiles, NASA had to replace a lot of those after every flight. Did they never see the tiger?

I think either, you again don't understand what happened, or you have simplified your understanding of the accident to "tiles fell off". I'll discuss this further with you if you want, but only after you educate yourself with the facts.

 
On 14/01/20 02:43, jlarkin@highlandsniptechnology.com wrote:
On Mon, 13 Jan 2020 19:26:26 +0000, Tom Gardner
spamjunk@blueyonder.co.uk> wrote:

On 13/01/20 17:41, John Larkin wrote:
On Mon, 13 Jan 2020 16:40:55 +0000, Tom Gardner
spamjunk@blueyonder.co.uk> wrote:

On 13/01/20 15:58, John Larkin wrote:
On Mon, 13 Jan 2020 09:04:20 +0000, Tom Gardner
spamjunk@blueyonder.co.uk> wrote:

On 13/01/20 01:07, John Larkin wrote:
On Sun, 12 Jan 2020 16:58:40 +0000, Martin Brown
'''newspam'''@nezumi.demon.co.uk> wrote:

On 11/01/2020 14:57, jlarkin@highlandsniptechnology.com wrote:
On 11 Jan 2020 05:57:59 -0800, Winfield Hill <winfieldhill@yahoo.com
wrote:

Rick C wrote...

Then your very example of the Boeing plane is wrong
because no one has said the cause of the accident
was improperly coded software.

Yes, it was an improper spec, with dangerous reliance
on poor hardware.

If code kills people, it was improperly coded.

Not necessarily. The code written may well have exactly implemented the
algorithm(s) that the clowns supervised by monkeys specified. It isn't
the job of programmers to double check the workings of the people who do
the detailed calculations of aerodynamic force vectors and torques.

It is not the programmers' fault if the systems engineering, failure
analysis and aerodynamics calculations are incorrect in some way!

The management of two AOA sensors was insane. Fatal, actually. A
programmer should understand simple stuff like that.

It is unrealistic to expect programmers to understand sensor
reliability. That is the job of the people specifying the
system design and encoding that in the system specification
and the software specification.

Programmers would have zero ability to deviate from implementing
the software spec, full stop. If they did knowingly deviate, it
would be a career ending decision - at best.

Aerospace engineers have lost their pension for far less
serious deviations, even though they had zero consequences.


https://philip.greenspun.com/blog/2019/03/21/optional-angle-of-attack-sensors-on-the-boeing-737-max/

Given dual sensors, why would any sane person decide to alternate
using one per flight?

Agreed. Especially given the poor reliability of AoA sensors.

The people that wrote and signed off on that spec
bear a lot of responsibility


A programmer would have to be awfully thick to not object to that.

The programmer's job is to implement the spec, not to write it

They may have objected, and may have been overruled.

Have you worked in large software organisations?

Not in, but with. Most "just do our jobs", which means that they don't
care to learn much about the process that they are implementing.

Seen that, and it even occurs within software world:
-analysts lob spec over wall to developers
-developers lob code over wall to testers
-developers lob tested code over wall to operations
-rinse and repeat, slowly

"Devops" tries to avoid that inefficiency.


And the hardware guys don't have much insight or visibility into the
software. Often, not much control either, in a large organization
where things are very firewalled.

I've turned down job offers where the HR droids couldn't
deal with someone that successfully straddles both
hardware and software worlds.

I interviewed with HP once. The guy looked at my resume and said "The
first thing you need to do is decide whether you're an engineer or a
programmer", so I walked out.

HP hired me because I was both. Various parts of HP were
very different from each other.


One big company that we work with has, I've heard, 12 levels of
engineering management. If an EE group wants a hole drilled in a
chassis, the request has to propagate up 5 or six management levels,
and then back down, to get to a mechanical engineer. Software is
similarly insulated. Any change fires off an enormous volume of
paperwork and customer "copy exact" notices, so most things just never
get done.

So you /do/ understand how programmers couldn't be
held responsible for implementing the spec.

At HP, if I had been promoted six times, I would
have been the CEO.


Recipe for disaster.

Yup, as we've seen.
 
On 11/01/20 22:31, bitrex wrote:
On 1/11/20 9:47 AM, jlarkin@highlandsniptechnology.com wrote:
On Fri, 10 Jan 2020 21:46:19 -0800 (PST), omnilobe@gmail.com wrote:

Hardware designs are more rigorously done than
software designs. A large company had problems with a 737
and a rocket to the space station...

https://www.bloomberg.com/news/articles/2019-06-28/boeing-s-737-max-software-outsourced-to-9-an-hour-engineers


I know programmers who do not care for rigor at home at work.
I did hardware design with rigor and featuring reviews by caring
electronics design engineers and marketing engineers.

Software gets sloppy with OOPs.
Object Oriented Programming.
Windows 10 on a rocket to ISS space station.
C++ mud.

The easier it is to change things, the less careful people are about
doing them. Software, which includes FPGA code, seldom works the first
time. Almost never. The average hunk of fresh code has a mistake
roughly every 10 lines. Or was that three?

FPGAs are usually better than procedural code, but are still mostly
done as hack-and-fix cycles, with software test benches. When we did
OTP (fuse based) FPGAs without test benching, we often got them right
first try. If compiles took longer, people would be more careful.

PCBs usually work the first time, because they are checked and
reviewed, and that is because mistakes are slow and expensive to fix,
and very visible to everyone. Bridges and buildings are almost always
right the first time. They are even more expensive and slow and
visible.

Besides, electronics and structures have established theory, but
software doesn't. Various people just sort of do it.

My Spice sims are often wrong initially, precisely because there are
basically no consequences to running the first try without much
checking. That is of course dangerous; we don't want to base a
hardware design on a sim that runs and makes pretty graphs but is
fundamentally wrong.


Don't know why C++ is getting the rap here. Modern C++ design is rigorous, there
are books about what to do and what not to do, and the language has built-in
facilities to ensure that e.g. memory is never leaked, pointers always refer to
an object that exists, and the user can't ever add feet to meters if they're not
supposed to.

If the developer chooses to ignore it all like they always know better than the
people who wrote the books on it, well, God bless...

Read the C++ FQA http://yosefk.com/c++fqa/

I'm particularly fond of the const correctness section :)
 
On 12/01/2020 20:20, Phil Hobbs wrote:
On 2020-01-12 11:58, Martin Brown wrote:
On 11/01/2020 14:57, jlarkin@highlandsniptechnology.com wrote:
On 11 Jan 2020 05:57:59 -0800, Winfield Hill <winfieldhill@yahoo.com
wrote:

Yes, it was an improper spec, with dangerous reliance
on poor hardware.

If code kills people, it was improperly coded.

Not necessarily. The code written may well have exactly implemented
the algorithm(s) that the clowns supervised by monkeys specified. It
isn't the job of programmers to double check the workings of the
people who do the detailed calculations of aerodynamic force vectors
and torques.

It is not the programmers' fault if the systems engineering, failure
analysis and aerodynamics calculations are incorrect in some way!

That's a bit facile, I think. Folks who take an interest in their
professions aren't that easy to confine that way.

Depends how the development is being done. One way is a formal software
specification that is handed to an outsourced team of cheap coders. They
literally have no idea what anything does beyond the boundaries of the
functional module specification that they have been given to implement.

My boss was pushing for that modus operandi just before I quit.

The idea is that you have well specified software modules in much the
same way as IC's that have datasheets describing exactly what they do.
It works pretty well for numerical analysis, for instance NAGLIB
(way more reliable than rolling your own code).

In an ideal software component model it can work. However, one place I
knew referred to their code repository (in the jargon of the time) as s/re/su/.
The problem was that stuff too often got put into it that was not fit for
purpose and would badly bite anyone foolish enough to reuse it.

Back in my one foray into big-system design, we design engineers were
always getting in the systems guys' faces about various pieces of
stupidity in the specs.  It was all pretty good-natured, and we wound up
with the pain and suffering distributed about equally.

+1


--
Regards,
Martin Brown
 
In article <o7bq1f54cmsvthkp8om1tqa3jrbau9ko5r@4ax.com>,
jlarkin@highlandsniptechnology.com says...
On Mon, 13 Jan 2020 09:27:19 -0000, RBlack <news@rblack01.plus.com
wrote:

In article <d0nj1f50mabot5tnfooihn6o50up57n22b@4ax.com>,
jlarkin@highlandsniptechnology.com says...

[snip]

My Spice sims are often wrong initially, precisely because there are
basically no consequences to running the first try without much
checking. That is of course dangerous; we don't want to base a
hardware design on a sim that runs and makes pretty graphs but is
fundamentally wrong.

I just got bitten by a 'feature' of LTSpice XVII; I don't remember IV
having this behaviour, but I don't have it installed any more:

If you make a tweak to a previously working circuit, which makes the
netlister fail (in my case it was an inductor shorted to ground at both
ends), it will pop up a warning to this effect, and then *run the sim
using the old netlist*.
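For reference, the trigger is easy to reproduce. A hypothetical minimal netlist fragment (component names are mine) with an inductor tied to ground at both ends, which the netlister rejects:

```
* hypothetical minimal example: inductor shorted to ground at both ends
V1 in 0 DC 5
R1 in 0 1k
L1 0 0 10u   ; both terminals on node 0 -> netlister failure
.op
.end
```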

Well, don't ignore the warning.

Yep. Although it looks like 'warning' should be 'fatal error'. I'm
pretty sure LT4 would refuse to run the sim at all with no valid
netlist, rather than use the last-known-good one.

It will then allow you to probe around on the new schematic, but the
schematic nodes are mapped onto the old netlist, so depending on what
you tweaked, what is displayed can range from slightly wrong to flat-out
impossible.

Anyone else seen this?

LT4 would complain about, say, one end of a cap floating, or your
shorted inductor. The new one doesn't. I prefer it the new way.

I haven't seen the old/new netlist thing that you describe.

Another recent one was a boost switcher. I had that working OK, then
added a linear post-regulator, using a model from TI. This added a
bunch of extra nodes to the netlist. The TI model turned out to have a
typo (the warning said something along the lines of 'diode D_XYZ
undefined. Using ideal diode model instead.').

The sim appeared to run OK anyway, but the FET dissipation trace was now
multiplying the wrong node voltages/currents (node names from the old
netlist), and it was out by an order of magnitude. Once I found the typo
and fixed it, everything ran fine.
I suppose labelling all the nodes would also have caught that one.

I found LT4 more comfortable to use. Still, I can't complain about the
price. We have a bunch of PSPICE licenses (came bundled with OrCAD), but
LTSPICE is good enough that I've never even tried running PSPICE.
 
On Tuesday, January 14, 2020 at 4:28:12 AM UTC-5, Clifford Heath wrote:
On 14/1/20 5:23 pm, Rick C wrote:
On Tuesday, January 14, 2020 at 1:11:15 AM UTC-5, Clifford Heath wrote:

Your comments lack nuance.

The definition of "all-correct" can only be made with reference to a
Turing machine that implements it.


^^^^^ This. You fail to understand this. It invalidates the rest of your
ignorant complaints.

I especially like the way you toss totally unsupported statements into the conversation. I expect you have no real familiarity with the process of developing code using requirements.


> End. You clearly don't get it, and I'm not going to waste more time on you.

I think that would please us both.

--

Rick C.

++- Get 1,000 miles of free Supercharging
++- Tesla referral code - https://ts.la/richard11209
 
On Tue, 14 Jan 2020 08:19:27 +0000, Tom Gardner
<spamjunk@blueyonder.co.uk> said:
On 14/01/20 02:43, jlarkin@highlandsniptechnology.com wrote:
On Mon, 13 Jan 2020 19:26:26 +0000, Tom Gardner
<spamjunk@blueyonder.co.uk> wrote:

On 13/01/20 17:41, John Larkin wrote:
On Mon, 13 Jan 2020 16:40:55 +0000, Tom Gardner
<spamjunk@blueyonder.co.uk> wrote:

[snip]

I've turned down job offers where the HR droids couldn't
deal with someone that successfully straddles both
hardware and software worlds.

I interviewed with HP once. The guy looked at my resume and said "The
first thing you need to do is decide whether you're an engineer or a
programmer", so I walked out.

HP hired me because I was both. Various parts of HP were
very different from each other.

HP's tape drives division was my first 'proper' gig as an EE. They
didn't pigeon-hole people either, the hardware guys could write their
own test code if needed and the embedded software guys could debug their
code using a scope.

Next job was a small startup where everybody had to be a jack-of-all-
trades. Later on, as we grew and took on more people, it came as a bit
of a shock that the 'straddlers' were a tiny minority. It's something
we still struggle with when trying to hire people.

One big company that we work with has, I've heard, 12 levels of
engineering management. If an EE group wants a hole drilled in a
chassis, the request has to propagate up 5 or six management levels,
and then back down, to get to a mechanical engineer. Software is
similarly insulated. Any change fires off an enormous volume of
paperwork and customer "copy exact" notices, so most things just never
get done.

So you /do/ understand how programmers couldn't be
held responsible for implementing the spec.

At HP, if I had been promoted 6 times, I would
have been the CEO.

Recipe for disaster.

Yup, as we've seen.
 
On 14/1/20 5:23 pm, Rick C wrote:
On Tuesday, January 14, 2020 at 1:11:15 AM UTC-5, Clifford Heath wrote:

Your comments lack nuance.

The definition of "all-correct" can only be made with reference to a
Turing machine that implements it.

^^^^^ This. You fail to understand this. It invalidates the rest of your
ignorant complaints.

End. You clearly don't get it, and I'm not going to waste more time on you.

Thus, the finished code is the (first and only) finished specification.

Corollary: If a specification is all-correct and all-inclusive, a
compiler can be written that implements it precisely.

Sorry, that is simply wrong. You can specify the behavior of a module without enough detail for a compiler to spit out code, unless that compiler had a vast array of tools and libraries at its disposal. So I guess in theory a compiler could be written, but it would be a ginormous task, akin to compiling the English language into computer code.

So, in either case, possible or not, your statement is of no practical value.


The trouble is, no-one can tell whether the specification meets the
high-level goals of the system - not even the programmer usually.

Huh???


The reason for "formal methods" is to be able to state the "high level
goals" in a precise way, and to show that the code cannot fail to meet
those goals.

What does that have to do with your compiler statement? First you say specifications can't be fully complete, and then you say they can be written "in a precise way". Are you saying "precise" as in easy to code but not necessarily complete?
 
On 14/1/20 8:44 pm, Rick C wrote:
On Tuesday, January 14, 2020 at 4:28:12 AM UTC-5, Clifford Heath wrote:
On 14/1/20 5:23 pm, Rick C wrote:
On Tuesday, January 14, 2020 at 1:11:15 AM UTC-5, Clifford Heath wrote:

Your comments lack nuance.

The definition of "all-correct" can only be made with reference to a
Turing machine that implements it.


^^^^^ This. You fail to understand this. It invalidates the rest of your
ignorant complaints.

I expect you have no real familiarity with the process of developing code using requirements.

You'd be an idiot then.

Literally thousands of projects. Hell, I have an archive here of over
three hundred projects' documents (several from each project, starting
with requirements) from projects where I participated in or led the
engineering teams, and those are just from the 1990s (one of my four
decades in the software industry).

Much of that code is still running on tens of millions of machines
around the globe, coordinating systems management for mission-critical
functions in the world's largest enterprises.

Naah, I know nothing about software dev. Nothing you could learn anyhow.
 
