design rigor: electronics vs. software

On Tuesday, January 14, 2020 at 5:06:40 AM UTC-5, Clifford Heath wrote:
On 14/1/20 8:44 pm, Rick C wrote:
On Tuesday, January 14, 2020 at 4:28:12 AM UTC-5, Clifford Heath wrote:
On 14/1/20 5:23 pm, Rick C wrote:
On Tuesday, January 14, 2020 at 1:11:15 AM UTC-5, Clifford Heath wrote:

Your comments lack nuance.

The definition of "all-correct" can only be made with reference to a
Turing machine that implements it.


^^^^^ This. You fail to understand this. It invalidates the rest of your
ignorant complaints.

I expect you have no real familiarity with the process of developing code using requirements.

You'd be an idiot then.

Literally thousands of projects. Hell I have an archive here of over
three hundred projects' documents (several from each project, starting
with requirements) on which I participated or led the engineering teams,
and those are just from the 1990s (one of my four decades in the
software industry).

Much of that code is still running on tens of millions of machines
around the globe, coordinating systems management for mission-critical
functions in the world's largest enterprises.

Naah, I know nothing about software dev. Nothing you could learn anyhow.

I think your words speak volumes more than your resume.

I thought you were done talking to me???

BTW, you never provided any support for your statement about Turing machines. Do you have anything on that in your hundreds of project folders? I thought not.

This sort of discussion is pretty simple. If you make a claim, you should be able to support it with something more than "I'm an expert". I really don't get all the bluster when all you needed to do was provide some basis for the statement. But instead you chose to insult me on a personal level.

Yeah, I'm sure you were quite the project leader.

--

Rick C.

+++ Get 1,000 miles of free Supercharging
+++ Tesla referral code - https://ts.la/richard11209
 
On 14/1/20 9:17 pm, Rick C wrote:
On Tuesday, January 14, 2020 at 5:06:40 AM UTC-5, Clifford Heath wrote:
On 14/1/20 8:44 pm, Rick C wrote:
On Tuesday, January 14, 2020 at 4:28:12 AM UTC-5, Clifford Heath wrote:
On 14/1/20 5:23 pm, Rick C wrote:
On Tuesday, January 14, 2020 at 1:11:15 AM UTC-5, Clifford Heath wrote:

Your comments lack nuance.

The definition of "all-correct" can only be made with reference to a
Turing machine that implements it.


^^^^^ This. You fail to understand this. It invalidates the rest of your
ignorant complaints.

I expect you have no real familiarity with the process of developing code using requirements.

You'd be an idiot then.

Literally thousands of projects. Hell I have an archive here of over
three hundred projects' documents (several from each project, starting
with requirements) on which I participated or led the engineering teams,
and those are just from the 1990s (one of my four decades in the
software industry).

Much of that code is still running on tens of millions of machines
around the globe, coordinating systems management for mission-critical
functions in the world's largest enterprises.

Naah, I know nothing about software dev. Nothing you could learn anyhow.

I think your words speak volumes more than your resume.

I thought you were done talking to me???

I said I was done trying to teach you the theory of computation.

> Yeah, I'm sure you were quite the project leader.

Principal engineer. Held that title in that company for 12 of the 17
years I was there. I was also a founder.
 
Clifford Heath <no.spam@please.net> wrote in news:3YcTF.32875$Mc.7726
@fx35.iad:

Corollary: If a specification is all-correct and all-inclusive, a
compiler can be written that implements it precisely.
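The quoted corollary can be illustrated with a toy example: a specification that really is all-correct and all-inclusive is itself executable, because an implementation can be derived from it mechanically. This is a hedged sketch only; the predicate and the brute-force search are invented for illustration.

```python
# Toy illustration of the corollary above: when the spec is complete and
# precise, an implementation can be derived from the spec alone. Here the
# "spec" for sorting is a predicate, and the "compiler" is a (hopelessly
# slow) search for the output the spec demands.
from collections import Counter
from itertools import permutations

def meets_spec(inp, out):
    """The complete spec of sorting: out has exactly the elements of
    inp (same multiset) and is in ascending order."""
    return (Counter(inp) == Counter(out)
            and all(a <= b for a, b in zip(out, out[1:])))

def implement(inp):
    """Brute-force 'implementation' obtained purely from the spec."""
    for candidate in permutations(inp):
        if meets_spec(inp, list(candidate)):
            return list(candidate)
```

For [3, 1, 2] the search returns [1, 2, 3]; the point is not efficiency but that nothing beyond the spec was needed to get a correct implementation.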

FPGA programming (as one example) is the programmer telling the
hardware what switches he wants it to use, in what order, etc. So,
just like a hardware-built device such as a clock that signals on the
hour but is 100% hardware driven, electronics can be built with or
without 'processors' and still have events get 'processed'.

Programming (and the electronics behind it) is just our refinement
of Frankenstein's big double blade throw switch on the wall.

Programming against a fault condition in a mission critical setting
is rife with problems.

Like the attitude indicator. Why would one even freeze up? It's pretty
cold up there in that airstream. So build a unit that has built-in
mechanical protections to ensure it never stops
working and never gives a false reading based on a failed mechanical
aspect of its operation. Easy to say.

I suggested maybe heating the thing internally (the part that is
inside the aircraft skin) and placing a mechanism in there that
allows it to be 'swung' through its entire range of motion as a test
of freedom of movement, and then released for use again. It could have
sensors and a computer watching the test run and looking at bearing
temps, etc. It would then decide whether the unit is good and can be relied
on for an accurate reading, unless there is a bird hanging off the
thing outside, or it got sheared off clean yet still could be
rotated in the test: the two most extreme failure modes.
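The self-test idea above can be sketched as a simple pass/fail routine: drive the mechanism through its range, then check range coverage and bearing temperature before returning the unit to service. This is a hypothetical sketch; the function name, thresholds, and sensor inputs are all invented, not taken from any real avionics unit.

```python
# Hypothetical built-in-test (BIT) sketch for the self-checking attitude
# indicator suggested above: sweep the mechanism through its full range,
# watch the readings, and declare the unit good or bad. All names and
# thresholds here are invented for illustration.

def run_attitude_bit(sweep_readings, bearing_temp_c,
                     expected_range=(-90.0, 90.0),
                     max_bearing_temp_c=85.0):
    """Return (passed, reasons). sweep_readings is the list of angles
    reported while the mechanism is driven through its range."""
    reasons = []
    if not sweep_readings:
        reasons.append("no movement detected")          # stuck solid
    else:
        lo, hi = min(sweep_readings), max(sweep_readings)
        if lo > expected_range[0] or hi < expected_range[1]:
            reasons.append("did not reach full range")  # partly frozen
    if bearing_temp_c > max_bearing_temp_c:
        reasons.append("bearing overtemperature")       # failing bearing
    return (not reasons, reasons)
```

A full sweep at normal bearing temperature passes; an empty or truncated sweep, or an overheated bearing, flags the unit as unreliable.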
 
On Tue, 14 Jan 2020 08:19:27 +0000, Tom Gardner
<spamjunk@blueyonder.co.uk> wrote:

On 14/01/20 02:43, jlarkin@highlandsniptechnology.com wrote:
On Mon, 13 Jan 2020 19:26:26 +0000, Tom Gardner
spamjunk@blueyonder.co.uk> wrote:

On 13/01/20 17:41, John Larkin wrote:
On Mon, 13 Jan 2020 16:40:55 +0000, Tom Gardner
spamjunk@blueyonder.co.uk> wrote:

On 13/01/20 15:58, John Larkin wrote:
On Mon, 13 Jan 2020 09:04:20 +0000, Tom Gardner
spamjunk@blueyonder.co.uk> wrote:

On 13/01/20 01:07, John Larkin wrote:
On Sun, 12 Jan 2020 16:58:40 +0000, Martin Brown
'''newspam'''@nezumi.demon.co.uk> wrote:

On 11/01/2020 14:57, jlarkin@highlandsniptechnology.com wrote:
On 11 Jan 2020 05:57:59 -0800, Winfield Hill <winfieldhill@yahoo.com
wrote:

Rick C wrote...

Then your very example of the Boeing plane is wrong
because no one has said the cause of the accident
was improperly coded software.

Yes, it was an improper spec, with dangerous reliance
on poor hardware.

If code kills people, it was improperly coded.

Not necessarily. The code written may well have exactly implemented the
algorithm(s) that the clowns supervised by monkeys specified. It isn't
the job of programmers to double check the workings of the people who do
the detailed calculations of aerodynamic force vectors and torques.

It is not the programmer's fault if the systems engineering, failure
analysis and aerodynamics calculations are incorrect in some way!

The management of two AOA sensors was insane. Fatal, actually. A
programmer should understand simple stuff like that.

It is unrealistic to expect programmers to understand sensor
reliability. That is the job of the people specifying the
system design and encoding that in the system specification
and the software specification.

Programmers would have zero ability to deviate from implementing
the software spec, full stop. If they did knowingly deviate, it
would be a career ending decision - at best.

Aerospace engineers have lost their pension for far less
serious deviations, even though they had zero consequences.


https://philip.greenspun.com/blog/2019/03/21/optional-angle-of-attack-sensors-on-the-boeing-737-max/

Given dual sensors, why would any sane person decide to alternate
using one per flight?

Agreed. Especially given the poor reliability of AoA sensors.
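The alternative being argued for here (use both vanes and cross-check them every cycle, rather than trusting one per flight) is simple enough to sketch. This is an illustration of the general technique only, not the actual MCAS logic; the disagreement threshold is invented.

```python
# Minimal sketch of a two-sensor cross-check: read both AoA vanes and
# refuse to act on the data when they disagree, instead of trusting a
# single vane per flight. Illustrative only; the threshold is invented.

DISAGREE_LIMIT_DEG = 5.5  # hypothetical agreement threshold

def aoa_cross_check(left_deg, right_deg, limit=DISAGREE_LIMIT_DEG):
    """Return (valid, value). valid is False when the vanes disagree,
    in which case any automatic trim function should stand down."""
    if abs(left_deg - right_deg) > limit:
        return (False, None)  # disagreement: inhibit and alert the crew
    return (True, (left_deg + right_deg) / 2.0)  # agreement: average
```

With agreeing vanes the function yields a usable average; with one vane failed hard-over, it inhibits rather than acting on a single bad reading.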

The people that wrote and signed off that spec
bear a lot of responsibility


A programmer would have to be awfully thick to not object to that.

The programmer's job is to implement the spec, not to write it

They may have objected, and may have been overruled.

Have you worked in large software organisations?

Not in, but with. Most "just do our jobs", which means that they don't
care to learn much about the process that they are implementing.

Seen that, and it even occurs within software world:
-analysts lob spec over wall to developers
-developers lob code over wall to testers
-developers lob tested code over wall to operations
-rinse and repeat, slowly

"Devops" tries to avoid that inefficiency.


And the hardware guys don't have much insight or visibility into the
software. Often, not much control either, in a large organization
where things are very firewalled.

I've turned down job offers where the HR droids couldn't
deal with someone that successfully straddles both
hardware and software worlds.

I interviewed with HP once. The guy looked at my resume and said "The
first thing you need to do is decide whether you're an engineer or a
programmer", so I walked out.

HP hired me because I was both. Various parts of HP were
very different from each other.


One big company that we work with has, I've heard, 12 levels of
engineering management. If an EE group wants a hole drilled in a
chassis, the request has to propagate up five or six management levels,
and then back down, to get to a mechanical engineer. Software is
similarly insulated. Any change fires off an enormous volume of
paperwork and customer "copy exact" notices, so most things just never
get done.

So you /do/ understand how programmers couldn't be
held responsible for implementing the spec.

I understand that some people are content to just do their jobs and
cash their checks.

I recently discovered that one group has been responsible, for almost
20 years, for a bunch of instrumentation of which over 80% doesn't
work, and which is not used. But they still get their paychecks, so
don't rock the boat.




At HP, if I had been promoted 6 times, I would
have been the CEO





Recipe for disaster.

Yup, as we've seen.

--

John Larkin Highland Technology, Inc

The cork popped merrily, and Lord Peter rose to his feet.
"Bunter", he said, "I give you a toast. The triumph of Instinct over Reason"
 
On Tue, 14 Jan 2020 09:15:36 -0000, RBlack <news@rblack01.plus.com>
wrote:

In article <o7bq1f54cmsvthkp8om1tqa3jrbau9ko5r@4ax.com>,
jlarkin@highlandsniptechnology.com says...

On Mon, 13 Jan 2020 09:27:19 -0000, RBlack <news@rblack01.plus.com
wrote:

In article <d0nj1f50mabot5tnfooihn6o50up57n22b@4ax.com>,
jlarkin@highlandsniptechnology.com says...

[snip]

My Spice sims are often wrong initially, precisely because there are
basically no consequences to running the first try without much
checking. That is of course dangerous; we don't want to base a
hardware design on a sim that runs and makes pretty graphs but is
fundamentally wrong.

I just got bitten by a 'feature' of LTSpice XVII; I don't remember IV
having this behaviour, but I don't have it installed any more:

If you make a tweak to a previously working circuit, which makes the
netlister fail (in my case it was an inductor shorted to ground at both
ends), it will pop up a warning to this effect, and then *run the sim
using the old netlist*.

Well, don't ignore the warning.

Yep. Although it looks like 'warning' should be 'fatal error'. I'm
pretty sure LT4 would refuse to run the sim at all with no valid
netlist, rather than use the last-known-good one.



It will then allow you to probe around on the new schematic, but the
schematic nodes are mapped onto the old netlist, so depending on what
you tweaked, what is displayed can range from slightly wrong to flat-out
impossible.

Anyone else seen this?

LT4 would complain about, say, one end of a cap floating, or your
shorted inductor. The new one doesn't. I prefer it the new way.

I haven't seen the old/new netlist thing that you describe.

Another recent one was a boost switcher. I had that working OK, then
added a linear post-regulator, using a model from TI. This added a
bunch of extra nodes to the netlist. The TI model turned out to have a
typo (the warning said something along the lines of 'diode D_XYZ
undefined. Using ideal diode model instead.').

The sim appeared to run OK anyway, but the FET dissipation trace was now
multiplying the wrong node voltages/currents (node names from the old
netlist) and it was out by an order of magnitude. Once I found the typo
and fixed it everything ran fine.
I suppose labelling all the nodes would also have caught that one.

I found LT4 more comfortable to use. Still, I can't complain about the
price. We have a bunch of PSPICE licenses (they came bundled with OrCAD),
but LTSPICE is good enough that I've never even tried running PSPICE.

When I get a warning, I fix it before I run the sim. That would
explain why I haven't seen the old-netlist-runs thing.

I do label a lot of nodes, but just the interesting ones, not all.

I need to force myself to check all the named nodes when I copy/paste
bits of a circuit. It duplicates all named nodes, which creates some
interesting shorts.
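The paste hazard described above (every named node in the copied block keeps its name, silently merging nets) can at least be surfaced by listing labels that appear at more than one location, so the unintended ones can be reviewed before simulating. A hypothetical sketch, assuming the schematic's labels are available as (name, location) pairs; legitimate multi-point labels will also be listed, which is why this is a review aid rather than an error check.

```python
# Sketch: flag node labels attached at more than one location on the
# schematic. After a copy/paste, each such label merges two nets into
# one, which is exactly how the "interesting shorts" above arise.
from collections import defaultdict

def find_duplicate_labels(labels):
    """labels: iterable of (node_name, (x, y)) pairs.
    Returns {name: [locations]} for names used at more than one spot."""
    seen = defaultdict(list)
    for name, loc in labels:
        seen[name].append(loc)
    return {name: locs for name, locs in seen.items() if len(locs) > 1}
```

Running this over the label list after a paste shows at a glance which names now tie two parts of the circuit together.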



--

John Larkin Highland Technology, Inc

The cork popped merrily, and Lord Peter rose to his feet.
"Bunter", he said, "I give you a toast. The triumph of Instinct over Reason"
 
On Tue, 14 Jan 2020 21:06:33 +1100, Clifford Heath
<no.spam@please.net> wrote:

On 14/1/20 8:44 pm, Rick C wrote:
On Tuesday, January 14, 2020 at 4:28:12 AM UTC-5, Clifford Heath wrote:
On 14/1/20 5:23 pm, Rick C wrote:
On Tuesday, January 14, 2020 at 1:11:15 AM UTC-5, Clifford Heath wrote:

Your comments lack nuance.

The definition of "all-correct" can only be made with reference to a
Turing machine that implements it.


^^^^^ This. You fail to understand this. It invalidates the rest of your
ignorant complaints.

I expect you have no real familiarity with the process of developing code using requirements.

You'd be an idiot then.

Literally thousands of projects. Hell I have an archive here of over
three hundred projects' documents (several from each project, starting
with requirements) on which I participated or led the engineering teams,
and those are just from the 1990s (one of my four decades in the
software industry).

Much of that code is still running on tens of millions of machines
around the globe, coordinating systems management for mission-critical
functions in the world's largest enterprises.

Naah, I know nothing about software dev. Nothing you could learn anyhow.

Hey, you said that you wouldn't waste more time on him.



--

John Larkin Highland Technology, Inc

The cork popped merrily, and Lord Peter rose to his feet.
"Bunter", he said, "I give you a toast. The triumph of Instinct over Reason"
 
jlarkin@highlandsniptechnology.com wrote:

I do label a lot of nodes, but just the interesting ones, not all.

I need to force myself to check all the named nodes when I copy/paste
bits of a circuit. It duplicates all named nodes, which creates some
interesting shorts.

When you name the nodes, use names made from adjacent components, such as
R1C1, Q1B, U1N, etc.

When you copy and paste, the component reference designations will change.
You can easily find the erroneous node names since they won't match the
adjacent components.
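The convention above also lends itself to an automatic check: a stale name is one whose embedded reference designators no longer appear among the components actually connected to the node. A hypothetical sketch; the mapping from node names to connected refdes is assumed to come from the netlist.

```python
# Sketch of the convention above: node names built from adjacent refdes
# (R1C1, Q1B, U1N, ...) can be validated automatically. After a paste,
# the refdes change but the node names don't, so any name whose embedded
# refdes no longer touch the node is flagged as stale.
import re

def stale_node_names(node_to_components):
    """node_to_components: map of node name -> set of refdes actually
    connected to that node. Returns names left over from a copy/paste."""
    stale = []
    for name, refdes in node_to_components.items():
        embedded = re.findall(r"[A-Z]+\d+", name)  # refdes inside the name
        if embedded and not all(r in refdes for r in embedded):
            stale.append(name)
    return stale
```

So a pasted node still named "R1C1" but now connected to R3 and C3 gets flagged, while names that still match their neighbours pass.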
 
On 2020-01-14 11:39, jlarkin@highlandsniptechnology.com wrote:
On Tue, 14 Jan 2020 08:19:27 +0000, Tom Gardner
spamjunk@blueyonder.co.uk> wrote:

On 14/01/20 02:43, jlarkin@highlandsniptechnology.com wrote:
On Mon, 13 Jan 2020 19:26:26 +0000, Tom Gardner
spamjunk@blueyonder.co.uk> wrote:

On 13/01/20 17:41, John Larkin wrote:
On Mon, 13 Jan 2020 16:40:55 +0000, Tom Gardner
spamjunk@blueyonder.co.uk> wrote:

On 13/01/20 15:58, John Larkin wrote:
On Mon, 13 Jan 2020 09:04:20 +0000, Tom Gardner
spamjunk@blueyonder.co.uk> wrote:

On 13/01/20 01:07, John Larkin wrote:
On Sun, 12 Jan 2020 16:58:40 +0000, Martin Brown
'''newspam'''@nezumi.demon.co.uk> wrote:

On 11/01/2020 14:57, jlarkin@highlandsniptechnology.com wrote:
On 11 Jan 2020 05:57:59 -0800, Winfield Hill <winfieldhill@yahoo.com
wrote:

Rick C wrote...

Then your very example of the Boeing plane is wrong
because no one has said the cause of the accident
was improperly coded software.

Yes, it was an improper spec, with dangerous reliance
on poor hardware.

If code kills people, it was improperly coded.

Not necessarily. The code written may well have exactly implemented the
algorithm(s) that the clowns supervised by monkeys specified. It isn't
the job of programmers to double check the workings of the people who do
the detailed calculations of aerodynamic force vectors and torques.

It is not the programmer's fault if the systems engineering, failure
analysis and aerodynamics calculations are incorrect in some way!

The management of two AOA sensors was insane. Fatal, actually. A
programmer should understand simple stuff like that.

It is unrealistic to expect programmers to understand sensor
reliability. That is the job of the people specifying the
system design and encoding that in the system specification
and the software specification.

Programmers would have zero ability to deviate from implementing
the software spec, full stop. If they did knowingly deviate, it
would be a career ending decision - at best.

Aerospace engineers have lost their pension for far less
serious deviations, even though they had zero consequences.


https://philip.greenspun.com/blog/2019/03/21/optional-angle-of-attack-sensors-on-the-boeing-737-max/

Given dual sensors, why would any sane person decide to alternate
using one per flight?

Agreed. Especially given the poor reliability of AoA sensors.

The people that wrote and signed off that spec
bear a lot of responsibility


A programmer would have to be awfully thick to not object to that.

The programmer's job is to implement the spec, not to write it

They may have objected, and may have been overruled.

Have you worked in large software organisations?

Not in, but with. Most "just do our jobs", which means that they don't
care to learn much about the process that they are implementing.

Seen that, and it even occurs within software world:
-analysts lob spec over wall to developers
-developers lob code over wall to testers
-developers lob tested code over wall to operations
-rinse and repeat, slowly

"Devops" tries to avoid that inefficiency.


And the hardware guys don't have much insight or visibility into the
software. Often, not much control either, in a large organization
where things are very firewalled.

I've turned down job offers where the HR droids couldn't
deal with someone that successfully straddles both
hardware and software worlds.

I interviewed with HP once. The guy looked at my resume and said "The
first thing you need to do is decide whether you're an engineer or a
programmer", so I walked out.

HP hired me because I was both. Various parts of HP were
very different from each other.


One big company that we work with has, I've heard, 12 levels of
engineering management. If an EE group wants a hole drilled in a
chassis, the request has to propagate up five or six management levels,
and then back down, to get to a mechanical engineer. Software is
similarly insulated. Any change fires off an enormous volume of
paperwork and customer "copy exact" notices, so most things just never
get done.

So you /do/ understand how programmers couldn't be
held responsible for implementing the spec.

I understand that some people are content to just do their jobs and
cash their checks.

I recently discovered that one group has been responsible, for almost
20 years, for a bunch of instrumentation of which over 80% doesn't
work, and which is not used. But they still get their paychecks, so
don't rock the boat.

I'd rather cut my own throat than do that for 20 years. Sometimes my
stuff doesn't work either, but that's due to it being insanely hard. A
lot of the insanely hard stuff works really well though, which makes it
all worthwhile. (I've often said that my ideal project is building a
computer starting with sand--it's a tendency I have to fight.)

Client work almost always succeeds, and the occasional failures are
mostly due to the customer's prevarication, such as taking my
proof-of-concept system, giving it to a CE outfit, and then pulling me
back in to attempt to fix the CE's mess--of course at the last minute,
when they've almost run out of money. That's happened a couple of
times, so I try very hard to discourage it. (The two were the
transcutaneous blood glucose/alcohol system and the blood-spot detector
for hens' eggs.)

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On 14/01/20 16:39, jlarkin@highlandsniptechnology.com wrote:
On Tue, 14 Jan 2020 08:19:27 +0000, Tom Gardner
spamjunk@blueyonder.co.uk> wrote:

On 14/01/20 02:43, jlarkin@highlandsniptechnology.com wrote:
On Mon, 13 Jan 2020 19:26:26 +0000, Tom Gardner
spamjunk@blueyonder.co.uk> wrote:

On 13/01/20 17:41, John Larkin wrote:
On Mon, 13 Jan 2020 16:40:55 +0000, Tom Gardner
spamjunk@blueyonder.co.uk> wrote:

On 13/01/20 15:58, John Larkin wrote:
On Mon, 13 Jan 2020 09:04:20 +0000, Tom Gardner
spamjunk@blueyonder.co.uk> wrote:

On 13/01/20 01:07, John Larkin wrote:
On Sun, 12 Jan 2020 16:58:40 +0000, Martin Brown
'''newspam'''@nezumi.demon.co.uk> wrote:

On 11/01/2020 14:57, jlarkin@highlandsniptechnology.com wrote:
On 11 Jan 2020 05:57:59 -0800, Winfield Hill <winfieldhill@yahoo.com
wrote:

Rick C wrote...

Then your very example of the Boeing plane is wrong
because no one has said the cause of the accident
was improperly coded software.

Yes, it was an improper spec, with dangerous reliance
on poor hardware.

If code kills people, it was improperly coded.

Not necessarily. The code written may well have exactly implemented the
algorithm(s) that the clowns supervised by monkeys specified. It isn't
the job of programmers to double check the workings of the people who do
the detailed calculations of aerodynamic force vectors and torques.

It is not the programmer's fault if the systems engineering, failure
analysis and aerodynamics calculations are incorrect in some way!

The management of two AOA sensors was insane. Fatal, actually. A
programmer should understand simple stuff like that.

It is unrealistic to expect programmers to understand sensor
reliability. That is the job of the people specifying the
system design and encoding that in the system specification
and the software specification.

Programmers would have zero ability to deviate from implementing
the software spec, full stop. If they did knowingly deviate, it
would be a career ending decision - at best.

Aerospace engineers have lost their pension for far less
serious deviations, even though they had zero consequences.


https://philip.greenspun.com/blog/2019/03/21/optional-angle-of-attack-sensors-on-the-boeing-737-max/

Given dual sensors, why would any sane person decide to alternate
using one per flight?

Agreed. Especially given the poor reliability of AoA sensors.

The people that wrote and signed off that spec
bear a lot of responsibility


A programmer would have to be awfully thick to not object to that.

The programmer's job is to implement the spec, not to write it

They may have objected, and may have been overruled.

Have you worked in large software organisations?

Not in, but with. Most "just do our jobs", which means that they don't
care to learn much about the process that they are implementing.

Seen that, and it even occurs within software world:
-analysts lob spec over wall to developers
-developers lob code over wall to testers
-developers lob tested code over wall to operations
-rinse and repeat, slowly

"Devops" tries to avoid that inefficiency.


And the hardware guys don't have much insight or visibility into the
software. Often, not much control either, in a large organization
where things are very firewalled.

I've turned down job offers where the HR droids couldn't
deal with someone that successfully straddles both
hardware and software worlds.

I interviewed with HP once. The guy looked at my resume and said "The
first thing you need to do is decide whether you're an engineer or a
programmer", so I walked out.

HP hired me because I was both. Various parts of HP were
very different from each other.


One big company that we work with has, I've heard, 12 levels of
engineering management. If an EE group wants a hole drilled in a
chassis, the request has to propagate up five or six management levels,
and then back down, to get to a mechanical engineer. Software is
similarly insulated. Any change fires off an enormous volume of
paperwork and customer "copy exact" notices, so most things just never
get done.

So you /do/ understand how programmers couldn't be
held responsible for implementing the spec.

I understand that some people are content to just do their jobs and
cash their checks.

So, what is in the job description of the programmers
under consideration? I'll bet the prime statement is
"implement the specification using the defined processes"


I recently discovered that one group has been responsible, for almost
20 years, for a bunch of instrumentation of which over 80% doesn't
work, and which is not used. But they still get their paychecks, so
don't rock the boat.

Nothing new there!
 
jlarkin@highlandsniptechnology.com wrote:

I do label a lot of nodes, but just the interesting ones, not all.

I need to force myself to check all the named nodes when I copy/paste
bits of a circuit. It duplicates all named nodes, which creates some
interesting shorts.

When you name a node, use names made from adjacent components, such as R1C1,
Q1B, U1N, etc.

When you copy and paste, the component reference designations will change, but
the named nodes will remain the same. You can easily find them since they
won't match the new reference designations.
 
On Tue, 14 Jan 2020 12:35:56 -0500, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

I recently discovered that one group has been responsible, for almost
20 years, for a bunch of instrumentation of which over 80% doesn't
work, and which is not used. But they still get their paychecks, so
don't rock the boat.

I'd rather cut my own throat than do that for 20 years. Sometimes my
stuff doesn't work either, but that's due to it being insanely hard. A
lot of the insanely hard stuff works really well though, which makes it
all worthwhile. (I've often said that my ideal project is building a
computer starting with sand--it's a tendency I have to fight.)

When our stuff doesn't work, it's usually because of some dumb
mistake, which we can fix.

The other kind of "failure" is when our stuff works, but the
customer's system or product doesn't work, or doesn't sell, or after
we do it, they discover that they can do it themselves.



Client work almost always succeeds, and the occasional failures are
mostly due to the customer's prevarication, such as taking my
proof-of-concept system, giving it to a CE outfit, and then pulling me
back in to attempt to fix the CE's mess--of course at the last minute,
when they've almost run out of money. That's happened a couple of
times, so I try very hard to discourage it. (The two were the
transcutaneous blood glucose/alcohol system and the blood-spot detector
for hens' eggs.)

Cheers

Phil Hobbs

--

John Larkin Highland Technology, Inc

The cork popped merrily, and Lord Peter rose to his feet.
"Bunter", he said, "I give you a toast. The triumph of Instinct over Reason"
 
On 14/01/20 17:35, Phil Hobbs wrote:
On 2020-01-14 11:39, jlarkin@highlandsniptechnology.com wrote:
On Tue, 14 Jan 2020 08:19:27 +0000, Tom Gardner
spamjunk@blueyonder.co.uk> wrote:

On 14/01/20 02:43, jlarkin@highlandsniptechnology.com wrote:
On Mon, 13 Jan 2020 19:26:26 +0000, Tom Gardner
spamjunk@blueyonder.co.uk> wrote:

On 13/01/20 17:41, John Larkin wrote:
On Mon, 13 Jan 2020 16:40:55 +0000, Tom Gardner
spamjunk@blueyonder.co.uk> wrote:

On 13/01/20 15:58, John Larkin wrote:
On Mon, 13 Jan 2020 09:04:20 +0000, Tom Gardner
spamjunk@blueyonder.co.uk> wrote:

On 13/01/20 01:07, John Larkin wrote:
On Sun, 12 Jan 2020 16:58:40 +0000, Martin Brown
'''newspam'''@nezumi.demon.co.uk> wrote:

On 11/01/2020 14:57, jlarkin@highlandsniptechnology.com wrote:
On 11 Jan 2020 05:57:59 -0800, Winfield Hill <winfieldhill@yahoo.com
wrote:

Rick C wrote...

Then your very example of the Boeing plane is wrong
because no one has said the cause of the accident
was improperly coded software.

Yes, it was an improper spec, with dangerous reliance
on poor hardware.

If code kills people, it was improperly coded.

Not necessarily. The code written may well have exactly implemented the
algorithm(s) that the clowns supervised by monkeys specified. It isn't
the job of programmers to double check the workings of the people who do
the detailed calculations of aerodynamic force vectors and torques.

It is not the programmer's fault if the systems engineering, failure
analysis and aerodynamics calculations are incorrect in some way!

The management of two AOA sensors was insane. Fatal, actually. A
programmer should understand simple stuff like that.

It is unrealistic to expect programmers to understand sensor
reliability. That is the job of the people specifying the
system design and encoding that in the system specification
and the software specification.

Programmers would have zero ability to deviate from implementing
the software spec, full stop. If they did knowingly deviate, it
would be a career ending decision - at best.

Aerospace engineers have lost their pension for far less
serious deviations, even though they had zero consequences.


https://philip.greenspun.com/blog/2019/03/21/optional-angle-of-attack-sensors-on-the-boeing-737-max/


Given dual sensors, why would any sane person decide to alternate
using one per flight?

Agreed. Especially given the poor reliability of AoA sensors.

The people that wrote and signed off that spec
bear a lot of responsibility


A programmer would have to be awfully thick to not object to that.

The programmer's job is to implement the spec, not to write it

They may have objected, and may have been overruled.

Have you worked in large software organisations?

Not in, but with. Most "just do our jobs", which means that they don't
care to learn much about the process that they are implementing.

Seen that, and it even occurs within software world:
-analysts lob spec over wall to developers
-developers lob code over wall to testers
-developers lob tested code over wall to operations
-rinse and repeat, slowly

"Devops" tries to avoid that inefficiency.


And the hardware guys don't have much insight or visibility into the
software. Often, not much control either, in a large organization
where things are very firewalled.

I've turned down job offers where the HR droids couldn't
deal with someone that successfully straddles both
hardware and software worlds.

I interviewed with HP once. The guy looked at my resume and said "The
first thing you need to do is decide whether you're an engineer or a
programmer", so I walked out.

HP hired me because I was both. Various parts of HP were
very different from each other.


One big company that we work with has, I've heard, 12 levels of
engineering management. If an EE group wants a hole drilled in a
chassis, the request has to propagate up five or six management levels,
and then back down, to get to a mechanical engineer. Software is
similarly insulated. Any change fires off an enormous volume of
paperwork and customer "copy exact" notices, so most things just never
get done.

So you /do/ understand how programmers couldn't be
held responsible for implementing the spec.

I understand that some people are content to just do their jobs and
cash their checks.

I recently discovered that one group has been responsible, for almost
20 years, for a bunch of instrumentation of which over 80% doesn't
work, and which is not used. But they still get their paychecks, so
don't rock the boat.

I'd rather cut my own throat than do that for 20 years.

I very deliberately avoided the "20 years experience
being 1 year repeated 20 times" trap.

I use a specific example from my early career, and the
technique I used to avoid it, to sensitise youngsters
to the kind of decisions they may face in the future.

Herbert's “they’d chosen always the clear, safe course
that leads ever downward into stagnation.” was an
awful warning for me.

But in some companies, and worse industries, that can be
a very difficult trap to avoid.
 
On Sunday, January 12, 2020 at 7:33:40 PM UTC-5, Phil Hobbs wrote:
On 2020-01-12 19:13, jjhudak4@gmail.com wrote:
On Sunday, January 12, 2020 at 5:55:08 PM UTC-5, Phil Hobbs wrote:
On 2020-01-12 17:38, jjhudak4@gmail.com wrote:
On Sunday, January 12, 2020 at 3:32:06 PM UTC-5,
DecadentLinux...@decadence..org wrote:
Phil Hobbs <pcdhSpamMeSenseless@electrooptical.net> wrote in
news:fb4888b5-e96f-1145-85e8-bc382c9bdcdf@electrooptical.net:

Back in my one foray into big-system design, we design
engineers were always getting in the systems guys' faces
about various pieces of stupidity in the specs. It was all
pretty good-natured, and we wound up with the pain and
suffering distributed about equally.



That is how men get work done... even 'the programmers'. Very
well said, there.

That is like the old dig on 'the hourly help'.

Some programmers are very smart. Others not so much.

I guess choosing to go into it is not such a smart move so
they take a hit from the start. :)


If that is how men get work done, then they are not using
software and systems engineering techniques developed in the last
15-20 years, and their results are *still* subject to the same
types of errors. I do research and teach in this area. A number
of studies, one in particular, cite up to 70% of software
faults as being introduced on the LHS of the 'V' development model
(other software design lifecycle models have similar fault
percentages). A major issue is that most of these errors are
observed at integration time (software+software,
software+hardware). The cost of defect removal along the RHS of
the 'V' development model is anywhere from 50-200X the removal
cost along the LHS of the 'V'. (no wonder systems cost so
much)

Nice rant. Could you tell us more about the 'V' model?

The talk about errors in this thread are very high level and
most ppl have the mindset that they are thinking about errors at
the unit test level. There are numerous techniques developed to
identify and fix fault types throughout the entire development
lifecycle but regrettably a lot of them are not employed.

What sorts of techniques do you use to find problems in the
specifications?
Actually a large percentage of the errors are discovered and
fixed at that level. Errors of the type: units mismatch, variable
type mismatch, and a slew of concurrency issues aren't discovered
till integration time. Usually, at that point, there is a 'rush'
to get the system fielded. The horror stories and lessons learned
are well documented.

Yup. Leaving too much stuff for the system integration step is a
very very well-known way to fail.

IDK what exactly happened (yet) with the Boeing MAX development.
I do have info from some sources that cannot be disclosed at
this point. From what I've read, there were major mistakes made
from inception through implementation and integration. My
personal view, is that one should almost never (never?) place the
task on software to correct an inherently unstable airframe
design - it is putting a bandaid on the source of the problem.

It's commonly done, though, isn't it? I remember reading Ben
Rich's book on the Skunk Works, where he says that the F-117's very
squirrelly handling characteristics were fixed up in software to
make it a beautiful plane to fly. That was about 1980.

Another major issue is that the hazard analysis and fault tolerance
approach was not done at the system level (the redundancy approach
was pitiful, as was the *logic* used in implementing it, and the
concept behind it).

I do think that the better software engineers do have a more
holistic view of the system (hardware knowledge + system
operational knowledge) which will allow them to ask questions
when things don't 'seem right.' OTHO, the software engineers
should not go making assumptions about things and coding to those
assumptions. (It happens more than you think) It is the job of
the software architect to ensure that any development assumptions
are captured and specified in the software architecture.

In real life, though, it's super important to have two-way
communications during development, no? My large-system experience
was all hardware (the first civilian satellite DBS system,
1981-83), so things were quite a bit simpler than in a large
software-intensive system. I'd expect the need for bottom-up
communication to be greater now rather than less.

In studies I have looked at, the percentage of requirements
errors is somewhere between 30-40% of the overall number of
faults during the design lifecycle, and the 'industry standard'
approach to dealing with this problem is woefully
inadequate, despite techniques to detect and remove the errors. A
LOT of time is spent doing software requirements tracing as
opposed to doing verification of requirements. People argue that
one cannot verify the requirements until the system has been
built - which is complete BS but industry is very slow to change.
We have shown that using software architecture modeling addresses
a large percentage of system level problems early in the design
life cycle. We are trying to convince industry. Until change
happens, the parade of failures like the MAX will continue.

I'd love to hear more about that.

Cheers

Phil Hobbs


Sorry - I get a bit carried away on this topic... For requirements
engineering verification one can google: formal and semi-formal
requirements specification languages. RDAL and ReqSpec are ones I am
familiar with. Techniques to verify requirements include model
checking (google model checking), based on formal logics such as LTL
(linear temporal logic) and CTL (computation tree logic). One
constructs state models from requirements and uses model-checking
engines to analyze the structures. Model checking was actually used to
verify a bus protocol in the early 90s and found *lots* of problems
with the spec... that caused industry to 'wake up'. There are others
that work on code, but those are very much research-y efforts.
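For a flavor of the temporal operators underneath, here is a deliberately tiny illustration (nothing like a real LTL checker, which must reason about infinite executions, not single finite traces): the two basic operators, G ("always") and F ("eventually"), evaluated over a recorded trace.

```cpp
#include <functional>
#include <vector>

using Trace = std::vector<int>;           // one observed value per step
using Pred  = std::function<bool(int)>;

// G p ("always p"): p must hold in every state of the trace.
inline bool always(const Trace& t, const Pred& p) {
    for (int s : t) if (!p(s)) return false;
    return true;
}

// F p ("eventually p"): p must hold in at least one state.
inline bool eventually(const Trace& t, const Pred& p) {
    for (int s : t) if (p(s)) return true;
    return false;
}
```

A requirement like "the valve must eventually close" becomes `eventually(trace, closed)`; "pressure never exceeds the limit" becomes `always(trace, below_limit)`.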

Simulink has a model checker in its toolboxes (based on Promela); it
is quite good.

We advocate using architecture description languages (ADLs), formal
modeling notations for modeling different views of the architecture
and capturing properties of the system from which analysis can be done
(e.g. signal latency, variable format and property consistency,
processor utilization, bandwidth capacity, hazard analysis, etc.).
The one that I had a hand in designing is the Architecture Analysis
and Design Language (AADL); it is an SAE aerospace standard. If things
turn out well, it will be used on the next generation of helicopters
for the Army. We have been piloting its use on real systems for the
last 2-3 years, and on pilot studies for the last 10. For system
hazard analysis, google STPA (System-Theoretic Process Analysis),
spearheaded by Nancy Leveson of MIT (she has consulted for Boeing).

Yes, I've seen software applied to fix hw problems but assessing the
risk is complicated. The results can be catastrophic. Ok, off my
rant....


Thanks. I feel a bit like I'm drinking from a fire hose, which is
always my preferred way of learning stuff.... I'd be super interested
in an accessible presentation of methods for sanity-checking high-level
system requirements.

Being constitutionally lazy, I'm a huge fan of ways to work smarter
rather than harder. ;)

Cheers

Phil Hobbs


--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com

Phil, et al....
I meant to post some information wrt your inquiry about techniques to express and analyze requirements and about model checking but got OBE.
I found this slide set that rather concisely lays out the problem & approaches to express requirements.
https://www.iaria.org/conferences2018/filesICSEA18/RadekKoci_RequirementsModellingAndSoftwareSystemsImplementation.pdf

When I read through English-text requirements, I tend to do two things simultaneously: map them to some abstract component in the system hierarchy (because the written requirements are usually spread all over the system), and re-express them in a semi-formal or formal notation (usually semi-formal, such as state charts, ER diagrams, sequence diagrams, interaction diagrams). This gives me an idea of whether things are collectively coherent. I look for conflicts and omissions primarily.
I then take my understanding of the components and their interactions and construct an AADL model to understand who talks to whom and what data is communicated, then map requirements to the components and do analysis on the model (signal flows and latency are usually the top properties). I then try to tease out what the fault tolerance approach is and model that, keeping in mind error types, and look for error flows, mitigation approaches, etc.
If there is an area that is really confusing, I'll construct state models and use model checking. Some useful tools are nuSMV, http://nusmv.fbk.eu/
and SPIN http://spinroot.com/spin/whatispin.html
As a note, using model checking can be a challenge for the engineer. They have not seen anything like this in undergrad or grad school unless they lean more toward computer science. We looked at this issue 20 years ago and produced a number of reports that tried to package the approach as a tool kit, identifying types of analysis and patterns that could be recognized and more easily applied by an engineer unfamiliar with the area. They are somewhere on the SEI website.
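To make the state-model idea concrete, here is a toy sketch of what an explicit-state checker like NuSMV or SPIN does at its core (grossly simplified, and not their actual algorithms): enumerate every reachable state of a model and test a safety invariant in each one. The model here is a deliberately broken "spec": two lights that may each turn green independently, with the invariant that both must never be green at once.

```cpp
#include <queue>
#include <set>
#include <utility>

using State = std::pair<int, int>;  // (light A, light B): 0 = red, 1 = green

// The safety invariant: it is a violation if both lights are green.
inline bool violates_mutex(const State& s) {
    return s.first == 1 && s.second == 1;
}

// Breadth-first search over the reachable state space. Each light may
// toggle independently (the bug in this "spec"). Returns true if some
// reachable state violates the invariant.
inline bool find_violation(State init) {
    std::set<State> seen{init};
    std::queue<State> work;
    work.push(init);
    while (!work.empty()) {
        State s = work.front(); work.pop();
        if (violates_mutex(s)) return true;
        State succs[2] = {{1 - s.first, s.second}, {s.first, 1 - s.second}};
        for (const State& n : succs)
            if (seen.insert(n).second)   // only enqueue unvisited states
                work.push(n);
    }
    return false;                        // invariant holds everywhere
}
```

Real checkers add counterexample traces, symbolic state representations, and liveness properties, but the "explore everything, check everywhere" skeleton is the same.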

Speaking of model checking, below are two of the more often cited model checking approaches and successful applications. There is little 'how to'; they are more 'here is the problem and how we solved it.' (Details left to the reader ;) )

http://www.cs.cmu.edu/~emc/papers/Conference%20Papers/95_verification_fbc_protocol.pdf
https://link.springer.com/chapter/10.1007/3-540-60973-3_102

There is a report from NASA some years ago that gave some excellent guidelines in writing requirements - I can't locate it at the moment but this website has some good guidelines, many of which were in the NASA report.
https://qracorp.com/write-clear-requirements-document/
(It still amazes me that even now, requirements docs that I've seen don't do half of these things....)
Hope this helps
J
 
On Tue, 14 Jan 2020 16:43:56 -0000 (UTC), Steve Wilson <no@spam.com>
wrote:

jlarkin@highlandsniptechnology.com wrote:

I do label a lot of nodes, but just the interesting ones, not all.

I need to force myself to check all the named nodes when I copy/paste
bits of a circuit. It duplicates all named nodes, which creates some
interesting shorts.

When you name the nodes, use names made from adjacent components, such as
R1C1, Q1B, U1N, etc.

When you copy and paste, the component reference designations will change.
You can easily find the erroneous node names since they won't match the
adjacent components.

I'd rather use something that describes the signal, not the parts.
Like ADC_IN or something. So the plots make sense and can be used as
illustrations in manuals, for example.

--

John Larkin Highland Technology, Inc
picosecond timing precision measurement

jlarkin att highlandtechnology dott com
http://www.highlandtechnology.com
 
On Tue, 14 Jan 2020 17:53:41 +0000, Tom Gardner
<spamjunk@blueyonder.co.uk> wrote:

On 14/01/20 17:35, Phil Hobbs wrote:
On 2020-01-14 11:39, jlarkin@highlandsniptechnology.com wrote:
On Tue, 14 Jan 2020 08:19:27 +0000, Tom Gardner
spamjunk@blueyonder.co.uk> wrote:

On 14/01/20 02:43, jlarkin@highlandsniptechnology.com wrote:
On Mon, 13 Jan 2020 19:26:26 +0000, Tom Gardner
spamjunk@blueyonder.co.uk> wrote:

On 13/01/20 17:41, John Larkin wrote:
On Mon, 13 Jan 2020 16:40:55 +0000, Tom Gardner
spamjunk@blueyonder.co.uk> wrote:

On 13/01/20 15:58, John Larkin wrote:
On Mon, 13 Jan 2020 09:04:20 +0000, Tom Gardner
spamjunk@blueyonder.co.uk> wrote:

On 13/01/20 01:07, John Larkin wrote:
On Sun, 12 Jan 2020 16:58:40 +0000, Martin Brown
'''newspam'''@nezumi.demon.co.uk> wrote:

On 11/01/2020 14:57, jlarkin@highlandsniptechnology.com wrote:
On 11 Jan 2020 05:57:59 -0800, Winfield Hill <winfieldhill@yahoo.com
wrote:

Rick C wrote...

Then your very example of the Boeing plane is wrong
because no one has said the cause of the accident
was improperly coded software.

Yes, it was an improper spec, with dangerous reliance
on poor hardware.

If code kills people, it was improperly coded.

Not necessarily. The code written may well have exactly implemented the
algorithm(s) that the clowns supervised by monkeys specified. It isn't
the job of programmers to double check the workings of the people who do
the detailed calculations of aerodynamic force vectors and torques.

It is not the programmers fault if the systems engineering, failure
analysis and aerodynamics calculations are incorrect in some way!

The management of two AOA sensors was insane. Fatal, actually. A
programmer should understand simple stuff like that.

It is unrealistic to expect programmers to understand sensor
reliability. That is the job of the people specifying the
system design and encoding that in the system specification
and the software specification.

Programmers would have zero ability to deviate from implementing
the software spec, full stop. If they did knowingly deviate, it
would be a career ending decision - at best.

Aerospace engineers have lost their pension for far less
serious deviations, even though they had zero consequences.


https://philip.greenspun.com/blog/2019/03/21/optional-angle-of-attack-sensors-on-the-boeing-737-max/


Given dual sensors, why would any sane person decide to alternate
using one per flight?

Agreed. Especially given the poor reliability of AoA sensors.

The people that write and signed off that spec
bear a lot of responsibility


A programmer would have to be awfully thick to not object to that.

The programmer's job is to implement the spec, not to write it

They may have objected, and may have been overruled.

Have you worked in large software organisations?

Not in, but with. Most "just do our jobs", which means that they don't
care to learn much about the process that they are implementing.

Seen that, and it even occurs within software world:
-analysts lob spec over wall to developers
-developers lob code over wall to testers
-developers lob tested code over wall to operations
-rinse and repeat, slowly

"Devops" tries to avoid that inefficiency.


And the hardware guys don't have much insight or visibility into the
software. Often, not much control either, in a large organization
where things are very firewalled.

I've turned down job offers where the HR droids couldn't
deal with someone that successfully straddles both
hardware and software worlds.

I interviewed with HP once. The guy looked at my resume and said "The
first thing you need to do is decide whether you're an engineer or a
programmer", so I walked out.

HP hired me because I was both. Various parts of HP were
very different from each other.


One big company that we work with has, I've heard, 12 levels of
engineering management. If an EE group wants a hole drilled in a
chassis, the request has to propagate up 5 or six management levels,
and then back down, to get to a mechanical engineer. Software is
similarly insulated. Any change fires off an enormous volume of
paperwork and customer "copy exact" notices, so most things just never
get done.

So you /do/ understand how programmers couldn't be
held responsible for implementing the spec.

I understand that some people are content to just do their jobs and
cash their checks.

I recently discovered that one group has been responsible, for almost
20 years, for a bunch of instrumentation of which over 80% doesn't
work, and which is not used. But they still get their paychecks, so
don't rock the boat.

I'd rather cut my own throat than do that for 20 years.

I very deliberately avoided the "20 years experience
being 1 year repeated 20 times" trap.

I use a specific example from my early career, and the
technique I used to avoid it, to sensitise youngsters
to the kind of decisions they may face in the future.

Herbert's “they’d chosen always the clear, safe course
that leads ever downward into stagnation.” was an
awful warning for me.

But in some companies, and worse industries, that can be
a very difficult trap to avoid.

I was talking to my MD, a really wonderful lady, about problem
solving. The thing is, her mistakes might kill people, but I can blow
things up just to see what might happen.

--

John Larkin Highland Technology, Inc
picosecond timing precision measurement

jlarkin att highlandtechnology dott com
http://www.highlandtechnology.com
 
On Tue, 14 Jan 2020 17:11:09 +1100, Clifford Heath
<no.spam@please.net> wrote:

On 14/1/20 1:46 pm, jlarkin@highlandsniptechnology.com wrote:
On 11 Jan 2020 07:27:05 -0800, Winfield Hill <winfieldhill@yahoo.com
wrote:

DecadentLinuxUserNumeroUno@decadence.org wrote...

Winfield Hill wrote:

Rick C wrote...

Then your very example of the Boeing plane is wrong
because no one has said the cause of the accident
was improperly coded software.

Yes, it was an improper spec, with dangerous reliance
on poor hardware.

Thanks Win. That guy is nuts. Boeing most certainly
did announce just a few months ago, that it was a
software fault.

That's the opposite of my position. I'm sure the coders
made the software do exactly what they were told to make
it do.

But nobody ever writes a requirement document at the level of detail
that the programmers will work to. And few requirement docs are
all-correct and all-inclusive.

Your comments lack nuance.

Absolutely. Sometimes common sense is safer than nuance.

(Not to start a political branch.)


--

John Larkin Highland Technology, Inc
picosecond timing precision measurement

jlarkin att highlandtechnology dott com
http://www.highlandtechnology.com
 
On 1/13/20 9:51 PM, jlarkin@highlandsniptechnology.com wrote:
On Sat, 11 Jan 2020 17:31:26 -0500, bitrex <user@example.net> wrote:

On 1/11/20 9:47 AM, jlarkin@highlandsniptechnology.com wrote:
On Fri, 10 Jan 2020 21:46:19 -0800 (PST), omnilobe@gmail.com wrote:

Hardware designs are more rigorously done than
software designs. A large company had problems with a 737
and a rocket to the space station...

https://www.bloomberg.com/news/articles/2019-06-28/boeing-s-737-max-software-outsourced-to-9-an-hour-engineers

I know programmers who do not care for rigor at home at work.
I did hardware design with rigor and featuring reviews by caring
electronics design engineers and marketing engineers.

Software gets sloppy with OOPs.
Object Oriented Programming.
Windows 10 on a rocket to ISS space station.
C++ mud.

The easier it is to change things, the less careful people are about
doing them. Software, which includes FPGA code, seldom works the first
time. Almost never. The average hunk of fresh code has a mistake
roughly every 10 lines. Or was that three?

FPGAs are usually better than procedural code, but are still mostly
done as hack-and-fix cycles, with software test benches. When we did
OTP (fuse based) FPGAs without test benching, we often got them right
first try. If compiles took longer, people would be more careful.

PCBs usually work the first time, because they are checked and
reviewed, and that is because mistakes are slow and expensive to fix,
and very visible to everyone. Bridges and buildings are almost always
right the first time. They are even more expensive and slow and
visible.

Besides, electronics and structures have established theory, but
software doesn't. Various people just sort of do it.

My Spice sims are often wrong initially, precisely because there are
basically no consequences to running the first try without much
checking. That is of course dangerous; we don't want to base a
hardware design on a sim that runs and makes pretty graphs but is
fundamentally wrong.


Don't know why C++ is getting the rap here. Modern C++ design is
rigorous, there are books about what to do and what not to do, and the
language has built-in facilities to ensure that e.g. memory is never
leaked, pointers always refer to an object that exists, and the user
can't ever add feet to meters if they're not supposed to.

Pointers are evil.

That's why in modern times you avoid working with "naked" ones at all
costs. In architectures with managed memory like x86 and ARM with an
operating system there's pretty much no good reason to use naked
pointers at all unless you are yourself writing a memory manager or
allocator. There are test suites to find all potential memory leaks!
There's no good excuse to have programs that leak resources anymore...
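The "can't ever add feet to meters" point above can be made concrete with strong types; this is a generic illustration, not code from any poster's project. Mixing units becomes a compile error, and crossing them requires a named conversion.

```cpp
// Strong unit types: a double wrapped in a distinct struct per unit.
struct Meters { double v; };
struct Feet   { double v; };

// Addition is only defined for matching units...
inline Meters operator+(Meters a, Meters b) { return Meters{a.v + b.v}; }
inline Feet   operator+(Feet a, Feet b)     { return Feet{a.v + b.v}; }

// ...and crossing units requires an explicit, named conversion.
inline Meters to_meters(Feet f) { return Meters{f.v * 0.3048}; }

// Meters{1} + Feet{1} simply does not compile.
```

Libraries exist that generalize this (with dimensional analysis over multiplication and division too), but even this hand-rolled version catches the Mars Climate Orbiter class of mistake at build time.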
 
On 2020-01-14 16:31, bitrex wrote:
On 1/13/20 9:51 PM, jlarkin@highlandsniptechnology.com wrote:
On Sat, 11 Jan 2020 17:31:26 -0500, bitrex <user@example.net
wrote:

On 1/11/20 9:47 AM, jlarkin@highlandsniptechnology.com wrote:
On Fri, 10 Jan 2020 21:46:19 -0800 (PST), omnilobe@gmail.com
wrote:

Hardware designs are more rigorously done than software
designs. A large company had problems with a 737 and a
rocket to the space station...

https://www.bloomberg.com/news/articles/2019-06-28/boeing-s-737-max-software-outsourced-to-9-an-hour-engineers

I know programmers who do not care for rigor at home at work.
I did hardware design with rigor and featuring reviews by
caring electronics design engineers and marketing engineers.

Software gets sloppy with OOPs. Object Oriented Programming.
Windows 10 on a rocket to ISS space station. C++ mud.

The easier it is to change things, the less careful people are
about doing them. Software, which includes FPGA code, seldom
works the first time. Almost never. The average hunk of fresh
code has a mistake roughly every 10 lines. Or was that three?

FPGAs are usually better than procedural code, but are still
mostly done as hack-and-fix cycles, with software test
benches. When we did OTP (fuse based) FPGAs without test
benching, we often got them right first try. If compiles took
longer, people would be more careful.

PCBs usually work the first time, because they are checked and
reviewed, and that is because mistakes are slow and expensive
to fix, and very visible to everyone. Bridges and buildings
are almost always right the first time. They are even more
expensive and slow and visible.

Besides, electronics and structures have established theory,
but software doesn't. Various people just sort of do it.

My Spice sims are often wrong initially, precisely because
there are basically no consequences to running the first try
without much checking. That is of course dangerous; we don't
want to base a hardware design on a sim that runs and makes
pretty graphs but is fundamentally wrong.


Don't know why C++ is getting the rap here. Modern C++ design is
rigorous, there are books about what to do and what not to do,
and the language has built-in facilities to ensure that e.g.
memory is never leaked, pointers always refer to an object that
exists, and the user can't ever add feet to meters if they're
not supposed to.

Pointers are evil.

That's why in modern times you avoid working with "naked" ones at
all costs. In architectures with managed memory like x86 and ARM with
an operating system there's pretty much no good reason to use naked
pointers at all unless you are yourself writing a memory manager or
allocator.

That's a bit strong. It's still reasonable to use void* deep in the
implementation of templates for performance-critical stuff. My
clusterized EM simulator uses bare pointers in structs, because they
vectorize dramatically better, but again that's optimized innermost-loop
stuff.

For other things, std::shared_ptr, std::unique_ptr, std::weak_ptr, and
the standard containers are the bomb.

There are test suites to find all potential memory leaks! There's no
good excuse to have programs that leak resources anymore...

RAII is really good medicine. I used to like mudflap a lot, but it got
rolled up into GCC's sanitizers, which are super useful too.
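The RAII point, as a minimal sketch (the Resource class is a made-up stand-in for any file, socket, or lock): ownership lives in an object whose destructor releases the resource, so no exit path, early return, exception, or normal fall-through, can leak it.

```cpp
#include <memory>

// A resource that counts how many instances are currently open,
// so a leak is directly observable.
struct Resource {
    static int live;
    Resource()  { ++live; }
    ~Resource() { --live; }
};
int Resource::live = 0;

// The caller never touches a naked owning pointer; unique_ptr
// destroys the Resource automatically at end of scope.
inline int use_resource_and_count() {
    {
        auto r = std::make_unique<Resource>();
        // ... work with *r; any early return or throw still cleans up ...
    }                          // r destroyed here
    return Resource::live;     // 0 if nothing leaked
}
```

The same pattern covers mutexes (std::lock_guard), files (std::fstream), and anything else with an acquire/release pair.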

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
John Larkin <jlarkin@highland_atwork_technology.com> wrote:

On Tue, 14 Jan 2020 16:43:56 -0000 (UTC), Steve Wilson <no@spam.com
wrote:

jlarkin@highlandsniptechnology.com wrote:

I do label a lot of nodes, but just the interesting ones, not all.

I need to force myself to check all the named nodes when I copy/paste
bits of a circuit. It duplicates all named nodes, which creates some
interesting shorts.

When you name the nodes, use names made from adjacent components, such
as R1C1, Q1B, U1N, etc.

When you copy and paste, the component reference designations will
change. You can easily find the erroneous node names since they won't
match the adjacent components.

I'd rather use something that describes the signal, not the parts.
Like ADC_IN or something. So the plots make sense and can be used as
illustrations in manuals, for example.

of course. Vin, Vout, Clk, Diff, VCC, etc. These are all good for
external connecting signals.

But if you want signals internal to a circuit block, you need some way to
identify them. If you leave them unnamed, they will get renumbered every
time you make a change to the circuit. So you cannot use unnamed nodes to
plot waveforms.

I find it saves time to go ahead and name every node. It doesn't take long,
and you don't have to stop to name a node you later find out you need
and then re-run the sim.
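The checking idea behind component-derived node names could even be automated. A hypothetical helper (not an LTspice feature) that flags node labels mentioning no reference designator present on the sheet; note the simple substring match would need refinement for refdes prefixes like R1 vs R12:

```cpp
#include <set>
#include <string>
#include <vector>

// True if the node name contains any of the sheet's refdes strings.
inline bool mentions_any(const std::string& node,
                         const std::set<std::string>& refs) {
    for (const auto& r : refs)
        if (node.find(r) != std::string::npos) return true;
    return false;
}

// Returns the node labels that match no component on the sheet --
// after a copy/paste plus renumber, these are the stale names.
inline std::vector<std::string> stale_labels(
        const std::vector<std::string>& nodes,
        const std::set<std::string>& refs) {
    std::vector<std::string> bad;
    for (const auto& n : nodes)
        if (!mentions_any(n, refs)) bad.push_back(n);
    return bad;
}
```

With components R1, C1, Q1 on the sheet, a pasted-in label like R9C9 stands out immediately.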
 
On 14/01/20 19:45, John Larkin wrote:
On Tue, 14 Jan 2020 17:53:41 +0000, Tom Gardner
spamjunk@blueyonder.co.uk> wrote:

On 14/01/20 17:35, Phil Hobbs wrote:
On 2020-01-14 11:39, jlarkin@highlandsniptechnology.com wrote:
On Tue, 14 Jan 2020 08:19:27 +0000, Tom Gardner
spamjunk@blueyonder.co.uk> wrote:

On 14/01/20 02:43, jlarkin@highlandsniptechnology.com wrote:
On Mon, 13 Jan 2020 19:26:26 +0000, Tom Gardner
spamjunk@blueyonder.co.uk> wrote:

On 13/01/20 17:41, John Larkin wrote:
On Mon, 13 Jan 2020 16:40:55 +0000, Tom Gardner
spamjunk@blueyonder.co.uk> wrote:

On 13/01/20 15:58, John Larkin wrote:
On Mon, 13 Jan 2020 09:04:20 +0000, Tom Gardner
spamjunk@blueyonder.co.uk> wrote:

On 13/01/20 01:07, John Larkin wrote:
On Sun, 12 Jan 2020 16:58:40 +0000, Martin Brown
'''newspam'''@nezumi.demon.co.uk> wrote:

On 11/01/2020 14:57, jlarkin@highlandsniptechnology.com wrote:
On 11 Jan 2020 05:57:59 -0800, Winfield Hill <winfieldhill@yahoo.com
wrote:

Rick C wrote...

Then your very example of the Boeing plane is wrong
because no one has said the cause of the accident
was improperly coded software.

Yes, it was an improper spec, with dangerous reliance
on poor hardware.

If code kills people, it was improperly coded.

Not necessarily. The code written may well have exactly implemented the
algorithm(s) that the clowns supervised by monkeys specified. It isn't
the job of programmers to double check the workings of the people who do
the detailed calculations of aerodynamic force vectors and torques.

It is not the programmers fault if the systems engineering, failure
analysis and aerodynamics calculations are incorrect in some way!

The management of two AOA sensors was insane. Fatal, actually. A
programmer should understand simple stuff like that.

It is unrealistic to expect programmers to understand sensor
reliability. That is the job of the people specifying the
system design and encoding that in the system specification
and the software specification.

Programmers would have zero ability to deviate from implementing
the software spec, full stop. If they did knowingly deviate, it
would be a career ending decision - at best.

Aerospace engineers have lost their pension for far less
serious deviations, even though they had zero consequences.


https://philip.greenspun.com/blog/2019/03/21/optional-angle-of-attack-sensors-on-the-boeing-737-max/


Given dual sensors, why would any sane person decide to alternate
using one per flight?

Agreed. Especially given the poor reliability of AoA sensors.

The people that write and signed off that spec
bear a lot of responsibility


A programmer would have to be awfully thick to not object to that.

The programmer's job is to implement the spec, not to write it

They may have objected, and may have been overruled.

Have you worked in large software organisations?

Not in, but with. Most "just do our jobs", which means that they don't
care to learn much about the process that they are implementing.

Seen that, and it even occurs within software world:
-analysts lob spec over wall to developers
-developers lob code over wall to testers
-developers lob tested code over wall to operations
-rinse and repeat, slowly

"Devops" tries to avoid that inefficiency.


And the hardware guys don't have much insight or visibility into the
software. Often, not much control either, in a large organization
where things are very firewalled.

I've turned down job offers where the HR droids couldn't
deal with someone that successfully straddles both
hardware and software worlds.

I interviewed with HP once. The guy looked at my resume and said "The
first thing you need to do is decide whether you're an engineer or a
programmer", so I walked out.

HP hired me because I was both. Various parts of HP were
very different from each other.


One big company that we work with has, I've heard, 12 levels of
engineering management. If an EE group wants a hole drilled in a
chassis, the request has to propagate up 5 or six management levels,
and then back down, to get to a mechanical engineer. Software is
similarly insulated. Any change fires off an enormous volume of
paperwork and customer "copy exact" notices, so most things just never
get done.

So you /do/ understand how programmers couldn't be
held responsible for implementing the spec.

I understand that some people are content to just do their jobs and
cash their checks.

I recently discovered that one group has been responsible, for almost
20 years, for a bunch of instrumentation of which over 80% doesn't
work, and which is not used. But they still get their paychecks, so
don't rock the boat.

I'd rather cut my own throat than do that for 20 years.

I very deliberately avoided the "20 years experience
being 1 year repeated 20 times" trap.

I use a specific example from my early career, and the
technique I used to avoid it, to sensitise youngsters
to the kind of decisions they may face in the future.

Herbert's “they’d chosen always the clear, safe course
that leads ever downward into stagnation.” was an
awful warning for me.

But in some companies, and worse industries, that can be
a very difficult trap to avoid.

I was talking to my MD, a really wonderful lady, about problem
solving. The thing is, her mistakes might kill people, but I can blow
things up just to see what might happen.

Some electronics/software people are in the position
that their products can kill people, even when they are
working as designed.

That /ought/ to colour their mentality and practices!
 
