Jan Panteltje
Guest
On a sunny day (Mon, 6 May 2019 11:40:04 -0400) it happened bitrex
<user@example.net> wrote in <pzYzE.3$h75.1@fx41.iad>:
On 5/6/19 9:42 AM, Jan Panteltje wrote:
On a sunny day (Mon, 6 May 2019 06:27:34 -0700 (PDT)) it happened
trader4@optonline.net wrote in
<0f4b448b-5459-4dbd-8b22-ef2f484f8752@googlegroups.com>:
On Monday, May 6, 2019 at 3:28:36 AM UTC-4, Jan Panteltje wrote:
On a sunny day (Sun, 5 May 2019 23:13:13 -0700) it happened Banders
<snap@mailchute.com> wrote in <qaoj9p$1jpn$1@gioia.aioe.org>:
On 05/05/2019 08:25 PM, omnilobe@gmail.com wrote:
Weight and Balance of the 737 Max.
My opinion is that such software should be written by pilots,
not by spaced-out people with no flying experience.
That's ridiculous, and what evidence do you have that the software developers
were spaced out?
That it did what it did!!!
When you design a safety system that relies on a single sensor to
determine whether the aircraft is in a safety-critical state that needs
action ("the AOA sensor indicates the aircraft is in a stall"), how do
you distinguish between the "normal" safety-critical situation, where the
AOA sensor indicates a stall, the plane as a whole is in an error state,
but the safety system itself is still functioning normally, and the case
where the AOA indicates a stall only because the AOA sensor is trashed,
so the plane is still functioning normally but the safety system is in
an error state?
I don't know that I have an answer to that question either, other than
"don't do that."
Indeed, an embedded software developer is also a hardware developer,
and should have refused, or at least sounded the alarm, when asked to code the system.
Indeed redundancy is required, but a lot more than that is needed.
It is possible that the sensor had nothing wrong with it,
as in the first crash that sensor had just been replaced?
Or did they just replace the module with software?
Clearly not all the data has been released!
Had a pilot been on the software team (assuming it was a team and not some intern on speed),
as I think and have stated before, something totally different would have
been created.
Now they are doing just those things: taking pilot input, building a new system,
and testing it.
And again, a simple $1 MEMS sensor can check whether the attitude of the plane makes any sense,
and software can then inhibit pushing the nose ever further down.
Also, I have read that the new system will now only correct the angle _once_.
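A minimal sketch, in C, of those two points: a plausibility check of the AOA-driven
nose-down command against pitch attitude from a cheap MEMS IMU, plus a latch so the
automatic correction fires only once. Thresholds and function names are invented for
the example; this is not the actual revised MCAS logic.

/* Sketch: (1) sanity-check the AOA-driven nose-down command against
 * pitch from a MEMS IMU, and (2) latch so the automatic correction is
 * applied only once per event. Illustration only; values are assumed. */

#include <stdbool.h>

#define PITCH_ALREADY_LOW_DEG  -5.0f  /* assumed: nose already well below horizon */

static bool correction_done = false;   /* one-shot latch */

/* Return true only if an automatic nose-down trim command should pass. */
bool allow_nose_down_trim(bool aoa_says_stall, float mems_pitch_deg)
{
    /* If the MEMS attitude shows the nose is already pointing down,
     * a "stall" reading from the AOA vane makes no sense: inhibit. */
    if (mems_pitch_deg < PITCH_ALREADY_LOW_DEG)
        return false;

    /* Only correct once; further commands require the latch to be
     * reset, e.g. by crew action or the AOA reading returning to normal. */
    if (correction_done)
        return false;

    if (aoa_says_stall) {
        correction_done = true;
        return true;
    }
    return false;
}

void reset_correction_latch(void) { correction_done = false; }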