On Sunday, January 12, 2020 at 5:55:08 PM UTC-5, Phil Hobbs wrote:
On 2020-01-12 17:38, jjhudak4@gmail.com wrote:
On Sunday, January 12, 2020 at 3:32:06 PM UTC-5,
DecadentLinux...@decadence..org wrote:
Phil Hobbs <pcdhSpamMeSenseless@electrooptical.net> wrote in
news:fb4888b5-e96f-1145-85e8-bc382c9bdcdf@electrooptical.net:
Back in my one foray into big-system design, we design engineers
were always getting in the systems guys' faces about various
pieces of stupidity in the specs. It was all pretty
good-natured, and we wound up with the pain and suffering
distributed about equally.
That is how men get work done... even 'the programmers'. Very
well said, there.
That is like the old dig on 'the hourly help'.
Some programmers are very smart. Others not so much.
I guess choosing to go into it is not such a smart move so they
take a hit from the start.
If that is how men get work done, then they are not using software
and system engineering techniques developed in the last 15-20 years,
and their results are *still* subject to the same types of errors. I
do research and teach in this area. A number of studies, one in
particular, find that up to 70% of software faults are introduced on
the LHS of the 'V' development model - the requirements and design
phases that form the left-hand side of the 'V', as opposed to the
integration and test phases on the right. (Other software design
lifecycle models have similar fault percentages.) A major issue is
that most of these errors are observed at integration time
(software+software, software+hardware). The cost of defect removal
along the RHS of the 'V' is anywhere from 50-200X the removal cost
along the LHS. (No wonder systems cost so much.)
Nice rant. Could you tell us more about the 'V' model?
The talk about errors in this thread is very high level, and most
people are thinking about errors at the unit
test level. There are numerous techniques developed to identify and
fix fault types throughout the entire development lifecycle but
regrettably a lot of them are not employed.
What sorts of techniques do you use to find problems in the specifications?
Actually a large percentage of the errors are discovered and fixed at
that level. Errors such as units mismatches, variable type
mismatches, and a slew of concurrency issues aren't discovered till
integration time. Usually, at that point, there is a 'rush' to get
the system fielded. The horror stories and lessons learned are well
documented.
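(To make the units-mismatch case concrete: here is a toy Python sketch, not from any real project, showing how a units-aware value type turns a silent integration-time fault into an immediate error. The Mars Climate Orbiter was lost to exactly this class of fault.)

class Quantity:
    """A value tagged with its unit; mixed-unit arithmetic is rejected."""
    def __init__(self, value, unit):
        self.value, self.unit = value, unit

    def __add__(self, other):
        if self.unit != other.unit:
            raise TypeError(f"units mismatch: {self.unit} + {other.unit}")
        return Quantity(self.value + other.value, self.unit)

    def __repr__(self):
        return f"{self.value} {self.unit}"

# One team works in newtons, another in pounds-force:
total = Quantity(450.0, "N") + Quantity(101.0, "lbf")   # raises TypeError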
Yup. Leaving too much stuff for the system integration step is a very
very well-known way to fail.
IDK what exactly happened (yet) with the Boeing MAX development. I
do have info from some sources that cannot be disclosed at this
point. From what I've read, there were major mistakes made from
inception through implementation and integration. My personal view
is that one should almost never (never?) place the task on software
to correct an inherently unstable airframe design - it is putting a
bandaid on the source of the problem.
It's commonly done, though, isn't it? I remember reading Ben Rich's
book on the Skunk Works, where he says that the F-117's very squirrelly
handling characteristics were fixed up in software to make it a
beautiful plane to fly. That was about 1980.
Another major issue is that the hazard analysis and fault tolerance
approach was not done at the system level (the redundancy approach was
pitiful, as was the *logic* used in implementing it, in concept as
well as in implementation).
I do think that the better software engineers do have a more
holistic view of the system (hardware knowledge + system operational
knowledge) which will allow them to ask questions when things don't
'seem right.' OTOH, the software engineers should not go making
assumptions about things and coding to those assumptions. (It
happens more than you think) It is the job of the software architect
to ensure that any development assumptions are captured and specified
in the software architecture.
In real life, though, it's super important to have two-way
communications during development, no? My large-system experience was
all hardware (the first civilian satellite DBS system, 1981-83), so
things were quite a bit simpler than in a large software-intensive
system. I'd expect the need for bottom-up communication to be greater
now rather than less.
In studies I have looked at, the percentage of requirements errors
is somewhere between 30-40% of the overall number of faults during
the design lifecycle, and the 'industry standard' approach
to dealing with this problem is woefully inadequate despite techniques
to detect and remove the errors. A LOT of time is spent doing
software requirements tracing as opposed to doing verification of
requirements. People argue that one cannot verify the requirements
until the system has been built - which is complete BS but industry
is very slow to change. We have shown that using software
architecture modeling addresses a large percentage of system level
problems early in the design life cycle. We are trying to convince
industry. Until change happens, the parade of failures like the
MAX will continue.
I'd love to hear more about that.
Cheers
Phil Hobbs
--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510
http://electrooptical.net
http://hobbs-eo.com
Sorry - I get a bit carried away on this topic...
For requirements engineering verification, one can google formal and semi-formal requirements specification languages; RDAL and ReqSpec are ones I am familiar with.
Techniques to verify requirements include model checking (google it). Model checking is based on formal logics like LTL (Linear Temporal Logic) and CTL (Computation Tree Logic): one constructs state models from the requirements and uses model-checking engines to analyze them. Model checking was actually used to verify a bus protocol in the early 90s and found *lots* of problems with the spec... that caused industry to 'wake up'.
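To give a flavor of what a model-checking engine does, here is a toy Python sketch of my own (the handshake protocol and the invariant are invented for illustration; real engines like SPIN or NuSMV handle temporal-logic properties and vastly larger state spaces). It enumerates every reachable state of a small state machine by breadth-first search and checks a safety invariant in each one:

from collections import deque

# State: (sender, receiver, wire). The protocol is a made-up
# two-party bus handshake, just to have something to explore.
INITIAL = ("idle", "idle", None)

def successors(state):
    sender, receiver, wire = state
    nxt = []
    if sender == "idle":                       # sender may issue a request
        nxt.append(("waiting", receiver, "req"))
    if receiver == "idle" and wire == "req":   # receiver accepts, acks
        nxt.append((sender, "busy", "ack"))
    if sender == "waiting" and wire == "ack":  # sender consumes the ack
        nxt.append(("idle", receiver, None))
    if receiver == "busy":                     # receiver finishes its work
        nxt.append((sender, "idle", wire))
    return nxt

def invariant(state):
    sender, receiver, wire = state
    # Safety property: an ack never sits on the wire while the sender is idle.
    return not (wire == "ack" and sender == "idle")

def check():
    seen, frontier = {INITIAL}, deque([INITIAL])
    while frontier:
        s = frontier.popleft()
        if not invariant(s):
            return s                           # counterexample state
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return None

bad = check()
if bad:
    print("invariant violated in state:", bad)
else:
    print("invariant holds over all reachable states")

The exhaustive exploration is the whole trick: unlike testing, no reachable state is left unexamined.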
There are others that work on code, but these are very much research-y efforts.
Simulink has a model checker in its toolboxes (based on Promela); it is quite good.
We advocate using architecture description languages (ADLs), that is, formal modeling notations for modeling different views of the architecture and capturing properties of the system from which analysis can be done (e.g. signal latency, variable format and property consistency, processor utilization, bandwidth capacity, hazard analysis, etc.). The one that I had a hand in designing is the Architecture Analysis and Design Language (AADL); it is an SAE Aerospace standard. If things turn out well, it will be used on the next generation of helicopters for the Army. We have been piloting its use on real systems for the last 2-3 years, and on pilot studies for the last 10 years.
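As a trivial taste of the kind of analysis such a model enables, here is a Python sketch of a processor-utilization check over task-to-processor bindings. The task set and numbers are invented; real AADL toolchains such as OSATE derive this directly from the model, along with latency, bandwidth, and consistency checks:

tasks = [
    # (name, processor, period_ms, wcet_ms) -- illustrative values only
    ("flight_control", "cpu0", 10, 3.0),
    ("nav_filter",     "cpu0", 20, 4.0),
    ("telemetry",      "cpu1", 50, 10.0),
]

def utilization_by_processor(tasks):
    util = {}
    for _name, cpu, period, wcet in tasks:
        util[cpu] = util.get(cpu, 0.0) + wcet / period
    return util

RMA_BOUND = 0.69  # classic rate-monotonic schedulability bound (~ln 2)

for cpu, u in sorted(utilization_by_processor(tasks).items()):
    status = "OK" if u <= RMA_BOUND else "OVERLOADED"
    print(f"{cpu}: utilization {u:.0%} -> {status}")

The point is that the check runs against the architecture model long before any integration lab time is booked.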
For systems hazard analysis, google STPA (System-Theoretic Process Analysis), spearheaded by Nancy Leveson of MIT (she has consulted for Boeing).
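One early STPA step is mechanical enough to sketch: cross each control action with the standard guide phrases to generate candidate unsafe control actions for the analyst to assess in context. The control actions below are invented, not from any real analysis:

control_actions = ["engage trim motor", "disengage autopilot"]

guide_phrases = [
    "not provided when needed",
    "provided when not needed",
    "provided too early or too late",
    "stopped too soon or applied too long",
]

# Each (action, phrase) pair is a candidate unsafe control action (UCA)
# that a human then judges against the system's hazards.
for action in control_actions:
    for phrase in guide_phrases:
        print(f"UCA candidate: '{action}' {phrase}")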
Yes, I've seen software applied to fix hw problems but assessing the risk is complicated. The results can be catastrophic.
Ok, off my rant....