MH370 crash site identified with amateur radio technology...

On 9/5/2023 10:03 AM, Don Y wrote:
Good problem decomposition goes a long way towards that goal.
If you try to do "too much" you quickly overwhelm the developer's
ability to manage complexity (7 items in STM?).  And, as you can't
*see* the entire implementation, there's nothing to REMIND you
of some salient issue that might impact your local efforts.

[Hence the value of eschewing globals and the languages that
tolerate/encourage them!  This dramatically cuts down the
number of ways X can influence Y.]

Of course, if you've never had any formal training ("you're
just a coder"), then you don't even realize these hazards exist!
You just pick at your code until it SEEMS to work and then
walk away.

Hence the need for the "managed environments" and
languages du jour that try to compensate for the
lack of formal training in schools and businesses.

[I worked with a Fortune *100* company on a 30 man project
where The Boss assigned the software for the product to a
*technician* whose sole qualification was that he
had a CoCo at home! Really? You're putting your good
name in the hands of a tinkerer??]

Sadly, most businessmen don't understand software or the
process and, rather than admit their ignorance, blunder
onward wondering (later) why everything turns to shite.
Anyone who's had to explain why a "little change" in
the product specification requires a major change to
the schedule understands the "ignorance at the top".

[I had a manager who wrote BASIC programs to keep track of the
DOG SHOWS that he'd entered (what is that? just a bunch
of PRINT statements??) and considered himself qualified to
make decisions regarding the software in the products for
which he was responsible. *Anyone* can write code.]

And, engineers turned managers tend to be the worst as they
THINK they understand the current state of the art (because
they used to practice it) without realizing that it's a moving
target and if you're using last year's technology, you are 2 or
3 (!) years out of date!

Would you promote a *technician* to run an electronics DESIGN
department and expect him to be current wrt the latest
generation of components, design and manufacturing practices?
If he *thought* he was, how quickly would you disabuse him of
that belief?
 
On Monday, September 4, 2023 at 6:00:19 PM UTC-4, Klaus Vestergaard Kragelund wrote:
On 03-09-2023 18:19, Fred Bloggs wrote:
On Sunday, September 3, 2023 at 10:42:14 AM UTC-4, John Larkin wrote:
On Sun, 3 Sep 2023 05:38:52 -0700 (PDT), Fred Bloggs
bloggs.fred...@gmail.com> wrote:
On Sunday, September 3, 2023 at 4:15:30 AM UTC-4, John Larkin wrote:
On Sat, 2 Sep 2023 11:20:49 -0700 (PDT), Fred Bloggs
bloggs.fred...@gmail.com> wrote:
On Friday, September 1, 2023 at 4:53:24 PM UTC-4, John Larkin wrote:
On Fri, 1 Sep 2023 12:56:31 -0700 (PDT), Klaus Kragelund
klaus.k...@gmail.com> wrote:

Hi

I have a triac control circuit in which I supply gate current all the time to avoid zero crossing noise.

https://electronicsdesign.dk/tmp/TriacSolution.PNG

Apparently, sometimes the circuit spontaneously turns on the triac.
It's probably due to a transient, high dV/dt, turning it on by exceeding the "rate of rise of off-state voltage" limit.

The triac used is BT137S-600:

https://www.mouser.dk/datasheet/2/848/bt137s-600g-1520710.pdf

I am using a snubber to divert energy, and also have a pulldown of 1kohm to shunt energy transients that capacitively couple into the gate.

The unit is at the client, so I have not measured on it yet; I am trying to guess what I should do to remove the problem.

I could:

Do a more hard snubber
Reduce the shunt resistor
Get a better triac
Add an inductor in series to limit the transient

One thing I thought of, since I turn it on all the time and it is not very critical that the timing is perfect in terms of turning it on at the zero crossing, was to add a big capacitor on the gate in parallel with shunt resistor R543. That will act as a low impedance for high-speed transients.

Good idea, or better ideas?

Cheers

Klaus
It's a sensitive-gate triac. R542 and 543 look big to me. They could
be smaller and bypassed.

If there are motors in the vicinity, you want to at least use twisted leads in all feeds of the gate circuit.
I doubt that would make any difference.

Twisted pairs make a HUGE difference.
Sometimes. Probably not here.

I wonder how far from the triac the opto is.

Snubbers designed around 'nominal' values are almost always wrong. He needs to determine the worst-case phase lag of load current relative to voltage. That fixes the snubber specification. Although snubbers don't have much to do with 'spurious' turn-on.
Yeah, you mean the leakage from the snubber when the triac is turned off?

The triac has its greatest susceptibility at turn-off. The spec says it could be as low as 8 V/us in an 8 A circuit at Tj = 95 °C. If the rate of rise of voltage across the part exceeds that, it may re-fire. The snubber is used to reduce that rate of rise.

A lot of SS switches have fairly large OFF state leakage, on the order of 100mA. But most of that is because of an offline ancillary power supply and not the snubber.
 
On Friday, September 1, 2023 at 3:56:37 PM UTC-4, Klaus Kragelund wrote:
<snip>
What does that U1 on the schematic represent? A SPST with 5 ms bounce? V3 is line 225VAC 50Hz, I see that. That could be a problem. Is the spurious triggering occurring when you throw that switch?

 
On Tue, 5 Sep 2023 18:02:05 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 05/09/2023 17:45, Joe Gwinn wrote:
On Tue, 05 Sep 2023 08:57:22 -0700, John Larkin
jlarkin@highlandSNIPMEtechnology.com> wrote:

On Tue, 5 Sep 2023 13:13:51 +0100, Martin Brown
'''newspam'''@nonad.co.uk> wrote:

On 04/09/2023 14:30, Don Y wrote:
Anyone else use bug reporting frequency as a gross indicator
of system stability?

Just about everyone who runs a beta test program.
MTBF is another metric that can be used for something that is intended
to run 24/7 and recover gracefully from anything that may happen to it.

It is inevitable that a new release will have some bugs and minor
differences from its predecessor that real life users will find PDQ.

That's the story of software: bugs are inevitable, so why bother to be
careful coding or testing? You can always wait for bug reports from
users and post regular fixes of the worst ones.


The trick is to gain enough information from each in service failure to
identify and fix the root cause bug in a single iteration and without
breaking something else. Modern optimisers make that more difficult now
than it used to be back when I was involved in commercial development.

There have been various drives to write reliable code, but none were
popular. Quite the contrary, the software world loves abstraction and
ever new, bizarre languages... namely playing games instead of coding
boring, reliable applications in some klunky, reliable language.

Electronic design, and FPGA coding, are intended to be bug-free first
pass and often are, when done right.

FPGAs are halfway software, so the coders tend to be less careful than
hardware designers. FPGA bug fixes are easy, so why bother to read
your own code?

That's ironic, when you think about it. The hardest bits, the physical
electronics, have the fewest bugs.

There is a complication. Modern software is tens of millions of lines
of code, far exceeding the inspection capabilities of humans. Hardware
is far simpler in terms of lines of FPGA code. But it\'s creeping up.

On a project some decades ago, the customer wanted us to verify every
path through the code, which was about 100,000 lines (large at the
time) of C or assembler (don't recall, doesn't actually matter).

In round numbers, one in five lines of code is an IF statement, so in
100,000 lines of code there will be 20,000 IF statements. So, there
are up to 2^20000 unique paths through the code. Which chokes my HP

Although that is true it is also true that a small number of cunningly
constructed test datasets can explore a very high proportion of the most
frequently traversed paths in any given codebase. One snag is that
testing is invariably cut short by management when development overruns.

The bits that fail to get explored tend to be weird error recovery
routines. I recall one latent for ages on the VAX: when it
ran out of IO handles (because someone was opening them inside a loop),
the first thing the recovery routine tried to do was open an IO channel!

calculator, so we must resort to logarithms, yielding 10^6021, which
is a *very* large number. The age of the Universe is only 14 billion
years, call it 10^10 years, so one would never be able to test even a
tiny fraction of the possible paths.

McCabe\'s complexity metric provides a way to test paths in components
and subsystems reasonably thoroughly and catch most of the common
programmer errors. Static dataflow analysis is also a lot better now
than in the past.

Then you only need at most 40000 test vectors to take each branch of
every binary if statement (60000 if it is Fortran with 3 way branches
all used). That is a rather more tractable number (although still large).

Any routine with too high a CCI count is practically certain to contain
latent bugs - which makes it worth looking at more carefully.

I must say that I fail to see how this can overcome 10^6021 paths,
even if it is wondrously effective, reducing the space to be tested by
a trillion to one (10^-12) - only 10^6009 paths to explore.

Joe Gwinn
 
On Tue, 05 Sep 2023 10:19:15 -0700, John Larkin
<jlarkin@highlandSNIPMEtechnology.com> wrote:

On Tue, 05 Sep 2023 12:45:01 -0400, Joe Gwinn <joegwinn@comcast.net
wrote:

On Tue, 05 Sep 2023 08:57:22 -0700, John Larkin
jlarkin@highlandSNIPMEtechnology.com> wrote:

<snip>

There is a complication. Modern software is tens of millions of lines
of code, far exceeding the inspection capabilities of humans.

After you type a line of code, read it. When we did that, entire
applications often worked first try.

Hardware
is far simpler in terms of lines of FPGA code. But it's creeping up.

FPGAs are at least (usually) organized state machines. Mistakes are
typically hard failures, not low-rate bugs discovered in the field.
Avoiding race and metastability hazards is common practise.


On a project some decades ago, the customer wanted us to verify every
path through the code, which was about 100,000 lines (large at the
time) of C or assembler (don't recall, doesn't actually matter).

Software provability was a brief fad once. It wasn\'t popular or, as
code is now done, possible.



In round numbers, one in five lines of code is an IF statement, so in
100,000 lines of code there will be 20,000 IF statements. So, there
are up to 2^20000 unique paths through the code. Which chokes my HP
calculator, so we must resort to logarithms, yielding 10^6021, which
is a *very* large number. The age of the Universe is only 14 billion
years, call it 10^10 years, so one would never be able to test even a
tiny fraction of the possible paths.

An FPGA is usually coded as a state machine, where the designer
understands that the machine has a finite number of states and handles
every one. A computer program has an impossibly large number of
states, unknown and certainly not managed. Code is like hairball async
logic design.

In recent FPGAs you have done, how many states and events (their
Cartesian product being the entire state table) are there?

By the way, back in the day when I was specifying state machines
(often for implementation in software), I had a rule that all cells
would have an entry, even the combinations of state and event that
"couldn't happen". This was essential for achieving robustness in
practice.


The customer withdrew the requirement.

It was naive of him to want correct code.

No, only a bit unrealistic.

But it was naive of him to think that total correctness can be tested
into anything.


The state of the art in verifying safety-critical code (as in for
safety of flight) is DO-178, which is an immensely heavy process. The
original objective was a probability of error not exceeding 10^-6,
this has been tightened to 10^-7 or 10^-8 because of the "headline
risk".

<https://en.wikipedia.org/wiki/DO-178C>


Correctness can be mathematically proven only for extremely simple
mechanisms, using a sharply restricted set of allowed operations. See
The Halting Problem.

<https://en.wikipedia.org/wiki/Halting_problem>


Joe Gwinn
 
On Tue, 05 Sep 2023 18:33:47 -0400, Joe Gwinn <joegwinn@comcast.net>
wrote:

<snip>

In recent FPGAs you have done, how many states and events (their
Cartesian product being the entire state table) are there?

A useful state machine might have 4 or maybe 16 states. I'm not sure
what you mean by 'events'. Sometimes we have a state word and a
counter, which technically gives us more states but it's convenient to
think of them separately. As in "repeat state 4 until the counter hits
zero."

A state machine can have many more inputs and outputs than it has
states. It is critical that no inputs can be changing when the clock
ticks.
 
On Tue, 05 Sep 2023 17:00:13 -0700, John Larkin
<jlarkin@highlandSNIPMEtechnology.com> wrote:

<snip>

In recent FPGAs you have done, how many states and events (their
Cartesian product being the entire state table) are there?


A useful state machine might have 4 or maybe 16 states. I'm not sure
what you mean by 'events'. Sometimes we have a state word and a
counter, which technically gives us more states but it's convenient to
think of them separately. As in "repeat state 4 until the counter hits
zero."

We\'ll call it 16 states for the present purposes.

An event is anything that can cause the state to change, including
expiration of a timer. This is basically a design choice.


A state machine can have many more inputs and outputs than it has
states.

Yes, that's typical.


It is critical that no inputs can be changing when the clock
ticks.

That\'s also essential in hardware state machines.

In software state machines, events are most often the arrival of
messages, and the mechanism that provides these messages ensures that
they are presented in serial order (even if the underlying hardware
does not ensure ordering).

Joe Gwinn
 
On 9/5/2023 3:14 PM, Joe Gwinn wrote:
Then you only need at most 40000 test vectors to take each branch of
every binary if statement (60000 if it is Fortran with 3 way branches
all used). That is a rather more tractable number (although still large).

Any routine with too high a CCI count is practically certain to contain
latent bugs - which makes it worth looking at more carefully.

I must say that I fail to see how this can overcome 10^6021 paths,
even if it is wondrously effective, reducing the space to be tested by
a trillion to one (10^-12) - only 10^6009 paths to explore.

You don't have to exercise every path with a unique set of stimuli.
The number of N-way branches only sets an upper limit on the
number of paths through a piece of code. The actual number of
paths can be much less:

if (x == 1)
doXisone;

if (x == 2)
doXistwo;

if (x == 3)
doXisthree;

has three 2-way branches but only three distinct paths
through the code (instead of 2^3 = 8)

The point of complexity metrics is to alert you that maybe you
have factored the problem poorly.

Or, failed to recognize some underlying relationship(s)
that could simplify the process.

What's more amusing is how few organizations USE any of these
measures. And, how poorly they institute design controls on
software, in general!

[I had a colleague ask me to help one of his clients
because their codebase "suddenly" stopped working.
It was "suggested" that an employee who had been "made
redundant" may have sabotaged the codebase.

"Simple: retrieve the snapshot that would have been current
on the day before he was aware that he would be terminated.
Then, step forward until the day he was actually *gone*
and look for changes..."

But, no controls on who had access to the repository (nor
guarantees that the identity of the ACTOR was accurately stored
with each ACTION). The employee had gone back and cooked
the contents of the repository so a more detailed forensic
examination was required; you couldn't just check out a
particular release and be assured it represented the
state of the codebase on the assumed day!

Can YOUR developers dick with the repository in unexpected ways?]
 
On 9/5/2023 3:33 PM, Joe Gwinn wrote:
In recent FPGAs you have done, how many states and events (their
Cartesian product being the entire state table) are there?

There is undoubtedly far more state *in* the CPU (neglecting
the application!) than in most FPGA designs!

By the way, back in the day when I was specifying state machines
(often for implementation in software), I had a rule that all cells
would have an entry, even the combinations of state and event that
"couldn't happen". This was essential for achieving robustness in
practice.

It also helps if the machine can power up into a "random"
state (no "reset" to the state variables)... just let it run
for a few clocks until it finds its way to a known state.

The state of the art in verifying safety-critical code (as in for
safety of flight) is DO-178, which is an immensely heavy process. The
original objective was a probability of error not exceeding 10^-6,
this has been tightened to 10^-7 or 10^-8 because of the "headline
risk".

<https://en.wikipedia.org/wiki/DO-178C>

But, this is still just a game of chance. And, with the number of
external things that can potentially impact the operation of the
state machine, even a provably correct implementation can fail
because the hardware isn't "ideal".

Correctness can be mathematically proven only for extremely simple
mechanisms, using a sharply restricted set of allowed operations. See

Exactly. Just like provably correct programs are extremely
simple.

And, if you try to *build* a (more complex) system atop that
proven (sub)system, you discover you\'re right back where you
started. (Else, if knowing the underlying system was
provably correct, you would rationalize that ANY program
is correct because the opcodes on which it relies are correctly
implemented!)

The Halting Problem.

<https://en.wikipedia.org/wiki/Halting_problem>
 
On 9/5/2023 5:11 PM, Joe Gwinn wrote:
In software state machines, events are most often the arrival of
messages, and the mechanism that provides these messages ensures that
they are presented in serial order (even if the underlying hardware
does not ensure ordering).

But software state machines can get EASILY creative in how they
process events. E.g., "go back from whence you came" (which isn't
present in most hardware machines)

A typical user interface can have hundreds of states, not counting
the information/controls they are "accumulating".
 
On 9/5/2023 6:04 PM, Don Y wrote:
<snip>

You don't have to exercise every path with a unique set of stimuli.
The number of N-way branches only sets an upper limit on the
number of paths through a piece of code.  The actual number of
paths can be much less:

if (x == 1)
   doXisone;

if (x == 2)
   doXistwo;

if (x == 3)
   doXisthree;

actually, four
 
On a sunny day (Tue, 5 Sep 2023 09:15:50 -0700 (PDT)) it happened John Smiht
<utube.jocjo@xoxy.net> wrote in
<9beed550-38ba-425a-9788-548c77eddf7an@googlegroups.com>:

On Tuesday, September 5, 2023 at 8:50:22 AM UTC-5, John Walliker wrote:
On Tuesday, 5 September 2023 at 06:17:17 UTC+1, gggg gggg wrote:
On Friday, September 1, 2023 at 7:39:45 PM UTC-7, John Smiht wrote:
On Friday, September 1, 2023 at 4:54:30 PM UTC-5, Flyguy wrote:
On Friday, September 1, 2023 at 2:48:13 PM UTC-7, Flyguy wrote:
A retired aerospace engineer, Richard Godfrey, analyzed radio wave propagation data from the Weak Signal Propagation Reporter network developed by hams to pinpoint MH370's crash site to a 300 sq mi area. This sounds like a lot, but previous estimates were hundreds of thousands of sq mi.

https://www.airlineratings.com/news/mh370-new-research-paper-confirms-wsprnet-tracking-technology/
Here is the full report:
https://www.dropbox.com/s/pkolz2mxr1rhepb/MH370%20GDTAAA%20WSPRnet%20Analysis%20Technical%20Report%2015MAR2022.pdf?dl=0
Godfrey was approached by Netflix for a documentary about MH370, but declined as they only wanted conspiratorial viewpoints. In fact, the Netflix "documentary" peddles the idea of a Russian conspiracy where MH370 was hijacked by three Russians and flown to Kazakhstan. They do this by entering the electronics bay, taking control of the aircraft and locking out the pilot's controls. Obviously, Godfrey's flight path totally refutes this theory.
Here is the flight path report:
https://www.dropbox.com/s/k4fn8eec4z9np0z/GDTAAA%20WSPRnet%20MH370%20Analysis%20Flight%20Path%20Report.pdf
Captivating! I had no idea that WSPR analyses could produce such results.
Thanks for the link to the paper.
Cheers,
John
This is from the Comments section of the following article:

Dave Pergamon, Perth, Australia, 2 days ago

I'm a radio ham and I know full well that WSPR is not technically capable of tracking aircraft movements. For starters, WSPR frequencies and power levels are far too low to detect aircraft and anyhow, WSPR radio waves travel in the ionosphere, 80 to 600 km above the Earth's surface, whereas the maximum altitude commercial aircraft fly at is around 30,000 feet or about ten kilometres. No professional radio physicist or atmospheric scientist of any repute would put their names to this kind of pseudo-scientific BS.
The paper does state that the interaction with aircraft happens close to the locations where the sky wave refracts down to the ground and reflects up again. This means that the claim quoted above must have been made by somebody who had not actually read what they were claiming to be BS. Whether the results are accurate enough to give a useful search area is another matter.
John

https://www.dailymail.co.uk/news/article-12468439/MH370-flight-bombshell-claim-resting-place-revealed.html


Yes, and those WSPR stations are all over the globe. In addition, I think I remember watching something on TV or YT about how the Russians or somebody used the disturbance of radio waves as a passive "radar" system. Actually, ISTR that it allowed the detection of a US stealth plane which was shot down.
Another John

Passive radar (using broadcast stations) has been used.
https://www.rtl-sdr.com/tag/passive-radar/
scroll down to 'passive radar for RTL-SDRs'
all open source, using RTL-SDR USB sticks
https://github.com/DanielKami
Plenty of RTL-SDR sticks here, have not tried it yet..

Them F35s come over so low and are so noisy here no need for radar :)
https://panteltje.nl/pub/first_F35_pilot_IXIMG_0225.JPG

This one is more quiet and green and has lower radar signature:
https://panteltje.nl/pub/a_better_f35.JPG
 
On 05/09/2023 23:14, Joe Gwinn wrote:
On Tue, 5 Sep 2023 18:02:05 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 05/09/2023 17:45, Joe Gwinn wrote:

calculator, so we must resort to logarithms, yielding 10^6021, which
is a *very* large number. The age of the Universe is only 14 billion
years, call it 10^10 years, so one would never be able to test even a
tiny fraction of the possible paths.
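A quick check of the arithmetic quoted above (assuming the stated 20,000 IF statements, hence up to 2^20000 paths):

```python
import math

# 20,000 binary IF statements allow up to 2**20000 distinct execution paths.
# The number of decimal digits of 2**20000 is 20000 * log10(2):
exponent = 20000 * math.log10(2)   # about 6020.6, i.e. roughly 10^6021
```

This matches the 10^6021 figure: no calculator overflows, just logarithms.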

McCabe\'s complexity metric provides a way to test paths in components
and subsystems reasonably thoroughly and catch most of the common
programmer errors. Static dataflow analysis is also a lot better now
than in the past.

Then you only need at most 40000 test vectors to take each branch of
every binary if statement (60000 if it is Fortran with 3 way branches
all used). That is a rather more tractable number (although still large).

Any routine with too high a CCI count is practically certain to contain
latent bugs - which makes it worth looking at more carefully.
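For readers unfamiliar with the metric: McCabe's cyclomatic complexity of a control-flow graph is M = E - N + 2P (edges, nodes, connected components), which for a single routine works out to the number of binary decisions plus one. A minimal sketch (the tiny control-flow graph below is made up for illustration):

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe's metric M = E - N + 2P for a control-flow graph."""
    return edges - nodes + 2 * components

# A routine with two independent IF statements (each adds one decision),
# so M should come out as 3; straight-line code scores 1.
cfg_edges = [
    ("entry", "if1"), ("if1", "then1"), ("if1", "join1"),
    ("then1", "join1"), ("join1", "if2"), ("if2", "then2"),
    ("if2", "exit"), ("then2", "exit"),
]
cfg_nodes = {n for edge in cfg_edges for n in edge}
m = cyclomatic_complexity(len(cfg_edges), len(cfg_nodes))
```

M also equals the number of linearly independent paths, which is why it bounds the number of test vectors needed for branch coverage.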

I must say that I fail to see how this can overcome 10^6021 paths,
even if it is wondrously effective, reducing the space to be tested by
a trillion to one (10^-12) - only 10^6009 paths to explore.

It is a divide and conquer strategy. It guarantees to execute every path
at least once and common core paths many times with different data. The
number of test vectors required scales as log2 of the number of paths.

That is an enormous reduction in the state space and makes coverage
testing possible. There are even some automatic tools that can help
create the test vectors now. Not a recommendation but just to make you
aware of some of the options for this sort of thing.

www.accelq.com/blog/test-coverage-techniques/

Here is a simple example for N = 5 (you wouldn't code it this way)

if (i<16)
{
if (i<8)
{
if (i<4)
{
if (i<2)
{
if (i<1) //zero
else // one
}
else
{
if (i<3) // two
else // three
}
else
{
}
}
else
{
}
}
else
{
}
}
else
{
}

So that at each level i=0 ..N in the trivial example there are 2^N if
statements to select between numbers in the range 0 to 31 inclusive.
There is a deliberate error left in.

Counted your way there are 2^(N+1)-1 if statements and so 2^(2^(N+1))
distinct paths through the software (plus a few more with invalid data).

However integers -1 through 2^N will be sufficient to explore every path
at least once and test for high/low fence post errors.

Concrete example of N = 5
total if statements 63
naive paths through code 2^63 ~ 10^19
CCI test vectors to test every path 34

The example is topologically equivalent to real* code you merely have to
construct input data that will force execution down each binary choice
in turn at every level. Getting the absolute minimum number of test
vectors for full coverage is a much harder problem but a good enough
solution is possible in most practical cases.
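The divide-and-conquer idea can be sketched in runnable form (an illustrative rendering of the N = 5 tree above, not the exact posted code): a 32-leaf binary decision tree has 31 internal comparisons, yet the 34 integers -1..32 drive every comparison both ways, including the fence posts.

```python
def classify(i, lo=0, hi=32, trace=None):
    """Walk a binary decision tree distinguishing the values 0..31;
    out-of-range inputs fall off the ends (fence-post cases)."""
    if hi - lo == 1:
        return lo
    mid = (lo + hi) // 2
    taken = i < mid
    if trace is not None:
        trace.add(((lo, hi), taken))   # record which way this IF went
    if taken:
        return classify(i, lo, mid, trace)
    return classify(i, mid, hi, trace)

def branch_coverage(vectors):
    """Return the set of (decision, outcome) pairs the vectors exercise."""
    trace = set()
    for v in vectors:
        classify(v, trace=trace)
    return trace

# 34 vectors exercise all 31 decisions in both directions (62 outcomes),
# versus the astronomically larger count of naively enumerated paths.
covered = branch_coverage(range(-1, 33))
```

The vector count grows with the depth of the tree (log2 of the number of leaves), not with the number of distinct root-to-leaf paths, which is the whole point.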

--
Martin Brown

*real code can be grubby in reality with different depths of tree.
(I'd hope that few routines go deeper than 5 nested if statements)
 
On 05/09/2023 19:06, Don Y wrote:
On 9/5/2023 10:02 AM, Martin Brown wrote:
In round numbers, one in five lines of code is an IF statement, so in
100,000 lines of code there will be 20,000 IF statements.  So, there
are up to 2^20000 unique paths through the code.  Which chokes my HP

Although that is true it is also true that a small number of cunningly
constructed test datasets can explore a very high proportion of the
most frequently traversed paths in any given codebase. One snag is
that testing is invariably cut short by management when development
overruns.

\"We\'ll fix it in version 2\"

I always found this an amusing delusion.

If the product is successful, there will be lots of people clamoring
for fixes so you won't have any manpower to devote to designing
version 2 (but your competitors will see the appeal your product
has and will start designing THEIR replacement for it!)

If the product is a dud (possibly because of these problems),
there won't be a need for a version 2.

It depends a bit on how special the hardware being controlled is.

The industry I am most familiar with was high end scientific mass
spectrometers where management viewed the user software (and embedded
code) as a necessary evil to make the hardware work properly.
The bits that fail to get explored tend to be weird error recovery
routines. I

Because, by design, they are seldom encountered.
So, don't benefit from being exercised in the normal
course of operation.

It is always the rare cases that bite you in the backside (eventually).

Back in the day when (even for a dry site) almost everyone went to the
pub on Friday lunchtimes we had a rule that you could test software and
note defects but not make any alterations that afternoon.

The finger trouble caused by the odd person having had one too many
replicated typical user errors rather well!

McCabe\'s complexity metric provides a way to test paths in components
and subsystems reasonably thoroughly and catch most of the common
programmer errors. Static dataflow analysis is also a lot better now
than in the past.

But some test cases can mask other paths through the code.
There is no guarantee that a given piece of code *can* be
thoroughly tested -- especially if you take into account the
fact that the underlying hardware isn't infallible;

The only time I have seen hardware failure masquerading as software bugs
I can count on the fingers of one hand. They are memorable for that:

1. Memory management unit on ND500 supermini
2. Disk controller DMA flaw on certain HP models in the 386 era
3. Embedded CPU where RTI failed about 1:10^9 times
4. Intel FPU divide bug

I know that cosmic ray single bit switches have to be allowed for in some
of the space probes. OK in error corrected memory but really tough if it
happens in an unprotected CPU register.

\"if (x % )\" can yield one result, now, and a different
result, 5 lines later -- even though x hasn\'t been
altered (but the hardware farted).

So:

    if (x % 2) {
       do this;
       do that;
       do another_thing;
    } else {
       do that;
    }

can execute differently than:

    if (x % 2) {
       do this;
    }

    do that;

    if (x % 2) {
       do another_thing;
    }

Years ago, this possibility wasn't ever considered.

[Yes, optimizers can twiddle this but the point remains]
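One defensive idiom for the hazard sketched above: evaluate the condition once, latch the result in a local, and base every later decision on that single observation. A hypothetical sketch (the do_this/do_that callables stand in for the pseudocode actions; in C the latched value would typically live in a register, Python only shows the structure):

```python
def process(x, do_this, do_that, do_another_thing):
    # Evaluate the condition ONCE and latch it in a local, so every
    # later decision is based on the same observed value rather than
    # re-reading (possibly glitched) state.
    odd = bool(x % 2)
    if odd:
        do_this()
    do_that()
    if odd:
        do_another_thing()
```

With the condition sampled exactly once, the two code shapes in the post are guaranteed to agree.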

And, that doesn\'t begin to address hostile actors in a
system!

It is optimisers rearranging things that can make testing these days so
much more difficult. The CPU out of order and speculative execution
profile also means that the old rules about if statements no longer hold
true. Even more so if the same path is taken many times. The old loop
unrolling trick can actually work against you now if it means that the
innermost loop no longer fits nicely inside a cache line.
Then you only need at most 40000 test vectors to take each branch of
every binary if statement (60000 if it is Fortran with 3 way branches
all used). That is a rather more tractable number (although still large).

Any routine with too high a CCI count is practically certain to
contain latent bugs - which makes it worth looking at more carefully.

\"A \'program\' should fit on a single piece of paper\"

The 7 +/- 2 rule for each hierarchical level of a software design is
still quite a good heuristic unless there are special circumstances.

--
Martin Brown
 
On 9/6/2023 2:58 AM, Martin Brown wrote:
On 05/09/2023 19:06, Don Y wrote:
On 9/5/2023 10:02 AM, Martin Brown wrote:
In round numbers, one in five lines of code is an IF statement, so in
100,000 lines of code there will be 20,000 IF statements.  So, there
are up to 2^20000 unique paths through the code.  Which chokes my HP

Although that is true it is also true that a small number of cunningly
constructed test datasets can explore a very high proportion of the most
frequently traversed paths in any given codebase. One snag is that testing
is invariably cut short by management when development overruns.

\"We\'ll fix it in version 2\"

I always found this an amusing delusion.

If the product is successful, there will be lots of people clamoring
for fixes so you won't have any manpower to devote to designing
version 2 (but your competitors will see the appeal your product
has and will start designing THEIR replacement for it!)

If the product is a dud (possibly because of these problems),
there won't be a need for a version 2.

It depends a bit on how special the hardware being controlled is.

The industry I am most familiar with was high end scientific mass spectrometers
where management viewed the user software (and embedded code) as a necessary
evil to make the hardware work properly.

Note that they were LIKELY reasonably skilled users. So, they could
recognize things that \"didn\'t quite make sense\" and not be victimized
by them.

When users are less savvy, the consequences of a system problem
can be more dire -- as they may not be able to recognize and compensate
for it. E.g., the stove/oven bug I described -- if manifested -- is
likely to leave its victims perplexed as to how to escape its grip.

The bits that fail to get explored tend to be weird error recovery routines. I

Because, by design, they are seldom encountered.
So, don't benefit from being exercised in the normal
course of operation.

It is always the rare cases that bite you in the backside (eventually).

Because they *are* rare. Folks often don't check for memory allocation
failures because they don't encounter them in the normal course of
the program\'s execution. So, when the system is *stressed* and they
manifest, the code to handle them is conveniently missing!

[Oh, you don't have mechanisms in place to stress a system
as it is being tested? Don't worry. Your CUSTOMERS will
find a way in the normal course of operation... won't YOU
be embarrassed! :> ]

I find that most folks haven't a clue as to how to deal with
unexpected errors. And, you often just see them propagated along
until *something* panic()s.
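One alternative to letting an allocation failure propagate until something panics is to degrade gracefully at the point of failure. A sketch (the halving policy is illustrative, not a universal prescription; the injectable allocator exists so the failure path can actually be tested):

```python
def allocate_buffer(nbytes, alloc=bytearray):
    """Try to allocate nbytes; on failure, fall back to the largest
    power-of-two fraction that succeeds instead of dying."""
    size = nbytes
    while size > 0:
        try:
            return alloc(size)
        except MemoryError:
            size //= 2   # degrade gracefully rather than propagate
    raise MemoryError("even a minimal allocation failed")
```

Injecting a failing allocator is also how you stress-test the handler without actually exhausting the machine.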

My current design is entirely real-time. EVERY task has to specify
a deadline. AND, a deadline handler -- what do you want the system to
do WHEN your deadline isn't met. Don't want to think about that?
Then it will just be killed off -- and, by extension, anything
that relies on the services it provides will be killed off. Not
going to leave a good impression on the user that installed your applet!

Back in the day when (even for a dry site) almost everyone went to the pub on
Friday lunchtimes we had a rule that you could test software and note defects
but not make any alterations that afternoon.

The finger trouble caused by the odd person having had one too many replicated
typical user errors rather well!

I have a delightful knack for tickling the unexplored corners of applications.
It's something colleagues have grown to love -- and hate! "Have Don play
with it for a while..." It's rare that I can't find something on a
*finished* piece of software that doesn't work -- in just a few minutes
of playing around!

[I've got a Dragon cassette deck. It "autoreverses" by switching to
a separate pair of tracks in the head that are in-line with the
"back side" stereo channels on the tape -- so the cassette
doesn't have to be mechanically flipped over. Its "tape counter"
is a simple revolutions counter. So, if it starts at 0000 at
the start of side A and reaches XXXX at the end of side A, it
will count *backwards* back down to 0000 while it is playing side B.
But, there is a race in the system logic that can repeatably
cause it to count FORWARDS while *moving* BACKWARDS (there are
several MCUs in the product and comms take finite time! :> )

It's a genuine defect because I have two of them and they both
exhibit the same behavior. For a $2K (30+ years ago) device
you would think they'd pay a bit more attention to detail!]

I had a buddy \"finish\" a product, complete all of the testing
prior to FDA approvals. I happened to be walking past him as he
made his \"announcement\". I reached over his shoulder and
typed something on the machine\'s console -- and it crashed
spectacularly!

\"What did you do????!\"
\"This:\"
\"You\'re not supposed to do that!\"
\"Why did your system LET ME?\"

McCabe\'s complexity metric provides a way to test paths in components and
subsystems reasonably thoroughly and catch most of the common programmer
errors. Static dataflow analysis is also a lot better now than in the past.

But some test cases can mask other paths through the code.
There is no guarantee that a given piece of code *can* be
thoroughly tested -- especially if you take into account the
fact that the underlying hardware isn't infallible;

The only time I have seen hardware failure masquerading as software bugs I can
count on the fingers of one hand. They are memorable for that:

1. Memory management unit on ND500 supermini
2. Disk controller DMA flaw on certain HP models in the 386 era
3. Embedded CPU where RTI failed about 1:10^9 times
4. Intel FPU divide bug

I know that cosmic ray single bit switches have to be allowed for in some of the
space probes. OK in error corrected memory but really tough if it happens in an
unprotected CPU register.

The fact that ECCs are now common on datapaths (and structures)
*inside* CPUs suggests they already know they are pushing the
envelope in terms of reliable behavior (in the absence of those
mechanisms). Look at MLC flash where states are resolved based
on a few handfuls of electrons!

When do you -- as a user or designer -- lose faith in a system
that is reporting CORRECTED errors? At what level do you start
seriously wondering about the number of UNCORRECTED errors that
you *might* be experiencing?

What about systems that don't have provisions for reporting
same? Do you just assume all is well?

When do you start wondering if that flash memory that
holds your system image, FPGA configuration, etc. is
suffering from read wear? Do you have a mechanism for
detecting that? Reporting it? Remedying it?? The
software can't perform as designed if the hardware
can't be relied upon to reliably reproduce it.
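One answer to "when do you lose faith in CORRECTED errors?" is to make the question operational: count corrected-error reports over a sliding window and raise a flag when the rate crosses a policy threshold. A sketch (the threshold and window are illustrative policy knobs, not vendor numbers):

```python
import time
from collections import deque

class CorrectedErrorMonitor:
    """Track CORRECTED error reports; flag when the rate suggests
    UNcorrected errors are becoming plausible."""
    def __init__(self, threshold, window_s=3600.0):
        self.threshold = threshold   # reports per window before we worry
        self.window_s = window_s
        self.events = deque()

    def report(self, now=None):
        """Record one corrected-error report; return True when the
        recent rate has crossed the worry threshold."""
        now = time.monotonic() if now is None else now
        self.events.append(now)
        while self.events and self.events[0] < now - self.window_s:
            self.events.popleft()   # drop reports outside the window
        return len(self.events) >= self.threshold
```

The same shape applies to flash read-disturb counters: a rising corrected-error rate is the early warning that a scrub or block rewrite is due.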

Are you sure your *intended* code isn't triggering an
exploitable feature accidentally? (are you sure any
"foreign" code that you are hosting isn't deliberately
trying to do so?)

\"if (x % )\" can yield one result, now, and a different
result, 5 lines later -- even though x hasn\'t been
altered (but the hardware farted).

So:

     if (x % 2) {
        do this;
        do that;
        do another_thing;
     } else {
        do that;
     }

can execute differently than:

     if (x % 2) {
        do this;
     }

     do that;

     if (x % 2) {
        do another_thing;
     }

Years ago, this possibility wasn't ever considered.

[Yes, optimizers can twiddle this but the point remains]

And, that doesn\'t begin to address hostile actors in a
system!

It is optimisers rearranging things that can make testing these days so much
more difficult. The CPU out of order and speculative execution profile also
means that the old rules about if statements no longer hold true. Even more so
if the same path is taken many times. The old loop unrolling trick can actually
work against you now if it means that the innermost loop no longer fits nicely
inside a cache line.

Most \"programmers\" are clueless of these issues; they just concentrate
on getting the \"logic correct\". Hardware designers are even MORE
clueless because they can\'t even imagine the performance consequences.

I\'ve had to refactor many of the critical path structures in my
current design to \"less intuitive\" structures because of the
cache hits incurred when designing for a more intuitive implementation
(processor doesn\'t care about what *seems* related -- it only cares
about what *is* related, in terms of execution).

Then you only need at most 40000 test vectors to take each branch of every
binary if statement (60000 if it is Fortran with 3 way branches all used).
That is a rather more tractable number (although still large).

Any routine with too high a CCI count is practically certain to contain
latent bugs - which makes it worth looking at more carefully.

\"A \'program\' should fit on a single piece of paper\"

The 7 +/- 2 rule for each hierarchical level of a software design is still
quite a good heuristic unless there are special circumstances.

Makes you wonder how folks can still think things like monolithic
kernels \"make sense\": how many crania do you need to gather to have
even an APPROXIMATE understanding of what\'s happening in the
machine, *now*?

With multi-threaded kernels, are you sure you know everything that
*might* be happening WHILE \"this\" is happening?

*SO* much easier when you're trying to get some trivial piece of code
to work than something CONSIDERABLY more complex!

[It has been an eye-opener dealing with odd concepts like
transport delays in designing and evaluating performance
when you always took the cost of comms INSIDE a device to
be zero! Think of all the critical regions that are
inherently exposed that you never had to worry about:
\"What are the chances of <something> happening *between*
the execution of statements 23 and 24?\" Ans: 100%
if the user is running the device! Now, how do you TEST
for that inevitability?]
 
On 9/6/2023 4:42 AM, Don Y wrote:
[I've got a Dragon cassette deck. It "autoreverses" by switching to
a separate pair of tracks in the head that are in-line with the
"back side" stereo channels on the tape -- so the cassette
doesn't have to be mechanically flipped over. Its "tape counter"
is a simple revolutions counter. So, if it starts at 0000 at
the start of side A and reaches XXXX at the end of side A, it
will count *backwards* back down to 0000 while it is playing side B.
But, there is a race in the system logic that can repeatably
cause it to count FORWARDS while *moving* BACKWARDS (there are
several MCUs in the product and comms take finite time! :> )

No, I think the flaw has it counting backwards while moving forwards
(I haven't tickled that bug recently; I know it's there so why
give it a chance to manifest?) But, only if you act in a small
(1-2 second) window of time.

So, there is no way to claim that it's an intentional feature
as the deck shouldn't care if you waited 1 second or 4!

It's a genuine defect because I have two of them and they both
exhibit the same behavior. For a $2K (30+ years ago) device
you would think they'd pay a bit more attention to detail!]
 
On Monday, 4 September 2023 at 10:12:50 UTC+1, gggg gggg wrote:
> https://www.wired.com/2015/07/still-best-theory-mh370/

The trouble with that theory is that they would be unlikely to be overcome by smoke
when wearing oxygen masks - which they would undoubtedly put on in such a situation.

John
 
On 05-09-2023 22:08, Fred Bloggs wrote:
On Friday, September 1, 2023 at 3:56:37 PM UTC-4, Klaus Kragelund wrote:
Hi

I have a triac control circuit in which I supply gate current all the time to avoid zero crossing noise.

https://electronicsdesign.dk/tmp/TriacSolution.PNG

Apparently, sometimes the circuit spontaneously turns on the triac.
It's probably due to a transient (high dV/dt) turning it on via the "rate of rise of off-state voltage" limit.

The triac used is BT137S-600:

https://www.mouser.dk/datasheet/2/848/bt137s-600g-1520710.pdf

I am using a snubber to divert energy, and also have a pulldown of 1kohm to shunt energy transients that capacitively couple into the gate.

The unit is at the client, so have not measured on it yet, so trying to guess what I should try to remove the problem.

I could:

Use a harder snubber
Reduce the shunt resistor
Get a better triac
Add an inductor in series to limit the transient

One thing I thought of, since I turn it on all the time, and it is not very critical that the timing is perfect in terms of turning it on in the zero crossing, was to add a big capacitor on the gate in parallel with shunt resistor R543. That will act as a low impedance for high speed transients.

Good idea, or better ideas?

What does that U1 on the schematic represent? A SPST with 5 ms bounce? V3 is line 225VAC 50Hz, I see that. That could be a problem. Is the spurious triggering occurring when you throw that switch?
The switch U1 is a simulation device to turn the external power source
on at 5ms (so peak of a 50Hz mains cycle).

That is done to see how a large dV/dt can affect the system (that is, if
the simulation model of the triac supports M2-gate capacitive coupling
modelling).
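The gate-capacitor idea can be sanity-checked with a quick RC estimate. The 100 nF value below is a guess, not a recommendation; R543 = 1 kOhm is from the post:

```python
import math

R543 = 1_000.0    # gate pulldown resistor, ohms (from the schematic description)
C_gate = 100e-9   # hypothetical added gate capacitor, farads

# Corner frequency of the R-C low-pass formed at the gate node:
f_corner = 1.0 / (2.0 * math.pi * R543 * C_gate)   # roughly 1.6 kHz
# 50 Hz gate drive passes nearly untouched, while a sub-microsecond
# dV/dt transient (spectral content well above 100 kHz) is heavily shunted.
```

The trade-off is a slower gate edge, which the post argues is acceptable here because zero-crossing timing is not critical.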
 
On Wednesday, September 6, 2023 at 9:51:31 AM UTC-4, Klaus Vestergaard Kragelund wrote:
On 05-09-2023 22:08, Fred Bloggs wrote:
On Friday, September 1, 2023 at 3:56:37 PM UTC-4, Klaus Kragelund wrote:
Hi

I have a triac control circuit in which I supply gate current all the time to avoid zero crossing noise.

https://electronicsdesign.dk/tmp/TriacSolution.PNG

Apparently, sometimes the circuit spontaneously turns on the triac.
It's probably due to a transient (high dV/dt) turning it on via the "rate of rise of off-state voltage" limit.

The triac used is BT137S-600:

https://www.mouser.dk/datasheet/2/848/bt137s-600g-1520710.pdf

I am using a snubber to divert energy, and also have a pulldown of 1kohm to shunt energy transients that capacitively couple into the gate.

The unit is at the client, so have not measured on it yet, so trying to guess what I should try to remove the problem.

I could:

Use a harder snubber
Reduce the shunt resistor
Get a better triac
Add an inductor in series to limit the transient

One thing I thought of, since I turn it on all the time, and it is not very critical that the timing is perfect in terms of turning it on in the zero crossing, was to add a big capacitor on the gate in parallel with shunt resistor R543. That will act as a low impedance for high speed transients.

Good idea, or better ideas?

What does that U1 on the schematic represent? A SPST with 5 ms bounce? V3 is line 225VAC 50Hz, I see that. That could be a problem. Is the spurious triggering occurring when you throw that switch?

The switch U1 is a simulation device to turn the external power source
on at 5ms (so peak of a 50Hz mains cycle).

That is done to see how a large dV/dt can affect the system (that is, if
the simulation model of the triac supports M2-gate capacitive coupling
modelling).

Unless the model is specifically advertised to model that kind of thing, it probably doesn't. You can put a current probe in the gate lead to confirm.

For some representative transient waveforms see:
https://en.wikipedia.org/wiki/IEC_61000-4-5
 
On Wed, 6 Sep 2023 09:49:48 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 05/09/2023 23:14, Joe Gwinn wrote:
On Tue, 5 Sep 2023 18:02:05 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 05/09/2023 17:45, Joe Gwinn wrote:

calculator, so we must resort to logarithms, yielding 10^6021, which
is a *very* large number. The age of the Universe is only 14 billion
years, call it 10^10 years, so one would never be able to test even a
tiny fraction of the possible paths.

McCabe\'s complexity metric provides a way to test paths in components
and subsystems reasonably thoroughly and catch most of the common
programmer errors. Static dataflow analysis is also a lot better now
than in the past.

Then you only need at most 40000 test vectors to take each branch of
every binary if statement (60000 if it is Fortran with 3 way branches
all used). That is a rather more tractable number (although still large).

Any routine with too high a CCI count is practically certain to contain
latent bugs - which makes it worth looking at more carefully.

I must say that I fail to see how this can overcome 10^6021 paths,
even if it is wondrously effective, reducing the space to be tested by
a trillion to one (10^-12) - only 10^6009 paths to explore.

It is a divide and conquer strategy. It guarantees to execute every path
at least once and common core paths many times with different data. The
number of test vectors required scales as log2 of the number of paths.

That is an enormous reduction in the state space and makes coverage
testing possible. There are even some automatic tools that can help
create the test vectors now. Not a recommendation but just to make you
aware of some of the options for this sort of thing.

www.accelq.com/blog/test-coverage-techniques/

Here is a simple example for N = 5 (you wouldn't code it this way)

if (i<16)
{
if (i<8)
{
if (i<4)
{
if (i<2)
{
if (i<1) //zero
else // one
}
else
{
if (i<3) // two
else // three
}
else
{
}
}
else
{
}
}
else
{
}
}
else
{
}

So that at each level i=0 ..N in the trivial example there are 2^N if
statements to select between numbers in the range 0 to 31 inclusive.
There is a deliberate error left in.

Counted your way there are 2^(N+1)-1 if statements and so 2^(2^(N+1))
distinct paths through the software (plus a few more with invalid data).

However integers -1 through 2^N will be sufficient to explore every path
at least once and test for high/low fence post errors.

Concrete example of N = 5
total if statements 63
naive paths through code 2^63 ~ 10^19
CCI test vectors to test every path 34

The example is topologically equivalent to real* code you merely have to
construct input data that will force execution down each binary choice
in turn at every level. Getting the absolute minimum number of test
vectors for full coverage is a much harder problem but a good enough
solution is possible in most practical cases.

In practice, this is certainly pretty effective, but the proposed
requirement did not allow for such shortcuts, rendering the
requirement intractable - the Sun will blow up first.

Also, in practice we do a combination of random probing and fuzzing.

<https://en.wikipedia.org/wiki/Fuzzing>
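A minimal illustration of the fuzzing approach mentioned above (the target function and its planted bug are entirely made up; real fuzzers like AFL add coverage feedback and input mutation on top of this):

```python
import random
import string

def buggy_parse(s):
    """Toy stand-in for code under test: crashes on a rare-ish input."""
    if "a" in s and s.index("a") % 2 == 1:
        raise ValueError("unhandled corner case")
    return len(s)

def fuzz(target, trials=500, length=20, seed=0):
    """Minimal random fuzzer: throw seeded-random strings at `target`
    and collect every input that makes it raise."""
    rng = random.Random(seed)   # seeded, so failures are reproducible
    crashers = []
    for _ in range(trials):
        s = "".join(rng.choice(string.ascii_lowercase) for _ in range(length))
        try:
            target(s)
        except Exception:
            crashers.append(s)
    return crashers
```

Seeding the generator matters: a crash you cannot replay is a crash you cannot debug.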


Joe Gwinn
 
