On Friday, September 1, 2023 at 4:54:30 PM UTC-5, Flyguy wrote:
On Friday, September 1, 2023 at 2:48:13 PM UTC-7, Flyguy wrote:
A retired aerospace engineer, Richard Godfrey, analyzed radio wave propagation data from the Weak Signal Propagation Reporter network, developed by hams, to pinpoint MH370's crash site to a 300 sq mi area. This sounds like a lot, but previous estimates were hundreds of thousands of sq mi.
https://www.airlineratings.com/news/mh370-new-research-paper-confirms-wsprnet-tracking-technology/
Here is the full report:
https://www.dropbox.com/s/pkolz2mxr1rhepb/MH370%20GDTAAA%20WSPRnet%20Analysis%20Technical%20Report%2015MAR2022.pdf?dl=0
Godfrey was approached by Netflix for a documentary about MH370, but declined as they only wanted conspiratorial viewpoints. In fact, the Netflix "documentary" peddles the idea of a Russian conspiracy in which MH370 was hijacked by three Russians and flown to Kazakhstan. They
do this by entering the electronics bay, taking control of the aircraft, and locking out the pilot's controls. Obviously, Godfrey's flight path totally refutes this theory.
Here is the flight path report:
https://www.dropbox.com/s/k4fn8eec4z9np0z/GDTAAA%20WSPRnet%20MH370%20Analysis%20Flight%20Path%20Report.pdf
Captivating! I had no idea that WSPR analyses could produce such results.
Thanks for the link to the paper.
Cheers,
John
On Saturday, September 2, 2023 at 11:17:32 AM UTC-4, Jan Panteltje wrote:
On a sunny day (Sat, 2 Sep 2023 06:18:05 -0700 (PDT)) it happened Fred Bloggs
<bloggs.fred...@gmail.com> wrote in
<d8c35725-1608-4f22...@googlegroups.com>:
On Saturday, September 2, 2023 at 8:59:07 AM UTC-4, Jan Panteltje wrote:
On a sunny day (Sat, 2 Sep 2023 04:42:02 -0700 (PDT)) it happened Fred Bloggs
<bloggs.fred...@gmail.com> wrote in
<e55243f3-fec7-4101...@googlegroups.com>:
On Friday, September 1, 2023 at 5:48:13 PM UTC-4, Flyguy wrote:
Can they get fentanyl implicated in some way? Or UFOs maybe.
Drug smuggler stashed something toxic in the OBOG central filtration maybe...
"In February 2022, the Australian Transport Safety Bureau and Geoscience Australia confirmed they were reviewing old data related to MH370, following the release of Godfrey's report.[15] In April 2022 the data review "concluded that it is highly unlikely there is an aircraft debris field within the reviewed search area."[16]"
https://www.atsb.gov.au/media/news-items/2022/mh370-data-review
After reading both papers it seems evident to me that it was a premeditated suicide by the pilot.
He must have been 100% conscious and in control to make all those small adjustments,
and the endpoint corresponds to / is the same as the one he had on his flight simulator at home.
That is the third pilot suicide, by one who does not give a shit about his passengers, that I have read about:
one in France, one in Africa, and now this.
Maybe he locked everybody else out of the cockpit...
The location has several miles of uncertainty, but to look again now with the smaller area may make sense.
Not sure. But don't all suicide flights crash on the planned route? This one crashed because it ran out of fuel. The pilot could have programmed a death route and then shot himself or taken a pill.
The route has funny things like flying in a rectangle;
see page 49 and onward of
GDTAAA_WSPRnet_MH370_Analysis_Flight_Path_Report.pdf
and many other corrections too.
Not sure you can program the MH370 autopilot to do all that all by itself.
My drone can, but then again... you need to enter GPS locations,
altitude, speed.
The reality is Asian commercial pilots commit suicide like this ALL the time. But there are strong political and economic reasons for their phony accident investigations to come up short of making that finding.
The most recent one:
https://www.planeandpilotmag.com/news/the-latest/pilot-murder-suicide-likely-cause-of-china-eastern-air-disaster/
When you start losing parts of the wings and other control surfaces, that kind of gives away that the pilot was deliberately operating the aircraft outside its envelope. Boeing was convinced of their finding based upon the data. China was angry with it.
On Sunday, September 3, 2023 at 11:09:50 PM UTC-5, jdyöung wrote:
"Imitation is the sincerest form of flattery that mediocrity can pay to greatness." - Oscar Wilde
Indeed it is
Owned!
ROFL!
Yes you are
jdyoung, Official
jdy...@gmail.com
www.splc.org
No you're not.
jdyöung, Official
jdyo...@gmail.com
On Mon, 4 Sep 2023 23:59:08 +0200, Klaus Vestergaard Kragelund
<klauskvik@hotmail.com> wrote:
On 03-09-2023 18:05, Fred Bloggs wrote:
On Sunday, September 3, 2023 at 10:42:14 AM UTC-4, John Larkin wrote:
On Sun, 3 Sep 2023 05:38:52 -0700 (PDT), Fred Bloggs
<bloggs.fred...@gmail.com> wrote:
On Sunday, September 3, 2023 at 4:15:30 AM UTC-4, John Larkin wrote:
On Sat, 2 Sep 2023 11:20:49 -0700 (PDT), Fred Bloggs
<bloggs.fred...@gmail.com> wrote:
On Friday, September 1, 2023 at 4:53:24 PM UTC-4, John Larkin wrote:
On Fri, 1 Sep 2023 12:56:31 -0700 (PDT), Klaus Kragelund
<klaus.k...@gmail.com> wrote:
Hi
I have a triac control circuit in which I supply gate current all the time to avoid zero-crossing noise.
https://electronicsdesign.dk/tmp/TriacSolution.PNG
Apparently, sometimes the circuit spontaneously turns on the triac.
It's probably due to a transient with high dV/dt turning it on via the "rate of rise of off-state voltage" limit.
The triac used is the BT137S-600:
https://www.mouser.dk/datasheet/2/848/bt137s-600g-1520710.pdf
I am using a snubber to divert energy, and also have a pulldown of 1 kohm to shunt transients that capacitively couple into the gate.
The unit is at the client, so I have not measured on it yet and am trying to guess what I should do to remove the problem.
I could:
Use a harder snubber
Reduce the shunt resistor
Get a better triac
Add an inductor in series to limit the transient
One thing I thought of, since I turn it on all the time and it is not very critical that the timing of turning it on at the zero crossing is perfect, was to add a big capacitor on the gate in parallel with shunt resistor R543. That would act as a low impedance for high-speed transients.
Good idea, or better ideas?
Cheers
Klaus
It's a sensitive-gate triac. R542 and 543 look big to me. They could
be smaller and bypassed.
If there are motors in the vicinity, you want to at least use twisted leads in all feeds of the gate circuit.
I doubt that would make any difference.
Twisted pairs make a HUGE difference.
Sometimes. Probably not here.
I wonder how far from the triac the opto is.
The opto is right next to the triac, with a good ground plane, so no
twisting of the gate traces is needed.
It drops 1.3 V minimum at 10 A. It has an Rth(j-c) of about 2 K/W. If the application is high current, it needs a heat sink, so it may be off board.
I^2t is only 21 A^2s, which is kind of weak.
The commutating dV/dt at turn-off is specified at a minimum of 10 V/us, again on the low side.
But dVD/dt is a minimum of 200 V/us with the gate open; that's what it takes to trigger it from the off state, which is pretty good but not outstanding. It could be that, and if so a standard series-L, shunt-C filter off the line is all that's needed.
Don't know how you get "sensitive gate" with a 30 mA trigger current.
Some triacs need 150 mA and 1.5 volts to trigger. Some have low-ohmic
paths from gate to MT1, which helps reduce spurious triggering.
The kicker is VGT, the gate trigger voltage. At 400 V across the main terminals it could be as low as 0.25 V at 125 °C, making for 0.4 V at 25 °C (Table 6). That kind of number indicates a vulnerability. He definitely should guard the gate drive.
https://www.mouser.dk/datasheet/2/848/bt137s-600g-1520710.pdf
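For what it's worth, the 0.25 V to 0.4 V step is consistent with a simple linear derating. A minimal sketch, where the slope is an assumed value chosen to reproduce the figures above, not one taken from the datasheet:

# Rough linear extrapolation of minimum gate trigger voltage (VGT)
# from the 125 degC datasheet point back to room temperature.
# The slope is an ASSUMPTION picked to match the numbers quoted above.

VGT_125 = 0.25        # V, datasheet minimum at 125 degC (Table 6)
SLOPE = -1.5e-3       # V/degC, assumed derating slope

def vgt_min(t_c: float) -> float:
    """Estimated minimum gate trigger voltage at junction temperature t_c."""
    return VGT_125 + SLOPE * (t_c - 125.0)

print(f"VGT(min) at 25 degC ~ {vgt_min(25.0):.2f} V")   # ~0.40 V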
The combination of the voltage rate of rise and the capacitance from
MT1/MT2 to the gate is what triggers it, right?
So just adding a capacitor on the gate would be a good way to protect
against noise, right?
I'd bypass the gate and the optocoupler receiver. Either could be
triggered by a bit of capacitively coupled noise.
As suggested, R542 and R543 could be smaller, both bypassed by as much
C as is compatible with your speed requirements.
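Putting rough numbers on the bypass idea: only the 200 V/us dv/dt and the 1 k pulldown come from the thread; the stray capacitance, transient amplitude, and 100 nF bypass value are assumptions for illustration.

# Back-of-envelope check of the "big capacitor on the gate" idea.

C_STRAY = 10e-12      # F, ASSUMED MT2-to-gate coupling capacitance
DVDT = 200e6          # V/s, datasheet off-state dv/dt (200 V/us)
R_PULLDOWN = 1e3      # ohm, R543 from the schematic
C_BYPASS = 100e-9     # F, ASSUMED gate bypass capacitor
V_TRANSIENT = 400.0   # V, ASSUMED fast line transient amplitude
VGT_MIN = 0.25        # V, worst-case (hot) gate trigger voltage

i_inj = C_STRAY * DVDT                     # ~2 mA injected into the gate node
v_r_only = i_inj * R_PULLDOWN              # ~2 V across 1 k: enough to fire
v_divided = V_TRANSIENT * C_STRAY / (C_STRAY + C_BYPASS)   # ~0.04 V step
tau_us = R_PULLDOWN * C_BYPASS * 1e6       # 100 us: harmless for continuous drive

print(f"resistor only : {v_r_only:.2f} V vs VGT(min) {VGT_MIN} V")
print(f"with bypass   : {v_divided * 1e3:.0f} mV, RC = {tau_us:.0f} us")

Since the gate is driven continuously, the 100 us time constant costs nothing; the capacitive divider knocks the coupled step well below the worst-case trigger voltage.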
Anyone else use bug reporting frequency as a gross indicator
of system stability?
On Friday, September 1, 2023 at 7:39:45 PM UTC-7, John Smiht wrote:
This is from the Comments section of the following article:
Dave Pergamon, Perth, Australia, 2 days ago
I'm a radio ham and I know full well that WSPR is not technically capable of tracking aircraft movements. For starters, WSPR frequencies and power levels are far too low to detect aircraft, and anyhow, WSPR radio waves travel in the ionosphere, 80 to 600 km above the Earth's surface, whereas the maximum altitude commercial aircraft fly at is around 30,000 feet, or about ten kilometres. No professional radio physicist or atmospheric scientist of any repute would put their name to this kind of pseudo-scientific BS.
https://www.dailymail.co.uk/news/article-12468439/MH370-flight-bombshell-claim-resting-place-revealed.html
On 04/09/2023 14:30, Don Y wrote:
Anyone else use bug reporting frequency as a gross indicator
of system stability?
Just about everyone who runs a beta test program.
MTBF is another metric that can be used for something that is intended
to run 24/7 and recover gracefully from anything that may happen to it.
It is inevitable that a new release will have some bugs and minor
differences from its predecessor that real life users will find PDQ.
The trick is to gain enough information from each in service failure to
identify and fix the root cause bug in a single iteration and without
breaking something else. Modern optimisers make that more difficult now
than it used to be back when I was involved in commercial development.
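To illustrate turning raw bug reports into a stability number, a minimal sketch; the report timestamps here are invented, and a real harness would pull them from the bug tracker:

# Crude stability metric: mean time between reported failures.
# Timestamps are made-up example data, in hours since release.

report_times = [5.0, 9.5, 30.0, 31.5, 80.0, 200.0, 480.0]

gaps = [b - a for a, b in zip(report_times, report_times[1:])]
mtbf = sum(gaps) / len(gaps)          # mean time between failures, hours
print(f"MTBF so far: {mtbf:.1f} h")

# A rising trend in the gaps (4.5, 20.5, ..., 280 h here) is the usual
# sign that a release is settling down; a falling trend says the opposite.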
On Tuesday, 5 September 2023 at 06:17:17 UTC+1, gggg gggg wrote:
This is from the Comments section of the following article:
Dave Pergamon, Perth, Australia, 2 days ago
I'm a radio ham and I know full well that WSPR is not technically capable of tracking aircraft movements. For starters, WSPR frequencies and power levels are far too low to detect aircraft, and anyhow, WSPR radio waves travel in the ionosphere, 80 to 600 km above the Earth's surface, whereas the maximum altitude commercial aircraft fly at is around 30,000 feet, or about ten kilometres. No professional radio physicist or atmospheric scientist of any repute would put their name to this kind of pseudo-scientific BS.
The paper does state that the interaction with aircraft happens close to the locations where the sky wave refracts
down to the ground and reflects up again. This means that the claim quoted above must have been made by
somebody who had not actually read what they were claiming to be BS. Whether the results are accurate enough to
give a useful search area is another matter.
John
https://www.dailymail.co.uk/news/article-12468439/MH370-flight-bombshell-claim-resting-place-revealed.html
On Tue, 5 Sep 2023 13:13:51 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:
On 04/09/2023 14:30, Don Y wrote:
Anyone else use bug reporting frequency as a gross indicator
of system stability?
Just about everyone who runs a beta test program.
MTBF is another metric that can be used for something that is intended
to run 24/7 and recover gracefully from anything that may happen to it.
It is inevitable that a new release will have some bugs and minor
differences from its predecessor that real life users will find PDQ.
That's the story of software: bugs are inevitable, so why bother to be
careful coding or testing? You can always wait for bug reports from
users and post regular fixes of the worst ones.
The trick is to gain enough information from each in service failure to
identify and fix the root cause bug in a single iteration and without
breaking something else. Modern optimisers make that more difficult now
than it used to be back when I was involved in commercial development.
There have been various drives to write reliable code, but none were
popular. Quite the contrary, the software world loves abstraction and
ever new, bizarre languages... in effect playing games instead of coding
boring, reliable applications in some klunky, reliable language.
Electronic design, and FPGA coding, are intended to be bug-free first
pass and often are, when done right.
FPGAs are halfway software, so the coders tend to be less careful than
hardware designers. FPGA bug fixes are easy, so why bother to read
your own code?
That's ironic, when you think about it. The hardest bits, the physical
electronics, have the fewest bugs.
On Tue, 05 Sep 2023 08:57:22 -0700, John Larkin
<jlarkin@highlandSNIPMEtechnology.com> wrote:
On Tue, 5 Sep 2023 13:13:51 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:
On 04/09/2023 14:30, Don Y wrote:
Anyone else use bug reporting frequency as a gross indicator
of system stability?
Just about everyone who runs a beta test program.
MTBF is another metric that can be used for something that is intended
to run 24/7 and recover gracefully from anything that may happen to it.
It is inevitable that a new release will have some bugs and minor
differences from its predecessor that real life users will find PDQ.
That's the story of software: bugs are inevitable, so why bother to be
careful coding or testing? You can always wait for bug reports from
users and post regular fixes of the worst ones.
The trick is to gain enough information from each in service failure to
identify and fix the root cause bug in a single iteration and without
breaking something else. Modern optimisers make that more difficult now
than it used to be back when I was involved in commercial development.
There have been various drives to write reliable code, but none were
popular. Quite the contrary, the software world loves abstraction and
ever new, bizarre languages... in effect playing games instead of coding
boring, reliable applications in some klunky, reliable language.
Electronic design, and FPGA coding, are intended to be bug-free first
pass and often are, when done right.
FPGAs are halfway software, so the coders tend to be less careful than
hardware designers. FPGA bug fixes are easy, so why bother to read
your own code?
That's ironic, when you think about it. The hardest bits, the physical
electronics, have the fewest bugs.
There is a complication. Modern software is tens of millions of lines
of code, far exceeding the inspection capabilities of humans. Hardware
is far simpler in terms of lines of FPGA code. But it\'s creeping up.
On a project some decades ago, the customer wanted us to verify every
path through the code, which was about 100,000 lines (large at the
time) of C or assembler (don't recall; it doesn't actually matter).
In round numbers, one in five lines of code is an IF statement, so in
100,000 lines of code there will be 20,000 IF statements. So, there
are up to 2^20000 unique paths through the code. Which chokes my HP
calculator, so we must resort to logarithms, yielding 10^6021, which
is a *very* large number. The age of the Universe is only 14 billion
years, call it 10^10 years, so one would never be able to test even a
tiny fraction of the possible paths.
The customer withdrew the requirement.
Joe Gwinn
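Joe's estimate is easy to reproduce; a quick sketch of the same arithmetic, where the one-IF-per-five-lines rule of thumb and the 14-billion-year figure come from the post above and the tests-per-second rate is an assumption:

import math

LINES = 100_000
n_if = LINES // 5                        # one IF per five lines of code
log10_paths = n_if * math.log10(2)       # log10 of 2**20000
print(f"{n_if} IFs -> up to 10^{log10_paths:.0f} paths")   # ~10^6021

AGE_S = 14e9 * 365.25 * 24 * 3600        # age of the universe, in seconds
RATE = 1e9                               # ASSUMED tests per second
print(f"paths testable in the age of the universe: ~10^{math.log10(AGE_S * RATE):.0f}")

Even at a billion tests per second for the whole age of the universe you cover roughly 10^27 paths, a vanishingly small fraction of 10^6021.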
On 05/09/2023 16:57, John Larkin wrote:
On Tue, 5 Sep 2023 13:13:51 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:
On 04/09/2023 14:30, Don Y wrote:
Anyone else use bug reporting frequency as a gross indicator
of system stability?
Just about everyone who runs a beta test program.
MTBF is another metric that can be used for something that is intended
to run 24/7 and recover gracefully from anything that may happen to it.
It is inevitable that a new release will have some bugs and minor
differences from its predecessor that real life users will find PDQ.
That's the story of software: bugs are inevitable, so why bother to be
careful coding or testing? You can always wait for bug reports from
users and post regular fixes of the worst ones.
Don't blame the engineers for that - it is the ship-it-and-be-damned
senior management that is responsible for most buggy code being shipped.
Even more so now that 1+ GB upgrades are essentially free.
First to market is worth enough that people live with buggy code. The
worst major release I can recall in a very long time was MS Excel 2007
(although bugs in Vista took a lot more flak - rather unfairly IMHO).
(which reminds me it is a MS patch Tuesday today)
The trick is to gain enough information from each in service failure to
identify and fix the root cause bug in a single iteration and without
breaking something else. Modern optimisers make that more difficult now
than it used to be back when I was involved in commercial development.
There have been various drives to write reliable code, but none were
popular. Quite the contrary, the software world loves abstraction and
ever new, bizarre languages... in effect playing games instead of coding
boring, reliable applications in some klunky, reliable language.
The only ones which actually could be truly relied upon used formal
mathematical proof techniques to ensure reliability. Very few
practitioners are able to do it properly and it is pretty much reserved
for ultra high reliability safety and mission critical code.
It could all be done to that standard iff commercial developers and
their customers were prepared to pay for it. However, they want it now
and they keep changing their minds about what it is they actually want
so the goalposts are forever shifting around. That sort of functionality
creep is much less common in hardware.
UK's NATS system is supposedly 6 sigma coding but its misbehaviour on
Bank Holiday Monday peak travel time was somewhat disastrous. It seems
someone managed to input the halt and catch fire instruction and the
buffers ran out before they were able to fix it. There will be a
technical report out in due course - my guess is that they have reduced
overheads and no longer have some of the key people who understand its
internals. Malformed flight plan data should not have been able to kill
it stone dead - but apparently that is exactly what happened!
https://www.ft.com/content/9fe22207-5867-4c4f-972b-620cdab10790
(might be paywalled)
If so, Google "UK air traffic control outage caused by unusual flight
plan data"
Electronic design, and FPGA coding, are intended to be bug-free first
pass and often are, when done right.
But you are using design and simulation *software* that, though you
fail to acknowledge it, is actually pretty good. If you had to do it
with pencil and paper you would be there forever.
FPGAs are halfway software, so the coders tend to be less careful than
hardware designers. FPGA bug fixes are easy, so why bother to read
your own code?
That's ironic, when you think about it. The hardest bits, the physical
electronics, have the fewest bugs.
So do physical mechanical interlocks. I don't trust software or even
electronic interlocks to protect me compared to a damn great beam stop
and a padlock on it with the key in my pocket.
In round numbers, one in five lines of code is an IF statement, so in
100,000 lines of code there will be 20,000 IF statements. So, there
are up to 2^20000 unique paths through the code. Which chokes my HP
Although that is true, it is also true that a small number of cunningly
constructed test datasets can explore a very high proportion of the most
frequently traversed paths in any given codebase. One snag is that testing
is invariably cut short by management when development overruns.
I recall one bug latent on the VAX for ages: when the system ran out of IO
handles (because someone was opening them inside a loop), the first thing
the recovery routine tried to do was open an IO channel!
calculator, so we must resort to logarithms, yielding 10^6021, which
is a *very* large number. The age of the Universe is only 14 billion
years, call it 10^10 years, so one would never be able to test even a
tiny fraction of the possible paths.
McCabe\'s complexity metric provides a way to test paths in components and
subsystems reasonably thoroughly and catch most of the common programmer
errors. Static dataflow analysis is also a lot better now than in the past.
Then you only need at most 40,000 test vectors to take each branch of every
binary IF statement (60,000 if it is Fortran with 3-way branches all used).
That is a rather more tractable number (although still large).
Any routine with too high a cyclomatic complexity is practically certain to
contain latent bugs - which makes it worth looking at more carefully.
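To make the branch-versus-path arithmetic concrete, a minimal sketch; the M = D + 1 relation for a single-entry, single-exit routine with D binary decisions is the usual simplification of McCabe's M = E - N + 2P:

import math

def coverage_costs(decisions: int):
    """Rough test-count estimates for a routine with D binary branches."""
    branch_tests = 2 * decisions          # each IF driven both ways
    mccabe_m = decisions + 1              # linearly independent paths to cover
    log10_all_paths = decisions * math.log10(2)   # log10 of all 2**D paths
    return branch_tests, mccabe_m, log10_all_paths

for d in (10, 20_000):     # a small routine vs. Joe's whole 100 kLoC program
    b, m, p = coverage_costs(d)
    print(f"D={d}: ~{b} branch tests, McCabe M={m}, all paths ~10^{p:.0f}")

Branch coverage and independent-path counts grow linearly with the decision count, which is why the 40,000-vector figure is tractable while full path coverage is not.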
On 9/5/2023 9:47 AM, Martin Brown wrote:
Don't blame the engineers for that - it is the ship-it-and-be-damned
senior management that is responsible for most buggy code being shipped.
Even more so now that 1+ GB upgrades are essentially free.
Note how the latest coding styles inherently acknowledge that.
Agile? How-to-write-code-without-knowing-what-it-has-to-do?
First to market is worth enough that people live with buggy code. The worst
Of course! Anyone think their Windows/Linux box is bug-free?
USENET client? Browser? Yet, somehow, they all seem to provide
real value to their users!
major release I can recall in a very long time was MS Excel 2007 (although
bugs in Vista took a lot more flak - rather unfairly IMHO).
Of course. Folks run Linux with 20M+ LoC? So, a ballpark estimate
of 20K+ *bugs* in the RELEASED product??
https://en.wikipedia.org/wiki/Linux_kernel#/media/File:Linux_kernel_map.png
The era of monolithic kernels is over. Unless folks keep wanting
to DONATE their time to maintaining them.
https://en.wikipedia.org/wiki/Linux_kernel#/media/File:Redevelopment_costs_of_Linux_kernel.png
Amusing that it's pursuing a 50-year-old dream... (let's get together
an effort to recreate the Wright flyer so we can all take 100 yard flights!)
(which reminds me it is a MS patch Tuesday today)
Surrender your internet connection, for the day...
The only ones which actually could be truly relied upon used formal
mathematical proof techniques to ensure reliability. Very few practitioners are
able to do it properly and it is pretty much reserved for ultra high
reliability safety and mission critical code.
And only applies to the smallest parts of the codebase. The "engineering"
comes in figuring out how to live with systems that aren't verifiable.
(you can't ensure hardware WILL work as advertised unless you have tested
every component that you put into the fabrication -- ah, but you can blame
someone else for YOUR system's failure)
It could all be done to that standard iff commercial developers and their
customers were prepared to pay for it. However, they want it now and they keep
changing their minds about what it is they actually want so the goalposts are
forever shifting around. That sort of functionality creep is much less common
in hardware.
Exactly. And, software often is told to COMPENSATE for hardware shortcomings.
One of the sound systems used in early video games used a CVSD as an ARB.
But, the idiot who designed the hardware was 200% clueless about how the
software would use the hardware. So, the (dedicated!) processor had
to sit in a tight loop SHIFTING bits into the CVSD. Of course, each
path through the loop had to be balanced in terms of execution time
lest you get a beat component (as every 8th bit requires a new byte
to be fetched -- which takes a different amount of time than shifting
the current byte by one bit).
Hardware designers are typically clueless as to how their decisions
impact the software. And, as the company may have invested a "couple
of kilobucks" on a design and layout, Manglement's shortsightedness
fails to realize the tens of kilobucks that their penny-pinching
will cost!
[I once had a spectacular FAIL in a bit of hardware that I designed.
It was a custom CPU ("chip"). The guy writing the code (and the
tools to write it!) assumed addresses were byte-oriented. But
the processor was truly a 16b machine and all of the addresses
were for 16b objects. So, all of the addresses generated by his tools
were exactly twice what they should have been ("Didn't you notice
how the LSb was ALWAYS '0'?"). A simple fix, but embarrassing, as we
each relied on assumptions that seemed natural to us, where the wiser
approach would have been to make those assumptions explicit.]
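A toy illustration of that byte- versus word-addressing mismatch (all names invented):

def tool_address(word_index: int) -> int:
    """What the byte-oriented toolchain emitted: 2 bytes per 16-bit object."""
    return word_index * 2

def cpu_address(word_index: int) -> int:
    """What the word-addressed 16-bit CPU actually expected."""
    return word_index

for i in range(4):
    a = tool_address(i)
    assert a % 2 == 0        # the telltale always-zero LSb
    print(f"object {i}: tool emitted {a}, CPU wanted {cpu_address(i)} (fix: a >> 1)")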
UK's NATS system is supposedly 6 sigma coding but its misbehaviour on Bank
Holiday Monday peak travel time was somewhat disastrous. It seems someone
managed to input the halt and catch fire instruction and the buffers ran out
before they were able to fix it. There will be a technical report out in due
course - my guess is that they have reduced overheads and no longer have some
of the key people who understand its internals. Malformed flight plan data
should not have been able to kill it stone dead - but apparently that is
exactly what happened!
Lunar landers, etc. Software is complex. Hardware is a walk in the
park. For anything but a trivial piece of code, you can't see all of the
interconnects/interdependencies.
https://www.ft.com/content/9fe22207-5867-4c4f-972b-620cdab10790
(might be paywalled)
If so, Google "UK air traffic control outage caused by unusual flight plan data"
Electronic design, and FPGA coding, are intended to be bug-free first
pass and often are, when done right.
But you are using design and simulation *software* that, though you fail to
acknowledge it, is actually pretty good. If you had to do it with pencil and
paper you would be there forever.
When was the last time your calculator PROGRAM produced a verifiable error?
And, desktop software is considerably less complex than software used
in products where interactions arising from temporal differences can
prove unpredictable.
We bought a new stove/oven some time ago. Specify which oven, heat source,
setpoint temperature and time. START.
Ah, but if you want to change the time remaining (because you peeked
at the item and realize it could use another few minutes) AND the
timer expires WHILE YOU ARE TRYING TO CHANGE IT, the user interface
locks up (!). Your recourse is to shut off the oven (abort the
process) and then restart it using the settings you just CANCELED.
It's easy to see how this can evade testing -- if the test engineer
didn\'t have a good understanding of how the code worked so he
could challenge it with specially crafted test cases.
When drafting system specifications, I (try to) imagine every
situation that can come up and describe how each should be handled.
So, the test scaffolding and actual tests can be designed to verify
that behavior in the resulting product.
[How do you test for the case where the user tries to change the
remaining time AS the timer is expiring? How do you test for the
case where the process on the remote host crashes AFTER it has
received a request for service but before it has acknowledged
that? Or BEFORE it receives it? Or WHILE acknowledging it?]
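One way to chase that first case is to script both events and sweep their relative timing across the expiry instant. A minimal sketch, where the Timer class is a made-up stand-in for the device under test, not real firmware:

import threading

class Timer:
    """Stand-in for the oven's countdown timer (invented for illustration)."""
    def __init__(self, remaining):
        self.remaining = remaining
        self.lock = threading.Lock()
    def expire(self):
        with self.lock:
            self.remaining = 0
    def adjust(self, delta):
        with self.lock:
            self.remaining = max(0, self.remaining + delta)

def race_once(offset_s):
    """Fire expiry at t = 10 ms and a user edit offset_s later (or earlier)."""
    t = Timer(1)
    expire = threading.Timer(0.010, t.expire)
    edit = threading.Timer(0.010 + offset_s, t.adjust, args=(5,))
    expire.start(); edit.start()
    expire.join(); edit.join()
    return t.remaining

# Sweep the edit from 1 ms before expiry to 1 ms after, in 0.1 ms steps;
# a real harness would also assert the UI still responds after each run.
for k in range(-10, 11):
    print(f"edit at expiry {k/10:+.1f} ms -> remaining = {race_once(k / 10_000)}")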
Hardware is easy to test: set voltage/current/freq/etc. and
observe result.
[We purchased a glass titty many years ago. At one point, we turned
it on, then off, then on again -- in relatively short order. I
guess the guy who designed the power supply hadn't considered this
possibility as the magic smoke rushed out of it! How hard can it
be to design a power supply???]
FPGAs are halfway software, so the coders tend to be less careful than
hardware designers. FPGA bug fixes are easy, so why bother to read
your own code?
That's ironic, when you think about it. The hardest bits, the physical
electronics, have the fewest bugs.
No, the physical electronics are the EASIEST bits. If designing
hardware were so difficult, then the solution to the software
"problem" would be to just have all the hardware designers switch
over to designing software! Problem solved INSTANTLY!
In practice, the problem would be worsened by a few orders of
magnitude as they suddenly found themselves living in an opaque world.
So do physical mechanical interlocks. I don't trust software or even electronic
interlocks to protect me compared to a damn great beam stop and a padlock on it
with the key in my pocket.
Note the miswired motor example, above. If the limit switches had
been hardwired, the problem still would have been present, as the
problem was in the hardware -- the wiring of the motor.