wildly improbable events

John Larkin
We recently designed an 8-channel complex waveform generator. Each
output stage is composed of a DAC, a lowpass filter, an output
amplifier, a test relay, and an output connector. It's this one:

http://www.highlandtechnology.com/DSS/V346DS.html

You can see the gold output connectors, and the relays are hiding just
behind the front panel.

The harmonic distortion seemed a bit high, in the -40 dBc range at 32
MHz and max level output. We were poking around with a spectrum
analyzer and happened to do a 0-3 GHz sweep and lo, a big line at
about 1 GHz. Something's oscillating!

Cut to the bottom line: the eight output amps, 1.5 GHz current-mode
opamps, are individually stable, but oscillate together. Futzing with
some amps may affect the outputs of others, several channels away. And
the ensemble oscillations have multiple stable modes, including the
occasional "off."

What's happening is that the front panel is electromagnetically
resonating in a fundamental violin-string mode (peak swing in the
middle) at about 1 GHz, and couples pretty well into all the output
stages; no doubt the relays are helping. A few well-placed capacitors
fix the problem. It took a while to figure this out.
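The half-wave arithmetic is easy to sketch. As a back-of-envelope check (my idealization, not from the post): treat the panel as a structure RF-grounded at both ends and resonating in its fundamental, violin-string style, so f = c/(2L). A panel-scale dimension of roughly 15 cm lands right at 1 GHz:

```python
# Hedged sketch: fundamental half-wave resonance of a structure grounded
# at both ends, peak swing in the middle. The 15 cm figure is an assumed
# panel-scale dimension for illustration, not a measured one.
c = 299_792_458.0  # speed of light, m/s

def halfwave_resonance_hz(length_m: float) -> float:
    """Fundamental frequency f = c / (2 * L) of a half-wave resonator."""
    return c / (2.0 * length_m)

print(halfwave_resonance_hz(0.15) / 1e9)  # ~1.0 (GHz)
```

Real panels are loaded by screws, traces, and connector shells, so the actual mode lands wherever it likes; the point is only that centimeter-scale metal resonates at low-GHz frequencies.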

So the observation is: when something goes wrong, there are a number
of likely causes. Here, they were channel-channel trace couplings, Vcc
coupling, amplifier loop stability, pad-plane parasitic capacitance,
plain rotten opamps, stuff like that. But a complex system has many
possible, convoluted causalities other than the obvious ones. Suppose
there are a billion possible interactions, not unreasonable for a
system with hundreds of themselves-complex parts, all close and
well-coupled and interacting at frequencies like this. Suppose most of
those failure modes [1] are wildly improbable, like one chance in a
billion of ever happening.

1e9 * 1e-9 = 1
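A quick sketch of that expected-value arithmetic (my numbers, just restating the billion-interactions assumption above): with N independent candidate failure modes each having probability p, the expected count is N*p, and the chance that at least one fires is close to 1 - e^(-1), better than even odds.

```python
# Sanity check on the "1e9 * 1e-9 = 1" arithmetic. N and p are the
# illustrative figures from the post, not measurements.
import math

N = 1_000_000_000  # plausible interaction count for a dense, fast board
p = 1e-9           # "wildly improbable" chance per interaction

expected = N * p                      # expected number of bizarre failures
p_at_least_one = 1 - (1 - p) ** N     # ~ 1 - exp(-N*p) for small p

print(expected)                  # ~1.0
print(round(p_at_least_one, 3))  # 0.632 -- more likely than not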

The final solution was wildly improbable. If suggested as a cause, one
would be tempted to say "no, that's just too bizarre." It was probable
that the actual problem *was* wildly improbable.

This sort of thing happens all the time in our business, in hardware
and software. Insanely unlikely insanely complex things happen,
because there are potentially so many of them. That makes it fun to
track them down.

John


[1] "failure mode" being a subjective thing. I think a 1 GHz
oscillation is a failure because I don't want one. For all I know, the
circuit may be proud of itself for pulling this off.
 
John Larkin wrote:
We recently designed an 8-channel complex waveform generator. [...]

It also involves things they don't teach in schools. Someone without
practical experience would spend way too much time looking for the
cause, or abandon the product.


 
On Sat, 23 Aug 2008 13:59:36 -0700, "Joel Koltner"
<zapwireDASHgroups@yahoo.com> wrote:

Your front panel is attached to grounded mounting holes on the PCB at roughly
the top and bottom then, eh?
The front panel is not strapped to the PCB ground plane, per VME
convention, but is screwed into the card cage, which effectively RF
grounds the ends. There's also a PCB trace that connects to the panel
mounting screws and to the SCSI connector shell, for ESD/EMI reasons,
and that's apparently participating in the resonance too. It is all,
well, complex.

Modern current-mode opamps can easily oscillate at a GHz or more. If
you don't have a scope or a spectrum analyzer that spots this, the
low-frequency symptoms can be bizarre.

John
 
On Sat, 23 Aug 2008 13:13:21 -0700, John Larkin wrote:

We recently designed an 8-channel complex waveform generator. [...]

I once was responsible for designing an audio augmentation system for the
Alaska court system. It amplified signals from several mikes near the
judge, clerk, attorneys, bailiff, etc, and fed the signals to individual
amps and equalizing filters. They were then combined to drive a
multiple speaker system so the rest of the court could hear. We carefully
designed around the obvious shielding and feedback problems, and came up
with a well tested prototype that everybody liked and signed off on.

When the first production unit was installed in one of the biggest
courtrooms, we got a panicky call because soap operas were coming in loud
and clear over the judge's comments, to his displeasure.

This surprised us because it had passed our EMI tests with flying colors.
When we sent techs to investigate, they found >>2V/m of TV signal at
the clerk's (master unit) location, directly in front of the curved bench.
It turned out the bench had hidden steel armor plate to protect the judge
from dissatisfied participants, and the TV station was about a mile away,
directly in front of the bench. The plate was acting as a cylindrical
reflector focused on the clerk, and boosting the EMI way beyond anything
we had designed or tested for.

We added enough internal shielding to brute-force our way through, but
always asked to see the intended unit location in the remaining
installations. The problem never occurred again to my knowledge.


The event wasn't as much complex as it was unexpected, but it's another
example of the puzzle-solving nature of R&D.
 
The most improbable, yet significant, event
for me was when I was working at Jackass Flats,
Nevada, on the nuclear rocket engine program.
This obviously was before any atmospheric test-ban
treaties. Anyhow, the liquid hydrogen ran out and the
reactor proceeded to really bake out!
Of course it ruined the reactor.

See for general background :
http://en.wikipedia.org/wiki/Nuclear_rocket
For specifics on the style I worked on right out of college :
http://en.wikipedia.org/wiki/Nuclear_thermal_rocket#Practical_testing
 
This is why I stay away from anything over 60 Hz and less than 500 kCMil
conductor size.

--
Paul Hovnanian mailto:paul@Hovnanian.com
------------------------------------------------------------------
Do not mold, findle or sputilate.
 
Paul Hovnanian P.E. wrote:

This is why I stay away from anything over 60 Hz and less than 500 kCMil
conductor size.

That's being a wimp! :)

http://webpages.charter.net/jamie_5
 
On Sat, 23 Aug 2008 22:14:04 -0700 (PDT), Immortalist
<reanimater_2000@yahoo.com> wrote:

On Aug 23, 1:13 pm, John Larkin
jjlar...@highNOTlandTHIStechnologyPART.com> wrote:
We recently designed an 8-channel complex waveform generator. [...]

The way you describe the problem made me think of "counter-intuitive
network logic" but the feedback stuff has probably more to do with
Sensitive Dependence on Initial Conditions...

Yes. We have eight coupled nonlinear oscillators with resonant widgets
galore. It has all sorts of modes. Of course it's supposed to be an
arbitrary waveform generator, and not just lay there, so whatever
terrible states are possible, it's going to find them.

At low frequencies, the em couplings are much weaker, so life is
simpler.

Electronics is fun.


Sensitive Dependence on Initial Conditions
http://www.schuelers.com/ChaosPsyche/part_1_14.htm
http://en.wikipedia.org/wiki/Butterfly_effect
http://everything2.com/index.pl?node_id=861246
http://www.perkel.com/nerd/butterflyeffect.htm

Network logic is counterintuitive. Say you need to lay a telephone
cable that will connect a bunch of cities; let's make that three for
illustration: Kansas City, San Diego, and Seattle. The total length of
the lines connecting those three cities is 3,000 miles. Common sense
says that if you add a fourth city to your telephone network, the
total length of your cable will have to increase. But that's not how
network logic works. By adding a fourth city as a hub (let's make that
Salt Lake City) and running the lines from each of the three cities
through Salt Lake City, we can decrease the total mileage of cable to
2,850, or 5 percent less than the original 3,000 miles. Therefore the
total unraveled length of a network can be shortened by adding nodes
to it! Yet there is a limit to this effect. Frank Hwang and Ding Zhu
Du, working at Bell Laboratories in 1990, proved that the best savings
a system might enjoy from introducing new points into a network would
peak at about 13 percent. More is different.
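That ~13 percent ceiling comes from the worst/best case: three points at the corners of an equilateral triangle, where an ideally placed hub (the Steiner/Fermat point) beats the best hubless network by exactly 1 - sqrt(3)/2. A short sketch (idealized geometry, my illustration, not the cities in the quoted example):

```python
# Steiner-point savings for an equilateral triangle of side `side`.
# Without a hub, the shortest network spanning all three corners uses
# two sides. With a hub at the Fermat point (the center), three spokes
# of length side/sqrt(3) do better.
import math

side = 1000.0  # miles between each pair of cities (illustrative)

mst_length = 2 * side                  # best network with no extra node
steiner_length = math.sqrt(3) * side   # 3 spokes * side/sqrt(3)

savings = 1 - steiner_length / mst_length
print(round(savings * 100, 1))  # 13.4 -- the ceiling mentioned above
```

For less symmetric layouts the savings are smaller, which is why 13 percent is a peak rather than a typical figure.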

On the other hand, in 1968 Dietrich Braess, a German operations
researcher, discovered that adding routes to an already congested
network will only slow it down. This effect, now called Braess's
Paradox, has turned up in many examples where adding capacity to a
crowded network reduces its overall throughput. In the late 1960s the
city planners of Stuttgart tried to ease downtown traffic by adding a
street. When they did, traffic got worse; then they blocked it off and
traffic improved. In 1992, New York City closed congested 42nd Street
on Earth Day, fearing the worst, but traffic actually improved that
day.
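The standard textbook numbers for Braess's Paradox make the mechanism concrete (my illustration, not from the quoted text): 4000 drivers travel from A to B over two routes, each with one congestion-sensitive leg (T/100 minutes for T cars) and one fixed 45-minute leg. Opening a free shortcut between the two congestion-sensitive legs makes every driver's selfish best route worse for everyone.

```python
# Classic Braess's-paradox arithmetic. Legs A->C and D->B cost T/100
# minutes when T cars use them; legs C->B and A->D cost a flat 45 min.
drivers = 4000

# Before the shortcut: traffic splits evenly over A->C->B and A->D->B.
per_route = drivers / 2
before = per_route / 100 + 45  # 20 + 45 = 65 minutes per driver

# After a zero-cost C->D link opens, A->C->D->B beats both old routes
# at any split, so at equilibrium everyone piles onto it.
after = drivers / 100 + 0 + drivers / 100  # 40 + 0 + 40 = 80 minutes

print(before, after)  # 65.0 80.0 -- the new road slowed everyone down
```

No driver can do better by switching back alone, which is exactly why the bad equilibrium is stable.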

Then again, in 1990, three scientists working on networks of brain
neurons reported that increasing the gain (the responsivity) of
individual neurons did not increase their individual signal detection
performance, but it did increase the performance of the whole network
to detect signals.

http://www.kk.org/outofcontrol/ch2-g.html

The prime variable Kauffman played with was the connectivity of the
network. In a sparsely connected network, each node would on average
only connect to one other node, or less. In a richly connected
network, each node would link to ten or a hundred or a thousand or a
million other nodes. In theory the limit to the number of connections
per node is simply the total number of nodes, minus one. A million-
headed network could have a million-minus-one connections at each
node; every node is connected to every other node. To continue our
rough analogy, every employee of GM could be directly linked to all
749,999 other employees of GM.

As Kauffman varied this connectivity parameter in his generic
networks, he discovered something that would not surprise the CEO of
GM. A system where few agents influenced other agents was not very
adaptable. The soup of connections was too thin to transmit an
innovation. The system would fail to evolve. As Kauffman increased the
average number of links between nodes, the system became more
resilient, "bouncing back" when perturbed. The system could maintain
stability while the environment changed. It would evolve. The
completely unexpected finding was that beyond a certain level of
linking density, continued connectivity would only decrease the
adaptability of the system as a whole.

Kauffman graphed this effect as a hill. The top of the hill was
optimal flexibility to change. One low side of the hill was a sparsely
connected system: flat-footed and stagnant. The other low side was an
overly connected system: a frozen grid-lock of a thousand mutual
pulls. So many conflicting influences came to bear on one node that
whole sections of the system sank into rigid paralysis. Kauffman
called this second extreme a "complexity catastrophe." Much to
everyone's surprise, you could have too much connectivity. In the long
run, an overly linked system was as debilitating as a mob of
uncoordinated loners.

Somewhere in the middle was a peak of just-right connectivity that
gave the network its maximal nimbleness. Kauffman found this
measurable "Goldilocks" point in his model networks. His colleagues
had trouble believing his maximal value at first because it seemed
counterintuitive at the time. The optimal connectivity for the
distilled systems Kauffman studied was very low, "somewhere in the
single digits." Large networks with thousands of members adapted best
with less than ten connections per member. Some nets peaked at less
than two connections on average per node! A massively parallel system
did not need to be heavily connected in order to adapt. Minimal
average connection, done widely, was enough.
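A toy version of the kind of model being described (my sketch, not Kauffman's actual NK setup) is easy to run: N Boolean nodes, each driven by K randomly chosen inputs through a random truth table, iterated until the state repeats. The connectivity K is the knob; the length of the attractor cycle the network falls into is one crude measure of ordered versus chaotic behavior.

```python
# Toy random Boolean network: N nodes, each updated from K random
# inputs via a random lookup table. Illustrative only; Kauffman's
# published results come from much more careful experiments.
import random

def cycle_length(n: int, k: int, seed: int, max_steps: int = 5000) -> int:
    """Iterate one random Boolean network until a state repeats;
    return the length of the attractor cycle it settled into."""
    rng = random.Random(seed)
    inputs = [[rng.randrange(n) for _ in range(k)] for _ in range(n)]
    tables = [[rng.randrange(2) for _ in range(2 ** k)] for _ in range(n)]
    state = tuple(rng.randrange(2) for _ in range(n))
    seen = {state: 0}
    for t in range(1, max_steps):
        # Each node reads its K inputs as a binary index into its table.
        idx = [sum(state[j] << b for b, j in enumerate(inputs[i]))
               for i in range(n)]
        state = tuple(tables[i][idx[i]] for i in range(n))
        if state in seen:
            return t - seen[state]
        seen[state] = t
    return -1  # horizon exceeded (cannot happen for n <= 12 here)

for k in (1, 2, 5):
    lengths = [cycle_length(12, k, seed) for seed in range(20)]
    print(k, sum(lengths) / len(lengths))
```

With 12 nodes the state space is only 4096, so a cycle is always found; varying K and re-running gives a feel for how sharply behavior depends on connectivity.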

Kauffman's second unexpected finding was that this low optimal value
didn't seem to fluctuate much, no matter how many members comprised a
specific network. In other words, as more members were added to the
network, it didn't pay (in terms of systemwide adaptability) to
increase the number of links to each node. To evolve most rapidly, add
members but don't increase average link rates. This result confirmed
what Craig Reynolds had found in his synthetic flocks: you could load
a flock up with more and more members without having to reconfigure
its structure.

Kauffman found that at the low end, with less than two connections per
agent or organism, the whole system wasn't nimble enough to keep up
with change. If the community of agents lacked sufficient internal
communication, it could not solve a problem as a group. More exactly,
they fell into isolated patches of cooperative feedback but didn't
interact with each other.

At the ideal number of connections, the ideal amount of information
flowed between agents, and the system as a whole found the optimal
solutions consistently. If their environment was changing rapidly,
this meant that the network remained stable, persisting as a whole over
time.

Kauffman's Law states that above a certain point, increasing the
richness of connections between agents freezes adaptation. Nothing
gets done because too many actions hinge on too many other
contradictory actions. In the landscape metaphor, ultra-connectance
produces ultra-ruggedness, making any move a likely fall off a peak of
adaptation into a valley of nonadaptation. Another way of putting it,
too many agents have a say in each other's work, and bureaucratic
rigor mortis sets in. Adaptability conks out into grid-lock. For a
contemporary culture primed to the virtues of connecting up, this low
ceiling of connectivity comes as unexpected news.

We postmodern communication addicts might want to pay attention to
this. In our networked society we are pumping up both the total number
of people connected (in 1993, the global network of networks was
expanding at the rate of 15 percent additional users per month!), and
the number of people and places to whom each member is connected.
Faxes, phones, direct junk mail, and large cross-referenced databases
in business and government in effect increase the number of links
between each person. Neither expansion particularly increases the
adaptability of our system (society) as a whole.

http://www.kk.org/outofcontrol/ch20-d.html



Coincidence: I'm about halfway through reading Kauffman's "At Home in
the Universe." The theme is that our universe is self-organizing and
specifically that chemistry is prone to autocatalytic reactions that
pretty much make life and DNA inevitable. I believe in evolution after
a fashion, but I've always been skeptical that DNA and its supporting
systems could spring up on their own out of inorganic muck. I'll keep
reading.

It's not just bad improbable events that keep popping up; good ones
can happen, too, but less often of course. Which is how circuits get
designed.

John
 
On Aug 23, 1:13 pm, John Larkin
<jjlar...@highNOTlandTHIStechnologyPART.com> wrote:
We recently designed an 8-channel complex waveform generator. Each
output stage is composed of a DAC, a lowpass filter, an output
amplifier, a test relay, and an output connector. It's this one:

http://www.highlandtechnology.com/DSS/V346DS.html

You can see the gold output connectors, and the relays are hiding just
behind the front panel.

The harmonic distortion seemed a bit high, in the -40 dBc range at 32
MHz and max level output. We were poking around with a spectrum
analyzer and happened to do a 0-3 GHz sweep and lo, a big line at
about 1 GHz. Something's oscillating!

Cut to the bottom line: the eight output amps, 1.5 GHz current-mode
opamps, are individually stable, but oscillate together. Futzing with
some amps may affect the outputs of others, several channels away. And
the ensemble oscillations have multiple stable modes, including the
occasional "off."

What's happening is that the front panel is electromagnetically
resonating in a fundamental violin-string mode (peak swing in the
middle) at about 1 GHz, and couples pretty well into all the output
stages; no doubt the relays are helping. A few well-placed capacitors
fix the problem. It took a while to figure this out.

So the observation is: when something goes wrong, there are a number
of likely causes. Here, they were channel-channel trace couplings, Vcc
coupling, amplifier loop stability, pad-plane parasitic capacitance,
plain rotten opamps, stuff like that. But a complex system has many
possible, convoluted causalities other than the obvious ones. Suppose
there are a billion possible interactions, not unreasonable for a
system with hundreds of themselves-complex parts, all close and
well-coupled and interacting at frequencies like this. Suppose most of
those failure modes [1] are wildly improbable, like one chance in a
billion of ever happening.

1e9 * 1e-9 = 1

The final solution was wildly improbable. If suggested as a cause, one
would be tempted to say "no, that's just too bizarre." It was probable
that the actual problem *was* wildly improbable.

This sort of thing happens all the time in our business, in hardware
and software. Insanely unlikely insanely complex things happen,
because there are potentially so many of them. That makes it fun to
track them down.

John
The way you describe the problem made me think of "counter-intuitive
network logic" but the feedback stuff has probably more to do with
Sensitive Dependence on Initial Conditions...

Sensitive Dependence on Initial Conditions
http://www.schuelers.com/ChaosPsyche/part_1_14.htm
http://en.wikipedia.org/wiki/Butterfly_effect
http://everything2.com/index.pl?node_id=861246
http://www.perkel.com/nerd/butterflyeffect.htm

Network logic is counterintuitive. Say you need to lay a telephone
cable that will connect a bunch of cities; let's make that three for
illustration: Kansas City, San Diego, and Seattle. The total length of
the lines connecting those three cities is 3,000 miles. Common sense
says that if you add a fourth city to your telephone network, the
total length of your cable will have to increase. But that's not how
network logic works. By adding a fourth city as a hub (let's make that
Salt Lake City) and running the lines from each of the three cities
through Salt Lake City, we can decrease the total mileage of cable to
2,850 or 5 percent less than the original 3,000 miles. Therefore the
total unraveled length of a network can be shortened by adding nodes
to it! Yet there is a limit to this effect. Frank Hwang and Ding Zhu
Du, working at Bell Laboratories in 1990, proved that the best savings
a system might enjoy from introducing new points into a network would
peak at about 13 percent. More is different.

On the other hand, in 1968 Dietrich Braess, a German operations
researcher, discovered that adding routes to an already congested
network will only slow it down. Now called Braess's Paradox,
scientists have found many examples of how adding capacity to a
crowded network reduces its overall production. In the late 1960s the
city planners of Stuttgart tried to ease downtown traffic by adding a
street. When they did, traffic got worse; then they blocked it off and
traffic improved. In 1992, New York City closed congested 42nd Street
on Earth Day, fearing the worst, but traffic actually improved that
day.

Then again, in 1990, three scientists working on networks of brain
neurons reported that increasing the gain-the responsivity-of
individual neurons did not increase their individual signal detection
performance, but it did increase the performance of the whole network
to detect signals.

http://www.kk.org/outofcontrol/ch2-g.html

The prime variable Kauffman played with was the connectivity of the
network. In a sparsely connected network, each node would on average
only connect to one other node, or less. In a richly connected
network, each node would link to ten or a hundred or a thousand or a
million other nodes. In theory the limit to the number of connections
per node is simply the total number of nodes, minus one. A million-
headed network could have a million-minus-one connections at each
node; every node is connected to every other node. To continue our
rough analogy, every employee of GM could be directly linked to all
749,999 other employees of GM.

As Kauffman varied this connectivity parameter in his generic
networks, he discovered something that would not surprise the CEO of
GM. A system where few agents influenced other agents was not very
adaptable. The soup of connections was too thin to transmit an
innovation. The system would fail to evolve. As Kauffman increased the
average number of links between nodes, the system became more
resilient, "bouncing back" when perturbed. The system could maintain
stability while the environment changed. It would evolve. The
completely unexpected finding was that beyond a certain level of
linking density, continued connectivity would only decrease the
adaptability of the system as a whole.

Kauffman graphed this effect as a hill. The top of the hill was
optimal flexibility to change. One low side of the hill was a sparsely
connected system: flat-footed and stagnant. The other low side was an
overly connected system: a frozen grid-lock of a thousand mutual
pulls. So many conflicting influences came to bear on one node that
whole sections of the system sank into rigid paralysis. Kauffman
called this second extreme a "complexity catastrophe." Much to
everyone's surprise, you could have too much connectivity. In the long
run, an overly linked system was as debilitating as a mob of
uncoordinated loners.

Somewhere in the middle was a peak of just-right connectivity that
gave the network its maximal nimbleness. Kauffman found this
measurable "Goldilocks'" point in his model networks. His colleagues
had trouble believing his maximal value at first because it seemed
counterintuitive at the time. The optimal connectivity for the
distilled systems Kauffman studied was very low, "somewhere in the
single digits." Large networks with thousands of members adapted best
with less than ten connections per member. Some nets peaked at less
than two connections on average per node! A massively parallel system
did not need to be heavily connected in order to adapt. Minimal
average connection, done widely, was enough.

Kauffman's second unexpected finding was that this low optimal value
didn't seem to fluctuate much, no matter how many members comprised a
specific network. In other words, as more members were added to the
network, it didn't pay (in terms of systemwide adaptability) to
increase the number of links to each node. To evolve most rapidly, add
members but don't increase average link rates. This result confirmed
what Craig Reynolds had found in his synthetic flocks: you could load
a flock up with more and more members without having to reconfigure
its structure.

Kauffman found that at the low end, with less than two connections per
agent or organism, the whole system wasn't nimble enough to keep up
with change. If the community of agents lacked sufficient internal
communication, it could not solve a problem as a group. More exactly,
they fell into isolated patches of cooperative feedback but didn't
interact with each other.

At the ideal number of connections, the ideal amount of information
flowed between agents, and the system as a whole found the optimal
solutions consistently. If their environment was changing rapidly,
this meant that the network remained stable-persisting as a whole over
time.

Kauffman's Law states that above a certain point, increasing the
richness of connections between agents freezes adaptation. Nothing
gets done because too many actions hinge on too many other
contradictory actions. In the landscape metaphor, ultra-connectance
produces ultra-ruggedness, making any move a likely fall off a peak of
adaptation into a valley of nonadaptation. Another way of putting it,
too many agents have a say in each other's work, and bureaucratic
rigor mortis sets in. Adaptability conks out into grid-lock. For a
contemporary culture primed to the virtues of connecting up, this low
ceiling of connectivity comes as unexpected news.

We postmodern communication addicts might want to pay attention to
this. In our networked society we are pumping up both the total number
of people connected (in 1993, the global network of networks was
expanding at the rate of 15 percent additional users per month!), and
the number of people and places to whom each member is connected.
Faxes, phones, direct junk mail, and large cross-referenced data bases
in business and government in effect increase the number of links
between each person. Neither expansion particularly increases the
adaptability of our system (society) as a whole.

http://www.kk.org/outofcontrol/ch20-d.html

[1] "failure mode" being a subjective thing. I think a 1 GHz
oscillation is a failure because I don't want one. For all I know, the
circuit may be proud of itself for pulling this off.
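
Kauffman's connectivity effect can be sketched with a toy random
Boolean network, a bare-bones cousin of his NK models. The node
counts, seeds, and random update functions below are illustrative
choices, not figures from his actual experiments:

```python
import random

def make_network(n, k, rng):
    # Each of n nodes reads k randomly chosen nodes through a random
    # Boolean function, stored as a truth table of 2**k entries.
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    # Synchronously update every node from its inputs.
    nxt = []
    for ins, table in zip(inputs, tables):
        idx = 0
        for j in ins:
            idx = (idx << 1) | state[j]
        nxt.append(table[idx])
    return tuple(nxt)

def cycle_length(n, k, seed=0, max_steps=2000):
    # Run from a random start until a state repeats; return the
    # attractor's cycle length, or None if none was found in time.
    rng = random.Random(seed)
    inputs, tables = make_network(n, k, rng)
    state = tuple(rng.randint(0, 1) for _ in range(n))
    seen = {}
    for t in range(max_steps):
        if state in seen:
            return t - seen[state]
        seen[state] = t
        state = step(state, inputs, tables)
    return None

# K near 2 tends toward short, orderly attractors; large K toward
# long, chaotic ones -- Kauffman's order/chaos boundary in miniature.
print(cycle_length(16, 2), cycle_length(16, 8))
```

Running this for many seeds shows the tendency, not a law; individual
random networks vary widely.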
 
John Larkin wrote:

snip

What's happening is that the front panel is electromagnetically
resonating in a fundamental violin-string mode (peak swing in the
middle) at about 1 GHz, and couples pretty well into all the output
stages; no doubt the relays are helping. A few well-placed capacitors
fix the problem. It took a while to figure this out.

snip


OMG!
Did the front panel also act like a semi-focused antenna?
What was the signal level at 100 feet? At 1 mile?
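
A rough sanity check on the mechanism (the panel dimension here is an
assumed figure, not a measurement from the actual unit):

```python
# Half-wave ("violin-string") resonator length at the observed frequency.
c = 3.0e8               # speed of light, m/s
f = 1.0e9               # observed oscillation, Hz
half_wave = c / (2 * f)
print(half_wave)        # 0.15 m -- about 15 cm, the scale of a VME front panel
```

A conductor of roughly that length, not strapped to ground, makes a
plausible half-wave resonator; and anything that resonates also
radiates to some degree, which is exactly the question above.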
 
On Sun, 24 Aug 2008 12:19:11 GMT, NoSpam@daqarta.com (Bob Masta)
wrote:

On Sat, 23 Aug 2008 13:13:21 -0700, John Larkin
<jjlarkin@highNOTlandTHIStechnologyPART.com> wrote:

snip

snip

1e9 * 1e-9 = 1

The final solution was wildly improbable. If suggested as a cause, one
would be tempted to say "no, that's just too bizarre." It was probable
that the actual problem *was* wildly improbable.

This sort of thing happens all the time in our business, in hardware
and software. Insanely unlikely insanely complex things happen,
because there are potentially so many of them. That makes it fun to
track them down.

Ahh, but "improbable" implies you understand the total system well enough to
assign probabilities!

True. What I call "improbable" is an output that was caused by a
longish chain of apparently unrelated or unexpected causalities. In an
electronic system, we're not talking actual statistical probability
chains, just very obscure and unintended paths. If we built 5 more of
these boards, they'd likely all oscillate.

Of course, an oscillation is a chain of causality that loops back on
itself. A causes B causes C causes A, 120 degree phase shift at each
step. It has no origin and no end.
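
The three-stage loop can be checked with phasors; the per-stage gain
magnitude below is an illustrative assumption:

```python
import cmath, math

g = 1.1                                         # assumed per-stage gain
stage = g * cmath.exp(1j * math.radians(120))   # 120 degrees per stage
loop = stage ** 3                               # A -> B -> C -> A round trip
loop_gain = abs(loop)
loop_phase = math.degrees(cmath.phase(loop)) % 360.0
print(loop_gain, loop_phase)
# Total phase wraps to 0 degrees with |loop| > 1: the Barkhausen
# condition for sustained oscillation around the ring.
```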

John
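
The 1e9 * 1e-9 arithmetic quoted upthread can be tightened a little:
with that many independent long shots, at least one of them firing is
better than even money. The counts are the thread's illustrative
numbers, not real failure statistics:

```python
import math

N, p = 10 ** 9, 1e-9              # many failure modes, each wildly improbable
expected = N * p                  # expected number that actually occur
at_least_one = 1 - (1 - p) ** N   # probability that any occurs at all
print(expected, at_least_one)     # 1.0 and about 1 - 1/e, roughly 0.63
```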
 
On Sun, 24 Aug 2008 09:52:59 -0700 (PDT), Bret Cahill
<BretCahill@aol.com> wrote:

Your front panel is attached to grounded mounting holes on the PCB at roughly
the top and bottom then, eh?

The front panel is not strapped to the PCB ground plane, per VME
convention,

The intersection of the set of functional people and the set of people
who are impressed with your work has a name:

It's called the "null set."


Bret Cahill
They are in the electronics groups. You wouldn't understand.

John
 
On Sun, 24 Aug 2008 09:52:59 -0700 (PDT), Bret Cahill
<BretCahill@aol.com> wrote:

snip

The intersection of the set of functional people and the set of people
who are impressed with your work has a name:

It's called the "null set."
---
As does the intersection of the set of dysfunctional people and the
set of jealous, malevolent wretches.

It's called Bret Cahill.

JF
 
John Larkin wrote:
snip

Bret doesn't understand why the sun comes up every day, let alone
anything to do with electronics. That's why he is, and will continue
to be, a null contribution to this world and to solving its problems.


--
http://improve-usenet.org/index.html

aioe.org, Goggle Groups, and Web TV users must request to be white
listed, or I will not see your messages.

If you have broadband, your ISP may have a NNTP news server included in
your account: http://www.usenettools.net/ISP.htm


There are two kinds of people on this earth:
The crazy, and the insane.
The first sign of insanity is denying that you're crazy.
 
On 8/24/08 11:31 AM, in article
TfKdnSa8I-qdNCzVnZ2dnUVZ_ofinZ2d@earthlink.com, "Michael A. Terrell"
<mike.terrell@earthlink.net> wrote:

snip

Bret doesn't understand why the sun comes up, every day, let alone
anything to do with electronics. That's why he is, and will continue to
be a null contribution to this world, and solving its problems.
I agree with your determination of Cahill being either a dim bulb or one
with no filament at all, but..... The sun doesn't come up, the earth sets.
 
On Aug 23, 3:05 pm, Bill Ward <bw...@REMOVETHISix.netcom.com> wrote:
On Sat, 23 Aug 2008 13:13:21 -0700, John Larkin wrote:

snip

I once was responsible for designing an audio augmentation system for the
Alaska court system. It amplified signals from several mikes near the
judge, clerk, attorneys, bailiff, etc, and fed the signals to individual
amps and equalizing filters. They were then combined to drive a
multiple speaker system so the rest of the court could hear. We carefully
designed around the obvious shielding and feedback problems, and came up
with a well tested prototype that everybody liked and signed off on.

When the first production unit was installed in one of the biggest
courtrooms, we got a panicky call because soap operas were coming in
loud and clear over the judge's comments, to his displeasure.

This surprised us because it had passed our EMI tests with flying colors.
When we sent techs to investigate, they found >>2 V/m of TV signal at
the clerk's (master unit) location, directly in front of the curved bench.
It turned out the bench had hidden steel armor plate to protect the judge
from dissatisfied participants, and the TV station was about a mile away,
directly in front of the bench. The plate was acting as a cylindrical
reflector focused on the clerk, and boosting the EMI way beyond anything
we had designed or tested for.

We added enough internal shielding to brute-force our way through, but
always asked to see the intended unit location in the remaining
installations. The problem never occurred again to my knowledge.

The event wasn't so much complex as unexpected, but it's another
example of the puzzle-solving nature of R&D.
I built an RC servo that worked well, except it picked up the local
AM radio station. I disconnected the control voltage input from the
Rx circuitry and it was still there, even more so! The servo arm was
under control of the AM station. I doubt I could design that on
purpose. Using trial and error I clipped in a cap that fixed it, but
I was a bit haunted because I didn't really understand the anomaly.
Ken
 
Don Bowey wrote:
Michael A. Terrell wrote:

snip


I agree with your determination of Cahill being either a dim bulb or one
with no filament at all, but..... The sun doesn't come up, the earth sets.

Geeze! Now he'll start 50 more of his moronic threads telling us it's
an optical illusion, and that the earth really orbits the sun at a 90
degree angle, and attempt to prove it with his half-assed knowledge of
life, the universe and everything!

Actually, people have finally finished charging their home power
systems with all the sunlight, and it starts to get brighter as more and
more systems shut down. ;-)


 
On Sun, 24 Aug 2008 12:57:45 -0700 (PDT), Bret Cahill
<BretCahill@aol.com> wrote:

This sort of thing happens all the time in our business, in hardware
and software.

I can imagine!

Insanely unlikely insanely complex things happen,

And they seem especially complex to simpletons who cannot think
rationally.

because there are potentially so many of them. That makes it fun to
track them down.

Fun enough to type LOL! a few times?


Bret Cahill
Hey, Bret, show us some of your work.

John
 
On Sun, 24 Aug 2008 12:57:45 -0700 (PDT), Bret Cahill
<BretCahill@aol.com> wrote:

snip

Fun enough to type LOL! a few times?
---
Oh, my, you've really let this thing grow into an obsession, haven't
you?

JF
 
