Driver to drive?

<tabbypurr@gmail.com> wrote in message
news:fa1510bf-6028-4255-b703-469a7aa432d5@googlegroups.com...
I'm looking at putting together something similar to a Class AB/B audio
amp, but it will be driven outside its linear range into saturation a lot
of the time.

So you want low distortion, but it's a precondition that it has high
distortion?

You can't have it both ways. This isn't even about the having and eating of
cake, but the simultaneous having and not having of it!


That's all well & good but for one thing: wrapping nfb round saturating
outputs doesn't work too well as it takes time for output devices to
unsaturate, and the nfb effectively overreacts, adding distortion.

Well... mine doesn't :)

When I have the option of preventing it, at least.

As JL mentioned, a lot of opamps are well behaved.

I'd like more opamps to expose their internal compensation node so their
outputs can be clamped externally, but alas, there are many things that I
like that just ain't gonna happen (for worse and for better, admittedly!).

A classic example: using two opamps to control a linear power supply in
voltage or current regulation mode, with the outputs diode-OR'd. Whenever
one is active, the other is just moping around hugging the rail. So when it
comes time for it to kick into gear, it has to sit up and climb all the way
off the rail to the operating point. Integrator windup. In a power supply,
it looks something like diode recovery, but terrifically slow (10s of
microseconds, milliseconds even, if limited by compensation capacitors).
Semantically-simple solution: clamp the inactive opamp's gain node so it
doesn't wander off into the weeds.
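
For illustration, a quick behavioral sketch of that windup (made-up numbers,
nothing from any real supply design):

# Behavioral sketch only -- illustrative numbers, not a real supply.
# An integrator drives an output stage that saturates at the rail; while
# the demand is unreachable the integrator keeps winding up unless clamped.

def run(clamp_integrator):
    dt, ki, v_rail = 1e-6, 2e4, 15.0
    integ = out = 0.0
    trace = []
    for n in range(4000):
        t = n * dt
        setpoint = 5.0 if (t < 1e-3 or t > 2e-3) else 20.0  # 20 V is unreachable
        integ += ki * (setpoint - out) * dt
        if clamp_integrator:
            integ = max(min(integ, v_rail), -v_rail)        # clamp the gain node
        out = max(min(integ, v_rail), -v_rail)              # output stage saturates
        trace.append((t, out))
    return trace

for clamped in (False, True):
    tr = run(clamped)
    # when the output first settles back near 5 V after the overload ends at 2 ms
    t_rec = next(t for t, v in tr if t > 2e-3 and abs(v - 5.0) < 0.1)
    print("clamped" if clamped else "unclamped",
          "recovers %.0f us after the overload ends" % ((t_rec - 2e-3) * 1e6))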

Another example, the precision analog rectifier. When reverse biased, the
opamp rails; this can be partly ameliorated by strapping a diode from output
to -in, so the amp follows its input, and only needs to swing two diode
drops to return to forward bias.
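
Back-of-envelope, with an assumed slew rate and rail (not any particular op
amp), showing why that helps:

# Rough numbers only -- slew rate and rail voltage are assumed, not from a
# datasheet. With the catch diode the op amp only has to traverse about two
# diode drops when the input changes sign, instead of slewing up from the rail.

SLEW = 0.5e6       # 0.5 V/us, expressed in V/s (assumed garden-variety op amp)
V_RAIL = 14.0      # swing needed to get off the rail (assumed)
V_DIODE = 0.6

print("recovery from the rail:        %5.1f us" % (V_RAIL / SLEW * 1e6))
print("recovery across 2 diode drops: %5.1f us" % (2 * V_DIODE / SLEW * 1e6))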


Keeping distortion low matters here. What tips would you recommend to keep
unwanted distortion minimised?

There are some far-out techniques to compress or expand, and predistort
signals, with the result that the final distortion cancels out and the
output is linear. But this is mostly done with RF amps, where the added
complexity is justified by the cost (capital -- RF transistors are pricey --
and operating electricity).
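
A toy illustration of the predistortion idea, with a made-up tanh() model
standing in for the PA:

# Toy model only: pretend the amp compresses like tanh(), apply the exact
# inverse ahead of it, and the cascade comes out linear (for |y| < 1).
import math

def amp(x):               # soft-compressing "amplifier", saturates at +/-1
    return math.tanh(x)

def predistort(y):        # inverse of the model
    return math.atanh(y)

for y in (0.1, 0.5, 0.9):
    print(y, round(amp(predistort(y)), 6))   # equals y: the chain is linearised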

At low frequencies, just do what musicians do: use a bigger fucking amp.

;-)

Tim

--
Seven Transistor Labs, LLC
Electrical Engineering Consultation and Contract Design
Website: https://www.seventransistorlabs.com/
 
On 21/02/18 11:08, TTman wrote:
On 21/02/2018 00:57, mpm wrote:
On Tuesday, February 20, 2018 at 5:44:32 PM UTC-5, TTman wrote:

Fit a nice big pair of 100W Cibie rally lights. It will be like driving
in daylight :)

...and it will also be like driving in a ditch after I run you off the
road! :)

I consider it extremely rude, unsafe (and possibly illegal) to use
high-beams, especially super-bright aftermarket bulbs, that blind
other drivers. The use of these bulbs seems to be gaining popularity
with the younger crowd (which arguably, doesn't really know how to
drive in the first place.) Unless you define driving as
Fast-&-Furious video game.

Of course, the opposite is also very annoying and unsafe:
Drivers who either don't turn their lights on at all, or run around
with no brake lights. (Don't get me started on that -- ever since
LED's.. more and more folks with brake lights out. Or so it seems!)

Certainly not illegal in the UK, and anyway the OP said he drives on
rural roads. They also have to operate on main beam only and switch off
on dipped beam. No problem for other drivers.....

It's a long time since I read the highway code, but yes - powerful
headlights are illegal to use in the UK. It is illegal even to use the
car's fog lights unless there is thick fog. Additional lights that are
bright enough to cause annoyance, distraction, or temporary blinding to
other drivers are always illegal on public roads.

The rules are quite simple. If you distract other drivers, it is
dangerous. If it is dangerous, it is illegal.
 
Tim Williams wrote:

--------------------

<tabbypurr@gmail.com>

------------------

I'm looking at putting together something similar to a Class AB/B audio
amp, but it will be driven outside its linear range into saturation a lot
of the time.


So you want low distortion, but it's a precondition that it has high
distortion?

You can't have it both ways. This isn't even about the having and eating of
cake, but the simultaneous having and not having of it!

** Our "tabbypurr" wants us to emulate the trick Schrodinger's cat was
famous for.




..... Phil
 
On Wednesday, 21 February 2018 03:16:17 UTC, bitrex wrote:
On 02/20/2018 10:01 PM, tabbypurr wrote:

I'm looking at putting together something similar to a Class AB/B audio amp, but it will be driven outside its linear range into saturation a lot of the time. That's all well & good but for one thing: wrapping nfb round saturating outputs doesn't work too well as it takes time for output devices to unsaturate, and the nfb effectively overreacts, adding distortion. Keeping distortion low matters here. What tips would you recommend to keep unwanted distortion minimised?


thanks, NT


Don't see why it would be that much different than crossover distortion;
it takes time for the output to pass thru the dead band as one device
stops conducting and the other device begins, negative feedback reduces
crossover distortion just fine at low frequency when the open loop gain
of the rest of the amp is high.

How are you driving the complementary emitter followers of a class AB/B
amp into saturation, anyway? Wouldn't the base drive voltages have to
swing above and below the rails?

It was very late, and I think saturation is maybe the wrong term. The amplifier output is railed frequently; I need it to come off the rails cleanly. Producing a circuit is the next step.


NT
 
On Wednesday, 21 February 2018 03:16:17 UTC, bitrex wrote:
On 02/20/2018 10:01 PM, tabbypurr wrote:

I'm looking at putting together something similar to a Class AB/B audio amp, but it will be driven outside its linear range into saturation a lot of the time. That's all well & good but for one thing: wrapping nfb round saturating outputs doesn't work too well as it takes time for output devices to unsaturate, and the nfb effectively overreacts, adding distortion. Keeping distortion low matters here. What tips would you recommend to keep unwanted distortion minimised?


thanks, NT


Don't see why it would be that much different than crossover distortion;
it takes time for the output to pass thru the dead band as one device
stops conducting and the other device begins, negative feedback reduces
crossover distortion just fine at low frequency when the open loop gain
of the rest of the amp is high.

How are you driving the complementary emitter followers of a class AB/B
amp into saturation, anyway? Wouldn't the base drive voltages have to
swing above and below the rails?

If what you're saying is correct, why would class AB amps exist? AIUI it's because class B's glitches aren't fully solved by nfb.


NT
 
On Wednesday, 21 February 2018 08:25:26 UTC, jurb...@gmail.com wrote:

"** So NT wants output device saturation but don't want much waveform distortion ? While keeping all nature of his project secret? "

Well it does pique the curiosity a bit, and provoke invective from you.

Translation : Things are the same as always.

Just what might be the mystery application ? Is there a prize for figuring it out maybe ? :)

Sorry, can't reveal what the end use is on this one. I know it'd be easier if I could.


NT
 
On Wednesday, 21 February 2018 08:45:20 UTC, Tauno Voipio wrote:
On 21.2.18 05:01, tabbypurr wrote:

I'm looking at putting together something similar to a Class AB/B audio amp, but it will be driven outside its linear range into saturation a lot of the time. That's all well & good but for one thing: wrapping nfb round saturating outputs doesn't work too well as it takes time for output devices to unsaturate, and the nfb effectively overreacts, adding distortion. Keeping distortion low matters here. What tips would you recommend to keep unwanted distortion minimised?


thanks, NT


If it's going to be a guitar amp, forget distortion, it's
a part of the steel-wire music ...

It seems that you have re-invented transient intermodulation
distortion (TIM). Google for articles by Prof. Matti Otala about
TIM.

thanks, got it


NT
 
On Wednesday, 21 February 2018 10:01:12 UTC, Phil Allison wrote:
jurb...@gmail.com wrote:

-----------------------------

"** So NT wants output device saturation but don't want much
waveform distortion ? While keeping all nature of his project secret? "



Just what might be the mystery application ? Is there a prize for
figuring it out maybe ? :)



** If you guessed it - he would deny it.

A thought bubble looking for a clever idea for him to plagiarise.




.... Phil

Phil does go into stupid mode quickly.
 
On Wednesday, 21 February 2018 10:09:35 UTC, Tim Williams wrote:
tabbypurr> wrote in message
news:fa1510bf-6028-4255-b703-469a7aa432d5@googlegroups.com...

I'm looking at putting together something similar to a Class AB/B audio
amp, but it will be driven outside its linear range into saturation a lot
of the time.


So you want low distortion, but it's a precondition that it has high
distortion?

Heh. No, output railing is not distortion, it's how it's meant to be. I just need it to come off the rails back into linear mode quickly & cleanly.


You can't have it both ways. This isn't even about the having and eating of
cake, but the simultaneous having and not having of it!


That's all well & good but for one thing: wrapping nfb round saturating
outputs doesn't work too well as it takes time for output devices to
unsaturate, and the nfb effectively overreacts, adding distortion.


Well... mine doesn't :)

When I have the option of preventing it, at least.

As JL mentioned, a lot of opamps are well behaved.

I'd like more opamps to expose their internal compensation node so their
outputs can be clamped externally, but alas, there are many things that I
like that just ain't gonna happen (for worse and for better, admittedly!).

A classic example: using two opamps to control a linear power supply in
voltage or current regulation mode, with the outputs diode-OR'd. Whenever
one is active, the other is just moping around hugging the rail. So when it
comes time for it to kick into gear, it has to sit up and climb all the way
off the rail to the operating point. Integrator windup. In a power supply,
it looks something like diode recovery, but terrifically slow (10s of
microseconds, milliseconds even, if limited by compensation capacitors).
Semantically-simple solution: clamp the inactive opamp's gain node so it
doesn't wander off into the weeds.

Another example, the precision analog rectifier. When reverse biased, the
opamp rails; this can be partly ameliorated by strapping a diode from output
to -in, so the amp follows its input, and only needs to swing two diode
drops to return to forward bias.


Keeping distortion low matters here. What tips would you recommend to keep
unwanted distortion minimised?


There are some far-out techniques to compress or expand, and predistort
signals, with the result that the final distortion cancels out and the
output is linear. But this is mostly done with RF amps, where the added
complexity is justified by the cost (capital -- RF transistors are pricey --
and operating electricity).

At low frequencies, just do what musicians do: use a bigger fucking amp.

;-)

Tim

A bigger amp won't provide the basic functionality.


NT
 
On Wednesday, February 21, 2018 at 10:17:02 PM UTC+11, tabb...@gmail.com wrote:
On Wednesday, 21 February 2018 10:09:35 UTC, Tim Williams wrote:
tabbypurr> wrote in message
news:fa1510bf-6028-4255-b703-469a7aa432d5@googlegroups.com...

I'm looking at putting together something similar to a Class AB/B audio
amp, but it will be driven outside its linear range into saturation a lot
of the time.


So you want low distortion, but it's a precondition that it has high
distortion?

Heh. No, output railing is not distortion, it's how it's meant to be. I just need it to come off the rails back into linear mode quickly & cleanly.


You can't have it both ways. This isn't even about the having and eating of
cake, but the simultaneous having and not having of it!


That's all well & good but for one thing: wrapping nfb round saturating
outputs doesn't work too well as it takes time for output devices to
unsaturate, and the nfb effectively overreacts, adding distortion.


Well... mine doesn't :)

When I have the option of preventing it, at least.

As JL mentioned, a lot of opamps are well behaved.

I'd like more opamps to expose their internal compensation node so their
outputs can be clamped externally, but alas, there are many things that I
like that just ain't gonna happen (for worse and for better, admittedly!).

A classic example: using two opamps to control a linear power supply in
voltage or current regulation mode, with the outputs diode-OR'd. Whenever
one is active, the other is just moping around hugging the rail. So when it
comes time for it to kick into gear, it has to sit up and climb all the way
off the rail to the operating point. Integrator windup. In a power supply,
it looks something like diode recovery, but terrifically slow (10s of
microseconds, milliseconds even, if limited by compensation capacitors).
Semantically-simple solution: clamp the inactive opamp's gain node so it
doesn't wander off into the weeds.

Another example, the precision analog rectifier. When reverse biased, the
opamp rails; this can be partly ameliorated by strapping a diode from output
to -in, so the amp follows its input, and only needs to swing two diode
drops to return to forward bias.


Keeping distortion low matters here. What tips would you recommend to keep
unwanted distortion minimised?


There are some far-out techniques to compress or expand, and predistort
signals, with the result that the final distortion cancels out and the
output is linear. But this is mostly done with RF amps, where the added
complexity is justified by the cost (capital -- RF transistors are pricey --
and operating electricity).

At low frequencies, just do what musicians do: use a bigger fucking amp.

;-)

Tim

A bigger amp won't provide the basic functionality.


NT

Disconnect the input when the output gets close to the rail, sample and hold that input level, and re-connect the input when the input gets past that voltage going the other way.

You'll need two separate systems - one for each rail - but only one switch and sample-and-hold, and you might be able to get away with only one comparator looking at the input and the latched input value (though two might make life easier).

The switches that disconnect the input should not have too much capacitive feedthrough when they switch, but decent analog switches specify that on the data sheet.
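
Roughly this logic, per sample, for the positive rail (names and thresholds
invented for illustration; in hardware it's an analog switch, a hold cap and
a comparator):

# Illustrative sketch of the scheme above, positive rail only.
# Thresholds and names are invented; this is not a circuit, just the logic.

V_NEAR_RAIL = 14.5        # disconnect when the output gets this close to +rail

held = None               # sample-and-hold value while the input is disconnected

def drive(v_in, v_out):
    """Return the signal actually fed to the amplifier for this sample."""
    global held
    if held is None:
        if v_out >= V_NEAR_RAIL:
            held = v_in           # output has hit the rail: hold the input level
            return held
        return v_in               # normal linear operation: pass the input through
    if v_in < held:               # input has come back past the held value,
        held = None               # heading the other way: reconnect the input
        return v_in
    return held                   # still railed: keep feeding the held value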

--
Bill Sloman, Sydney
 
On 02/20/2018 10:48 PM, John Larkin wrote:
On Tue, 20 Feb 2018 21:40:31 -0500, bitrex
bitrex@de.lete.earthlink.net> wrote:

On 02/20/2018 08:43 PM, Paul Hovnanian P.E. wrote:
Jim Thompson wrote:

https://www.nbcbayarea.com/news/local/Diseased-Streets-472430013.html

If they are smart, they won't wake up the hobos before running the steam
cleaner down the sidewalk.


The news story claims "Over 100!" discarded IV needles found in the
areas they surveyed but you can easily click through all the "red"
streets in the map and see that the tally tops out at barely 50. And
there are big clumps of 7 or 10 needles in some locations.

Pay one or two bums a couple bucks to dump their spent needles in a
couple locations and hey presto you've got yourself a story.

And you have an army of well-paid attorneys, consultants, NGOs,
providers, and city staffers actually soaking up the funding.

We never go downtown. That's for bankers and tourists. Our
neighborhood is green, quiet, clean, and safe.

https://www.dropbox.com/s/kpl55nnziaubq9z/Ohlone_Way_3.jpg?raw=1

I've never seen a needle on Ohlone Way. You might get stuck picking
blackberries.

It's a histrionic "story" crafted to play well with histrionics of all
political persuasions, left or right few Americans can seem to resist a
good "OMG THINK OF THE CHILDREN!" tale.

Meanwhile, I'd estimate the number of children who could be confirmed
to have caught AIDS or any other infectious disease from a discarded IV
needle in SF, or any other large city for that matter, is likely pretty
close to 0.

Also they seem to have inflated the number of needles found from the
data points on the map to the TV story by a factor of 2. Maybe some
would shrug and say so what it's bad either way but IMO the media
deserves to get the side-eye when they do things like that, regardless
of what the story is about.
 
On 02/21/2018 05:59 AM, tabbypurr@gmail.com wrote:
On Wednesday, 21 February 2018 03:16:17 UTC, bitrex wrote:
On 02/20/2018 10:01 PM, tabbypurr wrote:

I'm looking at putting together something similar to a Class AB/B audio amp, but it will be driven outside its linear range into saturation a lot of the time. That's all well & good but for one thing: wrapping nfb round saturating outputs doesn't work too well as it takes time for output devices to unsaturate, and the nfb effectively overreacts, adding distortion. Keeping distortion low matters here. What tips would you recommend to keep unwanted distortion minimised?


thanks, NT


Don't see why it would be that much different than crossover distortion;
it takes time for the output to pass thru the dead band as one device
stops conducting and the other device begins, negative feedback reduces
crossover distortion just fine at low frequency when the open loop gain
of the rest of the amp is high.

How are you driving the complementary emitter followers of a class AB/B
amp into saturation, anyway? Wouldn't the base drive voltages have to
swing above and below the rails?

It was very late, and I think saturation is maybe the wrong term. The amplifier output is railed frequently; I need it to come off the rails cleanly. Producing a circuit is the next step.


NT

Gotcha, but traditional class AB/B audio power amps, in the way I think
of them with complementary emitter followers, can't get very close to the
rails, particularly the positive supply rail where there's usually a
current source of some type feeding the voltage amplification stage,
which needs some headroom to do its job.

There are some power amps designed to run directly off, say, a 12 volt
supply in a car. They sometimes use a bootstrap cap from the output back
to the current source of the VAS to boost its supply voltage above the
rail, so the base drive for the NPN output follower can swing higher than
usual to get more power output.
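
Rough numbers, all assumed rather than taken from any particular car amp:

# Made-up numbers, just to show the idea: the bootstrap cap holds a roughly
# constant voltage, so the VAS current-source supply node rides above the
# 12 V rail as the output swings up, giving the NPN base drive extra headroom.

V_RAIL = 12.0
V_CAP = 6.0          # voltage the bootstrap cap sits at around idle (assumed)
V_BE = 0.7

for v_out in (0.0, 6.0, 10.0):
    v_boot = v_out + V_CAP                 # bootstrapped supply node
    print("Vout=%4.1f V  boot node=%4.1f V  max NPN base drive=%4.1f V (rail %4.1f V)"
          % (v_out, v_boot, v_boot - V_BE, V_RAIL))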
 
On Tuesday, February 20, 2018 at 8:38:48 PM UTC-5, olds...@tubes.com wrote:
On Tue, 20 Feb 2018 16:50:28 -0800 (PST), mpm wrote:

If you get to 55 and still have that soldering iron, you'll find you want to jab your eye with it! :)

Have you spoken to a psychologist about this urge? :)

Yes!
And she suggested I use liquid solder flux from now on instead of eye drops.
 
On 02/21/2018 06:01 AM, tabbypurr@gmail.com wrote:
On Wednesday, 21 February 2018 03:16:17 UTC, bitrex wrote:
On 02/20/2018 10:01 PM, tabbypurr wrote:

I'm looking at putting together something similar to a Class AB/B audio amp, but it will be driven outside its linear range into saturation a lot of the time. That's all well & good but for one thing: wrapping nfb round saturating outputs doesn't work too well as it takes time for output devices to unsaturate, and the nfb effectively overreacts, adding distortion. Keeping distortion low matters here. What tips would you recommend to keep unwanted distortion minimised?


thanks, NT


Don't see why it would be that much different than crossover distortion;
it takes time for the output to pass thru the dead band as one device
stops conducting and the other device begins, negative feedback reduces
crossover distortion just fine at low frequency when the open loop gain
of the rest of the amp is high.

How are you driving the complementary emitter followers of a class AB/B
amp into saturation, anyway? Wouldn't the base drive voltages have to
swing above and below the rails?

If what you're saying is correct, why would class AB amps exist? AIUI it's because class B's glitches aren't fully solved by nfb.


NT

They're pretty much "fully solved" at very low frequency, but a pure
class B amp intrinsically has a shitton of THD even for relatively small
signals, and so as frequency increases and open-loop gain drops off due to
the dominant-pole compensation, the ability of the NFB loop to compensate
starts going tits-up fairly quickly.

It's not that NFB has _no_ ability to correct for crossover distortion
intrinsically, or distortion caused by any other source internal to the
amplifier design; it can do all of that stuff, but it's entirely
dependent on the amp having sufficient gain-bandwidth available to do it
effectively for signals in the range of frequencies of interest.

The large-signal sine-wave output of a class B LM324 op amp with unity
gain looks lovely at 20 Hz, materially indistinguishable from any class
AB op amp to my eyes, but like garbage once you start getting into even
the low kHz range.
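
Back-of-envelope, assuming a single dominant pole and made-up figures rather
than LM324 datasheet values:

# Back-of-envelope only -- gains, pole and open-loop THD are assumed, not
# measured. Closed-loop distortion is roughly open-loop distortion divided
# by (1 + loop gain), and loop gain falls off above the dominant pole.

A0 = 100_000         # DC open-loop gain (assumed)
F_POLE = 10.0        # dominant pole in Hz (assumed)
THD_OPEN = 0.10      # 10 % open-loop crossover distortion (assumed)

for f in (20, 1_000, 20_000):
    loop_gain = A0 / (1.0 + f / F_POLE)      # unity-gain follower, beta = 1
    print("%6d Hz: loop gain ~%7.0f, residual THD ~%.4f %%"
          % (f, loop_gain, 100.0 * THD_OPEN / (1.0 + loop_gain)))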
 
On 20/02/2018 23:12, bitrex wrote:
On 02/20/2018 01:52 PM, Pimpom wrote:

What are prepaid prices like there? A price war has been going on
for some time here in India. One of the current favourite plans allows
unlimited calls and texting for 28 days, plus 1.5GB of 4G data per
day. A couple of months ago, most service providers offered this plan
for ~US$5-6. Now it's down to about $3.

Boost Mobile, which I have, is no contract; I get unlimited nationwide
talk/SMS for $30/month. Plus 4GB of full-speed data _per month_; whether
it's 4G or 3G depends on where you are, even where I am in a metro area
of 3 million plus people 25 miles outside the city center 4G is pretty
spotty and it falls back to 3G about 50% of the time. Data used to
stream music doesn't count against the cap so there's that.

That seems punitive. The going rate for that in the UK would be about
half what you pay at £10 with some offering rollover of any unused data.

I have a Moto e4 phone which is a pretty nice budget Android smartphone,
IMO one of the better low-end models compared to LG and Samsung. It's
normally around $100-130 but with first month's payment you get it for $50.

Data is very expensive, Boost dings you an extra $5 for every 1GB of
data you want above the cap with the plan I have.

WOW! That is *seriously* expensive. Mine runs at £1/GB if I go over.
Shop around for data only SIM deals and you get 50p/GB over here(UK).
You can get special offers on contract SIMs (and negotiate even better
deals by threatening to leave - customer loyalty is never rewarded). If
you aren't talking to customer retention at least every other year then
you are paying more than you should be for an inferior service.

UK PAYG SIMs are everywhere at supermarket and pound store checkouts.
TracPhone is like a convenience store/drugstore shelf prepaid phone;
it's not a particularly good value. Prices start at like $15 for 200
minutes talk/500 text messages and 1 gig of data that expires in a
month, plus the cost of the phone. People sometimes call them
"burner" or "drug dealer" phones and they're not half wrong; the low
prices and the fact you can buy airtime cards at the supermarket off the
shelf appeal to people who'd rather pay cash.

Are there no US SIM-only PAYG deals where the paid-for value stays on
the phone forever until you use it up (provided you make at least one
chargeable call every six months)? They are best for UK light users.

Also at least one UK mobile company offers a free data SIM with a
200MB/month cap for people who only want email and text access.

--
Regards,
Martin Brown
 
On 21/02/18 13:12, bitrex wrote:
On 02/20/2018 10:48 PM, John Larkin wrote:
On Tue, 20 Feb 2018 21:40:31 -0500, bitrex
bitrex@de.lete.earthlink.net> wrote:

On 02/20/2018 08:43 PM, Paul Hovnanian P.E. wrote:
Jim Thompson wrote:

https://www.nbcbayarea.com/news/local/Diseased-Streets-472430013.html


If they are smart, they won't wake up the hobos before running the
steam
cleaner down the sidewalk.


The news story claims "Over 100!" discarded IV needles found in the
areas they surveyed but you can easily click through all the "red"
streets in the map and see that the tally tops out at barely 50. And
there are big clumps of 7 or 10 needles in some locations.

Pay one or two bums a couple bucks to dump their spent needles in a
couple locations and hey presto you've got yourself a story.

And you have an army of well-paid attorneys, consultants, NGOs,
providers, and city staffers actually soaking up the funding.

We never go downtown. That's for bankers and tourists. Our
neighborhood is green, quiet, clean, and safe.

https://www.dropbox.com/s/kpl55nnziaubq9z/Ohlone_Way_3.jpg?raw=1

I've never seen a needle on Ohlone Way. You might get stuck picking
blackberries.



It's a histrionic "story" crafted to play well with histrionics of all
political persuasions, left or right few Americans can seem to resist a
good "OMG THINK OF THE CHILDREN!" tale.

Meanwhile, I'd estimate the number of children who could be confirmed
to have caught AIDS or any other infectious disease from a discarded IV
needle in SF, or any other large city for that matter, is likely pretty
close to 0.

That statistic is probably accurate. However, it is pretty unpleasant
to find discarded needles around the place, especially if you are
talking about parks, schools, kindergartens, etc. This applies whether
you are a child, adult, or whatever.

Of course, it is not the only thing left lying around by inconsiderate
people - dog turds probably lead to far more infections (they used to be
the leading cause of childhood blindness), and cigarette stubs abound in
some places.

Also they seem to have inflated the number of needles found from the
data points on the map to the TV story by a factor of 2. Maybe some
would shrug and say so what it's bad either way but IMO the media
deserves to get the side-eye when they do things like that, regardless
of what the story is about.
 
On 02/20/2018 08:44 PM, krw@notreal.com wrote:
On Tue, 20 Feb 2018 20:31:56 -0500, bitrex
bitrex@de.lete.earthlink.net> wrote:

On 02/13/2018 10:21 PM, krw@notreal.com wrote:
On Tue, 13 Feb 2018 19:23:23 -0500, bitrex
bitrex@de.lete.earthlink.net> wrote:

On 02/13/2018 07:11 PM, krw@notreal.com wrote:
On Tue, 13 Feb 2018 18:34:34 -0500, bitrex
bitrex@de.lete.earthlink.net> wrote:

On 02/13/2018 06:08 PM, krw@notreal.com wrote:
On Sun, 11 Feb 2018 19:01:06 -0600, Les Cargill
lcargill99@comcast.com> wrote:

John Larkin wrote:
On Sun, 11 Feb 2018 16:45:33 -0500, bitrex
bitrex@de.lete.earthlink.net> wrote:

On 02/11/2018 11:54 AM, John Larkin wrote:

https://www.amazon.com/Brotopia-Breaking-Boys-Silicon-Valley/dp/0735213534/ref=sr_1_1?ie=UTF8&qid=1518366993&sr=8-1&keywords=brotopia



From the NYT review:

What happened in the 1960s and 1970s was that the industry was
exploding and was starved for talent. There just weren’t enough
people to do the jobs in computing. So they hired these two
psychologists, William Cannon and Dallis Perry, to come up with a
personality test to screen for good programmers.

Those men decided, in screening about 1,200 men and 200 women,
that good programmers don’t like people — that they have a
complete disinterest in people. These tests were widely
influential and used at various companies for decades.

Most qualities that make a "good programmer" in 2018 have little to
do with how ingenious or clever you can make the stuff you type
into the box; compilers are very good at this point, CPU horsepower
is plentiful and premature optimization is the root of all evil.
It's not easy to write high-performance code but it takes concerted
effort to write truly poor performing code in most languages.

The most valuable skills probably have to do with a) is your
software based on solid design principles b) how well you can
justify that design/explain your reasoning to others and c) how
easily someone can come in fresh and read your documentation and
understand the principles of the design. Which if you have no
interest in people and little experience interacting with them
socially is going to be a tough row to hoe.

There will always be a place for the seriously schizoid hotshots,
e.g. physics code for games needs to eke out every bit of
performance. A physics engine coder might be insulted if you
sullied his labor with anything as mundane as _gameplay_.

Also the problem with being totally disinterested in people is that
you usually end up thinking you're a lot better than you actually
are, without any metric to judge your performance how can you
semi-objectively judge.

That review also names a recently-me-too-fallen individual that
we once did battle with, but I'm not allowed to make any
disparaging remarks about that.

I am continually amazed by the aggressive and toxic Silicon
Valley culture... which has spread into the worldwide
semiconductor industry.

Software used to be fun and novel but then a lot of the industry
got lame and cynical, like hey let's build a grocery store list app
for a phone with Bluetooth integration, sell that shit for $2.99 on
the app store and hope to get bought out by the Big G for $50 mil
in 14 months.

I think a lotta women bailed out on that scene because "on average"
they don't value the bignum bills as highly vs. not getting your
soul crushed to get 'em. Who can blame 'em, I don't. Seems like
most of the "angel" VC funders in the Valley aren't even tech
educated people anymore they're like MBAs and pundits and famous
bloggers and various Wall Street investment bank weenies after some
fast bucks



I'm reading "Coders at Work", which is a bunch of interviews with
famous programmers. What's impressive is how ad-hoc they are, with
no obvious systematic approaches, sometimes bottom-up, sometimes
top-down; they just sort of do it.


Most people who have done something noteworthy enough to be
interviewed probably didn't have a lot of examples to work from.

Top-down/bottom up is a legit thing to mix up, tho. Start
with the risk items.

Or the unknowns.

Many of them have no formal, or
informal, training in CS or EE or anything. I doubt that many know
what a state machine is.


Unsurprising. Software people are very often blind to determinism in
general.

FSM are not a widely understood technique in software.
God only knows why. Even the people who know about them
often think they are always equivalent to regular expressions.

I naturally code that way but when I've shown my code to "real
programmers", they say "that's neat"! To me, it's the obvious way to
get across the street - one step at a time (with some checks along the
way).
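
A minimal sketch of that one-step-at-a-time style as an explicit state
machine (purely illustrative, nobody's production code):

# Purely illustrative -- a tiny explicit state machine for crossing the
# street one step at a time, with a check at each step.

def cross_street(events):
    state = "WAIT_FOR_GAP"
    for ev in events:
        if state == "WAIT_FOR_GAP" and ev == "road_clear":
            state = "CROSSING"
        elif state == "CROSSING" and ev == "car_approaching":
            state = "WAIT_FOR_GAP"         # back off and re-check
        elif state == "CROSSING" and ev == "reached_far_side":
            state = "DONE"
            break
    return state

print(cross_street(["car_approaching", "road_clear", "reached_far_side"]))  # DONE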

It's not that FSMs aren't a useful design pattern for some problems,
it's just that the software world has moved on to things like OOP,
functional, and generic programming which are more scalable, extensible,
and abstract

...even where not appropriate.


Nowadays on large projects the most "appropriate" design pattern is most
often one that generates a codebase 20 or 30 people can work with
simultaneously and not influence each other too much. You can't have
everyone involved in a project mucking with the state diagrams and
making changes that break other people's shit, so you have to run
modifications thru some master planner guy, which is a bottleneck and
wastes time. It's too "top down." Doesn't scale well.

In a large project like that, programmers shouldn't be making changes
to *ANY* state machines. That's what the architect(s) do. The
programmers implement, they don't design. Chip designers don't change
the architecture of a processor, either. They implement the
architect's design. That's probably why there is so much shit
software out there. Too many cooks.


That's called "waterfall design" and has been passe in the software
world for I'd guess 30 years or so. It's brittle, inflexible, too
"top-down", there is no one "architect" or small group of architects who
have a complete God's-eye picture of every single "state" or function or
branch of code in a 100 million-line codebase.

So has defect-free software. There is a reason Win* is such shit.

Processor design is a different animal; it takes years and years to bring
a new architecture from concept to finalized design. The software world
is working under much tighter deadlines and customers _expect_ major
changes to be possible for most of the development cycle.

No, it's exactly the same thing. One group believes in the quality of
their work.

Giving a client this "the design is finalized and being implemented
according to God's plan by the code monkeys and it cannot be touched"
stuff would be a way to find yourself rapidly out of business

So you think the race to the bottom is a good thing. I could have
guessed that.

There's a lot of lousy code out there, but there's also more great code
out there than there's ever been in history. Well-designed and
implemented software using "modern" practices is not at all hard to
find. Flawed hardware designed top-down is not at all hard to find, e.g.
Spectre, Meltdown.

Windows isn't shitty because of the techniques used to develop modern
software; it's mostly because it is and always has been a slave to the
past and backwards compatibility. It's always needed a ground-up rewrite
since about 1990, which it never got.

I remember watching a 133 MHz PC running BeOS display 32 full-motion
videos in windows on its desktop while you could simultaneously edit a
document in the word processor with no hiccups or lag at all. In 1997.
Win 95 would choke and crash under a tenth of that workload. But BeOS
couldn't run 10 year old DOS productivity software. BeOS is gone with
the snows of yesteryear.
 
On Wed, 21 Feb 2018 08:17:19 -0500, bitrex
<bitrex@de.lete.earthlink.net> wrote:

On 02/20/2018 08:44 PM, krw@notreal.com wrote:
On Tue, 20 Feb 2018 20:31:56 -0500, bitrex
bitrex@de.lete.earthlink.net> wrote:

On 02/13/2018 10:21 PM, krw@notreal.com wrote:
On Tue, 13 Feb 2018 19:23:23 -0500, bitrex
bitrex@de.lete.earthlink.net> wrote:

On 02/13/2018 07:11 PM, krw@notreal.com wrote:
On Tue, 13 Feb 2018 18:34:34 -0500, bitrex
bitrex@de.lete.earthlink.net> wrote:

On 02/13/2018 06:08 PM, krw@notreal.com wrote:
On Sun, 11 Feb 2018 19:01:06 -0600, Les Cargill
lcargill99@comcast.com> wrote:

John Larkin wrote:
On Sun, 11 Feb 2018 16:45:33 -0500, bitrex
bitrex@de.lete.earthlink.net> wrote:

On 02/11/2018 11:54 AM, John Larkin wrote:

https://www.amazon.com/Brotopia-Breaking-Boys-Silicon-Valley/dp/0735213534/ref=sr_1_1?ie=UTF8&qid=1518366993&sr=8-1&keywords=brotopia



From the NYT review:

What happened in the 1960s and 1970s was that the industry was
exploding and was starved for talent. There just weren’t enough
people to do the jobs in computing. So they hired these two
psychologists, William Cannon and Dallis Perry, to come up with a
personality test to screen for good programmers.

Those men decided, in screening about 1,200 men and 200 women,
that good programmers don’t like people — that they have a
complete disinterest in people. These tests were widely
influential and used at various companies for decades.

Most qualities that make a "good programmer" in 2018 have little to
do with how ingenious or clever you can make the stuff you type
into the box; compilers are very good at this point, CPU horsepower
is plentiful and premature optimization is the root of all evil.
It's not easy to write high-performance code but it takes concerted
effort to write truly poor performing code in most languages.

The most valuable skills probably have to do with a) is your
software based on solid design principles b) how well you can
justify that design/explain your reasoning to others and c) how
easily someone can come in fresh and read your documentation and
understand the principles of the design. Which if you have no
interest in people and little experience interacting with them
socially is going to be a tough row to hoe.

There will always be a place for the seriously schizoid hotshots,
e.g. physics code for games needs to eke out every bit of
performance. A physics engine coder might be insulted if you
sullied his labor with anything as mundane as _gameplay_.

Also the problem with being totally disinterested in people is that
you usually end up thinking you're a lot better than you actually
are, without any metric to judge your performance how can you
semi-objectively judge.

That review also names a recently-me-too-fallen individual that
we once did battle with, but I'm not allowed to make any
disparaging remarks about that.

I am continually amazed by the aggressive and toxic Silicon
Valley culture... which has spread into the worldwide
semiconductor industry.

Software used to be fun and novel but then a lot of the industry
got lame and cynical, like hey let's build a grocery store list app
for a phone with Bluetooth integration, sell that shit for $2.99 on
the app store and hope to get bought out by the Big G for $50 mil
in 14 months.

I think a lotta women bailed out on that scene because "on average"
they don't value the bignum bills as highly vs. not getting your
soul crushed to get 'em. Who can blame 'em, I don't. Seems like
most of the "angel" VC funders in the Valley aren't even tech
educated people anymore they're like MBAs and pundits and famous
bloggers and various Wall Street investment bank weenies after some
fast bucks



I'm reading "Coders at Work", which is a bunch of interviews with
famous programmers. What's impressive is how ad-hoc they are, with
no obvious systematic approaches, sometimes bottom-up, sometimes
top-down; they just sort of do it.


Most people who have done something noteworthy enough to be
interviewed probably didn't have a lot of examples to work from.

Top-down/bottom up is a legit thing to mix up, tho. Start
with the risk items.

Or the unknowns.

Many of them have no formal, or
informal, training in CS or EE or anything. I doubt that many know
what a state machine is.


Unsurprising. Software people are very often blind to determinism in
general.

FSM are not a widely understood technique in software.
God only knows why. Even the people who know about them
often think they are always equivalent to regular expressions.

I naturally code that way but when I've shown my code to "real
programmers", they say "that's neat"! To me, it's the obvious way to
get across the street - one step at a time (with some checks along the
way).

It's not that FSMs aren't a useful design pattern for some problems,
it's just that the software world has moved on to things like OOP,
functional, and generic programming which are more scalable, extensible,
and abstract

...even where not appropriate.


Nowadays on large projects the most "appropriate" design pattern is most
often one that generates a codebase 20 or 30 people can work with
simultaneously and not influence each other too much. You can't have
everyone involved in a project mucking with the state diagrams and
making changes that break other people's shit, so you have to run
modifications thru some master planner guy, which is a bottleneck and
wastes time. It's too "top down." Doesn't scale well.

In a large project like that, programmers shouldn't be making changes
to *ANY* state machines. That's what the architect(s) do. The
programmers implement, they don't design. Chip designers don't change
the architecture of a processor, either. They implement the
architect's design. That's probably why there is so much shit
software out there. Too many cooks.


That's called "waterfall design" and has been passe in the software
world for I'd guess 30 years or so. It's brittle, inflexible, too
"top-down", there is no one "architect" or small group of architects who
have a complete God's-eye picture of every single "state" or function or
branch of code in a 100 million-line codebase.

So has defect-free software. There is a reason Win* is such shit.

Processor design is a different animal; it takes years and years to bring
a new architecture from concept to finalized design. The software world
is working under much tighter deadlines and customers _expect_ major
changes to be possible for most of the development cycle.

No, it's exactly the same thing. One group believes in the quality of
their work.

Giving a client this "the design is finalized and being implemented
according to God's plan by the code monkeys and it cannot be touched"
stuff would be a way to find yourself rapidly out of business

So you think the race to the bottom is a good thing. I could have
guessed that.


There's a lot of lousy code out there, but there's also more great code
out there than there's ever been in history. Well-designed and
implemented software using "modern" practices is not at all hard to
find. Flawed hardware designed top-down is not at all hard to find, e.g.
Spectre, Meltdown.

What an asinine statement. First, of course there is decent code out
there and even more than ever. Dumbshit, there is a *lot* more code
every day. Almost all shit. Compared to stuff like the Shuttle OBS
and S/360, almost all is shit. Some is man-rated, so one hopes it's
somewhat better but it's certainly not done in an ad-hoc,
throw-it-together, uncontrolled process as you suggest. It's not the
code we see every day.

Windows isn't shitty because of the techniques used to develop modern
software; it's mostly because it is and always has been a slave to the
past and backwards compatibility. It's always needed a ground-up rewrite
since about 1990, which it never got.

Wrong. Stack overflows are precisely due to the way code is developed
today. There is no excuse for such things.

I remember watching a 133 MHz PC running BeOS display 32 full-motion
videos in windows on its desktop while you could simultaneously edit a
document in the word processor with no hiccups or lag at all. In 1997.
Win 95 would choke and crash under a tenth of that workload. But BeOS
couldn't run 10 year old DOS productivity software. BeOS is gone with
the snows of yesteryear.

Change of subject again.
 
On 02/20/2018 10:13 PM, Clifford Heath wrote:
On 21/02/18 12:44, krw@notreal.com wrote:
On Tue, 20 Feb 2018 20:31:56 -0500, bitrex
bitrex@de.lete.earthlink.net> wrote:

On 02/13/2018 10:21 PM, krw@notreal.com wrote:
On Tue, 13 Feb 2018 19:23:23 -0500, bitrex
bitrex@de.lete.earthlink.net> wrote:

On 02/13/2018 07:11 PM, krw@notreal.com wrote:
On Tue, 13 Feb 2018 18:34:34 -0500, bitrex
bitrex@de.lete.earthlink.net> wrote:

On 02/13/2018 06:08 PM, krw@notreal.com wrote:
On Sun, 11 Feb 2018 19:01:06 -0600, Les Cargill
lcargill99@comcast.com> wrote:

John Larkin wrote:
On Sun, 11 Feb 2018 16:45:33 -0500, bitrex
bitrex@de.lete.earthlink.net> wrote:

On 02/11/2018 11:54 AM, John Larkin wrote:

https://www.amazon.com/Brotopia-Breaking-Boys-Silicon-Valley/dp/0735213534/ref=sr_1_1?ie=UTF8&qid=1518366993&sr=8-1&keywords=brotopia




    From the NYT review:

What happened in the 1960s and 1970s was that the industry was
exploding and was starved for talent. There just weren’t enough
people to do the jobs in computing. So they hired these two
psychologists, William Cannon and Dallis Perry, to come up
with a
personality test to screen for good programmers.

Those men decided, in screening about 1,200 men and 200 women,
that good programmers don’t like people — that they have a
complete disinterest in people. These tests were widely
influential and used at various companies for decades.

Most qualities that make a "good programmer" in 2018 have
little to
do with how ingenious or clever you can make the stuff you type
into the box; compilers are very good at this point, CPU
horsepower
is plentiful and premature optimization is the root of all evil.
It's not easy to write high-performance code but it takes
concerted
effort to write truly poor performing code in most languages.

The most valuable skills probably have to do with a) is your
software based on solid design principles b) how well you can
justify that design/explain your reasoning to others and c) how
easily someone can come in fresh and read your documentation and
understand the principles of the design. Which if you have no
interest in people and little experience interacting with them
socially is going to be a tough row to hoe.

There will always be a place for the seriously schizoid
hotshots,
e.g. physics code for games needs to eke out every bit of
performance. A physics engine coder might be insulted if you
sullied his labor with anything as mundane as _gameplay_.

Also the problem with being totally disinterested in people
is that
you usually end up thinking you're a lot better than you
actually
are, without any metric to judge your performance how can you
semi-objectively judge.

That review also names a recently-me-too-fallen individual that
we once did battle with, but I'm not allowed to make any
disparaging remarks about that.

I am continually amazed by the aggressive and toxic Silicon
Valley culture... which has spread into the worldwide
semiconductor industry.

Software used to be fun and novel but then a lot of the industry
got lame and cynical, like hey let's build a grocery store
list app
for a phone with Bluetooth integration, sell that shit for
$2.99 on
the app store and hope to get bought out by the Big G for $50
mil
in 14 months.

I think a lotta women bailed out on that scene because "on
average"
they don't value the bignum bills as highly vs. not getting your
soul crushed to get 'em. Who can blame 'em, I don't. Seems like
most of the "angel" VC funders in the Valley aren't even tech
educated people anymore they're like MBAs and pundits and famous
bloggers and various Wall Street investment bank weenies
after some
fast bucks



I'm reading "Coders at Work", which is a bunch of interviews with
famous programmers. What's impressive is how ad-hoc they are,
with
no obvious systematic approaches, sometimes bottom-up, sometimes
top-down; they just sort of do it.


Most people who have done something noteworthy enough to be
interviewed probably didn't have a lot of examples to work from.

Top-down/bottom up is a legit thing to mix up, tho. Start
with the risk items.

Or the unknowns.

Many of them have no formal, or
informal, training in CS or EE or anything. I doubt that many
know
what a state machine is.


Unsurprising. Software people are very often blind to
determinism in
general.

FSM are not a widely understood technique in software.
God only knows why. Even the people who know about them
often think they are always equivalent to regular expressions.

I naturally code that way but when I've shown my code to "real
programmers", they say "that's neat"!  To me, it's the obvious
way to
get across the street - one step at a time (with some checks
along the
way).

It's not that FSMs aren't a useful design pattern for some problems,
it's just that the software world has moved on to things like OOP,
functional, and generic programming which are more scalable,
extensible,
and abstract

...even where not appropriate.


Nowadays on large projects the most "appropriate" design pattern is
most
often one that generates a codebase 20 or 30 people can work with
simultaneously and not influence each other too much.  You can't have
everyone involved in a project mucking with the state diagrams and
making changes that break other people's shit, so you have to run
modifications thru some master planner guy, which is a bottleneck and
wastes time. It's too "top down." Doesn't scale well.

In a large project like that, programmers shouldn't be making changes
to *ANY* state machines.  That's what the architect(s) do.  The
programmers implement, they don't design.  Chip designers don't change
the architecture of a processor, either.  They implement the
architect's design.  That's probably why there is so much shit
software out there.  Too many cooks.


That's called "waterfall design" and has been passe in the software
world for I'd guess 30 years or so. It's brittle, inflexible, too
"top-down", there is no one "architect" or small group of architects who
have a complete God's-eye picture of every single "state" or function or
branch of code in a 100 million-line codebase.

So has defect-free software.  There is a reason Win* is such shit.

There are many reasons why Windoze is shit, but this isn't one of them.

Waterfall is passe because it implied that the design is complete,
unambiguous and accurate. If that was true, then it is also
*compilable*, i.e. it only needs a machine to execute it.

We have a word for such a compilable design: we call it code.
When you insist on design-up-front, you're just writing the code
in design phase, and that has all the same problems of any coding.
It's an infinite regress.

Clifford Heath.

Right, you don't really gain anything by shifting all the design work to
the God-guy "architect"; it just means you're implicitly bottlenecking
yourself thru having one guy write all the code instead of ten.

You don't hire a team of software engineers to be data-entry people to
bring Great Visionary Architect Howard Roark's perfect design into
being; that's not how it works. If you want to hire typists, hire typists.
 
bill.sloman@ieee.org wrote:

Have you checked the actual mains voltage level at your home? Sometimes the power supply company screws up and supplies an out-of-specification high voltage to some houses

In America? Very little chance. Our entire grid runs pretty clean.
For the most part dead on 60 Hertz.

Residential branch transformers *can be* set a few volts high if the
farmer two tenths of a mile down at the end of the run has a lot of line
drop. Mostly right on 120 VAC though.
 
