Driver to drive?

bill.sloman@ieee.org wrote:

> My guess would be that these would be houses that were close to a sub-station that also supplied houses that were much further away ...

That is not how power distribution works. The subs provide HV right
to the residential areas. The local branch feed transformers can all
adjust for differences in the HV line to attain the proper local feed
voltage, but that HV reading is seldom more than a few tens of volts out
of thousands.

Now, had you said something about being far away from the final LV AC
transformer that feeds a residential branch, THEN the voltage at the end
of a feed can be a few volts down, and out of only 240, that makes a
bigger difference.

The system is designed so that the voltage drops on long HV feeds
amounts to a tiny percentage of the whole. This is why at the local
level, a transformer is placed at specific intervals and sized to cover
the number of connections within those intervals.
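The arithmetic behind that design choice is easy to see. A back-of-envelope sketch (all numbers here are illustrative, not from the post): the same conductor resistance that is negligible on the HV side is hopeless on the LV side, because the current for a given power is so much higher.

```python
# Why long runs are kept on the HV side: same wire, same power,
# wildly different fractional voltage drop.

def line_drop(power_w, volts, line_ohms):
    """Voltage drop along a feeder delivering power_w at volts."""
    current = power_w / volts
    return current * line_ohms

# 100 kW through 0.5 ohm of line resistance (illustrative values):
hv_drop = line_drop(100e3, 7200, 0.5)   # ~6.9 V on a 7.2 kV feeder, ~0.1%
lv_drop = line_drop(100e3, 240, 0.5)    # ~208 V on a 240 V branch, hopeless
```

So the HV feed eats the distance, and the LV branch from the local transformer is kept short enough that a few volts of drop is the worst case.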

The only place one would typically find a lower voltage at the outlet
would be a couple thousand feet down an unfinished driveway, where the
property owner did not want to pay for his own transformer.
 
tabbypurr@gmail.com wrote...
I'm looking at putting together something similar to a Class AB/B
audio amp, but it will be driven outside its linear range ...
time for output devices to unsaturate ...
Keeping distortion low matters here.

JL has mentioned the issue of feedback integrator windup.
I'd say two aspects can be significant to success.
1) Create a circuit that's linear without feedback,
use minimum feedback, without internal integrators.
2) Make the circuit very fast, faster than needed.

My AMP-70 power amplifier design is an example. The
configuration is intrinsically both linear and fast.
https://www.dropbox.com/sh/an6lcx7y3e3o8zm/AACUoCLGKDOkusNcJnUlcGc2a?dl=0

My circuit was inspired by the Tektronix PG-508 50MHz
function-generator output stage, read my AoE writeup.

It was meant to be used all the way to the high-power
rails, and it recovers instantly. The version in my
drawings is complicated, but that's because it works
to 10MHz with a blisteringly-high slew rate, which
required using many fragile video-output transistors.
A bit slower version could actually be quite simple.


--
Thanks,
- Win
 
John Larkin wrote:

> We squeeze most of the water out.

To dump needlessly onto almond groves, right?

Suck-o-dat river dry. Doesn't matter if you really need it or not...
law says it's ours, so we're takin' it!

 
On 02/21/2018 08:27 AM, krw@notreal.com wrote:
On Wed, 21 Feb 2018 08:17:19 -0500, bitrex
bitrex@de.lete.earthlink.net> wrote:

On 02/20/2018 08:44 PM, krw@notreal.com wrote:
On Tue, 20 Feb 2018 20:31:56 -0500, bitrex
bitrex@de.lete.earthlink.net> wrote:

On 02/13/2018 10:21 PM, krw@notreal.com wrote:
On Tue, 13 Feb 2018 19:23:23 -0500, bitrex
bitrex@de.lete.earthlink.net> wrote:

On 02/13/2018 07:11 PM, krw@notreal.com wrote:
On Tue, 13 Feb 2018 18:34:34 -0500, bitrex
bitrex@de.lete.earthlink.net> wrote:

On 02/13/2018 06:08 PM, krw@notreal.com wrote:
On Sun, 11 Feb 2018 19:01:06 -0600, Les Cargill
lcargill99@comcast.com> wrote:

John Larkin wrote:
On Sun, 11 Feb 2018 16:45:33 -0500, bitrex
bitrex@de.lete.earthlink.net> wrote:

On 02/11/2018 11:54 AM, John Larkin wrote:

https://www.amazon.com/Brotopia-Breaking-Boys-Silicon-Valley/dp/0735213534/ref=sr_1_1?ie=UTF8&qid=1518366993&sr=8-1&keywords=brotopia



From the NYT review:

What happened in the 1960s and 1970s was that the industry was
exploding and was starved for talent. There just weren’t enough
people to do the jobs in computing. So they hired these two
psychologists, William Cannon and Dallis Perry, to come up with a
personality test to screen for good programmers.

Those men decided, in screening about 1,200 men and 200 women,
that good programmers don’t like people — that they have a
complete disinterest in people. These tests were widely
influential and used at various companies for decades.

Most qualities that make a "good programmer" in 2018 have little to
do with how ingenious or clever you can make the stuff you type
into the box; compilers are very good at this point, CPU horsepower
is plentiful and premature optimization is the root of all evil.
It's not easy to write high-performance code but it takes concerted
effort to write truly poor performing code in most languages.

The most valuable skills probably have to do with a) is your
software based on solid design principles b) how well you can
justify that design/explain your reasoning to others and c) how
easily someone can come in fresh and read your documentation and
understand the principles of the design. Which if you have no
interest in people and little experience interacting with them
socially is going to be a tough row to hoe.

There will always be a place for the seriously schizoid hotshots,
e.g. physics code for games needs to eke out every bit of
performance. A physics engine coder might be insulted if you
sullied his labor with anything as mundane as _gameplay_.

Also the problem with being totally disinterested in people is that
you usually end up thinking you're a lot better than you actually
are; without any external metric for your performance, how can you
judge it even semi-objectively?

That review also names a recently-me-too-fallen individual that
we once did battle with, but I'm not allowed to make any
disparaging remarks about that.

I am continually amazed by the aggressive and toxic Silicon
Valley culture... which has spread into the worldwide
semiconductor industry.

Software used to be fun and novel but then a lot of the industry
got lame and cynical, like hey let's build a grocery store list app
for a phone with Bluetooth integration, sell that shit for $2.99 on
the app store and hope to get bought out by the Big G for $50 mil
in 14 months.

I think a lotta women bailed out on that scene because "on average"
they don't value the bignum bills as highly vs. not getting your
soul crushed to get 'em. Who can blame 'em, I don't. Seems like
most of the "angel" VC funders in the Valley aren't even tech
educated people anymore they're like MBAs and pundits and famous
bloggers and various Wall Street investment bank weenies after some
fast bucks



I'm reading "Coders at Work", which is a bunch of interviews with
famous programmers. What's impressive is how ad-hoc they are, with
no obvious systematic approaches, sometimes bottom-up, sometimes
top-down; they just sort of do it.


Most people who have done something noteworthy enough to be
interviewed probably didn't have a lot of examples to work from.

Top-down/bottom up is a legit thing to mix up, tho. Start
with the risk items.

Or the unknowns.

Many of them have no formal, or
informal, training in CS or EE or anything. I doubt that many know
what a state machine is.


Unsurprising. Software people are very often blind to determinism in
general.

FSM are not a widely understood technique in software.
God only knows why. Even the people who know about them
often think they are always equivalent to regular expressions.

I naturally code that way but when I've shown my code to "real
programmers", they say "that's neat"! To me, it's the obvious way to
get across the street - one step at a time (with some checks along the
way).
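The "one step at a time, with some checks along the way" style above is easy to show concretely. A minimal sketch of a table-driven state machine; the states and events are made up for illustration, not from anyone's real code:

```python
# A tiny FSM: a dict maps (state, event) -> next state, and stepping
# the machine is a single lookup. This is the "obvious way to get
# across the street" structure.

CROSSING_FSM = {
    ("waiting",  "walk_signal"): "looking",
    ("looking",  "clear"):       "crossing",
    ("looking",  "traffic"):     "waiting",   # check failed, go back
    ("crossing", "reached"):     "done",
}

def step(state, event):
    """Advance the machine; unknown events leave the state unchanged."""
    return CROSSING_FSM.get((state, event), state)

state = "waiting"
for event in ["walk_signal", "traffic", "walk_signal", "clear", "reached"]:
    state = step(state, event)
# state ends up as "done"
```

The whole behavior is visible in one table, which is exactly why hardware people reach for this pattern by reflex.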

It's not that FSMs aren't a useful design pattern for some problems,
it's just that the software world has moved on to things like OOP,
functional, and generic programming which are more scalable, extensible,
and abstract

...even where not appropriate.


Nowadays on large projects the most "appropriate" design pattern is most
often one that generates a codebase 20 or 30 people can work with
simultaneously and not influence each other too much. You can't have
everyone involved in a project mucking with the state diagrams and
making changes that break other people's shit, so you have to run
modifications thru some master planner guy, which is a bottleneck and
wastes time. It's too "top down." Doesn't scale well.

In a large project like that, programmers shouldn't be making changes
to *ANY* state machines. That's the what the architect(s) do. The
programmers implement, they don't design. Chip designers don't change
the architecture of a processor, either. They implement the
architect's design. That's probably why there is so much shit
software out there. Too many cooks.


That's called "waterfall design" and has been passe in the software
world for I'd guess 30 years or so. It's brittle, inflexible, too
"top-down", there is no one "architect" or small group of architects who
have a complete God's-eye picture of every single "state" or function or
branch of code in a 100 million-line codebase.

So has defect-free software. There is a reason Win* is such shit.

Processor design is a different animal; it takes years and years to bring
a new architecture from concept to finalized design. The software world
is working under much tighter deadlines, and customers _expect_ major
changes to be possible for most of the development cycle.

No, it's exactly the same thing. One group believes in the quality of
their work.

Giving a client this "the design is finalized and being implemented
according to God's plan by the code monkeys and it cannot be touched"
stuff would be a way to find yourself rapidly out of business

So you think the race to the bottom is a good thing. I could have
guessed that.


There's a lot of lousy code out there, but there's also more great code
out there than there's ever been in history. Well-designed and
implemented software using "modern" practices is not at all hard to
find. Flawed hardware designed top-down is not at all hard to find, e.g.
Spectre, Meltdown.

What an asinine statement. First, of course there is decent code out
there and even more than ever. Dumbshit, there is a *lot* more code
every day. Almost all shit. Compared to stuff like the Shuttle OBS
and S/360, almost all is shit. Some is man-rated, so one hopes it's
somewhat better but it's certainly not done in an ad-hoc,
throw-it-together, uncontrolled process as you suggest. It's not the
code we see every day.

lol you have a weird idea how software is developed in practice. Even
the Space Shuttle GPC software was not developed purely top-down by the
methods you're talking about. They solicited feature and modification
requests from everybody under the sun all the way down to maintenance
techs and the ground crew. In the early 1980s the team at IBM was
processing 30 or 40 change/feature requests a week. Clearly they were
not all implemented but they were all taken seriously.

What made it so reliable was not the "design philosophy", it had a huge
"ad-hoc" component to it, but it was extensively, extensively,
extensively bug checked and tested, and enormous amounts of money was
thrown at that process.

Windows isn't shitty because of the techniques used to develop modern
software, it's mostly because it is and always has been a slave to the
past and backwards compatibility. It's always needed a ground-up rewrite
since about 1990 which it never got.

Wrong. Stack overflows are precisely due to the way code is developed
today. There is no excuse for such things.

The majority of code out there in 2018 runs on virtual machines in
browsers or the JVM; there is no hardware stack to overflow. And in
compiled languages it's just the reverse; stack overflows are primarily
a consequence of how "legacy" languages from the 1970s and 1980s manage
memory which is full of undefined behavior and stupid unintuitive
gotchas that even the best programmers have trouble reasoning about
effectively.

I remember watching a 133 MHz PC running BeOS display 32 full-motion
videos in windows on its desktop while you could simultaneously edit a
document in the word processor with no hiccups or lag at all. In 1997.
Win 95 would choke and crash under a tenth of that workload. But BeOS
couldn't run 10 year old DOS productivity software. BeOS is gone with
the snows of yesteryear.

Change of subject again.
 
On Wed, 21 Feb 2018 00:17:18 -0800 (PST), Phil Allison
<pallison49@gmail.com> wrote:

upsid...@downunder.com wrote:

----------------------------

On the old fluorescent tubes, such as the common 4ft shop lights, the
filament only came on during starting.


** The filaments are hot all the time.

Take a look at an old or dead fluoro tube and see what the ends look like.


At least in 220-240 Vac countries, the heater is on only during the
start sequence.


** The context for my remark has been snipped - see if you can find it.

I quoted your whole previous post, so do not accuse me of snipping
your post.

It had to do with "filaments" in CFLs running hot and heating the electronics in the same enclosure, given hundreds or thousands of hours.

Seeing as both ends appear exposed inside the same small enclosure - guess what ??


The starter connects the two heaters in series with
the inductance (ballast). When the starter opens, the current is cut
and the inductive kickback strikes the tube.


** A fluoro ballast is a multi Henry inductor sized to drop the supply voltage just enough to supply the correct running current for the particular tube or combination. It is also designed NOT to saturate when full AC voltage is applied, limiting current to a safe value for the filaments during the pulse start sequence. Typically it's about double the running current.


Once lit, the electrons hit the anode (and ions hit the cathode),
heating them up to such a temperature that allows sufficient cathode
emission, so no need for any filament current any more.


** Fraid filaments and cathodes are one and the same. The tube's running current passes through the filaments constantly HEATING them.

How would the electrons flow _through_ the filament, when only one
of the filament contacts is connected to an external circuit (ballast
or neutral)? Remember, the other filament contact is not connected
anywhere, since the starter is open.

Are the electrons flying in the tube smart enough that they hit the
unconnected end of the filament, then run through the filament and
then into the connected pin ?

https://en.wikipedia.org/wiki/File:Tanninglampend.jpg
Most likely the electrons hit the ring electrode around the filament
and then flow into the connected pin, or at most through a very short
section of the filament closest to the connected pin. The strong current
(several hundred mA) heats the ring, which then works as a
cathode on the next half cycle.

Ordinary electronic tube anodes can run red hot due to the electrons
hitting the anode. Running a rectifier tube with a red hot anode will
increase the reverse leakage, when the hot anode emits electrons when
reverse biased.

The tungsten coil has a mixture of oxides applied so it acts as a good cathode.



.... Phil
 
On Tuesday, February 20, 2018 at 10:01:46 PM UTC-5, tabb...@gmail.com wrote:
I'm looking at putting together something similar to a Class AB/B audio amp, but it will be driven outside its linear range into saturation a lot of the time. That's all well & good but for one thing: wrapping nfb round saturating outputs doesn't work too well as it takes time for output devices to unsaturate, and the nfb effectively overreacts, adding distortion. Keeping distortion low matters here. What tips would you recommend to keep unwanted distortion minimised?


thanks, NT

Sounds a little like capacitor windup in a control loop.
For which I just like to make sure all the amps hit the rail at about
the same time. That might not apply in your case.

George H.
 
On 21/02/2018 13:27, krw@notreal.com wrote:
On Wed, 21 Feb 2018 08:17:19 -0500, bitrex
bitrex@de.lete.earthlink.net> wrote:

On 02/20/2018 08:44 PM, krw@notreal.com wrote:
On Tue, 20 Feb 2018 20:31:56 -0500, bitrex
bitrex@de.lete.earthlink.net> wrote:

Giving a client this "the design is finalized and being implemented
according to God's plan by the code monkeys and it cannot be touched"
stuff would be a way to find yourself rapidly out of business

So you think the race to the bottom is a good thing. I could have
guessed that.

No. But getting paid for work done is rather important. The customer is
always right (even when he is wrong). I give mine the opportunity not to
make stupid mistakes but if they fail to take my advice they sign in
blood that they understand what they are asking for will cost them.

There's a lot of lousy code out there, but there's also more great code
out there than there's ever been in history. Well-designed and
implemented software using "modern" practices is not at all hard to
find. Flawed hardware designed top-down is not at all hard to find, e.g.
Spectre, Meltdown.

Spectre and Meltdown are security exploits of correctly working but
deterministic hardware. The cache engineering was sound in terms of
achieving faster execution. FDIV bug and incorrectly accepted invalid
OPCODES were genuine CPU hardware design mistakes.

What an asinine statement. First, of course there is decent code out
there and even more than ever. Dumbshit, there is a *lot* more code
every day. Almost all shit.

I'd guess it was about 50:50 without having done a detailed survey. You
hear about the big stuff that goes horribly wrong - you don't hear about
the small successes like ABS brakes, mobile phones etc which just work.

Compared to stuff like the Shuttle OBS
and S/360, almost all is shit. Some is man-rated, so one hopes it's
somewhat better but it's certainly not done in an ad-hoc,
throw-it-together, uncontrolled process as you suggest. It's not the
code we see every day.

I wouldn't use Shuttle code as an example - there was a known synch
issue with the multiple processor voting implementation that could
freeze the launch and wasn't worth going in to fix. They also had
schedule induced process issues that made it into comp.risks

http://catless.ncl.ac.uk/Risks/6.18.html#subj1

As for OS/360 I always loved Fortran G1's lack of confidence when it
reported "NO DIAGNOSTICS GENERATED?". The "?" being a trailing NUL byte
which IBM change management procedures made too difficult to correct.
(it was after all a cosmetic bug)

Your IBM terminal concentrators were unmitigated crap. Phoenix ran a
vastly larger number of simultaneous terminal sessions using DEC PDP11
hardware to link them. NIOP could run rings round original IBM code.

Parrot - a TCAM replacement by Hazel & Stonely is actually online free
these days. https://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-5.pdf

Windows isn't shitty because of the techniques used to develop modern
software, it's mostly because it is and always has been a slave to the
past and backwards compatibility. It's always needed a ground-up rewrite
since about 1990 which it never got.

Wrong. Stack overflows are precisely due to the way code is developed
today. There is no excuse for such things.

Although I agree that there is no excuse for it today but the worst bugs
are mostly a side effect of the ubiquitous C nul terminated string.

We have never had better tools for static code analysis and runtime
defensive testing but sadly they are not deployed very widely :(

--
Regards,
Martin Brown
 
On 20/02/2018 22:51, bitrex wrote:
On 02/20/2018 01:49 PM, Rob wrote:
bitrex <bitrex@de.lete.earthlink.net> wrote:

Overseas SMS is often considered a "weird old-fashioned American thing";

SMS is a European invention made by the Groupe SpĂŠciale Mobile, the
USA was never involved in it.

OK sure, my point was that nobody under the age of 35 in Europe uses it
for text messaging anymore, unlike the US. Regardless of its place of
invention

If you said under 25's I might agree. SMS has its advantages and you
generally get an infinite number of them included in any contract.

The main advantage is that you can have an asynchronous short
conversation with someone who doesn't need to answer their phone.

Banks here still use it as part of 2FA protocols. Bit of a nuisance if
you don't have a reliable mobile signal at home.

--
Regards,
Martin Brown
 
On Wednesday, February 21, 2018 at 7:58:28 AM UTC-5, David Brown wrote:
On 21/02/18 13:12, bitrex wrote:
On 02/20/2018 10:48 PM, John Larkin wrote:
On Tue, 20 Feb 2018 21:40:31 -0500, bitrex
bitrex@de.lete.earthlink.net> wrote:

On 02/20/2018 08:43 PM, Paul Hovnanian P.E. wrote:
Jim Thompson wrote:

https://www.nbcbayarea.com/news/local/Diseased-Streets-472430013.html


If they are smart, they won't wake up the hobos before running the
steam
cleaner down the sidewalk.


The news story claims "Over 100!" discarded IV needles found in the
areas they surveyed but you can easily click through all the "red"
streets in the map and see that the tally tops out at barely 50. And
there are big clumps of 7 or 10 needles in some locations.

Pay one or two bums a couple bucks to dump their spent needles in a
couple locations and hey presto you've got yourself a story.

And you have an army of well-paid attorneys, consultants, NGOs,
providers, and city staffers actually soaking up the funding.

We never go downtown. That's for bankers and tourists. Our
neighborhood is green, quiet, clean, and safe.

https://www.dropbox.com/s/kpl55nnziaubq9z/Ohlone_Way_3.jpg?raw=1

I've never seen a needle on Ohlone Way. You might get stuck picking
blackberries.



It's a histrionic "story" crafted to play well with histrionics of all
political persuasions, left or right few Americans can seem to resist a
good "OMG THINK OF THE CHILDREN!" tale.

Meanwhile I'd estimate statistics on the number of children who could be
confirmed to have caught the AIDS or any infectious disease from any
discarded IV needle in SF or any other large city for that matter is
likely pretty close to 0.

That statistic is probably accurate. However, it is pretty unpleasant
to find discarded needles around the place, especially if you are
talking about parks, schools, kindergartens, etc. This applies whether
you are a child, adult, or whatever.

Of course, it is not the only thing left lying around by inconsiderate
people - dog turds probably lead to far more infections (they used to be
the leading cause of childhood blindness), and cigarette stubs abound in
some places.


Also they seem to have inflated the number of needles found from the
data points on the map to the TV story by a factor of 2. Maybe some
would shrug and say so what it's bad either way but IMO the media
deserves to get the side-eye when they do things like that, regardless
of what the story is about.

There is the serial pooper phenomenon going on now. I know how to cure them permanently of their problem, it requires them to wear a bag for the rest of their life.
 
Joerg wrote:

On 2018-02-19 15:00, Lasse Langwadt Christensen wrote:
Den mandag den 19. februar 2018 kl. 23.53.49 UTC+1 skrev Joerg:
On 2018-02-19 14:41, Lasse Langwadt Christensen wrote:
Den mandag den 19. februar 2018 kl. 23.13.43 UTC+1 skrev Joerg:
On 2018-02-19 13:34, Clifford Heath wrote:

[...]


I think the LEDs try to detect the dimmer phase angle, and modify
their SMPS setting to draw the desired amount of power. There's
obviously filter time constants in there. The modern dimmers are
doing the same thing, and the control functions fight.
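For reference, the relation such a bulb has to infer is the standard one for a leading-edge phase-cut dimmer into a resistive load: the fraction of full power left after the triac fires at angle alpha into each half-cycle. A minimal sketch (the formula is the textbook resistive-load case; a real LED driver's mapping from angle to power is its own design choice):

```python
# Power fraction vs firing angle for a leading-edge dimmer,
# resistive load: integrate sin^2 from alpha to pi, normalized.
import math

def power_fraction(alpha):
    """Fraction of full power when conduction starts at angle
    alpha (radians) into each half-cycle."""
    return 1 - alpha / math.pi + math.sin(2 * alpha) / (2 * math.pi)

half = power_fraction(math.pi / 2)   # firing mid-cycle gives exactly 0.5
```

The curve is flat near the ends and steep in the middle, which is part of why the bulb's estimate and the dimmer's control loop can end up fighting.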


SMPS? In LED light bulbs? That would be like having gold-plated shafts
on the inside of a car transmission.

https://youtu.be/HNaU76L296Q?t=9m56s



That one flickers quite badly. The ones I dissected so far either
contained just passives or sometimes a couple of TO92 devices (the rest
also all through-hole). Which surprised me, because even my bicycle
lights contain buck converters.

on a bicycle light you don't have a lot of voltage to play with and you
need pretty good efficiency when running on batteries


Yes, it is remarkably efficient. It only gets warm when on 8W, at half
power it stays cool even on a summer day when not riding fast. The mains
powered LED lamps are efficient as well. We used to have a total of 150W
in the living room chandelier which is lit at least 4h/day. Now less
than 30W and nothing becomes hot anymore.
The real test would be to place a kill-a-watt tracking switch in the
line *before* the dimmer/lamp circuit, and see what is actually getting
used at each set point.
 
On 02/21/2018 04:08 AM, Steve Wilson wrote:
John Larkin <jjlarkin@highlandtechnology.com> wrote:

On Wed, 21 Feb 2018 04:51:28 GMT, Steve Wilson <no@spam.com> wrote:

John Larkin <jjlarkin@highlandtechnology.com> wrote:

On Wed, 21 Feb 2018 01:54:05 GMT, Steve Wilson <no@spam.com> wrote:
Amps are amps. The load step demonstrates the ringing, and the fix
for the ringing.

Your simulation shows considerably different ringing between current
rise and fall. So the currents matter.

Sure, the output transistor emitter has a very different impedance
from 30 to 200 mA. So the pole from that impedance into the ceramic
caps is different for the two currents.

You need to model the actual currents you are using. I suspect the
idle current may be much lower, and the actual charge current may be
higher.

I'd be happier to see the results with a pulsed 100ns 12 Amp load.

I did it for you. The compensation cap is critical and very different
from your result. See below.

That seems to be my sim; same currents, no ESR in the output cap, just
not as pretty. Any engineering doc should have a title, author, and
date.

Sorry, I picked the wrong file. You can see the original by changing the
current, pulse width and cap values as listed below.

This is a newsgroup discussion. The title is shown at the top of the
schematic. The date is the date of the post and is shown in the header. If
I put my name on the document, you will get pissed.

I tried adding 20 mohms ESR to the output caps in my sim. Nothing
changed; it rings badly without the added compensation, and doesn't
ring with the RC, or just the C, from ADJ to ground.

Pulsing at 10 amps for 100 ns, the results are about the same: lots of
ringing, fixed by adding the same comps.

The compensation parts help.

You may need to use a switched resistive load to provide some
damping. This may have a significant effect on the ringing.

Note the load transient response in Figs 3 and 4 of the TI datasheet
show a considerably different response than your model.

They probably use caps with a lot of ESR. And the models differ too.

You need to model the ESR.

You sure like to tell me what I need to do. But I don't report to you.

Sensitive? Not at all. Try the generic "you".

I increased the output cap from 12 uF to 20 uF to match your value, and
changed the pulse width from 100ns to 160ns to maintain the same dv.

The compensation cap was very difficult to optimize. You either get
underdamped with overshoot or overdamped with overshoot. But I ended up
with the same value as you - 20nF. So the response is sensitive to pulse
width, compensation cap and output cap values. Relatively small changes
have a big effect on the response.

The compensation cap ESR seems to have no effect. The output cap ESR has a
very significant effect on the shape of the response.

At the optimal compensation cap value, the results hardly depend on
pulse width at all.

I ran your sim with pulse widths from 50 ns to 1 us, and with a
compensation cap of 15 nF the droop is at most 70 mV over the whole
range, i.e. 0.23%, which isn't bad at all.

With no compensation cap, it's a mess for sure.

Cheers

Phil Hobbs


--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
https://hobbs-eo.com
 
On Tuesday, February 20, 2018 at 4:12:50 PM UTC-5, Ivan Shmakov wrote:
So, we were making some prototypes for Chebyshev passive filters
for the SW range, and found out that most of the time, the
resonance frequencies of the individual LC circuits there are
considerably lower than expected, with the apparent culprit
being the inductances of the custom coils we made, which seem
to be much higher (20% to 30%) in the operating range
(about 25 MHz) than both the design values and the values as
measured on a 200 kHz RLC meter. (Say, about 93 nH @ 25.5 MHz
vs. 83 nH @ 200 kHz.)

The question is, is it some well-known effect, or am I mistaking
something else for the perceived change in inductance?

(I understand that at higher frequencies the stray capacitance
effectively turns a coil into a parallel LC circuit by itself,
but the resulting self-resonance lies well above 100 MHz, so I
guess it shouldn't make much difference at 25 MHz.)

why would you guess that?

at the SRF the stray cap will make the inductor look like __infinity__.

At 25 MHz the effect is still significant and will increase the apparent inductance.

I would take inductance readings at several frequencies and examine the slope of the inductance vs freq. for a clue.

Or add an additional known value of "stray cap" and observe the effect on measured inductance.

I think the issue IS the stray capacitance.
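The figures quoted in the original post even let you back out the implied SRF, using the standard below-resonance correction L_app = L0 / (1 - (f/f_srf)^2). A minimal sketch with those numbers (83 nH at 200 kHz, 93 nH at 25.5 MHz):

```python
# Below self-resonance, stray capacitance inflates the measured
# inductance; invert that relation to find the SRF the data implies.
import math

def apparent_inductance(l0, f, f_srf):
    return l0 / (1 - (f / f_srf) ** 2)

def implied_srf(l0, l_app, f):
    """Solve L_app = L0 / (1 - (f/f_srf)^2) for f_srf."""
    return f / math.sqrt(1 - l0 / l_app)

srf = implied_srf(83e-9, 93e-9, 25.5e6)   # ~78 MHz
```

That comes out near 78 MHz, noticeably below the "well above 100 MHz" the original poster assumed, which supports the stray-capacitance explanation.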


mark
 
John Larkin wrote:

snip

George H.

I want to dimly light up the 120V LED bulb that's already in the
overhead fixture in the shower. I can't run more wires.

Where is the wall switch at that turns it on? Or if a permanent
dimming is desired, place a pre-set dimmer right on the line and hang it
inside the wall.

More likely the wire that feeds the "overhead" portion of that stall
is actually accessible in a nearby vertical wall. So if one is not
present, you could add a switch/dimmer.
 
Winfield Hill wrote:

Neon John wrote...

I haven't gotten anything in the last couple of weeks.

Ah, we're dead. That would explain a lot.


You have books... So you must be a dead poet. ;-)
 
jurb6006@gmail.com wrote:

"Alive and well, though the noise is bigger than the signal.
(If you block google groups posts you won't see this.) "

I think quite a few people block Google posts because spammers use it. It is free and all you need to do is create another email address and you're in business again if they shut you down. Extra email addresses are also useful for other things to spammers.

I block nothing, and I am not seeing Usenet being used as a SPAM
portal much any more... if it ever was.
 
Chris wrote:

On 17/02/2018 2:50 AM, Long Hair wrote:
bitrex wrote:

On 02/16/2018 10:09 AM, Phil Hobbs wrote:
https://www.newyorker.com/cartoon/a21502

Cheers

Phil Hobbs


Nihilist consulting: the process of doing work that doesn't need to be
done, for a customer that doesn't know what he needs, for a product that
doesn't need to exist

I can tell you what is wrong with that claim for a small fee...

http://dilbert.com/strip/1994-05-09

For some reason, that reminds me of Donald Trump. I think the
character with the liver is his dopey son.
 
On 02/20/2018 10:14 PM, John Larkin wrote:
On Tue, 20 Feb 2018 19:01:33 -0800 (PST), tabbypurr@gmail.com wrote:

I'm looking at putting together something similar to a Class AB/B audio amp, but it will be driven outside its linear range into saturation a lot of the time. That's all well & good but for one thing: wrapping nfb round saturating outputs doesn't work too well as it takes time for output devices to unsaturate, and the nfb effectively overreacts, adding distortion. Keeping distortion low matters here. What tips would you recommend to keep unwanted distortion minimised?


thanks, NT

Lots of rail-rail output opamps come off the rails clean and fast. One
architecture has a lot of wideband gain inside the amp, and the major
loop compensation is a cap from the output pin back into a late gain
stage. So there's no internal, buried compensation pole to wind up
when the output stage clips.
Those ones (e.g. the AD8605, a fave) also have much lower open-loop
output impedance, because of course the local feedback means they aren't
really running open loop. I use those ones for driving single-ended
ADCs a fair amount, and they work great.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
https://hobbs-eo.com
 
On Tuesday, February 20, 2018 at 9:58:56 PM UTC-5, Winfield Hill wrote:
even robots deserve a chance.

1.
https://techcrunch.com/2018/02/20/humans-sow-seeds-of-destruction-by-abusing-poor-robot-just-trying-to-walk-through-a-door/

2.
https://techcrunch.com/2018/02/12/boston-dynamics-newest-robot-learns-to-open-doors/

if a robot takes my job, it should also have to pay my taxes
m



--
Thanks,
- Win
 
On Thursday, February 22, 2018 at 12:50:32 AM UTC+11, Long Hair wrote:
bill.sloman@ieee.org wrote:

My guess would be that these would be houses that were close to a sub-station that also supplied houses that were much further away ...

That is not how power distribution works. The subs provide HV right
to the residential areas. The local branch feed transformers can all
adjust for differences in the HV line to attain the proper local feed
voltage, but that HV reading is seldom more than a few tens of volts out
of thousands.

Now, had you said something about being far away from the final LV AC
transformer that feeds a residential branch, THEN the voltage at the end
of a feed can be a few volts down, and out of only 240, that makes a
bigger difference.

That is what I had in mind.

There have been news stories about people who got very short lives out of their incandescent bulbs because their supply voltage was higher than it should have been - being at the wrong end of a long feed is a plausible explanation, but my guess is that most of them were down to some idiot setting up the final LV AC transformer wrong.

--
Bill Sloman, Sydney
 
bill.sloman@ieee.org wrote:

Google does let you mark posts as spam (or other forms of abuse), and hides them from you thereafter. Whether they delete them if enough people mark them as abuse is something I don't know.

They delete nothing (save criminal material).

USERS filter posts.

I think it is quite retarded as I have every post and they are easy to
wade through.

That is how I know the assholes from the intelligent folk, because
instead of filtering them, essentially ignoring them, I examine their
petty non-sense, and they dislike my tone with them and filter me.

The fact that they filter at all, means data is lost. I think my
quantum brain can handle a little trash. It is a hell of a lot better
than being some of the trash. (waves at krw)
 
