Maximum Power Point Tracking: Optimizing Solar Panels, by Maya Posch

> A cheap Chinese meter might be truly cheap and nasty, and correspondingly dangerous, but anybody who sold it to you would risk being sued if it was.

Even for the old model with the HV 1000 V range, there is a warning label on the back: "Do not test voltage over 250 volts". Defendant counters that plaintiff did not read the warning label.

If the item was given out free, there is no damage.
 
On Sunday, December 18, 2022 at 7:20:41 AM UTC-8, Fred Bloggs wrote:
"The Department of Justice prides itself on exercising independent prosecutorial discretion but they can't turn a blind eye to a coequal branch of government that has done such an exhaustive investigation when they pass all of that evidence over to [the] DOJ and they recommend criminal investigations and prosecutions," he said.

"It is great that they're talking about obstructing official proceedings, that's a 20-year offense. It's great that they are talking about conspiracy to commit offenses against the United States, that's a 5-year offense," Kirschner said Friday.

https://www.newsweek.com/trump-may-face-25-years-prison-blocked-future-office-kirschner-1767910

It's the nuance and exactitude required for criminal prosecution that's responsible for spending a million person-hours on finally producing charges against the Times Square fried chicken pushcart operator.

And the Earth MAY fall out of orbit and crash into the Sun. Try dealing with REALISTIC possibilities.
 
 
whit3rd wrote:
On Thursday, January 5, 2023 at 6:12:20 PM UTC-8, Phil Hobbs wrote:
whit3rd wrote:
On Thursday, January 5, 2023 at 1:41:38 PM UTC-8, Phil Hobbs wrote:
whit3rd wrote:

[about FFT/divide/inverseFFT deconvolution]

...the FFT algorithm has no mechanism to
accept data with non-constant significance, which is what, obviously,
happens with a divide-by-almost-zero step in the data processing.
It's gonna give you what the 'signal' says, not what the 'signal' and known
signal/noise ratio tell you. That means using an FFT for the inverse is
excessively noise-sensitive. There's OTHER ways to do a Fourier inversion
that do allow the noise estimate its due influence.

The problem has nothing to do with the FFT, and everything to do with
what you\'re trying to do with it. Dividing transforms is a perfectly
rational way to deconvolve, provided you take into account the
finite-length effects and prepare the denominator correctly.

Think again; an FFT algorithm implements least-squares fitting, essentially;
Bollocks. An FFT is an information-preserving operation, unlike
least-squares fits.
there\'s zero difference between the transform\'s inversion and the original data,
Right, i.e. it's not a least-squares fit, it's exact.

No, it IS BOTH a least-squares fit, and an exact one.

which (zero) is obviously the minimum of sum-of-squares-of-differences.
You're maybe thinking of a continuous-time orthonormal-function
expansion, e.g. a Fourier or Bessel or Chebyshev series. In that case,
_truncating_ the series leads to the least-squares optimum for that
order. Least squares optima tend not to be that useful--the infamous
"Gibbs phenomenon" being a typical example.
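For what it's worth, the "exact" part is easy to check numerically; a quick sketch (not from the thread, numpy assumed):

```python
import numpy as np

# A DFT round trip reconstructs the data to machine precision: the
# "fit" residual is essentially zero, which is trivially also the
# least-squares minimum both posters are arguing about.
rng = np.random.default_rng(1)
x = rng.standard_normal(256)
x_back = np.fft.ifft(np.fft.fft(x)).real

print(np.max(np.abs(x_back - x)) < 1e-12)  # True
```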

But there are a lot, a lot of ways of producing a finite expansion that
don't have that problem, just as there are all sorts of ways of
controlling noise gain in deconvolution.

Dividing by an inverse function is precisely a weighting operation.

Not true unless the 'inverse function' is real nonnegative numbers (i.e.
statistical weights). That deconvolution division is complex.

So what? Your weighting function isn't allowed to have a phase angle?
Mine is. ;)


But, it's not correct if the standard deviations of the elements are not identical,
because it IS minimizing sum-of-squares of differences, rather than the
(correct) sum of (squares-of-differences/sigma-squared-of-this-element).

What makes that the One True Algorithm? Why assume that all frequencies
have to have the same SNR? That's not at all common in real life, and
it's often a win to sacrifice a significant amount of SNR for improved
resolution, data rate, or what have you. It's horses for courses.

The theorem that least-sum-of-squared-difference is best, is
(see Matthews and Walker for the full treatment) easily derived
by assuming a Gaussian noise profile.

Soooo, exactly why is it best? Name dropping isn't an argument.

The FFT algorithm DOES assume the same noise level, OR is an incorrect
fit in the minimum-least-squares sense.

Listen, man, the FFT just takes a finite-length sampled function and
transforms it to and from the discrete frequency domain. In itself, it
has nothing to do with fitting, or least squares, or noise, or anything
else. You can build fitting, or filtering, or lots of kinds of
operations by using FFTs down in the guts someplace, but that ain't
anything FFT-specific.

You cannot call a mechanical
FFT any reasonable least-squares fit in other cases. Being invertible,
the hash it makes of mixing noise in the forward transform fails to
show up in a simple-inversion reverse transform, but it IS a
statistical error to do it thus.

Show me. It sounds like you haven't implemented a correct signal
processing scheme of that sort, at least not recently. If you don't
account for the effects of the finite length of the transform,
specifically overlap errors due to failing to zero-pad, you'll wind up
with nonsense. But that isn't the FFT's fault.

I don't know if you have a copy of my book, but a major emphasis of
Chapter 17 on DSP is how to do DSP and still preserve the
continuous-time notions of bandwidth, SNR, and so on in the process.

The problem does not follow S/N ratio arguments, only absolute noise
estimates on a per-sample basis.

But you're insisting that there's One True Algorithm for everything, and
that's nonsense.

And none of that has anything to do with how you perform the actual
deconvolution.

Unless it does... and there's a good argument for slow-transform calculation here.
I'd be interested to hear it.

Cheers

Phil Hobbs


--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
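For readers following along: the zero-padding and "prepared denominator" Phil mentions can be sketched as a Wiener-style regularized division. The numbers and the eps noise estimate below are illustrative, not from the thread:

```python
import numpy as np

rng = np.random.default_rng(0)

# True signal (two spikes) and a known smoothing kernel.
x = np.zeros(64)
x[20] = 1.0
x[35] = 0.5
h = np.array([0.25, 0.5, 0.25])

# Linear convolution plus a little measurement noise.
y = np.convolve(x, h) + 0.01 * rng.standard_normal(64 + len(h) - 1)

# Zero-pad both transforms to the full linear-convolution length, so
# circular overlap does not corrupt the result.
n = len(y)
Y = np.fft.rfft(y, n)
H = np.fft.rfft(h, n)

# Regularized ("prepared") denominator: bins where H is nearly zero are
# damped instead of amplifying noise.  eps is an assumed noise power.
eps = 1e-3
X_hat = Y * np.conj(H) / (np.abs(H) ** 2 + eps)
x_hat = np.fft.irfft(X_hat, n)[:64]

print(int(np.argmax(x_hat)))  # recovered spike lands at sample 20
```

With eps set to zero this degenerates to the naive divide-by-almost-zero scheme whit3rd objects to; the regularization term is exactly where the noise estimate gets "its due influence."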
 
 
On Saturday, January 7, 2023 at 11:21:02 AM UTC-8, Don Y wrote:
On 1/7/2023 12:18 PM, Don Y wrote:
Amusing that I don't see any hardware types advocating for
building hardware to provide the same level of functionality
that one expects from *inexpensive* software!

Of course, some applications would be trivial to implement!
SPICE would just be a bag of components ("Here is the model
for the 4K7 resistor") and a soldering iron. Never have
to worry about bugs -- or upgrades -- ever again!

Thus, the analog computer is reborn! No better
simulation of analog devices need ever be sought, accuracy-wise,
but there are still the familiar analog computer drawbacks: such a
computer is strictly Harvard architecture, no self-modifying
code allowed.
 
 
Three Jeeps wrote:
On Tuesday, January 10, 2023 at 5:57:10 PM UTC-5, Joerg wrote:
On 1/10/23 8:22 AM, Phil Hobbs wrote:
Phil Hobbs wrote:
Joerg wrote:
On 1/2/23 5:57 PM, Phil Hobbs wrote:
John Larkin wrote:
On Mon, 2 Jan 2023 11:00:52 -0800, Joerg
ne...@analogconsultants.com> wrote:

On 1/1/23 11:08 PM, Jan Panteltje wrote:
[...]
In the EE school I was in it was known that only
'hobbyists' would pass the final exams. The dropout
in the first year was very very very high.


At my university the drop-out rate (start to degree)
was at times 83%.

Too many kids selected an EE degree based on some high
school counselor's advice, or dreams of a tidy income.
Too late.

I dunno. Washing out of a hard program isn't the worst
thing that can happen to a young person. It's not nearly
as bad as hanging on by the skin of your teeth and then
failing over a decade or so in the industry.

The old saying, "C's get degrees" has caused a lot of
misery of that sort.


I had pretty bad grades because I worked a lot on the side,
did "pre-degree consulting" and stuff like that. Bad grades
are ok.

In an honest system, bad grades mean that the student either
didn't do the work, or was unable or unwilling to do it well.
There can be lots of reasons for that, such as being
unavoidably too busy, but that's not the usual case.

The result is wasted time and money, and usually a skill set
that's full of holes and harder to build on later. It sounds
like you were sort of making up your own enrichment curriculum
as you went on, which is a bit different, of course.

I really lost interest in attending university lectures after a
few things were taught by professors that were profoundly wrong.
The first one was that RF transmitters must have an output
impedance equal to the impedance of the connected load or cable.
The week after I brought in the schematic of a then-modern
transistorized ham radio transceiver and pointed out the final
amplifier. The professor didn't really know what to say.

Number two: The same guy said that grounded gate circuits in RF
stages make no sense at all. Huh? I did one of those during my very
first job assignment when the ink on my degree was barely dry. And
lots before as a hobbyist.

Number three: Another professor said that we only need to learn all
this transistor-level stuff for the exam. Once we graduated this
would all be obsoleted by integrated circuits. That one took the
cake. Still, it seemed I was the only one who didn't believe such
nonsense. However, it provided me with the epiphany "Ha! This is my
niche!". And that's what it became. Never looked back.

This was at a European ivy league place which made it even more
disappointing.

<snip>
umm "The first one was that RF transmitters must have an output
impedance equal to the
impedance of the connected load or cable."
I am not an 'RF' guy but have dabbled with ham radio designs, and did
do audio amp designs. I clearly remember circuit analysis being done
to ensure that impedance matching was done because it is essential
for maximum power transfer. So how is that wrong? The fact that you
had a counter example doesn\'t make the theory wrong, just the
counterexample.

The theory makes assumptions that are sometimes unphysical, specifically
that the source can be accurately described by a Thevenin equivalent
circuit of an infinitely-stiff voltage source in series with a fixed
source impedance. In situations where that\'s more or less true, the
theory works fine, but in real life that\'s not how you design circuits
when efficiency matters.

The problem with that assumption is maybe easier to see if you consider
an op amp. Near DC, a unity-gain follower probably has an output
impedance of a milliohm or so, and might have an output swing of 12 V,
depending on the supplies. With the prof's assumptions, the maximum
available power would be

P_matched = (12 V)**2 / 0.004 ohms = 36 kW.

Maybe not. ;)
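(The 0.004 ohms in Phil's denominator is 4 times the ~1 milliohm source impedance, from the matched-load formula. A one-line sketch of the arithmetic, variable names illustrative:)

```python
# Maximum available power from a Thevenin source into a matched load:
# P = Voc**2 / (4 * Rs).  Numbers from the op-amp example above.
Voc = 12.0  # open-circuit output swing, volts
Rs = 1e-3   # follower output impedance, ohms (~1 milliohm)
P_matched = Voc**2 / (4 * Rs)
print(f"{P_matched / 1e3:.0f} kW")  # 36 kW -- obviously not what an op amp delivers
```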

On the third point, I don't think he was wrong, just very
narrow-minded. In one of my digital logic design courses various methods
of gate minimization were beaten into us (K-maps, prime implicants, etc).
plentiful. Twenty years later I remember doing gate minimization for
PALs....

Knowing a bit of symbolic logic is good for the mind, anyway, and can
help in all sorts of situations. Last year I designed a temperature
controller that uses wire-NOR to save parts in a four-quadrant current
limiter. (The four quadrants had to have unequal limits, for TE cooler
datasheet reasons.)

Cheers

Phil Hobbs


--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
Three Jeeps wrote:
On Tuesday, January 10, 2023 at 5:57:10 PM UTC-5, Joerg wrote:
On 1/10/23 8:22 AM, Phil Hobbs wrote:
Phil Hobbs wrote:
Joerg wrote:
On 1/2/23 5:57 PM, Phil Hobbs wrote:
John Larkin wrote:
On Mon, 2 Jan 2023 11:00:52 -0800, Joerg
ne...@analogconsultants.com> wrote:

On 1/1/23 11:08 PM, Jan Panteltje wrote:
[...]
In the EE school I was in it was known that only
\'hobbyists\' would pass the final exams. The dropout
in the first year was very very very high.


At my university the drop-out rate (start to degree)
was at times 83%.

Too many kids selected an EE degree based on some high
school counselor\'s advice, or dreams of a tidy income.
Too late.

I dunno. Washing out of a hard program isn\'t the worst
thing that can happen to a young person. It\'s not nearly
as bad as hanging on by the skin of your teeth and then
failing over a decade or so in the industry.

The old saying, \"C\'s get degrees\" has caused a lot of
misery of that sort.


I had pretty bad grades because I worked a lot on the side,
did \"pre-degree consulting\" and stuff like that. Bad grades
are ok.

In an honest system, bad grades mean that the student either
didn\'t do the work, or was unable or unwilling to do it well.
There can be lots of reasons for that, such as being
unavoidably too busy, but that\'s not the usual case.

The result is wasted time and money, and usually a skill set
that\'s full of holes and harder to build on later. It sounds
like you were sort of making up your own enrichment curriculum
as you went on, which is a bit different, of course.

I really lost interest in attending university lectures after a
few things were taught by professors that were profoundly wrong.
The first one was that RF transmitters must have an output
impedance equal to the impedance of the connected load or cable.
The week after I brought in the schematic of a then-modern
transistorized ham radio transceiver and pointed out the final
amplifier. The professor didn\'t really know what to say.

Number two: The same guy said that grounded gate circuits in RF
stages make no sense at all. Huh? I did one of those during my very
first job assignment when the ink on my degree was barely dry. And
lots before as a hobbyist.

Number three: Another professor said that we only need to learn all
this transistor-level stuff for the exam. Once we graduated this
would all be obsoleted by integrated circuits. That one took the
cake. Still, it seemed I was the only one who didn\'t believe such
nonsense. However, it provided me with the epiphany \"Ha! This is my
niche!\". And that\'s what it became. Never looked back.

This was at a European ivy league place which made it even more
disappointing.

<snip>
umm \"The first one was that RF transmitters must have an output
impedance equal to the
impedance of the connected load or cable. \"
I am not an \'RF\' guy but have dabbled with ham radio designs, and did
do audio amp designs. I clearly remember circuit analysis being done
to ensure that impedance matching was done because it is essential
for maximum power transfer. So how is that wrong? The fact that you
had a counter example doesn\'t make the theory wrong, just the
counterexample.

The theory makes assumptions that are sometimes unphysical, specifically
that the source can be accurately described by a Thevenin equivalent
circuit of an infinitely-stiff voltage source in series with a fixed
source impedance. In situations where that\'s more or less true, the
theory works fine, but in real life that\'s not how you design circuits
when efficiency matters.

The problem with that assumption is maybe easier to see if you consider
an op amp. Near DC, a unity-gain follower probably has an output
impedance of a milliohm or so, and might have an output swing of 12 V,
depending on the supplies. With the prof\'s assumptions, the maximum
available power would be

P_matched = (12 V)**2 / 0.004 ohms = 36 kW.

Maybe not. ;)

On the third point, I don\'t think he was wrong, just very narrow
minded. In one of my digital logic design courses various methods of
gate minimization\' were beat into us (K-maps, prime implicates, etc).
Thought it was foolish, after all, IC gates were cheap, fast,
plentiful. Twenty years later I remember doing gate minimization for
PALs....

Knowing a bit of symbolic logic is good for the mind, anyway, and can
help in all sorts of situations. Last year I designed a temperature
controller that uses wire-NOR to save parts in a four-quadrant current
limiter. (The four quadrants had to have unequal limits, for TE cooler
datasheet reasons.)

Cheers

Phil Hobbs


--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
Three Jeeps wrote:
On Tuesday, January 10, 2023 at 5:57:10 PM UTC-5, Joerg wrote:
On 1/10/23 8:22 AM, Phil Hobbs wrote:
Phil Hobbs wrote:
Joerg wrote:
On 1/2/23 5:57 PM, Phil Hobbs wrote:
John Larkin wrote:
On Mon, 2 Jan 2023 11:00:52 -0800, Joerg
ne...@analogconsultants.com> wrote:

On 1/1/23 11:08 PM, Jan Panteltje wrote:
[...]
In the EE school I was in it was known that only
\'hobbyists\' would pass the final exams. The dropout
in the first year was very very very high.


At my university the drop-out rate (start to degree)
was at times 83%.

Too many kids selected an EE degree based on some high
school counselor\'s advice, or dreams of a tidy income.
Too late.

I dunno. Washing out of a hard program isn\'t the worst
thing that can happen to a young person. It\'s not nearly
as bad as hanging on by the skin of your teeth and then
failing over a decade or so in the industry.

The old saying, \"C\'s get degrees\" has caused a lot of
misery of that sort.


I had pretty bad grades because I worked a lot on the side,
did \"pre-degree consulting\" and stuff like that. Bad grades
are ok.

In an honest system, bad grades mean that the student either
didn\'t do the work, or was unable or unwilling to do it well.
There can be lots of reasons for that, such as being
unavoidably too busy, but that\'s not the usual case.

The result is wasted time and money, and usually a skill set
that\'s full of holes and harder to build on later. It sounds
like you were sort of making up your own enrichment curriculum
as you went on, which is a bit different, of course.

I really lost interest in attending university lectures after a
few things were taught by professors that were profoundly wrong.
The first one was that RF transmitters must have an output
impedance equal to the impedance of the connected load or cable.
The week after I brought in the schematic of a then-modern
transistorized ham radio transceiver and pointed out the final
amplifier. The professor didn\'t really know what to say.

Number two: The same guy said that grounded gate circuits in RF
stages make no sense at all. Huh? I did one of those during my very
first job assignment when the ink on my degree was barely dry. And
lots before as a hobbyist.

Number three: Another professor said that we only need to learn all
this transistor-level stuff for the exam. Once we graduated this
would all be obsoleted by integrated circuits. That one took the
cake. Still, it seemed I was the only one who didn\'t believe such
nonsense. However, it provided me with the epiphany \"Ha! This is my
niche!\". And that\'s what it became. Never looked back.

This was at a European ivy league place which made it even more
disappointing.

<snip>
umm \"The first one was that RF transmitters must have an output
impedance equal to the
impedance of the connected load or cable. \"
I am not an \'RF\' guy but have dabbled with ham radio designs, and did
do audio amp designs. I clearly remember circuit analysis being done
to ensure impedance matching, because it is essential for maximum
power transfer. So how is that wrong? The fact that you had a
counterexample doesn\'t make the theory wrong, just the
counterexample.

The theory makes assumptions that are sometimes unphysical, specifically
that the source can be accurately described by a Thevenin equivalent
circuit of an infinitely-stiff voltage source in series with a fixed
source impedance. In situations where that\'s more or less true, the
theory works fine, but in real life that\'s not how you design circuits
when efficiency matters.

The problem with that assumption is maybe easier to see if you consider
an op amp. Near DC, a unity-gain follower probably has an output
impedance of a milliohm or so, and might have an output swing of 12 V,
depending on the supplies. With the prof\'s assumptions, the maximum
available power would be

P_matched = (12 V)**2 / (4 * 0.001 ohm) = 36 kW.

Maybe not. ;)
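A quick sanity check of that figure, taking the milliohm-or-so output impedance above at face value:

```python
# Maximum-power-transfer sanity check for the op-amp example above.
# Assumes an ideal Thevenin source: 12 V swing, 1 milliohm source impedance.
V = 12.0         # volts, output swing
R_source = 1e-3  # ohms, closed-loop output impedance near DC

# With a matched load (R_load = R_source), P = V**2 / (4 * R_source)
P_matched = V**2 / (4 * R_source)
print(f"P_matched = {P_matched/1000:.0f} kW")  # 36 kW -- clearly unphysical
```

The absurd answer comes from treating the milliohm figure as a real power-limiting source impedance, which it isn't; it's just feedback making the output look stiff.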

On the third point, I don\'t think he was wrong, just very
narrow-minded. In one of my digital logic design courses, various
methods of gate minimization were beaten into us (K-maps, prime
implicants, etc.). I thought it was foolish; after all, IC gates were
cheap, fast, and plentiful. Twenty years later I remember doing gate
minimization for PALs....

Knowing a bit of symbolic logic is good for the mind, anyway, and can
help in all sorts of situations. Last year I designed a temperature
controller that uses wire-NOR to save parts in a four-quadrant current
limiter. (The four quadrants had to have unequal limits, for TE cooler
datasheet reasons.)
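The K-map exercises mentioned above boil down to Boolean identities you can brute-force check over the truth table; a toy example (not one from the thread):

```python
from itertools import product

# Verify a textbook K-map simplification by exhaustive truth table:
# f(a,b,c) = a*b + a*~b + ~a*b*c  simplifies to  a + b*c
def f_original(a, b, c):
    return (a and b) or (a and not b) or ((not a) and b and c)

def f_minimized(a, b, c):
    return a or (b and c)

# Equivalent on all 8 input combinations, so the minimization is valid.
assert all(f_original(a, b, c) == f_minimized(a, b, c)
           for a, b, c in product([False, True], repeat=3))
print("equivalent over all 8 input combinations")
```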

Cheers

Phil Hobbs


--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On 04/01/2023 14:54, bitrex wrote:
On 1/4/2023 9:52 AM, bitrex wrote:
On 1/3/2023 7:30 PM, Phil Hobbs wrote:

I agree that knowing the fundamentals cold is very important.
However, (a) physics isn\'t for everyone, by a long chalk; and (b)
there\'s a glorious intellectual heritage in engineering, so calling
it \'vocational training\' is pejorative.

Cheers

Phil \"Intermediate energy state\" Hobbs


Advanced engineering mathematics:

https://www.ebay.com/itm/194964206310

Which is pretty advanced, I don\'t know how many BS-type EEs know about
the orthogonality of Bessel functions, or regularly use contour
integration for anything.

I once used contour integration to obtain a fringe field correction on a
mass spectrometer magnet. The objective was to take out the first order
aberrations and make the focal plane orthogonal to the optic axis.

It was one of the first electromagnetic optics codes where the magnitude
of the predicted voltages on electrodes was sometimes right. Prior to
that you were lucky if it had the right sign! The original code came off
a mainframe and was intended for designing atom smashers. A listing
arrived at the company from academia with my new boss.

Physics was mainly into Chebyshev polynomials for solving wavefunction
equations since it housed one of the world experts in the field.

But not as advanced as \"Advanced Mathematical Methods for Scientists &
Engineers\", which is largely about perturbation methods, boundary
layer theory, and WKB approximations. Sounds fun I guess, I just got a
used copy from Amazon for $8

I would expect stuff like the WKB approximation is regularly used more
in optics design than in circuit design, though.

A bit like Green\'s function I\'m inclined to think that WKB is seldom
used at all now that we have very fast raytracers on the desktop PC. It
may still be taught at undergraduate level today but mainly to weed out
those that are not going to make it as a theoretical physicist (which is
where it was used back in my day as an undergraduate).

Padé rational approximation methods are undergoing something of a
Renaissance. Things go in cycles. I keep waiting for Clifford Algebras
to take off as my supervisor promised they soon would (~2 decades ago).

Things which do have an important place in modern software that is
intended to be provably correct are invariants (borrowed from physics).

--
Regards,
Martin Brown
 
On 1/5/2023 6:42 AM, Dan Purgert wrote:
On 2023-01-05, Don Y wrote:
On 1/5/2023 5:07 AM, Dan Purgert wrote:
[...]
On the other hand; I already have a (weak) grasp of what a (linear?)
power-supply needs to have -- rectifier -> smoothing caps -> (I think) a
handful of transistors & resistors to get the desired voltage -> output
capacitor(s). And this (hopefully) gets me a little more understanding
of using transistors as more than just switches ... well, maybe.

Yes, but, at the end of the day, you\'re reinventing something that
a gazillion manufacturers sell in a bazillion different varieties
for less money than the postage on the parts you\'ll order to roll
your own.

Yep. And at the end of the day, when I memorized my 1-12 times tables in
grade school \"because you won\'t always have a calculator in your
pocket!\" ...

That\'s a different sort of skillset -- one that is more universal.
(we didn\'t have calculators when I was a kid. And, we also had to
memorize the squares up to 20! As well as the preamble to the
constitution, etc. -- never know when THAT might come in handy :< )

Again, this is more of a \"homework project(tm)\". Yeah, it\'s been done a
million times; yeah, I could just buy a linear regulator IC ...

If you scale down your current requirement, you can make a crude
regulator with a *stiff* input filter, a biased zener and a pass
transistor. The zener is chosen to be a diode drop (the base-emitter
junction of the pass transistor) above the desired voltage. It
drives the base of the (NPN) pass transistor -- which gives you
current gain. The diode drop from base to emitter \"subtracts\"
that from the zener voltage to give you the desired output
voltage.

With everything \"exposed\" like this, you can see what happens
to the base of the transistor as the input filter sags.
If it sags too low, the zener drops out and the pass transistor
just \"looks\" like a diode (drop) in series with the filter
voltage.

(and, of course, there\'s nothing there to carry the load)

As the input filter voltage is increased, the transistor
is called on to drop more voltage (collector to emitter)
to achieve the desired output voltage. So, it dissipates
more power.

(the zener voltage will also increase, slightly, because
of the added bias current flowing through it so the output
voltage will appear to inch up)

It\'s a relatively simple circuit but lets you see how things interact.
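Putting illustrative numbers on that regulator (all component values here are hypothetical, not from the post):

```python
# Rough numbers for the zener + NPN pass-transistor regulator sketched above.
# All values are illustrative assumptions.
V_zener = 5.6   # volts, chosen ~one diode drop above the desired output
V_be = 0.7      # volts, base-emitter drop of the NPN pass transistor
V_in = 12.0     # volts at the input filter
I_load = 0.1    # amps drawn by the load

V_out = V_zener - V_be            # emitter follows the base minus V_BE
P_pass = (V_in - V_out) * I_load  # dissipated in the pass transistor

print(f"V_out  = {V_out:.1f} V")   # 4.9 V
print(f"P_pass = {P_pass:.2f} W")  # grows as the input voltage rises
```

Raising V_in just increases the collector-emitter drop, so all the extra power ends up as heat in the transistor, which is the interaction described above.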

Ages ago, there was a \"teaching toy\" (LECTOR or something like that)
that consisted of discretes packaged in little plastic \"sugar
cubes\" with contacts on the sides and schematic symbol on top.
A magnet held each to a metallic base. So, you could \"wire\"
a circuit SCHEMATICALLY just by abutting these little sugar
cubes and see it work.

If you have a \'scope, it can be a good exercise -- if only to let you
see how the \"signal\" changes as it moves through the circuit. And,
how it reacts to differing loads (e.g., see the ripple on the input
filter increase when the filter is having to supply a larger load
reflected through the regulator. What happens if you change the value
of the filter? Or, use a half-wave rectifier?)

That\'s more like where my thought process was going with this \"homework
project\". Fiddle with it, see what happens \"inside\" an LM338 (etc),
and, well, hopefully learn a thing or two. While it\'s not a scope, I
have one of those USB logic-analyzers that \"can\" do analog readings as
well, so we\'ll see how it works out.

Just be sure you don\'t exceed the voltage range of the analyzer.
You\'d hate to lose something of value because it \"accidentally\"
saw a voltage that it couldn\'t handle.

Worst case, I have a powersupply that I\'m less concerned about blowing a
part inside, because, well, I have the schematic and the parts and ...

[I don\'t mean to discourage you. Rather, hoping you find something
to tackle that leads to an \"aha\" moment -- which tends to cause you to
crave more such moments.]

Yes, I am craving \"aha-moments\" with analog. It\'s pure wizardry seeing
a couple long wires (plus some other components) pull enough energy out
of thin air, that I can listen to a broadcast ...

Programming micros is fun enough too; but I\'m finding I\'m pulling away
from C, and into Assembly, just for the chance at those same \"aha
moments\".

A language is a language. The differences are the level of abstraction
supported and the design methodologies targeted. With ASM, you can
essentially do *anything* (after all, every compiler eventually causes
\"machine instructions\" to be executed). But, you HAVE TO do everything!

By contrast, higher-level languages (it\'s debatable just how \"high\"
C is on that scale) relieve you of varying degrees of detail.

E.g., in some languages, there is a \"runtime\" that will do things like
garbage collect automatically so you don\'t have to track dynamic
memory usage. The goal being to keep you focused on the problem
you are solving, not the mechanism that is solving it.

[...]
That\'s why I suggested looking at the problem differently. Instead
of buying something that someone else has claimed is a \"moisture sensor\",
think about how moisture affects things and how those effects might be
detected.

Mhm, right now I\'m using the module\'s circuit (plus that blog that tore
it apart) as that jumping off point.

At the moment, I know it\'s a 555 running in astable mode, with the
\"sensor\" part being a pair of traces on the PCB acting as a capacitor.
Best I can figure at the moment (and not having understood that blog I\'m
reading... or the full implications of the 555 datasheet), those
\"capacitor-of-pcb-traces\" will potentially vary the 555\'s duty cycle,
which means the cap on the output will have more time to bleed through
its resistor... Or maybe once I get the understanding, I\'ll see that the
\"sensor\" part is actually on the output, rather than the input...

You can buy \"humidity sensors\" that work on capacitance changes.
From what I recall, they weren\'t very robust.

For example, *hair* stretches when wet. Can you conceive of a way to
sense this elasticity?

Could probably do something with a load cell ... soak the hair, stretch
it in the cell (but cell still reads zero), as it dries, the cell will
deflect.

In humidifiers, I think the hair just tugs on a mechanical contact
closure; when humid enough, the contact opens.

Water is sensed in fuels by noting changes in conductivity.

Yeah, I have the feeling a resistive sensor would corrode in no time
flat...

[...]
So, back to this moisture sensor project. As you said, there are a
billion different \"module\" things out there, with half again as many
sloppily written \"tutorials\" for their use (bleh).

Thankfully, I was pointed to one blog (whose name escapes me at the
moment, and I can\'t find it in my browser history after a quick search,
so I must\'ve bookmarked it on the currently dead tablet...), where they
actually took a dive into the module and showed what it did, and how
(and a good bit of it sailed over my head -- but there were so many
links to references).

I found the use of chilled mirrors to sense dewpoint to be an \"aha\"
moment. Almost comical.

yay, getting out of a hot shower :)

Essentially, that\'s how they work. You control the temperature
of a mirror (heat/cool) and watch to see when the mirror \"fogs up\".
That\'s the dew point (the point at which water vapor condenses)
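A chilled mirror measures the dew point directly; if you only have temperature and relative humidity, the Magnus approximation computes it instead (a sketch; the constants are the commonly quoted Sonntag values, and the numbers are illustrative):

```python
import math

# Magnus approximation for dew point from air temperature (deg C) and
# relative humidity (percent). Good to roughly 0.1 C over ordinary
# indoor/outdoor ranges.
A, B = 17.62, 243.12  # commonly used Sonntag (1990) coefficients

def dew_point(T_c, rh_pct):
    gamma = math.log(rh_pct / 100.0) + A * T_c / (B + T_c)
    return B * gamma / (A - gamma)

print(f"{dew_point(25.0, 60.0):.1f} C")  # roughly bathroom-after-a-shower air
```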

[...]
I\'ve found that people are less interested in the more technical
devices I can show them (potential clients, friends, etc.). But,
the \"novelty\" items get lots of attention. Perhaps because they
are easier to relate to than something overly technical (folks
don\'t want to appear ignorant if they don\'t understand how something
works or its purpose). Or, maybe they just appreciate the whimsy!?

Indeed. Might as well be magic ;)

I think it\'s just that it\'s \"unexpected\". You\'re accustomed to seeing
products that \"do something\" (useful). To, instead, see something that
is a manifestation of \"tongue-in-cheek\" merits a smile.

I\'ve been working on a kinematic clock in the style of Rube Goldberg
for the backyard (think \"lawn sculpture\"). Again, whimsy. But, there
will likely be more interest as you will have to \"follow\" the various
motions/mechanisms to see how it works -- as well as how it \"displays\"
the time.

It\'s a challenge because it\'s mechanical, has to survive in the
elements, can\'t be a distraction for the neighbors, etc. Plus,
I\'ll have to figure out how to BUILD each mechanism, not just
design them!

And, of course, the loop has to be closed so the time is
always *correct* -- with an invisible mechanism. *THAT* being
the intellectual puzzle for the viewer to grok!
 
On 1/5/2023 6:42 AM, Dan Purgert wrote:
On 2023-01-05, Don Y wrote:
On 1/5/2023 5:07 AM, Dan Purgert wrote:
[...]
On the other hand; I already have a (weak) grasp of what a (linear?)
power-supply needs to have -- rectifier -> smoothing caps -> (I think) a
handful of transistors & resistors to get the desired voltage -> output
capacitor(s). And this (hopefully) gets me a little more understanding
of using transistors as more than just switches ... well, maybe.

Yes, but, at the end of the day, you\'re reinventing something that
a gazillion manufacturers sell in a bazillion different varieties
for less money than the postage on the parts you\'ll order to roll
your own.

Yep. And at the end of the day, when I memorized my 1-12 times tables in
grade school \"because you won\'t always have a calculator in your
pocket!\" ...

That\'s a different sort of skillset -- one that is more universal.
(we didn\'t have calculators when I was a kid. And, we also had to
memorize the squares up to 20! As well as the preamble to the
constitution, etc. -- never know when THAT might come in handy :< )

Again, this is more of a \"homework project(tm)\". Yeah, it\'s been done a
million times; yeah, I could just buy a linear regulator IC ...

If you scale down your current requirement, you can make a crude
regulator with a *stiff* input filter, a biased zener and a pass
transistor. The zener is chosen to be a diode drop (the base-emitter
junction of the pass transistor) above the desired voltage. It
drives the base of the (NPN) pass transistor -- which gives you
current gain. The diode drop from base to emitter \"subtracts\"
that from the zener voltage to give you the desired output
voltage.

With everything \"exposed\" like this, you can see what happens
to the base of the transistor as the input filter sags.
If it sags too low, the zener drops out and the pass transistor
just \"looks\" like a diode (drop) in series with the filter
voltage.

(and, of course, there\'s nothing there to carry the load)

As the input filter voltage is increased, the transistor
is called on to drop more voltage (collector to emitter)
to achieve the desired output voltage. So, it dissipates
more power.

(the zener voltage will also increase, slightly, because
of the added bias current flowing through it so the output
voltage will appear to inch up)

It\'s a relatively simple circuit but lets you see how things interact.

Ages ago, there was a \"teaching toy\" (LECTOR or something like that)
that consisted of discretes packaged in little plastic \"sugar
cubes\" with contacts on the sides and schematic symbol on top.
A magnet held each to a metallic base. So, you could \"wire\"
a circuit SCHEMATICALLY just by abutting these little sugar
cubes and see it work.

If you have a \'scope, it can be a good exercise -- if only to let you
see how the \"signal\" changes as it moves through the circuit. And,
how it reacts to differing loads (e.g., see the ripple on the input
filter increase when the filter is having to supply a larger load
reflected through the regulator. What happens if you change the value
of the filter? Or, use a half-wave rectifier?)

That\'s more like where my thought process was going with this \"homework
project\". Fiddle with it, see what happens \"inside\" an LM338 (etc),
and, well, hopefully learn a thing or two. While it\'s not a scope, I
have one of those USB logic-analyzers that \"can\" do analog readings as
well, so we\'ll see how it works out.

Just be sure you don\'t exceed the voltage range of the analyzer.
You\'d hate to lose something of value because it \"accidentally\"
saw a voltage that it couldn\'t handle.

Worst case, I have a powersupply that I\'m less concerned about blowing a
part inside, because, well, I have the schematic and the parts and ...

[I don\'t mean to discourage you. Rather, hoping you find something
to tackle that leads to an \"aha\" moment -- which tends to cause you to
crave more such moments.]

Yes, I am craving \"aha-moments\" with analog. It\'s pure wizardry seeing
a couple long wires (plus some other components) pull enough energy out
of thin air, that I can listen to a broadcast ...

Programming micros is fun enough too; but I\'m finding I\'m pulling away
from C, and into Assembly, just for the chance at those same \"aha
moments\".

A language is a language. The differences are the level of abstraction
supported and the design methodologies targeted. With ASM, you can
essentially do *anything* (after all, every compiler eventually causes
\"machine instructions\" to be executed). But, you HAVE TO do everything!

By contrast, higher level languages (its debatable just how \"high\"
C is on that scale) relieve you of varying degrees of details.

E.g., in some languages, there is a \"runtime\" that will do things like
garbage collect automatically so you don\'t have to track dynamic
memory usage. The goal being to keep you focused on the problem
you are solving, not the mechanism that is solving it.

[...]
That\'s why I suggested looking at the problem differently. Instead
of buying something that someone else has claimed is a \"moisture sensor\",
think about how moisture affects things and how those effects might be
detected.

Mhm, right now I\'m using the module\'s circuit (plus that blog that tore
it apart) as that jumping off point.

At the moment, I know it\'s a 555 running in astable mode, with the
\"sensor\" part being a pair of traces on the PCB acting as a capacitor.
Best I can figure at the moment (and not having understood that blog I\'m
reading... or the full implications of the 555 datasheet), those
\"capacitor-of-pcb-traces\" will potentially vary the 555\'s duty cycle,
which means the cap on the output will have more time to bleed through
its resistor... Or maybe once I get the understanding, I\'ll see that the
\"sensor\" part is actually on the output, rather than the input...

You can buy \"humidity sensors\" that work on capacitance changes.
From what I recall, they weren\'t very robust.

For example, *hair* stretches when wet. Can you conceive of a way to
sense this elasticity?

Could probably do something with a load cell ... soak the hair, stretch
it in the cell (but cell still reads zero), as it dries, the cell will
deflect.

In humidifiers, I think the hair just tugs on a mechanical contact
closure; when humid enough, the contact opens.

Water is sensed in fuels by noting changes in conductivity.

Yeah, I have the feeling a resistive sensor would corrode in no time
flat...

[...]
So, back to this moisture sensor project. As you said, there are a
billion different \"module\" things out there, with half again as many
sloppily written \"tutorials\" for their use (bleh).

Thankfully, I was pointed to one blog (whose name escapes me at the
moment, and I can\'t find it in my browser history after a quick search,
so I must\'ve bookmarked it on the currently dead tablet...), where they
actually took a dive into the module and showed what it did, and how
(and a good bit of it sailed over my head -- but there were so many
links to references).

I found the use of chilled mirrors to sense dewpoint to be an \"aha\"
moment. Almost comical.

yay, getting out of a hot shower :)

Essentially, that\'s how they work. You control the temperature
of a mirror (heat/cool) and watch to see when the mirror \"fogs up\".
That\'s the dew point (the point at which water vapor condenses)

[...]
I\'ve found that people are less interested in the more technical
devices I can show them (potential clients, friends, etc.). But,
the \"novelty\" items get lots of attention. Perhaps because they
are easier to relate to than something overly technical (folks
don\'t want to appear ignorant if they don\'t understand how something
works or its purpose). Or, maybe they just appreciate the whimsy!?

Indeed. Might as well be magic ;)

I think it\'s just that it\'s \"unexpected\". You\'re accustomed to seeing
products that \"do something\" (useful). To, instead, see something that
is a manifestation of \"tongue-in-cheek\" merits a smile.

I\'ve been working on a kinematic clock in the style of Rube Goldberg
for the backyard (think \"lawn sculpture\"). Again, whimsy. But, there
will likely be more interest as you will have to \"follow\" the various
motions/mechanisms to see how it works -- as well as how it \"displays\"
the time.

It\'s a challenge because it\'s mechanical, has to survive in the
elements, can\'t be a distraction for the neighbors, etc. Plus,
I\'ll have to figure out how to BUILD each mechanism, not just
design them!

And, of course, the loop has to be closed so the time is
always *correct* -- with an invisible mechanism. *THAT* being
the intellectual puzzle for the viewer to grok!
 
On 1/5/2023 6:42 AM, Dan Purgert wrote:
On 2023-01-05, Don Y wrote:
On 1/5/2023 5:07 AM, Dan Purgert wrote:
[...]
On the other hand; I already have a (weak) grasp of what a (linear?)
power-supply needs to have -- rectifier -> smoothing caps -> (I think) a
handful of transistors & resistors to get the desired voltage -> output
capacitor(s). And this (hopefully) gets me a little more understanding
of using transistors as more than just switches ... well, maybe.

Yes, but, at the end of the day, you\'re reinventing something that
a gazillion manufacturers sell in a bazillion different varieties
for less money than the postage on the parts you\'ll order to roll
your own.

Yep. And at the end of the day, when I memorized my 1-12 times tables in
grade school \"because you won\'t always have a calculator in your
pocket!\" ...

That\'s a different sort of skillset -- one that is more universal.
(we didn\'t have calculators when I was a kid. And, we also had to
memorize the squares up to 20! As well as the preamble to the
constitution, etc. -- never know when THAT might come in handy :< )

Again, this is more of a \"homework project(tm)\". Yeah, it\'s been done a
million times; yeah, I could just buy a linear regulator IC ...

If you scale down your current requirement, you can make a crude
regulator with a *stiff* input filter, a biased zener and a pass
transistor. The zener is chosen to be a diode drop (the base-emitter
junction of the pass transistor) above the desired voltage. It
drives the base of the (NPN) pass transistor -- which gives you
current gain. The diode drop from base to emitter \"subtracts\"
that from the zener voltage to give you the desired output
voltage.

With everything \"exposed\" like this, you can see what happens
to the base of the transistor as the input filter sags.
If it sags too low, the zener drops out and the pass transistor
just \"looks\" like a diode (drop) in series with the filter
voltage.

(and, of course, there\'s nothing there to carry the load)

As the input filter voltage is increased, the transistor
is called on to drop more voltage (collector to emitter)
to achieve the desired output voltage. So, it dissipates
more power.

(the zener voltage will also increase, slightly, because
of the added bias current flowing through it so the output
voltage will appear to inch up)

It\'s a relatively simple circuit but lets you see how things interact.

Ages ago, there was a \"teaching toy\" (LECTOR or something like that)
that consisted of discretes packaged in little plastic \"sugar
cubes\" with contacts on the sides and schematic symbol on top.
A magnet held each to a metallic base. So, you could \"wire\"
a circuit SCHEMATICALLY just by abutting these little sugar
cubes and see it work.

If you have a \'scope, it can be a good exercise -- if only to let you
see how the \"signal\" changes as it moves through the circuit. And,
how it reacts to differing loads (e.g., see the ripple on the input
filter increase when the filter is having to supply a larger load
reflected through the regulator. What happens if you change the value
of the filter? Or, use a half-wave rectifier?)

That\'s more like where my thought process was going with this \"homework
project\". Fiddle with it, see what happens \"inside\" an LM338 (etc),
and, well, hopefully learn a thing or two. While it\'s not a scope, I
have one of those USB logic-analyzers that \"can\" do analog readings as
well, so we\'ll see how it works out.

Just be sure you don\'t exceed the voltage range of the analyzer.
You\'d hate to lose something of value because it \"accidentally\"
saw a voltage that it couldn\'t handle.

Worst case, I have a powersupply that I\'m less concerned about blowing a
part inside, because, well, I have the schematic and the parts and ...

[I don\'t mean to discourage you. Rather, hoping you find something
to tackle that leads to an \"aha\" moment -- which tends to cause you to
crave more such moments.]

Yes, I am craving \"aha-moments\" with analog. It\'s pure wizardry seeing
a couple long wires (plus some other components) pull enough energy out
of thin air, that I can listen to a broadcast ...

Programming micros is fun enough too; but I\'m finding I\'m pulling away
from C, and into Assembly, just for the chance at those same \"aha
moments\".

A language is a language. The differences are the level of abstraction
supported and the design methodologies targeted. With ASM, you can
essentially do *anything* (after all, every compiler eventually causes
\"machine instructions\" to be executed). But, you HAVE TO do everything!

By contrast, higher level languages (its debatable just how \"high\"

C is on that scale) relieve you of varying degrees of details.

E.g., in some languages, there is a \"runtime\" that will do things like
garbage collect automatically so you don\'t have to track dynamic
memory usage. The goal being to keep you focused on the problem
you are solving, not the mechanism that is solving it.

[...]
That\'s why I suggested looking at the problem differently. Instead
of buying something that someone else has claimed is a \"moisture sensor\",
think about how moisture affects things and how those effects might be
detected.

Mhm, right now I\'m using the module\'s circuit (plus that blog that tore
it apart) as that jumping off point.

At the moment, I know it\'s a 555 running in astable mode, with the
\"sensor\" part being a pair of traces on the PCB acting as a capacitor.
Best I can figure at the moment (and not having understood that blog I\'m
reading... or the full implications of the 555 datasheet), those
\"capacitor-of-pcb-traces\" will potentially vary the 555\'s duty cycle,
which means the cap on the output will have more time to bleed through
its resistor... Or maybe once I get the understanding, I\'ll see that the
\"sensor\" part is actually on the output, rather than the input...

You can buy \"humidity sensors\" that work on capacitance changes.
From what I recall, they weren\'t very robust.

For example, *hair* stretches when wet. Can you conceive of a way to
sense this elasticity?

Could probably do something with a load cell ... soak the hair, stretch
it in the cell (but cell still reads zero), as it dries, the cell will
deflect.

In humidifiers, I think the hair just tugs on a mechanical contact
closure; when humid enough, the contact opens.

Water is sensed in fuels by noting changes in conductivity.

Yeah, I have the feeling a resistive sensor would corrode in no time
flat...

[...]
So, back to this moisture sensor project. As you said, there are a
billion different \"module\" things out there, with half again as many
sloppily written \"tutorials\" for their use (bleh).

Thankfully, I was pointed to one blog (whose name escapes me at the
moment, and I can\'t find it in my browser history after a quick search,
so I must\'ve bookmarked it on the currently dead tablet...), where they
actually took a dive into the module and showed what it did, and how
(and a good bit of it sailed over my head -- but there were so many
links to references).

I found the use of chilled mirrors to sense dewpoint to be an \"aha\"
moment. Almost comical.

yay, getting out of a hot shower :)

Essentially, that\'s how they work. You control the temperature
of a mirror (heat/cool) and watch to see when the mirror \"fogs up\".
That\'s the dew point (the point at which water vapor condenses).

[...]
I\'ve found that people are less interested in the more technical
devices I can show them (potential clients, friends, etc.). But,
the \"novelty\" items get lots of attention. Perhaps because they
are easier to relate to than something overly technical (folks
don\'t want to appear ignorant if they don\'t understand how something
works or its purpose). Or, maybe they just appreciate the whimsy!?

Indeed. Might as well be magic ;)

I think it\'s just that it\'s \"unexpected\". You\'re accustomed to seeing
products that \"do something\" (useful). To, instead, see something that
is a manifestation of \"tongue-in-cheek\" merits a smile.

I\'ve been working on a kinematic clock in the style of Rube Goldberg
for the backyard (think \"lawn sculpture\"). Again, whimsy. But, there
will likely be more interest as you will have to \"follow\" the various
motions/mechanisms to see how it works -- as well as how it \"displays\"
the time.

It\'s a challenge because it\'s mechanical, has to survive in the
elements, can\'t be a distraction for the neighbors, etc. Plus,
I\'ll have to figure out how to BUILD each mechanism, not just
design them!

And, of course, the loop has to be closed so the time is
always *correct* -- with an invisible mechanism. *THAT* being
the intellectual puzzle for the viewer to grok!
 
On Thu, 5 Jan 2023 18:52:16 -0600, Les Cargill <lcargil99@gmail.com>
wrote:

Joe Gwinn wrote:
snip

Yes. I made my living as an embedded-realtime software developer for a
few decades. My colleagues and I were all EEs who took a wrong turn,
and the language of choice was assembler, or machine code, on the
iron. The pure computer-science folk were completely baffled by
embedded real time.



Bizarre. And probably true. A significant fraction of CS is in
Dijkstra\'s work on semaphores and the related synchronization
primitives.

It\'s the one unavoidable topic unless you\'re in a purely
cooperative system with no interrupts.

I think real time software is more akin to clockmaking anyway.

Clockmaking?


Joe Gwinn
 
