
On Wed, 4 Jan 2023 16:08:37 +0000, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 04/01/2023 14:54, bitrex wrote:
On 1/4/2023 9:52 AM, bitrex wrote:
On 1/3/2023 7:30 PM, Phil Hobbs wrote:

I agree that knowing the fundamentals cold is very important.
However, (a) physics isn't for everyone, by a long chalk; and (b)
there's a glorious intellectual heritage in engineering, so calling
it 'vocational training' is pejorative.

Cheers

Phil "Intermediate energy state" Hobbs


Advanced engineering mathematics:

https://www.ebay.com/itm/194964206310

Which is pretty advanced; I don't know how many BS-type EEs know about
the orthogonality of Bessel functions, or regularly use contour
integration for anything.

I once used contour integration to obtain a fringe field correction on a
mass spectrometer magnet. The objective was to take out the first order
aberrations and make the focal plane orthogonal to the optic axis.

It was one of the first electromagnetic optics codes where the magnitude
of the predicted voltages on electrodes was sometimes right. Prior to
that you were lucky if it had the right sign! The original code came off
a mainframe and was intended for designing atom smashers. A listing
arrived at the company from academia with my new boss.

Physics was mainly into Chebyshev polynomials for solving wavefunction
equations since it housed one of the world experts in the field.

But not as advanced as "Advanced Mathematical Methods for Scientists &
Engineers", which is largely about perturbation methods, boundary
layer theory, and WKB approximations. Sounds fun, I guess; I just got a
used copy from Amazon for $8.

I would expect stuff like the WKB approximation is regularly used more
in optics design than in circuit design, though.

A bit like Green's function, I'm inclined to think that WKB is seldom
used at all now that we have very fast raytracers on the desktop PC. It
may still be taught at undergraduate level today but mainly to weed out
those that are not going to make it as a theoretical physicist (which is
where it was used back in my day as an undergraduate).

Padé rational approximation methods are undergoing something of a
Renaissance. Things go in cycles. I keep waiting for Clifford Algebras
to take off as my supervisor promised they soon would (~2 decades ago).

Things which do have an important place in modern software that is
intended to be provably correct are invariants (borrowed from physics).

Yes, but if I recall we called them Assertions:

<https://en.wikipedia.org/wiki/Assertion_(software_development)>

Software also has Invariants, but I don't know that either one came
from the Physics world.

<https://en.wikipedia.org/wiki/Invariant_(mathematics)#Invariants_in_computer_science>

The main difference in software seems to be that assertions are
logical statements about the value of a single variable, while
Invariants apply an assertion to the result of a specified function.
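
A rough sketch of that distinction in code; the buffer class and its invariant are made-up examples, not anything from a real project:

```python
# Sketch of the assertion-vs-invariant distinction described above.
# Both the function and the RingBuffer class are hypothetical examples.

def remove_item(items: list, index: int):
    # Assertion: a logical statement about a single value at one point.
    assert 0 <= index < len(items), "index out of range"
    return items.pop(index)

class RingBuffer:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = []

    def _check_invariant(self) -> None:
        # Invariant: a condition that must hold before and after every
        # public operation, here checked via a helper function.
        assert 0 <= len(self.data) <= self.capacity, "invariant violated"

    def push(self, value) -> None:
        self._check_invariant()
        if len(self.data) == self.capacity:
            self.data.pop(0)        # drop the oldest element
        self.data.append(value)
        self._check_invariant()
```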

One kind of assertion was visual - coplot a 2D plot of something, plus
a circle, and visually verify concentricity. The eye is _very_ good
at this, so it was a very robust and sensitive test.

I for one used them heavily, with some being used in operation, not
just development. This was done in runtime code, not as a property of
the programming language and/or compiler.

Joe Gwinn
 
On Thu, 5 Jan 2023 09:53:21 +0000, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 04/01/2023 22:25, John Larkin wrote:
On Wed, 4 Jan 2023 10:30:35 -0800, Joerg <news@analogconsultants.com>
wrote:

On 1/2/23 2:34 PM, Joe Gwinn wrote:
[snip]
Antenna pattern is first calibrated by a like process.

My time-domain routine didn't need any golden numbers and converged
every single time within less than half a second. We let the uC handle
that because the computational load dropped to peanuts. The big DSP
became unemployed.

The project start was the usual, everyone saying that FFT was the name
of the game and there wasn't any other decent way. If it didn't work in
time domain I'd have to buy everyone a beer at night. If it did,
everyone had to buy me a beer. I needed a designated driver that night ...

Given an actual waveform a(t) and a desired waveform d(t), we can fix
a to make d with an equalizer having impulse response e(t)

d(t) = a(t) ** e(t), where ** denotes convolution

Finding e is the reverse convolution problem.

The classic way to find e(t) is to do complex FFTs on a and d and
complex divide to get the FFT of e, then reverse FFT. That usually
makes a bunch of divide-by-0 or divide-by-almost-0 points, which sort
of blows up.
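
For concreteness, a minimal numpy sketch of that classic divide-the-transforms recipe; the pulse, the smearing filter, and the noise level are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024
t = np.arange(n)

# Illustrative test waveforms: d(t) is the desired sharp pulse, a(t) is
# the actual waveform -- the same pulse smeared by an RC-like response,
# with a little measurement noise on top.
d = np.zeros(n)
d[300] = 1.0
blur = np.exp(-t / 30.0)
blur /= blur.sum()
a = np.fft.irfft(np.fft.rfft(d) * np.fft.rfft(blur), n)
a += 1e-3 * rng.standard_normal(n)

# "Classic" approach: forward FFTs, complex divide, inverse FFT.
A = np.fft.rfft(a)
D = np.fft.rfft(d)
E = D / A                 # huge gain wherever |A| is nearly zero
e = np.fft.irfft(E, n)

print("smallest |A| bin:", np.abs(A).min())
print("largest equalizer gain |E|:", np.abs(E).max())   # the blow-up
```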

Which is why no one apart from an EE who skipped all the advanced maths
classes would ever try to do it that way.

As mentioned elsewhere, this issue is scalability: EE applications
can easily require 2^20 sample (I+Q) transforms.


Effective deconvolution algorithms have been known since the late 1970s,
when computers became powerful enough to implement them. The first big
breakthrough in applying non-linear constraints like positivity of a
brightness distribution was Gull & Daniell, Nature 1978, 272, 686-690
(the implementation was mathematically a bit flaky but it still worked OK).

https://www.nature.com/articles/272686a0

Prior to that you would always have nonsensical rings of negative
brightness around bright point sources caused by the truncated Fourier
transform.

In X-ray Computerized Tomography (CT), computations were very slow
because of the need to impose a non-negativity constraint on pixel
X-ray absorption values - the target substance has no X-ray power
gain. This was done by a brute-force iterative process.

Nowadays, they use fast Radon transforms. I assume that they
alternate between domains, and impose the constraints in whichever
domain makes physical sense.
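
A toy 1-D sketch of that alternate-between-domains idea (projection onto constraints in each domain in turn); the masked spectrum below merely stands in for truncated projection data and is an assumption for illustration:

```python
import numpy as np

# Toy ground truth: a non-negative 1-D "absorption" profile.
n = 256
x = np.zeros(n)
x[100:110] = 1.0
x[150:170] = 0.5

# Simulated band-limited measurement: only the low-frequency half of the
# spectrum is observed (crudely analogous to truncated projection data).
X = np.fft.rfft(x)
mask = np.zeros(len(X), dtype=bool)
mask[: len(X) // 2] = True
observed = X * mask

# Alternating projections: enforce the measured spectrum in the frequency
# domain and non-negativity in the signal domain.
est = np.fft.irfft(observed, n)
for _ in range(200):
    est = np.clip(est, 0.0, None)      # non-negativity constraint
    E = np.fft.rfft(est)
    E[mask] = observed[mask]           # keep the measured coefficients
    est = np.fft.irfft(E, n)

print("max reconstruction error:", np.max(np.abs(est - x)))
```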


Slightly later, more mathematically refined versions were widely used:

John Skilling & Bryan's Maximum Entropy Image Reconstruction

https://ui.adsabs.harvard.edu/abs/1984MNRAS.211..111S/abstract

Tim Cornwell & Evans' VM at the VLA

https://ui.adsabs.harvard.edu/abs/1985A%26A...143...77C/abstract

Prior to that there were still some quite respectable linear
deconvolution methods that involved weighting down the higher
frequencies with a constraint (additive frequency-dependent term in the
denominator). Effectively a penalty function that prevents wild changes
between adjacent pixels by constraining the second derivative.
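
A minimal numpy sketch of that additive-denominator trick, using the same kind of invented pulse-plus-smearing setup as the earlier division example; the penalty shape is an assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1024
t = np.arange(n)

# Invented test case: sharp desired pulse d(t), smeared and noisy a(t).
d = np.zeros(n)
d[300] = 1.0
blur = np.exp(-t / 30.0)
blur /= blur.sum()
a = np.fft.irfft(np.fft.rfft(d) * np.fft.rfft(blur), n)
a += 1e-3 * rng.standard_normal(n)

A = np.fft.rfft(a)
D = np.fft.rfft(d)

# Regularized division: an additive, frequency-dependent term in the
# denominator keeps the high-frequency gain bounded (Wiener-style).
f = np.fft.rfftfreq(n)
penalty = 1e-3 * (1.0 + (f / f.max()) ** 2)
E = D * np.conj(A) / (np.abs(A) ** 2 + penalty)

e = np.fft.irfft(E, n)
check = np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(e), n)   # a ** e, circularly
print("max equalizer gain |E|:", np.abs(E).max())
print("rms of (a ** e - d):", np.sqrt(np.mean((check - d) ** 2)))
```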

Later Maximum Entropy deconvolution methods became routine and could
solve very difficult problems, albeit at high computational cost. They
were how the deconvolved images from the flawed HST were made.

The fault in the primary mirror was determined using a code from Jodrell
Bank intended for adjusting the panels for focus on the big dish.

That makes sense.


I do it in time domain.

Feed-forward compensation for step changes in input signal is as old as
the hills. Mass spectrometers have used it since their invention. It is
a one-trick pony and only works in very limited circumstances.

10^11 ohm resistors were anything but pure resistors.

There was a whole year when the one guy in the world who made the best
ones finally retired and when the new guy really hadn't got the knack.

Even if there was a written recipe, it would take time to rediscover
the lore that people just know.

Joe Gwinn
 
On Tuesday, January 3, 2023 at 10:51:19 PM UTC+11, Jan Panteltje wrote:
On a sunny day (Tue, 3 Jan 2023 01:18:56 -0800 (PST)) it happened Anthony William Sloman <bill....@ieee.org> wrote in <d1307b99-f37d-4845...@googlegroups.com>:
On Tuesday, January 3, 2023 at 4:59:53 PM UTC+11, Jan Panteltje wrote:
On a sunny day (Mon, 2 Jan 2023 15:24:10 -0800 (PST)) it happened whit3rd <whi...@gmail.com> wrote in <24344a75-68f7-4b3a...@googlegroups.com>:
On Sunday, January 1, 2023 at 10:41:42 PM UTC-8, Jan Panteltje wrote:

Climate change is caused by earth orbit variations and changes in the sun.

The 'earth orbit variations' have a different time progression than what we see,

You are confusing 'weather' with climate.

He isn't.

Just a few hundred years ago we had the 'little ice age' in Europe
Try reading this:
http://old.world-mysteries.com/alignments/mpl_al3b.htm

If 'save the world' idiots have not killed that site yet...

It is irrelevant in this context - it eventually gets through to the "Milankovitch
Cycles", which don't explain the warming we have seen over the last century
or so.

and as for 'changes in the sun' -- that's the billion-year timescale.

Oh, our measurements with advanced electronics during the ice age confirm
that right ;-) HAHAHAHA

Ignorant idiot. He is talking about the predictable evolution of the sun (as a main
sequence star - at least so far).

https://theplanets.org/types-of-stars/main-sequence-star-life-cycle-and-other-facts/

In a mere million years, one doesn't expect a single degree F, let alone C, of difference.
So, your concerns with orbit and sun are misplaced.

Our producing CO2 has little to do with it.

It fits the data, though,

It does not; if you had looked up CO2 versus ice ages and warm periods you
would see CO2 lagging at times.

The ice age/interglacial alternation is essentially driven by snow cover - CO2
levels follow that and provide positive feedback. Our feeding extra CO2
into the atmosphere is driving the system, and the change in snow cover - as
in melting the Arctic ice cover - is providing positive feedback, even if
you are too dim to realise it.

Google is your answer; it was all discussed here before.

Google picks up climate change denial propaganda as well as scientific information.
You happen to prefer the propaganda.

But brainwashed masses and kids will rather try to save the world by cladding
paintings and killing all useful power generation systems,

UNTIL they get cold feet and then are the first ones to cry for nuclear, coal,
and oil.

Solar power and wind turbines do happen to be useful power generation systems,
but they don't put money in the pockets of the fossil carbon extraction
industry.

The 'from Al Gore' item of interest was the internet, historically.

And the internet is used by the climate idiots to manipulate you. It (climate
fear and the internet) is used to sell ever newer and many times inferior
stuff.

It is equally used by the climate change denial propaganda machine to manipulate
gullible idiots like you and John Larkin - it is a two-edged sword.

Electricity generated from solar panels is cheaper than power from any other
source. That doesn't make it inferior. The fossil carbon extraction industry
doesn't like it - they are being undercut and losing the profits they used
to make, out of you, amongst others.

Your clue is missing :)
I'd rather pay the utility rates to get nice 24/7 power
than having all of my garden and roof covered by panels that will only work
part of the time with part of the power.

You'd have to buy a battery - Tesla sell a "Powerwall" that does the job - to get power overnight.
Letting the utility companies do it for you is certainly easier, but not cheaper.


I HAVE those, you climate liar has none I take it from your drivel?
I have battery backup too...

But you don't understand how to use it.

As climate will change - CO2 emitted by you farting or not - we need ALL THE POWER WE CAN GET from ALL sources to cope.

Farts are methane and hydrogen, not CO2. Methane is a more potent greenhouse gas than CO2, but it oxidises to CO2 with a half-life of about nine years, so it's an academic point. We can use all the power we can get, but getting it by wrecking the climate isn't a good idea. We can certainly get enough from renewable sources, but we are going to have to build a lot more wind turbines and solar panels before they can take over the whole job.

Mass migration will be (and already is) happening.
Look at history: people's migrations, flooding seas, changing vegetation...

What history? The Mediterranean dried out from time to time, but that's pre-history - even before the flooding of the Black Sea, which didn't happen until shortly after the end of the last ice age. Find a link or two.

And look at what some hurricanes and hail can do and have done to those panels.
Not even mentioning snow and dust covering those.

I don't see much evidence of damage on the panels I can see from my balcony. I live in inner-city Sydney, and there aren't that many. There were more across the road from our flat in Nijmegen - that was lower-rise housing - and they seemed to last pretty well.

--
Bill Sloman, Sydney
 
On 02/01/2023 20:55, DecadentLinuxUserNumeroUno@decadence.org wrote:

<snip>
In presuming that the questioner is referring to the 'phase' of the
incoming radio signal, the answer depends on the distance apart from
each other they are and also with respect to the origination of the
signal.

Nah, he's talking about the demodulated audio phase...

<OP>
I have another table radio, KLM, expensive, that I had for about 33
years when the speaker switch started to fail**, and I turn that one on
too, to the same station, also at maximum volume, and I can hear in the
bathroom just fine, but I wonder if some frequencies are out of phase
from one radio to the other, cancelling each other, and I'm not hearing
them.
</OP>

As audio doesn't care too much about phase (except between left and
right stereo for any given radio), there's no guarantee of any sort that
different radios will end up with the same audio band phase
relationships. Not just the amplifier chain, but also the speaker and
enclosure.
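
A toy numpy illustration of that point: two sources reproducing the same two tones, where one source is assumed to add a 180-degree phase shift at the higher tone (all numbers invented for the example):

```python
import numpy as np

# Two "radios" playing the same two-tone audio; the second one's audio
# chain is assumed to flip the phase of the 2 kHz component only.
fs = 48_000
t = np.arange(fs) / fs
freqs = [200.0, 2000.0]

radio_a = sum(np.sin(2 * np.pi * f * t) for f in freqs)
radio_b = (np.sin(2 * np.pi * freqs[0] * t)
           + np.sin(2 * np.pi * freqs[1] * t + np.pi))   # 180 deg at 2 kHz

combined = radio_a + radio_b
spectrum = np.abs(np.fft.rfft(combined)) / len(t)
bins = np.fft.rfftfreq(len(t), 1 / fs)

for f in freqs:
    k = np.argmin(np.abs(bins - f))
    print(f"{f:6.0f} Hz level: {spectrum[k]:.3f}")   # the 2 kHz tone cancels
```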

--
Cheers
Clive
 
On Tuesday, January 3, 2023 at 6:59:42 PM UTC-5, RichD wrote:
On January 1, John Larkin wrote:
https://www.theregister.com/2022/07/18/electrical_engineers_extinction/?td=rt-9cp
I've been thinking for some time now that EE schools don't turn out
people who like electricity, but maker culture might.
I advise younguns against an engineering degree, it's over-specialized,
and obsolete in 5 years.

I tell them to get a physics education. Study hard. Then you have the
tools to do anything you want.

That is, first the academics, then the vocational training.


--
Rich
Advice that should be taken with a grain of salt, IMHO. If you think you know it all when you get your undergrad you are badly mistaken. If you turn off your learning you are outdated in less than 10 years. IMHO, graduate school tends to be a hedge against obsolescence. Also, getting connected with the right kind of company lets an EE develop and be successful. Working in a sweatshop-mentality, top-heavy company can stifle one's creativity and learning (as well as getting noticed for your talents).

It has been my experience for a number of (tens of) years that EE grads consistently get job offers with salaries in the top 30% of the job offers of graduates across the board for a given university. As a member of an EE dept faculty advisory board I've seen these numbers for many, many years. It is one of a set of metrics used to evaluate the program, as well as for comparisons to other universities.
 
On Thursday, January 5, 2023 at 1:41:38 PM UTC-8, Phil Hobbs wrote:
whit3rd wrote:

[about FFT/divide/inverseFFT deconvolution]

...the FFT algorithm has no mechanism to
accept data with non-constant significance, which is what, obviously,
happens with a divide-by-almost-zero step in the data processing.
It's gonna give you what the 'signal' says, not what the 'signal' and known
signal/noise ratio tell you. That means using an FFT for the inverse is
excessively noise-sensitive. There are OTHER ways to do a Fourier inversion
that do allow the noise estimate its due influence.

The problem has nothing to do with the FFT, and everything to do with
what you're trying to do with it. Dividing transforms is a perfectly
rational way to deconvolve, provided you take into account the
finite-length effects and prepare the denominator correctly.

Think again; an FFT algorithm implements least-squares fitting, essentially;
there's zero difference between the transform's inversion and the original data,
which (zero) is obviously the minimum of the sum-of-squares-of-differences.

But it's not correct if the standard deviations of the elements are not identical,
because it IS minimizing the plain sum of squares of differences, rather than the
(correct) sum of (squares-of-differences/sigma-squared-of-this-element).
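
A small numpy sketch of the distinction being argued here, with invented data: an unweighted fit (what a plain transform inversion effectively minimizes) versus a fit weighted by 1/sigma^2:

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented example: fit a single sinusoid's amplitude to noisy samples
# whose noise level (sigma) differs from point to point.
n = 200
t = np.linspace(0.0, 1.0, n)
basis = np.sin(2 * np.pi * 5 * t)        # one Fourier-style basis function
sigma = np.where(t > 0.5, 5.0, 0.1)      # second half of the data is much noisier
y = 2.0 * basis + sigma * rng.standard_normal(n)

# Unweighted least squares: minimizes the plain sum of squared differences.
amp_plain = (basis @ y) / (basis @ basis)

# Weighted least squares: minimizes sum((y - a*basis)^2 / sigma^2).
w = 1.0 / sigma**2
amp_weighted = (basis * w) @ y / ((basis * w) @ basis)

print(f"plain fit:    {amp_plain:+.3f}")
print(f"weighted fit: {amp_weighted:+.3f}   (true value +2.000)")
```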
 
On Wed, 11 Jan 2023 13:59:10 +0000, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 10/01/2023 15:39, John Larkin wrote:
On Tue, 10 Jan 2023 10:21:20 +0000, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 09/01/2023 11:23, John Larkin wrote:
On Mon, 9 Jan 2023 09:57:03 +0000, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

In the old days a compile cycle was sufficiently tedious that you tried
to get as many faults out in each batch run as you could.

Exactly. People were more careful. Like we are still with hardware
design.

Board revisions and chip mask revisions are visibly high cost.

I find it ironic that you rail against software developers and yet trust
making your hardware designs based on the output of software simulators.

Do you mean FPGA test benches? An FPGA is just a tick-tock state
machine; it's not hard to get that right. The hard blocks inside an
FPGA, things like PLLs and serdes blocks, must be well tested or they
couldn't sell the chips.

I had Spice in mind - which is way more complex internally.

I don't entirely trust Spice. Once I found a crash bug in LT Spice that
Mike fixed in hours.

https://www.youtube.com/watch?v=x6TrbD7-IwU

I like that idea, using a simulator to train your instincts. I do
disagree with Mike about some of his other no-nos.


Most programmers don't know what a software state machine is. I explain

I find that *very* hard to believe.
Did they sleep through their lectures?

Don't know. I never took any computer courses.

(Nor digital design, nor control theory)

them and they are amazed, or stunned. They write multi-thread
procedural hairballs with async inputs and depend on system calls that
someone else wrote. They are trapped in a very thin abstraction layer.
I ask embedded programmers "how long does that take to execute" and
"what frequency can we run that interrupt routine at" and they have no
idea. They bail on the first and tend to gr
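
For readers who have not met the idea, a minimal sketch of an explicit software state machine; the states and events are invented for the example:

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    RUNNING = auto()
    FAULT = auto()

# Explicit state machine: one table of (state, event) -> next state,
# stepped from a single place, instead of ad-hoc flags scattered around.
TRANSITIONS = {
    (State.IDLE, "start"): State.RUNNING,
    (State.RUNNING, "stop"): State.IDLE,
    (State.RUNNING, "error"): State.FAULT,
    (State.FAULT, "reset"): State.IDLE,
}

def step(state: State, event: str) -> State:
    # Unknown events leave the state unchanged; a real design might log them.
    return TRANSITIONS.get((state, event), state)

s = State.IDLE
for ev in ["start", "error", "reset", "start", "stop"]:
    s = step(s, ev)
    print(ev, "->", s.name)
```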

Sounds to me much more like what EEs do to software.

Some software is actually pretty reliable (and unlike hardware it tends
to become more reliable the longer that it is used for and bugs get
found and eliminated). We tend to notice the stuff that *doesn't* work.

Sure it's reliable after the bugs are worked out. Unless someone goes
out of business or the plane crashes.

And some very big software has latent issues that only ever get found
when some unusual error occurs. Often such latent faults lurk in dark
corners of error recovery or reporting code.

We found one in VAX VMS once due to someone (a beginner) writing a
programme that opened new IO channels every time around a loop.
Eventually it ran out of them at the system-wide level, and what was the
first thing the error handler did?

Try to open an IO channel to the system console and report the failure.

I note that US domestic flights are down today due to a probable
software SNAFU. It is being reported as a computer failure.

https://abcnews.go.com/US/computer-failure-faa-impact-flights-nationwide/story?id=96358202

I've wondered how awful it is to restart a giant system, credit card
or air traffic or Amazon or something, after it crashes. Some
programmers' vacations and sleep cycles are probably affected.

Or what happens if a few key people quit.
 
On 1/4/2023 11:46 AM, Phil Hobbs wrote:
bitrex wrote:
On 1/3/2023 7:30 PM, Phil Hobbs wrote:
RichD wrote:
On January 1,  John Larkin wrote:
https://www.theregister.com/2022/07/18/electrical_engineers_extinction/?td=rt-9cp
I've been thinking for some time now that EE schools don't turn out
people who like electricity, but maker culture might.

I advise younguns against an engineering degree, it's over-specialized,
and obsolete in 5 years.

Only if you get sucked into spending all your time on the flavor of
the month.  People who spend their time in school learning
fundamental things that are hard to master on your own (math, mostly)
and then pick up the other stuff as they go along don't get
obsolete.  That's not difficult to do in your average EE program even
today, AFAICT. Signals and systems, electrodynamics, solid state
theory, and a bit of quantum are all good things to know.

Spending all your time in school programming in Javascript or VHDL or
memorizing compliance requirements is not a good career move for an EE.

I tell them to get a physics education.  Study hard.  Then you have the
tools to do anything you want.

Physicists turn up everywhere, it's true.  Folks with bachelor's
degrees in physics can do most kinds of engineering, provided they're
willing to bone up on the specifics.  Of course there are some who
assume they know everything and just bull ahead till they fail, but,
well, human beings are everyplace. ;)  Thing is, the basic
professional qualification for a physicist is a doctorate, whereas in
engineering it's a BSEE.

That is, first the academics, then the vocational training.

I agree that knowing the fundamentals cold is very important.
However, (a) physics isn't for everyone, by a long chalk; and (b)
there's a glorious intellectual heritage in engineering, so calling
it 'vocational training' is pejorative.

Cheers

Phil "Intermediate energy state" Hobbs


Advanced engineering mathematics:

https://www.ebay.com/itm/194964206310

Which is pretty advanced; I don't know how many BS-type EEs know about
the orthogonality of Bessel functions, or regularly use contour
integration for anything.

You need to be able to do contour integration in a whole lot of signals
and systems.  For instance, the proof that instability in a linear
system is the same as acausal behavior depends on it.

The exp(i omega t) in the Fourier integral means that you have to close
the contour in one half plane for positive time and the other for
negative time.  If there are any poles inside the negative-time contour,
you get acausal response and exponential growth.   (A very pretty result
first proved by E. C. Titchmarsh, I think.)
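
In rough symbols, a sketch of that contour argument under one common sign convention:

```latex
% Sketch of the contour argument described above (one common sign convention).
\[
  h(t) \;=\; \frac{1}{2\pi}\int_{-\infty}^{\infty} H(\omega)\, e^{i\omega t}\, d\omega ,
  \qquad \omega = \omega_r + i\,\omega_i ,
  \qquad \bigl|e^{i\omega t}\bigr| = e^{-\omega_i t}.
\]
% The integrand only decays where \omega_i t > 0, so the contour is closed
% in the upper half plane for t > 0 and in the lower half plane for t < 0.
% By the residue theorem, h(t) vanishes for all t < 0 (causality) exactly
% when H(\omega) has no poles inside the negative-time half plane.
```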

But not as advanced as "Advanced Mathematical Methods for Scientists &
Engineers", which is largely about perturbation methods, boundary
layer theory, and WKB approximations. Sounds fun, I guess; I just got a
used copy from Amazon for $8.

That's Bender & Orszag, right?  By far my favorite math book of all
time.  I just _love_ that one.  The prof for my (first year grad)
asymptotic methods class was a former EE (Stephanos Venakides, may his
tribe increase).  That helped a lot.  Math classes taught by
mathematicians tend to be dry, because they regard the subject like
philosophy, whereas to a scientist or engineer, math is a technology of
thought.

BITD Arfken's "Mathematical Methods for Physicists" was one of the
standard math books for undergraduate physics, along with Levinson &
Redheffer's complex variables book, Boyce & di Prima on ODEs, Carrier &
Pearson for PDEs, and something on linear algebra.  My linear alg class
was taught out of Schaum's Outline, believe it or not--super cheap and
actually a pretty good book.  Oh, and a little book on the theoretical
side of calculus, so that you can prove theorems and stuff if you need to.

Yes, that's the one. I don't understand much beyond Part II; maybe
someday, but the material about ODEs, difference equations, and
asymptotic expansions is worth the price of admission alone.

Fourier analysis, perturbation theory, asymptotic methods, cluster
expansions, tensor calculus, and Feynman path integrals were all taught
in physics classes.  I took four EE classes in grad school--Tony Siegman
on lasers, Steve Harris on nonlinear optics, Ron Bracewell on how to
think in k-space (aka reciprocal space and Fourier space), and Bernie
Widrow on DSP.

I'm taking an online course in statistical mechanics; it's pretty cool,
connecting the quantum-mechanical micro ---> the PV = nRT macro.

Cheers

Phil
 
