Maximum Power Point Tracking: Optimizing Solar Panels
by: Maya Posch

On 1/7/2023 7:25 PM, whit3rd wrote:
On Saturday, January 7, 2023 at 11:21:02 AM UTC-8, Don Y wrote:
On 1/7/2023 12:18 PM, Don Y wrote:
Amusing that I don't see any hardware types advocating for
building hardware to provide the same level of functionality
that one expects from *inexpensive* software!

Of course, some applications would be trivial to implement!
SPICE would just be a bag of components ("Here is the model
for the 4K7 resistor") and a soldering iron. Never have
to worry about bugs -- or upgrades -- ever again!

Thus, the analog computer is reborn! No better
simulation of analog devices need ever be sought, accuracy-wise,
but there are still the familiar analog computer drawbacks: such a
computer is strictly Harvard architecture, no self-modifying
code allowed.

Why can't you locate a resistor (or other load) near a thermistor
and have the "code" that's driving the resistor effectively alter
the performance of the circuit that relies on the thermistor? :>

If you think you can do anything, in hardware, that can be done in
software, then this should be possible, too, right? :>

One great way to guarantee the "code" rarely gets updated!
 
On Mon, 9 Jan 2023 16:17:07 -0700, Don Y <blockedofcourse@foo.invalid>
wrote:

On 1/9/2023 12:35 PM, whit3rd wrote:
Oh, but the people I worked with were NOT careful. One coworker rewrote
a function from the main library, but didn't give his variant a different
name. Big oops if you didn't know exactly what order to offer the
libraries to the linker. Another coworker decided his filenames would
never exceed 72 characters, and when (disk/directory/subdirectory/name
combined) added up to 73 characters for a particular subdirectory,
programs that worked reading from the user9 directory failed for the
user10 directory.

Neither coworker was open to change.

I worked with a guy who was constantly REbugging the floating point library.
You'd wonder why YOUR code was suddenly not working -- only to discover,
in a misguided attempt to squeeze a few more cycles out of the FP library,
that he'd BROKEN it, YET AGAIN!

Good practices require discipline. For a large firm, you can secure
the repository and install practices that ensure folks can't check in
new releases without following a procedure (one that includes having
test suites in place, etc.). For smaller firms, you rely on the
individual developers' SELF-discipline.

Should I have to review EVERY piece of code that I drag into
my build, just to verify that it works? What's the point of having
"software assemblies" if there are no guarantees in place that they
are correct? (Do you test every hardware component before securing
it to a PCB?)

We assume that ICs and inductors and connectors and whatever are
factory tested before we buy them; we rarely test production parts,
a few things like lasers maybe. We have to read data sheets carefully
(and skeptically) to really understand the parts. We sometimes test a
few parts, or breadboard sub-circuits, if we're not confident that we
fully understand them.

We can re-use parts of a design at the schematic copy-paste level. We
have a "template", schematic and PCB layout, for some boards that plug
into a common backplane.

We do extensive first-article testing on an assembled product, and
every production unit gets automated test and cal. You don't have to
calibrate software!

Software doesn't break like hardware can, but it can run in all sorts
of environments with all sorts of inputs. So it does break.

What's worse is most firms can't tell you the state of their
repositories; they rely on their workers for that. "Yeah,
the floating point library works." "Really? Who told you THAT?"

I guess people still use FP libraries when the hardware doesn't do FP.
But do people still write their own? Scary.

I wrote one (68K cpu) math library where everything was signed 64
bits, as 32.32. No normalizing for add/sub! 32.32 is enough for
representing physical reality.
 
On a sunny day (Tue, 3 Jan 2023 23:37:37 -0800 (PST)) it happened Anthony
William Sloman <bill.sloman@ieee.org> wrote in
<0f5e064e-4ce2-41cf-ba7c-10a9ddad4a43n@googlegroups.com>:

On Wednesday, January 4, 2023 at 5:40:33 PM UTC+11, Jan Panteltje wrote:
On a sunny day (Tue, 3 Jan 2023 18:10:48 -0800 (PST)) it happened Anthony
William Sloman <bill....@ieee.org> wrote in
9903019d-6d90-4898...@googlegroups.com>:
On Wednesday, January 4, 2023 at 4:07:18 AM UTC+11, John Larkin wrote:
On Tue, 03 Jan 2023 15:32:59 GMT, Jan Panteltje
pNaonSt...@yahoo.com> wrote:

On a sunny day (Tue, 3 Jan 2023 04:28:41 -0800 (PST)) it happened Anthony

William Sloman <bill....@ieee.org> wrote in
9ed26dc7-3876-45e9...@googlegroups.com>:

As if Jan Panteltje had anything to teach - or at least anything to teach
that was worth learning. He does seem to have swallowed the climate change
denial twaddle, hook line and sinker, and now wants to spread the misleading
message, rather like a Jehovah's Witness.

Well, for starters you could learn some 'tronix

Done that.

https://iopscience.iop.org/article/10.1088/0957-0233/7/11/015/meta

Does not seem particularly impressive or difficult to me,

It wasn't. But it was impressive enough to get published, and bits of it
were difficult enough that people who published similar papers citing it
missed some of the points it made.

1996?

Did the work in 1993, moved to the Netherlands in October 1993, didn't have
enough to do and wrote it up - it's a non-classic solution to a classic
problem, as you'd be able to work out if you could understand it. There
wasn't any hurry to write it up.

Jan won't be able to understand any of it, but it has been cited 25 times -

If being cited more is your standard of 'science' or whatever then Star Trek
or Donald Duck ....

It's got to be cited in a peer-reviewed journal article to count. Star Trek
and Donald Duck don't count. It's an imperfect measure, but nobody has come
up with a better one.

My wife has an h-index of 80, which means that her eightieth most cited
paper has more than 80 citations, and her most cited papers had more than
a thousand citations each.

Instrument science doesn't publish as much - the classic paper I cited, by
N.T. Larsen, published in 1968, has now clocked up 42 citations - it was
only 36 the last time I looked.

It's probably OK to be proud of what you did; more seriously speaking,
I never felt that very much, for me it's all just a learning process.
Not only electronics; after I 'had seen it all' in broadcasting I traveled
the world searching for truth as many hippies in those days did.. lived in
a community too for years.
Then between travels worked via agencies and fixed jobs in many different
fields, always using what I learned in the previous projects.
Electronics is everywhere.
You say 'nothing to do'; I sometimes had 2 job offers at the same time...
If you did your thing right the agencies would have plenty of work, fixed
jobs too, many vacancies here.
I started my own company at some point, a radio and TV repair shop, did
some design too, changed it into a VOF with 4 or 5 people working... Then
my interests changed again, left the VOF, it died a bit later without me..
fixed job again, making money, buying a house.. doing all sorts of
projects, traveling the world again... pension..... playing with electronics.
If I wanted to I could work at some company again, plenty of vacancies here
in the Netherlands now.
We will see where it goes, or sail to some uninhabited islands in the warm
south Pacific and train my survival skills (done that too). Or maybe start
a new study...

I have a bunch of Peltier elements from eBay,
stacking those to get low temperatures and as an electrickety generator...
Specifications:
Model: SP1848-27145
Color: white
Lead length: about 30 cm
Size: 40 mm * 40 mm * 3.4 mm

Toys we need:)

Rain now. Garden half flooded... waves on the stones...
+10.8 C
Must be glowball worming!
 
 
On 1/2/2023 2:08 AM, Jan Panteltje wrote:
[...]
The dropout in the first year was very very very high.
Of the 30 or so in the first year, I was 1 of the 4 people at the final graduation party in the local pub,
[...]

In other words, the school really sucked at predicting who could handle
the work. I.e., their admissions office sucked. 😉
 
 
On 1/6/2023 2:08 PM, Dan Purgert wrote:
Granted, there is a balance point you have to "engineer" to.

That's what makes it software *engineering* and not "programming".
ANYONE can write code. SOME can write reliable code. Even
fewer can *engineer* software solutions to the problem at hand
(too many engineers think they know how products should behave
without actually understanding their user bases).

I certainly can't engineer. The closest I've come was architect (and that
was only because, in the company's infinite wisdom, they downsized the
unpopular-with-management but knew-everything-about-everything old timer
in 2020). Granted, it took their "first choice" two or three _highly_
visible mistakes over the course of a few months of (weekend)
deployments before they started listening to me...

Thankfully I got out after 18 months. That was highly stressful.

There are several steps to designing a product/solution.
First is figuring out what it needs to be/do.
Next, how to approach it.
Actually *doing* it is often just busywork.

[I spend ~40% of my time in specification/design; ~20% implementation
and ~40% verification/validation. Some firms would let me offload the
last step (actually, they might PREFER it, as it gives them other eyes on
the project and reassurance that less was likely to slip through
from omission). Some try to do the first 40% (I usually
don't take those jobs as, IMO, people usually don't know what they
want and I don't want to be dealing with "changes" once they start
realizing their deficiencies).]

Knuth wrote a series of tomes covering most of the "basic" algorithms.
Surprisingly, much software is just a rearranging of these core
algorithms in different combinations.

TAOCP is on my amazon wishlist ...

Stevens\' books are also great reads -- but at the application level.
Organick\'s book is a must-have if you are interested in big systems.
McKusick if you\'re into Eunices.

There are others that address "programming" but often miss the
big picture by focusing on particulars. Or, "methodologies".

You may be able to find a used copy. OTOH, many folks are content to
have them on their bookshelves, even if not referenced.

Yeah, AMZN has used copies from time to time. It's more of a reminder to
check in on those prices.

Abe, eBay, local university bookstores, etc.

Indeed. The local universities never used it ("local" being within 2
hours' drive)... the nearest that might've used it is about 6h away, and
at that point ... ehhhh.

It wasn't "used" (i.e., as part of classwork) when I was in school.
But, it was a "reference" that folks learned was worth having on their shelves.
University bookstores often will resell "used" books that they've
purchased from students (eager to recover some of their costs of learning).

Quite so. But C doesn't really have much more in the 'aha' space for me.

C can be interesting if you start trying to adopt different
programming practices. E.g., most of my current project is coded
in C but is entirely object-based. And, objects are referenced by
something akin to file handles.
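The handle idea can be sketched in plain C. This is a hypothetical illustration, not the project being described: the names and table size are invented. The object's state is hidden behind an opaque integer handle, much like a file descriptor.

```c
#include <assert.h>

typedef int counter_t;                    /* the handle: an index, not a pointer */

struct counter { int in_use; long value; };
static struct counter table[16];          /* object storage, private to this file */

/* "Constructor": allocate a slot and hand back an opaque reference. */
counter_t counter_open(long initial) {
    for (int i = 0; i < 16; i++) {
        if (!table[i].in_use) {
            table[i].in_use = 1;
            table[i].value = initial;
            return i;
        }
    }
    return -1;                            /* no free slot */
}

/* "Method": all access goes through the handle, never the struct. */
long counter_add(counter_t h, long n) {
    assert(h >= 0 && h < 16 && table[h].in_use);
    return table[h].value += n;
}

/* "Destructor": invalidates the handle. */
void counter_close(counter_t h) {
    if (h >= 0 && h < 16) table[h].in_use = 0;
}
```

Because callers hold an index rather than a pointer, the implementation is free to relocate an object (or, in a distributed system, host it on another node) without invalidating outstanding references.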

Sure, but there's only so much one can do on a micro, especially one
that doesn't have much (anything) in the way of "external connectivity"
(okay, sure, someone could _take_ it, but meh).

There are all sorts of different processors available. You just
have to decide what features you want and what price you are
willing to pay.

Oops, I cut out the wrong paragraph there. The response was _supposed_
to be referencing your comments regarding the compiler writing better /
less-exploitable assembly; not your object-oriented C.

Those of us who have been writing code for half a century have memories
of how bad the tools were "way back then". It takes discipline to
remember how much better (than humans!) they have become!

So, the new order of business is to concentrate on writing what
you WANT the code to do and let the tools figure out how best
to make that happen.

Indeed. Like I said, it's not that I don't like C (you can pry it out
of my cold, dead hands), but simply the case that -- in microcontrollers
anyway -- I've kind of dried up the "big aha" moments. Not that I'm an
expert in the language or anything; it's just that there's not the same
kind of "eureka!" as I used to get. Maybe if I move to OOP, but honestly
that just never really "clicked" with me, in that my personal internal
monologue is "Okay, so to check the status of IO-Expander 0xAA, send the
query string '0xAA, 0xFF, 0x04, 0x00' to the SPI module".

As opposed to "Okay, so I need to create a virtual representation of
IOExpander, and it'll allow read(int *data, int len) or write(int addr,
int len) [...], and then now that I've done all of that, I can call
ioexp.read(address, expectedBytes)".

There is a difference (big!) between object-ORIENTED and object-BASED.
IMO, you get the biggest bang for the buck adopting an object
paradigm -- but the benefits of full OOP are often outweighed
by the baggage it brings along.

With object *based*, you get the main benefit of encapsulation
without the fluff of polymorphism or MI. (You can "manually"
blend those into an object-based implementation but aren't
required to.)

[Most engineers/programmers really only use procedural
languages and often just single-threaded... almost always
confined to a single computing node/processor]

I use procedural languages (C-ish*, ASM, SQL, Limbo-ish*) to
implement my object-based system.

[* I've augmented the syntax of each of these to more directly
support an object-based notation wherein objects don't have
known (nor invariant) physical locations]

For example, my user interfaces exist on three "planes":
- foreground (whatever the user is directly interacting with;
where his attention lies)
- background (other things that the user is likely interested
in but without requiring his full attention)
- notifications (asynchronous events that the user needs to be
made aware of but with minimal "cost" to his focused activity)

How I place "interactions" in each of those planes has to be
consistent from the programmer/application's point of view.

E.g., an application shouldn't care if the user is blind (and
can't interact visually), deaf (can't interact aurally),
paralyzed (can't interact haptically), etc. It should just
want to interact, and a middle layer should figure out how,
specifically, to do that. The interaction is abstracted
and reified elsewhere.
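The middle-layer idea can be sketched as a dispatch table: the application only asks to interact, and a binding chosen from the user's capabilities selects the concrete channel. All names here are invented for illustration (a real system would drive a display, speech synthesis, or a haptic device; these renderers just return a tag naming the channel).

```c
#include <string.h>

typedef const char *(*render_fn)(const char *msg);

/* Concrete renderers -- stand-ins for real output channels. */
static const char *render_visual(const char *msg) { (void)msg; return "visual"; }
static const char *render_aural(const char *msg)  { (void)msg; return "aural"; }
static const char *render_haptic(const char *msg) { (void)msg; return "haptic"; }

static render_fn channel = render_visual;   /* default binding */

/* The middle layer owns the binding; applications never pick a channel. */
void ui_configure(const char *user_capability) {
    if (strcmp(user_capability, "blind") == 0)
        channel = render_aural;             /* can't interact visually */
    else if (strcmp(user_capability, "paralyzed") == 0)
        channel = render_visual;            /* can't interact haptically */
    else
        channel = render_visual;            /* covers "deaf" and the default */
}

/* What the application actually calls: modality-agnostic. */
const char *ui_interact(const char *msg) { return channel(msg); }
```

The application code is identical regardless of the user; only the binding in `ui_configure` changes, which is the abstraction being argued for above.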

Similarly, I may want to play some music through a "speaker"
(another abstraction). The source may be an MP3, WAV, etc.
fetched from my DBMS (my only persistent store) *or* from
a broadcast source *or* from a local "synthesizer".

I shouldn't have to figure out how to get source to destination.
I shouldn't have to figure out how to get the source FORMAT into
something compatible with the destination's requirements. *I*
shouldn't have to perform any of these actions.

Instead, the *objects* involved should know how to meet
their own needs, at runtime (because you don't know, at compile
time, what the needs of a particular piece of music are
going to be, months or years from now!).

If a speaker needs to instantiate a transcoder (MP3->FLAC),
then *it* should do that. But, it shouldn\'t have to bear the
cost of that activity; it\'s not *its* fault that the user wants
to play an MP3! So, it should be able to instantiate an
MP3 transcoder <somewhere> in the system and let that
<something> do the work. Because *it* knows how to do that
efficiently.

If there are multiple "transcoder servers" active in the system,
the actual server chosen to instantiate THAT MP3 transcoder
might exist on one of many different nodes. (It can even
*move* while it is in operation!) If there are no transcoder
servers available, then one should be instantiated from the
Factory -- on a node that has surplus resources to support
that capability.

The programmer shouldn't have to be aware of the need for any
of these activities. The *system* should sort it out FOR him
("an OS provides services for the application").

If the user aborts the operation, all of the resources (on
all of the nodes that are involved) should tear themselves
down -- again without the programmer having to DO anything.

The abstractions make this relatively easy. They serve
to reduce the complexity of the implementation and improve
its quality. (What would happen if the programmer forgot
to kill off the transcoder? And, later, tried to play more
music -- creating yet another transcoder -- that he would
likewise forget to kill off...)

There are run-time (and hardware) costs to support all of
this "managed HIDDEN complexity", but processors are
pretty cheap nowadays. It's not uncommon for *systems*
to have hundreds or thousands of them (and, as IoT gains
traction, scale that up by another order of magnitude -- or
two!).

Folks who are stuck in the procedural-language mindset will
quickly find that the complexity of those systems makes development
impractical.

[But, fine if you're making a standalone "thing" that has a
relatively constrained evolutionary path ahead]

As for "all sorts" -- yeah, I'm making my way into some of the new AVR
0- / 1- / 2-series chips (e.g. the 404/414/1624) as well as their DA and
DB series of megas...

I use a little 6-pin device in my PoE PD controller. PoE requires
a hardware handshake with the PSE controller to "notify" it that
a PD is plugged in and wanting power. You can buy controllers
that will do this automagically for you.

Oh, nice. That's gotta have a 2-stage handshake then? Been a while
since I read up on the specs for 802.3af/at.

No. The handshake with the PSE is implied -- if power goes away
(when you asked it to), then you know it saw your disconnect;
if it comes on (when you requested it), likewise.

The handshake to the processor is a bit more involved, as there's
just one wire carrying all the traffic. But, the 6-pin has nothing
better to do than "listen" and "reply". And, the host has
resources up the wazoo -- and timeliness of the power-sequencing
action is not critical.

[I am thinking of refactoring the design to put a better
processor there, to also manage the power supplies and
load characterization of the entire module. When "idle",
there shouldn't be much extra "work" for it to do, as the
local supplies will be shut down and not need "minding"]

But, what if the device wants to "unplug itself" and *re*plug itself
at some future time? All the while, still remaining connected to
the network.

You'd (logically) bring down the interface, and then bring it back up.
As I said, it's been a while; but the initial "hey, I need power"
signal is nothing more than something like a 25k signature resistor
between the power pins. Although, I think 802.3at can use LLDP to
negotiate power states.

You can't count on the network layer to be operational.
So, no "messages".

The 6-pin has to know to (simulate) a "reinsertion"
at a particular time. Or, when "tickled" by a "wake up"
signal that runs into the I/Os. And, to know what power
class to request. E.g., a module may power up needing only
a few watts (for the CPU/memory/NIC) or may need more
if its field also needs to be powered.

The PSE doesn't know what the module WILL need, so it can't
make power-budget decisions without some form of indication
from the PD. Powering up to the lowest power class -- only
to discover (in conversations with the remote MCU) that
there's not enough power currently available to support
its REAL needs -- would lead to powering back down (save
those few watts).

When should the PD "try again"? Will this part of the system
just oscillate between powered and not-powered, indefinitely?

"I'll supply power IF I CAN. Otherwise, stay asleep. If
that's not acceptable to you, your 6-pin can implement a
fall-back policy and tell you of that consequence when
I power you back up"

Even "dirt cheap" MCUs can have interesting applications.
E.g., WRITE a program to read an ADC to determine the
current "output voltage" and, based on that value, decide
whether or not to turn on a pass transistor (feeding a
choke) and for how long. Presto! You have your own
switching power supply -- implemented in software!

I was looking into something like that the other day, actually. Looks
like I'd need to wind my own inductor (eep), but otherwise, I think I
have the necessary other stuff.

Steal one out of a dead PC power supply. Make part of the project
figuring out how to take an unknown ferrite and get an inductor
of approximately the right characteristics for your need.

Oh, I have ferrite toroids around here somewhere. The "eep" is the "and
now for my next trick ... I know how much inductance it has!".

There are tables that can give you ballparks. Ideally, you'd
know the characteristics of the ferrite.

If your load is "sacrificial", you measure performance and
decide whether to add another few windings, etc.

(hint: leave long leads on the toroid so you can just use
up some of that "service loop" for an additional winding)

[Of course, a bug in your code can fry your processor! :> ]

(poof) "oops". Learning by mistakes is still learning (but I'd rather
learn from someone else's mistake :D)

Prototype it to power a simple/disposable resistive load
so you can watch to see how/if it is working without
putting too much "at ri$k".

Yep. Load resistors are fun things. I should have some 50-ohm loads
around here somewhere...

Lightbulbs, in a pinch.

[...] The joke was how the transistors were placed there to protect
the (cheap) *fuses*.

sounds about right!
 
On 1/6/2023 2:08 PM, Dan Purgert wrote:
Granted, there is a balance point you have to \"engineer\" to.

That\'s what makes it software *engineering* and not \"programming\".
ANYONE can write code. SOME can write reliable code. Even
fewer can *engineer* software solutions to the problem at hand
(too many engineers think they know how products should behave
without actually understanding their user bases)

I certainly can\'t engineer. Closest I\'ve come was architect (and that
was only because in the company\'s infinite wisdom they downsized the
unpopular-with-management but knew everything about everything old timer
in 2020). Granted it took their \"first choice\" two or three _highly_
visible mistakes over the course of a few months of (weekend)
deployments before they started listening to me...

Thankfully I got out after 18 months. That was highly stressful.

There are several steps to designing a product/solution.
First is figuring out what it needs to be/do.
Next, how to approach it.
Actually *doing* it is often just busywork.

[I spend ~40% of my time in specification/design; ~20% implementation
and ~40% verification/validation. Some firms would let me offload the
last step (actually might PREFER it as it gives them other eyes on
the project and reassurances that less was likely going to slip
through from omission). Some try to do the first 40% (I usually
don\'t take those jobs as, IMO, people usually don\'t know what they
want and I don\'t want to be dealing with \"changes\" once they start
realizing their deficiencies).]

Knuth wrote a series of tomes covering most of the \"basic\" algorithms.
Surprisingly, much software is just a rearranging of these core
algorithms in different combinations.

TAOCP is on my amazon wishlist ...

Stevens\' books are also great reads -- but at the application level.
Organick\'s book is a must-have if you are interested in big systems.
McKusick if you\'re into Eunices.

There are others that address \"programming\" but often miss the
big picture by focusing on particulars. Or, \"methodologies\".

You may be able to find a used copy. OTOH, many folks are content to
have them on their bookshelves, even if not referenced.

Yeah, AMZN has used from time to time. It\'s more of a reminder to check
in on those prices.

Abe, eBay, local university bookstores, etc.

Indeed. The local universities never used it (\"local\" being within 2
hours drive)... the nearest that might\'ve used it is about 6h away, and
at that point ... ehhhh.

It wasn\'t \"used\" (i.e., as in part of classwork) when I was in school.
But, was a \'reference\" that folks learned was worth having on their shelves.
University bookstores often will resell \"used\" books that they\'ve
purchased from students (eager to recover some of their costs of learning).

Quite so. But C doesn\'t really have much more in the \'aha\' space for

C can be interesting if you start trying to adopt different
programming practices. E.g., most of my current project is coded
in C but is entirely object-based. And, objects are referenced by
something akin to file handles.

Sure, but there\'s only so much one can do on a micro, especially one
that doesn\'t have much (anything) in the way of \"external connectivity\"
(okay, sure, someone could _take_ it, but meh)

There are all sorts of different processors available. You just
have to decide what features you want and what price you are
willing to pay.

Oops, I cut out the wrong paragraph there. The response was _supposed_
to be referencing your comments regarding the compiler writing better /
less-exploitable assembly; not your object-oriented C.

Those of us who have been writing code for half a century have memories
of how bad the tools were \"way back then\". It takes discipline to
remember how much better (than humans!) they have become!

So, the new order of business is to concentrate on writing what
you WANT the code to do and let the tools figure out how best
to make that happen.

Indeed. Like I said, it\'s not that I don\'t like C (you can pry it out
of my cold, dead hands), but simply the case that -- in microcontrollers
anyway -- I\'ve kind of dried up the \"big aha\" moments. Not that I\'m an
expert in the language or anything; but just that there\'s not the same
kind of \"eureka!\" as I used to get. Maybe if I move to OOP, but honestly
that just never really \"clicked\" with me, in that my personal internal
monologue is \"Okay, so to check the status of IO-Expander 0xAA, send the
query string \'0xAA, 0xFF, 0x04, 0x00\' to the SPI module\".

As opposed to \"Okay, so I need to create a virtual representation of
IOExpander, and it\'ll allow read(int *data, int len) or write (int addr,
int len) [...], and then now that I\'ve done all of that, I can call
ioexp.read (address,expectedBytes)\"

There is a difference (big!) between object-ORIENTED and object-BASED.
IMO, you get the biggest bang for the buck adopting an object
paradigm -- but, the benefits of full OOPS areoften outweighed
by the baggage they bring along.

With object *based*, you get the main benefit of encapsulation
without the fluff of polymorphism or MI. (you can \"manually\"
blend those into an object-based implementation but aren\'t
required).

[Most engineers/programmers really only use procedural
languages and often just single-threaded... almost always
confined to a single computing node/processor]

I use procedural languages (C-ish*, ASM, SQL, Limbo-ish*) to
implement my object-based system.

[* I\'ve augmented the syntax of each of these to more directly
support an object-based notation wherein objects don\'t have
known (nor invariant) physical locations]

For example, my user interfaces exist on three \"planes\":
- foreground (whatever the user is directly interacting with;
where his attention lies)
- background (other things that the user is likely interested
in but without requiring his full attention)
- notifications (asynchronous events that the user needs to be
made aware of but with minimal \"cost\" to his focused activity)

How I place \"interactions\" in each of those planes has to be
consistent from the programmer/application\'s point of view.

E.g., an application shouldn\'t care if the user is blind (and
can\'t interact visually), deaf (can\'t interact aurally),
paralyzed (can\'t interact haptically), etc. It should just
want to interact and a middle layer should figure out how,
specifically, to do that. The interaction is abstracted
and reified elsewhere.

Similarly, I may want to play some music through a \"speaker\"
(another abstraction). The source may be an MP3, WAV, etc.
fetched from my DBMS (my only persistent store) *or* from
a broadcast source *or* from a local \"synthesizer\".

I shouldn\'t have to figure out how to get source to destination.
I shouldn\'t have to figure out how to get source FORMAT into
something compatible with destination\'s requirements. *I*
shouldn\'t have to perform any of these actions.

Instead, the *objects* involved should know how to meet
their own needs, at runtime (because you don\'t know, at compile
time, what the needs of a particular piece of music are
going to be, months or years from now!).

If a speaker needs to instantiate a transcoder (MP3->FLAC),
then *it* should do that. But, it shouldn\'t have to bear the
cost of that activity; it\'s not *its* fault that the user wants
to play an MP3! So, it should be able to instantiate an
MP3 transcoder <somewhere> in the system and let that
<something> do the work. Because *it* knows how to do that
efficiently.

If there are multiple \"transcoder servers\" active in the system,
the actual server chosen to instantiate THAT MP3 transcoder
might exist on one of many different nodes. (it can even
*move* while it is in operation!). If there are no transcoder
servers available, then one should be instantiated from the
Factory -- on a node that has surplus resources to support
that capability.

The programmer shouldn\'t have to be aware of the need for any
of these activities. The *system* should sort it out FOR him
(\"an OS provides services for the application\")

If the user aborts the operation, all of the resources (on
all of the nodes that are involved) should tear themselves
down -- again without the programmer having to DO anything.

The abstractions make this relatively easy. They serve
to reduce the complexity of the implementation and improve
its quality (what would happen if the programmer forgot
to kill off the transcoder? and, later, tried to play more
music -- creating yet another transcoder -- that he would
likewise forget to kill off...)

There are run-time (and hardware) costs to support all of
this \"managed HIDDEN complexity\" but processors are
pretty cheap, nowadays. It\'s not uncommon for *systems*
to have hundreds or thousands of them (and, as IoT gains
traction, scale that up by another order of magnitude -- or
two!)

Folks who are stuck in the procedural language mindset will
quickly find the complexity of those systems makes development
impractical.

[But, fine if your making a standalone \"thing\" that has a
relatively constrained evolutionary path ahead]

As for \"all sorts\" -- yeah, I\'m making my way into some of the new AVR
0- / 1- / 2-series chips (e.g. the 404/414/1624) as well as their DA and
DB series of megas...

I use a little 6-pin device in my PoE PD controller. PoE requires
a hardware handshake with the PSE controller to \"notify\" it that
a PD is plugged in and wanting power. You can buy controllers
that will do this automagically, for you.

Oh, nice. That\'s gotta have a 2-stage handshake then? Been a while
since I read up on the specs for 802.3af/at.

No. The handshake with the PSE is implied -- if power goes away
(when you asked it to), then you know it saw your disconnect;
if it comes on (when you requested it), likewise.

The handshake to the processor is a bit more involved as there\'s
just one-wire carrying all the traffic. But, the 6-pin has nothing
better to do than \"listen\" and \"reply\". And, the host has
resources up the wazoo -- and timeliness of the power sequencing
action is not critical.

[I am thinking of refactoring the design to put a better
processor, there, to also manage the power supplies and
load characterization of the entire module. When \"idle\",
there shouldn\'t be much extra \"work\" for it to do as the
local supplies will be shutdown and not need \"minding\"]

But, what if the device wants to \"unplug itself\" and *re*plug itself
at some future time? All the while, still remaining connected to
the network.

You\'d (logically) bring down the interface, and then bring it back up.
As I said, it\'s been a while; but as the initial \"hey I need power\"
signal is nothing more than something like a 22k resistor between the
power pins. Although, I think 802.3at can use LLDP to swap power
states.

You can\'t count on the network layer to be operational.
So, no \"messages\".

The 6pin has to be able to know to (simulate) a \"reinsertion\"
at a particular time. Or, when \"tickled\" by a \"wake up\"
signal that runs into the I/Os. And, to know what power
class to request. E.g., a module may power up needing only
a few watts (for the CPU/memory/NIC) or may need more
if its field also needs to be powered.

The PSE doesn\'t know what the module WILL need so it can\'t
make power budget decisions without some form of indication
from the PD. Powering up to the lowest power class -- only
to discover (in conversations with the remote MCU) that
there\'s not enough power currently available to support
its REAL needs would lead to powering back down (save
those few watts).

When should the PD \"try again\"? Will this part of the system
just oscillate between powered and not-powered, indefinitely?

\"I\'ll supply power IF I CAN. Otherwise, stay asleep. If
that\'s not acceptable to you, your 6pin can implement a
fall-back policy and tell you of that consequence when
I power you back up\"
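One way to keep that from oscillating indefinitely is for the 6-pin to space out its re-requests with a capped exponential backoff. A minimal sketch of such a policy -- the function name and parameters are illustrative, not anything from the 802.3 specs:

```c
#include <stdint.h>

/* Capped exponential backoff for a PD re-requesting power:
 * attempt 0 waits base_ms, attempt 1 waits 2*base_ms, and so on,
 * never exceeding cap_ms. Purely illustrative policy code.       */
static uint32_t next_retry_ms(unsigned attempt, uint32_t base_ms,
                              uint32_t cap_ms)
{
    if (attempt > 16)               /* keep the shift from overflowing */
        attempt = 16;
    uint32_t delay = base_ms << attempt;
    return (delay > cap_ms || delay < base_ms) ? cap_ms : delay;
}
```

With, say, a 1 s base and a 60 s cap, the PD backs off quickly to one retry per minute instead of thrashing the PSE's power budget.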

Even \"dirt cheap\" MCUs can have interesting applications.
E.g., WRITE a program to read an A/DC to determine the
current \"output voltage\" and, based on that value, decide
whether or not to turn on a pass transistor (feeding a
choke) and for how long. Presto! You have your own
switching power supply -- implemented in software!
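The loop being described is essentially hysteretic ("bang-bang") regulation. A minimal sketch of just the decision step, with a made-up 5 V target and hysteresis band (the MCU-specific ADC/GPIO calls are left out):

```c
#include <stdint.h>

/* Hysteretic (bang-bang) control decision: returns 1 to turn the
 * pass transistor on, 0 to turn it off, -1 to leave it unchanged.
 * All values in millivolts. The thresholds are illustrative.      */
static int pass_fet_decision(uint16_t vout_mv, uint16_t target_mv,
                             uint16_t hyst_mv)
{
    if (vout_mv < target_mv - hyst_mv)
        return 1;   /* output sagging: feed the choke */
    if (vout_mv > target_mv + hyst_mv)
        return 0;   /* output high: stop switching */
    return -1;      /* inside the band: leave it be */
}
```

In a real loop you'd also bound the pass transistor's maximum on-time (ideally in hardware), since a stuck loop is exactly the kind of bug that fries things.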

I was looking into something like that the other day, actually. Looks
like I'd need to wind my own inductor (eep), but otherwise, I think I
have the other necessary stuff.

Steal one out of a dead PC power supply. Make part of the project
figuring out how to take an unknown ferrite and get an inductor
of approximately the right characteristics for your need.

Oh, I have ferrite toroids around here somewhere. The "eep" is the "and
now for my next trick ... I know how much inductance it has!".

There are tables that can give you ballparks. Ideally, you'd
know the characteristics of the ferrite.

If your load is \"sacrificial\", you measure performance and
decide whether to add another few windings, etc.

(hint: leave long leads on the toroid so you can just use
up some of that "service loop" for an additional winding)

[Of course, a bug in your code can fry your processor! :> ]

(poof) \"oops\". Learning by mistakes is still learning (but I\'d rather
learn from someone else\'s mistake :D)

Prototype it to power a simple/disposable resistive load
so you can watch to see how/if it is working without
putting too much "at ri$k".

Yep. Load resistors are fun things. I should have some 50-ohm loads
around here somewhere...

Lightbulbs, in a pinch.

[...] The joke was how the transistors were placed there to protect
the (cheap) *fuses*.

sounds about right!
 
On Mon, 2 Jan 2023 15:24:10 -0800 (PST), whit3rd <whit3rd@gmail.com>
wrote:

On Sunday, January 1, 2023 at 10:41:42 PM UTC-8, Jan Panteltje wrote:

Climate change is caused by earth orbit variations and changes in the sun.

The 'earth orbit variations' have a different time progression than what we see, and
as for 'changes in the sun' -- that's the billion-year timescale. In a mere million
years, one doesn't expect a single degree F, let alone C, of difference.
So, your concerns with orbit and sun are misplaced.

https://en.wikipedia.org/wiki/Solar_cycle

https://en.wikipedia.org/wiki/Little_Ice_Age

https://en.wikipedia.org/wiki/Medieval_Warm_Period

https://en.wikipedia.org/wiki/Ice_age#Variations_in_Earth's_orbit

https://en.wikipedia.org/wiki/Ice_age

CO2 may save us.
 
On Monday, January 9, 2023 at 5:47:17 AM UTC+11, Flyguy wrote:
On Sunday, January 8, 2023 at 6:41:15 AM UTC-8, bill....@ieee.org wrote:
On Sunday, January 8, 2023 at 5:52:35 PM UTC+11, Flyguy wrote:
On Saturday, January 7, 2023 at 11:31:33 AM UTC-8, Ed Lee wrote:
On Saturday, January 7, 2023 at 11:22:49 AM UTC-8, Fred Bloggs wrote:
On Saturday, January 7, 2023 at 2:16:50 PM UTC-5, Ed Lee wrote:
On Saturday, January 7, 2023 at 11:10:02 AM UTC-8, Fred Bloggs wrote:
On Saturday, January 7, 2023 at 1:42:05 PM UTC-5, Ed Lee wrote:
On Saturday, January 7, 2023 at 10:14:06 AM UTC-8, John Larkin wrote:
On Sat, 7 Jan 2023 09:34:02 -0800 (PST), Fred Bloggs <bloggs.fred...@gmail.com> wrote:

On Saturday, December 31, 2022 at 1:19:58 PM UTC-5, Ed Lee wrote:
snip
If you are going to be working on HV circuits (>240 V) ONLY use DMMs with a CAT certification (which cheap Chinese meters don't have).
https://www.fluke.com/en-us/learn/blog/safety/multimeter-guide
Which doesn't tell you much.

It tells you everything you need to know to make a purchasing decision. This IS NOT a designer's guide, Bozo.

This is sci.electronics.design. The people who post here do imagine that they design electronics, even clowns like you.
The link you posted wasn't informative at the level you'd need if you wanted to make an informed decision about buying a multimeter, not that you'd know anything about that.

\"The latest UL standard for electrical test instruments is UL 61010B-1, which is a revision of 3111-1. It specifies the general safety requirements, such as material, design, and testing requirements, and the environmental conditions in which the standard applies. UL 3111-2-031 lists additional requirements for test probes. The requirements for hand-held current clamps, such as the current measuring portion of clamp meters, are included in UL 3111-2-032.

UL standards are gradually being harmonized with similar international standards, such as those published by IEC. Until this is completed, there may be significant differences between each group\'s standards. For example, IEC 61010-1 2nd Edition includes requirements for voltage-measuring instruments in CAT IV environments. UL 61010B-1 doesn\'t.\"

What Flyguy might be saying - if he knew what he was talking about - is that there are safety standards for multimeters. In the US they are published by Underwriters
Laboratories.

I am WELL AWARE of UL and other testing labs.

But not aware enough to pull out an actual standard that said anything specific.

There are also international safety standards.

https://www.nema.org/standards/international/the-iec-and-nema

The International Electrotechnical Commission (IEC), headquartered in Geneva, Switzerland, is the top-level body.

A Chinese multimeter might well not conform to an American Underwriters Laboratories standard, but will probably conform to the relevant IEC standard, which isn't going to be much different.

Pure SPECULATION by Bozo completely UNVERIFIED by ANY facts whatsoever. But, why am I not surprised coming from Bill?

Sewage Sweeper didn't produce any facts of his own - and never does. When he's exposed to them, he ignores them, but he's great at recycling the abuse he gets, even when it is totally irrelevant.

A cheap Chinese meter might be truly cheap and nasty, and correspondingly dangerous, but anybody who sold it to you would risk being sued if it was.

LOL! Just TRY suing a Chinese company - just TRY!!

You don't sue the manufacturer. You sue the retailer who sold you a device that wasn't fit for the purpose for which it was advertised.

It's more likely to be cheap because it was produced in high volume, rather than because the manufacturer cut any corners. I've run into one American instrument that didn't meet its published specifications, which is a slightly different kind of problem - it certainly wasn't cheap.

No, Bozo, they cut ALL KINDS of corners: https://www.youtube.com/watch?v=iGUiZT6kLDk

A youtube video is evidence?

> Notice that this meter has NO certification marks. And for GOOD REASON: it would NEVER pass.

Why should I care what some cheapskate idiot bought on eBay? The device was CE marked, but the camera didn't linger long enough to pick up the number of the relevant standard.

--
Bill Sloman, Sydney
 
 
