On 1/12/2023 11:06 AM, Joe Gwinn wrote:
On Sat, 7 Jan 2023 17:16:57 -0700, Don Y <blockedofcourse@foo.invalid>
wrote:

On 1/7/2023 4:11 PM, Joe Gwinn wrote:
Things which do have an important place in modern software that is
intended to be provably correct are invariants (borrowed from physics).

Yes, but if I recall we called them Assertions:

<https://en.wikipedia.org/wiki/Assertion_(software_development)>

Software also has Invariants, but I don't know that either one came
from the Physics world.

<https://en.wikipedia.org/wiki/Invariant_(mathematics)#Invariants_in_computer_science>

The main difference in software seems to be that assertions are
logical statements about the value of a single variable, while
Invariants apply an assertion to the result of a specified function.
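
A minimal C sketch of that distinction (the names and the ring structure
here are purely illustrative, not from any of the code discussed):

    #include <assert.h>
    #include <stddef.h>

    #define MAX_RPM 6000

    /* Assertion: a claim about a single value at a single point. */
    void set_speed(int rpm)
    {
        assert(rpm >= 0 && rpm <= MAX_RPM);
        /* ... drive the hardware ... */
    }

    /* Invariant: a predicate over a whole structure that must hold
       before and after every operation on it. */
    struct ring { size_t head, tail, count, size; };

    static int ring_ok(const struct ring *r)
    {
        return r->size > 0 && r->count <= r->size
            && r->head < r->size && r->tail < r->size;
    }

    void ring_clear(struct ring *r)
    {
        assert(ring_ok(r));    /* precondition: invariant holds on entry */
        r->head = r->tail = r->count = 0;
        assert(ring_ok(r));    /* postcondition: invariant still holds */
    }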

One kind of assertion was visual - coplot a 2D plot of something, plus
a circle, and visually verify concentricity. The eye is _very_ good
at this, so it was a very robust and sensitive test.

I for one used them heavily, with some being used in operation, not
just development. This was done in runtime code, not as a property of
the programming language and/or compiler.

That, and the fact that many languages simply don't natively support
them, means they end up being a matter of "discipline" and not
really enforceable (design reviews?).

Sure they can be enforced in a design review. Simply ask the
programmer what the Invariants are and how the relevant code achieves
them.

That's *discipline*. You have to have procedures in place and
RELY on their enforcement -- by people qualified and committed.
And, Manglement has to be committed to expending the resources
to do those and impose their edicts.

If "ship it" is the bigger imperative, guess what will suffer?

The better approach is to build mechanisms (into the language
or runtime) that enforce these things. Then, you don't have to rely
on "discipline" to provide those protections.

This has nothing to do with whether the language has anything like
Invariants - the compiler won't understand the domain of the app code
being written, or know when and where to use which invariant.

Yes, you still rely on the developer to declare and define these
"meaningfully". But, if you are *designing* your code (and not
just relying on some /ad hoc/ process), you can define the
contracts before any of the code is written and codify these
with an OCL.

For systems of significant complexity *or* where the sources are
not visible, such OCL declarations tell the developer what he
can expect from the interface -- and, WHAT IS EXPECTED OF HIM!

I, for example, support the specification of invariant in-out conditions
in my IDL. I feel this makes the contract more explicit -- in terms
that the developer WILL see (and, that the auto-generated stubs will
enforce in case he wants to overlook them! :> ).

But, I can't mechanically require the designer of an object class
(or service) to define them meaningfully.

And, what do you do when one fails at runtime? How often does
code simply propagate assertions up to a top level handler...
that panic()'s?

Yep. Depends on what the code does.

But, developers need to understand their intent AND have considered
a fallback strategy: "What do I do *if*..."

In 1985 or so, I built an invariant into some operating system code
used in a radar. At this level in the kernel, nothing like printf
exists. Nor did it have a printer for that matter.

You can use a "black box" to log messages that can't reach the
console (at this time). You wouldn't rely on a full-fledged
"printf()-like" facility to prepare messages. But, the intended
viewer of those messages can expend some effort to understand them
*in* error conditions. And, given that you want to maximize how much
you can store in the black-box resource, you'd want messages to
be terse or heavily encoded.

So, just being able to store a series of integers is often enough
(even if the integers are the exp/mantissa of a FP value!)
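
A minimal sketch of such a black-box logger, assuming nothing more than a
reserved array of words (the sizes and names here are invented):

    #include <stdint.h>

    #define BB_SLOTS 256u                       /* power of two for cheap wrap */

    static volatile uint32_t bb_log[BB_SLOTS];  /* placed in RAM that survives reset */
    static volatile uint32_t bb_next;

    /* Append one encoded word; callable where no printf()-like facility exists. */
    static void bb_put(uint32_t word)
    {
        bb_log[bb_next % BB_SLOTS] = word;
        bb_next++;
    }

    /* A float can be logged as its raw bit pattern and decoded off-line. */
    static void bb_put_float(float f)
    {
        union { float f; uint32_t u; } v;
        v.f = f;
        bb_put(v.u);
    }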

I have a runtime monitor that lets me "watch" memory regions and
"notice" things of interest. The display is part of the
monitor's hardware support (in some cases, a set of 7-segment digits).

The problem was that the app guys were not ensuring that what had to
be global-access buffers were in fact in hardware global memory; if
not, the system's picture of reality would rapidly diverge. Subtle
but devastating.

So, what to do? This has to be checked at full speed, so it must be
dead simple: Add to the kernel a few lines of assembly code that
verified that the buffer address fell in hardware global memory and
intentionally execute an illegal instruction if not. This caught
*everybody* within 8 hours.
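
Roughly that idea in C terms (the address window and helper name here are
made up; the original was a few lines of kernel assembly):

    #include <stdint.h>

    #define GLOBAL_RAM_BASE 0x20000000UL   /* hypothetical shared-memory window */
    #define GLOBAL_RAM_END  0x20100000UL

    /* Verify a buffer really lives in hardware global memory; trap on the
       spot if not, so the offender is caught at the point of the mistake. */
    static void require_global(const void *buf)
    {
        uintptr_t a = (uintptr_t)buf;

        if (a < GLOBAL_RAM_BASE || a >= GLOBAL_RAM_END) {
    #if defined(__GNUC__)
            __builtin_trap();              /* emits an illegal/undefined instruction */
    #else
            *(volatile int *)0 = 0;        /* otherwise, force a fault */
    #endif
        }
    }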

With modern hardware, you can rely on the processor (to some extent)
to catch these things. E.g., dereferencing an invalid pointer,
writing to TEXT space, stack under/over-flow, etc.

But, you have to plan with those things in mind.

More significantly, do you see real-time software implementing
invariants wrt deadlines? (oops! isn't that the whole point of RT??)
Again, I support the specification of per-task "deadline handlers"
but there's nothing that forces the developer to define one meaningfully.

Deadline schedulers do exist and were widely touted for realtime, but
never caught on outside of academia because they fail badly when the
planets align badly and a deadline cannot be met. This proved too
fragile for real applications. Not to mention too complex.

If you're coding where time is important, then how is NOT
verifying timeliness constraints ARE being met LESS important
than verifying the constraints on a function call/return?

What mechanisms do you have to detect this, at runtime?
In the released product?
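
One possible shape for such a runtime check, sketched with POSIX timing
calls (the wrapper and handler names are invented, not anyone's actual API):

    #include <time.h>

    typedef void (*deadline_handler)(void);     /* invoked when a deadline is missed */

    /* Run one step of a task and compare its completion time against the
       nanoseconds allotted for that step. */
    static void run_step(void (*step)(void), long long deadline_ns,
                         deadline_handler on_miss)
    {
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        step();
        clock_gettime(CLOCK_MONOTONIC, &t1);

        long long elapsed_ns = (long long)(t1.tv_sec - t0.tv_sec) * 1000000000LL
                             + (t1.tv_nsec - t0.tv_nsec);

        if (elapsed_ns > deadline_ns && on_miss != NULL)
            on_miss();                           /* log, degrade, or panic -- caller's choice */
    }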

When folks are still failing to test the results of many common
functions (malloc() being a prime example... but, how often have
you tested the return value of printf()?), how can you expect
them to have sorted out what to do for each thrown exception,
failed invariant, etc.?

In RT, one does not use malloc except once, to get the working
buffers. After that, the RT code handles memory management.

Old Wives' Tale. Your concern is wrt deterministic behavior
and/or timeliness. If you, the CAPABLE developer, know things
about the application and your implementation, why should you
be constrained NOT to use a facility? (why have malloc() at all?
why not just static buffers throughout?)

"Don't run with scissors!" Why hasn't someone designed and sold
a device that prevents you from doing so? It could be as simple
as a 300 pound weight and a chain (to the scissors)!

Ans: there are times when you *need* to run with scissors.
So, be *wary*/vigilant when doing so, but don't anchor your
scissors to a large weight just to ensure you "play safe".

Java-style garbage collection would cause random system hangs, and so
cannot be used in realtime.

You don't need GC with dynamic memory use. Only to catch
stale references that the user isn't obligated to clean up
on his own. Java manages objects so has to bear that cost.
In other languages, the developer manages those resources -- and
is responsible for their proper housekeeping.

Finally, as they are never supposed to execute, some industries
dictate that they be removed from production code, classifying
them as "dead code" (they aren't supposed to have side-effects).

Would you have an argument for leaving this in your code?
    if (FALSE) panic();
Isn't that what an invariant *effectively* reduces to?

Not if it's coded correctly. In C and like languages, one either
declares the relevant variable to be volatile, or hides a critical
part of the mechanism in a subroutine, to prevent the compiler's code
optimizer from making such assumptions. Or write it in assembler.

You've missed the point.

The folks in the design review KNOW (from a detailed examination
of the code) that the condition (represented here by "FALSE")
will always resolve to FALSE. Always. So, they KNOW the branch
will never be taken. "panic()" represents dead code.

Note that the compiler can't always see how the condition will
resolve. Even with an OCL.

    if (0 != return_one() - 1)
        panic();

if "return_one()" -- which returns the constant "1" -- is opaque,
the compiler can't know that the expression resolves to

    if (0 != 1 - 1)

so it can't do anything other than generate the requisite code to
invoke return_one(). I.e., the compiler can't decide that this is
"won't happen".

Resorting to the opacity of functions lets you do things solely
for side-effects -- which would otherwise be if-fy.

E.g., if "return_one()" was actually "fault_in_enough_stack_space()"
then you wouldn't want the call elided -- because the code (presumably)
relies on having the requisite stack "wired down" (or faulted in)
before whatever comes next.
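
A sketch of that opacity trick, assuming the two functions live in separate
translation units (link-time optimization can still see through this, which
is where volatile or assembler comes in):

    /* helper.c -- compiled separately, so callers cannot see the body */
    int return_one(void)
    {
        return 1;                 /* constant, but opaque to other translation units */
    }

    /* main.c */
    extern int return_one(void);  /* no definition visible here */
    extern void panic(void);

    void check(void)
    {
        /* Without LTO the compiler must emit the call and the comparison;
           it cannot prove the branch is dead. */
        if (0 != return_one() - 1)
            panic();
    }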
 
On Wed, 4 Jan 2023 09:52:02 -0500, bitrex <user@example.net> wrote:

On 1/3/2023 7:30 PM, Phil Hobbs wrote:
RichD wrote:
On January 1,  John Larkin wrote:
https://www.theregister.com/2022/07/18/electrical_engineers_extinction/?td=rt-9cp
I've been thinking for some time now that EE schools don't turn out
people who like electricity, but maker culture might.

I advise younguns against an engineering degree, it's over-specialized,
and obsolete in 5 years.

Only if you get sucked into spending all your time on the flavor of the
month.  People who spend their time in school learning fundamental
things that are hard to master on your own (math, mostly) and then pick
up the other stuff as they go along don't get obsolete.  That's not
difficult to do in your average EE program even today, AFAICT.  Signals
and systems, electrodynamics, solid state theory, and a bit of quantum
are all good things to know.

Spending all your time in school programming in Javascript or VHDL or
memorizing compliance requirements is not a good career move for an EE.

I tell them to get a physics education.  Study hard.  Then you have the
tools to do anything you want.

Physicists turn up everywhere, it's true.  Folks with bachelor's degrees
in physics can do most kinds of engineering, provided they're willing to
bone up on the specifics.  Of course there are some who assume they know
everything and just bull ahead till they fail, but, well, human beings
are everyplace. ;)  Thing is, the basic professional qualification for a
physicist is a doctorate, whereas in engineering it\'s a BSEE.

That is, first the academics, then the vocational training.

I agree that knowing the fundamentals cold is very important.  However,
(a) physics isn't for everyone, by a long chalk; and (b) there's a
glorious intellectual heritage in engineering, so calling it 'vocational
training' is pejorative.

Cheers

Phil "Intermediate energy state" Hobbs


Advanced engineering mathematics:

https://www.ebay.com/itm/194964206310

Which is pretty advanced, I don't know how many BS-type EEs know about
the orthogonality of Bessel functions, or regularly use contour
integration for anything.

But not as advanced as "Advanced Mathematical Methods for Scientists &
Engineers\", which is largely about perturbation methods, boundary layer
theory, and WKB approximations. Sounds fun I guess, I just got a used
copy from Amazon for $8

I actually took an integral to compute a mosfet power dissipation.
That was 10 or 12 years ago. Now I use Spice.
 
On Thu, 5 Jan 2023 09:53:21 +0000, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 04/01/2023 22:25, John Larkin wrote:
On Wed, 4 Jan 2023 10:30:35 -0800, Joerg <news@analogconsultants.com>
wrote:

On 1/2/23 2:34 PM, Joe Gwinn wrote:
[snip]
Antenna pattern is first calibrated by a like process.

My time-domain routine didn't need any golden numbers and converged
every single time within less than half a second. We let the uC handle
that because the computational load dropped to peanuts. The big DSP
became unemployed.

The project start was the usual, everyone saying that FFT was the name
of the game and there wasn't any other decent way. If it didn't work in
time domain I'd have to buy everyone a beer at night. If it did,
everyone had to buy me a beer. I needed a designated driver that night ...

Given an actual waveform a(t) and a desired waveform d(t), we can fix
a to make d with an equalizer having impulse response e(t)

d(t) = a(t) ** e(t)      (where ** denotes convolution)

Finding e is the reverse convolution problem.

The classic way to find e(t) is to do complex FFTs on a and d and
complex divide to get the FFT of e, then reverse FFT. That usually
makes a bunch of divide-by-0 or divide-by-almost-0 points, which sort
of blows up.
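
One common kluge, sketched in C over already-computed spectra (the function
name and the eps parameter are illustrative): add a small constant to the
denominator, Wiener-style, so near-zero bins of A no longer explode.

    #include <complex.h>

    /* Given spectra A[k] (actual) and D[k] (desired), form the equalizer
       spectrum E[k] = D[k]*conj(A[k]) / (|A[k]|^2 + eps).  The additive eps,
       chosen relative to the noise floor, keeps near-zero bins under control. */
    static void equalizer_spectrum(const double complex *A,
                                   const double complex *D,
                                   double complex *E, int n, double eps)
    {
        for (int k = 0; k < n; k++) {
            double mag2 = creal(A[k]) * creal(A[k]) + cimag(A[k]) * cimag(A[k]);
            E[k] = D[k] * conj(A[k]) / (mag2 + eps);
        }
    }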

Which is why no one apart from an EE who skipped all the advanced maths
classes would ever try to do it that way.

That statement makes no sense. There are lots of academic papers about
this method, with various kluges to keep the divides under control.

Deconvolution is an "ill-posed problem" so is publication-rich.

Effective deconvolution algorithms have been known since the late 1970s
when computers became powerful enough to implement them. The first big
breakthrough in applying non-linear constraints like positivity of a
brightness distribution was Gull & Daniell, Nature 1978, 272, 686-690
(implementation was mathematically a bit flakey but it still worked OK)

https://www.nature.com/articles/272686a0

Prior to that you would always have non-sensical rings of negative
brightness around bright point sources caused by the truncated Fourier
transform.

Slightly later more mathematically refined versions widely used:

John Skilling & Bryan's Maximum Entropy Image Reconstruction

https://ui.adsabs.harvard.edu/abs/1984MNRAS.211..111S/abstract

Tim Cornwell's & Evans VM at the VLA

https://ui.adsabs.harvard.edu/abs/1985A%26A...143...77C/abstract

Prior to that there were still some quite respectable linear
deconvolution methods that involved weighting down the higher
frequencies with a constraint (additive frequency dependent term in the
denominator). Effectively a penalty function that prevents wild changes
between adjacent pixels by constraining the second derivative.

Later Maximum Entropy deconvolution methods became routine and could
solve very difficult problems albeit at high computational cost. They
were the way that deconvolved images from the flawed HST were made.

The fault in the primary mirror was determined using a code from Jodrell
Bank intended for adjusting the panels for focus on the big dish.

I do it in time domain.

Feed forward compensation for step changes in input signal is as old as
the hills. Mass spectrometers have used it since their invention. It is
a one trick pony and only works in very limited circumstances.

10^11 ohm resistors were anything but pure resistors.

There was a whole year when the one guy in the world who made the best
ones finally retired and when the new guy really hadn't got the knack.

I built a roughly 40 ps TDR just for fun, as part of another proto
board.

https://www.dropbox.com/s/si81zpny0ttjqk1/Z368.JPG?raw=1

It worked, although I haven't commercialized it yet. My idea is to
make something that's fast but ugly, which isn't hard these days, and
make it have beautiful step response by passing it through a software
equalizer algorithm.

Here's my deconvolution thing:

https://www.dropbox.com/s/iqpldbkq2awdeml/TDR_Decon_demo.jpg?raw=1

The yellow trace is the assumed ratty TDR and purple is the filter
impulse response and the white trace is the convolved result.

This program will pretty-up some seriously nasty waveforms. It looks
like it can, in real life, make a horror into a beautiful step with
about half the 10:90 risetime of the original.

The program is fun to play with. Keep iterating and eventually things
explode in cool ways.
 
Dan Purgert wrote:
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

On 2023-01-04, Phil Hobbs wrote:
Dan Purgert wrote:
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

On 2023-01-03, Don Y wrote:
On 1/3/2023 9:42 AM, bitrex wrote:
And then people complain the US doesn't make electronics anymore. Challenging
programs with a high washout rate AND it doesn't pay too good? Wow hard to
believe everyone isn't jumping on that one, lol

Nowadays, even "makers" don't *make* electronics. They just buy
modules and write some code. Modern packages are just too tedious
for hobbyists; you want successes to encourage your efforts, not
failures.

The no-lead stuff and/or exposed bottom pad stuff is certainly difficult
to handle... but even 0402 is "doable" with naught more than a decent
magnifying glass (e.g. Optivisors)

The power pad thing is easily doable if you use paste and a hot plate.
For hand soldering, if you put a bit of flux on the pad beforehand, you
can wick solder up the thermal vias fairly well. A bigger PTH makes it
easier.

Yeah, I've been trawling the (cheap garbage) options for plates and
hot-air guns on amazon (etc), since exposed-pad might give some fun
options in the future, though they're also mega-tiny parts :)

I use an ancient Corning lab hotplate (the kind with the magnetic
stirrer) from eBay, with a piece of 1/2-inch aluminum jig plate on top
of it. (It's about 6 x 9 inches, one of their standard sizes.) I have
a cheapish Extech thermocouple thermometer (3-channel) to let me set the
plate temperature, which should be about 250C.

For the time being I can make do with leaded ICs. Helps that I'm not
into things that are super fancy (and am kind of "going backwards" in
the sense that I'm trying to wrap my head around doing things with
analog ... )

Leaded ICs are a win, but some of them also have power pads. QFNs are
really fun in high-vibration environments. :(

Cheers

Phil Hobbs




--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On 1/8/2023 6:10 AM, Dan Purgert wrote:
On 2023-01-08, Don Y wrote:
On 1/7/2023 2:00 PM, Dan Purgert wrote:
[...]
Pff, I've got 32 perfectly usable "control" characters right here in
ASCII :D

But, does your design/product know how to change its behavior
based on those external inputs/commands? Or, does it think that
*it* is the authoritative agent in its own use?

At the moment, no. But at the moment the "design" is still in the
"okay, what is a minimum working example?" stage.

It was intended as a generic observation. Devices that aren't designed
with the notion of user input (and output) coming from a virtual
device (keypad, network socket, etc.) will end up needing to be
revisited at some point in the product's evolution ("We'll tackle
that in Model 2").

This can be painful if you've made assumptions about how the
user will interact with the device and those assumptions don't
apply to other usage modalities.

E.g., in my current design, I have many "status summary" presentations.
If I had assumed that I was going to always present these graphically,
then there would be no way to "pipe" the information to another
automaton. Or, capture it in a file. Similarly, if I expected the
user to consume it visually, then a user with a vision deficit wouldn't
easily be accommodated (you'd have to build a kludge "converter"
that knew how to map *this* information into a useful form for that
user... and some other information into a likewise (but possibly
different) form, etc.)

If, instead, you don't bind the presentation to a particular
technology, then you can present it to a variety of different
consumers (human and otherwise). But, this decision influences
how you structure your design; if you're intent on painting
little pictures on a screen, you'll likely find it hard to
modify the design to, instead, drive a (e.g.) haptic display!
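
An illustrative sketch of that decoupling (invented names, not the actual
design): produce the status once in a neutral record and hand it to whatever
"sinks" are registered, so the same producer can feed a GUI, a pipe, a file,
or a haptic display.

    #include <stdio.h>

    /* Presentation-neutral status record. */
    struct status {
        const char *name;
        double      value;
        const char *units;
    };

    /* Each consumer decides how to render it: GUI widget, text pipe,
       file, speech, haptic display, another automaton, ... */
    typedef void (*status_sink)(const struct status *s, void *ctx);

    static void publish_status(const struct status *s,
                               status_sink *sinks, void **ctxs, int n)
    {
        for (int i = 0; i < n; i++)
            sinks[i](s, ctxs[i]);
    }

    /* One sink: plain text on a stream, suitable for piping or capture. */
    static void text_sink(const struct status *s, void *ctx)
    {
        fprintf((FILE *)ctx, "%s=%g %s\n", s->name, s->value, s->units);
    }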

On top of that, it will soon be a common process to support
encryption and authentication in those actions (you don't want
someone to be able to control your furnace without your
consent. You likely also wouldn't want someone on the other end
of your network to twiddle with the settings on your 'scope,
power supply, etc.)

Note to self, tinfoil hat for the new furnace ...

"Vandals" encrypt your hard disk. Purely on the *assumption*
that there is something of value, there, that you would PAY
to recover. Is there?

Yes, but the only "value" is about 2 hours of "read from this stack of
DVDs". Which is more of a case of "well this is annoying" and less of
"well, I am screwed".

They don't know that. So, they just blanket target everyone (that
they can)... and hope they catch a few (fools).

You might, instead, look at imaging the disk onto more spinning rust.

I use a USB dock to capture the (compressed) image of my system disk
as I am building a system (so, I can quickly roll back any changes that
I decide I don't want). IME, about 2:1 compression is possible
(i.e., my 1 TB system disks can be imaged onto 500 GB drives). It's
also possible to send the image to a remote mount (or FTP, etc.).

[I keep a set of 3 or 4 drives and cycle through them as I
incrementally build up the system, taking snapshots at key
points: OS installed; drivers installed; updates installed;
core utilities (that I am almost certain to want on every box)
installed; first batch of applications; second batch; etc.]

Restore is considerably quicker (most compressors take longer
to compress than to decompress).

[I have a laptop that automatically restores itself on each
boot. Ensures it's always "clean" for e-commerce uses]
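
A minimal sketch of that capture/restore cycle (Python; the device
path, file name, and chunk size here are made up for illustration,
and reading a raw device needs the appropriate privileges):

import gzip
import shutil

CHUNK = 4 * 1024 * 1024  # 4 MiB per read keeps memory use modest


def capture(device="/dev/sdX", image="system-disk.img.gz"):
    """Stream the raw device into a gzip-compressed image file."""
    with open(device, "rb") as src, gzip.open(image, "wb", compresslevel=6) as dst:
        shutil.copyfileobj(src, dst, CHUNK)


def restore(image="system-disk.img.gz", device="/dev/sdX"):
    """Write the decompressed image back onto a same-size (or larger) device."""
    with gzip.open(image, "rb") as src, open(device, "wb") as dst:
        shutil.copyfileobj(src, dst, CHUNK)

Decompression being the cheap direction is also why the restore runs
noticeably faster than the capture.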

> That reminds me, gotta get a new stack...

...with blueberries and extra maple syrup!
 
On 1/1/2023 9:13 AM, Fred Bloggs wrote:
That's a bunch of whitewashing. The so-called historians who research such things, like that Doris Bergen, draw that superficial conclusion on the basis of the absence of an explicit judicial record. Her goal was to justify eliminating that excuse as a war-crimes defense, not to make the Nazis look less evil. German law embodied the legal principle of Befehlsnotstand, which was a legal defense against prosecution for refusing such orders. The Nazis viewed the law as a minor inconvenience to be circumvented, and would execute the perpetrator for some other trumped-up crime. That would be something for which no official record existed, so the argument needed to be turned on its head by asking, well, what does the official record actually show. Checkmate.

https://en.wikipedia.org/wiki/Befehlsnotstand

Your link contradicts you. E.g., "[...] In practice, refusing a superior
order to participate in war crimes by German soldiers almost never led
to dire consequences for the refusing person, and punishment, if any,
was relatively mild. It usually resulted in degradation and being sent
to serve with fighting units at the front."
 
....
> A cheap Chinese meter might be truly cheap and nasty, and correspondingly dangerous, but anybody who sold it to you would risk being sued if it was.

Even for the old model with the 1000 V HV range, there is a warning label on the back: "Do not test voltage over 250 volts". The defendant counters that the plaintiff did not read the warning label.

If the item was given out free, there are no damages.
 