OT (?) AI (personal) threats...

On 7/16/2023 5:12 AM, Martin Brown wrote:
On 15/07/2023 20:56, Don Y wrote:
I'm trying to come to a rational/consistent opinion wrt AI
and its various, perceived "threats".

The most insidious one is that what is best for the AI is not necessarily best
for humanity.

But *who* sets those criteria? "Humanity" already engages in
behaviors that are "not necessarily best for humanity"; how
would this be any different?

We already have chips designed by AI to do AI, and that will
likely continue into the future. The tricky bit is that they can't tell you why
they made a particular decision (at least not yet), so they are very much a
black-box entity that seems smart.

If you go with a neural net doing the pattern recognition *and* making the
decision, then you are largely working with BFM. I doubt you will ever
be able to explain -- in common-sense terms -- these decisions as
they are effectively simultaneous equations.

I take a hybrid approach in my uses; I let the NNet look for patterns
and then have it modify a Production System that will actually make the
decisions (which will then be observed by the NNet, which will then
tweak the productions, which will then...). So, I can limit the types of
things that I let the NNet consider as "significant" (input neurons)
AND force it to alter the system's behavior in very limited ways.

"No, you have no reason to consult the phase of the moon when making
this decision..."
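A minimal sketch of that hybrid loop in Python (every name here, the correlation stand-in for the NNet, and the clamped-step tweak rule are illustrative assumptions, not the actual system):

```python
# Sketch: a pattern detector proposes bounded tweaks to a production
# system's rule weights; inputs outside an allowed set are never considered.

ALLOWED_INPUTS = {"temperature", "load", "time_of_day"}   # not "moon_phase"!

def detect_pattern(observations):
    """Stand-in for the NNet: correlate each allowed input with outcomes."""
    scores = {}
    for name in ALLOWED_INPUTS:
        xs = [o[name] for o in observations]
        ys = [o["outcome"] for o in observations]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        scores[name] = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    return scores

def tweak_productions(rules, scores, step=0.1, limit=1.0):
    """Alter the rules in very limited ways: small, clamped weight steps."""
    for name, s in scores.items():
        delta = step if s > 0 else (-step if s < 0 else 0.0)
        w = rules.get(name, 0.0) + delta
        rules[name] = max(-limit, min(limit, w))
    return rules

def decide(rules, inputs):
    """The production system -- not the NNet -- makes the actual decision."""
    activation = sum(rules.get(k, 0.0) * v for k, v in inputs.items()
                     if k in ALLOWED_INPUTS)
    return "act" if activation > 0 else "wait"
```

Because the NNet only nudges rule weights, the decision path stays inspectable: you can read the production weights directly instead of untangling "simultaneous equations".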

I can understand how a person that can be *replaced* by an AI
would fear for their livelihood. But, that (to me) isn't a
blanket reason for banning/restricting AIs. (We didn't
ban *calculators* out of fear they would "make redundant"
folks who spent their days totaling columns of figures!
Or backhoes out of fear they would make ditch diggers
redundant.)

There is always some backlash against automation of what used to be highly
skilled work. Luddites spring to mind here.

Yet, those same folks likely have no problem BENEFITING from
"labor savings" (redundancies) in the products that they
purchase/consume...

The uproar in the "artistic" world implying that they are
outright *stealing* their existing works seems a stretch,
as well. If I wrote a story that sounded a hellofalot
like one of your stories -- or painted a picture that
resembled one of yours -- would that be "wrong"? (e.g.,

Depends a bit on whether you try to pass it off as an original like some
forgers do.

But one can use NFTs (and a registry) to protect original works.
Even the works of AIs! Provenance then becomes a digitizable thing.
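As a sketch of how provenance becomes digitizable: register a content hash of the work with a creator and timestamp, first claim winning. (A dict stands in here for the on-chain ledger an actual NFT registry would use; all names are illustrative.)

```python
import hashlib
import time

# Toy provenance registry: maps a work's SHA-256 digest to (creator, when).
REGISTRY = {}

def register(work_bytes, creator):
    """Record who registered this exact work first; return its digest."""
    digest = hashlib.sha256(work_bytes).hexdigest()
    if digest not in REGISTRY:              # first claim wins
        REGISTRY[digest] = (creator, time.time())
    return digest

def provenance(work_bytes):
    """Return (creator, timestamp) if the work was registered, else None."""
    return REGISTRY.get(hashlib.sha256(work_bytes).hexdigest())
```

A later claimant registering the identical bytes cannot displace the original record, which is the property being argued for above.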

I think the area where it is most dangerous is digitising
extras in a day spent in the studio, then replacing their entire acting careers
with CGI avatars in the actual movie. The latest Indiana Jones movie shows quite
a bit of this CGI work in the last part.

Crowd scenes are costly and probably the easiest to synthesize.
Even desktop tools can perform a "passable" rendering of a
"generic crowd". They fall down when the creator gets lazy
about introducing variation into the "actors" ("Gee, this guy
over here is making the same motions as this other guy over there...
the only differences are the colors of their shirts!")

OTOH, at least they get a day's work out of it. The AIs will shortly be smart
enough to produce plausible-looking individuals from a few parameters based on
how you say you want them to look!

Yes, as above. Perhaps easier to teach an AI how to ensure variation
in the parameters used to create the "actors" than it would be to hope
a human could be systematically "random".
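One way such forced variation might look, as a sketch: sample actor parameters at random, but reject any newcomer too close to an existing actor. (The parameter names and separation threshold are invented for illustration.)

```python
import random

# Systematically "random" crowd actors: rejection-sample so no two actors
# share nearly identical parameters (no twin shirt-swapped marchers).
PARAMS = ["height", "gait_speed", "arm_swing", "shirt_hue"]

def distance(a, b):
    """Euclidean distance between two actors' parameter vectors."""
    return sum((a[p] - b[p]) ** 2 for p in PARAMS) ** 0.5

def make_crowd(n, min_separation=0.2, rng=None):
    """Generate n actors, each at least min_separation from all others."""
    rng = rng or random.Random(42)
    crowd = []
    while len(crowd) < n:
        actor = {p: rng.random() for p in PARAMS}
        if all(distance(actor, other) >= min_separation for other in crowd):
            crowd.append(actor)
    return crowd
```

The guarantee holds by construction: an actor is only admitted after passing the distance check against everyone already in the crowd.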

A-listers are probably safe for now (although the actress who played Rachael in
the first movie was motion-captured and de-aged by digital means, so she looks
the same in both films). Harrison Ford is much older.

OTOH, the A-listers would be in the most demand from a firm that couldn't
afford the genuine article.

One of the problems with "entertainment" is that it is so transitory.
A "bad actor" (unfortunate choice of words) can get in, make his
money, and be *gone* before the legal system can catch up to him.

The Abbatars Show in London is another example of what cutting-edge video
processing technology can do. I'm told it is very convincing as a real
performance by folks who have been to see it.

It is all the bit-player actors who are in danger. If AI becomes prevalent, they
each get one day's paid work and then their appearance and voice print become
the property of the studio.

Likely one of the issues in the current "labor actions", here.
Treat it as it has historically been treated: where's my royalty/residual?

[I wonder if Sheb Wooley is cursing NOT getting his due royalties?]

Likewise for some of the more formulaic movies and soaps - you could dispense
with the script writers once the AI is trained up on all the past programmes.
Generative AI is somewhat unnerving for creative types.

Exactly. But, this just leverages the fact that "every story has
already been told". New ones are just rehashes and blends of old ones.

> It used to be what we thought made us different to mere machines...

*Original* thought makes the difference. As suggested above,
a lot of what folks want to THINK of as original is just a
rehash/remix of old work.

This is particularly true in engineering and art (artists are
actually "encouraged" to steal others' ideas).

imagine the number of variants of "A Sunday Afternoon..."
you could come up with that would be *different* works
yet strongly suggestive of that original -- should
those "expressions" be banned because they weren't
created by the original artist?

How could a talking head justify his claim to "value" wrt
an animated CGI figure making the same news presentation?

News readers' days are numbered, and so are lawyers', since an AI backed by the
world's largest online databases will beat them every time with instant recall
of the appropriate case law. GPT falls flat in this respect, as it creates bogus
references to non-existent cases if backed into a corner, as some hapless, lazy
US lawyers found out the hard way:

Yes, but you can build a deterministic automaton to verify all
such references to "vet" any such claims. In that sense, such
an AI is more vulnerable because it has to back up its claims
(in verifiable ways).

An AI based on an LLM just has to make a story that sounds
entertaining.
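A sketch of such a deterministic vetter: pull anything shaped like a case citation out of the model's output and flag whatever a case-law database can't confirm. (The regex and the toy database are assumptions; a real system would query an authoritative index, not a two-entry set.)

```python
import re

# Stand-in for a real case-law database (Westlaw/PACER scale in practice).
KNOWN_CASES = {"Marbury v. Madison", "Brown v. Board of Education"}

# Party names: a capitalized word, optionally followed by more capitalized
# words (or "of"), joined by " v. ".
CITATION = re.compile(
    r"[A-Z][\w.]*(?: (?:of|[A-Z][\w.]*))* v\. [A-Z][\w.]*(?: (?:of|[A-Z][\w.]*))*"
)

def vet(text):
    """Return (verified, bogus) lists of citations found in the text."""
    cites = CITATION.findall(text)
    verified = [c for c in cites if c in KNOWN_CASES]
    bogus = [c for c in cites if c not in KNOWN_CASES]
    return verified, bogus
```

Anything in the `bogus` list either needs a database update or, as in the fined-lawyers story above, never existed at all.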

https://www.theguardian.com/technology/2023/jun/23/two-us-lawyers-fined-submitting-fake-court-citations-chatgpt

It is one way that ChatGPT abuse for student essays can be detected...

I rely heavily on tools that are increasingly AI-driven
to verify the integrity of my hardware and software designs;
should they be banned/discouraged because they deprive
someone (me!?) of additional billable labor hours?

It is interesting that, despite the huge rise in the ability of AI to handle
the grunt work, it is only really in the last year that they have come of age in
the ability to mimic other styles convincingly.

They were pretty much pastiches of the style they tried to mimic before (and
still are to some extent) but they are getting better at it.

Inject a randomizing element to mimic evolutionary changes.
Let the changes that seem to be successful persist...
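A minimal sketch of that idea: randomly perturb the style parameters and persist only the changes that score better, (1+1)-evolution style. (The fitness function below is a toy stand-in.)

```python
import random

def evolve(params, fitness, generations=200, sigma=0.1, rng=None):
    """Perturb params with Gaussian noise; keep only successful changes."""
    rng = rng or random.Random(1)
    best = dict(params)
    best_score = fitness(best)
    for _ in range(generations):
        trial = {k: v + rng.gauss(0, sigma) for k, v in best.items()}
        score = fitness(trial)
        if score > best_score:          # successful changes persist
            best, best_score = trial, score
    return best
```

Unsuccessful mutations are simply discarded, so the parameters drift only in directions the scoring function rewards.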

If an AI improved your medical care, would you campaign to ban
them on the grounds that they displace doctors and other
medical practitioners? Or, improved the fuel efficiency of
a vehicle? Or...

[I.e., does it all just boil down to "is *my* job threatened?"]

In most cases, AI can do most of the grunt work very efficiently and categorise
things into one of:

1. Correct diagnosis
2. Probable diagnosis (but needs checking by an expert)
3. No further action required
4. Uncategorisable (needs checking by an expert)
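The four bins might be realized as simple thresholds on a classifier's confidence, e.g. (the threshold values and labels here are illustrative, not from any deployed system):

```python
def triage(diagnosis, confidence, high=0.95, low=0.60):
    """Route a scan to one of the four bins described above."""
    if diagnosis is None:
        return 4                # uncategorisable: needs checking by an expert
    if diagnosis == "normal" and confidence >= high:
        return 3                # no further action required
    if confidence >= high:
        return 1                # correct (high-confidence) diagnosis
    if confidence >= low:
        return 2                # probable: needs checking by an expert
    return 4                    # too uncertain: needs checking by an expert
```

Everything routed to bins 2 and 4 lands on an expert's desk; bins 1 and 3 free that expert for the difficult edge cases.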

And, the AI is consistent. You don't worry about whether the
practitioner "on duty" at the time was proficient, having a bad
day, distracted, etc.

Conceivably, an AI can be used to *judge* human efforts and
weed out the underperformers (in a way that isn't influenced
by money or personal relations).

Since a lot of scans are in category 3, it saves a lot of time for the experts
if they only have to look at the difficult edge cases.

As it learns, the number of cases falling into bins 2 & 4 decreases with time,
but even now the best human pattern matchers (even quite average ones) can
still outperform computers on noisy image interpretation.

FWIW, I use AI for chess puzzles and computer algebra tools to do things that
would have been unthinkable only a few years ago. It doesn't get tired and, if
it takes a few days to get a result, who cares? It never makes mistakes with
missed expressions and these days can output computer code that is guaranteed
to be correct.

I think the value in engineering will come from those folks
who aren't particularly diligent in their methodology. If
you tend to be sloppy, you'll tend to see lots of "adjustments"
to your work.

The question then remains: will employers use this as a criterion
to determine whom to dismiss? Or to drive wages down, as the
CORRECTED works of the sloppy workers can approach the quality
of the more diligent?

Way back there were bugs in the Fortran output if the number of continuation
cards exceeded 9 (and it did happen with VSOP82).

Bugs tend to be relatively easy to find, given enough time and
exposure.

The tougher problem is identifying behaviors that are either
undesirable, unintended or unexpected.

Our microwave oven lets you type in the cook time. "10"
is obviously 10 seconds; "20" for 20, etc. And, "60"
is a minute! (makes sense). But, "100" is also a minute!
(implied ':')

This is counterintuitive -- until someone shoves it in your
face!

E.g., I was making pancakes and kept adjusting the time
for the "first side" upward. From 90, I increased it to 100
and wondered why things went downhill!

Of course, this is a non-critical application and one
where I can stop and think about what's just happened.
But, imagine this behavior is codified into some
other application that is called on in a time of high stress
("Hmmm... the retro rockets didn't slow us enough at a
90-second burn -- we're going to crash! Let's try 100!")
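The implied-colon rule can be sketched in a few lines (the rule is inferred from the behavior described above, not from the oven's actual firmware): two digits or fewer are literal seconds, while anything longer splits as M:SS -- which is exactly why "90" outlasts "100".

```python
def cook_seconds(entry: str) -> int:
    """Convert a keypad entry to seconds, microwave-style."""
    if len(entry) <= 2:
        return int(entry)                      # "90" -> 90 seconds
    minutes, seconds = int(entry[:-2]), int(entry[-2:])
    return 60 * minutes + seconds              # "100" -> 1:00 -> 60 seconds
```

So increasing the *entry* from 90 to 100 *decreases* the cook time from 90 to 60 seconds -- the pancake surprise.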
 
On Sun, 16 Jul 2023 16:12:07 -0400, ehsjr <ehsjr@verizon.net> wrote:

On 7/15/2023 3:56 PM, Don Y wrote:
I'm trying to come to a rational/consistent opinion wrt AI
and its various, perceived "threats".

I think the "guts" of their fear can be understood by
a mental experiment. (snipping to get there)

snip

I rely heavily on tools that are increasingly AI-driven
to verify the integrity of my hardware and software designs;
should they be banned/discouraged because they deprive
someone (me!?) of additional billable labor hours?

Imagine an AI that produces work indistinguishable from
what you produce.

Seems impossible to me.

A simple question can evoke trillions of possible answers. A question
complex enough to produce a small number of usable designs would have
to be the full product specification, which implies that most of the
thinking is already done.

Really good PCB autoplace and autoroute would be a good proof that AI
is useful in electronic design.


How would you react to the loss of
salary, the loss of recognition, the loss of reputation,
the loss of the sense of accomplishment, etc? That's
what the actors face.

Maybe AI could generate some non-dreadful plots and acting. For a
change.
 
On 7/16/2023 1:12 PM, ehsjr wrote:
On 7/15/2023 3:56 PM, Don Y wrote:
I'm trying to come to a rational/consistent opinion wrt AI
and its various, perceived "threats".

I think the "guts" of their fear can be understood by
a mental experiment. (snipping to get there)

snip

I rely heavily on tools that are increasingly AI-driven
to verify the integrity of my hardware and software designs;
should they be banned/discouraged because they deprive
someone (me!?) of additional billable labor hours?

Imagine an AI that produces work indistinguishable from
what you produce. How would you react to the loss of
salary, the loss of recognition, the loss of reputation,
the loss of the sense of accomplishment, etc? That's
what the actors face.

I'd figure I have to find another line of work *or* a way
to differentiate my "efforts" from those that the machine
can generate. The more "creative/original" the type of work,
the easier it would be to make that differentiation.

E.g., I would be hard-pressed to differentiate a *hole* dug
by me from a hole dug by a machine. OTOH, I suspect an
original/unique timepiece that I designed would be far more
appreciated by a "discerning buyer" than something likely
"purely functional" (and possibly of incredible accuracy!)
designed by machine.

[Much of my career for the past several decades has been in
creating products that haven't existed, previously. How
effective would an AI be at imagining solutions to incompletely
specified needs?]

It would be hypocritical to take advantage of "gains"
(in technology, AI, etc.) when they benefit me and
work to outlaw them when they don't, eh?
 
On 7/16/2023 12:13 PM, Clive Arthur wrote:
On 16/07/2023 16:25, bitrex wrote:

snip

The bear, named Orion, nodded solemnly, his voice resonating deep
within Tanya's soul.

Well, if he's to be named after a constellation, it might have been
better to use an Ursa rather than the hunter.

A few weeks back, I asked ChatGPT how to design a baseband OFDM
communication link, as that's what I've been doing for a while. The
answer was of no practical use to me, nor would it have been at the
start of the project; /however/, with just a little massaging, it would
have made a very good presentation to management, all the right
buzzwords etc. and without any of that pesky detail.

I've tried to get it to draw ASCII schematics of, e.g., "an op amp in
inverting configuration with a gain of 1", and the results are... ah,
creative.
 
On Monday, 17 July 2023 at 03:16:40 UTC+2, bitrex wrote:
On 7/16/2023 12:13 PM, Clive Arthur wrote:
On 16/07/2023 16:25, bitrex wrote:

snip

The bear, named Orion, nodded solemnly, his voice resonating deep
within Tanya's soul.

Well, if he's to be named after a constellation, it might have been
better to use an Ursa rather than the hunter.

A few weeks back, I asked ChatGPT how to design a baseband OFDM
communication link, as that's what I've been doing for a while. The
answer was of no practical use to me, nor would it have been at the
start of the project; /however/, with just a little massaging, it would
have made a very good presentation to management, all the right
buzzwords etc. and without any of that pesky detail.

I've tried to get it to draw ASCII schematics of, e.g., "an op amp in
inverting configuration with a gain of 1", and the results are... ah,
creative.

AI is an old fake.

AI has never existed in the past,
and what is marketed today as AI
is still humans' work done behind closed doors
to attract marketing interest from the low-brainers.
 

Darius the Dumb has posted yet one more #veryStupidByLowIQaa article.
 
On 7/16/2023 8:16 PM, bitrex wrote:
On 7/16/2023 12:13 PM, Clive Arthur wrote:
On 16/07/2023 16:25, bitrex wrote:

snip

The bear, named Orion, nodded solemnly, his voice resonating deep
within Tanya's soul.

Well, if he's to be named after a constellation, it might have been
better to use an Ursa rather than the hunter.

A few weeks back, I asked ChatGPT how to design a baseband OFDM
communication link, as that's what I've been doing for a while. The
answer was of no practical use to me, nor would it have been at the
start of the project; /however/, with just a little massaging, it would
have made a very good presentation to management, all the right
buzzwords etc. and without any of that pesky detail.

I've tried to get it to draw ASCII schematics of, e.g., "an op amp in
inverting configuration with a gain of 1", and the results are... ah,
creative.

I found them to be a disaster.
 
On Saturday, July 15, 2023 at 9:06:23 PM UTC-5, Don Y wrote:
On 7/15/2023 5:22 PM, Dean Hoffman wrote:
If an AI improved your medical care, would you campaign to ban
them on the grounds that they displace doctors and other
medical practitioners? Or, improved the fuel efficiency of
a vehicle? Or...

[I.e., does it all just boil down to "is *my* job threatened?"]

Does it bother you that AI resurrected a Brazilian singer for a car commercial?
https://www.theguardian.com/world/2023/jul/14/brazil-singer-elis-regina-artificial-intelligence-volkswagen
Why should it "bother" me?

Would it bother you if an AI painted another Rembrandt? Even if
you knew it wasn't done *by* the Master? Would it be, somehow, less
artistic? Is the beauty/value in the work itself? Or, the /provenance/?

It doesn't seem right that the woman is supposedly endorsing something she might not have heard of. Let's take an extreme example. Suppose someone could get away with something like altering Martin Luther King's "I Have a Dream" speech. Suppose someone could alter the play at home plate of a baseball game before it got to the viewers. Could someone get away with altering a speech by a presidential candidate?
I remember sportscaster Warner Wolf's line, "Let's go to the videotape".
Even if I had a personal collection of genuine articles, any
perceived additional value they held (vs. the wannabes) would still
be held (among people who assign value to uniqueness). I.e., the
owner of a "modern equivalent" could never pass his off as a
"long lost original"...

I sometimes watch the show Pawn Stars. The host calls in experts to check things
to see if they're original. He deals in genuine articles. Someone brought in a
Stradivarius violin. Someone else brought in a poem supposedly written by Jimi
Hendrix. Both were fake. It matters to people if it's real, original, and not a
knockoff. It would be good if there were some sort of marking to distinguish
AI-generated vs. original human-made work.



I'm annoyed that D. Adams wasn't a more prolific writer. I'd *welcome*
anything that an AI could produce, *if* it mimicked his wit and intellect.
(Or, would you be snobbish and avoid it out of "loyalty" to the original
artist?)
 
On 7/17/2023 3:51 AM, Dean Hoffman wrote:

Would it bother you if an AI painted another Rembrandt? Even if you knew
it wasn't done *by* the Master? Would it be, somehow, less artistic? Is
the beauty/value in the work itself? Or, the /provenance/?

It doesn't seem right that the woman is supposedly endorsing something she
might not have heard of.

Do you *really* think all of the endorsers on TV, social media, etc. REALLY
use the products that they are pushing? Does Joe Namath (or Tom Selleck
or Alex Trebek, RIP) have a reverse mortgage? Does Newt Gingrich use
Title Lock and actually believe it offers value?

<https://www.ksl.com/article/50334898/ads-claim-title-thieves-can-steal-your-home-but-can-you-really-lose-your-house>

Giuliani used to pitch some malware product (the guy who sees fraud everywhere
yet can't seem to prove any of it?)

Let's take an extreme example. Suppose someone
could get away with something like altering Martin Luther King's "I Have a
Dream" speech. Suppose someone could alter the play at home plate of a
baseball game before it got to the viewers. Could someone get away with
altering a speech by a presidential candidate?

That's using AI to commit fraud. I can cut and paste audio clips
(from magnetic tape) together and make it sound like you are
saying something else -- using 60-year-old technology. Yet,
we haven't banned tape recorders.

[It's been possible to design a speech synthesizer that sounds
like a given person for many years, now. If I call you (at some
time of day when you are unlikely to pick up) sounding like your
wife and ask you to transfer $X from checking to some other
account, would you do so?]

I remember sportscaster Warner Wolf's line, "Let's go to the videotape".

Even if I had a personal collection of genuine articles, any perceived
additional value they held (vs. the wannabes) would still be held
(among people who assign value to uniqueness). I.e., the owner of a
"modern equivalent" could never pass his off as a "long lost original"...

I sometimes watch the show Pawn Stars. The host calls in experts to check
things to see if they're original. He deals in genuine articles. Someone
brought in a Stradivarius violin. Someone else brought in a poem supposedly
written by Jimi Hendrix. Both were fake. It matters to people if it's
real, original, and not a knockoff.

That's provenance. You wouldn't buy any article WHOSE VALUE LIES
ENTIRELY IN ITS PROVENANCE without proof of same. (I've got the
original copy of the Declaration of Independence, here -- I'll let
you have it for $19.95...)

That's why such experts exist and why things like NFTs exist.

It would be good if there were some sort of marking to distinguish
AI-generated vs. original human-made work.

The problem comes from The Unwashed Masses, who will believe anything
peddled to them by a huckster SOUNDING genuine (covid started in a lab,
masks don't work, there are microchips in the vaccines, etc.).

There's little you can do to dissuade these people from believing
what they WANT to believe -- *EVIDENCE* to the contrary!

We have all sorts of hucksters LEGALLY promoting bad ideas, and
there's nothing you can really do to stop them -- if they
stay within the bounds of the law ("I didn't CLAIM this was
a medicine that would cure your terminal illness. I merely
let you vest your hope in that self-delusion -- and profited
from YOUR *need* to believe.")

Again, if an existing technology can do these things, why
shouldn't an AI be able to do them, too?

I'm annoyed that D. Adams wasn't a more prolific writer. I'd *welcome*
anything that an AI could produce, *if* it mimicked his wit and intellect.
(Or, would you be snobbish and avoid it out of "loyalty" to the original
artist?)
 
On 7/17/2023 11:34 AM, Don Y wrote:
That's using AI to commit fraud. I can cut and paste audio clips
(from magnetic tape) together and make it sound like you are
saying something else -- using 60-year-old technology. Yet,
we haven't banned tape recorders.

Which reminded me of an amusing anecdote...

A friend made a mix tape many years ago (college) -- for
parties. One of the tunes was Yellow Submarine (Beatles).
But, he had manually (audio tape!) edited the song so that
*one* of the choruses was:

We all live in a yellow submarine
Yellow submarine, yellow submarine, yellow submarine

This is amazingly easy to do -- just by manually positioning
the tape for dubbing.

And, few people (esp. in the party atmosphere) would ever notice
the "fraud". (And, those who had an inkling that something was
wrong wouldn't be able to (easily) verify their suspicions.)
 
On 7/16/2023 7:20 PM, Don Y wrote:
On 7/16/2023 1:12 PM, ehsjr wrote:
On 7/15/2023 3:56 PM, Don Y wrote:
I'm trying to come to a rational/consistent opinion wrt AI
and its various, perceived "threats".

I think the "guts" of their fear can be understood by
a mental experiment. (snipping to get there)

snip

I rely heavily on tools that are increasingly AI-driven
to verify the integrity of my hardware and software designs;
should they be banned/discouraged because they deprive
someone (me!?) of additional billable labor hours?

Imagine an AI that produces work indistinguishable from
what you produce. How would you react to the loss of
salary, the loss of recognition, the loss of reputation,
the loss of the sense of accomplishment, etc? That's
what the actors face.

I'd figure I have to find another line of work *or* a way
to differentiate my "efforts" from those that the machine
can generate. The more "creative/original" the type of work,
the easier it would be to make that differentiation.

With that type of (good, in my opinion) thinking, you can't
understand how the actors fear AI. I suppose that renders
your original question unanswerable, to you. If you can't
imagine AI producing work indistinguishable from your own,
the thought experiment fails.

Ed

E.g., I would be hard-pressed to differentiate a *hole* dug
by me from a hole dug by a machine. OTOH, I suspect an
original/unique timepiece that I designed would be far more
appreciated by a "discerning buyer" than something likely
"purely functional" (and possibly of incredible accuracy!)
designed by machine.

[Much of my career for the past several decades has been in
creating products that haven't existed, previously. How
effective would an AI be at imagining solutions to incompletely
specified needs?]

It would be hypocritical to take advantage of "gains"
(in technology, AI, etc.) when they benefit me and
work to outlaw them when they don't, eh?
 
On 7/17/2023 11:57 AM, ehsjr wrote:
On 7/16/2023 7:20 PM, Don Y wrote:
On 7/16/2023 1:12 PM, ehsjr wrote:
On 7/15/2023 3:56 PM, Don Y wrote:
I'm trying to come to a rational/consistent opinion wrt AI
and its various, perceived "threats".

I think the "guts" of their fear can be understood by
a mental experiment. (snipping to get there)

snip

I rely heavily on tools that are increasingly AI-driven
to verify the integrity of my hardware and software designs;
should they be banned/discouraged because they deprive
someone (me!?) of additional billable labor hours?

Imagine an AI that produces work indistinguishable from
what you produce. How would you react to the loss of
salary, the loss of recognition, the loss of reputation,
the loss of the sense of accomplishment, etc? That's
what the actors face.

I'd figure I have to find another line of work *or* a way
to differentiate my "efforts" from those that the machine
can generate. The more "creative/original" the type of work,
the easier it would be to make that differentiation.

With that type of (good, in my opinion) thinking, you can't
understand how the actors fear AI. I suppose that renders
your original question unanswerable, to you. If you can't
imagine AI producing work indistinguishable from your own,
the thought experiment fails.

In \"this\" industry, one is ALWAYS evolving their skillset.
You don\'t expect to start out doing X and finish off doing X.

I started out designing hardware in SSI/MSI, different logic
families were just different design constraints; computing
setup and hold times for worst case timing, verifying operating
limits with temperature and Vcc, ensuring hazard-free decoding,
etc.

Then, PLAs came along -- different design techniques, tools,
skills, etc. Bipolar PROMs to replace junk logic.

Then, standard cell. Then, full custom. Now, folks write VHDL
to do what I had to do "by hand". (Test vector generation, anybody?)

Ditto in software. ASM and "scope loops" gave way to HLLs and
symbolic debugging. Foreground/background systems gave way to
multitasking. Then multiprocessing. Then distributed systems.

I.e., there is ALWAYS change, and it's an opportunity (unless you
are a stick-in-the-mud).

I suspect the vocations that are at risk are those that don't have
such evolving skillsets. What does a TV presenter evolve into
(in the NORMAL course of their career)? Ditto accountants,
lawyers, actors, writers, artists, etc.

They can possibly adopt other media and tools. But, their basic
skillset is effectively immutable. I suspect that they know this,
subconsciously. An actor can try to move into directing/producing.
But, how many directors/producers does that industry need? An
accountant can become a CPA or CFO, etc., but, again, that's a
limited opportunity.

EVERY painter can learn to use a roller... spray gun... etc.
 
On Sun, 16 Jul 2023 22:41:29 -0500, John S <Sophi.2@invalid.org>
wrote:

On 7/16/2023 8:16 PM, bitrex wrote:
On 7/16/2023 12:13 PM, Clive Arthur wrote:
On 16/07/2023 16:25, bitrex wrote:

snip

The bear, named Orion, nodded solemnly, his voice resonating deep
within Tanya's soul.

Well, if he's to be named after a constellation, it might have been
better to use an Ursa rather than the hunter.

A few weeks back, I asked ChatGPT how to design a baseband OFDM
communication link, as that's what I've been doing for a while. The
answer was of no practical use to me, nor would it have been at the
start of the project; /however/, with just a little massaging, it would
have made a very good presentation to management, all the right
buzzwords etc. and without any of that pesky detail.

I've tried to get it to draw ASCII schematics of, e.g., "an op amp in
inverting configuration with a gain of 1", and the results are... ah,
creative.


I found them to be a disaster.

Yes! They look like something I might try to draw while on some
strong psychedelic drug.

boB
 
On Saturday, July 15, 2023 at 3:57:16 PM UTC-4, Don Y wrote:
I'm trying to come to a rational/consistent opinion wrt AI
and its various, perceived "threats".

I can understand how a person that can be *replaced* by an AI
would fear for their livelihood. But, that (to me) isn't a
blanket reason for banning/restricting AIs. (We didn't
ban *calculators* out of fear they would "make redundant"
folks who spent their days totaling columns of figures!
Or backhoes out of fear they would make ditch diggers
redundant.)

The uproar in the "artistic" world implying that they are
outright *stealing* their existing works seems a stretch,
as well. If I wrote a story that sounded a hellofalot
like one of your stories -- or painted a picture that
resembled one of yours -- would that be "wrong"? (e.g.,
imagine the number of variants of "A Sunday Afternoon..."
you could come up with that would be *different* works
yet strongly suggestive of that original -- should
those "expressions" be banned because they weren't
created by the original artist?)

How could a talking head justify his claim to "value" wrt
an animated CGI figure making the same news presentation?

https://www.youtube.com/watch?v=cYdpOjletnc

I rely heavily on tools that are increasingly AI-driven
to verify the integrity of my hardware and software designs;
should they be banned/discouraged because they deprive
someone (me!?) of additional billable labor hours?

If an AI improved your medical care, would you campaign to ban
them on the grounds that they displace doctors and other
medical practitioners? Or, improved the fuel efficiency of
a vehicle? Or...

[I.e., does it all just boil down to "is *my* job threatened?"]

This could happen, if 'actors' keep going out on strike:

https://www.imdb.com/title/tt0258153/
 
On 7/17/2023 7:06 PM, Michael Terrell wrote:
How could a talking head justify his claim to "value" wrt
an animated CGI figure making the same news presentation?

https://www.youtube.com/watch?v=cYdpOjletnc

Yes, I made this reference to SWMBO and it just went "woosh",
over her head. <frown>

I rely heavily on tools that are increasingly AI-driven
to verify the integrity of my hardware and software designs;
should they be banned/discouraged because they deprive
someone (me!?) of additional billable labor hours?

If an AI improved your medical care, would you campaign to ban
them on the grounds that they displace doctors and other
medical practitioners? Or, improved the fuel efficiency of
a vehicle? Or...

[I.e., does it all just boil down to "is *my* job threatened?"]

This could happen, if 'actors' keep going out on strike:

https://www.imdb.com/title/tt0258153/

I don't think the "AI threat" just applies to actors, writers, etc.

A good deal of MANY jobs can be replaced by a "smart monkey"...
even more so by a VERY smart monkey!

We already see "nurse practitioners" doing what doctors *used*
to do (though under the supervision of a doctor). Wait until
the *doctors* act under the supervision of an *AI* doctor!
(Where does all that "prestige" go once YOU are relegated to
that subservient role?)

Tattoo artists? Physical therapists? Accountants?

Will we see self-driving garbage trucks? (navigate to next house;
park adjacent to trash container; activate grabber to lift
container into truck; step-and-repeat)

Mail delivery? (see above)

Think about the number of jobs that are largely static in
their skillsets and of limited promotion paths. I suspect
all will be targeted, eventually (talking heads should be
the first as they REALLY add no value! Imagine a Maxine
Headroom, Maxwell, Maximillion, etc.)
 
On Tuesday, July 18, 2023 at 1:34:41 AM UTC-4, Don Y wrote:
On 7/17/2023 7:06 PM, Michael Terrell wrote:
How could a talking head justify his claim to \"value\" wrt
an animated CGI figure making the same news presentation?

https://www.youtube.com/watch?v=cYdpOjletnc
Yes, I made this reference to SWMBO and it just went \"woosh\",
over her head. <frown>
I rely heavily on tools that are increasingly AI-driven
to verify the integrity of my hardware and software designs;
should they be banned/discouraged because they deprive
someone (me!?)
If an AI improved your medical care, would you campaign to ban
them on the grounds that they displace doctors and other
medical practitioners? Or, improved the fuel efficiency of
a vehicle? Or...

My doctor was on vacation a while back. I had to show her replacement the proper way to apply the triple layer wraps to my legs.

[I.e., does it all just boil down to \"is *my* job threatened?\"]

This could happen, if \'actors\' keep going out on strike:

https://www.imdb.com/title/tt0258153/
I don\'t think the \"AI threat\" just applies to actors, writers, etc.

A good deal of MANY jobs can be replaced by a \"smart monkey\"...
even moreso by a VERY smart monkey!

Some could be replaced by a brain dead monkey.


We already see \"nurse practitioners\" doing what doctors *used*
to do (though under the supervision of a doctor). Wait until
the *doctors* act under the supervision of an *AI* doctor!
(where does all that \"prestige\" go once YOU are relegated to
that subservient role?)

Like the holographic doctor on Star Trek Voyager? (BTW, it is streaming on Paramount Plus. It is free if you use Walmart Plus.)

Nurses have traditionally done the grunt work. Sadly, there are a few that weren\'t even good at that. I was pissing blood, so I went to the VA clinic, only to be turned away by \'Nurse Ratched\'. I told her that it was a bladder infection and asked to be tested. She threw a \'Sloman\', ranting that it was obviously kidney stones, and that I couldn\'t possibly know what I was talking about. I made the hour-long ride to a VA hospital, only to be told that I had a bladder infection that should have been treated at my local clinic.

When I had a \'Third Nerve Palsy\' in my right eye, she told me to go buy f...ing eye drops and stop my f...ing whining. That required seven trips to the VA hospital, an MRI and over six months to heal. If I had listened to her, the damage would have been permanent. I couldn\'t open the eyelid, nor move the eye if I used my finger to lift it. Even two years later, they would occasionally unlock if I turned my head too fast, to look at something closer.

She refused to give me my first Glucose meter, then ranted that I wasn\'t keeping a log to bring to my appointments. I was \'informed\' that it was impossible to use one properly without attending a class. It was sitting on her desk, so I took it, opened the box and coded it. I used the sample to verify it and then tested my blood while she was yelling at me.

A good AI would beat care like that, any day.


Tattoo artists? Physical therapists? Accountants?

Will we see self-driving garbage trucks? (navigate to next house;
park adjacent to trash container; activate grabber to lift
container into truck; step-and-repeat)

Mail delivery? (see above)

Think about the number of jobs that are largely static in
their skillsets and of limited promotion paths. I suspect
all will be targeted, eventually (talking heads should be
the first as they REALLY add no value! Imagine a Maxine
Headroom, Maxwell, Maximillion, etc.)

They are already using the \'Wind them up and watch them walk into the wall\' model!
 
On 7/18/2023 10:31 AM, Michael Terrell wrote:
My doctor was on vacation a while back. I had to show her replacement the
proper way to apply the triple layer wraps to my legs.

You should be grateful that the replacement didn\'t get all \"huffy\"
about \"being shown\"! Some have egos that get in the way of good care...

[One thing I liked about my PCP (he retired recently) was that he would
always try to answer my questions -- even if it meant digging out
medical texts to pore over in the exam room (while the NEXT patient
waited! :< ) ]

[I.e., does it all just boil down to \"is *my* job threatened?\"]

This could happen, if \'actors\' keep going out on strike:

https://www.imdb.com/title/tt0258153/
I don\'t think the \"AI threat\" just applies to actors, writers, etc.

A good deal of MANY jobs can be replaced by a \"smart monkey\"... even
moreso by a VERY smart monkey!

Some could be replaced by a brain dead monkey.

And this brings up an uncomfortable truth. What do we (as a society) \"do\"
with those folks who really can\'t contribute in the New World Order?
Surely we can\'t put them out to pasture. Do we *preserve* menial jobs
just to give them a place in society?

E.g., we try to accommodate Down\'s Syndrome kids, autistics, etc.
instead of institutionalizing them. What about folks who don\'t
fall in these categories but are just \"too stupid\" (said inelegantly)?

We already see \"nurse practitioners\" doing what doctors *used* to do
(though under the supervision of a doctor). Wait until the *doctors* act
under the supervision of an *AI* doctor! (where does all that \"prestige\"
go once YOU are relegated to that subservient role?)

Like the holographic doctor on Star Trek Voyager? (BTW, it is streaming on
Paramount Plus. It is free. if you use Walmart Plus.)

I don\'t think there will be much more effort than a cgi 2D image, at most.
I think people still would want to relate to a \"human appearing\" entity
even if it is just an embellishment over \"text output\".

OTOH, does a *doctor* need to see a cgi entity to accept a set of
recommendations issued by it?

> Nurses have traditionally done the grunt work.

Yeah, I had a lover many years ago who would remind me of that,
pretty regularly! :> She had a lot of disdain for many doctors
(\"*I* caught his MISTAKE and he ragged on me as if *I* had done
something wrong...)

Sadly, there are a few that
weren\'t even good at that. I was pissing blood, so I went to the VA clinic,
only to be turned away by \'Nurse Ratched\'. I told her that it was a bladder
infection and asked to be tested. She threw a \'Sloman\', ranting that it was
obviously kidney stones, and that I couldn\'t possibly know what I was
talking about. I made the hour-long ride to a VA hospital, only to be told
that I had a bladder infection that should have been treated at my local
clinic.

The consolation is that she will likely receive care from someone
equally inept, in her lifetime. *AND*, it will doubly annoy her
because she will THINK that *she* was always providing excellent care...

When I had a \'Third Nerve Palsy\' in my right eye, she told me to go buy
f...ing eye drops and stop my f...ing whining. That required seven trips to
the VA hospital, an MRI and over six months to heal. If I had listened to
her, the damage would have been permanent. I couldn\'t open the eyelid, nor
move the eye if I used my finger to lift it. Even two years later, they
would occasionally unlock if I turned my head too fast, to look at something
closer.

Wow.

I often drive a friend to the local VA hospital as the walk from the
parking lot to the appropriate \"sub-building\" is quite a hike for him.
I can, instead, drive him to the door closest to his destination
and then go park the car (and walk back to where he is getting his care).

First time I did this, he gave me his handicapped tag so I could
park in one of the handicapped spaces (don\'t know why as *I* don\'t
need that).

I was stunned to find it wasn\'t a \"set of spaces\" but, rather, a
frigging parking *lot*! When I made that observation to him,
later, he said \"Lots of guys here with problems...\" and just
directed his eyes out the windows of the waiting room we were
in to watch the folks moving by...

[So, Thank You for Your Service]

She refused to give me my first Glucose meter, then ranted that I wasn\'t
keeping a log to bring to my appointments. I was \'informed\' that it was
impossible to use one properly without attending a class. It was sitting on
her desk, so I took it, opened the box and coded it. I used the sample to
verify it and then tested my blood while she was yelling at me.

A good AI would beat care like that, any day.

Sorry that your experience has been so shitty. One thing I noticed
from my visits to the VA here (see above) is how polite and attentive
they have seemed to be. It was refreshing when contrasted to some of the
folks in \"private practice\" that don\'t hesitate to let you know
(and pay for!) that they are having a bad day...

Tattoo artists? Physical therapists? Accountants?

Will we see self-driving garbage trucks? (navigate to next house; park
adjacent to trash container; activate grabber to lift container into
truck; step-and-repeat)

Mail delivery? (see above)

Think about the number of jobs that are largely static in their skillsets
and of limited promotion paths. I suspect all will be targeted, eventually
(talking heads should be the first as they REALLY add no value! Imagine a
Maxine Headroom, Maxwell, Maximillion, etc.)

They are already using the \'Wind them up and watch them walk into the wall\'
model!

Which returns to the original question. If it\'s not YOUR job that\'s
being made redundant, then what reason NOT to exploit AIs?
 
On Tuesday, July 18, 2023 at 3:11:43 PM UTC-4, Don Y wrote:
On 7/18/2023 10:31 AM, Michael Terrell wrote:

My doctor was on vacation a while back. I had to show her replacement the
proper way to apply the triple layer wraps to my legs.
You should be grateful that the replacement didn\'t get all \"huffy\"
about \"being shown\"! Some have egos that get in the way of good care...

She was being trained for wound care, so she gladly accepted my help. It can look right and work OK, but there are tricks that allow you to unwrap them, instead of needing scissors to cut them off.


[One thing I liked about my PCP (he retired recently) was that he would
always try to answer my questions -- even if it meant digging out
medical texts to pore over in the exam room (while the NEXT patient
waited! :< ) ]

I had one VA doctor answer a very simple question wrong, after 30 seconds on the internet. Epsom Salt is labeled \'Not for use by Diabetics\'. He told me that it dries out your skin. That was wrong, in two ways. It was used as a laxative at one time, but it affected your electrolytes. The second error was that it softens dead skin, so it helps remove rough, dead skin. The result is revealing healthy skin without risk of scratches or tearing the skin, because it rolls off after soaking. This was when a new, much larger VA clinic opened about the same distance south of me. I was told that there was a two-year waiting list, but I applied for a transfer. Two weeks later, they transferred me to a new doctor. Sadly, she had the same ego. Both of them were from India, and they played their \'Patients are all low class\' card to the hilt.

She didn\'t last long. After her, most of my doctors are Veterans who worked in Military hospitals. They show us respect, and ask if we need anything more than just a regular checkup.


A good deal of MANY jobs can be replaced by a \"smart monkey\"... even
moreso by a VERY smart monkey!

Some could be replaced by a brain dead monkey.
And this brings up an uncomfortable truth. What do we (as a society) \"do\"
with those folks who really can\'t contribute in the New World Order?
Surely we can\'t put them out to pasture. Do we *preserve* menial jobs
just to give them a place in society?

E.g., we try to accommodate Down\'s Syndrome kids, autistics, etc.
instead of institutionalizing them. What about folks who don\'t
fall in these categories but are just \"too stupid\" (said inelegantly)?
We already see \"nurse practitioners\" doing what doctors *used* to do
(though under the supervision of a doctor). Wait until the *doctors* act
under the supervision of an *AI* doctor! (where does all that \"prestige\"
go once YOU are relegated to that subservient role?)

Like the holographic doctor on Star Trek Voyager? (BTW, it is streaming on
Paramount Plus. It is free if you use Walmart Plus.)
I don\'t think there will be much more effort than a cgi 2D image, at most.
I think people still would want to relate to a \"human appearing\" entity
even if it is just an embellishment over \"text output\".

OTOH, does a *doctor* need to see a cgi entity to accept a set of
recommendations issued by it?

Nurses have traditionally done the grunt work.

Yeah, I had a lover many years ago who would remind me of that,
pretty regularly! :> She had a lot of disdain for many doctors
(\"*I* caught his MISTAKE and he ragged on me as if *I* had done
something wrong...)

Sadly, there are a few that
weren\'t even good at that. I was pissing blood, so I went to the VA clinic,
only to be turned away by \'Nurse Ratched\'. I told her that it was a bladder
infection and asked to be tested. She threw a \'Sloman\', ranting that it was
obviously kidney stones, and that I couldn\'t possibly know what I was
talking about. I made the hour long ride to a VA hospital, only to be told
that I had a bladder infection, that should have been treated at my local
clinic.

The consolation is that she will likely receive care from someone
equally inept, in her lifetime. *AND*, it will doubly annoy her
because she will THINK that *she* was always providing excellent care...

When I had a \'Third Nerve Palsy\' in my right eye, she told me to go buy
f...ing eye drops and stop my f...ing whining. That required seven trips to
the VA hospital, an MRI and over six months to heal. If I had listened to
her, the damage would have been permanent. I couldn\'t open the eyelid, nor
move the eye if I used my finger to lift it. Even two years later, they
would occasionally unlock if I turned my head too fast, to look at something
closer.
Wow.

I often drive a friend to the local VA hospital as the walk from the
parking lot to the appropriate \"sub-building\" is quite a hike for him.
I can, instead, drive him to the door closest to his destination
and then go park the car (and walk back to where he is getting his care).

They have a shuttle at the Gainesville VA hospital, since the parking lot is larger than the hospital. The Ocala CBOC just got a golf cart to take you from the front door to your car.

First time I did this, he gave me his handicapped tag so I could
park in one of the handicapped spaces (don\'t know why as *I* don\'t
need that).

I was stunned to find it wasn\'t a \"set of spaces\" but, rather, a
frigging parking *lot*! When I made that observation to him,
later, he said \"Lots of guys here with problems...\" and just
directed his eyes out the windows of the waiting room we were
in to watch the folks moving by...

[So, Thank You for Your Service]

You\'re welcome.

The VA system has the highest average age and percentage of disabled patients of any provider in the United States. They also offer many services that other hospitals don\'t, and they do research in fields like treating TBI. They created \'The Million Veterans Program\', which requested a DNA sample. The database will also take your medical history, to look for connections to diseases.

She refused to give me my first Glucose meter, then ranted that I wasn\'t
keeping a log to bring to my appointments. I was \'informed\' that it was
impossible to use one properly without attending a class. It was sitting on
her desk, so I took it, opened the box and coded it. I used the sample to
verify it and then tested my blood while she was yelling at me.

A good AI would beat care like that, any day.

Sorry that your experience has been so shitty. One thing I noticed
from my visits to the VA here (see above) is how polite and attentive
they have seemed to be. It was refreshing when contrasted to some of the
folks in \"private practice\" that don\'t hesitate to let you know
(and pay for!) that they are having a bad day...

99% are good, but a few idiots and bullies slip through the hiring process.


Tattoo artists? Physical therapists? Accountants?

Will we see self-driving garbage trucks? (navigate to next house; park
adjacent to trash container; activate grabber to lift container into
truck; step-and-repeat)

Mail delivery? (see above)

Think about the number of jobs that are largely static in their skill sets
and of limited promotion paths. I suspect all will be targeted, eventually
(talking heads should be the first as they REALLY add no value! Imagine a
Maxine Headroom, Maxwell, Maximillion, etc.)

They are already using the \'Wind them up and watch them walk into the wall\'
model!
Which returns to the original question. If it\'s not YOUR job that\'s
being made redundant, then what reason NOT to exploit AIs?

I created an \'Expert System\' to help troubleshoot a very complex circuit board over 20 years ago. That was a low-grade AI that got its input from the test fixture, then listed what needed to be checked on the monitor.
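A fixture-driven rule table of that sort might be sketched as below. This is a minimal, hypothetical reconstruction (the test names, limits, and component designators are invented, not from the actual fixture): each rule ties a failed measurement to the parts worth checking, so the operator sees which test failed and what to probe next.

```python
# Hypothetical sketch of a fixture-driven diagnostic rule table.
# Each rule: (test name, predicate that flags a bad reading, suggested checks).
RULES = [
    ("5V rail",    lambda v: not 4.75 <= v <= 5.25, ["U1 regulator", "C3 input cap"]),
    ("osc freq",   lambda v: abs(v - 8.0e6) > 8e3,  ["Y1 crystal", "C7/C8 load caps"]),
    ("output amp", lambda v: v < 1.0,               ["Q2 driver", "R14 bias resistor"]),
]

def diagnose(measurements):
    """Return (failed test, checks) pairs instead of silently stopping."""
    findings = []
    for name, failed, checks in RULES:
        if name in measurements and failed(measurements[name]):
            findings.append((name, checks))
    return findings

# A board with a sagging 5V rail:
report = diagnose({"5V rail": 4.2, "osc freq": 8.0e6, "output amp": 1.4})
for test, checks in report:
    print(f"FAIL {test}: check {', '.join(checks)}")  # FAIL 5V rail: check U1 regulator, C3 input cap
```

The point is that the rule table is plain data: a technician can read it, audit it, and extend it without touching the test-harness code itself.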

The original software simply stopped when there was a problem, and didn\'t tell you what test had failed. On top of that, there were many errors in the test software. I politely asked the ET who built it to fix his mistakes. I was brushed off with, \"I don\'t remember building it. Go away. Fix the damned thing, yourself.\" Then he bitched when I did just that. He was told, \"You know better than to tell Michael that, if you don\'t want him to do it.\"
 
On 7/18/2023 6:43 PM, Michael Terrell wrote:

[One thing I liked about my PCP (he retired recently) was that he would
always try to answer my questions -- even if it meant digging out medical
texts to pore over in the exam room (while the NEXT patient waited! :< )
]

I had one VA doctor answer a very simple question wrong, after 30 seconds on
the internet. Epsom Salt is labeled \'Not for use by Diabetics\'. He told me
that it dries out your skin. That was wrong, in two ways. It was used as a
laxative at one time, but it affected your electrolytes. The second error
was that it softens dead skin, so it helps remove rough, dead skin. The
result is revealing healthy skin without risk of scratches or tearing the
skin, because it rolls off after soaking.

Then why the warning/contraindication?

This was when a new, much larger VA
clinic opened about the same distance south of me. I was told that there was
a two-year waiting list, but I applied for a transfer. Two weeks later,
they transferred me to a new doctor. Sadly, she had the same ego. Both of
them were from India, and they played their \'Patients are all low class\'
card to the hilt.

Yes, when my PCP retired, an Indian couple came in to take his place.
\"No thank you\". My experience has been that they have an \"attitude\":
\"I\'m the doctor, you will do what I say!\"

By contrast, my PCP would give *advice*. Then, we\'d figure out what
I was *willing* to do (\"No, let\'s defer the medication route and see
what I can do with dietary changes...\")

He was smart enough to trust my own self-assessment (and, I left him
a back-door where he could bring up his solution at a later date if
I failed to achieve my goal)

She didn\'t last long. After her, most of my doctors are Veterans who worked
in Military hospitals. They show us respect, and ask if we need anything
more than just a regular checkup.

My friend\'s sole complaint is that they have made some \"recommendations\"
and he\'s not keen on accepting that course of treatment (open heart surgery
with an estimated poor survivability). He claims that his refusal of
treatment can jeopardize his continued care (???). So, his solution is
to keep rescheduling appointments related to *that* care...

When I had a \'Third Nerve Palsey\' in my right eye, she told be to go
buy F..ing eye drops and stop my f..ing whining. That required seven
trips to the VA hospital, an MRI and over six months to heal. If I had
listened to her, the damage would have been permanent. I couldn\'t open
the eyelid, nor move the eye if I used my finger to lift it. Even two
years later, they would occasionally unlock if I turned my head too
fast, to look at something closer.
Wow.

I often drive a friend to the local VA hospital as the walk from the
parking lot to the appropriate \"sub-building\" is quite a hike for him. I
can, instead, drive him to the door closest to his destination and then go
park the car (and walk back to where he is getting his care).

They have a shuttle at the Gainesville VA hospital, since the parking lot is
larger than the hospital. The Ocala CBOC just got a golf cart to take you
from the front door to your car.

This is the local VA:

<https://www.google.com/maps/@32.1811412,-110.9642396,675m/data=!3m1!1e3?entry=ttu>
My buddy gets care near the rotary at the center of the image.
Note there are only 8-12 handicap spots, there. So, you\'d
have to park \"out front\" (the black strips that look like barracks are
PV covered parking areas) and hike to your destination, through
the building (there may be an electric shuttle, indoors -- but I\'ve
never encountered it!)

If you are \"able-bodied\", it\'s just an annoyingly long walk
(most hospitals, here, are similar examples of sprawl).

But, if you have any health or mobility issues, it can be
brutal. When we walk BACK to the rotary, after his appointment
(I let him sit while I continue the trek to fetch the car),
he needs to stop a few times to catch his breath. Sometimes,
a passing unoccupied wheelchair may come along...

First time I did this, he gave me his handicapped tag so I could park in
one of the handicapped spaces (don\'t know why as *I* don\'t need that).

I was stunned to find it wasn\'t a \"set of spaces\" but, rather, a frigging
parking *lot*! When I made that observation to him, later, he said \"Lots
of guys here with problems...\" and just directed his eyes out the windows
of the waiting room we were in to watch the folks moving by...

[So, Thank You for Your Service]

You\'re welcome.

The VA system has the highest average age and percentage of disabled
patients of any provider in the United States.

No doubt -- and for \"good\" (dubious choice of words) reason!

They also offer many
services that other hospitals don\'t, and they do research in fields like
treating TBI. They created \'The Million Veterans Program\', which requested a
DNA sample. The database will also take your medical history, to look for
connections to diseases.

IIRC, Martin has made references to a similar program run by the NHS (?)

Of course, with our disjointed private medical services, there\'s no
central agency to coordinate such efforts -- outside of the VA.

[I installed a Reading Machine at this VA in ~1978. It was likely the
only one in all of AZ (we had only built 50, at the time)]

Think about the number of jobs that are largely static in their skill
sets and of limited promotion paths. I suspect all will be targeted,
eventually (talking heads should be the first as they REALLY add no
value! Imagine a Maxine Headroom, Maxwell, Maximillion, etc.)

They are already using the \'Wind them up and watch them walk into the
wall\' model!
Which returns to the original question. If it\'s not YOUR job that\'s being
made redundant, then what reason NOT to exploit AIs?

I created an \'Expert System\' to help troubleshoot a very complex circuit
board over 20 years ago. That was a low-grade AI that got its input from the
test fixture, then listed what needed to be checked on the monitor.

Expert Systems are probably the easiest AIs to \"relate to\". They\'re
intuitive -- even if complex. But, they typically don\'t \"learn\".
A knowledge engineer is responsible for crafting them, relying on
his own specific knowledge of the application domain/problem space.

I.e., it\'s no smarter than the person who creates it.

But, it can be very thorough -- even moreso than its creator
because humans tend to forget details whereas the AI won\'t.

I\'ve tried to build most of my AIs as expert systems (\"Production
Systems\") that are dynamically modified by a neural net. So,
the rules can change (\"learn\") but, more importantly, a human
can inspect the rules, as they exist at any given point in time,
and understand WHY a particular decision was made/action taken.
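As a sketch of that split (all names are invented stand-ins, not the actual Production System): the rules stay human-readable, every decision carries an audit trail of which rules fired, and the only thing the \"learner\" is allowed to touch is a rule\'s weight.

```python
# Illustrative only: a tiny production system whose rule weights a separate
# learner may nudge, while the rules themselves remain inspectable.
from dataclasses import dataclass, field

@dataclass
class Rule:
    name: str
    condition: callable   # facts -> bool
    action: str           # what firing this rule recommends
    weight: float = 1.0   # the ONLY knob the learner may turn

@dataclass
class ProductionSystem:
    rules: list = field(default_factory=list)

    def decide(self, facts):
        # Fire every applicable rule; take the highest-weighted action.
        fired = [r for r in self.rules if r.weight > 0 and r.condition(facts)]
        if not fired:
            return None, []
        best = max(fired, key=lambda r: r.weight)
        return best.action, [r.name for r in fired]   # decision + audit trail

    def tweak(self, name, delta):
        # The learner adjusts weights only -- it cannot invent new inputs.
        for r in self.rules:
            if r.name == name:
                r.weight = max(0.0, r.weight + delta)

ps = ProductionSystem([
    Rule("hot",  lambda f: f["temp_C"] > 30, "cooling_on"),
    Rule("cold", lambda f: f["temp_C"] < 15, "heating_on"),
])
print(ps.decide({"temp_C": 35}))   # ('cooling_on', ['hot'])
ps.tweak("hot", -1.0)              # learner decides this rule misfires
print(ps.decide({"temp_C": 35}))   # (None, []) -- and a human can see why
```

Because the learner can only scale weights (and a weight of zero silences a rule), a human can always dump the rule list at any moment and see exactly why a given decision was made.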

The original software simply stopped when there was a problem, and didn\'t
tell you what test had failed. On top of that, there were many errors in the test
software. I politely asked the ET who built it to fix his mistakes. I was
brushed off with, \"I don\'t remember building it. Go away. Fix the damned
thing, yourself.\" Then he bitched when I did just that. He was told, \"You
know better than to tell Michael that, if you don\'t want him to do it.\"

Letting \"engineers\" write code is almost always a mistake. Someone with
knowledge of the PROBLEM space needs to figure out what the code should do
and HOW it should do it as well as how it should interact with the
ACTUAL user(s).

[I can\'t begin to count the number of times I\'ve \"intentionally\" crashed
programs to point out \"unfortunate\" assumptions that their writers had
made in their designs -- without even having a formal notion of the
problem that they were trying to solve! :< ]

Otherwise, you get code that walks itself into a corner or forces a user
to do X when he really wants to do Y.

[I visited a website, recently, to create an account. I was offered a
choice of authentication strategies, at one point. I opted for the 2FA
option. Then, realized I wasn\'t happy with any of the \"second factors\"
they had implemented. Ah, but there\'s no way to \"go back\" to the point
before you made that choice! And, if you log out and log back in, it
cheerfully returns you to this same point in your account configuration
process. <frown> So, abandon the account and start over again! I
wonder if they have a GC process that periodically scans the accounts
to close out those that haven\'t been finalized in N days...?]

Of course, this is true of many things -- failing to consult the
stakeholders about their needs and IMPOSING your own notion of a
solution.
 
On 7/18/2023 8:52 PM, Don Y wrote:
The original software simply stopped when there was a problem, and didn\'t
tell you what test had failed. On top of that, there were many errors in the test
software. I politely asked the ET who built it to fix his mistakes. I was
brushed off with, \"I don\'t remember building it. Go away. Fix the damned
thing, yourself.\" Then he bitched when I did just that. He was told, \"You
know better than to tell Michael that, if you don\'t want him to do it.\"

Letting \"engineers\" write code is almost always a mistake.  Someone with

I should have said \"design\" instead of \"write\" though \"design\" is often
a misstatement of the process! <frown>

knowledge of the PROBLEM space needs to figure out what the code should do
and HOW it should do it as well as how it should interact with the
ACTUAL user(s).

[I can\'t begin to count the number of times I\'ve \"intentionally\" crashed
programs to point out \"unfortunate\" assumptions that their writers had
made in their designs -- without even having a formal notion of the
problem that they were trying to solve!  :< ]

Otherwise, you get code that walks itself into a corner or forces a user
to do X when he really wants to do Y.

[I visited a website, recently, to create an account.  I was offered a
choice of authentication strategies, at one point.  I opted for the 2FA
option.  Then, realized I wasn\'t happy with any of the \"second factors\"
they had implemented.  Ah, but there\'s no way to \"go back\" to the point
before you made that choice!  And, if you log out and log back in, it
cheerfully returns you to this same point in your account configuration
process.  <frown>  So, abandon the account and start over again!  I
wonder if they have a GC process that periodically scans the accounts
to close out those that haven\'t been finalized in N days...?]

Of course, this is true of many things -- failing to consult the
stakeholders about their needs and IMPOSING your own notion of a
solution.
 
