Don Y
On 7/16/2023 5:12 AM, Martin Brown wrote:
On 15/07/2023 20:56, Don Y wrote:
I'm trying to come to a rational/consistent opinion wrt AI
and its various perceived "threats".
The most insidious one is that what is best for the AI is not necessarily best
for humanity.
But *who* sets those criteria? "Humanity" already engages in
behaviors that are "not necessarily best for humanity"; how
would this be any different?
We already have chips designed by AI to do AI and that will
likely continue into the future. The tricky bit is that they can't tell you why
they made a particular decision (at least not yet), so they are very much
black-box entities that seem smart.
If you go with a neural net doing the pattern recognition *and* making the
decision, then you are largely working with BFM. I doubt you will ever
be able to explain -- in common-sense terms -- these decisions as
they are effectively simultaneous equations.
I take a hybrid approach in my uses; I let the NNet look for patterns
and then have it modify a Production System that will actually make the
decisions (which will then be observed by the NNet, which will then
tweak the productions, which will then...). So, I can limit the types of
things that I let the NNet consider as "significant" (input neurons)
AND force it to alter the system's behavior in very limited ways.
"No, you have no reason to consult the phase of the moon when making
this decision..."
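Roughly, the split looks like this (a toy sketch; the class, the rule
names and the adjustment "knob" are purely illustrative, not my actual
code):

    # Toy sketch of the NNet/Production System split described above.
    # The rules (IF condition THEN action) make the decision; the net is
    # only allowed to nudge their weights, within bounds.

    class ProductionSystem:
        def __init__(self, rules):
            # rules: list of (name, condition_fn, action, weight)
            self.rules = list(rules)

        def decide(self, observation):
            # Fire the highest-weighted rule whose condition matches.
            candidates = [r for r in self.rules if r[1](observation)]
            if not candidates:
                return None
            name, _, action, _ = max(candidates, key=lambda r: r[3])
            return action

        def adjust(self, name, delta, lo=0.0, hi=1.0):
            # The only "knob" the net may turn: bounded weight tweaks.
            self.rules = [
                (n, c, a, min(hi, max(lo, w + delta)) if n == name else w)
                for (n, c, a, w) in self.rules
            ]

    rules = [
        ("too_hot",   lambda obs: obs["temp"] > 80, "throttle_back", 0.9),
        ("all_clear", lambda obs: True,             "run_normally",  0.1),
    ]
    ps = ProductionSystem(rules)
    print(ps.decide({"temp": 95}))   # throttle_back

    # Elsewhere, the pattern recognizer watches outcomes and may call
    # ps.adjust("too_hot", +0.05) -- but it cannot invent new inputs
    # ("phase of the moon") or new actions on its own.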
I can understand how a person who can be *replaced* by an AI
would fear for their livelihood. But, that (to me) isn't a
blanket reason for banning/restricting AIs. (We didn't
ban *calculators* out of fear they would "make redundant"
folks who spent their days totaling columns of figures!
Or backhoes out of fear they would make ditch diggers
redundant.)
There is always some backlash against automation of what used to be highly
skilled work. Luddites spring to mind here.
Yet, those same folks likely have no problem BENEFITING from
"labor savings" (redundancies) in the products that they
purchase/consume...
The uproar in the "artistic" world implying that they are
outright *stealing* their existing works seems a stretch,
as well. If I wrote a story that sounded a hellofalot
like one of your stories -- or painted a picture that
resembled one of yours -- would that be "wrong"? (e.g.,
Depends a bit on whether you try to pass it off as an original like some
forgers do.
But one can use NFTs (and a registry) to protect original works.
Even the works of AIs! Provenance then becomes a digitizable thing.
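Something as lightweight as a hash-plus-timestamp entry in a registry
would do; a sketch (the field names and functions are placeholders,
not any particular NFT standard):

    # Sketch of a provenance registry: record a digest of the work, the
    # claimed author and the registration time; the earliest matching
    # digest wins any later dispute. Names here are placeholders.
    import hashlib
    import time

    def register(work_bytes, author, registry):
        entry = {
            "digest": hashlib.sha256(work_bytes).hexdigest(),
            "author": author,
            "registered_at": time.time(),
        }
        registry.append(entry)
        return entry

    def provenance(work_bytes, registry):
        digest = hashlib.sha256(work_bytes).hexdigest()
        matches = [e for e in registry if e["digest"] == digest]
        return min(matches, key=lambda e: e["registered_at"]) if matches else None

    registry = []
    register(b"my original story...", "Don Y", registry)
    print(provenance(b"my original story...", registry)["author"])   # Don Y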
I think the area where it is most dangerous is digitising
extras in a day spent in the studio and replacing their entire acting
career with a CGI avatar in the actual movie. The latest Indiana Jones
movie shows quite a bit of this CGI work in the last part.
Crowd scenes are costly and probably the easiest to synthesize.
Even desktop tools can perform a "passable" rendering of a
"generic crowd". They fall down when the creator gets lazy
about introducing variation into the "actors" ("Gee, this guy
over here is making the same motions as this other guy over there...
the only differences are the colors of their shirts!")
OTOH at least they get a day's work out of it. The AIs will be smart enough
shortly to produce plausible-looking individuals from a few parameters based on
how you say you want them to look!
Yes, as above. Perhaps easier to teach an AI how to ensure variation
in the parameters used to create the "actors" than it would be to hope
a human could be systematically "random".
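E.g., draw each "actor" from per-parameter distributions instead of
copy-pasting one template (the parameters here are made up for
illustration):

    # Sketch: sample every crowd "actor" independently so no two extras
    # move, or dress, identically. Parameter names are invented.
    import random

    def make_actor(rng):
        return {
            "height_m":    rng.gauss(1.72, 0.09),
            "shirt_hue":   rng.uniform(0, 360),
            "gait_period": rng.gauss(1.0, 0.12),   # seconds per stride
            "gait_phase":  rng.uniform(0, 1.0),    # so strides don't sync up
            "idle_motion": rng.choice(["shift", "look_around", "check_phone"]),
        }

    rng = random.Random(42)
    crowd = [make_actor(rng) for _ in range(500)]

    # No two of these 500 extras share an identical parameter set.
    assert len({tuple(sorted(a.items())) for a in crowd}) == 500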
A-listers are probably safe for now (although the actress who played Rachael in
the first movie was motion-captured and digitally de-aged so she looks
the same in both films). Harrison Ford is much older.
OTOH, the A-listers would be in the most demand for a firm that couldn't
afford the genuine article.
One of the problems with "entertainment" is that it is so transitory.
A "bad actor" (unfortunate choice of words) can get in, make his
money and be *gone* before the legal system can catch up to him.
The Abbatars Show in London is another example of what cutting-edge video
processing technology can do. I'm told by folks who have been to see it
that it is very convincing as a real performance.
It is all the bit-player actors who are in danger. If AI becomes prevalent,
they each get one day's paid work and then their appearance and voice print
become the property of the studio.
Likely one of the issues in the current "labor actions", here.
Treat it as it has historically been treated: where's my royalty/residual?
[I wonder if Sheb Wooley is cursing NOT getting his due royalties?]
Likewise for some of the more formulaic movies and soaps - you could dispense
with the script writers once the AI is trained up on all the past programmes.
Generative AI is somewhat unnerving for creative types.
Exactly. But, this just leverages the fact that "every story has
already been told". New ones are just rehashes and blends of old ones.
> It used to be what we thought made us different to mere machines...
*Original* thought makes the difference. As suggested above,
a lot of what folks want to THINK of as original is just a
rehash/remix of old work.
This is particularly true in engineering and art (artists are
actually "encouraged" to steal others' ideas).
imagine the number of variants of "A Sunday Afternoon..."
you could come up with that would be *different* works
yet strongly suggestive of that original -- should
those "expressions" be banned because they weren't
created by the original artist?
How could a talking head justify his claim to "value" wrt
an animated CGI figure making the same news presentation?
News readers' days are numbered and so are lawyers', since an AI backed by the
world's largest online databases will beat them every time with instant recall
of the appropriate case law. GPT falls flat in this respect as it creates bogus
references to non-existent cases if backed into a corner, as some hapless, lazy
US lawyers found out the hard way:
Yes, but you can build a deterministic automaton to verify all
such references to "vet" any such claims. In that sense, such
an AI is more vulnerable because it has to back up its claims
(in verifiable ways).
An AI based on an LLM just has to make up a story that sounds
entertaining.
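A sketch of what such a vetter could look like (the citation pattern
and the "database" are stand-ins for whatever authoritative source you
trust):

    # Deterministic citation vetter: every citation the model emits must
    # resolve in a trusted database, or it gets flagged as bogus.
    import re

    CITATION = re.compile(r"\b\d+\s+F\.(?:2d|3d|4th)\s+\d+\b")  # e.g. "925 F.3d 1291"

    def vet(text, database):
        """Return (verified, bogus) citation lists."""
        verified, bogus = [], []
        for cite in CITATION.findall(text):
            (verified if cite in database else bogus).append(cite)
        return verified, bogus

    known_cases = {"925 F.3d 1291"}
    brief = "As held in 925 F.3d 1291 and 999 F.2d 123, ..."
    print(vet(brief, known_cases))
    # (['925 F.3d 1291'], ['999 F.2d 123'])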
https://www.theguardian.com/technology/2023/jun/23/two-us-lawyers-fined-submitting-fake-court-citations-chatgpt
It is one way that ChatGPT abuse for student essays can be detected...
I rely heavily on tools that are increasingly AI-driven
to verify the integrity of my hardware and software designs;
should they be banned/discouraged because they deprive
someone (me!?) of additional billable labor hours?
It is interesting that, despite the huge rise in the ability of AI to leverage
the grunt work, it is only really in the last year that they have come of age in
the ability to mimic other styles convincingly.
They were pretty much pastiches of the style they tried to mimic before (and
still are to some extent) but they are getting better at it.
Inject a randomizing element to mimic evolutionary changes.
Let the changes that seem to be successful persist...
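I.e., a plain mutate-score-keep loop; the score() below is just a
placeholder for whatever "successful" means in context:

    # Sketch of "inject randomness, keep the changes that work".
    import random

    def evolve(population, score, generations=100, mut_rate=0.1, rng=None):
        rng = rng or random.Random()
        for _ in range(generations):
            # Random tweaks mimic evolutionary variation...
            mutants = [[g + rng.gauss(0, mut_rate) for g in indiv]
                       for indiv in population]
            # ...and only the variants that score well persist.
            pool = sorted(population + mutants, key=score, reverse=True)
            population = pool[:len(population)]
        return population

    # Toy fitness: prefer genomes whose values sum close to 10.
    score = lambda genome: -abs(sum(genome) - 10)
    rng = random.Random(1)
    seed = [[rng.uniform(0, 1) for _ in range(5)] for _ in range(20)]
    best = evolve(seed, score, rng=rng)[0]
    print(round(sum(best), 2))   # close to 10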
If an AI improved your medical care, would you campaign to ban
them on the grounds that they displace doctors and other
medical practitioners? Or, improved the fuel efficiency of
a vehicle? Or...
[I.e., does it all just boil down to "is *my* job threatened?"]
In most cases AI can do most of the grunt work very efficiently and categorise
things into one of the following (a rough sketch of this triage follows the list):
1. Correct diagnosis
2. Probable diagnosis (but needs checking by an expert)
3. No further action required
4. Uncategorisable (needs checking by an expert)
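A rough sketch of that four-bin triage (the confidence scores and
thresholds are invented for illustration; a real screening model would
supply them):

    # Four-bin triage of scan results; only bins 2 and 4 reach an expert.
    from enum import Enum

    class Triage(Enum):
        CORRECT_DIAGNOSIS  = 1   # confident positive
        PROBABLE_DIAGNOSIS = 2   # positive, but needs an expert's check
        NO_FURTHER_ACTION  = 3   # confident negative
        UNCATEGORISABLE    = 4   # model can't tell; needs an expert

    def triage(p_positive, p_negative):
        if p_positive > 0.95:
            return Triage.CORRECT_DIAGNOSIS
        if p_positive > 0.60:
            return Triage.PROBABLE_DIAGNOSIS
        if p_negative > 0.95:
            return Triage.NO_FURTHER_ACTION
        return Triage.UNCATEGORISABLE

    scans = [(0.98, 0.01), (0.70, 0.20), (0.02, 0.97), (0.50, 0.45)]
    for_expert = [s for s in scans
                  if triage(*s) in (Triage.PROBABLE_DIAGNOSIS,
                                    Triage.UNCATEGORISABLE)]
    print(len(for_expert))   # 2 of the 4 need a human look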
And, the AI is consistent. You don't worry about whether the
practitioner "on duty" at the time was proficient, having a bad
day, distracted, etc.
Conceivably, an AI can be used to *judge* human efforts and
weed out the underperformers (in a way that isn't influenced
by money or personal relations).
Since a lot of scans are in category 3 it saves a lot of time for the experts
if they only have to look at the difficult edge cases.
As it learns, the number of cases falling into bins 2 & 4 decreases with time,
but even now the best human pattern matchers (and even quite average ones) can
still outperform computers on noisy image interpretation.
FWIW I use AI for chess puzzles and computer algebra tools to do things that
would have been unthinkable only a few years ago. It doesn't get tired, and if
it takes it a few days to get a result, who cares? It never makes mistakes with
missed expressions and these days can output computer code that is guaranteed
to be correct.
I think the value in engineering will come from those folks
who aren't particularly diligent in their methodology. If
you tend to be sloppy, you'll tend to see lots of "adjustments"
to your work.
The question then remains: will employers use this as a criterion
to determine who to dismiss? Or, to drive wages down, as the
CORRECTED works of the sloppy workers can approach the quality
of the more diligent?
Way back there were bugs in the Fortran output if the number of continuation
cards exceeded 9 (and it did happen with VSOP82).
Bugs tend to be relatively easy to find, given enough time and
exposure.
The tougher problem is identifying behaviors that are either
undesirable, unintended or unexpected.
Our microwave oven lets you type in the cook time. "10"
is obviously 10 seconds; "20" for 20, etc. And, "60"
is a minute! (makes sense). But, "100" is also a minute!
(implied ':')
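In other words, the panel reads the digits as MMSS rather than as a
count of seconds; a guess at the logic (certainly not the actual
firmware):

    # Guess at the panel's parsing: last two digits are seconds, anything
    # before them is minutes. "90" is 90 s but "100" is 1:00 = 60 s.
    def entered_time_to_seconds(digits: str) -> int:
        seconds = int(digits[-2:])
        minutes = int(digits[:-2] or "0")
        return minutes * 60 + seconds

    for d in ("10", "20", "60", "90", "100", "130"):
        print(d, "->", entered_time_to_seconds(d), "s")
    # 10 -> 10 s, 60 -> 60 s, 90 -> 90 s, but 100 -> 60 s, 130 -> 90 s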
This is counterintuitive -- until someone shoves it in your
face!
E.g., I was making pancakes and kept adjusting the time
for the \"first side\" upward. From 90, I increased it to 100
and wondered why things went downhill!
Of course, this is a non-critical application and one
where I can stop and think about what\'s just happened.
But, imagine this behavior is codified into some
other application that is called on in a time of high stress
(\"Hmmm... the retro rockets didn\'t slow us enough at a
90 second burn -- we\'re going to crash! Lets try 100!\")