AI is too good?...

Don Y

I was reading this:

<https://theathletic.com/4791440/2023/08/25/mlb-robot-umpires-future/>

and was wondering about the objection to robots being "too accurate".

[There is a lack of consensus on whether or not this is the case -- though
I'm sure technology could address that issue]

I would think that a batter (and a pitcher) would welcome a
repeatable definition of the strike zone to eliminate errors
and biases introduced by (fallible) human umpires.

Yet, the complaints seem to be whiny -- as if the finality of
the decision is what irks people (hey, if it is WRONG, then
you have documented evidence to use in making your case as to
its applicability if not its current rulings!)

I'm trying to draw parallels to our banning photo-traffic-enforcement,
here. Almost universally, people are in favor of automated detection
of FLAGRANT violations (which are common enough) -- e.g., someone
crossing the stop line (not even having entered the intersection
proper!) ON a red light.

The gripes seemed to be more along the lines of enforcing the
technicalities of the law. E.g., if you've crept into the
intersection to make a turn (waiting for traffic to clear)
and the light turns red before you are COMPLETELY in the
intersection, you get cited for a violation -- when your only
alternative, at that point, was to stall *in* the intersection,
ALSO a violation (i.e., your real error was crossing the stop
line before you had a clear shot across the intersection).

Imagine an AI checking to see that you only have "15 items or
less" before allowing you to enter the Express Checkout lane.
Would folks gripe if they were flagged at 16 items? 28?
OTOH, would they gripe if the customer in front of them
was allowed through with those same 28 items??

What causes the whininess? Why do folks think THEY deserve
"a break" yet others are abusing the system?
 
Imagine an AI checking to see that you only have "15 items or
less" before allowing you to enter the Express Checkout lane.
Would folks gripe if they were flagged at 16 items? 28?
OTOH, would they gripe if the customer in front of them
was allowed through with those same 28 items??

I think there should be a $1 fine at the register for each item
exceeding 15 items. Fine if they want to pay for convenience.
No AI needed.
 
On 8/26/2023 8:31 AM, Eddy Lee wrote:
Imagine an AI checking to see that you only have "15 items or
less" before allowing you to enter the Express Checkout lane.
Would folks gripe if they were flagged at 16 items? 28?
OTOH, would they gripe if the customer in front of them
was allowed through with those same 28 items??

I think there should be a $1 fine at the register for each item
exceeding 15 items. Fine if they want to pay for convenience.
No AI needed.

There have been some examples of how fines actually *encourage*
"bad behavior" (in essence, folks feel like they are paying and
are thus absolved of the moral consequences of their bad behavior).

[A short introduction as I don't have time to chase down the
actual reference:
<https://medium.com/fact-of-the-day-1/the-israeli-childcare-experiment-11d05ae83650>
Equally interesting papers here:
<https://rady.ucsd.edu/faculty-research/faculty/uri-gneezy.html>]
 
On Saturday, August 26, 2023 at 2:03:46 PM UTC-5, Don Y wrote:
On 8/26/2023 8:31 AM, Eddy Lee wrote:
Imagine an AI checking to see that you only have "15 items or
less" before allowing you to enter the Express Checkout lane.
Would folks gripe if they were flagged at 16 items? 28?
OTOH, would they gripe if the customer in front of them
was allowed through with those same 28 items??

I think there should be a $1 fine at the register for each item
exceeding 15 items. Fine if they want to pay for convenience.
No AI needed.
There have been some examples of how fines actually *encourage*
"bad behavior" (in essence, folks feel like they are paying and
are thus absolved of the moral consequences of their bad behavior).

[A short introduction as I don't have time to chase down the
actual reference:
<https://medium.com/fact-of-the-day-1/the-israeli-childcare-experiment-11d05ae83650>
Equally interesting papers here:
<https://rady.ucsd.edu/faculty-research/faculty/uri-gneezy.html>]

I read that two AI computers were allowed to talk to each other; before
long they had their own language and they could no longer be understood.
Now I question whether that was a joke or not?
Mikek
 
On August 26, Don Y wrote:
There have been some examples of how fines actually *encourage*
\"bad behavior\" (in essence, folks feel like they are paying and
are thus absolved of the moral consequences of their bad behavior).

You mean like, Al Gore flying around in a private jet, blowing
hot air about the need to cut down on CO2 emissions, and
how we ALL have to make sacrifices? And then brags righteously
about the righteous payoffs he makes to righteous green
interest groups, which cancels his emissions, making him
righteously 'carbon neutral'? Is that an example?

The hypocrisy of the political class is never ending, also
the revulsion it engenders -

--
Rich
 
On Saturday, August 26, 2023 at 2:33:59 PM UTC-7, RichD wrote:
On August 26, Don Y wrote:
There have been some examples of how fines actually *encourage*
\"bad behavior\" (in essence, folks feel like they are paying and
are thus absolved of the moral consequences of their bad behavior).
You mean like, Al Gore flying around in a private jet, blowing
hot air about the need to cut down on CO2 emissions, and
how we ALL have to make sacrifices? And then brags righteously
about the righteous payoffs he makes to righteous green
interest groups, which cancels his emissions, making him
righteously 'carbon neutral'? Is that an example?

The hypocrisy of the political class is never ending, also
the revulsion it engenders -

Nah; 'private jet' just means general aviation rather than scheduled
flights; it keeps folk from being schedule-bound by the big-business
airlines' mass production of air travel (which is also rather convenient).
The 'private jet' is more flexible than the 'scheduled carrier', and that's
important if (for example) a surgical emergency needs delivery of an
odd item pronto. The optics are bad, I suppose, but it's not evil.

Air Force One is a 'private jet' competitor, but doesn't take paying passengers.
It's a ready item in case of emergency.
 
On 8/26/2023 12:17 PM, Lamont Cranston wrote:
On Saturday, August 26, 2023 at 2:03:46 PM UTC-5, Don Y wrote:

There have been some examples of how fines actually *encourage*
\"bad behavior\" (in essence, folks feel like they are paying and
are thus absolved of the moral consequences of their bad behavior).

[A short introduction as I don\'t have time to chase down the
actual reference:
https://medium.com/fact-of-the-day-1/the-israeli-childcare-experiment-11d05ae83650
Equally interesting papers here:
https://rady.ucsd.edu/faculty-research/faculty/uri-gneezy.html>]

I read to AI computers were allowed to talk to each other, before
long they had there own language and they weren\'t able to be understood.
Now I question, whether that was a joke or not?

The problem with (most) AIs is you (as a human) can't ask them
why they act the way they do or why they made a particular decision.
You can design networks that limit what they "look at". But,
you still can't "ask" for an explanation.

So, you can never be sure they are acting "rationally"
(whatever that means).

E.g., I can push all the text of USENET posts into a
generic network along with information as to which I
opened, which I replied to and which I ignored.
And, it can come up with a good predictive model for
how I might react to a new post.

But, the criterion that it might have decided upon
as most correlating with my choice to view a post
was: "Number of R's in the post". *And*, it
won't tell me that this is the criterion that it
is relying on, most!
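
To make that concrete, here's a minimal sketch of that scenario
(toy data -- the posts and labels are invented, and it assumes
scikit-learn is available).  The fitted model happily predicts
which posts get opened, but whatever "criterion" it latched onto
only surfaces if you go digging through the weights yourself;
the model never volunteers it:

# Toy illustration (invented data): fit a model to predict which
# posts get opened, then dig its "criterion" out of the weights
# by hand.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

posts = [
    "Robot umpires and the automated strike zone",
    "Red light cameras and photo enforcement",
    "Rrrr! Arrrgh! Grrrr! Brrr! (mostly R's)",
    "Express checkout lane limits",
]
opened = [1, 1, 0, 0]          # 1 = opened/replied, 0 = ignored

vec = CountVectorizer(analyzer="char")   # per-character counts
X = vec.fit_transform(posts)
clf = LogisticRegression().fit(X, opened)

# The model predicts just fine, but it never *tells* us what it
# keyed on; we have to interrogate the weights ourselves:
features = vec.get_feature_names_out()
weights = clf.coef_[0]
for i in np.argsort(np.abs(weights))[::-1][:5]:
    print(f"{features[i]!r}: weight {weights[i]:+.3f}")

And that inspection only works because a linear model *has*
weights you can read; for a deep network even that crude window
largely disappears.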

WRT your comment above: are the hypothetical AIs
actually interacting in any meaningful way or just
learning how to get a reaction from the other?

Like a toddler learning that *bad* behavior will get
the parents' attention far more effectively than
*good* behavior...
 
