PSU Design...

On Tue, 9 May 2023 07:03:42 -0700 (PDT), Anthony William Sloman
<bill.sloman@ieee.org> wrote:

On Tuesday, May 9, 2023 at 6:47:53 PM UTC+10, Cursitor Doom wrote:
On Mon, 8 May 2023 22:27:01 -0700 (PDT), Anthony William Sloman
<bill....@ieee.org> wrote:
On Tuesday, May 9, 2023 at 3:44:04 AM UTC+10, Cursitor Doom wrote:
On Mon, 8 May 2023 08:20:04 -0700 (PDT), Anthony William Sloman
<bill....@ieee.org> wrote:
On Monday, May 8, 2023 at 11:03:24 PM UTC+10, Cursitor Doom wrote:
On Mon, 08 May 2023 05:35:04 -0700, John Larkin <jla...@highlandSNIPMEtechnology.com> wrote:
On Mon, 08 May 2023 13:04:10 +0100, Cursitor Doom <c...@notformail.com> wrote:
On Mon, 08 May 2023 04:43:24 -0700, John Larkin <jla...@highlandSNIPMEtechnology.com> wrote:
On Mon, 08 May 2023 00:23:58 +0100, Cursitor Doom <c...@notformail.com> wrote:
On Sun, 7 May 2023 22:38:18 +0100, piglet <erichp...@hotmail.com> wrote:
On 07/05/2023 13:39, Cursitor Doom wrote:

snip

What fascinates me is how humans, hunter-gatherers, evolved to be able
to do calculus and design electronics.

Simply by standing on the shoulders of giants. We're talking mostly small discoveries and inventions over huge periods of time, with the odd quantum leap thrown in every couple of hundred years. Now we stand on the brink of AI and that should be the new quantum leap.

There aren't any quantum leaps. There are lots of small discoveries and inventions - more in recent years as we have developed social tools like writing and peer-reviewed publication to let us pick out the more useful ones and pass them on to other people who do find them useful.

No quantum leaps?? What about Pythagoras?

The "Pythagoras" theorem was known to the Babylonians.

Galileo?

Galileo exploited - but didn't invent - the telescope. Scarcely a quantum leap.

Newton/Leibniz?

Leibniz and Newton both "invented" calculus at much the same time, which does suggest that some earlier work had given them both much the same idea. The notation we use is the one Leibniz talked about to other people. Newton called his version "fluxions" and was much slower to tell other people about it.

Einstein?

Was a very clever guy, but his insights were based on earlier work. He may have invented Special Relativity, but one of its features is called Lorentz contraction;

https://en.wikipedia.org/wiki/Length_contraction

Gutenberg?

Gutenberg introduced the printing press in Europe, long after it had been popular in China. The Chinese writing system uses a lot more characters than European alphabets, so movable type was rather simpler and more useful in Europe. The process by which European writing systems became alphabetic doesn't seem to have involved any quantum leaps.

Marconi?

Marconi wasn't an inventor, but rather an exploiter of existing technology.

Nobel?

Nobel made a lot of money out of the simple idea of stabilising nitroglycerin by soaking it up into kieselguhr:

https://en.wikipedia.org/wiki/Diatomaceous_earth

No kind of quantum leap.

Oppenheimer? Teller? Berners-Lee?

All three took existing technologies a little further.

We'll know that artificial intelligence has got here when Google puts in a filter that stops a a and Cursitor Doom from posting irrelevant nonsense here. I'm not holding my breath.

Yes, that would be a wonderfully useful tool for Socialists who can't take their crackpot theories being deconstructed by people they'd much rather execute if the possibility existed.

I'm sure you like to think that, but then again you think that you parading your fatuous delusions here is some kind of exercise in "deconstruction".

Executing you would solve the problem, but it would be the kind of overkill that people like you do go in for. Sensible people would prefer to see your talents exploited where they might be useful - if you actually have any kind of useful talent. You do seem to be literate, but you don't seem to be able to understand what you read - Alfred Nobel made some kind of quantum leap in technology?

In Nobel's case, it wasn't the invention of dynamite that was the quantum leap. The leap was what it enabled mankind to achieve in undertaking the kind of vast construction projects that before had been way too labour-intensive to be economically viable.

Don't be silly. The progressive mechanisation of construction was one of those tedious incremental programs, and Nobel was one of the many people who kept it moving on.

And Teller/Oppenheimer? The progression from fission to fusion in the space of a few short years wasn't a quantum leap in your view??

Clearly not. Neither Teller nor Oppenheimer "invented" fission.

https://en.wikipedia.org/wiki/Lise_Meitner

probably deserves the credit for that. The fact that Otto Hahn got the Nobel prize for it on his own is one of the more disgraceful episodes in the history of the Nobel prize.
And it was one more bit of incremental development, though nuclear bombs do represent a fairly dramatic increment in our capacity to make a mess of the planet.

Teller merely realised that a fission bomb could get a chunk of tritium, deuterium or lithium deuteride hot and dense enough to undergo nuclear fusion. Once you had the fission bomb it was a fairly obvious way of getting an even bigger blast - and even more neutrons - which the spectacularly dirty fission-fusion-fission bomb exploited.

Jeezus....
As usual, you fail to grasp the huge consequences of all these advances, preferring instead to focus on inflating your own ego (it's big enough already, Bill!)

The huge potential consequences of these developments - the fact that we haven't used a fission bomb in anger since 1945 makes it fairly obvious that they aren't actually "advances" - means that it's a technology that we should have had enough sense not to spend time and money on developing.

You don't seem to have any sense at all. Quite how pointing this out is supposed to inflate my ego escapes me. Wasting time drawing your attention to the fact that you don't know what you are talking about is a depressing reminder that I don't have enough constructive activities to fill my time, which in fact tends more to deflate my ego.

If I had more self-respect I'd ignore clowns like you, but I'm reduced to shooting fish in a barrel.

I don't understand why you don't have enough constructive activities to fill your time. Wouldn't building stuff be a more fulfilling use of your time than lowering yourself to try to reason with opponents who, like myself as you say, are so far beneath you as to not represent a worthy challenge? You're not going to keep Alzheimer's at bay by spending all day tapping away on your keyboard, Bill. The brain needs to be exercised in multiple ways. Concentrating solely on this futile exercise will only atrophy the parts of your brain that aren't getting used. You don't seem to be aware of that and the grave consequences that can arise from it.
 
On Wednesday, May 10, 2023 at 1:41:56 AM UTC+10, Cursitor Doom wrote:
On Tue, 9 May 2023 07:13:19 -0700 (PDT), Anthony William Sloman <bill.....@ieee.org> wrote:
On Tuesday, May 9, 2023 at 10:38:07 PM UTC+10, Rocky wrote:
On Monday, May 8, 2023 at 5:17:08 PM UTC+2, John Larkin wrote:
On Mon, 08 May 2023 14:01:43 +0100, Cursitor Doom <c...@notformail.com> wrote:

<snip>

As you'd expect, ChatGPT misses the point that 7812 parts don't produce an exactly 12V output, so the chip that happens to have the highest output voltage is going to try and source most of the current. The design problem is getting the parts to share the load - and they weren't designed to let you do that easily.

Presumably John Larkin had that in mind when he posed the problem - I'm not impressed by his design skills, but it is a pretty obvious point.
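[Editor's note: the load-hogging point above can be put in numbers with a small model. Each 7812 is approximated as a Thevenin voltage source at the edge of its datasheet tolerance band behind a ballast resistor, feeding a shared load. All component values below are illustrative assumptions, not figures from the thread.]

```python
# Two paralleled "7812" regulators modelled as linear sources Vi behind a
# ballast resistance Rb, driving a common load Rload. Solving the single
# node equation sum((Vi - Vn)/Rb) = Vn/Rload gives the output node voltage.

def node_voltage(v_outs, r_ballast, r_load):
    """Output node voltage Vn for sources v_outs behind equal ballasts."""
    g_total = len(v_outs) / r_ballast + 1.0 / r_load
    return sum(v_outs) / r_ballast / g_total

def branch_currents(v_outs, r_ballast, r_load):
    """Current (amps) each regulator sources into the common node."""
    vn = node_voltage(v_outs, r_ballast, r_load)
    return [(v - vn) / r_ballast for v in v_outs]

# Two parts at the extremes of a +/-2%-ish tolerance band around 12 V:
v_outs = [12.1, 11.9]

# With only wiring resistance between them (10 milliohms assumed), the
# linear model even drives the low part's current negative - a real 7812
# cannot sink current, so in practice it just shuts off while the high
# part tries to carry (and current-limits on) the whole load:
hog = branch_currents(v_outs, r_ballast=0.01, r_load=6.0)

# With 0.5-ohm ballast resistors the parts share usefully, at the cost
# of the output sagging well below 12 V under a ~2 A load:
shared = branch_currents(v_outs, r_ballast=0.5, r_load=6.0)
```

The trade-off the model shows is why paralleling three-terminal regulators is awkward: small ballasts don't force sharing, and big ones throw away the regulation the chip was bought for.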

Yeah, so it's using an ideal model. Bit of a problem there. But they'll perfect it in time, I've no doubt.

About the same sort of time it will take to work out how to get clowns like you to realise how little you know, and how little you take in when exposed to information that you don't realise that you need to process.

There's a fundamental problem in dealing with more and less sophisticated users. Sometimes it is pointless to provide detailed information because the recipient doesn't know enough to realise that it means anything, and in other contexts providing too much information irritates the intended recipient because they feel that they are being patronised. One of my medical friends talked about three levels of information - for dumb patients, for intelligent patients and for other doctors. What other doctors got was briefer but a lot more informative.

AI will have to know a lot about the individuals it is informing before it will be able to get the level of discourse right.

--
Bill Sloman, Sydney
 
On Wednesday, May 10, 2023 at 1:51:57 AM UTC+10, Cursitor Doom wrote:
On Tue, 9 May 2023 07:03:42 -0700 (PDT), Anthony William Sloman
<bill....@ieee.org> wrote:
On Tuesday, May 9, 2023 at 6:47:53 PM UTC+10, Cursitor Doom wrote:
On Mon, 8 May 2023 22:27:01 -0700 (PDT), Anthony William Sloman
<bill....@ieee.org> wrote:
On Tuesday, May 9, 2023 at 3:44:04 AM UTC+10, Cursitor Doom wrote:
On Mon, 8 May 2023 08:20:04 -0700 (PDT), Anthony William Sloman
<bill....@ieee.org> wrote:
On Monday, May 8, 2023 at 11:03:24 PM UTC+10, Cursitor Doom wrote:
On Mon, 08 May 2023 05:35:04 -0700, John Larkin <jla...@highlandSNIPMEtechnology.com> wrote:
On Mon, 08 May 2023 13:04:10 +0100, Cursitor Doom <c...@notformail.com> wrote:
On Mon, 08 May 2023 04:43:24 -0700, John Larkin <jla...@highlandSNIPMEtechnology.com> wrote:
On Mon, 08 May 2023 00:23:58 +0100, Cursitor Doom <c...@notformail.com> wrote:
On Sun, 7 May 2023 22:38:18 +0100, piglet <erichp...@hotmail.com> wrote:
On 07/05/2023 13:39, Cursitor Doom wrote:

<snip>

As usual, you fail to grasp the huge consequences of all these advances, preferring instead to focus on inflating your own ego (it's big enough already, Bill!)

The huge potential consequences of these developments - the fact that we haven't used a fission bomb in anger since 1945 makes it fairly obvious that they aren't actually "advances" - means that it's a technology that we should have had enough sense not to spend time and money on developing.

You don't seem to have any sense at all. Quite how pointing this out is supposed to inflate my ego escapes me. Wasting time drawing your attention to the fact that you don't know what you are talking about is a depressing reminder that I don't have enough constructive activities to fill my time, which in fact tends more to deflate my ego.

If I had more self-respect I'd ignore clowns like you, but I'm reduced to shooting fish in a barrel.

I don't understand why you don't have enough constructive activities to fill your time. Wouldn't building stuff be a more fulfilling use of your time than lowering yourself to try to reason with opponents who, like myself as you say, are so far beneath you as to not represent a worthy challenge? You're not going to keep Alzheimer's at bay by spending all day tapping away on your keyboard, Bill. The brain needs to be exercised in multiple ways.

That's why I'm also the treasurer of the NSW branch of the IEEE, but it isn't a particularly time-consuming job.

> Concentrating solely on this futile exercise will only atrophy the parts of your brain that aren't getting used. You don't seem to be aware of that and the grave consequences that can arise from it.

Don't be silly. I've been hanging out with psychologists for most of my life. There are a lot of bright females in the profession and I ended up marrying one of them.

https://en.wikipedia.org/wiki/Brian_Butterworth

isn't female, but I got to know him through my wife - he's the guy who pointed out that Ronald Reagan was showing early signs of dementia when he was running for his second term. I probably know a lot more about Alzheimer's than you do - the fact that you are a half-wit and I'm not gives me an even bigger edge.

--
Bill Sloman, Sydney
 
On Tue, 9 May 2023 09:05:46 -0700 (PDT), Anthony William Sloman
<bill.sloman@ieee.org> wrote:

On Wednesday, May 10, 2023 at 1:41:56 AM UTC+10, Cursitor Doom wrote:
On Tue, 9 May 2023 07:13:19 -0700 (PDT), Anthony William Sloman <bill....@ieee.org> wrote:
On Tuesday, May 9, 2023 at 10:38:07 PM UTC+10, Rocky wrote:
On Monday, May 8, 2023 at 5:17:08 PM UTC+2, John Larkin wrote:
On Mon, 08 May 2023 14:01:43 +0100, Cursitor Doom <c...@notformail.com> wrote:

snip

As you'd expect, ChatGPT misses the point that 7812 parts don't produce an exactly 12V output, so the chip that happens to have the highest output voltage is going to try and source most of the current. The design problem is getting the parts to share the load - and they weren't designed to let you do that easily.

Presumably John Larkin had that in mind when he posed the problem - I'm not impressed by his design skills, but it is a pretty obvious point.

Yeah, so it's using an ideal model. Bit of a problem there. But they'll perfect it in time, I've no doubt.

About the same sort of time it will take to work out how to get clowns like you to realise how little you know, and how little you take in when exposed to information that you don't realise that you need to process.

There's a fundamental problem in dealing with more and less sophisticated users. Sometimes it is pointless to provide detailed information because the recipient doesn't know enough to realise that it means anything, and in other contexts providing too much information irritates the intended recipient because they feel that they are being patronised. One of my medical friends talked about three levels of information - for dumb patients, for intelligent patients and for other doctors. What other doctors got was briefer but a lot more informative.

AI will have to know a lot about the individuals it is informing before it will be able to get the level of discourse right.

Right, so if I use AI in future I must be sure to tell it I'm a clinical imbecile, to tailor the info it gives to my intelligence level. Most people tell me how brilliant I am, but I always suspected they were just empty blandishments and now, thanks to you, I know that to be the case. I really appreciate your candour, old friend.
 
On Tue, 9 May 2023 09:20:08 -0700 (PDT), Anthony William Sloman
<bill.sloman@ieee.org> wrote:

On Wednesday, May 10, 2023 at 1:51:57 AM UTC+10, Cursitor Doom wrote:
On Tue, 9 May 2023 07:03:42 -0700 (PDT), Anthony William Sloman
<bill....@ieee.org> wrote:
On Tuesday, May 9, 2023 at 6:47:53 PM UTC+10, Cursitor Doom wrote:
On Mon, 8 May 2023 22:27:01 -0700 (PDT), Anthony William Sloman
<bill....@ieee.org> wrote:
On Tuesday, May 9, 2023 at 3:44:04 AM UTC+10, Cursitor Doom wrote:
On Mon, 8 May 2023 08:20:04 -0700 (PDT), Anthony William Sloman
<bill....@ieee.org> wrote:
On Monday, May 8, 2023 at 11:03:24 PM UTC+10, Cursitor Doom wrote:
On Mon, 08 May 2023 05:35:04 -0700, John Larkin <jla...@highlandSNIPMEtechnology.com> wrote:
On Mon, 08 May 2023 13:04:10 +0100, Cursitor Doom <c...@notformail.com> wrote:
On Mon, 08 May 2023 04:43:24 -0700, John Larkin <jla...@highlandSNIPMEtechnology.com> wrote:
On Mon, 08 May 2023 00:23:58 +0100, Cursitor Doom <c...@notformail.com> wrote:
On Sun, 7 May 2023 22:38:18 +0100, piglet <erichp...@hotmail.com> wrote:
On 07/05/2023 13:39, Cursitor Doom wrote:

snip

As usual, you fail to grasp the huge consequences of all these advances, preferring instead to focus on inflating your own ego (it's big enough already, Bill!)

The huge potential consequences of these developments - the fact that we haven't used a fission bomb in anger since 1945 makes it fairly obvious that they aren't actually "advances" - means that it's a technology that we should have had enough sense not to spend time and money on developing.

You don't seem to have any sense at all. Quite how pointing this out is supposed to inflate my ego escapes me. Wasting time drawing your attention to the fact that you don't know what you are talking about is a depressing reminder that I don't have enough constructive activities to fill my time, which in fact tends more to deflate my ego.

If I had more self-respect I'd ignore clowns like you, but I'm reduced to shooting fish in a barrel.

I don't understand why you don't have enough constructive activities to fill your time. Wouldn't building stuff be a more fulfilling use of your time than lowering yourself to try to reason with opponents who, like myself as you say, are so far beneath you as to not represent a worthy challenge? You're not going to keep Alzheimer's at bay by spending all day tapping away on your keyboard, Bill. The brain needs to be exercised in multiple ways.

That's why I'm also the treasurer of the NSW branch of the IEEE, but it isn't a particularly time-consuming job.

Concentrating solely on this futile exercise will only atrophy the parts of your brain that aren't getting used. You don't seem to be aware of that and the grave consequences that can arise from it.

Don't be silly. I've been hanging out with psychologists for most of my life. There are a lot of bright females in the profession and I ended up marrying one of them.

https://en.wikipedia.org/wiki/Brian_Butterworth

isn't female, but I got to know him through my wife - he's the guy who pointed out that Ronald Reagan was showing early signs of dementia when he was running for his second term. I probably know a lot more about Alzheimer's than you do - the fact that you are a half-wit and I'm not gives me an even bigger edge.

Thanks, Bill; much appreciated.
 
On Wednesday, May 10, 2023 at 4:32:16 AM UTC+10, Cursitor Doom wrote:
On Tue, 9 May 2023 09:05:46 -0700 (PDT), Anthony William Sloman
<bill....@ieee.org> wrote:
On Wednesday, May 10, 2023 at 1:41:56 AM UTC+10, Cursitor Doom wrote:
On Tue, 9 May 2023 07:13:19 -0700 (PDT), Anthony William Sloman <bill.....@ieee.org> wrote:
On Tuesday, May 9, 2023 at 10:38:07 PM UTC+10, Rocky wrote:
On Monday, May 8, 2023 at 5:17:08 PM UTC+2, John Larkin wrote:
On Mon, 08 May 2023 14:01:43 +0100, Cursitor Doom <c...@notformail.com> wrote:

snip

As you'd expect, ChatGPT misses the point that 7812 parts don't produce an exactly 12V output, so the chip that happens to have the highest output voltage is going to try and source most of the current. The design problem is getting the parts to share the load - and they weren't designed to let you do that easily.

Presumably John Larkin had that in mind when he posed the problem - I'm not impressed by his design skills, but it is a pretty obvious point.

Yeah, so it's using an ideal model. Bit of a problem there. But they'll perfect it in time, I've no doubt.

About the same sort of time it will take to work out how to get clowns like you to realise how little you know, and how little you take in when exposed to information that you don't realise that you need to process.

There's a fundamental problem in dealing with more and less sophisticated users. Sometimes it is pointless to provide detailed information because the recipient doesn't know enough to realise that it means anything, and in other contexts providing too much information irritates the intended recipient because they feel that they are being patronised. One of my medical friends talked about three levels of information - for dumb patients, for intelligent patients and for other doctors. What other doctors got was briefer but a lot more informative.

AI will have to know a lot about the individuals it is informing before it will be able to get the level of discourse right.

Right, so if I use AI in future I must be sure to tell it I'm a clinical imbecile, to tailor the info it gives to my intelligence level.

You aren't a clinical imbecile - you wouldn't be able to type if you were. You are merely hopelessly gullible and spectacularly ill-informed.

How AI would cope with that isn't obvious. It could cite reliable authority figures, but you'd imagine that they were part of some bizarre conspiracy in which you have unwisely chosen to believe. It would probably send around the guys in the white coats to take you some place where you could be de-programmed - not for your own sake but to protect everybody else.

> Most people tell me how brilliant I am, but I always suspected they were just empty blandishments and now, thanks to you, I know that to be the case. I really appreciate your candour, old friend.

Flattery is a great way of manipulating gullible twits. Unduly susceptible people - John Larkin comes to mind - get addicted to it. It's not damaging of itself, but addicts can be persuaded to do stuff that endangers everybody else - like hijacking aeroplanes and flying them into the Twin Towers.

You may claim to realise that you are vulnerable, but the underlying weakness won't go away. You need to move yourself into protected accommodation, selected by somebody else who could be relied on to have your best interests at heart. They might be difficult to find.

--
Bill Sloman, Sydney
 
On Tue, 9 May 2023 20:41:56 -0700 (PDT), Anthony William Sloman
<bill.sloman@ieee.org> wrote:

On Wednesday, May 10, 2023 at 4:32:16 AM UTC+10, Cursitor Doom wrote:
On Tue, 9 May 2023 09:05:46 -0700 (PDT), Anthony William Sloman
<bill....@ieee.org> wrote:
On Wednesday, May 10, 2023 at 1:41:56 AM UTC+10, Cursitor Doom wrote:
On Tue, 9 May 2023 07:13:19 -0700 (PDT), Anthony William Sloman <bill....@ieee.org> wrote:
On Tuesday, May 9, 2023 at 10:38:07 PM UTC+10, Rocky wrote:
On Monday, May 8, 2023 at 5:17:08 PM UTC+2, John Larkin wrote:
On Mon, 08 May 2023 14:01:43 +0100, Cursitor Doom <c...@notformail.com> wrote:

snip

As you'd expect, ChatGPT misses the point that 7812 parts don't produce an exactly 12V output, so the chip that happens to have the highest output voltage is going to try and source most of the current. The design problem is getting the parts to share the load - and they weren't designed to let you do that easily.

Presumably John Larkin had that in mind when he posed the problem - I'm not impressed by his design skills, but it is a pretty obvious point.

Yeah, so it's using an ideal model. Bit of a problem there. But they'll perfect it in time, I've no doubt.

About the same sort of time it will take to work out how to get clowns like you to realise how little you know, and how little you take in when exposed to information that you don't realise that you need to process.

There's a fundamental problem in dealing with more and less sophisticated users. Sometimes it is pointless to provide detailed information because the recipient doesn't know enough to realise that it means anything, and in other contexts providing too much information irritates the intended recipient because they feel that they are being patronised. One of my medical friends talked about three levels of information - for dumb patients, for intelligent patients and for other doctors. What other doctors got was briefer but a lot more informative.

AI will have to know a lot about the individuals it is informing before it will be able to get the level of discourse right.

Right, so if I use AI in future I must be sure to tell it I'm a clinical imbecile, to tailor the info it gives to my intelligence level.

You aren't a clinical imbecile - you wouldn't be able to type if you were. You are merely hopelessly gullible and spectacularly ill-informed.

How AI would cope with that isn't obvious. It could cite reliable authority figures, but you'd imagine that they were part of some bizarre conspiracy in which you have unwisely chosen to believe. It would probably send around the guys in the white coats to take you some place where you could be de-programmed - not for your own sake but to protect everybody else.

Most people tell me how brilliant I am, but I always suspected they were just empty blandishments and now, thanks to you, I know that to be the case. I really appreciate your candour, old friend.

Flattery is a great way of manipulating gullible twits. Unduly susceptible people - John Larkin comes to mind - get addicted to it. It's not damaging of itself, but addicts can be persuaded to do stuff that endangers everybody else - like hijacking aeroplanes and flying them into the Twin Towers.

You may claim to realise that you are vulnerable, but the underlying weakness won't go away. You need to move yourself into protected accommodation, selected by somebody else who could be relied on to have your best interests at heart. They might be difficult to find.

Right, so I need to be put away somewhere without internet access so my crazy ideas can't infect innocent people's minds. Finally it all makes sense! Thanks, Bill.
 
On Wednesday, May 10, 2023 at 6:16:46 PM UTC+10, Cursitor Doom wrote:
On Tue, 9 May 2023 20:41:56 -0700 (PDT), Anthony William Sloman
<bill....@ieee.org> wrote:
On Wednesday, May 10, 2023 at 4:32:16 AM UTC+10, Cursitor Doom wrote:
On Tue, 9 May 2023 09:05:46 -0700 (PDT), Anthony William Sloman
<bill....@ieee.org> wrote:
On Wednesday, May 10, 2023 at 1:41:56 AM UTC+10, Cursitor Doom wrote:
On Tue, 9 May 2023 07:13:19 -0700 (PDT), Anthony William Sloman <bill....@ieee.org> wrote:
On Tuesday, May 9, 2023 at 10:38:07 PM UTC+10, Rocky wrote:
On Monday, May 8, 2023 at 5:17:08 PM UTC+2, John Larkin wrote:
On Mon, 08 May 2023 14:01:43 +0100, Cursitor Doom <c...@notformail.com> wrote:

snip

As you'd expect, ChatGPT misses the point that 7812 parts don't produce an exactly 12V output, so the chip that happens to have the highest output voltage is going to try and source most of the current. The design problem is getting the parts to share the load - and they weren't designed to let you do that easily.

Presumably John Larkin had that in mind when he posed the problem - I'm not impressed by his design skills, but it is a pretty obvious point.

Yeah, so it's using an ideal model. Bit of a problem there. But they'll perfect it in time, I've no doubt.

About the same sort of time it will take to work out how to get clowns like you to realise how little you know, and how little you take in when exposed to information that you don't realise that you need to process.

There's a fundamental problem in dealing with more and less sophisticated users. Sometimes it is pointless to provide detailed information because the recipient doesn't know enough to realise that it means anything, and in other contexts providing too much information irritates the intended recipient because they feel that they are being patronised. One of my medical friends talked about three levels of information - for dumb patients, for intelligent patients and for other doctors. What other doctors got was briefer but a lot more informative.

AI will have to know a lot about the individuals it is informing before it will be able to get the level of discourse right.

Right, so if I use AI in future I must be sure to tell it I'm a clinical imbecile, to tailor the info it gives to my intelligence level.

You aren't a clinical imbecile - you wouldn't be able to type if you were. You are merely hopelessly gullible and spectacularly ill-informed.

How AI would cope with that isn't obvious. It could cite reliable authority figures, but you'd imagine that they were part of some bizarre conspiracy in which you have unwisely chosen to believe. It would probably send around the guys in the white coats to take you some place where you could be de-programmed - not for your own sake but to protect everybody else.

Most people tell me how brilliant I am, but I always suspected they were just empty blandishments and now, thanks to you, I know that to be the case. I really appreciate your candour, old friend.

Flattery is a great way of manipulating gullible twits. Unduly susceptible people - John Larkin comes to mind - get addicted to it. It's not damaging of itself, but addicts can be persuaded to do stuff that endangers everybody else - like hijacking aeroplanes and flying them into the Twin Towers.

You may claim to realise that you are vulnerable, but the underlying weakness won't go away. You need to move yourself into protected accommodation, selected by somebody else who could be relied on to have your best interests at heart. They might be difficult to find.

Right, so I need to be put away somewhere without internet access so my crazy ideas can't infect innocent people's minds. Finally it all makes sense! Thanks, Bill.

Your ideas are much too crazy to infect many other people. The risk is that you might try to act on them. Your habit of telling us about them is an irritating distraction - you need to join some secret conspiracy, so that we don't get to hear about them, but people as silly as you are are a bit thin on the ground, and it must be hard to find enough conspirators that you don't get bored by the limited range of absurdities on offer.

--
Bill Sloman, Sydney
 
