OT. No Repeat Tiles...

On Sunday, April 9, 2023 at 9:00:49 AM UTC-4, Jasen Betts wrote:
On 2023-04-07, Ricky <gnuarm.del...@gmail.com> wrote:
On Friday, April 7, 2023 at 8:07:03 AM UTC-4, Dean Hoffman wrote:
This article is about a geometry issue people have been working on since the 1960s.
https://www.breitbart.com/science/2023/04/06/british-retiree-solved-decades-old-geometry-problem/

It's not an unattractive tile, or pattern. I wonder when they will be in production?
It's kind of boutique, and the shape may be hard to manufacture or
prove less durable due to stress concentration in the interior angles. Also,
getting the pattern right is essential. Some sort of pattern assistant is
probably warranted.

That crummy Moroccan-wannabe pattern would not be woven; textile/wall-covering patterns are printed with an ink-jet, or maybe better a dye-jet, type of printing machine. Since the pattern is emulating a ceramic mosaic, printing on a chintz-type fabric makes more sense, at a heck of a lot lower cost.



--
Jasen.
🇺🇦 Слава Україні
 
On Sun, 9 Apr 2023 09:49:32 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 07/04/2023 21:32, John Larkin wrote:
On Fri, 7 Apr 2023 21:10:26 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 07/04/2023 14:41, Fred Bloggs wrote:
On Friday, April 7, 2023 at 8:07:03 AM UTC-4, Dean Hoffman wrote:
This article is about a geometry issue people have been working on
since the 1960s.
https://www.breitbart.com/science/2023/04/06/british-retiree-solved-decades-old-geometry-problem/


We're going to see quite a lot of the age-old conjectural quandaries
start to crumble once AI becomes more pervasive. It's kinda sad to
see something people of the past spent decades of their lives on be
obliterated in a sub-millisecond flash by an AI machine. Actually not
sad.

You are over optimistic about the timescales but in some domains where a
combination of brute force and deep learnable heuristics can triumph I
suspect several of the famous outstandingly difficult mathematical
challenges may fall to AI within the next decade (perhaps sooner).

Machines don't get bored or make trivial typo mistakes like we do.

I never expected to see a computer that could master Go and yet now
there are several that can bootstrap ab initio from the basic rules and
surpass the best human players in under a month.

Draughts turned out to be misleadingly simple. Chess was a tougher nut
to crack requiring initially dedicated parallel hardware (but now it is
hard to find a serious chess program that can't beat a human GM).

Go seemed to be intractable with so many possible moves and game states.

Automated circuit design has been tried. It will be interesting to see
if A\"I\" can do any.

Most CPUs these days already contain chunks of stuff designed
exclusively by AI. It makes a lot fewer mistakes than a human -
especially on boring repetitive stuff and awkward edge cases.

What does "AI" mean as regards CPU design? I don't think I can go
online to an AI machine and say "design me a CPU."


Intel is working on its own neuromorphic designs to challenge Nvidia in
that market. It isn\'t clear which of them will gain most market share.
What is certain is that current CPUs are now way more complex than any
individual human can fully understand.

Well, read the data sheet. People do lay out boards and write
compilers.

I doubt that any individual human fully understands my dishwasher.


The next generation will have
substantial parts designed by an AI to use fewer gates and less power to
implement the same functionality.

https://www.intel.com/content/www/us/en/research/neuromorphic-computing.html

and

https://siliconangle.com/2021/08/19/intel-debuts-100b-transistor-ai-chip-alder-lake-hybrid-processor/

AI sounds like a fad to me. People have been writing papers about
"neural networks" for decades. NNs are cargo-cult caricatures of a
biological brain.
 
On Sun, 9 Apr 2023 03:22:40 -0700 (PDT), Fred Bloggs
<bloggs.fredbloggs.fred@gmail.com> wrote:

On Friday, April 7, 2023 at 4:32:37 PM UTC-4, John Larkin wrote:
On Fri, 7 Apr 2023 21:10:26 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 07/04/2023 14:41, Fred Bloggs wrote:
On Friday, April 7, 2023 at 8:07:03 AM UTC-4, Dean Hoffman wrote:
This article is about a geometry issue people have been working on
since the 1960s.
https://www.breitbart.com/science/2023/04/06/british-retiree-solved-decades-old-geometry-problem/


We're going to see quite a lot of the age-old conjectural quandaries
start to crumble once AI becomes more pervasive. It's kinda sad to
see something people of the past spent decades of their lives on be
obliterated in a sub-millisecond flash by an AI machine. Actually not
sad.

You are over optimistic about the timescales but in some domains where a
combination of brute force and deep learnable heuristics can triumph I
suspect several of the famous outstandingly difficult mathematical
challenges may fall to AI within the next decade (perhaps sooner).

Machines don't get bored or make trivial typo mistakes like we do.

I never expected to see a computer that could master Go and yet now
there are several that can bootstrap ab initio from the basic rules and
surpass the best human players in under a month.

Draughts turned out to be misleadingly simple. Chess was a tougher nut
to crack requiring initially dedicated parallel hardware (but now it is
hard to find a serious chess program that can't beat a human GM).

Go seemed to be intractable with so many possible moves and game states.
Automated circuit design has been tried. It will be interesting to see
if A\"I\" can do any.

It will, and will do it the exact same way you design stuff.

I think not. Besides, you have no idea how I, or anyone else, designs
stuff. I have no idea myself, beyond noting that certain attitudes
seem to work.

Do you design stuff?


Previous attempts at automation were completely different; they didn't have close to the computational speed and resources available to the modern AI tools.

All that CPU power can't tell me if it will rain tomorrow.
 
On Monday, April 10, 2023 at 12:25:38 AM UTC+10, John Larkin wrote:
On Sun, 9 Apr 2023 03:22:40 -0700 (PDT), Fred Bloggs
<bloggs.fred...@gmail.com> wrote:
On Friday, April 7, 2023 at 4:32:37 PM UTC-4, John Larkin wrote:
On Fri, 7 Apr 2023 21:10:26 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 07/04/2023 14:41, Fred Bloggs wrote:
On Friday, April 7, 2023 at 8:07:03 AM UTC-4, Dean Hoffman wrote:
This article is about a geometry issue people have been working on
since the 1960s.
https://www.breitbart.com/science/2023/04/06/british-retiree-solved-decades-old-geometry-problem/


We're going to see quite a lot of the age-old conjectural quandaries
start to crumble once AI becomes more pervasive. It's kinda sad to
see something people of the past spent decades of their lives on be
obliterated in a sub-millisecond flash by an AI machine. Actually not
sad.

You are over optimistic about the timescales but in some domains where a
combination of brute force and deep learnable heuristics can triumph I
suspect several of the famous outstandingly difficult mathematical
challenges may fall to AI within the next decade (perhaps sooner).

Machines don't get bored or make trivial typo mistakes like we do.

I never expected to see a computer that could master Go and yet now
there are several that can bootstrap ab initio from the basic rules and
surpass the best human players in under a month.

Draughts turned out to be misleadingly simple. Chess was a tougher nut
to crack requiring initially dedicated parallel hardware (but now it is
hard to find a serious chess program that can't beat a human GM).

Go seemed to be intractable with so many possible moves and game states.
Automated circuit design has been tried. It will be interesting to see
if A\"I\" can do any.

It will, and will do it the exact same way you design stuff.
I think not. Besides, you have no idea how I, or anyone else, designs
stuff. I have no idea myself, beyond noting that certain attitudes
seem to work.

Do you design stuff?

John Larkin's favourite put-down. It would have more force if he ever posted anything that implied that he designed stuff - as opposed to slinging it together and seeing if it worked.

Previous attempts at automation were completely different; they didn't have close to the computational speed and resources available to the modern AI tools.

All that CPU power can't tell me if it will rain tomorrow.

Because you won't listen.

Weather is pretty predictable over periods up to about ten days. The "butterfly effect" takes a while to build up.

You've read enough to know that chaotic systems aren't predictable, but not enough to realise that it takes a while for them to diverge enough to matter. With the weather it's about a fortnight. With the solar system it's about a million years.

We keep telling you this, but it never sinks in.
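A minimal sketch of why a chaotic system stays predictable for a while before diverging: in the logistic map (a standard toy chaotic system, standing in here for the atmosphere; real weather models are vastly more complex), a tiny perturbation of the initial condition grows roughly exponentially until it saturates.

```python
def logistic_orbit(x0, r=4.0, steps=50):
    """Iterate x -> r*x*(1-x) and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.40000000)
b = logistic_orbit(0.40000001)  # same start, perturbed by 1e-8

# Early on the two trajectories are indistinguishable; after enough
# steps the separation has been amplified to order one and all
# predictive power is gone - the "takes a while to build up" effect.
for n in (0, 10, 20, 30, 40):
    print(n, abs(a[n] - b[n]))
```

The number of steps before the curves decorrelate plays the role of the fortnight for weather and the million years for the solar system.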

--
Bill Sloman, Sydney
 
On Sun, 9 Apr 2023 12:51:22 -0000 (UTC), Jasen Betts
<usenet@revmaps.no-ip.org> wrote:

On 2023-04-07, Ricky <gnuarm.deletethisbit@gmail.com> wrote:
On Friday, April 7, 2023 at 8:07:03 AM UTC-4, Dean Hoffman wrote:
This article is about a geometry issue people have been working on since the 1960s.
https://www.breitbart.com/science/2023/04/06/british-retiree-solved-decades-old-geometry-problem/

It's not an unattractive tile, or pattern. I wonder when they will be in production?

It's kind of boutique, and the shape may be hard to manufacture or
prove less durable due to stress concentration in the interior angles. Also,
getting the pattern right is essential. Some sort of pattern assistant is
probably warranted.

The pattern likely has to be exact, as in mathematically exact. It may
not be physically realizable.
 
On Sunday, April 9, 2023 at 12:39:14 PM UTC-4, John Larkin wrote:
On Sun, 9 Apr 2023 12:51:22 -0000 (UTC), Jasen Betts
<use...@revmaps.no-ip.org> wrote:

On 2023-04-07, Ricky <gnuarm.del...@gmail.com> wrote:
On Friday, April 7, 2023 at 8:07:03 AM UTC-4, Dean Hoffman wrote:
This article is about a geometry issue people have been working on since the 1960s.
https://www.breitbart.com/science/2023/04/06/british-retiree-solved-decades-old-geometry-problem/

It's not an unattractive tile, or pattern. I wonder when they will be in production?

It's kind of boutique, and the shape may be hard to manufacture or
prove less durable due to stress concentration in the interior angles. Also,
getting the pattern right is essential. Some sort of pattern assistant is
probably warranted.
The pattern likely has to be exact, as in mathematically exact. It may
not be physically realizable.

In the real world, nothing has to be "mathematically exact". I think it would make an excellent floor tile for a room. It could give the appearance of a repeating pattern without actually repeating. And floor tiles don't need to be exact, either. Grout is your friend.

--

Rick C.

+ Get 1,000 miles of free Supercharging
+ Tesla referral code - https://ts.la/richard11209
 
On Monday, April 10, 2023 at 2:39:14 AM UTC+10, John Larkin wrote:
On Sun, 9 Apr 2023 12:51:22 -0000 (UTC), Jasen Betts <use...@revmaps.no-ip.org> wrote:
On 2023-04-07, Ricky <gnuarm.del...@gmail.com> wrote:
On Friday, April 7, 2023 at 8:07:03 AM UTC-4, Dean Hoffman wrote:
This article is about a geometry issue people have been working on since the 1960s.
https://www.breitbart.com/science/2023/04/06/british-retiree-solved-decades-old-geometry-problem/

It's not an unattractive tile, or pattern. I wonder when they will be in production?

It's kind of boutique, and the shape may be hard to manufacture or
prove less durable due to stress concentration in the interior angles. Also,
getting the pattern right is essential. Some sort of pattern assistant is
probably warranted.

The pattern likely has to be exact, as in mathematically exact. It may
not be physically realizable.

John Larkin has never heard of grouting. Real tiles are always separated by a thin line of grout, which deals with the fact that their edges aren't perfectly straight.

Most people who do electronic design have heard of "tolerances". John Larkin doesn't seem to be one of them.

--
Bill Sloman, Sydney
 
On 09/04/2023 15:19, John Larkin wrote:
On Sun, 9 Apr 2023 09:49:32 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 07/04/2023 21:32, John Larkin wrote:
On Fri, 7 Apr 2023 21:10:26 +0100, Martin Brown
<'''newspam'''@nonad.co.uk> wrote:

On 07/04/2023 14:41, Fred Bloggs wrote:
On Friday, April 7, 2023 at 8:07:03 AM UTC-4, Dean Hoffman wrote:
This article is about a geometry issue people have been working on
since the 1960s.
https://www.breitbart.com/science/2023/04/06/british-retiree-solved-decades-old-geometry-problem/


We're going to see quite a lot of the age-old conjectural quandaries
start to crumble once AI becomes more pervasive. It's kinda sad to
see something people of the past spent decades of their lives on be
obliterated in a sub-millisecond flash by an AI machine. Actually not
sad.

You are over optimistic about the timescales but in some domains where a
combination of brute force and deep learnable heuristics can triumph I
suspect several of the famous outstandingly difficult mathematical
challenges may fall to AI within the next decade (perhaps sooner).

Machines don't get bored or make trivial typo mistakes like we do.

I never expected to see a computer that could master Go and yet now
there are several that can bootstrap ab initio from the basic rules and
surpass the best human players in under a month.

Draughts turned out to be misleadingly simple. Chess was a tougher nut
to crack requiring initially dedicated parallel hardware (but now it is
hard to find a serious chess program that can't beat a human GM).

Go seemed to be intractable with so many possible moves and game states.

Automated circuit design has been tried. It will be interesting to see
if A\"I\" can do any.

Most CPUs these days already contain chunks of stuff designed
exclusively by AI. It makes a lot fewer mistakes than a human -
especially on boring repetitive stuff and awkward edge cases.

What does "AI" mean as regards CPU design? I don't think I can go
online to an AI machine and say "design me a CPU."

That you choose to remain wilfully ignorant and parade that ignorance
here is your problem not mine.

You can ask AI to do well-posed things like "design me a faster multiply
algorithm using less power and/or less silicon" and it will do it.

DeepMind already has the first new matrix multiplication algorithm
under its belt - the first breakthrough in that field for 50 years!

https://www.newscientist.com/article/2340343-deepmind-ai-finds-new-way-to-multiply-numbers-and-speed-up-computers/

NS article is paywalled. This one isn't:

https://thenewstack.io/how-deepminds-alphatensor-ai-devised-a-faster-matrix-multiplication/

It is around 10-20% better than what the very best human minds have ever
been able to come up with since computers were invented. It has made the
inventive step by extrapolating from the training examples it was shown
and then finding new insights that have eluded human minds.
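For context, here is a minimal sketch of the kind of scheme AlphaTensor searches over: Strassen's classic 1969 algorithm, which multiplies 2x2 blocks with seven multiplications instead of the naive eight. This is the long-known human baseline that AlphaTensor improved on for certain sizes, not DeepMind's new algorithm itself.

```python
import numpy as np

def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using Strassen's seven products."""
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
    # The seven products (naive multiplication needs eight)
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    # Recombine into the product matrix
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4, m1 - m2 + m3 + m6]])

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
print(strassen_2x2(A, B))  # same result as A @ B
```

Applied recursively to matrix blocks, saving one multiplication per level is what drops the asymptotic cost below n cubed; AlphaTensor's contribution was finding decompositions with fewer products than any human had for some block sizes.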

The same happened with Go, where it found a host of novel positions. Whilst
AlphaGo was essentially taught to play Go by humans, the next-generation,
more general AI AlphaZero was able to bootstrap itself from
the rules to stronger than the best human in about a month.

https://www.newyorker.com/science/elements/how-the-artificial-intelligence-program-alphazero-mastered-its-games

Intel is working on its own neuromorphic designs to challenge Nvidia in
that market. It isn't clear which of them will gain most market share.
What is certain is that current CPUs are now way more complex than any
individual human can fully understand.

Well, read the data sheet. People do lay out boards and write
compilers.

AI is getting close now to being able to lay out boards or silicon
better than most humans. The best combination is invariably a human to
do the rough placement and machine assist for the tedious parts like
getting all the tolerances and connectivity exactly right.

Right now humans assisted by AI/computer tools can leverage the
strengths of each. AI doesn\'t get bored or make fencepost errors.

I doubt that any individual human fully understands my dishwasher.


The next generation will have
substantial parts designed by an AI to use fewer gates and less power to
implement the same functionality.

https://www.intel.com/content/www/us/en/research/neuromorphic-computing.html

and

https://siliconangle.com/2021/08/19/intel-debuts-100b-transistor-ai-chip-alder-lake-hybrid-processor/

AI sounds like a fad to me. People have been writing papers about
"neural networks" for decades. NNs are cargo-cult caricatures of a
biological brain.

You choose not to understand things and then snipe at straw men of your
own invention to bolster your argument from a position of ignorance.

Neural networks are good enough to emulate the characteristics of a
brain - the only thing lacking is sufficient numbers of them with high
enough connectivity to emulate higher cognitive functions.

What we think of as consciousness and self awareness is an emergent
behaviour in a sufficiently complicated network of neurons. Artificial
machine-based ones haven't become that big yet, but they will eventually.

--
Martin Brown
 
