AI learns to design

On Monday, November 11, 2019 at 6:21:04 AM UTC+11, tabb...@gmail.com wrote:
On Sunday, 10 November 2019 06:01:29 UTC, bitrex wrote:
On 11/10/19 12:57 AM, bitrex wrote:
On 11/9/19 10:16 PM, jlarkin@highlandsniptechnology.com wrote:

There have been attempts to design with computers, or to do the easier
task of optimizing a given circuit. Not much progress so far.

Using AI to "evolve" designs on real FPGAs finds flaws in the particular
devices that it exploits, like building ring oscillators that aren't
connected to any power supply net and run on parasitics.

That is to say, if you only tell the algorithm to use the "minimum number of
gates" and don't enforce a rule that the gates must all be powered in some
way, it will sometimes find a minimum-gate solution that nevertheless
works, on one particular device, for one particular task. "Life finds a way."
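
For illustration, roughly what that looks like inside a genetic-algorithm fitness function. This is only a sketch: the candidate/netlist layout and the is_powered() check are made up for the example, and the point is just that the "must be powered from a real supply net" rule has to be scored explicitly or the optimizer is free to ignore it.

# Hypothetical sketch of scoring one evolved candidate circuit (Python).
# The gate/netlist representation and is_powered() are illustrative,
# not taken from any real FPGA-evolution tool.

def is_powered(gate, netlist):
    # Assume each gate records which nets its supply pins touch.
    return any(net in netlist.power_nets for net in gate.supply_nets)

def fitness(candidate, netlist, passes_tests):
    score = 0.0
    if passes_tests(candidate):           # does the evolved circuit do the task?
        score += 100.0
    score -= len(candidate.gates)         # reward "minimum number of gates"
    # The rule we didn't think to tell the computer: every gate must be
    # powered from a real supply net, not from parasitic coupling.
    unpowered = sum(1 for g in candidate.gates if not is_powered(g, netlist))
    score -= 50.0 * unpowered             # heavy penalty per violation
    return score

Without that last penalty term, a candidate that runs its ring oscillator off parasitics scores exactly as well as a properly powered one, so the search will happily keep it.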

We have rules that we didn't think to tell the computer. It's a major problem in programming.

Not exactly. Learning to program involves learning to recognise all the rules you need to apply to get the result you want, and then - of course - working out how you tell them to the machine, and how you get it to check that every last one has been satisfied.

If you have rules you didn't think to tell the computer about, you haven't mastered programming.

--
Bill Sloman, Sydney
 
On Mon, 11 Nov 2019 08:16:20 GMT, Jan Panteltje
<pNaOnStPeAlMtje@yahoo.com> wrote:

On Sun, 10 Nov 2019 11:18:50 -0800 (PST), tabbypurr@gmail.com wrote:

Maybe not. Some problems don't get solved much better by applying more
and faster Intel CPUs.

That is the point:
a neural net is in principle NOT related to CPUs.

No, neural nets are cargo-cult cartoons of a biological nervous system.


Hardware has been developed that behaves much like a neuron.

Not a bit like a neuron. Even single-cell organisms, with no nervous
system, have complex behavior. Things with a couple dozen neurons have
extremely complex behavior.

OK, getting a bit into philosophy perhaps:
my first encounter with that 'neuron' idea in programming
was, I think, in the eighties, in a German magazine called MC(?), where a prof had little cars
(software emulation) controlled by only 2 or 3 (don't remember) of those 'neurons'.

Those cars showed amazingly human-like behavior; depending on how you 'wired'
the neurons, they would circle around each other or avoid each other, etc.

It got my interest back then (robotics; I had been building a little car myself),
and I did some experimenting.

Later I did write some neural net code to 'learn' stuff,
but then I was too busy with work etc. to pay much attention.
For object recognition (video is my field) it was at that time not really good enough.
A lot has been done since the eighties and many papers have been published; quite amazing
results are now achieved with neural nets in object recognition, and amazing special hardware exists too.
That field is still very much alive.


For beginners:
https://towardsdatascience.com/first-neural-network-for-beginners-explained-with-code-4cfd37e06eaf
first google hit..

The thingy in fig 2 can be made from analog components, so no 'puter needed,
but 'puters are nice for emulating such a thing.
The 'weights' storage could be in the form of a capacitor charge;
it does not have to be digital registers.
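
For concreteness, the fig 2 'thingy' is just a weighted sum pushed through a threshold, and 'learning' means nudging the weights. A toy sketch follows; the data and numbers are only illustrative, not taken from the linked article.

# Toy single neuron ("fig 2"): weighted sum of inputs plus a bias,
# squashed through a step activation. The weights are plain numbers here,
# but they could just as well live as capacitor charges in an analog part.

def neuron(inputs, weights, bias):
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if s > 0 else 0

# 'Learning' changes the weights: the classic perceptron update rule.
def train(samples, n_inputs, rate=0.1, epochs=20):
    weights = [0.0] * n_inputs
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - neuron(inputs, weights, bias)
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

# Example: teach it a 2-input AND gate.
samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(samples, n_inputs=2)
print([neuron(x, w, b) for x, _ in samples])   # -> [0, 0, 0, 1]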

NNs are popular in academia for some reason. I've had job applicants,
recent grads, show me their NN projects, which they didn't actually
understand.

Mm, I dunno, could it perhaps be _you_ who missed something?


Chips exist that contain fig 2 in quantity...

'Learning' changes the weights.

Interesting stuff..
There is open source software for Linux to play with this.
Those are 'emulations' however, like spice.
The analog thing, does it have a (CPU) clock?

We are mostly analog; that is why we cannot do big-number math easily...
Math is just a sub-structure of a few neurons in our brain,
and dangerous when taken for reality, much like religion and other 'beliefs'.
My view anyway :)

I do a lot of math at the whiteboard, in my head, which impresses
people, but it's analog computing, like a slide rule. Accuracy of 10% or
thereabouts is good enough at a whiteboard; exact results in
many cases.

There have been "human computers" who can do high-precision
numerical-digits math in their heads.

Yes, there are those special people; I am one of those, did electronics at 9 years old ;-),
mama had to get a waiver from the library so I could get the books.
There is also somebody who can learn a new language in 12 hours or something...

And recite pi to I dunno how many decimals.
Maybe with enough training and the will to do it anybody can.
The will to do it is an important factor.


I was reading about this kid last week (in Dutch):
https://www.nu.nl/wetenschap/6009969/negenjarige-jongen-behaalt-bachelordiploma-aan-technische-universiteit.html
He learns so fast he is in the Guinness Book of Records.
He wants to go into engineering to make artificial limbs and things.
When he hands in his final study project, US universities will send a delegation to see if they can recruit him...


WE sHoUlD KeeP hIm iN Europe
 
