DNA animation

On Wed, 15 May 2019 21:50:06 +0100, Tom Gardner
<spamjunk@blueyonder.co.uk> wrote:

On 15/05/19 20:21, John Larkin wrote:
On Wed, 15 May 2019 18:09:59 +0100, Tom Gardner
<spamjunk@blueyonder.co.uk> wrote:

On 15/05/19 17:06, John Larkin wrote:
On Wed, 15 May 2019 15:54:01 +0100, Tom Gardner
<spamjunk@blueyonder.co.uk> wrote:

On 15/05/19 15:32, John Larkin wrote:
I don't think we'll invite you to any of our brainstorming sessions.
Some people poison brainstorming.

Yebbut. There are two phases to brainstorming:
- firstly rapid generation of ideas, which requires complete
suspension of disbelief
- followed by selection of the ideas that might work, and
discarding the others

Alternatively consider team makeup...

If you have two "ideas men" only, then sparks will fly and
everybody will also have great fun - but nothing
will come of it.

If you have two "critics" only, then there will be very
realistic plans, but they will be boring.

Sometimes the seed of a great idea comes from someone that nobody
expected anything from, like an intern invited in to observe.
Sometimes the inspiration is just a question.

Yes, indeed.

It can be very hard to break out of a preconception - I may
have suffered from an example of that today since writing
my previous response. Now I'm trying to figure out how to
choose between two possibilities.

Brainstorming is a group extension of a basic process: send your
mental tendrils as far and wide as possible into the potential, real
or absurd, solution space, dredge up anything interesting or amusing,
and play with it to see what develops. More people can spread out
further into that space, or riff on what someone else finds.

That's the necessary and sometimes beneficial /first/ part.

The separate second part, analysis and pruning, is also necessary.


Our little fiberoptic back channel monitor, the minimal FSK
generator/detector thing, is trivial and not worth optimizing, but has
inspired about 20 approaches so far and has been a lot of fun.
Exercises like this tend to linger in the back of one's brain and
sometimes turn out to be useful years later.

Yup.


The logic gate Icc charge dispenser F/V converter, based on a
suggestion in this group, is really slick. It's barely possible to
brainstorm circuits in a public forum, but it's difficult because the
majority of posters are dour and idea-hostile or frankly uninterested
in electronics.

Depends on whether they are primarily ideas men
or critics.

Also, it is easier to be a remote critic, and
more difficult to force remote suspension of
critical faculties.


OTOH, if you have one "ideas man" and one "critic" you
stand a chance of getting novel and realistic plans.

Of course if you want to get something used in the real
world you also need "workers", "finishers", "communicators",
"chairman".


A little nonsense now and then
Is cherished by the wisest men.

- Willy Wonka

Notice the words "little" and "now and then".

Most professions, including brain surgery and electronics design, are
mostly disciplined implementation... grunt work. I spent the weekend
tweaking impedances and crosstalk clearances and bypassing on a very
big PC board. Good thing, because all that staring at the presumably
finished design turned up a big mistake.

Been there, done that. I expect everybody on this group has.


Then, for comic relief, I got
to write a test plan for the customer to review.

You are lucky to have a customer that can meaningfully
review it; it isn't guaranteed!

As if! They require a test plan, but they are apparently not required
to read it.

So, a normal customer that might feel able to blame you
for something you couldn't have known and they didn't
specify/avoid.

Their system requires a test plan. Actually, I have no idea about what
a test plan should look like, so I made something up. I could probably
have filled it with dirty limericks and submitted it.

I did put some Bullwinkle cartoons in a proposal to a big aerospace
company, and they liked them.

https://www.dropbox.com/s/feolaq0e4i2vtla/Wayback_Front.JPG?dl=0

We call that box the Wayback Machine.

https://www.dropbox.com/s/zb5os8y130eumle/peabody_and_sherman.jpg?dl=0



--

John Larkin Highland Technology, Inc
picosecond timing precision measurement

jlarkin att highlandtechnology dott com
http://www.highlandtechnology.com
 
On 14/5/19 7:14 pm, Martin Brown wrote:
On 10/05/2019 03:15, Clifford Heath wrote:
Life is a quine (a program that outputs itself), but a special kind of
quine; it also outputs the machine that runs the program. The name for
this kind of quine is simply "life".

That is actually a very nice way to describe it in terms that an
engineer ought to be able to understand.
...
They are all quines but some are more elegant than others.

You'll like this extraordinary quine, which (run repeatedly) prints a
rotating globe. Somehow it has the global map, though it only prints one
side at a time. It requires a Ruby interpreter, which comes built-in on
my Mac.
<https://github.com/knoxknox/qlobe>

I think it really depends on how strictly you interpret the definition
of life.

I thought this discussion was playing with the boundaries of a
definition of life, not taking an existing definition as gospel. So I
proposed a definition (above).

Clifford Heath.
 
John Larkin wrote:
On Wed, 15 May 2019 18:27:00 +0100, Martin Brown
<'''newspam'''@nezumi.demon.co.uk> wrote:

However, I am not prepared to allow you to attack modern scientific
research from a position of wilful ignorance without pushing back.

Primordial soup!

John, when you look at that video of the DNA replication at real-time
speed, the parts being assembled are amino acids that form a soup in the
cell. It's amazing the parts are right at hand for the machine running
at that speed. You know that elemental primordial soup can make amino
acids because we've done that, so there's no problem.
 
On Wed, 15 May 2019 11:02:34 +0100, Martin Brown
<'''newspam'''@nezumi.demon.co.uk> wrote:

On 15/05/2019 00:36, krw@notreal.com wrote:
On Tue, 14 May 2019 21:45:36 +0100, Tom Gardner
<spamjunk@blueyonder.co.uk> wrote:

On 14/05/19 17:00, John Larkin wrote:
Of course it's a big problem. Big problems need big ideas.

Big /solutions/ need falsifiable hypotheses and tests.

Exactly what problem are you trying to solve, with the origin of life?

How did it get started? Where should we be looking for other life?

How is that a "problem" that _needs_ to be "solved"?
We may even be able to use DNA or similar molecules to solve certain
combinatorial problems. There has been some interesting work done on
recasting certain problems into a form where they can be computed by
manipulating designer DNA sequences in wet chemistry. eg.

https://www.technologyreview.com/s/400727/dna-computing/

(So do small solutions)

We do a lot of small stuff every day.

There is an optimum difficulty of problem that can be tackled
with today's available resources. So long as Moore's Law holds you can
prove that for some hard computational problems the fastest way to the
solution is to go surfing on the beach for a couple of years and then
start building your hardware using the latest fastest CPUs and memory.

That's like saying "I'll be later getting to work if I leave now,
rather than an hour from now because the traffic is worse this time of
the day.". Absurd.

This may change now that we are getting awfully close to the limits of
what feature detail resolution you can sensibly etch into silicon and
still have it work. We must be close to the point where Moore's Law runs
into the buffers, at least for a while, given atomic-scale limitations.

That's been said before but that "at least for a while" seems to get
shorter.
 
On May 15, 2019, Bill Sloman wrote
(in article<7f7cdc94-c1a9-46db-b7fa-92a4833c900d@googlegroups.com>):

On Wednesday, May 15, 2019 at 11:41:27 PM UTC+10, Joseph Gwinn wrote:
On May 15, 2019, Bill Sloman wrote
(in article<8a31eb25-0236-4f8b-9143-e831f1bdc35d@googlegroups.com>):

On Wednesday, May 15, 2019 at 9:32:47 AM UTC+10, John Larkin wrote:
On Tue, 14 May 2019 16:25:46 +0100, Martin Brown
<'''newspam'''@nezumi.demon.co.uk> wrote:

On 14/05/2019 15:50, Rick C wrote:
On Tuesday, May 14, 2019 at 7:04:31 AM UTC-4, Martin Brown wrote:
On 14/05/2019 04:50, John Larkin wrote:

The inventor critters could have evolved from inorganics in a
more reasonable incremental way than we are supposed to have.
They might have originated on a gas giant, or in superfluid
helium, or something.

They would have had to invent a time machine as well then. The universe
has only fairly recently become cool enough for the microwave background
radiation to permit superfluid helium to condense naturally.

You really do have absolutely no physical intuition whatsoever.

JL's talents include designing analog electronics at the board level,
drinking beer and eating burgers. He is largely ignorant of the
greater world and chooses to remain that way. Are you really
surprised at this point?

He is still a decent engineer and obviously intelligent. I cannot
understand why he has such a preference for "just so" stories.

Because science keeps being blindsided by astounding discoveries.

Name one.

The classic examples have to be Relativity and Quantum Physics, and thus the
atom bomb.

The connection between relativity and quantum physics and the atom bomb is
pretty remote.

Really? E=mc^2 came from Special Relativity. Before that, nobody even guessed
that such a thing could exist.

The following are interesting details, but change nothing.

..
The Einstein-Szilard letter

https://en.wikipedia.org/wiki/Einstein%E2%80%93Szil%C3%A1rd_letter

may have got the Manhattan project going, but it was Szilard's chemical
connections that prompted him to write it, and Einstein was dragged in
because he was famous, not because his contributions to relativity or e=mc^2
were all that relevant - special relativity came out in 1905.

In 1900, people thought that Physics was pretty much settled, and that all
that remained was to tidy up a few constants, and figure out a few odd
little results, like why the photoelectric effect existed and why thermal
radiation was red, not blue (as the theory said it should be).

Some elderly physicists said stuff to that effect. The rest of the field
wasn't blind-sided.

Max Planck had invented quantisation to sort out the ultraviolet catastrophe
in 1900, but it took Einstein's 1905 paper on the photoelectric effect to
give the idea some traction.

https://en.wikipedia.org/wiki/Ultraviolet_catastrophe

Little did they know....

More than you do ...

Joe Gwinn
 
On Thursday, May 16, 2019 at 2:06:59 AM UTC+10, John Larkin wrote:
On Wed, 15 May 2019 15:54:01 +0100, Tom Gardner
<spamjunk@blueyonder.co.uk> wrote:

On 15/05/19 15:32, John Larkin wrote:
I don't think we'll invite you to any of our brainstorming sessions.
Some people poison brainstorming.

Yebbut. There are two phases to brainstorming:
- firstly rapid generation of ideas, which requires complete
suspension of disbelief
- followed by selection of the ideas that might work, and
discarding the others

Alternatively consider team makeup...

If you have two "ideas men" only, then sparks will fly and
everybody will also have great fun - but nothing
will come of it.

If you have two "critics" only, then there will be very
realistic plans, but they will be boring.

Sometimes the seed of a great idea comes from someone that nobody
expected anything from, like an intern invited in to observe.
Sometimes the inspiration is just a question.

Example?

Brainstorming is a group extension of a basic process: send your
mental tendrils as far and wide as possible into the potential, real
or absurd, solution space, dredge up anything interesting or amusing,
and play with it to see what develops. More people can spread out
further into that space, or riff on what someone else finds.

Some people extend into more useful space than others. Knowing more about what you are brainstorming about can point the extension into more useful areas.

Our little fiberoptic back channel monitor, the minimal FSK
generator/detector thing, is trivial and not worth optimizing, but has
inspired about 20 approaches so far and has been a lot of fun.
Exercises like this tend to linger in the back of one's brain and
sometimes turn out to be useful years later.

Like the junk in the attic, that never gets thrown out because it "might be useful".

The logic gate Icc charge dispenser F/V converter, based on a
suggestion in this group, is really slick. It's barely possible to
brainstorm circuits in a public forum, but it's difficult because the
majority of posters are dour and idea-hostile or frankly uninterested
in electronics.

My 1976 PWM D/A converter started off using logic outputs to generate the square waves, but the voltages coming out of a logic gate aren't well-defined. We went to driving a transmission gate to generate the waveform and got much better precision.

There's a lot of hostility to bad ideas. They can soak up a lot of effort to no useful purpose.

OTOH, if you have one "ideas man" and one "critic" you
stand a chance of getting novel and realistic plans.

Of course if you want to get something used in the real
world you also need "workers", "finishers", "communicators",
"chairman".


A little nonsense now and then
Is cherished by the wisest men.

- Willy Wonka

Notice the words "little" and "now and then".

Most professions, including brain surgery and electronics design, are
mostly disciplined implementation... grunt work. I spent the weekend
tweaking impedances and crosstalk clearances and bypassing on a very
big PC board. Good thing, because all that staring at the presumably
finished design turned up a big mistake. Then, for comic relief, I got
to write a test plan for the customer to review.

Of course original thought happens a small fraction of the time,
except that most people never do it.

And lots of people who think they are doing it are actually re-inventing a lumpy version of the wheel, long after their better-informed colleagues have moved onto other problems.

--
Bill Sloman, Sydney
 
On Thursday, May 16, 2019 at 8:33:13 AM UTC+10, Clifford Heath wrote:
On 16/5/19 12:40 am, Bill Sloman wrote:
On Wednesday, May 15, 2019 at 11:23:49 PM UTC+10, Clifford Heath wrote:
On 15/5/19 2:00 pm, Bill Sloman wrote:
On Wednesday, May 15, 2019 at 8:33:05 AM UTC+10, Clifford Heath wrote:
On 15/5/19 1:28 am, Jeroen Belleman wrote:
Rick C wrote:

snip

Hybridising with animal genetics sounds nuts.

Perhaps the wrong word. Pinching genes is less than I mean; I mean
pinching phenotypic structures.

Yes, that's much more than just "a step too far" for most
folk, rather it transplants the dialog onto another planet. I've been
brewing up a novel about it for well over a decade now.

A little more background reading would seem to be in order.

Thanks for telling me that I'm not the person who knows the most about
what I have or haven't been reading :p

One doesn't have to know much about it to get the impression that you should have done more.

If " hybridising with animal genetics" isn't quite you had in mind, you clearly ought to have read enough to be able to express what you actually had in mind, with the subsidiary point that you probably didn't have a clear enough idea of what might be done to have had a idea that you could have expressed clearly.

I've read a lot of science-fiction - if you felt like e-mailing me a chunk of the text I might be able to give you comments.

I could share the plot outline. There are still some big "story-telling"
aspects that I haven't figured out how to do.

I know the feeling. Finding a narrative line isn't easy.

Basically it looks back two generations on the aftermath of an
accidental escape of a private experiment by an idealistic viro-ceutical
researcher.

Not a great starting point for a gripping narrative line.

I would be much more willing to engage with you if you didn't try so
damn hard to make yourself odious. There is a great story here and some
excellent vivid characters, but I don't know how to build tension when
the "disaster" foreseen turns out not to have been a disaster at all.
Sort-of an anti-thriller...

Like I said, not a great starting point for a gripping narrative line.

Good news doesn't sell newspapers.

The story is told by patient zero, a retired female GP, in response to
questions from her grand-children. That's the literary device to
introduce the story anyhow. There is the story about how it all started,
but there is global social turmoil in the intervening generation... and
then there is "now" - acceptance of what has happened and cannot be
reversed. So the story-telling can jump between these.

The global social turmoil offers the potential for dramatic events, but having patient zero talking to her grandchildren kills off a lot of the potential dramatic tension.

--
Bill Sloman, Sydney
 
On Thursday, May 16, 2019 at 5:14:55 AM UTC+10, John Larkin wrote:
On Tue, 14 May 2019 20:25:29 -0700 (PDT), whit3rd <whit3rd@gmail.com>
wrote:

On Tuesday, May 14, 2019 at 4:32:47 PM UTC-7, John Larkin wrote:

Because science keeps being blindsided by astounding discoveries.

Huh? Science is NOT blind to possibilities,

Sadly, sometimes not just blind but outright hostile.

Example?

those discoveries are
the result of planning and careful work.

Or some crazy amateur who doesn't know he/she isn't allowed to
discover things.

Example?

--
Bill Sloman, Sydney
 
On Thursday, May 16, 2019 at 5:22:41 AM UTC+10, John Larkin wrote:
On Wed, 15 May 2019 18:27:00 +0100, Martin Brown
<'''newspam'''@nezumi.demon.co.uk> wrote:

On 15/05/2019 15:32, John Larkin wrote:
On Wed, 15 May 2019 09:06:27 +0100, Martin Brown
<'''newspam'''@nezumi.demon.co.uk> wrote:

The laws of physics and chemistry are determined by experiment and there
have been some very elegant experiments done to test and break the
existing paradigms. Scientists are human, so there can be large egos
involved; the Hoyle vs Ryle debacle over the Steady State vs Big Bang
cosmologies ("Big Bang" being a derisive name Hoyle coined for Einstein-de
Sitter expanding universes, which stuck) was particularly bitter. The last
guard of the old paradigm sometimes never accepts that it was wrong.

However, you are rather prone to picking up nonsensical gibberish and
trying to push it as a valid idea in what is nominally a science group.

I don't pick up nonsense, I invent it. For fun and profit.

We only have your word for that, but I am inclined to believe you.

I don't think we'll invite you to any of our brainstorming sessions.
Some people poison brainstorming.

You could not be more wrong. I enjoy brainstorming new ideas.

However, I am not prepared to allow you to attack modern scientific
research from a position of wilful ignorance without pushing back.

Primordial soup!

Wilful ignorance.

--
Bill Sloman, Sydney
 
On Thursday, May 16, 2019 at 11:22:05 AM UTC+10, k...@notreal.com wrote:
On Wed, 15 May 2019 11:02:34 +0100, Martin Brown
<'''newspam'''@nezumi.demon.co.uk> wrote:

On 15/05/2019 00:36, krw@notreal.com wrote:
On Tue, 14 May 2019 21:45:36 +0100, Tom Gardner
<spamjunk@blueyonder.co.uk> wrote:

On 14/05/19 17:00, John Larkin wrote:

<snip>

There is an optimum difficulty of problem that can be tackled
with today's available resources. So long as Moore's Law holds you can
prove that for some hard computational problems the fastest way to the
solution is to go surfing on the beach for a couple of years and then
start building your hardware using the latest fastest CPUs and memory.

That's like saying "I'll be later getting to work if I leave now,
rather than an hour from now because the traffic is worse this time of
the day.". Absurd.

Wrong analogy. I've jumped in early twice - once using TI's 64k serial memory (in 1978) and once using GigaBit Logic's GaAs fast logic (in 1988).

In the first case, 16k RAM got cheap enough a few months later to make the approach sub-optimal, and in the second case GaAs logic never got up to the production yields it needed to make it attractive while Motorola's ECLinPS was close enough behind (and much easier to produce - and use) to kill any enthusiasm for further development.

This may change now that we are getting awfully close to the limits of
what feature detail resolution you can sensibly etch into silicon and
still have it work. We must be close to the point where Moore's Law runs
into the buffers, at least for a while, given atomic-scale limitations.

That's been said before but that "at least for a while" seems to get
shorter.

Putting your faith in quantum computing then? That does seem to exploit a different set of physical laws.

--
Bill Sloman, Sydney
 
On 16/5/19 1:08 pm, Bill Sloman wrote:
On Thursday, May 16, 2019 at 11:22:05 AM UTC+10, k...@notreal.com wrote:
On Wed, 15 May 2019 11:02:34 +0100, Martin Brown
<'''newspam'''@nezumi.demon.co.uk> wrote:

On 15/05/2019 00:36, krw@notreal.com wrote:
On Tue, 14 May 2019 21:45:36 +0100, Tom Gardner
<spamjunk@blueyonder.co.uk> wrote:

On 14/05/19 17:00, John Larkin wrote:

snip

There is an optimum difficulty of problem that can be tackled
with today's available resources. So long as Moore's Law holds you can
prove that for some hard computational problems the fastest way to the
solution is to go surfing on the beach for a couple of years and then
start building your hardware using the latest fastest CPUs and memory.

That's like saying "I'll be later getting to work if I leave now,
rather than an hour from now because the traffic is worse this time of
the day.". Absurd.

No, it's like saying "should I jump on the bus now, or wait until my
wife returns from the shops so I can drive the car to work?"

If you start work with a technology that slows you down, it's easy to
get too invested in it to be able to jump ship when a better way comes
along - so it's better not to start yet.

Wrong analogy. I've jumped in early twice - once using TI's 64k serial memory (in 1978) and once using GigaBit Logic's GaAs fast logic (in 1988).

In the first case, 16k RAM got cheap enough a few months later to make the approach sub-optimal, and in the second case GaAs logic never got up to the production yields it needed to make it attractive while Motorola's ECLinPS was close enough behind (and much easier to produce - and use) to kill any enthusiasm for further development.

It's always hard to say though. At the time when Intel produced the
first Pentiums and had trouble getting the yields high enough, they were
using 17 mask layers to make it.

DEC's Alpha was using 4, HP's 800 series was using 3, and MIPS was using
only 2, to produce CPUs of comparable power - but with much better
yields of course, and much better MIPS/Watt.

Where are those others, now?

And before that, we thought it was all about MIPS, until we discovered
that it was actually all about bandwidth - the RISC vs CISC wars died
out when neither could get data on and off chip as fast as they could do
something useful with it.

Clifford Heath.
 
On Thursday, May 16, 2019 at 11:26:09 AM UTC+10, Joseph Gwinn wrote:
On May 15, 2019, Bill Sloman wrote
(in article<7f7cdc94-c1a9-46db-b7fa-92a4833c900d@googlegroups.com>):

On Wednesday, May 15, 2019 at 11:41:27 PM UTC+10, Joseph Gwinn wrote:
On May 15, 2019, Bill Sloman wrote
(in article<8a31eb25-0236-4f8b-9143-e831f1bdc35d@googlegroups.com>):

On Wednesday, May 15, 2019 at 9:32:47 AM UTC+10, John Larkin wrote:
On Tue, 14 May 2019 16:25:46 +0100, Martin Brown
<'''newspam'''@nezumi.demon.co.uk> wrote:

On 14/05/2019 15:50, Rick C wrote:
On Tuesday, May 14, 2019 at 7:04:31 AM UTC-4, Martin Brown wrote:
On 14/05/2019 04:50, John Larkin wrote:

The inventor critters could have evolved from inorganics in a
more reasonable incremental way than we are supposed to have.
They might have originated on a gas giant, or in superfluid
helium, or something.

They would have had to invent a time machine as well then. The universe
has only fairly recently become cool enough for the microwave background
radiation to permit superfluid helium to condense naturally.

You really do have absolutely no physical intuition whatsoever.

JL's talents include designing analog electronics at the board level,
drinking beer and eating burgers. He is largely ignorant of the
greater world and chooses to remain that way. Are you really
surprised at this point?

He is still a decent engineer and obviously intelligent. I cannot
understand why he has such a preference for "just so" stories.

Because science keeps being blindsided by astounding discoveries.

Name one.

The classic examples have to be Relativity and Quantum Physics, and thus the atom bomb.

The connection between relativity and quantum physics and the atom bomb is
pretty remote.

Really? E=mc^2 came from Special Relativity. Before that, nobody even guessed
that such a thing could exist.

In 1905. It didn't have much practical application until people started seeing the mass defect in measured nuclear masses.

F.W. Aston, Proceedings of the Royal Society 115A (1927) 487 seems to record what Aston saw when his second mass spectrometer was accurate enough to detect the mass defect (better than 1% accuracy).

> The following are interesting details, but change nothing.

That you don't know how your "astounding discovery" took a couple of decades to go from a theoretical insight to the germ of a practical application?

The Einstein-Szilard letter

https://en.wikipedia.org/wiki/Einstein%E2%80%93Szil%C3%A1rd_letter

may have got the Manhattan project going, but it was Szilard's chemical
connections that prompted him to write it, and Einstein was dragged in
because he was famous, not because his contributions to relativity or e=mc^2
were all that relevant - special relativity came out in 1905.

In 1900, people thought that Physics was pretty much settled, and that all
that remained was to tidy up a few constants, and figure out a few odd
little results, like why the photoelectric effect existed and why thermal
radiation was red, not blue (as the theory said it should be).

Some elderly physicists said stuff to that effect. The rest of the field
wasn't blind-sided.

Max Planck had invented quantisation to sort out the ultraviolet catastrophe
in 1900, but it took Einstein's 1905 paper on the photoelectric effect to
give the idea some traction.

https://en.wikipedia.org/wiki/Ultraviolet_catastrophe

Little did they know....

More than you do ...

--
Bill Sloman, Sydney
 
On 16/05/19 03:43, Bill Sloman wrote:
On Thursday, May 16, 2019 at 2:06:59 AM UTC+10, John Larkin wrote:
On Wed, 15 May 2019 15:54:01 +0100, Tom Gardner <spamjunk@blueyonder.co.uk>
wrote:

On 15/05/19 15:32, John Larkin wrote:
I don't think we'll invite you to any of our brainstorming sessions.
Some people poison brainstorming.

Yebbut. There are two phases to brainstorming:
- firstly rapid generation of ideas, which requires complete
suspension of disbelief
- followed by selection of the ideas that might work, and
discarding the others

Alternatively consider team makeup...

If you have two "ideas men" only, then sparks will fly and everybody will
also have great fun - but nothing will come of it.

If you have two "critics" only, then there will be very realistic plans,
but they will be boring.

Sometimes the seed of a great idea comes from someone that nobody expected
anything from, like an intern invited in to observe. Sometimes the
inspiration is just a question.

Example?

I forget all the details, but once upon a time we
were brainstorming comms systems, and things were
in the process of stalling.

I asked "how would you do that with yoghurt", and
things got moving again - and reached a useful conclusion.

Trivial? Yes, of course.

Feynman told the story of being sent out to assess
a production facility for making the bomb in WW2.
He knew he wouldn't be able to provide any detailed
technical assessment. He did randomly select a valve
and ask what would happen if it jammed. That kicked
off a discussion amongst the local staff, and they
did discover a significant vulnerability.



Brainstorming is a group extension of a basic process: send your mental
tendrils as far and wide as possible into the potential, real or absurd,
solution space, dredge up anything interesting or amusing, and play with it
to see what develops. More people can spread out further into that space,
or riff on what someone else finds.

Some people extend into more useful space than others. Knowing more about
what you are brainstorming about can point the extension into more useful
areas.

Yes, but someone "with a different toolkit", or who
isn't "in the middle of the trees", can be
significantly helpful.

I'm sure you can think of such cases from your own
experience.


Our little fiberoptic back channel monitor, the minimal FSK
generator/detector thing, is trivial and not worth optimizing, but has
inspired about 20 approaches so far and has been a lot of fun. Exercises
like this tend to linger in the back of one's brain and sometimes turn out
to be useful years later.

Like the junk in the attic, that never gets thrown out because it "might be
useful".

Unfortunately, often they do turn out to be useful
in ways not imagined.

That's my excuse, and I'm sticking to it.



The logic gate Icc charge dispenser F/V converter, based on a suggestion in
this group, is really slick. It's barely possible to brainstorm circuits in
a public forum, but it's difficult because the majority of posters are dour
and idea-hostile or frankly uninterested in electronics.

My 1976 PWM D/A converter started off using logic outputs to generate the
square waves, but the voltages coming out of a logic gate aren't
well-defined. We went to driving a transmission gate to generate the waveform
and got much better precision.

There's a lot of hostility to bad ideas. They can soak up a lot of effort to
no useful purpose.

Yes, and that's a key point.

That's why brainstorming /must/ have the /second/
phase: select possibly helpful avenues and discard
the rest.



OTOH, if you have one "ideas man" and one "critic" you stand a chance of
getting novel and realistic plans.

Of course if you want to get something used in the real world you also
need "workers", "finishers", "communicators", "chairman".


A little nonsense now and then
Is cherished by the wisest men.

- Willy Wonka

Notice the words "little" and "now and then".

Most professions, including brain surgery and electronics design, are
mostly disciplined implementation... grunt work. I spent the weekend
tweaking impedances and crosstalk clearances and bypassing on a very big PC
board. Good thing, because all that staring at the presumably finished
design turned up a big mistake. Then, for comic relief, I got to write a
test plan for the customer to review.

Of course original thought happens a small fraction of the time, except
that most people never do it.

And lots of people who think they are doing it are actually re-inventing a
lumpy version of the wheel, long after their better-informed colleagues have
moved onto other problems.

Yes indeed; that's the other key point.

If anyone wants examples of that, there are whole websites
devoted to spotting them in software. https://thedailywtf.com/
(tagline "The Daily WTF: Curious Perversions in Information
Technology") springs to mind.
 
On 16/05/19 03:54, Bill Sloman wrote:
The global social turmoil offers the potential for dramatic events, but
having patient zero talking to her grandchildren kills off a lot of the
potential dramatic tension.

Not always.

There's a vestigial example of that, in Fred Hoyle's
"The Black Cloud", the only novel I know of with a
footnote containing calculus.

A more skilled author can tell you the ending, and
have that suck you into reading about the journey.
Cordwainer Smith used that technique to good effect.
 
On 16/05/2019 02:21, krw@notreal.com wrote:
On Wed, 15 May 2019 11:02:34 +0100, Martin Brown
<'''newspam'''@nezumi.demon.co.uk> wrote:

On 15/05/2019 00:36, krw@notreal.com wrote:
On Tue, 14 May 2019 21:45:36 +0100, Tom Gardner
<spamjunk@blueyonder.co.uk> wrote:

On 14/05/19 17:00, John Larkin wrote:
Of course it's a big problem. Big problems need big ideas.

Big /solutions/ need falsifiable hypotheses and tests.

Exactly what problem are you trying to solve, with the origin of life?

How did it get started? Where should we be looking for other life?

How is that a "problem" that _needs_ to be "solved"?

Nothing *needs* to be solved, but scientists are always curious about
how the universe works. We could all worship at your Tree of Ignorance
and be significantly worse off as a result.

You never know where blue-skies research will lead in the long term.

What earthly use could a laser, when the first one required a huge flash
tube and a 4" perfect ruby crystal to make it work, ever be to anybody?

Now they are ubiquitous and in every laser printer, CD and DVD player.

--
Regards,
Martin Brown
 
On 16/05/2019 03:46, Bill Sloman wrote:
On Thursday, May 16, 2019 at 5:14:55 AM UTC+10, John Larkin wrote:
On Tue, 14 May 2019 20:25:29 -0700 (PDT), whit3rd <whit3rd@gmail.com>
wrote:

On Tuesday, May 14, 2019 at 4:32:47 PM UTC-7, John Larkin wrote:

those discoveries are
the result of planning and careful work.

Or some crazy amateur who doesn't know he/she isn't allowed to
discover things.

Example?

In fact mostly it is the other way around. Reviewers will be a bit more
lenient with an outsider who has an original idea with some merit.

I recall a paper in Nature a long time ago that suggested that the
appearance of sunspots could be explained by them being jets of flame much
like on a gas cooker. The paper went to peer review with minor revisions
and at that time it was decided that the idea was a possible explanation
so it *was* published despite it coming from an amateur.

David Levy, the comet hunter, is well respected as an amateur astronomer
in professional circles, with 22 comets to his name, including the
infamous Shoemaker-Levy 9 which hit Jupiter in 1994 (as are some Japanese
astronomers less well known in the West). One of my friends has a similar
status in the slightly more boring field of amateur variable-star
monitoring, with some staggering number of observations to his name.

--
Regards,
Martin Brown
 
Martin Brown wrote:
On 14/05/2019 23:32, Clifford Heath wrote:
On 15/5/19 1:28 am, Jeroen Belleman wrote:
Rick C wrote:
[...]

Humans will not be different in any significant way in 1,000
years. Are we any different than we were 1,000 years ago? [...]

I wonder. If you see how much we changed some animals and
many plants, what might happen if we start applying those
methods to ourselves? And that doesn't even take direct
gene-editing into account.

A much more likely source of big change is the modified selection
pressure from the environment we have changed.

Although the main environmental modifications at present seem to result
in a huge increase in weight, morbid obesity and type II diabetes.

I choose to believe we're on the eve of a revolution.

I know that our current ethical norms are against such
things, but those norms evolve, too.

I completely agree. I've progressed far enough in my thinking that I
believe we have a *moral imperative* to diversify our own germ line,
creating many sub-species of specialists and hybridising with animal
genetics. Yes, that's much more than just "a step too far" for most
folk, rather it transplants the dialog onto another planet. I've been
brewing up a novel about it for well over a decade now.

The most interesting one would be to see if we can generate the right
structures in a human to permit chloroplasts and photosynthesis. We
wouldn't need to eat quite so much if we could directly make sugars.
[...]

Given the low energetic efficiency of photosynthesis, and in a
culture where most of the skin must remain covered, I don't
see any advantage.

Jeroen Belleman
 
On 16/05/2019 09:16, Jeroen Belleman wrote:
Martin Brown wrote:
On 14/05/2019 23:32, Clifford Heath wrote:
On 15/5/19 1:28 am, Jeroen Belleman wrote:

I know that our current ethical norms are against such
things, but those norms evolve, too.

I completely agree. I've progressed far enough in my thinking that I
believe we have a *moral imperative* to diversify our own germ line,
creating many sub-species of specialists and hybridising with animal
genetics. Yes, that's much more than just "a step too far" for most
folk, rather it transplants the dialog onto another planet. I've been
brewing up a novel about it for well over a decade now.

The most interesting one would be to see if we can generate the right
structures in a human to permit chloroplasts and photosynthesis. We
wouldn't need to eat quite so much if we could directly make sugars.
[...]

Given the low energetic efficiency of photosynthesis, and in a
culture where most of the skin must remain covered, I don't
see any advantage.

It would give naturists a whole new meaning!

--
Regards,
Martin Brown
 
On 15/05/2019 23:46, Clifford Heath wrote:
On 14/5/19 7:14 pm, Martin Brown wrote:
On 10/05/2019 03:15, Clifford Heath wrote:
Life is a quine (a program that outputs itself), but a special kind
of quine; it also outputs the machine that runs the program. The name
for this kind of quine is simply "life".

That is actually a very nice way to describe it in terms that an
engineer ought to be able to understand.
...
They are all quines but some are more elegant than others.

You'll like this extraordinary quine, which (run repeatedly) prints a
rotating globe. Somehow it has the global map, though it only prints one
side at a time. It requires a Ruby interpreter, which comes built-in on
my Mac.
https://github.com/knoxknox/qlobe

I think it really depends on how strictly you interpret the definition
of life.

I thought this discussion was playing with the boundaries of a
definition of life, not taking an existing definition as gospel. So I
proposed a definition (above).

I think it is a nice example. A pattern in Life that can replicate
itself in every detail is an interesting and still unsolved challenge.

You got me thinking about the sort of shape that such a Conway Life
"alive" pattern might have to be in order to be able to output itself
(possibly with a 90 degree rotation).

C-, H- or T-like shapes would be my initial guesses.

Managing the glider or spaceship streams to collide far enough away to
build another independent one is going to be the big challenge.

BTW I enjoyed the Turing machine paper very much.

--
Regards,
Martin Brown
 
On 15/05/2019 15:41, John Larkin wrote:
On Wed, 15 May 2019 07:18:16 -0700 (PDT), George Herold
<gherold@teachspin.com> wrote:

On Wednesday, May 15, 2019 at 9:36:56 AM UTC-4, Martin Brown wrote:

The most interesting one would be to see if we can generate the right
structures in a human to permit chloroplasts and photosynthesis. We
wouldn't need to eat quite so much if we could directly make sugars.

And green humans like the Treens in Dan Dare would be quite cool.

Without doing any numbers.. it seems like there would hardly be any gain.
(I'm lucky to get ~10 hours of full sun in a week.)
It takes a corn plant all summer to make a few ears of corn.
(I'm guessing I have about the same area as a corn plant.)

And not wear clothes...
Get as much sun as you can. It not only makes vitamin D, it does other
good stuff.

Some sun is good, but even in the UK, which is at a fairly high latitude,
too much of it can prematurely age the skin and cause malignant skin
cancer if you are unlucky. Rickets is making a comeback thanks to very
high factor sun protection being used on children these days.

Friends with very fair skins turn red very quickly if out in the sun.

> MS is unheard of in sunny climes. People in cold gloomy places get it.

There certainly does seem to be a latitude correlation but there are
other risk factors like genetic susceptibility and possibly infection by
the Epstein-Barr virus. Being female seems to carry a serious risk too.

https://www.nhs.uk/conditions/multiple-sclerosis/causes/

--
Regards,
Martin Brown
 
