When is the Covid war over?

On 06/04/20 16:09, jlarkin@highlandsniptechnology.com wrote:
On Mon, 6 Apr 2020 09:24:41 +0100, Martin Brown
<'''newspam'''@nezumi.demon.co.uk> wrote:

On 05/04/2020 15:49, jlarkin@highlandsniptechnology.com wrote:
On Sun, 5 Apr 2020 14:23:01 +0100, Martin Brown
<'''newspam'''@nezumi.demon.co.uk> wrote:

On 04/04/2020 01:00, John Larkin wrote:
On Fri, 3 Apr 2020 16:40:42 -0700 (PDT), whit3rd <whit3rd@gmail.com>
wrote:

On Friday, April 3, 2020 at 12:26:33 PM UTC-7, Commander Kinsey wrote:
On Fri, 03 Apr 2020 02:00:21 +0100, whit3rd <whit3rd@gmail.com> wrote:

On Thursday, April 2, 2020 at 11:12:05 AM UTC-7, John Larkin wrote:

The lockdowns are trashing the economy, which hurts people, and are
probably not going to save many lives.

UK modelling suggests it may decrease the death toll by an order of
magnitude or so. That is a distinctly non-trivial contribution.

Oh. Computer modeling says that? How silly of me.

OK. *STOP* using spice then - that is also a computer model.

I accurately simulate linear systems with known accurate component
models and initial conditions. Nobody accurately simulates chaotic
systems with bad component models and unknown initial conditions, but
that doesn't stop them from trying, and generating press releases.

There is a difference between modelling and simulating.

Not all models are chaotic.

Not all models need to be simulated in order to obtain
results.

Even poorly defined systems can be modelled and simulated,
and useful results obtained.

Consider, for example, radio propagation, especially of
cellular systems. Models are created, simulated and the
results used to predict performance and where to site
towers and size the computer systems that control the
signalling.

That works surprisingly well, considering the inadequacy
of the information about the terrain and "atmospherics".
Occasionally modelling fails, and network operators have
"rogue cells" that don't work as well as they expect, and
they can't figure out why.

So please don't think that complete information is
required to do useful modelling; that's merely an excuse
for inaction.
 
On Mon, 6 Apr 2020 16:48:09 +0100, Tom Gardner
<spamjunk@blueyonder.co.uk> wrote:

[...]

There is a difference between modelling and simulating.

Please explain that.


Not all models are chaotic.

Did I say that? Many systems are chaotic.

[...]

Consider, for example, radio propagation, especially of
cellular systems. Models are created, simulated and the
results used to predict performance and where to site
towers and size the computer systems that control the
signalling.

That works surprisingly well, considering the inadequacy
of the information about the terrain and "atmospherics".
Occasionally modelling fails, and network operators have
"rogue cells" that don't work as well as they expect, and
they can't figure out why.

So please don't think that complete information is
required to do useful modelling; that's merely an excuse
for inaction.

Any cell phone system models are heavily tested in the field, and the
useful ones survive. That's evolution. My circuit simulations are soon
tested on a circuit board, so we learn what works and which models can
be trusted. Climate and economic and coronavirus models can't be
tested for their predictive accuracy. They can hindcast accurately,
but that's just curve fitting.
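
To put numbers on that: fit a high-order polynomial to noisy "history"
and the hindcast looks wonderful; ask it about next week and it's
garbage. A toy sketch (made-up data, deliberately overfitted):

import numpy as np

rng = np.random.default_rng(0)
t_past = np.arange(30.0)
y_past = np.log1p(t_past) + rng.normal(0, 0.05, t_past.size)  # noisy "history"

# aggressive curve fit: the hindcast matches the past almost perfectly...
x_past = t_past / 30.0                       # normalised time, for conditioning
coeffs = np.polyfit(x_past, y_past, 12)
hindcast = np.polyval(coeffs, x_past)
print("hindcast RMS error:", np.sqrt(np.mean((hindcast - y_past) ** 2)))

# ...but the "forecast" outside the fitted range blows up
x_future = np.arange(30.0, 40.0) / 30.0
print("forecast:", np.polyval(coeffs, x_future))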



--

John Larkin Highland Technology, Inc

Science teaches us to doubt.

Claude Bernard
 
On 06/04/20 17:15, jlarkin@highlandsniptechnology.com wrote:
[...]

There is a difference between modelling and simulating.

Please explain that.

google "simulation vs modelling" The answers are more
coherent than I have time to provide.


Not all models are chaotic.

Did I say that? Many systems are chaotic.

You went straight from modelling to chaotic systems,
with nothing in between and no alternative.



[...]

Any cell phone system models are heavily tested in the field, and the
useful ones survive. That's evolution.

As someone who was working in that area at the time, I know
that to be false in important respects. I even have a T-shirt
of some results, and I can still get into it 25 years later :)

First came the physics models, e.g. "knife edge diffraction"
and many others. Then came other generic academic models,
including "fast fading" and "log normal fading" and many others.
Then they were specialised for each frequency band. Then
academic measurements were made.

Based on all of that, some companies developed models that
helped operators decide where to site their towers.

Finally measurements were made, and the installations tweaked.
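
The single knife-edge case is compact enough to sketch. This is an
ITU-R P.526-style approximation (my own variable names, illustrative
numbers):

import math

def knife_edge_loss_db(h, d1, d2, f_hz):
    # h: obstacle height above the direct ray (m, negative if below)
    # d1, d2: transmitter-to-obstacle / obstacle-to-receiver distances (m)
    lam = 3e8 / f_hz                          # wavelength, m
    v = h * math.sqrt(2.0 * (d1 + d2) / (lam * d1 * d2))
    if v <= -0.78:
        return 0.0                            # effectively unobstructed
    return 6.9 + 20.0 * math.log10(math.sqrt((v - 0.1) ** 2 + 1.0) + v - 0.1)

# a rooftop 20 m into the path, 2 km from a 900 MHz site, 500 m from the handset
print(knife_edge_loss_db(20.0, 2000.0, 500.0, 900e6), "dB")   # ~21 dB extra loss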


My circuit simulations are soon
tested on a circuit board, so we learn what works and which models can
be trusted. Climate and economic and coronavirus models can't be
tested for their predictive accuracy. They can hindcast accurately,
but that's just curve fitting.

The simple models of opamps taught to students are imperfect
and do not predict reality in many ways. Nonetheless, the
models are useful, and can guide design and debugging.
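
For instance, the textbook non-inverting amp: the simple model says the
gain is 1 + Rf/Rg; one step more honest includes the finite open-loop
gain, and the difference is exactly the kind of error that guides
debugging. A quick sketch (component values invented):

def noninverting_gain(rf, rg, a_ol):
    # closed-loop gain with finite open-loop gain: A / (1 + A*beta)
    beta = rg / (rg + rf)                     # feedback fraction
    return a_ol / (1.0 + a_ol * beta)

ideal = 1 + 99e3 / 1e3                        # simple model: exactly 100
real = noninverting_gain(99e3, 1e3, 1e5)      # ~99.9 with 100 dB open-loop gain
print(ideal, real, (ideal - real) / ideal)    # ~0.1% gain error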
 
On 06/04/20 18:20, jlarkin@highlandsniptechnology.com wrote:
On Mon, 6 Apr 2020 17:42:12 +0100, Tom Gardner
<spamjunk@blueyonder.co.uk> wrote:

[...]

There is a difference between modelling and simulating.

Please explain that.

google "simulation vs modelling" The answers are more
coherent than I have time to provide.

"Modeling is the act of building a model. A simulation is the process
of using a model to study the behavior and performance of an actual or
theoretical system. ... While a model aims to be true to the system it
represents, a simulation can use a model to explore states that would
not be possible in the original system."

Sounds fussy to me. Entering a schematic into Spice is modeling, but
when I click the run icon, it becomes simulation. I must switch
between them 50 times a day.
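
It's less fussy than it sounds: the model is the description, the
simulation is running it. Ten lines make the point (values invented):

# the model: component values plus the equation dV/dt = (Vin - V) / (R*C)
R, C, VIN = 1e3, 1e-6, 5.0                    # 1k, 1 uF, 5 V step

def dv_dt(v):
    return (VIN - v) / (R * C)

# the simulation: stepping that model through time (forward Euler)
v, dt = 0.0, 1e-6
for _ in range(5000):                         # 5 ms = 5 time constants
    v += dv_dt(v) * dt
print(v)                                      # ~4.97 V, approaching VIN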




Not all models are chaotic.

Did I say that? Many systems are chaotic.

You went straight from modelling to chaotic systems,
with nothing in between and no alternative.

Did not. I did observe that some systems are amenable to predictive
simulation, and some aren't.





[...]

Any cell phone system models are heavily tested in the field, and the
useful ones survive. That's evolution.

As someone who was working in that area at the time, I know
that to be false in important respects. I even have a T-shirt
of some results, and I can still get into it 25 years later :)

The models weren't verified by field tests? They were all used as
originally coded? That's amazing.

Read later comments.


First came the physics models, e.g. "knife edge diffraction"
and many others. Then came other generic academic models,
including "fast fading" and "log normal fading" and many others.
Then they were specialised for each frequency band. Then
academic measurements were made.

Why did they need measurements?

For example, what's the loss of different types of wood,
or forests at differing seasons, or building materials.

For example, what are the fast fading parameters in US
suburbs or city centre canyons, or in European city centres.


Based on all of that, some companies developed models that
helped operators decide where to site their towers.

Were all models equally successful commercially? Who decided, and why?

I don't think there were many. Any given operator
would choose one, and learn how to use it, its strengths
and weaknesses.

They all had weaknesses :)


[...]

The simple models of opamps taught to students are imperfect
and do not predict reality in many ways. Nonetheless, the
models are useful, and can guide design and debugging.

They can predict reality, future states, to parts per million. If you
pick the right ones and use them properly. Some systems don't allow
that.

And if you don't, the results can be grossly wrong.
Knowledge and experience guides their use.

Exactly the same can be said of other models.


Simple faith in, or contempt for, computer models is silly.

Physician, heal thyself :)
 
On Mon, 6 Apr 2020 17:42:12 +0100, Tom Gardner
<spamjunk@blueyonder.co.uk> wrote:

[...]

Simple faith in, or contempt for, computer models is silly.



--

John Larkin Highland Technology, Inc

Science teaches us to doubt.

Claude Bernard
 
On Mon, 6 Apr 2020 18:42:55 +0100, Tom Gardner
<spamjunk@blueyonder.co.uk> wrote:

[...]

Simple faith in, or contempt for, computer models is silly.

Physician, heal thyself :)

I think I have a pretty good feel for whether I can trust various
Spice sims. I can verify by breadboard when I'm in doubt. The popular
press seems to think that all computer models and all Top Scientists
are right.

We need a teevee game show to pick the Top Scientists.

--

John Larkin Highland Technology, Inc
picosecond timing precision measurement

jlarkin att highlandtechnology dott com
http://www.highlandtechnology.com
 
On Sat, 04 Apr 2020 04:40:20 -0700, jlarkin@highlandsniptechnology.com
wrote:

On Sat, 4 Apr 2020 03:08:10 -0400, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

On 2020-04-03 20:00, John Larkin wrote:
On Fri, 3 Apr 2020 16:40:42 -0700 (PDT), whit3rd <whit3rd@gmail.com>
wrote:

On Friday, April 3, 2020 at 12:26:33 PM UTC-7, Commander Kinsey wrote:
On Fri, 03 Apr 2020 02:00:21 +0100, whit3rd <whit3rd@gmail.com> wrote:

On Thursday, April 2, 2020 at 11:12:05 AM UTC-7, John Larkin wrote:

The lockdowns are trashing the economy, which hurts people, and are
probably not going to save many lives.

This displays no grasp at all of the concept of 'probability'. Show us a credible
model that gives a quantitative result other than 'many' lives in the balance.

We know 96% of us can't die from it, that's good enough.

False assurance.
That was the result with an intact healthcare system, well supplied and operating
within its limits. One municipality's turnaround is not data to match a crisis overwhelming national
resources (Spain, Italy aren't finished with their reports).

And, false acceptable level of risk.
And, if 4% of us die this year (it does spread fast enough to cover the planet in under one year)
that makes the effective life expectancy 25 years... it's a bigger danger to you, personally,
than other diseases. It's bigger, in fact, than ALL OTHER causes of death put together.
If you have a brain and a heart, that should raise your pulse rate.

The Princess cruise ships were captive petri dishes, with a lot of old
people on board.

https://en.wikipedia.org/wiki/Diamond_Princess_ship#2020_COVID-19

Those numbers are probably worse than you'd get in a more normal city
situation.

Yeah, looking for the Diamond Princess and Grand Princess to be quietly
renamed a few months from now.

The one we can see from our kitchen window is the Corona Princess.

https://www.dropbox.com/s/5kklq79a7yb6j89/Corona_Princess_Binocs.jpg?dl=0

It's right out there in the Bay. It seems to have a single bow line to
an anchor or a buoy, so it drifts all around and points in different
directions. Sometimes it's clear of the trees.

--

John Larkin Highland Technology, Inc
picosecond timing precision measurement

jlarkin att highlandtechnology dott com
http://www.highlandtechnology.com
 
whit3rd <whit3rd@gmail.com> wrote in
news:51aeb3eb-756a-4a68-9ca4-cee6b331ccc5@googlegroups.com:

[...]

I do not think that word 'chaotic' means what you think it means.

Inconceivable!
 
On Monday, April 6, 2020 at 8:09:45 AM UTC-7, jla...@highlandsniptechnology.com wrote:

.... Nobody accurately simulates chaotic
systems with bad component models and unknown initial conditions, but
that doesn't stop them from trying, and generating press releases.

Why not? Statistically large numbers of folk have symptoms, and have been
tested, and that's USEFUL. There's no particular value in 'accurate' pandemic
predictions (no one is likely to schedule an appointment today for an infection he/she
gets in a month). But, progress of a disease IS well-studied and rates understood;
consider how many folk have examined petri dishes for exactly that kind of
information.

I do not think that word 'chaotic' means what you think it means.
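
The workhorse here is the SIR model, which is short enough to sketch
from scratch (the rates below are invented, not fitted to anything):

# S, I, R as population fractions; forward Euler integration
beta, gamma = 0.3, 0.1        # transmission / recovery rates, per day
s, i, r = 0.999, 0.001, 0.0
dt = 0.1
for _ in range(int(200 / dt)):                # 200 days
    new_inf = beta * s * i * dt               # S -> I
    new_rec = gamma * i * dt                  # I -> R
    s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
print(round(r, 3))            # final attack fraction, ~0.94 for R0 = 3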
 
Tom Gardner <spamjunk@blueyonder.co.uk> wrote in
news:zSJiG.304765$ffc1.162088@fx36.am4:

[...]

Simple faith in, or contempt for, computer models is silly.

Physician, heal thyself :)
Factors... it comes down to factors. Does one have a complete set, or
are only a few parameters needed to attain acceptable accuracy? AND
are those factors calibrated correctly over the range of variance
for each?

In one setting you get imaginary numbers; in another,
millionth-inch accuracy.

I like examining pool simulations. There are some that allow for
an elevated cue and incorporate good ball, cloth-friction, and
rail-nose physics, and some that only put 2D playfield
action-reaction physics in their sim.
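
The 2D kind reduces to impulse exchange along the line of centres;
cue elevation, cloth friction, spin, and rail compliance are what get
bolted on top. A bare-bones sketch for equal-mass balls:

import math

def collide(p1, v1, p2, v2):
    # elastic collision of two equal-mass balls: swap the velocity
    # components along the line of centres, keep the tangential parts
    nx, ny = p2[0] - p1[0], p2[1] - p1[1]
    d = math.hypot(nx, ny)
    nx, ny = nx / d, ny / d
    rel = (v1[0] - v2[0]) * nx + (v1[1] - v2[1]) * ny
    if rel <= 0:
        return v1, v2                         # separating; no impulse
    return ((v1[0] - rel * nx, v1[1] - rel * ny),
            (v2[0] + rel * nx, v2[1] + rel * ny))

# head-on shot: cue ball stops dead, object ball takes all the speed
print(collide((0.0, 0.0), (1.0, 0.0), (1.0, 0.0), (0.0, 0.0)))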
 
On Monday, April 6, 2020 at 8:09:45 AM UTC-7, jla...@highlandsniptechnology.com wrote:
[...]

The more people you predict killed, the more likely that The
Associated Press will spread your name. So there is a
dead-bodies-stacked-up bidding war based on infallible Computer
Simulations by Top Scientists. Has any quotable source got to a
billion deaths yet?

Even worse, the initial conditions were either lied about, withheld or both.
 
On 06/04/20 20:28, John Larkin wrote:
On Mon, 6 Apr 2020 18:42:55 +0100, Tom Gardner
<spamjunk@blueyonder.co.uk> wrote:

[...]

I think I have a pretty good feel for whether I can trust various
Spice sims. I can verify by breadboard when I'm in doubt. The popular
press seems to think that all computer models and all Top Scientists
are right.

I don't doubt it.

You know what you had to do and learn in order to use
Spice effectively. You also know how inexperience can
lead to rubbish results.

Similar learning and experience is necessary to use other
tools effectively. Neither of us is experienced with
medical or epidemiological tools.

But just because we cannot understand them doesn't mean
experienced people cannot.
 
On Monday, April 6, 2020 at 12:28:20 PM UTC-7, John Larkin wrote:

We need a teevee game show to pick the Top Scientists.

No, that's too silly to be worth considering. Besides, low-to-middle
scientists with popular-culture communications skills would win;
the top scientists are COOPERATIVE, not competitive.

But, if you want to hire winners to be on the team, consider
that the Donald hired Omarosa

<https://en.wikipedia.org/wiki/Omarosa_Manigault_Newman>

and, how well did THAT work?
 
On Monday, April 6, 2020 at 12:46:05 PM UTC-7, Flyguy wrote:

Even worse, the initial conditions were either lied about, withheld or both.

Or, neither. Funny how you leave out possibilities, instead of considering them.
 
On Mon, 6 Apr 2020 11:50:24 -0700 (PDT), whit3rd <whit3rd@gmail.com>
wrote:

[...]

I do not think that word 'chaotic' means what you think it means.

You'd have to state what you think I think it means.

What do *you* think "chaotic system" means?



--

John Larkin Highland Technology, Inc

Science teaches us to doubt.

Claude Bernard
 
On Mon, 6 Apr 2020 15:09:28 -0700 (PDT), whit3rd <whit3rd@gmail.com>
wrote:

On Monday, April 6, 2020 at 12:28:20 PM UTC-7, John Larkin wrote:

We need a teevee game show to pick the Top Scientists.

No, that's too silly to be worth considering. Besides, low-to-middle
scientists with popular-culture communications skills would win;
the top scientists are COOPERATIVE, not competitive.

The losers can be Bottom Scientists.



--

John Larkin Highland Technology, Inc

Science teaches us to doubt.

Claude Bernard
 
On Mon, 6 Apr 2020 12:46:00 -0700 (PDT), Flyguy
<soar2morrow@yahoo.com> wrote:

[...]

Even worse, the initial conditions were either lied about, withheld or both.

Or unknown.

The current conditions are, if anything, even more unknown.

This interests me because I am a connoisseur of wrongness. I am amazed
by how many people can get together and reinforce their mutual
wrongness.



--

John Larkin Highland Technology, Inc

Science teaches us to doubt.

Claude Bernard
 
On Tuesday, April 7, 2020 at 11:19:00 AM UTC+10, jla...@highlandsniptechnology.com wrote:
[...]

I do not think that word 'chaotic' means what you think it means.

You'd have to state what you think I think it means.

What do *you* think "chaotic system" means?

https://en.wikipedia.org/wiki/Chaos_theory

A system that is very sensitive to initial conditions. That doesn't stop such systems from having predictable behaviour over quite long periods - the solar system is chaotic, but only over periods of about a million years or so.
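
The textbook demonstration is the logistic map x -> r*x*(1-x) at r = 4:
start two trajectories a millionth apart and they bear no resemblance
after a few dozen iterations:

r = 4.0
a, b = 0.300000, 0.300001     # initial conditions 1e-6 apart
for _ in range(60):
    a, b = r * a * (1.0 - a), r * b * (1.0 - b)
print(a, b, abs(a - b))       # fully decorrelated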

--
Bill Sloman, Sydney
 
On Tuesday, April 7, 2020 at 11:16:23 AM UTC+10, jla...@highlandsniptechnology.com wrote:
[...]

The losers can be Bottom Scientists.

Proctologists.

https://en.wikipedia.org/wiki/Colorectal_surgery

Kim Kardashian would probably come into it.

--
Bill Sloman, Sydney
 
On Tuesday, April 7, 2020 at 5:28:20 AM UTC+10, John Larkin wrote:
[...]

We need a teevee game show to pick the Top Scientists.

Perhaps rather better than "The Apprentice" in picking top businessmen?

Trump came out of that with a popular reputation for knowing what he was talking about, despite having been involved in enough bankruptcies that nobody serious would lend him any money any more.

Unsophisticated people still claim that he wrote "The Art of the Deal", which the actual author - Tony Schwartz - now thinks should be "recategorized as fiction".

Television shows can be educational, but they mainly have to be entertaining.

--
Bill Sloman, Sydney
 
