Driver to drive?

On Thu, 18 Sep 2014 03:58:52 -0400, rickman <gnuarm@gmail.com> Gave us:

Yeah, I kinda wondered if my indirect reference to the birther thing
would slip past people. Not only is Uganda not Hawaii, it's not Kenya
either... lol.

--

Rick

WTF are you retarded children mumbling about now?
 
On Thu, 18 Sep 2014 05:30:34 -0500, John S <Sophi.2@invalid.org> Gave
us:

Who, "DecadentLinuxUserNumeroUno" ? No, I don't know who he is - have I
missed something obvious?


AlwaysWrong and many other nyms.

Sorry, Jackass John S... I never had any such nym. Ever.

Now come back posting, and show us further just how low the depths of
your self imposed mental retardation goes.
 
On 9/18/2014 8:15 AM, Jasen Betts wrote:
On 2014-09-18, mpm <mpmillard@aol.com> wrote:
Somewhat off topic, but maybe someone here knows a "numerically inexpensive" way to do this:

I have (x,y,z) data - about 1.6 million points. (maybe more)

The (x,y) is somewhat regularly spaced already, but I want to
resample this data so that the (x,y) values "snap" to a grid of my
choosing. I don't mind interpolating the values where necessary, and
I realize there are several methods to accomplish that.

you can't interpolate a point. what aren't you telling us?

I don't think you understand his problem. He has three dimensional data
which can be considered a surface. He wants to produce results where X
and Y are on a grid of his choosing and the Z values are adjusted to fit
the existing surface. Is that more clear?


What I want is a program that can input my original (x,y) data and output a reasonable approximation of that data snapped to what will ultimately be a lower density grid.

I thought about using Excel, but my version will not accept that many rows.
Also, after rounding, I would probably still have some duplicates to get rid of... which sounds like a real hassle in Excel.

I'm sure this problem has been beat to death already.
Will MatLab do it?
I thought Surfer would do it, but I either forgot how, or it just can't do it.

you've got data, use a statistics package, or a database, or a
general purpose programming language you are comfortable with.

I don't think the algorithm is all that simple depending, of course, on
which algorithm he chooses. He has not indicated if this is a one time
thing, likely because he considered doing it in Excel, or if he will
have recurring sets of data to process. That makes a difference to the
amount of optimization he might need.
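The snap itself is cheap even at 1.6 million points: round each (x, y) to
the nearest node of the target grid, then average the z values of points
that land on the same node, which also disposes of the duplicates. A
minimal sketch in plain Python; the grid spacings dx, dy and the sample
points are made up for illustration:

```python
from collections import defaultdict

def snap_to_grid(points, dx, dy):
    """Snap (x, y, z) points to a grid with spacings dx, dy.

    Points that round to the same node are merged by averaging
    their z values, so duplicates disappear in the same pass.
    """
    acc = defaultdict(lambda: [0.0, 0])          # node -> [z_sum, count]
    for x, y, z in points:
        node = (round(x / dx) * dx, round(y / dy) * dy)
        acc[node][0] += z
        acc[node][1] += 1
    return [(gx, gy, s / n) for (gx, gy), (s, n) in sorted(acc.items())]

# Four scattered points collapse onto two nodes of a 1.0-spaced grid:
pts = [(0.9, 1.1, 10.0), (1.1, 0.9, 12.0), (2.1, 2.0, 5.0), (1.95, 2.05, 7.0)]
print(snap_to_grid(pts, 1.0, 1.0))   # [(1.0, 1.0, 11.0), (2.0, 2.0, 6.0)]
```

This is nearest-node binning with averaging. If the target grid is much
coarser than the data, a proper interpolation (bilinear, splines, IDW) may
be preferable, but for a one-off reduction a single pass like this is
usually enough.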

--

Rick
 
On 9/18/2014 5:07 PM, DecadentLinuxUserNumeroUno wrote:
On Thu, 18 Sep 2014 03:58:52 -0400, rickman <gnuarm@gmail.com> Gave us:

Yeah, I kinda wondered if my indirect reference to the birther thing
would slip past people. Not only is Uganda not Hawaii, it's not Kenya
either... lol.

--

Rick


WTF are you retarded children mumbling about now?

WTF are *you* retarded children mumbling about now?

--

Rick
 
On Thursday, September 18, 2014 1:29:09 PM UTC-4, Don Y wrote:

The problem with all of them is they aren't particularly intuitive
so, unless you use them "often", there is a start-up cost involved.

I stumbled upon a program called Saga at SourceForge.
http://sourceforge.net/projects/saga-gis/

Looks good in several interesting ways - a possible keeper for future needs.
As you say, like the rest, not particularly intuitive and comes with practically no documentation.
But it's free.

For now, I think I've found a way to reduce the (row count) size of my original dataset so that it will fit into Excel 2007. I'll still need to resample (snap) to my preferred grid, but at least I know how to do that in Excel (or, at least I think I do.) Famous last words? :)

Thanks for the help and pointers, guys.
I only have a couple of these sets to process, so assuming this first one is the largest, I can probably mangle it fine from here.
 
On 9/19/2014 12:28 AM, Robert Baer wrote:
mpm wrote:
Somewhat off topic, but maybe someone here knows a "numerically
inexpensive" way to do this:

I have (x,y,z) data - about 1.6 million points. (maybe more)

The (x,y) is somewhat regularly spaced already, but I want to resample
this data so that the (x,y) values "snap" to a grid of my choosing. I
don't mind interpolating the values where necessary, and I realize
there are several methods to accomplish that.

What I want is a program that can input my original (x,y) data and
output a reasonable approximation of that data snapped to what will
ultimately be a lower density grid.

I thought about using Excel, but my version will not accept that many
rows.
Also, after rounding, I would probably still have some duplicates to
get rid of... which sounds like a real hassle in Excel.

I'm sure this problem has been beat to death already.
Will MatLab do it?
I thought Surfer would do it, but I either forgot how, or it just
can't do it.

Thanks.
-mpm

What about finding the descriptive equation via curve fitting (least
squares or some such)?

If he has some million data points, that equation would likely have an
awful lot of terms... It could be accurate with not so many terms, but
how accurate? It would be a lot of processing just to estimate the
accuracy.

--

Rick
 
On 9/19/2014 1:04 AM, Bill Sloman wrote:
On 18/09/2014 11:52 AM, rickman wrote:
On 9/17/2014 7:55 PM, Bill Sloman wrote:
On 18/09/2014 1:52 AM, John S wrote:
On 9/17/2014 10:17 AM, Bill Sloman wrote:
On 18/09/2014 12:52 AM, John S wrote:

snip

I, by the way, am using Thunderbird and Eternal September.

And this is being posted using Thunderbird via Eternal September.

And, looks great!

The Google Groups interface still provides better access to what's
been posted.

I don't have trouble like that. Perhaps it is a lack of familiarity on
your part?

Don't be silly. When I last looked I was one of the more enthusiastic
posters to this group.

Thunderbird organises posts in the traditional way, and if you want to
check for up-dates on a thread that started a while ago, you have to
scroll up to the oldest surviving posting and click on it to see the
thread. The header is highlighted if there's been a new posting since
you last looked, but you have to scroll up to it to see that.

Google Groups sorts threads on the basis of the most recent posting,
so active threads are almost always available without you having to
scroll through a bunch of less active postings.

Have you actually used T-bird? Instead of criticizing it, maybe you
could ask for help with it?

Just click the threads icon at the top of that column to enable listing
by threads if you haven't already. Then click the date column to sort
the threads by the date of the most recent posting. You can click the
thread message (by default the first) and then hit 'N' to show the next
unread message in that thread, or open the thread by clicking the arrow
at the left of the subject and search it manually.

I've been using Thunderbird off and on for years, and I switched my
e-mail from Eudora to Thunderbird a few years ago, and I probably
qualify as an experienced user. I can get the information I want out of
Thunderbird, but the Google Groups display is a trifle more user-friendly.

If you don't use Google Groups, you may not be aware of its advantages,
which I find outweigh its weaknesses. YMMV.

I used GG for a long time after my ISP stopped offering newsgroup
access. At first it was ok. Then I seem to recall they made changes
which I didn't like, but got used to. There was even a fix for the
third column where they had advertising.

More changes, then the really big change that no one seemed to like and
the tool became nearly unusable for me. When people started complaining
about the double spacing in the quotes I followed their advice and have
never looked back since. Good riddance to GG!

I also use Eudora for email, but that is so ingrained that I can't
switch to T-bird for email. It might be ok, but why switch? Eudora
still works well and in fact one of the small bugs cleared out when I
moved to a Win 8 machine... lol

Do what you want, but I think you are vastly overrating GG.


I wouldn't bother using T-bird if it couldn't do this.

Nor would I. But every browser that I've ever used to read news groups
has done much the same, give or take a bit of high-lighting, so this
probably wasn't worth posting.

I don't follow what that means, but ok.


What I would love is for it to hide all the insulting, name calling,
profane and just plain stupid messages, but then s.e.d wouldn't have
much left would it?

Google Groups does allow you to tag posted messages as spam, or
containing hateful or violent content, which is a start.

I do seem to be seeing fewer offers to sell us the answers to the
problems in popular textbooks, so it may even be working.

I seem to recall the spam reporting did work a little... but the key
word is "a little". Not worth the bother.

--

Rick
 
Chris Jones wrote:

Is this what you are after?

http://archive.computerhistory.org/resources/access/text/2014/05/102718662-05-01.acc.pdf

I was going to say that was really brief, but you can ++ the last digit to
get the next chapter.

http://archive.computerhistory.org/resources/access/text/2014/05/102718662-05-02.acc.pdf

....

http://archive.computerhistory.org/resources/access/text/2014/05/102718662-05-06.acc.pdf
 
On 19-09-2014 10:06, Chris Jones wrote:
On 19/09/2014 14:20, Anand P. Paralkar wrote:
Hello everyone,

Has anybody read the book "Introduction to Semiconductor Devices" by
Robert J. Widlar (Bob Widlar)?

I am enthused by the comments about this book by Bo Lojek, (author of
History of Semiconductor Engineering) that:

"he was more artist than an engineer ..." and more importantly,

"The very first Widlar publication was a crispy clear textbook
"Introduction to Semiconductor Devices" (Fig 8.9). When reading this
text, I realized why Bob Widlar was so successful in his future work. He
had an extraordinary capability to simplify complex problems."

Would like to know your opinion about this book and ANY INFORMATION
WHERE I CAN BUY THIS BOOK. (No amount of Googling shows where this book
is available. No reviews or previews available.)

Regards,
Anand

Is this what you are after?
http://archive.computerhistory.org/resources/access/text/2014/05/102718662-05-01.acc.pdf

Thank you Chris Jones and miso@sushi.com! Thanks a tonne!! Didn't know
one has to look there. :)

Regards,
Anand
 
On 19-09-2014 10:15, Bill Sloman wrote:
On 19/09/2014 2:20 PM, Anand P. Paralkar wrote:
Hello everyone,

Has anybody read the book "Introduction to Semiconductor Devices" by
Robert J. Widlar (Bob Widlar)?

I am enthused by the comments about this book by Bo Lojek, (author of
History of Semiconductor Engineering) that:

"he was more artist than an engineer ..." and more importantly,

"The very first Widlar publication was a crispy clear textbook
"Introduction to Semiconductor Devices" (Fig 8.9). When reading this
text, I realized why Bob Widlar was so successful in his future work. He
had an extraordinary capability to simplify complex problems."

Would like to know your opinion about this book and ANY INFORMATION
WHERE I CAN BUY THIS BOOK. (No amount of Googling shows where this book
is available. No reviews or previews available.)

If Amazon doesn't list it - and it doesn't - it's probably totally
unavailable. Widlar was a US Air Force instructor when he wrote it - so
they are the likely publishers and owners of the copyright, and it was
written for the Air Force technicians Widlar was instructing, so the
level isn't going to be all that high.

Widlar's real output was his applications notes for National
Semiconductor and his papers in the IEEE Journal of Solid-State
Circuits. They are all well-worth reading.

They presumably were produced under pressure from the marketing
department. Widlar doesn't seem to have been all that enthusiastic about
writing up what he did. He could afford to let his integrated circuits
do his advertising for him.

Thank you Bill. Was so curious to see what and how he would write.
Just to see if he had a certain approach to semiconductors.

Will search for his App. Notes too.

Regards,
Anand
 
On 9/18/2014 6:42 PM, mpm wrote:
On Thursday, September 18, 2014 1:29:09 PM UTC-4, Don Y wrote:

The problem with all of them is they aren't particularly intuitive
so, unless you use them "often", there is a start-up cost
involved.

I stumbled upon a program called Saga at SourceForge.
http://sourceforge.net/projects/saga-gis/

"Grid interpolation of scattered point data, triangulation, IDW,
splines, ..."

Presumably, it lets you specify (x,y,z) instead of *implying*
(x,y) for each z that you specify? ("scattered point data")

Looks good in several interesting ways - a possible keeper for future
needs.. As you say, like the rest, not particularly intuitive and
comes with practically no documentation. But it's free.

For now, I think I've found a way to reduce the (row count) size of
my original dataset so that it will fit into Excel 2007. I'll still
need to resample (snap) to my preferred grid, but at least I know how
to do that in Excel (or, at least I think I do.) Famous last words?
:)

Thanks for the help and pointers, guys. I only have a couple of these
sets to process, so assuming this first one is the largest, I can
probably mangle it fine from here.

I'd *carefully* single out a portion of your raw data and
"manually" work through what you *think* the output should
be "in that vicinity" -- just so you aren't surprised if it
applies some unexpected transform that manifests as an
anomaly in your data later in "analysis/processing".
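That spot check can itself be scripted: gather the raw points that fall
near one grid node, recompute the value you expect by hand (here assuming
the tool simply averages the z of points snapping to each node), and
compare it with what the program emitted. Everything below is
hypothetical; tool_output stands in for the program's actual result:

```python
def expected_node_value(raw_points, node, dx, dy):
    """Average z of the raw points that snap to the given grid node."""
    zs = [z for x, y, z in raw_points
          if (round(x / dx) * dx, round(y / dy) * dy) == node]
    return sum(zs) / len(zs)

# Made-up raw data and a made-up tool result for two nodes:
raw = [(0.9, 1.1, 10.0), (1.1, 0.9, 12.0), (2.1, 2.0, 5.0)]
tool_output = {(1.0, 1.0): 11.0, (2.0, 2.0): 5.0}

for node, z in tool_output.items():
    want = expected_node_value(raw, node, 1.0, 1.0)
    assert abs(z - want) < 1e-6, (node, z, want)
print("spot check passed")
```

If the tool applies a different transform (interpolation rather than
averaging, say), this check fails loudly and names the node that
disagrees, which is exactly the surprise being warned about.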
 
On Friday, September 19, 2014 3:16:53 AM UTC-4, miso wrote:
Chris Jones wrote:

Is this what you are after?

http://archive.computerhistory.org/resources/access/text/2014/05/102718662-05-01.acc.pdf

I was going to say that was really brief, but you can ++ the last digit to
get the next chapter.

Excellent, thanks for the hint miso.

George H.
 
On 09/19/2014 02:16 AM, miso wrote:
Chris Jones wrote:


Is this what you are after?

http://archive.computerhistory.org/resources/access/text/2014/05/102718662-05-01.acc.pdf

I was going to say that was really brief, but you can ++ the last digit to
get the next chapter.

http://archive.computerhistory.org/resources/access/text/2014/05/102718662-05-02.acc.pdf

...

http://archive.computerhistory.org/resources/access/text/2014/05/102718662-05-06.acc.pdf

In Python (there are six chapters, so range(1, 7); building the URL from a
base string avoids the line wrap on the wget call):

import subprocess

base = ('http://archive.computerhistory.org/resources/access/'
        'text/2014/05/102718662-05-0')

for book in range(1, 7):
    subprocess.call(['wget', base + str(book) + '.acc.pdf'])
 
On Thu, 18 Sep 2014 13:07:31 -0500, John S <Sophi.2@invalid.org> Gave
us:


Phil, you are probably a genius. I think that may keep you from
understanding that there are methods and approaches that are beneath your
grasp. I am just tired of people knocking something that might be
misused and blaming it on the something. I have had great success with
the solderless breadboards and I have had failures as well. It was
educational and well worth the experience. Since then, I have learned to
use them when appropriate and not use them otherwise. Have you done the
same?

You are truly hilarious sometimes.

Is the depth of that observation within your grasp?
 
On Thu, 18 Sep 2014 22:35:11 -0700, Kevin McMurtrie
<mcmurtrie@pixelmemory.us> wrote:

In article <obpl1a93349eb92cv1623gg6il7c3f79q6@4ax.com>,
John Larkin <jlarkin@highlandtechnology.com> wrote:

On Wed, 17 Sep 2014 21:37:49 -0700, Kevin McMurtrie
mcmurtrie@pixelmemory.us> wrote:

In article <n4vj1a5j0ag5v8rji03bii0huq57gv1up4@4ax.com>,
John Larkin <jlarkin@highlandtechnology.com> wrote:

http://www.ims-resistors.com/B-Series-Final.pdf

http://www.ims-resistors.com/P-Series.pdf

Pyrolytic graphite tape is neat stuff too.

It's electrically conductive, isn't it?

We need diamond, preferably isotopically pure diamond.

Pyrolytic graphite can be purchased with insulation on one or both
sides. The insulated adhesive tape is meant for chilling tiny SMDs and
circuit boards where direct soldering to metal bulk isn't practical.
Naked pyrolytic graphite adheres very well with epoxy.

Given that one wants thermal conductivity and electrical insulation,
the graphite just makes things worse. The adhesive is doing all the
work.

It may be useful as a lateral heat spreader, if the adhesive is on one
side. But copper can do that, too.

AlN and BeO are good thermal conductors and electrical insulators.
They are available metalized, for soldering. The IMS things are cute,
stocked parts.

Diamond is much better. It's surprising that nobody has come up with a
process to make affordable bulk diamond.


--

John Larkin Highland Technology, Inc

jlarkin att highlandtechnology dott com
http://www.highlandtechnology.com
 
On Friday, September 19, 2014 1:36:19 PM UTC-7, Jim Thompson wrote:

I compressed it into a single v4 PDF if anyone wants it in
that format... just ask.

Could you upload it to http://www.ko4bb.com/manuals/ ? Just hit the Upload File button near the top of the page.

Not sure what category he will put it in, probably the "App Notes" folder, but at least it'll be available.

-- john, KE5FX
 
In article <lvgcd9$iom$1@dont-email.me>, Bill Sloman
<bill.sloman@ieee.org> wrote:

On 19/09/2014 2:20 PM, Anand P. Paralkar wrote:
Hello everyone,

Has anybody read the book "Introduction to Semiconductor Devices" by
Robert J. Widlar (Bob Widlar)?

I am enthused by the comments about this book by Bo Lojek, (author of
History of Semiconductor Engineering) that:

"he was more artist than an engineer ..." and more importantly,

"The very first Widlar publication was a crispy clear textbook
"Introduction to Semiconductor Devices" (Fig 8.9). When reading this
text, I realized why Bob Widlar was so successful in his future work. He
had an extraordinary capability to simplify complex problems."

Would like to know your opinion about this book and ANY INFORMATION
WHERE I CAN BUY THIS BOOK. (No amount of Googling shows where this book
is available. No reviews or previews available.)

If Amazon doesn't list it - and it doesn't - it's probably totally
unavailable. Widlar was a US Air Force instructor when he wrote it - so
they are the likely publishers and owners of the copyright, and it was
written for the Air Force technicians Widlar was instructing, so the
level isn't going to be all that high.

By US Copyright Law, such documents cannot be copyrighted, so obscurity
is the issue. Turned out that The Computer Museum had a copy, although
it may be incomplete.

Joe Gwinn
 
In article <lvfi7s$r5n$1@dont-email.me>, rickman <gnuarm@gmail.com>
wrote:

On 9/18/2014 8:15 AM, Jasen Betts wrote:
On 2014-09-18, mpm <mpmillard@aol.com> wrote:
Somewhat off topic, but maybe someone here knows a "numerically
inexpensive" way to do this:

I have (x,y,z) data - about 1.6 million points. (maybe more)

The (x,y) is somewhat regularly spaced already, but I want to
resample this data so that the (x,y) values "snap" to a grid of my
choosing. I don't mind interpolating the values where necessary, and
I realize there are several methods to accomplish that.

you can't interpolate a point. what aren't you telling us?

I don't think you understand his problem. He has three dimensional data
which can be considered a surface. He wants to produce results where X
and Y are on a grid of his choosing and the Z values are adjusted to fit
the existing surface. Is that more clear?

This is a point-cloud description of a surface, which gives us a Google
search term.


What I want is a program that can input my original (x,y) data and output
a reasonable approximation of that data snapped to what will ultimately be
a lower density grid.

I thought about using Excel, but my version will not accept that many rows.
Also, after rounding, I would probably still have some duplicates to get
rid of... which sounds like a real hassle in Excel.

I'm sure this problem has been beat to death already.
Will MatLab do it?
I thought Surfer would do it, but I either forgot how, or it just can't do
it.

you've got data, use a statistics package, or a database, or a
general purpose programming language you are comfortable with.

I don't think the algorithm is all that simple depending, of course, on
which algorithm he chooses. He has not indicated if this is a one time
thing, likely because he considered doing it in Excel, or if he will
have recurring sets of data to process. That makes a difference to the
amount of optimization he might need.

Excel won't know what to do with a point cloud. There are programs
that will take point cloud data and turn them into CAD descriptions.

Joe Gwinn
 
On 9/19/2014 5:06 AM, Anand P. Paralkar wrote:
On 19-09-2014 10:06, Chris Jones wrote:
On 19/09/2014 14:20, Anand P. Paralkar wrote:
Hello everyone,

Has anybody read the book "Introduction to Semiconductor Devices" by
Robert J. Widlar (Bob Widlar)?

I am enthused by the comments about this book by Bo Lojek, (author of
History of Semiconductor Engineering) that:

"he was more artist than an engineer ..." and more importantly,

"The very first Widlar publication was a crispy clear textbook
"Introduction to Semiconductor Devices" (Fig 8.9). When reading this
text, I realized why Bob Widlar was so successful in his future work. He
had an extraordinary capability to simplify complex problems."

Would like to know your opinion about this book and ANY INFORMATION
WHERE I CAN BUY THIS BOOK. (No amount of Googling shows where this book
is available. No reviews or previews available.)

Regards,
Anand

Is this what you are after?
http://archive.computerhistory.org/resources/access/text/2014/05/102718662-05-01.acc.pdf




Thank you Chris Jones and miso@sushi.com! Thanks a tonne!! Didn't know
one has to look there. :)

Regards,
Anand

Interesting read. To save others the trouble, here's a combined version
in djvu format, OCRed. http://electrooptical.net/OldBooks.html

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics

160 North State Road #203
Briarcliff Manor NY 10510

hobbs at electrooptical dot net
http://electrooptical.net
 
