Driver to drive?

On Friday, 18 April 2014 09:08:08 UTC+10, mrob...@att.net wrote:
Bill Sloman <bill.sloman@gmail.com> wrote:

My Russian friend (who now devises penguin-weighing machines for the
British Antarctic Survey)

Do they ever get volunteers? http://i.imgur.com/AoDcfzS.jpg

IIRR the weighing machine is buried in the snow on a path regularly used by penguins, so any penguin weighed would count as an involuntary volunteer.

--
Bill Sloman, Sydney
 
On 2014-04-18, josephkk <joseph_barrett@sbcglobal.net> wrote:
On Thu, 17 Apr 2014 00:09:15 -0700 (PDT), whit3rd <whit3rd@gmail.com
wrote:

Magnetic force on a moving charge is perpendicular to velocity,
the power is zero because F-vector and V-vector are orthogonal.

Stuff and nonsense. If you change the path of a particle you have
accelerated it.
That takes work.

Only if the applied force is in the direction of the motion, which is
never the case in Hall-effect cells.



--
umop apisdn


 
On Thu, 17 Apr 2014 23:03:57 -0400, Joe Gwinn <joegwinn@comcast.net>
wrote:

Ten or twenty years before, the spectacle was Fortran compiler vendors
claiming that their compilers generated executable code that was faster
than that produced by assembly programmers. Well, not if you get a
real assembly programmer. But hardware got fast enough that we no
longer had to care.

If you have an existing Fortran program and give it to an assembly
programmer to rewrite in assembly, the compiler could well produce a
better result.

However, if the assembly programmer starts from scratch with only the
functional requirements, the assembly code might be better. For
instance, with global register assignment you can avoid much of the
high-level-language parameter-passing overhead, or use specialized
instructions that can't be expressed in an HLL.
 
On 18/04/2014 04:03, Joe Gwinn wrote:
In article <ijQ3v.14995$X41.9844@fx15.am4>, Martin Brown
|||newspam|||@nezumi.demon.co.uk> wrote:

On 17/04/2014 13:18, Joe Gwinn wrote:

Something is fishy here. Basic is an interpreted language. If the

Not necessarily.

PowerBasic is a decent optimising native code compiler. And in some ways
it has more freedom to optimise its loop code than a C compiler!

Basic and Lisp are usually interpreted languages but there are
optimising native code compilers for both of them on some platforms.

http://www.powerbasic.com/products/

Interpreted languages generally compile to bytecode, while compiled
languages compile to native machine code, which is a whole lot faster.

Although what you say is true of many interpreted language compilers it
is not true of all of them. There are fully optimising Basic and Lisp
compilers about that can do JIT compilation to native code and in some
cases full global program optimisation.

This is a bit weird. I usually end up being rude about the "magical"
claims that Larkin makes for his beloved PowerBasic but in this case he
is right - it is a native code optimising compiler with a better grasp
of optimising the sort of loops he needed than the C compiler that they
were using as coded by their "senior C programmer".

program has high locality, aggressive caching of repeated bits can make
it only one tenth as fast as the same algorithm coded in a compiled
language like C. If the program has low locality (like lots of
realtime stuff), interpreted code is more like one 50th of the speed of
compiled code.

From memory the data was just about big enough and involved words and
integers to go I/O bound and their C code was decidedly non-optimal.

On the current crop of optimising compilers there is seldom much to
choose between different ways of implementing vector dot products.

I'd look at the C code with a profiler, and find the bug.

Joe Gwinn

C code isn't quite as fast at some things as you might like to believe,
but ISTR the slowness in this case was mostly down to user error.

Well, there you are - isn't "user error" another word for performance
bug?

Choice of compiler and pragmas as I recall.
The software world has periodic language wars.

I particularly recall Ada83 versus C. Both are compiled languages, but
Ada is far more complex a language, as judged by the sizes of their
respective compilers. We would read article after article where a
world class Ada expert would produce Ada programs that ran circles
around the C programs produced by some duffer C programmers, and
declare that Ada was therefore the better language.

Generally happens in these language wars and they generate a lot more
heat than light. C optimisers have to be very careful what they do.
Other languages lend themselves to easier global code optimisation.

Ten or twenty years before, the spectacle was Fortran compiler vendors
claiming that their compilers generated executable code that was faster
than that produced by assembly programmers. Well, not if you get a
real assembly programmer. But hardware got fast enough that we no
longer had to care.

Sometimes we do when transforming large arrays in realtime. Optimising
the performance of the cache architecture and avoiding pipeline stalls
can be absolutely critical to optimal performance.

An assembler programmer today would have to work extremely hard to beat
a modern optimising compiler at avoiding pipeline stalls on a modern
CPU. I doubt if more than a handful of people on the planet could do it
instinctively without using the internal chip diagnostics to get
feedback on how and where the stalls and bottlenecks are occurring.

The exact fastest code depends critically on the CPU model number and
cache structure. Certain programs like FFTW are self-tuning, optimising
themselves for a given CPU architecture once they have been trained.
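
To make the cache point concrete, here is a minimal sketch of loop
tiling in C; the array size and block size are mine, and the best block
size depends on the cache sizes of the particular CPU, which is exactly
why the fastest code is model-specific:

#include <stddef.h>

#define N     4096
#define BLOCK 64          /* tune so a pair of tiles fits in cache */

/* Naive transpose: the writes to dst stride through memory, so for
 * large N nearly every write can miss the cache. */
void transpose_naive(double *dst, const double *src)
{
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            dst[j * N + i] = src[i * N + j];
}

/* Tiled transpose: work on BLOCK x BLOCK tiles so both the source and
 * destination tiles stay resident in cache while they are in use.
 * Assumes N is a multiple of BLOCK, to keep the sketch short. */
void transpose_tiled(double *dst, const double *src)
{
    for (size_t ii = 0; ii < N; ii += BLOCK)
        for (size_t jj = 0; jj < N; jj += BLOCK)
            for (size_t i = ii; i < ii + BLOCK; i++)
                for (size_t j = jj; j < jj + BLOCK; j++)
                    dst[j * N + i] = src[i * N + j];
}

Self-tuning libraries like FFTW do something similar in spirit: they
time candidate implementations on the machine they find themselves on
and keep the fastest.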

--
Regards,
Martin Brown
 
On 16/04/14 17:28, Tim Williams wrote:
"David Brown" <david.brown@hesbynett.no> wrote in message
news:lilpeq$42a$1@dont-email.me...
Typically AVR and MSP430 code is a lot more
compact than on small CISC devices I have worked with in assembly,
including 8051, COP8, HPC, PIC.

Wat. You're shittin' me, right?..

Maybe you are misunderstanding me...

None of those is close to being as nice. Taking 'nice' to mean, fewer
assembly lines required to accomplish various tasks.

The AVR and the MSP430 are much "nicer" to work with than the 8051, the
COP8, the HPC and the PIC devices (though the HPC wasn't too bad). The
AVR and the MSP430 are RISC architectures, the others listed are CISC.
And generally the AVR and the MSP430 have more compact code than the
others - although I certainly don't think that's a fair meaning for "nice".

It's like you're calling them CISC just because they have no registers.
Which is why they take as many instructions, you're always pulling stuff
through the accumulator or whatever.

There is no fixed, absolute definition of what is RISC and what is CISC.
There are a number of characteristics of processor design that are
typical "RISC" characteristics, and a number that are typical "CISC"
characteristics. Most processors have a mix from both groups, but often
have a large enough proportion from one group to be able to classify it.
There are some processors that are too mixed to be fairly called RISC
or CISC.

Accumulators, special registers, and small register sets are CISC
characteristics. Multiple identical registers with an orthogonal
instruction set are RISC characteristics. So yes, having an accumulator
and a small number of registers is one reason for classifying the 8051,
COP8, PIC as "CISC". Other characteristics are instructions and
addressing modes for operating directly on memory (rather than a
load-store architecture common to most RISC cpus), varied instruction
lengths, complex instructions (relative to the size of the core),
instructions that do multiple tasks, and very varied instruction timing.

I want to say AVR has more instructions (arguably, many of which could be
called addressing modes, Atmel just doesn't enumerate them as such) than
PIC. (But it's been a while since I looked at the PIC instruction set.)
What's "CISC" about PIC if that's the case?

"RISC" is parsed as "(Reduced-Instruction) Set Computer", not "Reduced
(Instruction-Set) Computer". In other words, a RISC cpu has a set of
relatively simple "reduced" instructions. RISC does /not/ mean that the
size of the instruction set is reduced.

Big RISC processors like the PowerPC have a very large instruction set,
and while some of them appear quite complex they are actually almost all
very simple. In a "pure RISC" architecture (to the extent that such a
thing exists), all instructions have the same size, and operate with the
same timing - usually 1 pipelined cycle.

Or compare 8051 to Z80, though you still don't get read-modify-write
instructions, so arithmetic in memory still isn't any better. Does that
make Z80 RISC too?...

It's been a while since I have worked with a Z80 (about 25 years), so I
don't remember all the details. But the Z80 is CISC.

RISC assembly programming on big cpus, such as PPC, is a pain because
they are so complex. But so is assembly programming on big CISC cpus.

Can't argue with that. From what I've seen, I'd rather do assembly on x86
than full-on ARM (having written 8086 before, but only looked at the ARM
instruction set).

You can write working x86 code fairly easily, but writing good, fast x86
assembly code for modern x86 chips is a serious pain. ARM assembly
takes a bit of getting used to as well. I would pick ARM, if given the
choice, but on such devices you can normally write much faster C code
than assembly code.

I'd probably change my mind once I learned enough to work with.
Conditionals per instruction though, that's a compiler's dream. I suppose
it's about time something like that has caught on; I want to say IA432 was
supposed to do that, but that ended up a major flop for a variety of
reasons.

The ARMs these days have several different instruction sets ("old" ARM,
Thumb, Thumb2) with their pros and cons. Conditionals per instruction
are certainly nice, but they are costly in terms of instruction code
bits for their usage - so the Thumb instruction sets have replaced them
with a sort of if-then-else-endif construction.

PCs to this day are still x86, though they're RISC inside. Go
figure.

Big CISC processors have traditionally used microcoding - the complex
instructions are run as a series of very wide microcode instructions
that are at a lower level. The translation of x86 instructions into
RISC microops is not much different, except that these microops are
scheduled and pipelined in a different way.

CISC instructions try to do a lot of different things within the same
instruction - in particular, they often use multiple complicated
addressing modes. So breaking them into separate RISC instructions that
do one thing at a time makes a lot of sense.

The kinds of code-heavy roles where, yeah you can optimize the
inner loops -- and should, once you've exhausted other means -- but you've
just got so damn much code that you'd be mental to do any small fraction
of it in assembly.

There is seldom any reason to write assembly for "inner loops" any more
- on most processors, a decent compiler will generate pretty close to
ideal code for that sort of thing. And for complex processors, the
compiler will generally do a better job than hand-written assembly,
because there are often subtle issues with scheduling, instruction
ordering, etc., that can make a big difference but be difficult to track
by hand. This is particularly important if you want to target several
cores - the ideal code can be significantly different between two x86
devices from different companies, or different generations. And for
RISC devices you have lots of registers to track instead as well.

Where hand assembly still makes a big difference in these kinds of chips
is for vector and SIMD processing.
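
For what it's worth, most of that SIMD work is nowadays done from C
with compiler intrinsics rather than a raw assembler. A minimal sketch
of a dot product using SSE intrinsics (the function and variable names
are mine, and the loop assumes n is a multiple of 4; real code needs a
scalar tail and some thought about alignment):

#include <immintrin.h>
#include <stddef.h>

float dot4(const float *a, const float *b, size_t n)
{
    __m128 acc = _mm_setzero_ps();          /* four partial sums */
    for (size_t i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);    /* load 4 floats from a */
        __m128 vb = _mm_loadu_ps(b + i);    /* load 4 floats from b */
        acc = _mm_add_ps(acc, _mm_mul_ps(va, vb));
    }
    float tmp[4];
    _mm_storeu_ps(tmp, acc);                /* reduce the 4 lanes */
    return tmp[0] + tmp[1] + tmp[2] + tmp[3];
}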

 
On Fri, 18 Apr 2014 09:10:55 +0100, Martin Brown
<|||newspam|||@nezumi.demon.co.uk> wrote:

Ten or twenty years before, the spectacle was Fortran compiler vendors
claiming that their compilers generated executable code that was faster
than that produced by assembly programmers. Well, not if you get a
real assembly programmer. But hardware got fast enough that we no
longer had to care.

Sometimes we do when transforming large arrays in realtime. Optimising
the performance of the cache architecture and avoiding pipeline stalls
can be absolutely critical to optimal performance.

One thing to remember when taking algorithms from some old Fortran
math library and rewriting them in, e.g., C is that Fortran stores
two- (and higher-) dimensional arrays in a different order than C
does.

If the code had been optimized for Fortran's array storage order to
minimize cache/virtual-memory misses, the C code would have a huge
number of cache/virtual-memory misses, unless the array indexes are
swapped :).
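
A minimal sketch of the trap in C (the array name and sizes are mine,
purely for illustration):

#include <stddef.h>

#define ROWS 2048
#define COLS 2048

static double a[ROWS][COLS];

/* Cache-friendly in C: the inner loop walks the rightmost index, so
 * successive accesses are adjacent in memory (row-major order). */
double sum_row_major(void)
{
    double s = 0.0;
    for (size_t i = 0; i < ROWS; i++)
        for (size_t j = 0; j < COLS; j++)
            s += a[i][j];
    return s;
}

/* A literal transcription of a Fortran-ordered loop nest (inner loop
 * over the leftmost index) strides by a whole row per access in C and
 * can thrash the cache, unless the loops (or indexes) are swapped. */
double sum_fortran_order(void)
{
    double s = 0.0;
    for (size_t j = 0; j < COLS; j++)
        for (size_t i = 0; i < ROWS; i++)
            s += a[i][j];
    return s;
}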
 
On Thursday, April 17, 2014 9:05:59 PM UTC-7, josephkk wrote:
On Thu, 17 Apr 2014 00:09:15 -0700 (PDT), whit3rd <whit3rd@gmail.com

Magnetic force on a moving charge is perpendicular to velocity,

the power is zero because F-vector and V-vector are orthogonal.



Stuff and nonsense. If you change the path of a particle you have

accelerated it. That takes work.

Rethink that. The moon's straight path has been 'changed' into an ellipse by
Earth's gravity, but the continual acceleration does NO net work on
the moon. Similarly, a permanent magnet placed near a Hall sensor
will give a positive Hall effect indication for an indefinite period of time.

The magnet won't go flat. And, the moon isn't falling from the sky.
The word 'work' in physics has a specific energy-is-transferred meaning.
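
Spelling out the vector algebra (magnetic part of the Lorentz force
only):

\[
\mathbf{F} = q\,\mathbf{v}\times\mathbf{B}, \qquad
\frac{dW}{dt} = \mathbf{F}\cdot\mathbf{v}
             = q\,(\mathbf{v}\times\mathbf{B})\cdot\mathbf{v} = 0 ,
\]

since a cross product is perpendicular to both of its factors. The
force redirects the momentum but transfers no energy, just as gravity
transfers no net energy to a body over a closed orbit.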
 
On 4/17/2014 1:17 AM, Jan Panteltje wrote:
On a sunny day (Wed, 16 Apr 2014 11:52:17 -0700 (PDT)) it happened whit3rd
whit3rd@gmail.com> wrote in
6e7d2256-d638-4838-acd0-b7144bb93ba0@googlegroups.com>:

I'm interested in sensing AC and DC currents, 0-8A nominally, but up to
160A for 10msec current surges from both AC and DC sources...

There is another way:

http://panteltje.com/pub/play_back_head_current_sensor_img_1153.jpg

That is an old playback head from a walkman against a mains lead.
Very little loss in the straight wire,
very good frequency response (to kHz).

It's a good solution, for AC; the possibility of high DC currents,
though, means that one might possibly have to attend to demagnetizing
the playback head in order to keep it calibrated. Tape
head demagnetizers are intended, after all, to change the
head's properties using nearby currents!

Yes,
and there is another way to use a coil with a core, as both an AC and DC sensor.
Here is a little inductor setup in a FET oscillator:
http://panteltje.com/pub/dc_current_sensor/osc_without_magnet_img_1790.jpg
the wave form:
http://panteltje.com/pub/dc_current_sensor/freq_without_magnet_img_1794.jpg

Now with a bit of DC (say permanent magnet for demo):
http://panteltje.com/pub/dc_current_sensor/osc_with_magnet_img_1797.jpg
the wave form:
http://panteltje.com/pub/dc_current_sensor/freq_with_magnet_img_1795.jpg

As the core saturates, L decreases and the frequency goes way up.
If the resonant frequency is much higher than the frequency of the measured 'signal' (AC with a DC component),
then it should follow that it sort of de-magnetizes itself...

That is actually a several hundred mH coil.

Like magnetic amplifiers. I used them back in the early '60s.
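
For a rough feel for the numbers in the setup described above: the
oscillator runs near the LC resonance,

\[
f_0 = \frac{1}{2\pi\sqrt{LC}} ,
\]

so f0 rises as 1/sqrt(L) as the core starts to saturate; a
factor-of-four drop in L doubles the frequency. (The tank capacitance
isn't given above; with the several-hundred-mH coil mentioned and a
purely hypothetical 10 nF, f0 would sit in the low kHz unsaturated.)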
 
On 04/18/2014 05:57 AM, upsidedown@downunder.com wrote:
On Fri, 18 Apr 2014 09:10:55 +0100, Martin Brown
|||newspam|||@nezumi.demon.co.uk> wrote:

Ten or twenty years before, the spectacle was Fortran compiler vendors
claiming that their compilers generated executable code that was faster
than that produced by assembly programmers. Well, not if you get a
real assembly programmer. But hardware got fast enough that we no
longer had to care.

Sometimes we do when transforming large arrays in realtime. Optimising
the performance of the cache architecture and avoiding pipeline stalls
can be absolutely critical to optimal performance.

One thing to remember when taking algorithms from some old Fortran
math library and rewriting them in, e.g., C is that Fortran stores
two- (and higher-) dimensional arrays in a different order than C
does.

If the code had been optimized for Fortran's array storage order to
minimize cache/virtual-memory misses, the C code would have a huge
number of cache/virtual-memory misses, unless the array indexes are
swapped :).

I'm pretty sure modern optimizing compilers fix that for you. It would
be a pretty obvious thing to do.

Learning how to write loops so that your compiler can vectorize them is
the big win. Intel C++ is the bomb at that, but gcc is learning.
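
A minimal sketch of the kind of loop a vectorizer likes: a plain
counted loop, unit-stride accesses, no calls in the body, and
restrict-qualified pointers so the compiler can prove the arrays don't
alias (the function name is mine):

#include <stddef.h>

/* dst[i] = k*a[i] + b[i]; written so the compiler can vectorize it. */
void saxpy(float *restrict dst, const float *restrict a,
           const float *restrict b, float k, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = k * a[i] + b[i];
}

With gcc, -O3 (or -O2 plus -ftree-vectorize) will normally vectorize a
loop like this; the option for getting a report out of the vectorizer
varies with compiler version, so check the manual for the one you have.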

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics

160 North State Road #203
Briarcliff Manor NY 10510

hobbs at electrooptical dot net
http://electrooptical.net
 
On 04/17/2014 11:56 PM, Jasen Betts wrote:
On 2014-04-18, Joe Gwinn <joegwinn@comcast.net> wrote:
In article <9vuvk9h784223h07kd6bs5o0hp0ul19ln3@4ax.com>, John Larkin
jjlarkin@highNOTlandTHIStechnologyPART.com> wrote:

PowerBasic is a very good optimizing compiler. It can run useful FOR loops at
hundreds of MHz.

Be careful with that word "compiler". Interpreted languages compile
to byte code, not to native machine code. The byte code is executed by
a bit of software. In Java, this machine is called the JVM (Java
Virtual Machine).

Power basic (and quickbasic, turbo-basic and probably several others)
compile to machine code.

While one can compile some originally interpreted languages to machine
code, it isn't common because some of the nicest features of
interpreted languages cannot be compiled to machine code in advance.

Compilable basics often don't have those features.

MS made a bunch of BASICs. GW BASIC was interpreted directly,
QuickBasic I'm pretty sure was byte code, and the MS BASIC compiler was
a real compiler. They did bundle QuickBasic along with the compiler, so
it was fairly easy to confuse.
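
To make "executed by a bit of software" concrete, here is a toy
byte-code dispatch loop in C; the opcode set is invented purely for
illustration:

#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

static void run(const int *code)
{
    int stack[64];
    int sp = 0;
    int pc = 0;

    for (;;) {
        switch (code[pc++]) {            /* fetch and decode in software */
        case OP_PUSH:  stack[sp++] = code[pc++];          break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp];  break;
        case OP_PRINT: printf("%d\n", stack[--sp]);       break;
        case OP_HALT:  return;
        }
    }
}

int main(void)
{
    const int prog[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
    run(prog);   /* prints 5 */
    return 0;
}

Every byte-code instruction costs at least a fetch, a decode and a
branch executed in software, which is where most of the
interpreted-versus-native speed gap comes from.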

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC
Optics, Electro-optics, Photonics, Analog Electronics

160 North State Road #203
Briarcliff Manor NY 10510

hobbs at electrooptical dot net
http://electrooptical.net
 
In article <grl0l9hln3cqsr88vdaaqensjcpk5mkn8u@4ax.com>,
martin_rid@verizon.net says...
On Thu, 17 Apr 2014 13:56:01 -0700, John Larkin
jlarkin@highlandtechnology.com> wrote:

On Thu, 17 Apr 2014 13:12:08 -0500, Tim Wescott
tim@seemywebsite.really> wrote:

On Thu, 17 Apr 2014 09:26:12 -0700, John Larkin wrote:

On Thu, 17 Apr 2014 00:03:04 -0500, Tim Wescott
tim@seemywebsite.really> wrote:

I have a customer who wants a USB-powered battery charger designed, with
certification -n- all. I figure the certification part will be harder
than the charger part, so I have to give it a pass.

Anyone do that and have spare cycles, or know someone? He wants someone
with a track record, or I'd talk him into using me!

What certs? UL/CSA/CE? FCC?

A test lab will do those, for a moderate pile of money.

Is there a USB certification standard?

You have to pass their compatibility tests if you want to use their logos
& such. I'm not sure whether you can even use "USB", but I suspect by
now that you can if you use the right wording.

You can use something like the FTDI chips and (maybe) inherit the
certs.

I think it's $3K just for the PID and VID alone.

They have upped it to $5K.
We finally got into doing enough USB devices that I decided to pop out
the $2K they were asking for a VID (you make up the PID yourself), and
wouldn't you know it, they had increased the price to $5K just in time
for me to give them some money. That's my lot.
Naturally, they want you to sign on for the $4K a year subscription that
gets you logo use and so on.
 
On a sunny day (Fri, 18 Apr 2014 09:05:04 -0500) it happened John S
<Sophi.2@invalid.org> wrote in <lirbem$93a$1@dont-email.me>:

On 4/17/2014 1:17 AM, Jan Panteltje wrote:
On a sunny day (Wed, 16 Apr 2014 11:52:17 -0700 (PDT)) it happened whit3rd
whit3rd@gmail.com> wrote in
6e7d2256-d638-4838-acd0-b7144bb93ba0@googlegroups.com>:

I'm interested in sensing AC and DC currents, 0-8A nominally, but up to
160A for 10msec current surges from both AC and DC sources...

There is another way:

http://panteltje.com/pub/play_back_head_current_sensor_img_1153.jpg

That is an old playback head from a walkman against a mains lead.
Very little loss in the straight wire,
very good frequency response (to kHz).

It's a good solution, for AC; the possibility of high DC currents,
though, means that one might possibly have to attend to demagnetizing
the playback head in order to keep it calibrated. Tape
head demagnetizers are intended, after all, to change the
head's properties using nearby currents!

Yes,
and there is another way to use a coil with a core, as both an AC and DC sensor.
Here is a little inductor setup in a FET oscillator:
http://panteltje.com/pub/dc_current_sensor/osc_without_magnet_img_1790.jpg
the wave form:
http://panteltje.com/pub/dc_current_sensor/freq_without_magnet_img_1794.jpg

Now with a bit of DC (say permanent magnet for demo):
http://panteltje.com/pub/dc_current_sensor/osc_with_magnet_img_1797.jpg
the wave form:
http://panteltje.com/pub/dc_current_sensor/freq_with_magnet_img_1795.jpg

As the core saturates, L decreases and the frequency goes way up.
If the resonant frequency is much higher than the frequency of the measured 'signal' (AC with a DC component),
then it should follow that it sort of de-magnetizes itself...

That is actually a several hundred mH coil.


Like magnetic amplifiers. I used them back in the early '60s.

Yep, same here, the company I worked for made huge ones, hundreds of amps..
I designed the controller..
 
On Fri, 18 Apr 2014 10:19:28 -0400, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

On 04/17/2014 11:56 PM, Jasen Betts wrote:
On 2014-04-18, Joe Gwinn <joegwinn@comcast.net> wrote:
In article <9vuvk9h784223h07kd6bs5o0hp0ul19ln3@4ax.com>, John Larkin
jjlarkin@highNOTlandTHIStechnologyPART.com> wrote:

PowerBasic is a very good optimizing compiler. It can run useful FOR loops at
hundreds of MHz.

Be careful with that word "compiler". Interpreted languages compile
to byte code, not to native machine code. The byte code is executed by
a bit of software. In Java, this machine is called the JVM (Java
Virtual Machine).

Power basic (and quickbasic, turbo-basic and probably several others)
compile to machine code.

While one can compile some originally interpreted languages to machine
code, it isn't common because some of the nicest features of
interpreted languages cannot be compiled to machine code in advance.

Compilable basics often don't have those features.


MS made a bunch of BASICs. GW BASIC was interpreted directly,
QuickBasic I'm pretty sure was byte code, and the MS BASIC compiler was
a real compiler. They did bundle QuickBasic along with the compiler, so
it was fairly easy to confuse.

Cheers

Phil Hobbs

I think PDS Basic, the "pro" version of QuickBasic, was a machine code compiler.


--

John Larkin Highland Technology Inc
www.highlandtechnology.com jlarkin at highlandtechnology dot com

Precision electronic instrumentation
 
How do measurements off a shunt work for AC? Are there op-amps that can handle that kind of common-mode voltage at their inputs? Or is a Hall-effect sensor pretty much the way to go for these kinds of things?
 
On 18 Apr 2014 04:39:22 GMT, Jasen Betts <jasen@xnet.co.nz> wrote:

On 2014-04-18, josephkk <joseph_barrett@sbcglobal.net> wrote:
On Thu, 17 Apr 2014 00:09:15 -0700 (PDT), whit3rd <whit3rd@gmail.com
wrote:

Magnetic force on a moving charge is perpendicular to velocity,
the power is zero because F-vector and V-vector are orthogonal.

Stuff and nonsense. If you change the path of a particle you have
accelerated it.
That takes work.

Only if the applied force is in the direction of the motion, which is
never the case in Hall-effect cells.

?-)

Sounds like you need some remedial work in vector mathematics (and the
"right-hand rule" :)

...Jim Thompson
--
| James E.Thompson | mens |
| Analog Innovations | et |
| Analog/Mixed-Signal ASIC's and Discrete Systems | manus |
| San Tan Valley, AZ 85142 Skype: Contacts Only | |
| Voice:(480)460-2350 Fax: Available upon request | Brass Rat |
| E-mail Icon at http://www.analog-innovations.com | 1962 |

I love to cook with wine. Sometimes I even put it in the food.
 
On 18/04/14 16:35, WangoTango wrote:
In article <grl0l9hln3cqsr88vdaaqensjcpk5mkn8u@4ax.com>,
martin_rid@verizon.net says...
On Thu, 17 Apr 2014 13:56:01 -0700, John Larkin
jlarkin@highlandtechnology.com> wrote:

On Thu, 17 Apr 2014 13:12:08 -0500, Tim Wescott
tim@seemywebsite.really> wrote:

On Thu, 17 Apr 2014 09:26:12 -0700, John Larkin wrote:

On Thu, 17 Apr 2014 00:03:04 -0500, Tim Wescott
tim@seemywebsite.really> wrote:

I have a customer who wants a USB-powered battery charger designed, with
certification -n- all. I figure the certification part will be harder
than the charger part, so I have to give it a pass.

Anyone do that and have spare cycles, or know someone? He wants someone
with a track record, or I'd talk him into using me!

What certs? UL/CSA/CE? FCC?

A test lab will do those, for a moderate pile of money.

Is there a USB certification standard?

You have to pass their compatibility tests if you want to use their logos
& such. I'm not sure whether you can even use "USB", but I suspect by
now that you can if you use the right wording.

You can use something like the FTDI chips and (maybe) inherit the
certs.

I think it's $3K just for the PID and VID alone.

They have upped it to $5K.
We finally got into doing enough USB devices that I decided to pop out
the $2K they were asking for a VID (you make up the PID yourself), and
wouldn't you know it, they had increased the price to $5K just in time
for me to give them some money. That's my lot.
Naturally, they want you to sign on for the $4K a year subscription that
gets you logo use and so on.

And that nicely sums up the real motivation for removing the ever useful
parallel and serial ports on our computers.

Jeroen Belleman
 
In article <l654v.30687$gV7.13445@fx21.am4>, Martin Brown
<|||newspam|||@nezumi.demon.co.uk> wrote:

On 18/04/2014 04:03, Joe Gwinn wrote:
In article <ijQ3v.14995$X41.9844@fx15.am4>, Martin Brown
|||newspam|||@nezumi.demon.co.uk> wrote:

On 17/04/2014 13:18, Joe Gwinn wrote:

Something is fishy here. Basic is an interpreted language. If the

Not necessarily.

PowerBasic is a decent optimising native code compiler. And in some ways
it has more freedom to optimise its loop code than a C compiler!

Basic and Lisp are usually interpreted languages but there are
optimising native code compilers for both of them on some platforms.

http://www.powerbasic.com/products/

Interpreted languages generally compile to bytecode, while compiled
languages compile to native machine code, which is a whole lot faster.

Although what you say is true of many interpreted language compilers it
is not true of all of them. There are fully optimising Basic and Lisp
compilers about that can do JIT compilation to native code and in some
cases full global program optimisation.

JIT compilation can work if the algorithm doesn't jump around too much.
Realtime systems tended to violate this, quite floridly. In the old
days, it was often faster to turn the cache hardware off.


This is a bit weird. I usually end up being rude about the "magical"
claims that Larkin makes for his beloved PowerBasic but in this case he
is right - it is a native code optimising compiler with a better grasp
of optimising the sort of loops he needed than the C compiler that they
were using as coded by their "senior C programmer".

program has high locality, aggressive caching of repeated bits can make
it only one tenth as fast as the same algorithm coded in a compiled
language like C. If the program has low locality (like lots of
realtime stuff), interpreted code is more like one 50th of the speed of
compiled code.

From memory the data was just about big enough and involved words and
integers to go I/O bound and their C code was decidedly non-optimal.

On the current crop of optimising compilers there is seldom much to
choose between different ways of implementing vector dot products.

I'd look at the C code with a profiler, and find the bug.

Joe Gwinn

C code isn't quite as fast at some things as you might like to believe,
but ISTR the slowness in this case was mostly down to user error.

Well, there you are - isn't "user error" another word for performance
bug?

Choice of compiler and pragmas as I recall.

Compilers differ in how well they optimize, and what assumptions they
make about the typical program.


The software world has periodic language wars.

I particularly recall Ada83 versus C. Both are compiled languages, but
Ada is far more complex a language, as judged by the sizes of their
respective compilers. We would read article after article where a
world class Ada expert would produce Ada programs that ran circles
around the C programs produced by some duffer C programmers, and
declare that Ada was therefore the better language.

Generally happens in these language wars and they generate a lot more
heat than light. C optimisers have to be very careful what they do.
Other languages lend themselves to easier global code optimisation.

But they all have to get the right answer, so I don't understand this
comment.


Ten or twenty years before, the spectacle was Fortran compiler vendors
claiming that their compilers generated executable code that was faster
than that produced by assembly programmers. Well, not if you get a
real assembly programmer. But hardware got fast enough that we no
longer had to care.

Sometimes we do when transforming large arrays in realtime. Optimising
the performance of the cache architecture and avoiding pipeline stalls
can be absolutely critical to optimal performance.

An assembler programmer today would have to work extremely hard to beat
a modern optimising compiler at avoiding pipeline stalls on a modern
CPU. I doubt if more than a handful of people on the planet could do it
instinctively without using the internal chip diagnostics to get
feedback on how and where the stalls and bottlenecks are occurring.

The exact fastest code depends critically on the CPU model number and
cache structure. Certain programs like FFTW are self-tuning, optimising
themselves for a given CPU architecture once they have been trained.

I have had to do such things in the past, but not in the last decade or
two.

What does happen is the compiler writers will cut corners in areas that
they think are rarely used by their typical customer. We had a florid
case of this in an Ada83 compiler. Use of a Rep spec with some kind of
Record definition, an odd corner but one that's essential for handling
messages between different machines (the bits all gotta line up), caused
the program to run something like a factor of one hundred slower than
necessary. Staring at the Ada code was no help - the Ada code was
correct, and did get the correct answer.

A profiler found the problem in a day - the generated code implemented
a critical and widely used bit of Ada by calling into a big subroutine
library, rather than by spitting out a few lines of assembly. Ouch.

The solution was to use a different Ada compiler.

Joe Gwinn
 
On Friday, April 18, 2014 2:30:13 PM UTC-7, John Larkin wrote:
On Fri, 18 Apr 2014 16:10:34 -0400, Phil Hobbs

hobbs@electrooptical.net> wrote:

Work is a dot product, viz. force dot distance,
whereas Lorentz (magnetic) force goes as v cross B.

The dot product v dot (v cross B) is identically zero.

Otherwise, the Sun would be doing work on the Earth by bending its path
into an ellipse.

Unless the acceleration on a charged particle radiates a photon.

If it does, the particle must slow down. Is that the reaction to the
momentum of the photon? Must be. So that is why the photon is emitted
in the direction of the particle motion?

The emitted photon (called synchrotron radiation, as if the charged
particle is traveling in a circle in a cyclotron) is polarized, and
there is no way to emit such a polarized photon in a radial direction
because the E of a photon is always perpendicular to its momentum/travel direction, and that E direction is parallel to the force applied (radial).

It's a very rare photon emission except in the case of ultrarelativistic
beams, where there's a 'headlight effect'. This makes a narrow forward beam
from what (in the electron's rest frame) looks like a dipole-radiation 'doughnut'.
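
For reference, the 'headlight effect' in numbers: radiation that looks
roughly like a dipole doughnut in the electron's rest frame is beamed,
in the lab frame, into a forward cone of half-angle of order

\[
\theta \sim \frac{1}{\gamma} = \sqrt{1 - v^2/c^2} ,
\]

so an ultrarelativistic electron radiates into a very narrow cone along
its direction of motion.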
 
On 18/04/14 20:03, Joe Gwinn wrote:
In article <l654v.30687$gV7.13445@fx21.am4>, Martin Brown
|||newspam|||@nezumi.demon.co.uk> wrote:

On 18/04/2014 04:03, Joe Gwinn wrote:
In article <ijQ3v.14995$X41.9844@fx15.am4>, Martin Brown
|||newspam|||@nezumi.demon.co.uk> wrote:

Interpreted languages generally compile to bytecode, while compiled
languages compile to native machine code, which is a whole lot faster.

Although what you say is true of many interpreted language compilers it
is not true of all of them. There are fully optimising Basic and Lisp
compilers about that can do JIT compilation to native code and in some
cases full global program optimisation.

JIT compilation can work if the algorithm doesn't jump around too much.
Realtime systems tended to violate this, quite floridly. In the old
days, it was often faster to turn the cache hardware off.

No such thing as JIT compilation in an RT system. JTL would be closer
to the mark. OK, it depends a bit on what you mean by 'RT'.

Jeroen Belleman
 
On 4/15/2014 8:54 PM, David Eather wrote:
Phil Hobbs made a pertinent point. At the beginning it was best science
that the earth was the center of the universe. But then it became about
centers of power, not admitting we were wrong, and ego, and it all blows
back eventually. Actually I was just thinking it is not unlike today's
situation if you
try to report errors/mistakes within a large established company. Often,
they respond as if the person trying to inform about the problem were
the problem itself. How little we change...

I had an experience like that. I was at a company designing hand held
radios for military use. A new design had been through some level of
testing and I was asked to produce a bus timing diagram for some reason.
I did and in the process found there was a timing violation on one of
the Flash device specs. When I reported this I was told that it passed
test so they didn't want to hear about the problem. I was a bit floored
at the attitude.

--

Rick
 
