That was scary

Clifford Heath <no.spam@please.net> wrote in
news:8O7lG.17$cJ2.2@fx47.iad:

On 14/4/20 10:34 am, John Larkin wrote:
On Tue, 14 Apr 2020 10:03:18 +1000, Clifford Heath
<no.spam@please.net> wrote:

On 14/4/20 5:26 am, John Larkin wrote:
On Mon, 13 Apr 2020 12:13:35 +1000, Clifford Heath
<no.spam@please.net> wrote:

On 13/4/20 12:07 pm, jlarkin@highlandsniptechnology.com wrote:
On Mon, 13 Apr 2020 09:33:14 +1000, Clifford Heath
<no.spam@please.net> wrote:

I should drag out tprof again, it still fills a need that's
substantially unmet by existing tools. It also contained a
dynamic memory profiling mode that was useful.

Sometimes we raise a port pin at the entry of a chunk of code
and drop it at the end, and look at that with an
oscilloscope. A routine can be optimized for worst-case
execution time, which usually matters more than average. A
little thinking can sometimes reduce worst-case by 5:1.

One port pin can be made to blip or change state at several
places in a segment of code. That can look cool on infinite
persistence.

Great way to look at exactly one thing at a time, and quite
unlike what a proper profiler does.

I have histogrammed the program counter. That can be a
revelation. See what's hogging the resources.

That's a trivial profiler, and comes built-in to Linux tools,
always has (since 1976 at least). It tells you nothing about
context switch or interrupt latencies though, because it only
samples during the program's assigned timeslots i.e. while
it's running.

CH

Nobody has guessed about the Linux timeouts I measured. Nobody
has estimated a reasonable IRQ rate for my tiny ARM. An
oscilloscope is good enough for things like that.

Sure! If it works for you, that's great.

On a running Linux system with normal desktop peripherals, there
is a great variety of different kinds of things going on. In the
histogram of latencies, it's very instructive to see the
different spikes for different interrupts (and try to identify
which is which), and to see the variance for each spike. Kind-of
a top-down view, which would augment your bottom-up one.

CH

We were interested in how long and how often a tight application
loop might be suspended by the OS and drivers and stuff. Would a
profiler tell you that?

Exactly, that's what the histogram is. Put the contents of your
inner loop (or some fixed number of repetitions) in a profiled
function (called from the loop), and the shortest elapsed-time
spike is how long it takes to run if it's not interrupted. All
longer spikes are interrupts of one sort or another. You can see
how long each is, count how many, and see the variability in each
interrupt time (based on the width of the spike).

You do need a CPU with a fine-grained timer you can quickly read,
and you need to ensure that your inner-loop function runs for
significantly longer than the profiler overhead of doing that.

In a Zynq sort of chip, one bailout is to move "code" from the
ARM CPUs into FPGA fabric. I'm often shocked by what people can
implement in VHDL.

I wish I had time and energy to get started with the Zynq, it's
such a nice way of doing things. Someone should do an "Arduino,
but for Zynq".

CH.

Close but no cigar!

<https://www.youtube.com/watch?v=4XV8dlFwNd0>
 
Clifford Heath <no.spam@please.net> wrote in
news:zg8lG.412$ZL6.256@fx28.iad:

On 14/4/20 11:15 am, DecadentLinuxUserNumeroUno@decadence.org
wrote:
Clifford Heath <no.spam@please.net> wrote in
news:8O7lG.17$cJ2.2@fx47.iad:
On 14/4/20 10:34 am, John Larkin wrote:
In a Zynq sort of chip, one bailout is to move "code" from the
ARM CPUs into FPGA fabric. I'm often shocked by what people
can implement in VHDL.

I wish I had time and energy to get started with the Zynq, it's
such a nice way of doing things. Someone should do an "Arduino,
but for Zynq".
Close but no cigar!

https://www.youtube.com/watch?v=4XV8dlFwNd0

Can I FFT streaming data at 50MSPS on its FPGA, while consuming
less than 10W?

No, didn't think so.

I'm enjoying the ODroid modules, but 1-4GFLOP is a *lot* less than
the x00-GFLOP I'd like. It needs a decent GPU (like Jetson Nano)
or an FPGA really.

CH


Not that little SBC, but... on a real PC...

Use a programmable graphics card, ya dope.
 
On 2020-04-13 11:12, jlarkin@highlandsniptechnology.com wrote:
On Mon, 13 Apr 2020 16:35:13 +0200, David Brown
<david.brown@hesbynett.no> wrote:

On 13/04/2020 03:39, dagmargoodboat@yahoo.com wrote:
On Sunday, April 12, 2020 at 12:19:24 PM UTC-4, David Brown wrote:
On 11/04/2020 17:21, jlarkin@highlandsniptechnology.com wrote:

When I did more realtime programming, I generally knew how long it
would take for a chunk of code to execute. Kids these days haven't the
faintest idea, and are afraid to push interrupt rates or state machine
rep rates. Sometimes I have to get an oscilloscope and show them how
fast a 600 MHz dual-core ARM can really compute. We got some
interesting numbers on the Zynq+Linux.

You claim this, and yet you don't think programming involves science,
causality, maths or system dynamics? Is that because you simply don't
understand what those terms mean? Or that you are lying about the
programming you have done? Or that you think /you/ have done "real"
programming, but no one else does? Or - and I strongly suspect this -
you are a troll who finds perverse entertainment in annoying people by
saying blatantly stupid things.

You're way off base.

John, being a solid programmer -- I've seen his code, and he's posted
code here, too -- just has a palpable disdain for what Bill Gates has
turned the 'discipline' into.

(You can tell a lot about a person, reading their code. You can see
if someone's clear-headed and reasoning, or confused and fiddling,
for one.)


I can agree that a lot of programs are written poorly.

But John basically said "You're a programmer? That means you don't do
any maths, science or rational work, and are probably just an English
major".

Mind you, he demonstrates himself that it is possible to write programs
while not understanding science or being able to reason rationally.

As I said, he is no doubt just trolling.

No, seriously, most programmers use no math, no theory, don't know how
fast their code executes, and have never heard of a state machine or a
filter or signal averaging to reduce noise. Never heard of the
Sampling Theorem. They never write a routine that runs correctly first
try, and rarely manage one that even compiles error-free first time.

Some years back, a customer asked me to transfer a proof-of-concept
system to an engineering firm in Orange County CA for productizing.

The proto did very nice transcutaneous blood glucose and alcohol
measurements using a custom fibre bundle and a normal SWIR grating
spectrometer, along with some super-secret special sauce from USC that I
don't know about. (Mine was made of toy parts, including an RC airplane
servo for rotating the grating, but worked very well.)

The back end of mine was a TIA and LPF feeding an analogue lock-in made
from a fast dual SPDT analogue mux in the usual way. Worked great.

The vendor chucked all that, and tried pulling a narrowband signal out
of wideband noise by *least-squares fitting to a sine wave*. The guy who
coded that mess had a PhD in "industrial engineering", which at the time
I didn't realize was a dumping ground for folks who couldn't cut it in
EE or ME or CE or EngPhys or anywhere else. (He also made a number of
other messes.)

I've talked about that one here before, and all the story that's fit to
print and won't get me sued is at

<https://electrooptical.net/News/transcutaneous-blood-glucose-a-war-story/>.

Suffice it to say that this was very definitely an example of what John
talks about.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On Tuesday, April 14, 2020 at 4:50:00 AM UTC+10, Ricky C wrote:
On Monday, April 13, 2020 at 1:57:06 PM UTC-4, whit3rd wrote:
On Monday, April 13, 2020 at 7:41:19 AM UTC-7, jla...@highlandsniptechnology.com wrote:
On Sun, 12 Apr 2020 20:10:37 -0700 (PDT), whit3rd <whit3rd@gmail.com> wrote:

On Sunday, April 12, 2020 at 4:51:33 PM UTC-7, jla...@highlandsniptechnology.com wrote:

Can no country ever release lockdown?

Sure. China and South Korea have basically done it.


But won't the epidemic return as soon as the lockdown is lifted? If
not, why not?

Lockdown removes personal contact that transmits the disease. The course of the illness has a period of transmissibility. When that period is past, lockdown is unnecessary.

Honestly, though, there could be a 'typhoid mary' scenario where someone, somewhere, just carries the disease but does NOT pass through that phase. Wuhan has opened back up, and we may be finding out that this is possible.

Typhoid is caused by a bacterium. It could live in her gut if she didn't respond to the toxins it produces.

Covid-19 is a virus. It infects any cell that it gets into, and destroys it in the process of creating many more Covid-19 virus particles.

A "Typhoid Mary" situation is less likely

> I'm not clear on how this disease has a "period of transmissibility". If anyone is infected, they can transmit the disease to others.

If you get Covid-19, you either die of it, or your immune system gets rid of every last virus particle. If you live, you don't stay infected.
Once the lockdown is ended in a given area, the disease will return unless measures are taken to prevent it. We've gone over this many, many times here. Travel will need to be restricted for some time after the lock down is ended, but most importantly, case tracking will need to be done on every single case. If that is done effectively we will be able to resume activities. It would remain a good idea to continue social distancing rules until the disease is completely eradicated at least from the country you are in.

One problem is all the many people who are in denial of the seriousness of this disease. We continue to hear reports of people being fined for not obeying lock down orders and social distancing rules. Once we are at a low enough infection level that we can relax the shut down rules, why do we think many people will continue to obey the remaining rules?

The forecasts are now showing we are nearing the peak and our efforts at social distancing have been fairly effective other than perhaps in New York.

The rate of new infections may have levelled off in the US, but you haven't got a clue what's happening outside the worst affected states.

https://www.worldometers.info/coronavirus/country/us/

Three states (with 10% of your population) now account for about half of all your infections.

> Each area of the country is different however. But even in the areas with slower progression of the disease, the peak will be over by the end of April.

What makes you think that?

> It is looking like the end of May might be a time when the US can start to relax the lock down without worrying about a rebound. I expect this will happen much sooner and we will see a rebound in areas.

Probably because of this sort of enthusiasm for making predictions when you don't know enough about what's actually going on.

--
Bill Sloman, Sydney
 
On Tue, 14 Apr 2020 10:51:13 +1000, Clifford Heath
<no.spam@please.net> wrote:

On 14/4/20 10:34 am, John Larkin wrote:
On Tue, 14 Apr 2020 10:03:18 +1000, Clifford Heath
<no.spam@please.net> wrote:

On 14/4/20 5:26 am, John Larkin wrote:
On Mon, 13 Apr 2020 12:13:35 +1000, Clifford Heath
<no.spam@please.net> wrote:

On 13/4/20 12:07 pm, jlarkin@highlandsniptechnology.com wrote:
On Mon, 13 Apr 2020 09:33:14 +1000, Clifford Heath
<no.spam@please.net> wrote:

I should drag out tprof again, it still fills a need that's
substantially unmet by existing tools. It also contained a dynamic
memory profiling mode that was useful.

Sometimes we raise a port pin at the entry of a chunk of code and drop
it at the end, and look at that with an oscilloscope. A routine can be
optimized for worst-case execution time, which usually matters more
than average. A little thinking can sometimes reduce worst-case by
5:1.

One port pin can be made to blip or change state at several places in
a segment of code. That can look cool on infinite persistence.

Great way to look at exactly one thing at a time, and quite unlike what
a proper profiler does.

I have histogrammed the program counter. That can be a revelation. See
what's hogging the resources.

That's a trivial profiler, and comes built-in to Linux tools, always has
(since 1976 at least). It tells you nothing about context switch or
interrupt latencies though, because it only samples during the program's
assigned timeslots i.e. while it's running.

CH

Nobody has guessed about the Linux timeouts I measured. Nobody has
estimated a reasonable IRQ rate for my tiny ARM. An oscilloscope is
good enough for things like that.

Sure! If it works for you, that's great.

On a running Linux system with normal desktop peripherals, there is a
great variety of different kinds of things going on. In the histogram of
latencies, it's very instructive to see the different spikes for
different interrupts (and try to identify which is which), and to see
the variance for each spike. Kind-of a top-down view, which would
augment your bottom-up one.

CH

We were interested in how long and how often a tight application loop
might be suspended by the OS and drivers and stuff. Would a profiler
tell you that?

Exactly, that's what the histogram is. Put the contents of your inner
loop (or some fixed number of repetitions) in a profiled function
(called from the loop), and the shortest elapsed-time spike is how long
it takes to run if it's not interrupted. All longer spikes are
interrupts of one sort or another. You can see how long each is, count
how many, and see the variability in each interrupt time (based on the
width of the spike).

You do need a CPU with a fine-grained timer you can quickly read, and
you need to ensure that your inner-loop function runs for significantly
longer than the profiler overhead of doing that.

In a Zynq sort of chip, one bailout is to move "code" from the ARM
CPUs into FPGA fabric. I'm often shocked by what people can implement
in VHDL.

I wish I had time and energy to get started with the Zynq, it's such a
nice way of doing things. Someone should do an "Arduino, but for Zynq".

CH.

It's a MicroZed. We have done several products and a few test sets
with a MicroZed as the compute platform. It has all the power
supplies, DRAM, SD card, Gbit Ethernet, USB, all that done, and they
bring 100 FPGA pins out on connectors. It runs Linux right out of the
box.

https://www.dropbox.com/s/al2x92st7ja7gry/DSC02865.JPG?raw=1

https://www.dropbox.com/s/r6sl0nh8zd9sm7r/ASP_SN1_top.jpg?raw=1






--

John Larkin Highland Technology, Inc

Science teaches us to doubt.

Claude Bernard
 
On Mon, 13 Apr 2020 21:24:04 -0400, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

On 2020-04-13 11:12, jlarkin@highlandsniptechnology.com wrote:
On Mon, 13 Apr 2020 16:35:13 +0200, David Brown
<david.brown@hesbynett.no> wrote:

On 13/04/2020 03:39, dagmargoodboat@yahoo.com wrote:
On Sunday, April 12, 2020 at 12:19:24 PM UTC-4, David Brown wrote:
On 11/04/2020 17:21, jlarkin@highlandsniptechnology.com wrote:

When I did more realtime programming, I generally knew how long it
would take for a chunk of code to execute. Kids these days haven't the
faintest idea, and are afraid to push interrupt rates or state machine
rep rates. Sometimes I have to get an oscilloscope and show them how
fast a 600 MHz dual-core ARM can really compute. We got some
interesting numbers on the Zynq+Linux.

You claim this, and yet you don't think programming involves science,
causality, maths or system dynamics? Is that because you simply don't
understand what those terms mean? Or that you are lying about the
programming you have done? Or that you think /you/ have done "real"
programming, but no one else does? Or - and I strongly suspect this -
you are a troll who finds perverse entertainment in annoying people by
saying blatantly stupid things.

You're way off base.

John, being a solid programmer -- I've seen his code, and he's posted
code here, too -- just has a palpable disdain for what Bill Gates has
turned the 'discipline' into.

(You can tell a lot about a person, reading their code. You can see
if someone's clear-headed and reasoning, or confused and fiddling,
for one.)


I can agree that a lot of programs are written poorly.

But John basically said "You're a programmer? That means you don't do
any maths, science or rational work, and are probably just an English
major".

Mind you, he demonstrates himself that it is possible to write programs
while not understanding science or being able to reason rationally.

As I said, he is no doubt just trolling.

No, seriously, most programmers use no math, no theory, don't know how
fast their code executes, and have never heard of a state machine or a
filter or signal averaging to reduce noise. Never heard of the
Sampling Theorem. They never write a routine that runs correctly first
try, and rarely manage one that even compiles error-free first time.

Some years back, a customer asked me to transfer a proof-of-concept
system to an engineering firm in Orange County CA for productizing.

The proto did very nice transcutaneous blood glucose and alcohol
measurements using a custom fibre bundle and a normal SWIR grating
spectrometer, along with some super-secret special sauce from USC that I
don't know about. (Mine was made of toy parts, including an RC airplane
servo for rotating the grating, but worked very well.)

The back end of mine was a TIA and LPF feeding an analogue lock-in made
from a fast dual SPDT analogue mux in the usual way. Worked great.

The vendor chucked all that, and tried pulling a narrowband signal out
of wideband noise by *least-squares fitting to a sine wave*. The guy who
coded that mess had a PhD in "industrial engineering", which at the time
I didn't realize was a dumping ground for folks who couldn't cut it in
EE or ME or CE or EngPhys or anywhere else. (He also made a number of
other messes.)

I've talked about that one here before, and all the story that's fit to
print and won't get me sued is at

<https://electrooptical.net/News/transcutaneous-blood-glucose-a-war-story/>.

Suffice it to say that this was very definitely an example of what John
talks about.

Cheers

Phil Hobbs

A rounded EE education really isn't bad for a generalized approach to
solving problems. Assuming not all of the courses are coding. Circuit
theory, control theory, communications, signals-and-systems, some
mechanics and thermo, and a couple of semesters of physics and
calculus.



--

John Larkin Highland Technology, Inc

Science teaches us to doubt.

Claude Bernard
 
On Mon, 13 Apr 2020 10:56:59 -0700 (PDT), whit3rd <whit3rd@gmail.com>
wrote:

On Monday, April 13, 2020 at 7:41:19 AM UTC-7, jla...@highlandsniptechnology.com wrote:
On Sun, 12 Apr 2020 20:10:37 -0700 (PDT), whit3rd <whit3rd@gmail.com> wrote:

On Sunday, April 12, 2020 at 4:51:33 PM UTC-7, jla...@highlandsniptechnology.com wrote:

Can no country ever release lockdown?

Sure. China and South Korea have basically done it.


But won't the epidemic return as soon as the lockdown is lifted? If
not, why not?

Lockdown removes personal contact that transmits the disease. The course of the
illness has a period of transmissibility. When that period is past, lockdown is unnecessary.

Cool. Simultaneously lock every person on Earth in an individual cell,
with pure air and no contact with anyone else, in the dark, for six or
eight weeks, then let them all out. They will be hungry, especially
the children.

In the dark of course because we'll have to shut down all the
electricity generating plants to lock up the operators.

Honestly, though, there could be a 'typhoid mary' scenario where someone, somewhere,
just carries the disease but does NOT pass through that phase. Wuhan has opened back
up, and we may be finding out that this is possible.

"Typhoid Mary" Mallon was incarcerated for many years. Her persistent transmission of
the disease was anomalous, and such a phenomenon is unpredictable.

--

John Larkin Highland Technology, Inc

Science teaches us to doubt.

Claude Bernard
 
On Tuesday, April 14, 2020 at 12:11:18 PM UTC+10, jla...@highlandsniptechnology.com wrote:
On Mon, 13 Apr 2020 10:56:59 -0700 (PDT), whit3rd <whit3rd@gmail.com> wrote:

On Monday, April 13, 2020 at 7:41:19 AM UTC-7, jla...@highlandsniptechnology.com wrote:
On Sun, 12 Apr 2020 20:10:37 -0700 (PDT), whit3rd <whit3rd@gmail.com> wrote:

On Sunday, April 12, 2020 at 4:51:33 PM UTC-7, jla...@highlandsniptechnology.com wrote:

Can no country ever release lockdown?

Sure. China and South Korea have basically done it.


But won't the epidemic return as soon as the lockdown is lifted? If
not, why not?

Lockdown removes personal contact that transmits the disease. The course of the illness has a period of transmissibility. When that period is past, lockdown is unnecessary.

Cool. Simultaneously lock every person on Earth in an individual cell,
with pure air and no contact with anyone else, in the dark, for six or
eight weeks, then let them all out. They will be hungry, especially
the children.

It doesn't work like that. Lock down reduces social contacts. It doesn't eliminate them. The aim is to reduce R0 well below one, rather than absolutely preventing any opportunity for infection.

In the dark of course because we'll have to shut down all the
electricity generating plants to lock up the operators.

Stupid.

Honestly, though, there could be a 'typhoid mary' scenario where someone, somewhere, just carries the disease but does NOT pass through that phase. Wuhan has opened back up, and we may be finding out that this is possible.

"Typhoid Mary" Mallon was incarcerated for many years. Her persistent transmission of the disease was anomalous, and such a phenomenon is unpredictable.

Typhoid is caused by a bacterium, not a virus.

--
Bill Sloman, Sydney
 
On 14/4/20 11:38 am, DecadentLinuxUserNumeroUno@decadence.org wrote:
Clifford Heath <no.spam@please.net> wrote in
news:zg8lG.412$ZL6.256@fx28.iad:

On 14/4/20 11:15 am, DecadentLinuxUserNumeroUno@decadence.org
wrote:
Clifford Heath <no.spam@please.net> wrote in
news:8O7lG.17$cJ2.2@fx47.iad:
On 14/4/20 10:34 am, John Larkin wrote:
In a Zynq sort of chip, one bailout is to move "code" from the
ARM CPUs into FPGA fabric. I'm often shocked by what people
can implement in VHDL.

I wish I had time and energy to get started with the Zynq, it's
such a nice way of doing things. Someone should do an "Arduino,
but for Zynq".
Close but no cigar!

https://www.youtube.com/watch?v=4XV8dlFwNd0

Can I FFT streaming data at 50MSPS on its FPGA, while consuming
less than 10W?

No, didn't think so.

I'm enjoying the ODroid modules, but 1-4GFLOP is a *lot* less than
the x00-GFLOP I'd like. It needs a decent GPU (like Jetson Nano)
or an FPGA really.
Not that little SBC, but... on a real PC...

Use a programmable graphics card, ya dope.

Over PoE? Ok, maybe 4PPoE. Did you miss where I said 10W?
 
On Monday, April 13, 2020 at 9:23:50 PM UTC-4, Clifford Heath wrote:
On 14/4/20 11:15 am, DecadentLinuxUserNumeroUno@decadence.org wrote:
Clifford Heath <no.spam@please.net> wrote in
news:8O7lG.17$cJ2.2@fx47.iad:
On 14/4/20 10:34 am, John Larkin wrote:
In a Zynq sort of chip, one bailout is to move "code" from the
ARM CPUs into FPGA fabric. I'm often shocked by what people can
implement in VHDL.

I wish I had time and energy to get started with the Zynq, it's
such a nice way of doing things. Someone should do an "Arduino,
but for Zynq".
Close but no cigar!

https://www.youtube.com/watch?v=4XV8dlFwNd0

Can I FFT streaming data at 50MSPS on its FPGA, while consuming less
than 10W?

No, didn't think so.

I'm enjoying the ODroid modules, but 1-4GFLOP is a *lot* less than the
x00-GFLOP I'd like. It needs a decent GPU (like Jetson Nano) or an FPGA
really.

I have no idea how much power the ARM uses, but it shouldn't be more than a watt or so, right? So why do you think the FFT will use so much power?

What size FFT? Are they overlapped? Where does the FFT result go? Sounds too fast for the ARM to do anything with it.

--

Rick C.

-+-- Get 1,000 miles of free Supercharging
-+-- Tesla referral code - https://ts.la/richard11209
 
On Monday, April 13, 2020 at 7:11:18 PM UTC-7, jla...@highlandsniptechnology.com wrote:

Cool. Simultaneously lock every person on Earth in an individual cell,
with pure air and no contact with anyone else, in the dark, for six or
eight weeks, then let them all out. They will be hungry, especially
the children.

Request denied.
Check the personals ads, though; something akin to your fantasy MAY be
available, at modest hourly rates.
 
jlarkin@highlandsniptechnology.com wrote in
news:o47a9ftjatfo34t6q5s6no4h5jrh2e6h41@4ax.com:

On Tue, 14 Apr 2020 10:51:13 +1000, Clifford Heath
<no.spam@please.net> wrote:

On 14/4/20 10:34 am, John Larkin wrote:
On Tue, 14 Apr 2020 10:03:18 +1000, Clifford Heath
<no.spam@please.net> wrote:

On 14/4/20 5:26 am, John Larkin wrote:
On Mon, 13 Apr 2020 12:13:35 +1000, Clifford Heath
<no.spam@please.net> wrote:

On 13/4/20 12:07 pm, jlarkin@highlandsniptechnology.com
wrote:
On Mon, 13 Apr 2020 09:33:14 +1000, Clifford Heath
<no.spam@please.net> wrote:

I should drag out tprof again, it still fills a need that's
substantially unmet by existing tools. It also contained a
dynamic memory profiling mode that was useful.

Sometimes we raise a port pin at the entry of a chunk of
code and drop it at the end, and look at that with an
oscilloscope. A routine can be optimized for worst-case
execution time, which usually matters more than average. A
little thinking can sometimes reduce worst-case by 5:1.

One port pin can be made to blip or change state at several
places in a segment of code. That can look cool on infinite
persistence.

Great way to look at exactly one thing at a time, and quite
unlike what a proper profiler does.

I have histogrammed the program counter. That can be a
revelation. See what's hogging the resources.

That's a trivial profiler, and comes built-in to Linux tools,
always has (since 1976 at least). It tells you nothing about
context switch or interrupt latencies though, because it only
samples during the program's assigned timeslots i.e. while
it's running.

CH

Nobody has guessed about the Linux timeouts I measured. Nobody
has estimated a reasonable IRQ rate for my tiny ARM. An
oscilloscope is good enough for things like that.

Sure! If it works for you, that's great.

On a running Linux system with normal desktop peripherals,
there is a great variety of different kinds of things going on.
In the histogram of latencies, it's very instructive to see the
different spikes for different interrupts (and try to identify
which is which), and to see the variance for each spike.
Kind-of a top-down view, which would augment your bottom-up
one.

CH

We were interested in how long and how often a tight application
loop might be suspended by the OS and drivers and stuff. Would a
profiler tell you that?

Exactly, that's what the histogram is. Put the contents of your
inner loop (or some fixed number of repetitions) in a profiled
function (called from the loop), and the shortest elapsed-time
spike is how long it takes to run if it's not interrupted. All
longer spikes are interrupts of one sort or another. You can see
how long each is, count how many, and see the variability in each
interrupt time (based on the width of the spike).

You do need a CPU with a fine-grained timer you can quickly read,
and you need to ensure that your inner-loop function runs for
significantly longer than the profiler overhead of doing that.

In a Zynq sort of chip, one bailout is to move "code" from the
ARM CPUs into FPGA fabric. I'm often shocked by what people can
implement in VHDL.

I wish I had time and energy to get started with the Zynq, it's
such a nice way of doing things. Someone should do an "Arduino,
but for Zynq".

CH.

It's a MicroZed. We have done several products and a few test sets
with a MicroZed as the compute platform. It has all the power
supplies, DRAM, SD card, Gbit Ethernet, USB, all that done, and
they bring 100 FPGA pins out on connectors. It runs Linux right
out of the box.

https://www.dropbox.com/s/al2x92st7ja7gry/DSC02865.JPG?raw=1

https://www.dropbox.com/s/r6sl0nh8zd9sm7r/ASP_SN1_top.jpg?raw=1

Looks like it WOULD be good, if it were not a lame 5-plus-year-old
design and the punks had not upgraded ANY of it.

Way too pricey, only a single GB of RAM (how lame), one Ethernet port
of only 100 Mb/s. Probably only one ARM core.

Pretty fuckin lame, actually. There has to be way better than
that.
I mean maybe it was a great idea for them years ago, but they needed
to re-invest their profits into an upgraded board, not just years of
the same fucking offering.
 
On 13/04/2020 19:16, Ricky C wrote:
On Monday, April 13, 2020 at 10:25:05 AM UTC-4, David Brown wrote:
On 12/04/2020 21:28, Ricky C wrote:
On Sunday, April 12, 2020 at 12:27:45 PM UTC-4, David Brown wrote:
On 12/04/2020 04:52, Ricky C wrote:

That's your straw man argument. We don't need a vaccine if we
can eliminate the virus. Do they still vaccinate for smallpox?


Smallpox was eliminated by vaccines - so we don't need vaccines for
it /now/.

So you are agreeing with me that if we eliminate the virus we won't
need a vaccine?

Yes - but I am also saying that you need a vaccine to eliminate it. I
don't think it will be practical to do so without a vaccine - it has
spread too far and wide to be contained.

That is the fallacy in your argument. Being spread "far and wide" means nothing. Once this infection is under control and it is eliminated in a given area, it only requires a few things to remain free of the virus. I've already said all that.

Those "few things" would include banning all travel into the virus-free
area. Clearly, that is never going to be practical. The reality will
involve a balance between reducing the risk of the infection
re-occurring in the area, and practicality.

But coronavirus? Yeah, it may have leapt from
another animal previously, but there is no indication we are being
reinfected by the same means. Get rid of it in humans and we will be
rid of it forever.

Hopefully, yes.

It is likely that this particular coronavirus was the result of a
mutation or combination from one or more other coronaviruses. Whether
that occurred in a human or an animal is unknown. But if it were an
animal and it hasn't spread to other animals, then maybe it is only
significantly infectious in humans and therefore could be eliminated.
(It has been found in some other animals, but only a few, and their
infectiousness is not yet known.)

Ok, so now you are changing your story of the vaccine being essential?

No, I haven't changed my story. The virus might be controllable without
a vaccine, it won't be eliminated without a vaccine. A vaccine alone
might not be enough to eliminate it if the virus survives in animals,
but it could stop it from being an issue for humans.


Not an easy task, but once we get the infection numbers down,
aggressive contact tracing has a lot less impact than the shutdown we
are presently in.


I don't believe it is realistic to get good enough testing and tracking
world-wide in order to eliminate it completely without mass vaccination.
It could certainly be controllable, but not eliminated.

I only care about "controlling" it in this country. I believe that all the more modern countries will contain it and eliminate it within their borders. The other countries will essentially let it "burn out" which will take some time, but after a year or so the infection rates should be so low as to not pose significant threats. Travel bans can be lifted and contact tracing be the only means needed.

Such a myopic "I only care about me and those around me" attitude is the
best guarantee of not getting control of the virus.

If you want to avoid the virus re-occurring in the USA (assuming you
first manage to eliminate it there without a vaccine - and that's a big
assumption), you have two choices. Seal off the borders of the USA
permanently with quarantines and comprehensive tests for all
international travel (good luck with your wall), or work towards
eliminating it /everywhere/ throughout the world.

Obviously the reality will be a compromise and a balance of risks - if
the disease can be eliminated from /most/ of the world, the risks of
travellers spreading it is much smaller, and it can be good enough to
live with. (That is the situation for many serious diseases, such as
Ebola.)

As for letting it "burn out", what exactly do you mean by that? We now
know that having the virus does not impart full immunity - we don't yet
know how much or how little you get. There are plenty of viruses for
which you get immunity for a year or so, and it flares up on two-year
cycles.

Maybe this will all be controllable and containable. Maybe better
control on travelling, better hygiene habits, permanent contact tracing
of populations, etc., will mean that as we see new outbreaks around the
world, there won't be much of it spreading in the USA. There are a lot
of unknown variables here - a lot of maybes. I am fairly confident that
a reasonable balance will be established in time.

But a good vaccine would make all the difference.

(This is my estimation and extrapolation, rather than a known fact.)

Remember, recovery from Covid-19 does not appear to give very good
immunity - so all you need is a few pockets of it hidden away somewhere,
and the potential for new outbreaks will be there.

(One can hope that they would be caught and isolated faster now, of course.)

Where did you see any indication that the disease does not leave the person immune? I have not seen that at all.

This is a crucial point. I think you'd agree with me more above if you
understood this.

For some diseases, after recovery you have long-term immunity with
antibodies. For other diseases, the immunity is short-term or only
partial. It is a common assumption - but often incorrect - that if
you've had a disease, you are immune for life (given a consistent
pathogen - flus and colds are caused by lots of related viruses). It
is this assumption that led to the "everyone's going to get it sooner or
later - let people get it and build up a herd immunity" strategy used by
some countries.

The assumption is no more than that - an assumption. It often does not
apply.

And it does not /seem/ to apply for Covid-19.

The studies are early as yet - we'll need many more, and it's impossible
to evaluate long-term immunity without waiting a long time. But
preliminary testing is showing unexpectedly low antibody counts in a
sizeable fraction of people who have recovered from the disease.

We don't yet know how this will work out. Maybe people will have enough
immunity that re-infections will be mild or symptomless. Maybe new
infections will boost the immune response to give a longer term immunity
after the second round. But maybe re-infections will leave people with
mild (or different) symptoms but still infectious.


<https://time.com/5810454/coronavirus-immunity-reinfection/>
<http://www.koreaherald.com/view.php?ud=20200412000213&np=3&mp=1>
<https://www.telegraph.co.uk/science/2020/04/08/coronavirus-immunity-test-faces-setback-recovered-patients-present/>
<https://abcnews.go.com/Health/questions-remain-covid-19-recovery-guarantee-immunity-reinfection/story?id=70085581>


Add to this mixture the risk of the virus mutating - the more people
that get it, and the more time that goes past with wide-scale infection,
the bigger the chance of it mutating to something that will then infect
people anew.


Measles was almost eliminated by vaccines, but there are so many
"anti-vaxxer" morons that the elimination failed, and there are
still outbreaks - so kids still need the vaccines. The same
applies to polio.

Covid-19 can, hopefully, be eliminated by vaccines. Whether it
will or not is another matter - but good vaccines will certainly
prevent it being a problem.

But can Covid-19 be eliminated /without/ a vaccine? I don't think
so. It is far too wide-spread for that. It can be kept at bay by
other measures, and some places can be kept free of it, but if
there is freedom of movement, outbreaks will always return.

Wide spread is not the issue. The shutdown will allow us to get the
numbers to a point that contact tracing can confine the disease.

If South Korea can do it, why can't we?


Because you are only one country. To eliminate the virus anywhere, it
needs to be eliminated /everywhere/. Maybe the USA can do the kind of
tracking that South Korea managed (I doubt it - Americans are not as
obedient. Freedom works both ways). But you won't get that same
tracking across India, Africa, war-torn Syria, Afghanistan, etc.

Ah, you are arguing semantics. Ok, fine. I'm talking about eliminating it in various countries that are capable. The rest of the world will deal with it for a while longer and have many more deaths, but even there this disease will pass once it infects enough people.

I won't say I am "arguing semantics", but the different terms and
viewpoints do at least partly explain why we appear to have different
opinions here.


Another aspect that is not yet understood is the long-term effects on
people that have had serious symptoms but recovered. Preliminary
indications are that it can involve not just lung damage, but damage to
the heart and liver (and this is not just for people who needed
ventilators).

I suppose it could mutate and become infectious again after passing through the lion's share of the world community. But technically that is a new disease and a vaccine won't protect from that either. Perhaps they will crack the code on developing a vaccine to a slowly evolving antigen on the virus, but we've not been able to do that with the cold or flu.

Even vaccines are no match for an evolving virus.

That depends on how the vaccine works (there are many paths to a
vaccine, and many are being researched concurrently for Covid-19).
Vaccines often target particular proteins on the virus shell - if a
mutation does not change that protein, the vaccine still works. It is
not uncommon that a vaccine can be of some benefit to a related or
mutated virus even if it is not a perfect match (that happens when the
estimates of yearly flu variants are not accurate). And for some
vaccine types, they can be made in a flexible and adaptable way - like
the flu vaccines, that can be adapted for different mutations in a few
months.

Vaccines are not perfect (especially when we don't have one), but they
are the best tool we have against viruses.
 
Clifford Heath <no.spam@please.net> wrote in news:ENalG.29335$p44.24755
@fx05.iad:

> Over PoE? Ok, maybe 4PPoE. Did you miss where I said 10W?

Now we are back to a simple ARM CPU could do the whole job. No need
for the Xilinx crap.

There are ARM chips with CUDA cores now.

I guess it would depend on where one puts one's watts as to which
would be faster at it.
 
David Brown <david.brown@hesbynett.no> wrote in
news:r743bo$2as$1@dont-email.me:

Such a myopic "I only care about me and those around me" attitude
is the best guarantee of not getting control of the virus.

Almost 2 million infected and a quarter of that in the US.

Even if ALL the other nations were lying or even ZERO reporting,
things do not bode here well because we were caught off guard by an
incompetent, incapable 'leader'. And we are STILL not doing the right,
necessary things, because of his continued missteps. He spent dayas
and is still spending time on his drug push.

Had we known he would do nothing, the states would have gotten more
off the ground on their own, despite it not being their job.

Trump failed. Big time.
 
On Tuesday, April 14, 2020 at 8:30:21 PM UTC+10, David Brown wrote:
On 13/04/2020 19:16, Ricky C wrote:
On Monday, April 13, 2020 at 10:25:05 AM UTC-4, David Brown wrote:
On 12/04/2020 21:28, Ricky C wrote:
On Sunday, April 12, 2020 at 12:27:45 PM UTC-4, David Brown wrote:
On 12/04/2020 04:52, Ricky C wrote:

<snip>

Yes - but I am also saying that you need a vaccine to eliminate it. I
don't think it will be practical to do so without a vaccine - it has
spread too far and wide to be contained.

That is the fallacy in your argument. Being spread "far and wide" means nothing. Once this infection is under control and it is eliminated in a given area, it only requires a few things to remain free of the virus. I've already said all that.


Those "few things" would include banning all travel into the virus-free
area.

It doesn't. You would have to quarantine any traveller coming in for 14 days - for Covid-19 infections anyway.

You might be able to shorten that by testing them for the virus after about a week.

> Clearly, that is never going to be practical.

Which is why people actually use quarantine.

<snip>

--
Bill Sloman, Sydney
 
On Tue, 14 Apr 2020 08:47:21 +0000 (UTC),
DecadentLinuxUserNumeroUno@decadence.org wrote:

jlarkin@highlandsniptechnology.com wrote in
news:o47a9ftjatfo34t6q5s6no4h5jrh2e6h41@4ax.com:

<snip>

Looks like it WOULD be good, if it were not a lame 5-plus-year-old
design and the punks had not upgraded ANY of it.

It's had several upgrades, including PicoZed.

Way too pricey, only a single GB of RAM (how lame), one Ethernet port
of only 100 Mb/s. Probably only one ARM core.

Where did you see that? It has Gbit ethernet and dual ARM cores and a
gigantic FPGA. The price is more than reasonable considering what the
Zynq chip and other stuff costs, less than what the parts would cost
us in small quantity. It's in the noise for our aerospace products.

It runs Linux right out of the box if you apply power. The development
software integrates the Linux OS, boot loader, C compiler, and FPGA
compiler. It makes a file with everything, that you copy to an SD card
and plug into the target. Imagine developing all that yourself. It all
worked for us first try.

Pretty fuckin lame, actually. There has to be way better than
that.

Design one.

I mean maybe it was a great idea for them years ago, but they needed
to re-invest their profits into an upgraded board, not just years of
the same fucking offering.

It's an eval board for the Zynq, not intended to be a profitable
product. It's entirely public and open source, down to the PCB
gerbers. Anybody can build it.

Works for us.



--

John Larkin Highland Technology, Inc

Science teaches us to doubt.

Claude Bernard
 
On Tue, 14 Apr 2020 10:12:16 -0700 (PDT), Lasse Langwadt Christensen
<langwadt@fonz.dk> wrote:

tirsdag den 14. april 2020 kl. 18.22.11 UTC+2 skrev jla...@highlandsniptechnology.com:
On Tue, 14 Apr 2020 08:47:21 +0000 (UTC),
DecadentLinuxUserNumeroUno@decadence.org wrote:

jlarkin@highlandsniptechnology.com wrote in
news:o47a9ftjatfo34t6q5s6no4h5jrh2e6h41@4ax.com:

<snip>

Looks like it WOULD be good, if it were not a lame 5-plus-year-old
design and the punks had not upgraded ANY of it.

It's had several upgrades, including PicoZed.


Way too pricey, only a single GB of RAM (how lame), one Ethernet port
of only 100 Mb/s. Probably only one ARM core.

Where did you see that? It has Gbit ethernet and dual ARM cores and a
gigantic FPGA. The price is more than reasonable considering what the
Zynq chip and other stuff costs, less than what the parts would cost
us in small quantity. It's in the noise for our aerospace products.

It runs Linux right out of the box if you apply power. The development
software integrates the Linux OS, boot loader, C compiler, and FPGA
compiler. It makes a file with everything, that you copy to an SD card
and plug into the target. Imagine developing all that yourself. It all
worked for us first try.

As usual the Linux guy is just yelling about things he doesn't understand.

If all we wanted to do was run c, there are much cheaper things
around. But a lot of our products absolutely need a big FPGA. The FPGA
does the real work, and the ARMs do slow stuff like Ethernet and BIST
and blinking LEDs.

You can even do FFTs and floating point and complex comm protocols in
the FPGA if you need to. DDS and filtering are trivial.


Pretty fuckin lame, actually. There has to be way better than
that.

Design one.

I mean maybe it was a great idea for them years ago, but they needed
to re-invest their profits into an upgraded board, not just years of
the same fucking offering.

It's an eval board for the Zynq, not intended to be a profitable
product. It's entirely public and open source, down to the PCB
gerbers. Anybody can build it.

Works for us.


http://www.myirtech.com/list.asp?id=502 has a very similar board for $99

if you buy the chip from Digikey, it costs $63 ...

What keeps impressing me is that I can buy boards, especially from
China, all built, for a fraction of what the parts would cost us.

MicroZed is about break-even on parts cost.



--

John Larkin Highland Technology, Inc

Science teaches us to doubt.

Claude Bernard
 
