FPGA sensitivities...

John Larkin

I have a time-critical thing where the signal passes through an XC7A15
FPGA and does a fair lot of stuff inside. I measured delay vs some
voltages:

1.8 V aux     no measurable DC effect

3.3 V VCCIO   no measurable DC effect

2.5 V VCCIO   ditto (key IOs are LVDS in this bank)

+1 V core     -10 ps per millivolt!

If I vary the trigger frequency, I can see the delay heterodyning
against the 1.8V switcher frequency, a few ps p-p maybe. Gotta track
that down.

A spritz of freeze spray on the chip had practically no effect on
delay through the chip, on a scope at 100 ps/div.

I expected sensitivity to core voltage, so we'll make sure we have a
serious, analog-quality voltage regulator next rev.

The temperature thing surprised me. I was used to CMOS having a
serious positive delay TC. Maybe modern FPGAs have some sort of
temperature compensation designed in?

We also have a Zynq on this board that crashes the ARM core
erratically, especially when the chip is hot. It might crash with an
MTBF of maybe half an hour if the chip reports 55C internally; the
FPGA part keeps going. At powerup boot from an SD card, it will always
configure the PL FPGA side, but will then fail to run our application
if the chip is hot. We're playing with DRAM and CPU clock rates to see
if that has much effect.
 
On 2020-09-25 15:16, John Larkin wrote:
+1 V core     -10 ps per millivolt!

Yecch, good to know--keeping the drift down to 1 ps requires the 1V
supply to be stable to 100 ppm total. I don't think I've ever needed to
use four-wire sensing on a logic supply, but you're probably going to
have to.
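
To spell out the arithmetic behind that 100 ppm figure (nothing here
beyond the numbers already quoted):

/* Sanity check of the 100 ppm figure: a measured -10 ps/mV sensitivity
 * on a nominal 1.0 V core rail, with a 1 ps drift budget. */
#include <stdio.h>

int main(void)
{
    const double sens_ps_per_mV = 10.0;  /* magnitude of -10 ps per millivolt */
    const double budget_ps      = 1.0;   /* allowed delay drift               */
    const double vcore_V        = 1.0;   /* nominal core supply               */

    double dv_mV = budget_ps / sens_ps_per_mV;      /* 0.1 mV of allowed movement */
    double ppm   = (dv_mV * 1e-3 / vcore_V) * 1e6;  /* = 100 ppm of the rail      */

    printf("allowed core-rail movement: %.2f mV (%.0f ppm of %.1f V)\n",
           dv_mV, ppm, vcore_V);
    return 0;
}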

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com
 
On Fri, 25 Sep 2020 16:49:51 -0400, Phil Hobbs
<pcdhSpamMeSenseless@electrooptical.net> wrote:

I don't expect to keep the delay stable to 1 ps over temperature.
Below 1 ps/°C would wipe out the competition. I do want to get the
jitter down into the single digits of ps RMS.

I boogered the +1 volt (Zynq core) supply voltage up to 1.1 volts and
the ARM crash thing went away. Or the crash temperature went way up.
So we have a timing problem.

My engineers are working from home, but one is burning me a new SD
card to try, with slower clocks in the ARM and DRAM. I'll pop over
soon and pick it up. She lives in a tiny rent-controlled apartment
above Dolores Park. Her next-door neighbor on that block is one of the
richest people in the world.
 
On Friday, 25 September 2020 at 23:53:06 UTC+2, John Larkin wrote:

yeh, I've not seen such issues, and that is even running at 90°C occasionally
 
On Fri, 25 Sep 2020 15:54:29 -0700 (PDT), Lasse Langwadt Christensen
<langwadt@fonz.dk> wrote:

I just did some clean startups with a chip temp of 100C... after
increasing the core voltage from 1.0 to 1.1. I'm thinking there may be
a time window or race condition somewhere, not a simple speed failure.
Maybe I could fix it by going down on voltage too.

The ARM and the fabric have different clocks and talk to one another a
lot. I think they use the Wishbone bus.

This is going to be tedious to find. I'll let other people do that.
 
On Saturday, 26 September 2020 at 01:16:43 UTC+2, John Larkin wrote:

The ARM and the fabric have different clocks and talk to one another a
lot. I think they use the Wishbone bus.

The interface to the ARM complex is the AXI bus, as are most of the
Xilinx cores. One interesting quirk of AXI is that it has no timeout,
so if you try to read and the slave doesn't respond, the ARM will do
nothing, waiting forever.

I'm assuming it runs Linux; have you tried to plug in a serial port to
see what it does? It is ARM software in the bootloader that configures
the PL, so something did run if it got that far, though it might only
use internal memory.
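
For what it's worth, a minimal sketch of that kind of console capture
from the host side (the /dev/ttyUSB0 path and the 115200-8N1 settings
are assumptions; use whatever the board's UART adapter actually
presents):

/* Minimal host-side console logger, assuming a USB-UART adapter at
 * /dev/ttyUSB0 and 115200-8N1 (both assumptions). It just dumps
 * whatever the Zynq prints during boot to stdout, no interpretation. */
#include <fcntl.h>
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/ttyUSB0", O_RDONLY | O_NOCTTY);
    if (fd < 0) {
        perror("open /dev/ttyUSB0");
        return 1;
    }

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                 /* raw bytes, no line editing or echo */
    cfsetispeed(&tio, B115200);
    cfsetospeed(&tio, B115200);
    tcsetattr(fd, TCSANOW, &tio);

    char buf[256];
    for (;;) {                       /* copy console traffic to stdout */
        ssize_t n = read(fd, buf, sizeof buf);
        if (n > 0) {
            fwrite(buf, 1, (size_t)n, stdout);
            fflush(stdout);
        }
    }
}

Watching where the prints stop (typically the FSBL, then U-Boot if
present, then the kernel, then the application) usually narrows down
which stage dies when the chip is hot.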
 
On Saturday, September 26, 2020 at 5:16:21 AM UTC+10, John Larkin wrote:

+1 core -10 ps per millivolt!

If I vary the trigger frequency, I can see the delay heterodyning
against the 1.8V switcher frequency, a few ps p-p maybe. Gotta track
that down.

Voltage switching logic is sensitive to supply voltage.

When I was working for the Nijmegen University science workshop, we got a complaint from the electron spin resonance group that the TTL-based timer that we'd built for them some years earlier (before my time) was producing exactly this kind of picosecond-level jitter.

I reworked the relevant board with surface mount components, which left me enough extra space to fit in an ECLinPS stage that resynchronised the output pulse to the 200MHz master clock. We then pushed the signal through an ECL-to-TTL converter and the jitter had gone away.

This impressed everybody no end, and we got the okay to do a better timer, with the timing edges coming out of an ECLinPS MC100EP195 - actually an ECLinPS MC100EL195, but that doesn't seem to exist any more.

https://www.onsemi.com/pub/Collateral/MC10EP195-D.PDF

The bulk of the data handling was to be done in TTL/HCMOS - I couldn't find any memory that I could cycle faster than 40MHz, so we ended up spitting out four delays' worth of data every 25nsec, which gave us enough precisely located timing edges in that 25nsec to keep the electron spin resonance guys happy.

Of course, once we'd got the design nailed down (on hundreds of pages of A4 schematics), the electron spin resonance group got defunded, so we never built anything.

--
Bill Sloman, Sydney
 
On Fri, 25 Sep 2020 17:47:35 -0700 (PDT), Lasse Langwadt Christensen
<langwadt@fonz.dk> wrote:

The interface to the ARM complex is the AXI bus, as are most of the
Xilinx cores. One interesting quirk of AXI is that it has no timeout,
so if you try to read and the slave doesn't respond, the ARM will do
nothing, waiting forever.

That was considered, but it's temperature and core-voltage dependent,
so it's not some dumb memory map error.

I'm assuming it runs Linux; have you tried to plug in a serial port to
see what it does? It is ARM software in the bootloader that configures
the PL, so something did run if it got that far, though it might only
use internal memory.

I'll let my FPGA and C guys poke into the internals. Yes, it runs
Linux. I have mostly verified that the ARM dies hard, not still
running some of the simpler tasks.

As I understand it, it pulls a Xilinx-generated boot loader off the SD
card, and that reads the config file and loads up the FPGA and then
our C code. That runs out of CPU-local SRAM and it always loads the
FPGA. But if it's warm, our application then crashes, instantly or
within an hour. It runs out of DRAM and bangs the FPGA registers a
lot. Our code programs a second FPGA, and if the application crashes
at startup, that one doesn't program.

So, it could be a problem with DRAM, or it could be something wrong
with the FPGA bus interface.

I was wondering if anyone else had a problem like this.



--

John Larkin Highland Technology, Inc

Science teaches us to doubt.

Claude Bernard
 
On Fri, 25 Sep 2020 19:41:48 -0700, jlarkin@highlandsniptechnology.com
wrote:

I was wondering if anyone else had a problem like this.

I don't know if this is even remotely related to your architecture,
but I had an ARM (NXP) part randomly fail at higher temperatures,
though below the specification (70C), and the fix was to add 1 flash
read wait-state. DRAM can have those too, of course.
 
On 26.09.20 at 04:41, jlarkin@highlandsniptechnology.com wrote:

I was wondering if anyone else had a problem like this.

That leads to the question of whether it happens on more
than one board.


Gerhard
 
On Sat, 26 Sep 2020 07:25:36 +0200, Gerhard Hoffmann <dk4xp@arcor.de>
wrote:

That leads to the question of whether it happens on more
than one board.

Four.



--

John Larkin Highland Technology, Inc

Science teaches us to doubt.

Claude Bernard
 
On 9/25/20 7:41 PM, jlarkin@highlandsniptechnology.com wrote:
> I was wondering if anyone else had a problem like this.

We're currently debugging a Zynq 7045 that seems to lose some of its
register programming when first coming up. Re-writing after boot seems
to be our only workaround to the problem. So far just one board is
exhibiting the problem. I don't believe it is a heat problem, as we have
a heatsink installed and air blowing on it, plus it happens on a cold start.

Buzz
 
On Wed, 30 Sep 2020 20:56:07 -0700, Buzz McCool
<buzz_mccool@yahoo.com> wrote:

You might test to see if it is temperature sensitive. Just spritz it
with a heat gun and freeze spray.

We're still seeing our problem on some boxes. It looks like the
boot-time stuff, which runs in CPU SRAM, works, but then Linux crashes
when the chip is warm.

Vcc_core = 1.1 volts fixes it. 0.92 breaks it hard. People are still
hunting.

The tools for tracking down things like this are few.

Might be a DRAM problem, but it runs the DRAM test OK.



--

John Larkin Highland Technology, Inc

Science teaches us to doubt.

Claude Bernard
 
On 01.10.20 at 16:50, jlarkin@highlandsniptechnology.com wrote:

The tools for tracking down things like this are few.

Might be a DRAM problem, but it runs the DRAM test OK.

Back in Z80 days I knew someone who could run DRAM tests
all day long without a single error.
And that was the only thing he could run on this Z80.

Turned out the Z80 supplies 7 bits for refresh and he had
bought 64K RAMs with 8-bit refresh. And LOTs of them.

The DRAM test program did its own refresh by addressing
all possible row addresses.


Cheers, Gerhard
 
On 1.10.20 18.24, Gerhard Hoffmann wrote:

This reminds me of a CP/M computer we built using a Z80
and DRAMs (with proper 7-bit refresh). The computer booted
fine and ran as long as it was not left idle for longer
than some seconds. The idle period killed it totally.

After searching for the cause, it proved that the refresh
circuitry was totally broken (a bad chip), so the DRAMs
did not forget in milliseconds, but seconds.

--

-TV
 
On Thu, 1 Oct 2020 18:48:19 +0300, Tauno Voipio
<tauno.voipio@notused.fi.invalid> wrote:

Sometimes a DRAM can remember for many seconds without refresh.

We will look into possible refresh issues. We hadn't considered that.

Worst case, we could maybe run a little program that did refresh.
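
A rough sketch of what such a program might look like (everything
specific here -- the 64 MB working set, the 8 KB row stride, the 1 ms
pacing -- is an assumption for illustration, not something from this
thread). The idea is just to read one word out of every row often
enough that the controller keeps issuing activates; the buffer has to
be much larger than the caches so the reads actually reach the DRAM.

/* Hedged sketch of a user-space "software refresh" loop. */
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>

#define BUF_BYTES  (64u * 1024u * 1024u)   /* assumed working set, >> cache */
#define ROW_STRIDE (8u * 1024u)            /* assumed bytes per DRAM row    */

int main(void)
{
    volatile uint8_t *buf = malloc(BUF_BYTES);
    if (!buf)
        return 1;

    for (size_t i = 0; i < BUF_BYTES; i++)  /* fault all the pages in */
        buf[i] = (uint8_t)i;

    volatile uint8_t sink = 0;
    for (;;) {
        for (size_t off = 0; off < BUF_BYTES; off += ROW_STRIDE)
            sink ^= buf[off];               /* one read per (assumed) row */
        usleep(1000);                       /* pace the sweep */
    }
}

Whether this would actually help depends on how the controller's own
refresh is configured; it is a diagnostic crutch, not a fix.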




--

John Larkin Highland Technology, Inc

Science teaches us to doubt.

Claude Bernard
 
On Thursday, 1 October 2020 at 07:50:45 UTC-7, jla...@highlandsniptechnology.com wrote:
...
We're still seeing our problem on some boxes. It looks like the
boot-time stuff, which runs in CPU SRAM, works, but then Linux crashes
when the chip is warm.

Vcc_core = 1.1 volts fixes it. 0.92 breaks it hard. People are still
hunting.
...

I had a tricky problem with somewhat similar symptoms (I don't remember whether it was temperature-sensitive), but it also was cured by increasing the core voltage.

We worked with Xilinx on that, and it seems that there can be package resonances in the 30-50MHz range (this was a Virtex 5 in a large package).
Our system was running with a 168MHz clock and 5 time-slots, but one of the time-slots had no significant processing. The result was that we had 30A pulses in the supply current at ~33MHz.

I did a board spin to increase external decoupling, without any improvement.

The fix we took into production that avoided the problem was to process random data during the fifth time slot to reduce the supply current perturbations.
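
For what it's worth, the figures above hang together: a 5-slot frame
on a 168 MHz clock repeats at 168/5 = 33.6 MHz, right in the quoted
30-50 MHz package-resonance band, which is why the one lightly loaded
slot produced such a clean ~33 MHz current tone.

/* Quick check of the numbers above: repetition rate of a 5-slot frame
 * clocked at 168 MHz. */
#include <stdio.h>

int main(void)
{
    const double clk_MHz = 168.0;
    const int    slots   = 5;

    printf("frame rate: %.1f MHz\n", clk_MHz / slots);  /* 33.6 MHz */
    return 0;
}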

kw
 
In article <rl4to3$crq$1@dont-email.me>,
Tauno Voipio <tauno.voipio@notused.fi.invalid> wrote:

After searching for the cause, it proved that the refresh
circuitry was totally broken (a bad chip), so the DRAMs
did not forget in milliseconds, but seconds.

The official spec for 4164 DRAM chips says "refresh at
least every 4ms".

In an Oric (6502A based) computer, a ULA is used to
provide memory refresh as a side effect of building the
TV picture. Suppressing the memory refresh by holding
this ULA in a "reset" state for a second or so seems
to have no effect on memory contents, even though this
also stops the system 1MHz clock.

Everything comes back working when the reset is released.

It takes at least a couple of seconds of refresh/clock
loss for corruption of screen memory contents or the
system to crash (bad data/bad code in RAM, loss of
dynamic registers in the 6502A).

Didn't expect that, so DRAM *is* more resilient than you'd
think.
--
--------------------------------------+------------------------------------
Mike Brown: mjb[-at-]signal11.org.uk | http://www.signal11.org.uk
 
On Fri, 2 Oct 2020 10:56:43 +0100 (BST), mjb@signal11.invalid (Mike)
wrote:

We're using a Micron 64G DDR BGA part, which is "self refreshing",
whatever that means. The data sheet is 132 pages. But there are a
jillion parameters that the Vivado software uses to build the DRAM
interface, so maybe we have one of those wrong. My guys like to tune
for performance, and I like to tune for reliable and good enough.

An older version of this product used a 68332 CPU running at 16 MHz.
Now we have dual ARM cores running at 600 MHz, with cache. We don't
need to push anything.



--

John Larkin Highland Technology, Inc

Science teaches us to doubt.

Claude Bernard
 
On Friday, 2 October 2020 at 16:15:52 UTC+2, jla...@highlandsniptechnology.com wrote:

We're using a Micron 64G DDR BGA part, which is "self refreshing"
whatever that means.

Is it not the same part as on the MicroZed? Try loading the standard
Linux image and see if that also crashes.
 
