Austin Lesea
Hello from the SEU Desk:
Peter defended us rather well, but how can anyone seriously argue against
real data with babble and drivel?
Well, after 919 equivalent device years of experimentation at sea level,
in Albuquerque (~5,100 feet), and at the White Mountain Research Center
(12,500 feet), the Rosetta Experiment* on the three groups of 100 2V6000s
has logged a grand total of 45 single soft error events. That works out
to an MTBF of 20.4 years, or 5335 FITs (FITs and MTBF are related by a
simple formula: one is the mean time between failures, the other is the
number of failures per billion device hours).
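For anyone who wants to check that conversion, here is a quick
back-of-the-envelope sketch in plain Python. The 8760-hours-per-year
constant and the rounding are mine, and the result lands within a few
percent of the figures quoted above; the exact number depends on the
precise exposure hours used.

    # Rough check: turn device-years and observed events into MTBF and FITs.
    HOURS_PER_YEAR = 8760          # assumed constant, not from the post

    device_years = 919             # equivalent device years logged by Rosetta
    events = 45                    # single soft error events observed

    mtbf_hours = device_years * HOURS_PER_YEAR / events
    mtbf_years = mtbf_hours / HOURS_PER_YEAR     # about 20.4 years
    fits = 1e9 / mtbf_hours                      # failures per 10^9 device-hours

    print(f"MTBF ~ {mtbf_years:.1f} years, ~ {fits:.0f} FITs")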
In actual tests done by third parties, it takes from 6 to 80 soft errors
(flips), with about 10 flips on average, to affect a standard
non-redundant design in our FPGA. This is just common sense: for years
the ASIC vendors trashed FPGAs because they "use 30 transistors to do the
job of just one!" Guess what? What was our "downfall" is now a strength!
True. So that means that a 2V6000 at sea level gets a logic-disturbing
hit about once every 200 years, which is 535 FITs (soft errors that
actually affect the customer's design) for a 6-million-gate FPGA.
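That derating step is just division, but here it is spelled out the same
way (the ~10 flips per functional hit is the average quoted above; the
hours-per-year constant and the rounding are mine):

    # Derate the raw upset rate to upsets that actually disturb a typical
    # non-redundant design (about 10 configuration flips per functional hit).
    raw_fits = 5335                  # raw single-event upset rate from Rosetta
    flips_per_functional_hit = 10    # average from third-party testing

    design_fits = raw_fits / flips_per_functional_hit    # about 535 FITs
    design_mtbf_years = 1e9 / design_fits / 8760          # roughly 200+ years

    print(f"~{design_fits:.0f} FITs, one design-affecting hit every ~{design_mtbf_years:.0f} years")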
The biggest part A**** makes is 6 times smaller, so for our 2V1000 we get
about 90 FITs. For a 3S1000 it is 30% better (see below), or 63 FITs. OK
A****, tell us: what is your actual, as-measured FIT rate for your
largest device? Go ahead, I'd like to know. How many device years do you
have to back it up? 1000 actual years? Nope. Didn't think so.
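The same scaling, written out. Treating the rate as roughly proportional
to device size is an assumption on my part for illustration; the real
sensitivity depends on the design and the bitstream:

    # Scale the design-affecting FIT rate for a smaller device and 90 nm.
    v2_6000_fits = 535

    v2_1000_fits = v2_6000_fits / 6        # device about 6x smaller -> ~90 FITs
    s3_1000_fits = v2_1000_fits * 0.70     # 90 nm about 30% better -> ~63 FITs

    print(round(v2_1000_fits), round(s3_1000_fits))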
You know, if you want to use FITs, we'll use FITs. But I am afraid it
will give those spreading the nonsense fits (pun intended).
Now, if you use triple-redundant logic, checksums, and ECC codes, you can
design so that you NEVER HAVE AN UPSET.
As has been published, Xilinx FPGAs are on the Mars landers (on their way
there now), so someone, at least, is not worried about upsets. Even
periodic reconfiguration (scrubbing) eliminates a major portion of the
probability of logic-affecting upsets. Virtex-II and Virtex-II Pro have
ways to actually check for, detect, and correct the flipped bits using
the ICAP feature. For details, contact your FAE. If 535 FITs is
completely unacceptable for that critical application of yours, this
brings it down to 0 FITs from soft errors.
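To see why scrubbing plus voting buys you so much, here is my own
back-of-the-envelope Poisson model, not a Xilinx figure: with triple
redundancy, a functional failure needs at least two upsets to accumulate
before a scrub cleans them out, and the one-second scrub interval below
is purely an illustrative assumption:

    # Rough illustration (my own model): chance of two or more upsets
    # accumulating within one scrub interval, assuming Poisson arrivals.
    raw_fits = 5335                  # raw upset rate quoted above
    lam = raw_fits / 1e9             # upsets per device-hour
    scrub_hours = 1.0 / 3600         # assumed scrub interval: one second

    mu = lam * scrub_hours           # expected upsets per scrub interval (tiny)
    p_double = mu**2 / 2             # leading Poisson term for P(2 or more upsets)

    # This still ignores that both upsets would have to land in the same
    # voted triplet, which shrinks the probability even further.
    print(f"P(two upsets before the next scrub) ~ {p_double:.1e}")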
Some of our customers have now qualified Virtex-II Pro as the ONLY
solution to the soft error problem, since ASICs can't solve it (at least
not as easily as we have), and other FPGA vendors do not have the facts
to back up their claims. That is quite new: the Xilinx FPGA is the only
safe design choice to make? Maybe it is, right now, as it is the only
choice where all of the variables are measured and understood, and where
techniques exist to reduce the risk to near zero, or to whatever level is
acceptable.
Oh, and yes, the 90 nm technology is now 30% better than the 150 nm
technology (15% better than the 130 nm technology), as shown by our tests
(presented at the MAPLD conference last month).
So, you can run around blathering on about data taken by grad students
(no offense, I was one once), or you can look at our real-time results
from three locations on 300 devices being tested 24 by 7, or talk to us
about our proton and neutron beam tests, or about ways to design to get
the desired level of reliability for your system.
And you may want to consider going with the vendor who has been actively
working on soft error mitigation for more than five years now, and who
has real results to show for it.
Let Moore's Law Rule!
Austin
*The Rosetta Stone was the key that unlocked ancient Egyptian wisdom to
the world: the stone carried an inscription in three languages, which
allowed archeologists to decipher ancient Egyptian writings. The Rosetta
FPGA Experiment is designed to translate beam testing (proton or neutron)
into actual atmospheric or high-altitude results, without having to build
huge arrays of FPGAs and send them to mountain tops around the world to
get real results. It was also designed to answer the basic questions of
altitude effects, position effects, and how smaller device geometries
behave in the real world.