Original (5V) Xilinx Spartan?

Peter Alfke wrote:
I have a new idea for how to simplify the metastable explanation and
calculation.
Following Albert Einstein's advice that everything should be made as
simple as possible, but not any simpler:
Quite agree.

We all agree that the extra metastable delay occurs when the data input
changes in a tiny timing window relative to the clock edge. We also
agree that the metastable delay is a strong function of how exactly the
data transition hits the center of that window.
That means we can define the width of the window as a function of the
expected metastable delay.

Measurements on Virtex-IIPro flip-flops showed that the metastable
window is:

• 0.07 femtoseconds for a delay of 1.5 ns.
• The window gets a million times smaller for every additional 0.5 ns of delay.

Every CMOS flip-flop will behave similarly. The manufacturer just has
to give you the two parameters (x femtoseconds at a specified delay,
and y times smaller per ns of additional delay).

The rest is simple math, and it even applies to Jim's question of
non-asynchronous data inputs. I like this simple formula because it
directly describes the actual physical behavior of the flip-flop, and
gives the user all the information for any specific systems-oriented
statistical calculations.
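[A minimal sketch of that "simple math", not code from the thread: for
asynchronous data, each data edge lands uniformly within the clock
period, so it hits a window of width W with probability W * f_clk,
giving MTBF = 1 / (f_clk * f_data * W). The defaults below are the
Virtex-IIPro numbers quoted above.]

```python
# Illustrative sketch of the capture-window MTBF calculation
# (assumed standard model, not code posted in the thread).

def window_width(delay_ns, x_fs=0.07, ref_delay_ns=1.5,
                 shrink=1e6, step_ns=0.5):
    """Capture-window width in seconds for a given settling delay.

    x_fs:   window width in femtoseconds at the reference delay
    shrink: factor the window shrinks by per step_ns of extra delay
    """
    extra_steps = (delay_ns - ref_delay_ns) / step_ns
    return x_fs * 1e-15 / shrink ** extra_steps

def mtbf(f_clk_hz, f_data_hz, delay_ns):
    """Mean time between failures, in seconds, for async data."""
    return 1.0 / (f_clk_hz * f_data_hz * window_width(delay_ns))

# Example: 100 MHz clock, 10 MHz async data, 2.0 ns settling budget.
print(mtbf(100e6, 10e6, 2.0))   # ~1.4e7 s, i.e. months between events
```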
E.g.: take a system that is not randomly async, but by some quirk of
nature actually has two crystal sources, one for the clock and another
for the data. These crystals are quite stable, but have a slow
relative phase drift due to their 0.5 ppm mismatch.

Now let's say I want to know not just the statistical average, but to
get some idea of the peak - the real failure mode is not 'white noise',
but has distinct failure peaks near 'phase lock', and nulls clear of
this. It seems management wants to know how bad it can get, and for how
long, not just 'how good it is, on average', so we'll humour them :)

That's a "specific systems-oriented statistical calculation".
Please demonstrate how to apply the above x & y, to give me
all the information I seek.

-jg
 
Interesting.
Let's say we have two frequencies, 100 MHz even, and 100.000 050 MHz,
which is 50 Hz higher. These two frequencies will beat, or wander over
each other, 50 times per second.
Assuming no noise and no jitter, the relative phase slips one full
10 ns period during each 20 ms beat, which at 100 MHz is 2 million
clock cycles; so each step is 10 ns divided by 2 million = 5
femtoseconds. That is roughly 70 times wider than the 0.07 fs capture
window for a 1.5 ns delay. Therefore we can treat this case the same
way as my original case with totally asynchronous frequencies. I think
even jitter has no bearing on this, because it also would be far, far
wider than the capture window. That means this slowly drifting case is
not special at all, except that metastable events would be spaced
multiples of 20 ms (1/50 Hz) apart. But that's irrelevant for events
that occur on average once per year or millennium.
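[Spelled out numerically, as an illustrative sketch of the arithmetic
above:]

```python
# Per-cycle phase step of two clocks 50 Hz apart, vs. the window.
f_clk  = 100e6     # sampling clock, Hz
f_beat = 50.0      # offset between the two crystals, Hz (0.5 ppm)

period          = 1.0 / f_clk        # 10 ns
cycles_per_beat = f_clk / f_beat     # 2,000,000 cycles per 20 ms beat
step            = period / cycles_per_beat
print(step)                          # 5e-15 s = 5 femtoseconds

window = 0.07e-15                    # capture window at 1.5 ns delay
print(step / window)                 # ~71: each step jumps clear over it
```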

Now, you will never, under any circumstances, get a guarantee that a
given delay will not be exceeded, since by accident the flip-flop
might go perfectly metastable and stay there for a long time. It is
just an extremely small probability, expressed as a very, very long
MTBF. That is the fundamental nature of metastability.

To repeat, I like the capture window approach because it is independent
of data rate and clock rate.
Greetings, and thanks for the discussion. It helped me clear up my mind...

Peter Alfke
=================================
Peter Alfke wrote:
snip
To repeat, I like the capture window approach because it is independent
of data rate and clock rate.

I don't want to beat a dead horse, but I do want to make clear that the
capture window model does not eliminate the frequency of the clock and
data from the failure rate calculation. The basic probability of a
failure from any single event is clearly explained by the window model,
but to get a failure rate you need to know the clock rates to know how
often the possible event is tested, so to speak. If you double
either the clock or the data rate, you double the failure rate.
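[In symbols, reusing the hypothetical model from the sketch above: the
failure rate is R = f_clk * f_data * W, so doubling either frequency
doubles R and halves the MTBF.]

```python
# Sketch of rickman's point: the event rate scales with both rates.
def failure_rate(f_clk, f_data, window_s=0.07e-15):
    """Metastable failures per second for a capture window of width
    window_s (here the 0.07 fs window quoted for a 1.5 ns delay)."""
    return f_clk * f_data * window_s

base = failure_rate(100e6, 10e6)
print(failure_rate(200e6, 10e6) / base)  # 2.0: doubling the clock rate
print(failure_rate(100e6, 20e6) / base)  # 2.0: doubling the data rate
```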

--

Rick "rickman" Collins

rick.collins@XYarius.com
Ignore the reply address. To email me use the above address with the XY
removed.

Arius - A Signal Processing Solutions Company
Specializing in DSP and FPGA design URL http://www.arius.com
4 King Ave 301-682-7772 Voice
Frederick, MD 21701-3110 301-682-7666 FAX
 
rickman wrote:
snip
If you double either the clock or the data rate, you double the
failure rate.

I'm collecting empirical results - do you have any URLs, especially
covering the 'double either' aspect?

-jg
 
"Jim Granville" <jim.granville@designtools.co.nz> wrote in message
news:3F60E3A8.116A@designtools.co.nz...
Peter Alfke wrote:
snip
Measurements on Virtex-IIPro flip-flops showed that the metastable
window is:

• 0.07 femtoseconds for a delay of 1.5 ns.
• The window gets a million times smaller for every additional 0.5 ns of
delay.

Every CMOS flip-flop will behave similarly. The manufacturer just has
to give you the two parameters (x femtoseconds at a specified delay,
and y times smaller per ns of additional delay).
snip
That's a "specific systems-oriented statistical calculation".
Please demonstrate how to apply the above x & y, to give me
all the information I seek.

-jg
The asynchronous system produces an even distribution across the
sampling clock cycle. The synchronous system with arbitrary phase
gives you a lumped distribution at the phase offset.
The critical point, and the reason you won't see a system consistently
going metastable, is that there is *significant* jitter in the
sampling and data clocks relative to the metastability window.

Determine the distribution of the data edge relative to the sample
point. The peak of this (Gaussian?) distribution will be the
worst-case error point. What percentage of that statistical
distribution falls within the 0.07 femtosecond window? This provides
the "worst case" for management or for engineers.
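[A sketch of that percentage calculation; the Gaussian model and the
10 ps rms jitter figure below are assumptions, not numbers from the
thread:]

```python
import math

def fraction_in_window(window_s, sigma_s, offset_s=0.0):
    """Fraction of a Gaussian edge-timing distribution (rms jitter
    sigma_s, centered offset_s from the window) that lands inside a
    capture window of width window_s."""
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    lo = (offset_s - window_s / 2.0) / sigma_s
    hi = (offset_s + window_s / 2.0) / sigma_s
    return phi(hi) - phi(lo)

# Worst case: distribution peak dead-centered on the 0.07 fs window,
# with an assumed 10 ps rms of combined clock and data jitter.
print(fraction_in_window(0.07e-15, 10e-12))  # ~2.8e-6 per sampled edge
```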

It may not have been as easy when the metastability window was much larger
than the system jitter.
 
