John Larkin
Nice idea. But I do need the 16 MHz to be long-term correct, although
duty cycle and edges could jitter a bit and not cause problems. So I
could build an internal ring oscillator and use that to resync the
incoming 16 MHz clock (dual-rank d-flops again) on the theory that the
input glitches will never last anything like the 300-ish MHz resync
clock period. And that's even easier.
Thanks for the input,
John

On Mon, 27 Mar 2006 22:35:31 +0200, "Antti Lukats"
<antti@openchip.org> wrote:

"John Larkin" <jjlarkin@highNOTlandTHIStechnologyPART.com> wrote in message
news:6jgg221p6iuffrbbb6dtml39fn3u9sdu4k@4ax.com...
We have a perfect-storm clock problem. A stock 16 MHz crystal
oscillator drives a CPU and two Spartan3 FPGAs. The chips are arranged
linearly in that order (xo, cpu, Fpga1, Fpga2), spaced about 1.5"
apart. The clock trace is 8 mils wide, mostly on layer 6 of the board,
the bottom layer. We did put footprints for a series RC at the end (at
Fpga2) as terminators, just in case.
Now it gets nasty: for other reasons, the ground plane was moved to
layer 5, so we have about 7 mils of dielectric under the clock
microstrip, which calcs to roughly 60 ohms. Add the chips, a couple of
tiny stubs, and a couple of vias, and we're at 50 ohms, or likely
less.
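[Editor's note: the "roughly 60 ohms" figure checks out against the standard IPC-2141 surface-microstrip approximation. A quick sketch, assuming garden-variety FR-4 at er = 4.3 and 1-oz copper (neither is stated in the post):

```python
import math

def microstrip_z0(w_mils, h_mils, t_mils=1.4, er=4.3):
    """IPC-2141 approximation for surface microstrip impedance.
    w = trace width, h = dielectric height, t = copper thickness,
    all in mils. er and t here are assumed values, not from the post."""
    return (87.0 / math.sqrt(er + 1.41)) * math.log(
        5.98 * h_mils / (0.8 * w_mils + t_mils))

# 8 mil trace over 7 mils of dielectric, as described above
z0 = microstrip_z0(8, 7)   # about 61 ohms for the bare trace
```

Loading from the chip inputs, stubs, and vias then pulls the effective impedance down toward the 50 ohms mentioned.]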
And the crystal oscillator turns out to be both fast and weak. On its
rise, it puts a step into the line of about 1.2 volts in well under 1
ns, and doesn't drive to the Vcc rail until many ns later. At Fpga1,
the clock has a nasty flat spot on its rising edge, just about halfway
up. And it screws up, of course. The last FPGA, at the termination, is
fine, and the CPU is ancient 99-micron technology or something and
couldn't care less.
Adding termination at Fpga2 helps a little, but Fpga1 still glitches
now and then. If it's not truly double-clocking then the noise margin
must be zilch during the plateau, and the termination can't help that.
One fix is to replace the xo with something slower, or kluge a series
inductor, 150 nH works, just at the xo output pin, to slow the rise.
Unappealing, as some boards are already in the field; they tested
fine, but we're concerned they may be marginal.
So we want to deglitch the clock edges *in* the FPGAs, so we can just
send the customers an upgrade rom chip, and not have to kluge any
boards.
Some ideas:
1. Use the DCM to multiply the clock by, say, 8. Run the 16 MHz clock
as data through a dual-rank d-flop resynchronizer, clocked at 128 MHz
maybe, and use the second flop's output as the new clock source. A
Xilinx FAE claims this won't work. As far as we can interpret his
English, the DCM is not a true PLL (ok, then what is it?) and will
propagate the glitches, too. He claims there *is* no solution inside
the chip.
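[Editor's note: the dual-rank resampling in idea 1 can be modeled in a few lines (a toy behavioral sketch, not a netlist; in real silicon the first rank also absorbs metastability, which a clean digital model can't show). It illustrates the FAE's caution: resampling only retimes the stream, so a runt that happens to land on sample edges still comes through:

```python
def dual_rank(async_in):
    """Two cascaded D-flops clocked by the fast clock; the second
    rank's Q becomes the new internal clock source."""
    ff1 = ff2 = 0
    out = []
    for d in async_in:
        ff2, ff1 = ff1, d   # both flops update on the same fast edge
        out.append(ff2)
    return out

# A 16 MHz half-period spans 8 fast-clock samples here (illustrative
# granularity). The 2-sample runt in the middle lands on sample edges,
# so it is passed through retimed -- the resampling only helps when a
# glitch is too narrow to be sampled at all.
wave = [0] * 8 + [1] * 2 + [0] * 8 + [1] * 8
retimed = dual_rank(wave)
```

The win is statistical: at a ~128 MHz sample rate a sub-nanosecond runt is usually missed, but nothing guarantees it.]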
2. Run the clock in as a regular logic pin. That drives a delay chain,
a string of buffers, maybe 4 or 5 ns worth; call the input and output
of the string A and B. Next, set up an RS flipflop; set it if A and B
are both high, and clear it if both are low. Drive the new clock net
from that flop. Maybe include a midpoint tap or two in the logic, just
for fun.
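[Editor's note: idea 2 is essentially a digital hysteresis filter, and its behavior is easy to sanity-check with a toy discrete-time model. The step granularity and the 4-tap delay below are arbitrary illustrations, not placed-and-routed reality:

```python
def deglitch(samples, delay):
    """A = input, B = input delayed by `delay` steps (the buffer
    string). Set the RS flop when A and B are both high, clear it
    when both are low, otherwise hold."""
    q = 0
    out = []
    line = [0] * delay           # models the delay chain, initially low
    for a in samples:
        b = line.pop(0)          # B = input `delay` steps ago
        line.append(a)
        if a and b:              # both ends high -> set
            q = 1
        elif not a and not b:    # both ends low -> clear
            q = 0
        out.append(q)            # disagree -> hold; runts are swallowed
    return out

# A 2-step runt, then a genuine rising edge that stays high.
glitchy = [0] * 5 + [1, 1] + [0] * 5 + [1] * 10
clean = deglitch(glitchy, delay=4)
```

Any pulse shorter than the A-to-B delay never sees both ends agree, so the flop holds through it; the price is the delay-chain latency added to every legitimate edge.]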
3. Program the clock logic threshold to be lower. It's not clear to us
if this is possible without changing Vccio on the FPGAs. Marginal at
best.
Any other thoughts/ideas? Has anybody else fixed clock glitches inside
an FPGA?
John
You can run a genlocked NCO clocked from an in-fabric on-chip ring
oscillator. Your internal recovered clock will have jitter of +/-1
clock period of the ring oscillator (which could be as high as about
370 MHz in the S3), and you might need some sync logic to ensure that
the 16 MHz clock edges are only used to adjust the NCO.
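[Editor's note: the genlocked-NCO suggestion can be sketched as a phase accumulator running from the ring oscillator, nudged at each 16 MHz reference edge. Everything here, the accumulator width, the bang-bang nudge size, and the 23:1 clock ratio, is an illustrative assumption:

```python
ACC_BITS = 16
WRAP = 1 << ACC_BITS

def nco(ref_edges, n_steps, inc):
    """ref_edges: set of fast-clock step indices carrying a 16 MHz
    reference edge. Returns the recovered clock (accumulator MSB)."""
    acc = 0
    out = []
    for t in range(n_steps):
        if t in ref_edges:
            # bang-bang genlock: pull the rollover phase toward zero
            if acc > WRAP // 2:
                inc += 1          # rollover is late -> speed up
            elif acc > 0:
                inc -= 1          # rollover is early -> slow down
        acc = (acc + inc) % WRAP
        out.append(acc >> (ACC_BITS - 1))   # MSB = recovered clock
    return out

# ~370 MHz ring / 16 MHz reference ~ 23 fast clocks per reference period
ref = set(range(0, 2300, 23))
clk = nco(ref, 2300, inc=WRAP // 23)
```

Over 2300 fast-clock steps the recovered clock completes about 100 cycles, i.e. it tracks the reference frequency while jittering by one ring-oscillator period around each edge.]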