David Ashley
Guest
Hi,
In the OpenCores DDR implementation, the author uses a PLL
to generate a clock with multiple phases. The PLL outputs
true and inverted signals in perfect sync.
However, the author doesn't use the inverted output of
the PLL -- to generate an inverted clock he inverts the
true clock output and uses that.
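For concreteness, here is a rough sketch of the two approaches
(my own illustration in Verilog, not the actual OpenCores source;
all the names are made up):

// Sketch only -- not the OpenCores code. Shows the two ways of
// getting an inverted clock that I'm comparing above.
module invert_demo (
    input  wire clk0,    // PLL 0-degree output (on a global buffer)
    input  wire clk180,  // PLL 180-degree output, in sync with clk0
    input  wire d,
    output reg  q_pll,   // clocked by the PLL's inverted output
    output reg  q_inv    // clocked by a fabric inverter
);

    // Option 1: use the PLL's inverted output directly.
    // Minimum skew, but clk180 consumes a second global clock buffer.
    always @(posedge clk180)
        q_pll <= d;

    // Option 2 (what the author does): invert the true clock in logic.
    // Saves a buffer, but the inverter adds roughly 0.5-1 ns of delay,
    // which shows up as skew against the clk0 domain.
    wire clk0_n = ~clk0;

    always @(posedge clk0_n)
        q_inv <= d;

endmodule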
I'm trying to figure out why he did this. Is it because if
you use the single clock source you only need one
global clock buffer, whereas if you use both outputs
you'd presumably consume two global clock buffers,
which is excessive for the design?
Moreover, the design needs a two-phase clock. All clocks
are 100 MHz. So two phases, each in two polarities
(true/inverted), with minimum clock skew would necessitate
four global clock buffers, right?
Instead he evidently opted for just the two true clocks,
and uses inverters when he wants the inverted version.
This means the inverted signal will be delayed by
the inverter, which appears to be on the order of 0.5 to
1 ns in the parts I'm interested in.
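To put that in perspective: at 100 MHz the period is 10 ns,
so 0.5 to 1 ns of inverter delay is 5-10% of the full period,
or 10-20% of the 5 ns half-period between opposite edges.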
So perhaps it's a tradeoff: minimum skew requires
four global clock buffers, while the inverter approach
conserves global clock buffers at the expense of a little
bit of skew.
Anyway, I'm just looking for a sanity check. Does the
reasoning above sound... reasonable? When designing,
is it necessary to keep these things in mind?
Thanks--
Dave