FPGA Timing question

Ed Anuff
I'm using an FPGA (a Xilinx Spartan 3 XC3S400-4PQ208) in a design
where one of its functions is to sit between a CPU's SDRAM controller
address and control lines and the SDRAM chips, in order to register and
buffer those signals and generate some additional chip selects. The
data lines are not buffered. The processor in question (an Analog
Devices ADSP-BF532) can pipeline the SDRAM address and control signals
by one cycle to accommodate the delay introduced by this sort of
buffering.

address, control, clock signals (A0-12, DQMB0-1, RAS#, CAS#, clk,
etc.):
CPU <-> FPGA <-> SDRAM

data signals (D0-D15):
CPU <-> SDRAM

The question is how to analyze the timing: the FPGA needs to register
the address and control lines at its input pins on one rising edge of
the clock received from the processor, and then have the buffered
address and control lines valid on its output pins by the next rising
edge. I'm not sure what I should be looking at in the timing analyzer
to get the best estimate of the timing parameters involved. Also, what
constraints should I set to minimize the timing overhead?
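
To make that concrete, the way I'm picturing it there are two budgets
that each have to close within one clock period (the numbers below are
just my guesses, not data-sheet values):

  input side:   CPU clock-to-out + board delay + FPGA pad-to-input-FF setup  <=  Tclk
  output side:  FPGA clock-to-pad + board delay + SDRAM setup time           <=  Tclk

At, say, 133 MHz (Tclk = 7.5 ns), a 2 ns FPGA clock-to-pad and a 1.5 ns
SDRAM setup would leave about 4 ns for board delay and margin on the
output side. Is that the right way to look at it?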

Thanks

Ed
 
Hi Ed,
Well, if you use a DCM to manage your on-chip clock, you should be able to
adjust the timing to whatever you want. For example, on the output side you
can set the DCM to eliminate any phase difference between the on-chip clock
and the SDRAM's clock. Use the IOBs' output FFs. Then the delay introduced
by the FPGA is simply the 'clock CLK to PAD' IOB output delay specified in
the data sheet (<2 ns at a guess), assuming you meet the setup
specification. Also, you can fiddle about with the DCM phase shift to
further optimise things, have a play! As for getting the data into the chip,
you need to consider the 'Pad to I output' delay, etc.
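Something like this, perhaps. A minimal Verilog sketch (the module and net
names are made up, only the address lines are shown, clock forwarding to the
SDRAM is left out, and depending on your XST version the IOB attribute may
have to go in the .ucf as INST ... IOB = TRUE; instead):

// Sketch: DCM deskews the internal clock against the clock at the pad,
// and the address register is pushed into the output IOB flip-flops.
module sdram_buf (
    input         cpu_clk,   // SDRAM clock from the CPU
    input  [12:0] cpu_a,     // address lines A0-A12 from the CPU
    output [12:0] sdram_a    // registered address to the SDRAM
);
    wire clk_in, clk0, clk_fb;

    IBUFG u_ibufg (.I(cpu_clk), .O(clk_in));

    // Feeding CLK0 back through a BUFG makes the DCM remove the phase
    // difference between the internal clock and the input pad.
    DCM #(.CLKIN_PERIOD(7.5)) u_dcm (
        .CLKIN    (clk_in),
        .CLKFB    (clk_fb),
        .CLK0     (clk0),
        .RST      (1'b0),
        .PSCLK    (1'b0),
        .PSEN     (1'b0),
        .PSINCDEC (1'b0),
        .DSSEN    (1'b0)
    );
    BUFG u_bufg (.I(clk0), .O(clk_fb));

    // Ask the tools to place these FFs in the output IOBs, so the output
    // delay is just the data-sheet clock-to-pad time.
    (* IOB = "TRUE" *) reg [12:0] a_reg;
    always @(posedge clk_fb)
        a_reg <= cpu_a;
    assign sdram_a = a_reg;
endmodule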
It's all in the manuals!
Cheers, Syms.
 
"Symon" <symon_brewer@hotmail.com> wrote in message news:<2gpvqkF5g3ujU1@uni-berlin.de>...
Hi Ed,
Well, if you use a DCM to manage your on chip clock, you should be able to
adjust the timing to whatever you want. For example on the output side, you
can set the DCM to eliminate any phase difference between the on-chip clock
and the SDRAM's clock. Use the IOB's output FFs. Then the delay introduced
by the FPGA is simply the 'clock CLK to PAD' IOB output delay, specified in
the data sheet, (<2ns at a guess), assuming you meet the setup
specification. Also, you can fiddle about with the DCM phase shift to
further optimise things, have a play! As for getting the data into the chip,
you need to consider the 'Pad to I output' .... etc..
It's all in the manuals!
Cheers, Syms.
If you are only adding one clock delay through the part, the input
setup timing is important, too. The time from an input pad to an output
flip-flop can be quite large depending on the pinout. You need to
constrain the input setup time, which can be done globally in the .ucf
file for the common clock signal, like:
OFFSET = IN 3 ns BEFORE "DRAM_CLK";
If your design is allowed two clock delays, using input and output
flip-flops will no doubt do the trick; however, with one clock delay
you need to balance input setup time against clock-to-output. In this
case you may find that using an internal flip-flop gives a better
balance, especially if the input and output pins are not near each
other.
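For example, a .ucf fragment covering both directions might look like
the following (the clock net name and the numbers are only placeholders
for your design; OFFSET = OUT constrains the clock-to-output side):
NET "DRAM_CLK" TNM_NET = "DRAM_CLK";
TIMESPEC "TS_DRAM_CLK" = PERIOD "DRAM_CLK" 7.5 ns HIGH 50%;
OFFSET = IN 3 ns BEFORE "DRAM_CLK";
OFFSET = OUT 5 ns AFTER "DRAM_CLK";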
Another note on single-stage delays. When you tell the tools to place
a flip-flop in the IOB, and the flip-flop's input and output connect
to two pads, the tool will choose on its own whether to place it in
the input or the output IOB. Forcing the flip-flop to the other location
would seem to be possible by instantiating a library component like OFD
or IFD to specify the IOB; however, an inspection of these macros shows
they are simply a flip-flop with IOB=TRUE. To force the flop into
the output IOB, I've found I have to place a flip-flop without IOB=TRUE
and then enable packing registers into IOBs for outputs only in the
mapping options.
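From the command line that's map's -pr switch with "o" for outputs only
(the file names here are just examples):
map -pr o -o design_map.ncd design.ngd design.pcf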
 
