Using LUTs to create a phase-delayed clock - is it reproducible?

Aleksandar Kuktin wrote:
Hi all,

I'm making a system on an iCE40 and I've run out of PLLs. The design
incorporates two DDR2 controllers that need to perform several operations
delayed with respect to the system clock. I'm going to use a phase-delayed
clock for that.

So my approach is to take the clock signal and pipe it through several
LUTs, thus delaying it.

But - how comparable are LUT delays between different chips? As in,
different pieces of FPGA silicon? If I implement this design, will every
chip be a special snowflake that needs to be calibrated separately and
needs a different number of LUT delay stages?
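
For concreteness, here is a minimal Verilog sketch of the LUT-chain idea being
asked about, assuming the Lattice SB_LUT4 primitive and a Yosys-style
(* keep *) attribute to stop the chain from being optimized away. The stage
count N is an arbitrary placeholder, not a calibrated value:

// Hedged sketch: a delay chain built from N pass-through LUTs on iCE40.
// Each SB_LUT4 is configured as a buffer (O = I0) via LUT_INIT = 16'hAAAA.
// The per-LUT delay varies with process, voltage and temperature, which
// is exactly the reproducibility question raised above.
module lut_delay_chain #(
    parameter N = 8  // number of LUT stages (arbitrary example value)
) (
    input  wire clk_in,
    output wire clk_delayed
);
    (* keep *) wire [N:0] tap;
    assign tap[0] = clk_in;

    genvar i;
    generate
        for (i = 0; i < N; i = i + 1) begin : stage
            // SB_LUT4 as a buffer: the output follows input I0.
            (* keep *) SB_LUT4 #(
                .LUT_INIT(16'hAAAA)
            ) buf_lut (
                .O (tap[i+1]),
                .I0(tap[i]),
                .I1(1'b0),
                .I2(1'b0),
                .I3(1'b0)
            );
        end
    endgenerate

    assign clk_delayed = tap[N];
endmodule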
 
Aleksandar Kuktin wrote on 11/5/2017 3:38 PM:
[quoted post snipped]

It can be done, but it depends on how accurate you need the delay to be. If
you can live with variation across chips, temperature and supply voltage of
perhaps 50% (a number scientifically pulled from thin air), it will work.

If you are just trying to create some delay so data is stable when the clock
arrives, or something like that where you have a lot of tolerance, this can
work. If you need precision (depending on your value of precision) it
won't. There is also the issue of routing delays varying from one layout to
another. My boss at one job delayed a clock by manually routing the
signal through a mux near the IOB. Then he didn't document how he made it
work, lol. When we needed to make a change to the design he had to show us
how he made it work, including how he used the manual routing tool. Even
then it was not documented by anyone. What a loose cannon!

As is typical when people ask "how long is a piece of string": what
are you doing exactly? Maybe there is a better approach?

--

Rick C

Viewed the eclipse at Wintercrest Farms,
on the centerline of totality since 1998
 
In article <otnssb$19i$1@gioia.aioe.org>, akuktin@gmail.com says...
[quoted post snipped]

I'm very much a learner with FPGAs, but my general hardware experience
suggests that would be a really bad way to go. Relying on uncharacterized
chip delays seems a bit flaky to me. You'd surely need to create
a lock step somehow?

Reading the Xilinx design book (Churiwala) I did notice that the layout
of the FPGA can be changed by the compiler/software when compensating
for phase variance, which might at first glance suggest the software can
accommodate you - but presumably that would mean a different layout each
time. I'm guessing that would not be good.

(The Xilinx book has a whole chapter on clocking, by the way.)

From what I've read, a more careful clocking design in the first place,
using the PLLs to generate what you need, is really the way to go.

Perhaps I've missed something someone more experienced can say, though.

--

john

=========================
http://johntech.co.uk
=========================
 
On Sunday, 11/5/2017 3:38 PM, Aleksandar Kuktin wrote:
[quoted post snipped]

There are some old Xilinx app notes, written for the Virtex-E series if
memory serves me correctly, that talk about how to use carry chains
to build a variable delay. The point is that you could
use a variable delay line to create your phase delay if you have
some sort of feedback mechanism to detect the optimum delay point.
This is building a delay-locked loop out of fabric elements. In the
old FPGAs the carry chains were much faster than LUTs and therefore
gave you much finer granularity in the delay. They are also fixed in the
layout, giving a better chance of repeatability between builds.

--
Gabor
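
For illustration, a rough Verilog sketch of the fabric-DLL idea described
above, translated to iCE40 and assuming the SB_CARRY primitive (CO follows CI
when I0=0 and I1=1). The tap count, the tap mux, and the crude sampler are
placeholder choices for a sketch, not a validated DLL design:

// Hedged sketch: a tapped delay line built from SB_CARRY cells, with a
// runtime-selectable tap so feedback logic can trim the phase per chip.
module carry_delay_line #(
    parameter TAPS = 16  // number of carry-cell stages (example value)
) (
    input  wire                    clk,      // reference clock
    input  wire [$clog2(TAPS)-1:0] tap_sel,  // selected delay tap
    output wire                    clk_dly,  // delayed clock
    output reg                     dly_late  // delayed clock sampled at ref edge
);
    (* keep *) wire [TAPS:0] chain;
    assign chain[0] = clk;

    genvar i;
    generate
        for (i = 0; i < TAPS; i = i + 1) begin : dly
            // Each SB_CARRY passes CI straight through to CO (I0=0, I1=1),
            // adding one carry-cell propagation delay per stage.
            (* keep *) SB_CARRY carry_buf (
                .CO(chain[i+1]),
                .CI(chain[i]),
                .I0(1'b0),
                .I1(1'b1)
            );
        end
    endgenerate

    // Tap mux - note this adds LUT and routing delay of its own.
    assign clk_dly = chain[tap_sel + 1];

    // Crude sampler for the feedback loop: capture the delayed clock at
    // the reference edge. Calibration logic (not shown) would sweep
    // tap_sel and watch dly_late to find the desired phase point.
    always @(posedge clk)
        dly_late <= clk_dly;
endmodule

Such a per-chip calibration sweep at startup is what would address the
"special snowflake" concern in the original post: the delay line is sized
generously, and feedback picks the right tap on each device.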
 
"Aleksandar Kuktin" <akuktin@gmail.com> wrote in message
news:otnssb$19i$1@gioia.aioe.org...
[quoted post snipped]

It seems like a bad idea to me. I'd recommend an external PLL with a couple
of outputs instead.
 
