EDK : FSL macros defined by Xilinx are wrong

You can fit them yourself if you are very good with a soldering iron, but beware:
the resistor sites are 0201 size.
0201!

I thought 0402 was bad, 0201 must be like dust.

I presume components this size remove all the headache of decoupling/
terminating BGA designs, but are they easy to get hold of, and where
do you source them in the UK (they're not in the RS catalogue :-( ).


Nial.
 
Hi,

http://www.enterpoint.co.uk/moelbryn/broaddown2.html
This board looks nice, and the price is okay too...

regards,
Benjamin
 
Hello,

Brijesh <brijesh_xyz@cfrsi_xyz.com> wrote in message news:<d3e426$rbl$1@solaris.cc.vt.edu>...
The double clocking of the CRC generator mentioned was when the strobe was
directly used to clock the CRC generator. In my design I am using an
internal clock for the CRC generator. I am still using the strobe
to clock in the data at the IOBs, though. So right now I suspect that's
where the problem is.
How do you know when new values are available to take them into your
core clock domain? Can double clocking occur at this point?

So currently I am trying to identify the cause.
Is it really the double clocking that is causing the trouble?
You could put the captured data on a debug output, capture it with a
logic analyzer and compare it with your reference. High effort,
though.
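The suspicion can also be checked on paper: clocking the same word into the CRC generator twice changes the result even though the payload is unchanged. A small sketch, using a generic CRC-16 (polynomial 0x1021) as a stand-in for the actual ATA CRC (the real seed and bit ordering may differ):

```python
# Model of a serial CRC-16 generator (poly 0x1021, MSB first) to show
# that a double-clocked strobe corrupts the checksum.

def crc16(words, crc=0xFFFF, poly=0x1021):
    """Bitwise CRC-16 over 16-bit words, MSB first."""
    for w in words:
        for i in range(15, -1, -1):
            bit = (w >> i) & 1
            msb = (crc >> 15) & 1
            crc = (crc << 1) & 0xFFFF
            if msb ^ bit:
                crc ^= poly
    return crc

data = [0x1234, 0xABCD, 0x5678]
good = crc16(data)
# A glitching strobe clocks the middle word into the generator twice:
bad = crc16([0x1234, 0xABCD, 0xABCD, 0x5678])
print(hex(good), hex(bad), good != bad)
```

Comparing the two values against a software reference like this is much cheaper than the logic-analyzer route, at least as a first step.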

I don't know much about these issues (I design circuits for FPGAs/ASICs
and don't do any "real" hardware), but don't you need to use LVCMOS33
for the outputs?

The Voh and Vol for LVCMOS33 and LVTTL on V2 devices are identical and
match that of the IDE spec.
Really? I read in ds031.pdf (v3.4), page 4 of module 3, that Voh
is 2.4 V for LVTTL and Vcco - 0.4 for LVCMOS33. But anyway, the
requirement is 2.4 V for UDMA3. I had the more restrictive
Voh2 = VDD3 - 0.51 V in mind, which is required for UDMA5 and greater.

Also, I just read that the LVTTL and LVCMOS
inputs have approximately 100 mV of hysteresis.
Again for UDMA5 and greater, there are additional requirements for the
input thresholds to keep the average of the two close to 1.5 V. I
think this is to ensure that a rising STROBE and a falling DATA edge
will switch at the same time.
Hmm - I see that the requirement for the 320 mV hysteresis is for
UDMA5 and greater only, too.


Sebastian Weiser
 
0201 fits nicely on the 1 mm via grid underneath the FPGA, allowing the
optimal termination point. 0201 parts are not generally available in the
commonly used UK catalogues as yet. Digi-Key do have stock though, if you
can suffer the US export questions.

John Adair
Enterpoint Ltd. - Home of Broaddown2. The Ultimate Spartan3 Development
Board.
http://www.enterpoint.co.uk


"Nial Stewart" <nial@nialstewartdevelopments.co.uk> wrote in message
news:425acdf6$0$2595$da0feed9@news.zen.co.uk...
You can fit them yourself if you are very good with a soldering iron,
but beware: the resistor sites are 0201 size.

0201!

I thought 0402 was bad, 0201 must be like dust.

I presume components this size remove all the headache of decoupling/
terminating BGA designs, but are they easy to get hold of, and where
do you source them in the UK (they're not in the RS catalogue :-( ).


Nial.
 
You have to distinguish between specs and reality.
The output voltage specs for LVCMOS and LVTTL are different for
historical reasons: CMOS pulls up to the rail, while TTL
traditionally had two diode drops below Vcc, hence the 2.4 V (going
back to TI in the sixties).
In reality, the two types of outputs are the same, and "both" pull up
to the rail.
Similarly with Vol = 0.4 V. That stems from bipolar outputs that never
reach ground. In CMOS (and all FPGAs are CMOS) the Vol at zero current
is really zero volts.
We carry a lot of distracting baggage, accumulated during 40 years of
digital IC evolution...
Peter Alfke
 
Bret Wade wrote:
Hi Rudi,

We've seen one similar case and that was a Windows only failure. If you
have access to a Linux or Solaris machine, that might work. On the other
hand, that case wasn't an SP1 regression like yours is, so they may be
different problems. If that doesn't work, a webcase would be the next
suggestion.

Regards,
Bret
Bret,

the reported problem occurred on a Linux system. We only use
a Windows notebook for downloading bitstreams and for ChipScope;
otherwise everything is Linux.

Best Regards,
rudi
=============================================================
Rudolf Usselmann, ASICS World Services, http://www.asics.ws
Your Partner for IP Cores, Design, Verification and Synthesis
 
Hi Ray,

Ray Andraka wrote:

Brendan Cullen wrote:

Hi,


If you're targeting V4 then you are targeting one of our SX or LX devices
and you are using 7.1.01i.



Not necessarily. I'm targeting an SX55 and using 6.3sp3.
I agree that you can use ISE to target the SX55 using 6.3sp3.

But XPower is a different kettle of fish from the other tools in ISE. XPower's
SX & LX support was only added after silicon-based characterisation was
performed. This support for SX & LX was added to XPower in 7.1.01i.

By the way - 7.1.01i allows an FX design to be opened in XPower. That is a
bug - and has been corrected in 7.1.02i. 7.1.02i is scheduled for
availability in late April.

I hope this clarifies things,

Brendan

--
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930 Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com

"They that give up essential liberty to obtain a little
temporary safety deserve neither liberty nor safety."
-Benjamin Franklin, 1759
 
"Peter Alfke" <alfke@sbcglobal.net> wrote in message news:<1113362657.039477.81570@z14g2000cwz.googlegroups.com>...
[LVTTL and LVCMOS33]
In reality, the two types of outputs are the same, and "both" pull up
to the rail.
That was my guess, but I usually try to stick to the exact wording of
a specification, just to be sure.

Wouldn't it be possible to write the more stringent values to the
Xilinx specs, independent of what JEDEC (or whatever) says? In some
corner cases (such as the ATA/ATAPI requirements) this may help a bit.
Currently I have a similar problem with a microcontroller
specification: the given values may reflect the absolute worst case,
but they are so pessimistic that they don't help at all.


Sebastian Weiser
 
Sebastian, I would agree with you, but we have to deal with thousands
of customers, and there are some who look strictly at compliance with
the written spec. Common sense and basic engineering knowledge do not
always apply. That's why these strange old specs survive.
Peter Alfke
 
http://opencollector.org/history/freecore/Build%20your%20own%20ByteBlaster!.htm

try this site.


Or

Altera practically gives away the schematic in the ByteBlaster manual.
http://www.altera.com/literature/ug/ug_bbii.pdf

Google is your friend :)
 
This appears to be fixed in 7.1i by the patch from this page:
http://www.xilinx.com/xlnx/xil_ans_display.jsp?iLanguageID=1&iCountryID=1&getPagePath=21168


WebPACK is a free downloadable ISE subset that now supports Linux.
http://www.xilinx.com/xlnx/xebiz/designResources/ip_product_details.jsp?sGlobalNavPick=PRODUCTS&sSecondaryNavPick=Design+Tools&key=DS-ISE-WEBPACK

Regards,
Arthur
 
Austin, hello

Thank you for your reply. I have visited the site you mentioned but
failed to find an LVDS card with a PCI interface.

I have noticed that you are a Xilinx member; could you point me to other
companies that implement LVDS on your FPGAs?

Thanks in advance, and sorry to bother you,
Marc.
 
soos,

Was it: http://www.dyneng.com/pci_lvds.html
or,
another one of the 907 hits on google for:

LVDS Xilinx PCI card

I was just using google to find the card. A lot of links to wade through.

Good luck,

Austin



 
downunder wrote:
Hi,

I was wondering if anyone has any advice or experience on the
following:

I need to clock some logic with a clock signal that is connected to an
I/O pin (PCB design constraint!). What are the consequences of
connecting this signal to a DCM input? I'm assuming it's possible...

Does anyone have any recommendations? I'm using Virtex II Pro and the
clock signal is in the 100MHz range.
Howdy,

Yes, it should be possible. You'll probably pick up a little jitter,
and definitely will have a phase offset due to prop delay, but probably
nothing that is a show stopper.

You didn't say if you need to clock I/O signals into the part with this
clock. Or out of the part. For the out direction, is there a
requirement that the I/O's toggle with a particular phase relationship
to the input clock? If you do have I/O requirements, I believe you can
work around it by using the fixed phase offset of the DCM to dial in a
compensation for the prop delay of the input buffer, internal routing
to the DCM, BUFG, and global clock delay - may take a few experiments
to get it right. I/O requirements are the toughest thing to meet with
a situation like yours, but at 100 MHz, it should be doable by locking
down the routing from the input IOB to the DCM(s).
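The dial-in step above can be sketched numerically. Assuming a Virtex-II style DCM where the FIXED-mode PHASE_SHIFT attribute is expressed in 1/256ths of the input clock period (range -255..255), a starting value for a given input-path delay could be estimated like this; the delay figure itself is made up and would have to come from the timing report or experiment:

```python
# Estimate a DCM PHASE_SHIFT value that cancels the IBUF + routing delay
# on a clock entering through a non-GCLK pin.

def phase_shift_for_delay(delay_ns, clk_mhz):
    """PHASE_SHIFT units are 1/256 of the input clock period (FIXED mode)."""
    period_ns = 1000.0 / clk_mhz
    # A negative shift pulls the DCM output clock earlier, compensating
    # for the extra lag on the way to the DCM input.
    ps = round(-delay_ns / period_ns * 256)
    if not -255 <= ps <= 255:
        raise ValueError("delay exceeds the PHASE_SHIFT range")
    return ps

# e.g. ~1.5 ns of assumed IBUF + routing delay at 100 MHz:
print(phase_shift_for_delay(1.5, 100.0))
```

This only gives a starting point; as noted above, a few build-and-measure iterations are usually needed to trim it.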

If you have lax or no I/O timing requirements, most of the above
doesn't apply and your job will be even easier.

Good luck,

Marc
 
"Joe Pfeiffer" <pfeiffer@cs.nmsu.edu> skrev i meddelandet
news:1bhdikbazp.fsf@cs.nmsu.edu...
"Ulf Samuelsson" <ulf@NOSPAMatmel.com> writes:

The basis of the patent is the "ARM Ltd discovery" that less code is
better
than more code.
Code compression for RISC is mentioned already in the original RISC
paper by
Katevenis.

Original RISC paper by Katevenis? While I was able to find a 1983
paper by him, as near as I can tell the original RISC paper is still
the one by Patterson and Ditzel in 1980.
--
OK, but both preceded the ARM chip, so code compression for RISC
is not a new thing discovered by ARM Ltd.
They base their Thumb patent on the claim that architectures had only
been developed to increase performance, not to reduce code size, and
that they discovered the need for code-space reduction for RISC.
I believe that the width of datapaths has been driven mostly by the need
to increase address space.
If you do not accept the ARM claim, then the Thumb patent becomes really
weak.

--
Best Regards
Ulf Samuelsson ulf@atmel.com
Atmel Nordic AB
Mail: Box 2033, 174 02 Sundbyberg, Sweden
Visit: Kavallerivägen 24, 174 58 Sundbyberg, Sweden
Phone +46 (8) 441 54 22 Fax +46 (8) 441 54 29
GSM +46 (706) 22 44 57
 
You might want to check the load, as it may very well be that once your
load increases, the delay increases and your clock and data no longer
meet the timing requirements - hence the failure in the second case.

Many times just inverting the clock solves the problem, but you should
check your timing report. And don't forget, of course, to set the
timing requirements, as you might need something in between and not
as "drastic" as 180 degrees.

Have Fun.
 
downunder wrote:
Yes, I do need to clock IO signals in with this clock. The specific
core I'm using is the PLB GEMAC, and the signal is gmii_rx_clk.
Taking the naive approach and connecting the clock input to a DCM
works...in that the device can send and receive packets successfully.
However, if I add a peripheral to the OPB, the GEMAC stops working,
and I'm led to believe that it only worked by chance in the first
instance. The GEMAC data sheet is not terribly helpful I'm afraid
(either that, or I'm not seeing the helpful data).
Ah, now I better understand what you're trying to do. I would have to
agree with you that the GEMAC "data sheet", at least that I have access
to, is surprisingly brief and lacking in detail. I suppose they are
expecting that you'll just take their source and use it, no questions
asked.

Anyway, back to your clock problem. The two global clocks that the
GEMAC core says it needs are almost certainly the rx clock from the
GMII device and a main reference clock - and it almost certainly
assumes those global clocks come in on GCLK pins. Since yours doesn't,
you'll probably need to try to come close to duplicating the IBUFG ->
BUFG prop delay with your path:

IBUF -> chip routing -> DCM -> BUFG

You can do that by using the fixed or variable phase offset of the DCM
to dial in a delay that compensates for your overly lagged clock
network. The only problem with the routing to the DCM is that you need
to nail it down so that the delay doesn't change every other build -
look into using directed routing for that.

Once you have that locked down, you can go about moving the clock
around with the DCM so that its output is lined up with where it needs
to be (may take some trial, error, and patience). You might even be
able to use the "working" version of your design to get some idea of
what the clock phase should be - you can bring the clock out using a
DDR flop... this will give you external visibility of the clock phase
with relation to the input clock phase.

Have fun,

Marc
 
The online tool does not really help if you are planning on getting the
best performance yield from the FPGA and need to know the power
constraints beforehand.
I think the problem is that the worst case is so nasty that it
isn't interesting.

Can you go backwards? How much power can you get rid of? How
big a heat sink and/or fan are you going to have?

There isn't much need for a power supply to put out more than that.
Maybe 2x or 10x if you want to run in short bursts.

Another approach is to look at several prototyping boards and see
what they have. If you don't hear complaints about it here that's
probably big enough.

You could also add some big connection points so at worst you
can add wires over to an external power supply.
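A back-of-envelope version of the sizing exercise suggested above might look like the following; every number in it is made up for illustration, and real figures would have to come from XPower, the data sheet, or measurement:

```python
# Estimate a realistic dynamic power (instead of the uninteresting worst
# case), then size the supply with burst margin as suggested above.
# All parameter values here are illustrative guesses.

def dynamic_power_w(nodes, c_pf, v, f_mhz, toggle):
    # P = alpha * C * V^2 * f, summed over switching nodes
    return nodes * (c_pf * 1e-12) * v * v * (f_mhz * 1e6) * toggle

p_core = dynamic_power_w(nodes=20000, c_pf=0.5, v=1.5, f_mhz=100, toggle=0.15)
p_static = 0.15                      # assumed quiescent power, in watts
p_total = p_core + p_static
i_supply = 2 * p_total / 1.5         # 2x burst margin on a 1.5 V rail
print(round(p_total, 2), "W ->", round(i_supply, 2), "A")
```

Even rough numbers like these bound the supply far more usefully than the worst-case figure, and the external-wire fallback above covers the case where the guess turns out low.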

--
The suespammers.org mail server is located in California. So are all my
other mailboxes. Please do not send unsolicited bulk e-mail or unsolicited
commercial e-mail to my suespammers.org address or any of my other addresses.
These are my opinions, not necessarily my employer's. I hate spam.
 
Where is the initialization of Sincos_Rom?

Mohammed A Khader wrote:

Hi all,

I am using Synplify Pro for synthesis. My target FPGA is APEX20KE. I
have the following code for a ROM, but the rom_style attribute is not
working; instead I get the following warnings.

1) CL159 Input addrs_in is unused
2) Signal sincos_rom is undriven


entity Lookup is
port(
Addrs_In : in signed(ROM_DEPTH-3 downto 0); -- 10 bit
Data_Out : out signed(ROM_WIDTH-1 downto 0) -- 16 bit
);
end entity Lookup;

architecture Lookup_Synth_Arch of Lookup is
-- Declaration for Rom type
type Sincos_Rom_Type is array (0 to 2**(ROM_DEPTH-2) - 1) of WORD;
signal Sincos_Rom: SinCos_Rom_Type;

-- Attributes to map Rom to available technology library
attribute syn_romstyle : string;
attribute syn_romstyle of Sincos_Rom : signal is "block_rom";

begin

Data_Out <= Sincos_Rom(TO_INTEGER(Addrs_In));

end architecture Lookup_Synth_Arch;
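The warnings follow directly from the missing initialization: with Sincos_Rom undriven, the tools optimize both the address input and the output away. One way to supply it is to generate the table offline and paste it in as a VHDL constant. A sketch of the generator (the full-wave sine contents and the signed full-scale scaling are assumptions about the intended design):

```python
# Generate 1024 x 16-bit sine samples for the Sincos_Rom constant.
import math

ROM_ENTRIES = 1024   # 2**(ROM_DEPTH-2) with ROM_DEPTH = 12, as in the code
ROM_WIDTH = 16       # WORD width

amp = 2**(ROM_WIDTH - 1) - 1   # full scale of a signed 16-bit word
table = [round(amp * math.sin(2 * math.pi * i / ROM_ENTRIES))
         for i in range(ROM_ENTRIES)]

# Emit the values as VHDL aggregate entries for a constant of
# Sincos_Rom_Type (first few shown, one possible formatting):
for i, v in enumerate(table[:4]):
    print(f"{i} => to_signed({v}, {ROM_WIDTH}),")
```

Declaring Sincos_Rom as a constant initialized with such an aggregate (instead of an undriven signal) should let the syn_romstyle attribute take effect.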
 
austin wrote:
http://www.xilinx.com/bvdocs/notifications/pdn2004-21.pdf

It is the discontinuation notice for some parts that had extremely low volumes.

Thanks Austin.
The "port" won't be too difficult anyway :)

Bert
 
