EDK : FSL macros defined by Xilinx are wrong

austin wrote:
Marc,

It turns out that if you are using only the DFS, and you do not move
the frequency very fast, you can sweep from the minimum to the maximum input
(output) frequency before losing lock.

The DLL is fussier, as it arranges its six delay lines based on the
chosen options, the frequency range, and where it locks. So in the DLL,
if you start sweeping the frequency, you may get an overflow or
underflow on one of the delay lines and lose lock.

We typically spec +/- 100 ppm, because just about any trashy crystal
can do that. In reality, +/- .01 is probably safe.
Austin,
Suppose the clock starts as 'any trashy crystal', but is then fed via
another Xilinx DLL - is there a chain-limit of jitter degradation, in
such a system ?
This will become a more common scenario...
-jg
 
Brandon wrote:
Does XST 7.1 support TCL scripting?
I would imagine that it does; the Xilinx installer includes a tcl
shell.

I don't see any mention of it in the XST User Manual, and I find it very
awkward to perform synthesis using the GUI or the command line without a
script.

I'm new to XST, and I'm looking for ways to organize my synthesis
process. By default the tool dumps all of its synthesis files all over the
place, ugh. I was hoping I could get some control over all this.
I hate how the tools dump things all over the place, too. Calling the
various programs from a script can help with that but it's not ideal.

If anyone has any recommendations for using XST from the command line, I'm
all ears.
The GUI is useful for initial project set-up. Among other things, if
you look carefully you'll see the exact command line for each of the
various processes. The GUI is also helpful when creating constraints
and for interactively driving the timing analyzer.

I've created a couple of makefiles (one for FPGAs, one for CPLDs) that
I use under cygwin to synthesize and build my chips. "make cleanup"
gets rid of all of the excess files and directories (I found a list of
the files buried in the Xilinx docs). It took a while to figure out
what does what and where. I couldn't figure out what xflow was doing,
so my makefile calls the individual programs as needed.

-a
 
No. It does work with the Digilent Adept Suite.

do_not_reply_to_this_addr@yahoo.com wrote:
Can one use the JTAG-USB cable from digilent with chipscope ?

Sumit
 
The GUI is useful for initial project set-up. Among other things, if
you look carefully you'll see the exact command line for each of the
various processes. The GUI is also helpful when creating constraints
and for interactively driving the timing analyzer.
There is a .cmd_log file that logs every command that was run.

I've created a couple of makefiles (one for FPGAs, one for CPLDs) that
I use under cygwin to synthesize and build my chips. "make cleanup"
gets rid of all of the excess files and directories (I found a list of
the files buried in the Xilinx docs). It took awhile to figure out
what does what and where.
Search for the *.gfl file, which has a list of most (if not all) of the
intermediate files.

HTH,
Jim
 
Hi Duane,

I much appreciate your help.

Duane Clark wrote:
I want to use the SRAM in a pipeline manner, capturing the data from
the previous read in the same cycle that I present new data. Assuming
the delay of getting my address to the SRAM is roughly equivalent to
the delay getting the data back I should only have to worry about the
tOH (data hold time) of the SRAM being large enough.

I think you have a fundamental misunderstanding there. The two delays
mentioned add up, and along with the input setup time, need to be less
than the clock cycle time. For the 100MHz clock previously mentioned,
they do not.
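(To put rough, purely illustrative numbers on it, not taken from any
particular datasheet: FPGA clock-to-out ~2 ns + board trace ~1 ns + SRAM
tAA 10 ns + trace ~1 ns + FPGA input setup ~2 ns comes to roughly 16 ns,
well over the 10 ns available at 100 MHz.)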
I think I'm starting to understand it. While I'm still using it in a
pipelined manner (latching the previous data while presenting a new
address), I see now that I need to allow for additional timing margin.
Am I right in understanding, though, that if I _didn't_ use it this way,
but instead held the address stable while I latched the data, I wouldn't
be able to achieve the same memory bandwidth?

So I wonder what Altera means when they write zero-wait-states? In
other words, what kind of bandwidth does Nios see when using this 10 ns
async SRAM?


You seem to have switched boards on me, and I don't have Altera specs
handy. But on the previous board mentioned with a Virtex2P chip, the
registered inputs do not even have a hold time requirement, only the
setup time.
Sorry, no trickery intended. I have two Xilinx boards, but it's my
Altera board I use the most. AFAICT, both X and A are very similar and
both have zero hold time.


Yes, 20 ns should be enough if the only thing there is the SRAM, and
assuming the Altera timing is similar to the Xilinx timing. But you need
to check several things. First, when you say the flash and ethernet are
fully disabled, do you mean they are never enabled? Have you verified
that is really the case?
To the best of my knowledge:
...
assign flash_cs_n = 1'b1;
assign flash_oe_n = 1'b1; // Unnecessary?
assign enet_aen = 1'b1;
assign enet_be_n = 4'b1111; // Unnecessary?
...
where

set_location_assignment PIN_A12 -to flash_cs_n
set_location_assignment PIN_B12 -to flash_oe_n
set_location_assignment PIN_B15 -to enet_aen
set_location_assignment PIN_C16 -to enet_be_n[0]
set_location_assignment PIN_B16 -to enet_be_n[1]
set_location_assignment PIN_D16 -to enet_be_n[2]
set_location_assignment PIN_E16 -to enet_be_n[3]

in the constraints file. Of course, the Nios Dev Kit documentation is
needed to verify this assignment. In case you're interested, the whole
thing is not that big and I've put it up on

http://numba-tu.com/sram

Have you verified that all signals to and from the SRAM are registered
within the IOBs of the FPGA? Including the data output enable signals? I
assume Altera has something similar to FPGA editor to make sure this is
really the case.
Good point, I haven't yet. (Though I doubt that's the problem, given
that it works for 24 out of 32 bits :)

Assuming that ALL inputs and outputs, including output enables, are
registered, then timing constraints on external pins are completely
irrelevant, and will have absolutely no effect. The timing is fixed, and
can be obtained from the FPGA data sheet. In general, you should be
registering all these signals within the IOBs. You should need a very
good reason not to.
D'oh! It's so obvious now that you point it out. Excellent.
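For the record, here is a minimal Verilog sketch of what registering all
the SRAM interface signals, including the output enable, looks like.
Signal names and widths are illustrative, and the actual IOB packing is
requested separately through vendor attributes or constraints (e.g.
IOB=TRUE in a Xilinx UCF, or the Fast Input/Output Register assignments
in Quartus):

// Illustrative only: register every SRAM interface signal, including the
// tristate control, in the SRAM clock domain so the tools can place the
// flip-flops in the IOBs.
module sram_iob_regs (
    input  wire        clk,
    input  wire [17:0] addr_next,    // address from the core logic
    input  wire [31:0] wdata_next,   // write data from the core logic
    input  wire        we_next,      // write request
    input  wire        oe_next,      // read/output-enable request
    inout  wire [31:0] sram_dq,
    output reg  [17:0] sram_addr,
    output reg         sram_we_n,
    output reg         sram_oe_n,
    output reg  [31:0] rdata         // registered read data
);
    reg        drive_dq;
    reg [31:0] wdata_q;

    always @(posedge clk) begin
        sram_addr <= addr_next;      // output register in the IOB
        sram_we_n <= ~we_next;
        sram_oe_n <= ~oe_next;
        wdata_q   <= wdata_next;
        drive_dq  <= we_next;        // registered tristate control
        rdata     <= sram_dq;        // input register in the IOB
    end

    assign sram_dq = drive_dq ? wdata_q : 32'bz;
endmodule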

Thank you,
Tommy
 
Andy Peters wrote:

Brandon wrote:
Does XST 7.1 support TCL scripting?

I would imagine that it does; the Xilinx installer includes a tcl
shell.
Can anybody confirm that ISE can be run with a TCL script?

I was looking for that option and did not find anything.

I am using a Makefile now, but that is not enough, as the synthesis
tool requires its own script files. What makes it even harder is the
fact that one tool uses a different device specification format than the
other.

For example, when calling the map or ngdbuild tools, a Spartan-3 device is
given as xc3s50-tq144-4, whereas in the synthesis script it needs to be
specified as xc3s50-4-tq144.

Having those different files makes it really tedious to make changes.

Guenter
 
Chet Stemen wrote:
unpack the first shar-like archive in your current directory
sh WebPACK_71i_installer.sh --keep
Correction:

This should be

unpack the first shar-like archive in your current directory
sh WebPACK_71_fcfull_i.sh --keep
That is the whole package.

WebPACK_71i_installer.sh is the installer only.


--
Chet Stemen
http://www.hightek.org
 
Hi Vladislav,
Very sorry for not responding to your suggestions. It worked just
fine, thank you very much for that. Actually it was my mistake. Thank
you once again.
Regards, Sumesh
 
Good!
"MM" <mbmsv@yahoo.com> Đ´ČëĎűϢĐÂÎĹ:3kspnkFvdd0qU1@individual.net...
"Antti Lukats" <antti@openchip.org> wrote in message
news:dc87bd$knm$03$1@news.t-online.com...

they are NOT; available are only IEEE1532 files for the XCFxxS, not for the
XCFxxP!!

OK, I see now what you mean. I assumed there was only one BSDL standard. Can
you please explain what this IEEE1532 is all about compared to "regular"
bsd files?


Thanks,
/Mikhail
 
<do_not_reply_to_this_addr@yahoo.com> wrote in message
news:1122585166.774633.123730@g43g2000cwa.googlegroups.com...
Can one use the JTAG-USB cable from digilent with chipscope ?

Sumit
ChipScope only works with the officially supported Xilinx cables, and does
not work with a 3rd-party cable if it is not 100% compliant with a fully
supported cable.

It could be possible to write a 'ChipScope' server that would allow the use
of a 3rd-party cable, but that would require reverse engineering the
protocol used by the ChipScope server.

Antti
 
Brad Smallridge wrote:
Up to now, I have been doing much of my work with ModelSim and
a BMP file reader and writer. Most of my VHDL designs have clk
and reset. I know where to attach the clk but what do I use for
reset. An external pin? The Done pin? Or a DCM lock signal?
I drive reset from a cpu running on
the fpga clock. Pulse it after the
binary image is loaded.
This is vendor independent
and synchronous.

-- Mike Treseler
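(For completeness, a minimal sketch of a synchronous reset generator in
that spirit; illustrative only, not necessarily Mike's exact scheme.)

// Hold reset for a few clocks after configuration, then release it
// synchronously; it can be re-pulsed later, e.g. by a CPU register
// write. Assumes the tools honour register initial values.
module sync_reset (
    input  wire clk,
    input  wire reset_request,   // e.g. pulsed by an on-chip CPU
    output reg  reset = 1'b1     // synchronous, active high
);
    reg [3:0] sr = 4'b1111;

    always @(posedge clk) begin
        if (reset_request)
            sr <= 4'b1111;
        else
            sr <= {sr[2:0], 1'b0};
        reset <= sr[3];
    end
endmodule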
 
Humm... Why wouldn't Xilinx support this cable (I thought Digilent and
Xilinx have a close relationship) and if not Why wouldn't Xilinx
publish the protocols used by Chipscope server. Are they trying to make
money by selling their own expensive USB cable ?

Sumit
 
Andrew FPGA wrote:

I am trying understand how a distributed arithmetic design can achieve
a density of 1 LUT(4 input) per four taps per input data bit. I have
read the www.andraka.com tutorial and a lot of the many previous posts
on distributed arithmetic but still cannot see it....

I understand how the scaling accumulator implements a bit serial
multiply and I see how the partial product summation is moved to be in
front of the scaling accumulator. What I can't see is how the partial
products for four taps can be implemented in a single 4 input LUT? (I
realise that a LUT = 16x1 RAM, in Xilinx anyway)

To calculate the partial product for four taps and a single bit position
of our input data, we need to add four bits? If all four bits are
1's then our sum results in 3 bits (or 2 bits and a carry out). How can
a single LUT4 represent that? A single LUT has only 1 output bit....



Ok, consider the case where you have a single tap. You'd need to compute
a 1-bit by n-bit partial product for each bit in the serial input, and
then you sum those partial products with a scaling accumulator. In that
case, the one input bit is gating the coefficient, so that if it is '1'
you get the coefficient out (1 x coefficient = coefficient). If it is '0'
then you get '0' out in all the bits. To do this you have a 1-input,
n-output logic function (n outputs to handle the n bits in the
coefficient). This is equivalent to n AND gates.

Now onto the 4-tap version. In this case you have the sum of 4 of these
1xN functions. If a tap input bit is '1', then the corresponding
coefficient is added to the output; if '0' then the coefficient is not
added (i.e., you add either 1x or 0x the coefficient for each of the
inputs). If all 4 input bits are '0's, then you have
0*c0 + 0*c1 + 0*c2 + 0*c3 = 0. If only one input bit is a '1', then the
n-bit output is equal to the corresponding coefficient. If you have two
input bits '1', then the n output bits are the sum of the two
coefficients corresponding to those inputs. Do you see then, that there
are 16 possible combinations of inputs, and that the 4 input bits form a
4-bit address into the LUT?

I'm guessing what you were missing is that the DA LUT is n bits wide,
i.e. it is comprised of n 4-LUTs.
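To make that concrete, here is a rough behavioral sketch of such a DA LUT
for four taps. The coefficient values and widths are made up for
illustration; in a real design the table contents are precomputed from
the filter coefficients:

// One "column" of a 4-tap DA filter: the current bit of each of the four
// tap delay lines addresses a table of precomputed coefficient sums.
module da_lut4 (
    input  wire        [3:0] x_bits,  // current bit from each of the 4 taps
    output reg  signed [9:0] psum     // precomputed sum of selected coeffs
);
    // Illustrative 8-bit coefficients (constants of your filter).
    localparam signed [7:0] C0 = 8'sd23;
    localparam signed [7:0] C1 = -8'sd41;
    localparam signed [7:0] C2 = 8'sd17;
    localparam signed [7:0] C3 = 8'sd5;

    always @* begin
        psum = (x_bits[0] ? C0 : 8'sd0)
             + (x_bits[1] ? C1 : 8'sd0)
             + (x_bits[2] ? C2 : 8'sd0)
             + (x_bits[3] ? C3 : 8'sd0);
    end
endmodule

Since psum is a pure function of the 4 address bits, each of its n output
bits synthesizes to one 4-input LUT, which is where the four-taps-per-LUT
per-data-bit density comes from; the scaling accumulator then sums these
partial results bit-serially.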



--
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930 Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com

"They that give up essential liberty to obtain a little
temporary safety deserve neither liberty nor safety."
-Benjamin Franklin, 1759
 
Brad Smallridge wrote:
I have a board with three Spartan3s on it.
Right now, under sources in project, I have
the project name, then xc3s400-4pq208,
and under that
top1-behavioral(top1.vhd),
top2-behavioral(top2.vhd),
top3-behavioral(top3.vhd)

and stuf under those like the top1.ucf, etc.

So what happens if one of the Xilinx parts
gets upgraded or downgraded in speed or
size? Can I assign the xc3s400 spec to
each top level design?

Brad
I think that when you download a programming file (.bit file)
implemented for one speed grade into an FPGA of another speed grade,
you will get an error about the device ID number in iMPACT.
 
Chris Carlen wrote:
Hi:

I am reading the FAQ on Spartan3 here:

http://www.xilinx.com/products/spartan3/faq105_s3.pdf

which says only the XC3S50 is supported by WebPack 5.2i. I realize this is
an old version (though it's the one I am still using, since I had trouble
with 6.1i). Now we are at 7.1i, which of course indicates support for
Spartan3 here:

http://www.xilinx.com/ise/logic_design_prod/webpack.htm

but doesn't indicate the details about whether it supports larger
Spartan3 devices or not.

Specifically, I am considering XC3S400.

Does WebPack 7.1i support that or do I need to start spending $$$ ?


Thanks for input.


Good day!
I'm using an XC3S200 with the 7.1i WebPack, and the list of devices
indicates that up to XC3S1500 is supported.

-- Brian
 
<do_not_reply_to_this_addr@yahoo.com> wrote in message
news:1122935399.899122.35350@o13g2000cwo.googlegroups.com...
Humm... Why wouldn't Xilinx support this cable (I thought Digilent and
Xilinx have a close relationship) and if not Why wouldn't Xilinx
publish the protocols used by Chipscope server. Are they trying to make
money by selling their own expensive USB cable ?

Sumit
The Xilinx-Digilent cooperation is not so close.

Xilinx is NOT publishing the ChipScope server protocol and
Xilinx is NOT publishing the iMPACT server protocol,

because they may change those with every service pack and they do not want
additional hassle from 3rd parties. Maybe some other issues as well.

Yes, it looks like Xilinx is trying to make money on the overpriced USB
cable, which doesn't work very well. PCs with an LPT port are becoming
'hard to get' items, so the only official cable is the Xilinx USB cable,
which is very EXPENSIVE: it only contains a Cypress 68013 and a CoolRunner,
but costs $495.

Antti
 
"Vladislav Muravin" <muravinv@advantech.ca> wrote in message
news:h5pHe.780$z91.148816@news20.bellglobal.com...
I have a V2 3000 device, and I am using a BUFGMUX, the one located at the P7
location (for this synthesis, I do not lock them). This BUFGMUX
multiplexes between one external and one internal clock. I output all 4
signals of the BUFGMUX on test pins and I see that when S is '1', the
output is not equal to the I1 input !!!
I have verified, at least with FPGA Editor, that the connections of the
BUFGMUX are OK.

So, I am going for another board with another FPGA and I will also check
that the pins are not shorted, but any other suggestions from everybody
are always welcome.

Here's a suggestion: the V2 datasheet says that "As long as the presently
selected clock is High, any level change of S has no effect." Is that your
problem?
Cheers, Syms.
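For reference, a minimal instantiation sketch of the primitive under
discussion (signal names are illustrative):

// Virtex-II global clock multiplexer. Per the datasheet note quoted
// above, while the currently selected clock is High a change on S has
// no effect; the output only switches after that clock goes Low, so
// the changeover is glitch-free but not instantaneous.
module clk_select (
    input  wire clk_ext,   // external clock, selected when sel = 0
    input  wire clk_int,   // internal (e.g. DCM) clock, selected when sel = 1
    input  wire sel,
    output wire clk_out
);
    BUFGMUX clk_mux (
        .O  (clk_out),
        .I0 (clk_ext),
        .I1 (clk_int),
        .S  (sel)
    );
endmodule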
 
Why do you have to convert the design at all?

Would it not be worthwhile to learn VHDL or Verilog?

Of course, right NOW it would be the easiest step to convert your proven
schematic design.

But in the long run it might be better to be able to change the VHDL /
Verilog description.

How do you simulate your designs?

Rgds
André
 
Morpheus,

First, if you work in the avionics industry, then FAEs & sales reps will
probably jump on you as soon as you ask them this question, because
avionics can afford anything...

Second, if you want to convert a schematic to HDL and certification of the
design is critical, find a good FPGA design engineer to write the
high-level code, because instead of generating this high-level code and
synthesizing it (which is what you will probably do) you can synthesize the
schematic directly. It just seems weird to me to certify code generated by
a tool, because usually it is better to design high-level code than to
generate it; besides, it is easier to blame somebody than some tool, which,
just like the Matrix, may not be perfect :):):). (No hard feelings, Morph,
couldn't resist :) )

Vladislav


"morpheus" <saurster@gmail.com> wrote in message
news:1122950126.543093.39170@f14g2000cwb.googlegroups.com...
Does anyone know of a tool that actually converts a schematic entry
design to Verilog/VHDL? I know tools like Quartus can do it, but the
conversion is at the device level (correct me if I'm wrong). I need
conversion to maybe behavioural level (I know I might be dreaming).
I work in the avionics industry and certification of the design is
critical.
Any clues will be appreciated
cheers y'all
MORPHEUS
 
