EDK : FSL macros defined by Xilinx are wrong

Hi,

How is the interrupt signal specified? As low- or high-active, edge or level?

What does the MHS file look like?
 
I don't have an ACE CF on my board. Well, I have the ACE connector, but no
card reader/card.

"Peter Ryser" <peter.ryser@xilinx.com> schrieb im Newsbeitrag
news:42CC13E2.2060906@xilinx.com...
Since you have System ACE CF in your system that is the easiest way to
load both bitstream and ELF file. Please read up in the EDK
documentation how to generate an ACE file for your system.

- Peter


Andi wrote:
Hi,

Do you want to run the program out of SDRAM or out of BRAM? What is
the size of the C code? Do you use the EDK system as the top level, or do
you integrate the EDK system into your own top level?
The EDK tool flow supports fitting the C code (the ELF file) into the
BRAM during the bitgen procedure.

Then, when the FPGA is loaded, the C code starts directly.
 
abgoyal@gmail.com wrote:
Hi Allan,

I am looking for the same myself. I am using EDK 6.3/ISE 6.3i. I am also
told that there is no XBD included for the ML402 board even in EDK 7.1.

Allan, were you able to build the reference design without any errors?
For me, the reference design generates lots of errors about unavailable
versions of peripherals referenced in the design. What service packs
are you using?


Any help appreciated.

TIA,

Abhishek
I don't know who told you that the XBD for the ML402 isn't included in
EDK 7.1; it is included in my EDK 7.1 (SP1). I was also able to build it
and run it on my board without changing anything or experiencing any sort
of error. I guess this is not very helpful, but at least I can say that
it works using SP1...

cheers!
Johan


--
-----------------------------------------------
Johan Bernspång, xjohbex@xfoix.se
Research engineer

Swedish Defence Research Agency - FOI
Division of Command & Control Systems
Department of Electronic Warfare Systems

www.foi.se

Please remove the x's in the email address if
replying to me personally.
-----------------------------------------------
 
Hi Brad,

I am attempting to get data (column_in) from a fast
clock domain (clk_wr) to a slow clock domain (clk_rd).
I've always liked this paper as a good introduction to clock-domain
crossings:

http://www.sunburst-design.com/papers/CummingsSNUG2001SJ_AsyncClk.pdf

There are some other papers from the same author that are pretty good too.
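
For a single control bit crossing into the slow domain, the usual answer
is a two-flop synchronizer; here is a minimal Verilog sketch (generic
code with made-up names, not taken from the paper). Multi-bit data such
as column_in should instead be held stable in the clk_wr domain and
qualified by a synchronized enable, or passed through an asynchronous
FIFO, as the paper describes.

    // Double-flop synchronizer: brings a bit generated in the fast
    // clk_wr domain safely into the slow clk_rd domain.  Consider
    // marking both registers ASYNC_REG and placing them close together.
    module sync2 (
      input  wire clk_rd,    // destination (slow) clock
      input  wire async_in,  // level or toggle from the clk_wr domain
      output reg  sync_out   // safe to use in the clk_rd domain
    );
      reg meta;              // first stage -- may go metastable

      always @(posedge clk_rd) begin
        meta     <= async_in;
        sync_out <= meta;
      end
    endmodule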

Regards,

Paul Leventis
Altera Corp.
 
"Andi" <00andi@web.de> schrieb im Newsbeitrag
news:ee8f784.2@webx.sUN8CHnE...
Hi,

How is the interrupt signal specified? As low- or high-active, edge or level?

What does the MHS file look like?

It's level high. No device ISC is used, and the interrupt capture mode is
INTR_PASS_THRU.
Here is the relevant part of the MHS file.

BEGIN ppc405
PARAMETER INSTANCE = ppc405_0
PARAMETER HW_VER = 2.00.c
BUS_INTERFACE DPLB = plb
BUS_INTERFACE IPLB = plb
BUS_INTERFACE JTAGPPC = jtagppc_0_0
PORT RSTC405RESETCHIP = RSTC405RESETCHIP
PORT C405RSTSYSRESETREQ = C405RSTSYSRESETREQ
PORT C405RSTCORERESETREQ = C405RSTCORERESETREQ
PORT C405RSTCHIPRESETREQ = C405RSTCHIPRESETREQ
PORT PLBCLK = sys_clk_s
PORT RSTC405RESETCORE = RSTC405RESETCORE
PORT RSTC405RESETSYS = RSTC405RESETSYS
PORT CPMC405CLOCK = sys_clk_s
PORT EICC405EXTINPUTIRQ = EICC405EXTINPUTIRQ
END

BEGIN plb_decoder
PARAMETER INSTANCE = plb_decoder_0
PARAMETER HW_VER = 1.00.a
PARAMETER C_BASEADDR = 0x90000000
PARAMETER C_HIGHADDR = 0x900003ff
BUS_INTERFACE SPLB = plb
PORT PLB_Clk = sys_clk_s
PORT IP2INTC_Irpt = EICC405EXTINPUTIRQ
END
 
Martin,

We choose to show the products in their best mode, not their worst, so
that may be a valid assumption.

It may also be that we used the tools that are shipped with the product,
I just don't know.

Generally speaking, the synthesis tool may also be a factor in
performance variations. Hopefully over the large number of test cases,
the choice of tools becomes a minor effect, but sensitivity to tool
usage is definitely something very real.

I have heard from customers that at various times, one tool or another
has been "superior." Our policy is to share all performance
improvements in our synthesis tool with our partners, as we are not in
the business of selling synthesis tools, but rather use our own
synthesis tool to refine and evaluate our architectures.

Although for any individual customer one FPGA or the other may be
"superior" in terms of performance, they are close enough overall that
only by evaluating a large number of designs can the trend be seen.

But my posting was not so much about the speed debate, but more about
the overall product superiority: static power 1-5 watts less, SI for
ground bounce up to 8X less, added features (SSIO, DSP48, E-MACs, PPC,
MGT, FIFO-BRAM, FRAME_ECC, etc...).

Austin

Martin Thompson wrote:

Austin Lesea <austin@xilinx.com> writes:

snip

http://www.xilinx.com/products/virtex4/overview/performance.htm

is a good review of V4, which illustrates how we beat all other FPGAs
in EVERY category.



Interesting - why did you not use Synplify for the Altera side of
things - was it worse than Altera's synthesiser?

Cheers,
Martin
 
Kolja,

You are correct. The capacitance of the interconnect certainly
dominates as far as speed is concerned (over any advantage of a faster
transistor in SOI).

What is also interesting is that if you reduce the capacitance by 10%
(i.e. using lo-K), the speed only goes up by 5%. Thus, the effect of
lowering the K can be offset by pushing the transistor process in the
fabs to get all the speed back again (as a 5% faster transistor is easy
to do).
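
(Back-of-the-envelope illustration, using my own assumed split rather than
any Xilinx numbers: if wire and gate capacitance contribute about equally
to a typical path, $t \propto R\,(C_{wire} + C_{gate})$ and lo-K trims only
$C_{wire}$, so a 10% cut there is roughly a $0.5 \times 10\% = 5\%$ cut in
delay.)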

That is why we are neither for, nor against lo-K. For us, it just
doesn't matter! Toshiba uses lo-K. UMC does not (for V4). The
processes are adjusted for equivalent performance. And, there is hardly
any difference in dynamic power after all is said and done (one
datasheet covers both fabs).

Probably the biggest process technology improvement in V4 was the use of
triple oxide: the thick mid-ox device used for memory resulted in much
less static current, and also in superior resistance to SEU upset
(regular use of 90nm 6T cells for memory means the probability of upset
is worse than it was at 130nm). The thicker oxide also provided excellent
speed performance and low leakage for the pass-gates.

Additionally, the lifetime of the FPGA remains at 20 years, while Intel
is quoting a 7-year life for their 90nm processors. By using all 90nm
transistors, the reliability is compromised, as we are beginning to see
"wear out" effects for the technologies below 90nm. By keeping the
memory cells and pass gates at a thicker oxide, we are also using far
fewer 90nm transistors, increasing the reliability of the hardware
itself.

There are also substrate implantation techniques that can be used
against soft errors, which would cost far less than redesigning for SOI.

Austin


Kolja Sulimma wrote:

Austin Lesea schrieb:


In order to remove or minimize the variation in timing in SOI from the
floating wells, one needs to add taps. The addition of taps to every
well results in the area increasing dramatically. That makes the FPGA
cost too much, hence the process is not commercially viable. This has
been one of the reasons for its non-use.


Also, without knowing as many details as Austin, I suspect that in a
design as heavily dominated by interconnect as an FPGA, the area increase
results in longer wires, which increases capacitance and therefore power
consumption and delay. This offsets the two main advantages of SOI.

Kolja Sulimma
 
Martin Schoeberl wrote:

I'm playing around with sigma-delta ADCs and DACs for audio. It's amazing
how well this works without any active components. Just Rs and Cs.


The sigma-delta works really well, and the faster you clock it the more
equivalent bits you get. You'll have to parallel a bunch of pins to get
enough drive for an 8 ohm speaker, and you'll still end up with a
sizable loss due to the driver impedance. You could add a simple
transistor stage to each pin to boost the current up and still keep the
parts count low. The sigma-delta will drive a 600 ohm headset with no
problem (I did that with the shortwave radio demo shown on my website).
You can also drive a set of powered PC speakers through a single pin.
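
For reference, a first-order sigma-delta (pulse-density) DAC can be as
small as an accumulator whose carry out drives the pin; here is a minimal
Verilog sketch, assuming unsigned samples and an external RC low-pass on
the pin (the width and names are made up, not anyone's actual design):

    // First-order sigma-delta / pulse-density DAC.  The carry out of the
    // accumulator is the 1-bit stream; the external RC filter recovers
    // the analog value.  Clock it as fast as practical.
    module sd_dac #(parameter WIDTH = 12) (
      input  wire             clk,
      input  wire [WIDTH-1:0] din,   // unsigned audio sample
      output wire             dout   // to the pin (series R, shunt C)
    );
      reg [WIDTH:0] acc = 0;         // WIDTH-bit sum plus one carry bit

      always @(posedge clk)
        acc <= {1'b0, acc[WIDTH-1:0]} + {1'b0, din};  // drop old carry

      assign dout = acc[WIDTH];      // carry out = pulse-density stream
    endmodule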

--
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930 Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com

"They that give up essential liberty to obtain a little
temporary safety deserve neither liberty nor safety."
-Benjamin Franklin, 1759
 
Here it is:
http://www.altera.com/niosrenewal

--Alan
remove.prefix.gcalac@altera.com
 
Paul Leventis (at home) wrote:

<basically, "mine is bigger">

*Sigh*, here we go again. Raw device speed isn't the whole answer, and
neither is the list of device features. I could easily design a suite of
test circuits that could 'definitively' show either the Stratix II or the
Virtex-4 as being the faster device, depending on which one I wanted to
'win'. All that really proves is what I've said for the past 12 years,
which is: if you really want to wring out the most performance from a
device, you MUST tailor your design to that device. Doing so will get you
the maximum speed in /that/ device, but at the same time will generally
hurt the performance in the other devices it wasn't tailored to.

When deciding on a device, choose based on your comfort level with the
family and its tools, and how well the feature set of that chip augments
your design. Both are good products, and you won't go wrong with either
as long as you pay attention to the device architecture as you develop
your design.

--
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930 Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com

"They that give up essential liberty to obtain a little
temporary safety deserve neither liberty nor safety."
-Benjamin Franklin, 1759
 
"greenplanet" <greenplanet@hotmail.com> wrote in
news:1120716400.871859.69220@g49g2000cwa.googlegroups.com:

I've visited the XESS website; however, there's only one example for a
mouse and the source code is in Spanish. Anyway, I tried to figure
out what that code does and downloaded the bitstream to the FPGA;
however, it doesn't work. I am not sure whether the code is not
working or I have set something up wrong. Does anybody have experience
using the PS/2 port of the XSA-3S1000 board with XST 3.0?
If I just want to read the mouse, do I have to initialize the mouse
first? If so, how?
Yes, you have to initialize the mouse. Read this document for information
on the PS/2 mouse protocol: http://www.computer-engineering.org/ps2mouse/



--
----------------------------------------------------------------
Dave Van den Bout
XESS Corp.
PO Box 33091
Raleigh NC 27636
Phn: (919) 363-4695
Fax: (801) 749-6501
devb@xess.com
http://www.xess.com
 
Also kind of funny that they used a Quartus version three revisions old, and
the latest ISE and Synplify?

"Martin Thompson" <martin.j.thompson@trw.com> wrote in message
news:uackzkp64.fsf@trw.com...
Austin Lesea <austin@xilinx.com> writes:

snip
http://www.xilinx.com/products/virtex4/overview/performance.htm

is a good review of V4, which illustrates how we beat all other FPGAs
in EVERY category.


Interesting - why did you not use Synplify for the Altera side of
things - was it worse than Altera's synthesiser?

Cheers,
Martin

--
martin.j.thompson@trw.com
TRW Conekt, Solihull, UK
http://www.trw.com/conekt
 
Yes, I read the document about the PS/2 mouse protocol. I wonder whether
it is true that I must send all those commands shown in the document to
the mouse for initialization. In the example on the XESS website, it
only sends the $F4 command to enable the mouse.
 
Hi,
I am trying to use the plb_tft driver provided in the ML401 reference
design with an ML402 board. The reference design "slideshow" does not work
if I try to download it from the EDK environment, whereas if I use the
design stored on the CF card, everything works! The design stored on the
CF card uses system0.bit to configure the FPGA. I am not sure what it does,
and the source code for that design is not available either.

Can someone help me with this? Any suggestion is appreciated.

Thanks,
Krishna.
 
Also kind of funny that they used a Quartus version three revisions old,
and the latest ISE and Synplify?
Especially when the final (newer) Stratix II timing models have DSPs &
Memories running much faster than in older releases, eliminating some of X's
favourite "up to" numbers quoted...

Paul Leventis
Altera Corp.
 
I have no CF card reader. But I have 2 PROMs on my board, and I would like
to store the program there.

If all I have is 6 block RAMs remaining, is it possible at all to run the
program on this board? My design uses 38 of the 44 BRAMs available.
Can you advise me on a possible solution?


"Peter Ryser" <peter.ryser@xilinx.com> schrieb im Newsbeitrag
news:42CC13E2.2060906@xilinx.com...
Since you have System ACE CF in your system that is the easiest way to
load both bitstream and ELF file. Please read up in the EDK
documentation how to generate an ACE file for your system.

- Peter


Andi wrote:
Hi,

Do you want to run the program out of SDRAM or out of BRAM? What is
the size of the C code? Do you use the EDK system as the top level, or do
you integrate the EDK system into your own top level?
The EDK tool flow supports fitting the C code (the ELF file) into the
BRAM during the bitgen procedure.

Then, when the FPGA is loaded, the C code starts directly.
 
I'm using EDK 6.3 + SP2, along with ISE 6.3 + SP3 + IP updates 4 + XFFT
patch.

Yes, I had no problem building the reference design, and no problems
adding my own PLB peripherals. I'm working on a sonar beamformer
application using PLB peripherals with xfft3_1. My problem was that the
reference design included many cores that I didn't need, and my PLB
peripherals were getting pretty big (big FFTs), so I wanted to strip down
the reference design. However, the resets and clocks for different parts
of the reference design were all a bit coupled (misc_logic,
sys_proc_reset etc.), so I initially had difficulty separating things
out. Because I couldn't find an XBD file, last week I started from
scratch with a new XPS project, adding only the IP I needed, and making
my own entities with DCMs for the system, PLB and DDR SDRAM clocks. I
used the reference design's constraints file (system.ucf) as a guide;
it's working fine.
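
In case it helps anyone doing the same, here is a rough Verilog sketch of
what one of those clock-generation entities can look like (my own naming
and an assumed 100 MHz board clock, not the actual reference design code):

    // DCM + BUFG wrapper generating a buffered system/PLB clock.
    module sys_clk_gen (
      input  wire clk_in,      // board oscillator, assumed 100 MHz
      input  wire rst,
      output wire sys_clk_s,   // buffered clock fed to the EDK system
      output wire locked
    );
      wire clk0_unbuf;

      DCM #(
        .CLKIN_PERIOD (10.0),
        .CLK_FEEDBACK ("1X")
      ) dcm_sys (
        .CLKIN    (clk_in),
        .CLKFB    (sys_clk_s),   // feedback from the buffered clock
        .RST      (rst),
        .CLK0     (clk0_unbuf),
        .LOCKED   (locked),
        .PSEN     (1'b0),        // phase-shift interface unused
        .PSCLK    (1'b0),
        .PSINCDEC (1'b0),
        .DSSEN    (1'b0)
      );

      BUFG bufg_sys (.I(clk0_unbuf), .O(sys_clk_s));
    endmodule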

I think the reference design uses some deprecated cores. The only thing I
can think of is that perhaps you accidentally changed these to newer
versions which aren't compatible. In Xilinx Platform Studio, did you go
into 'Add/Edit Cores' and accidentally upgrade these cores?

Best regards

Allan Willcox
 
"Ray Andraka" <ray@andraka.com> schrieb im Newsbeitrag
news:YIgze.27303$FP2.14627@lakeread03...

and how well the feature set of that chip augments your design. Both are
good products, and
you won't go wrong with either as long as you pay attention to the
device architecture as you
develop your design.
Amen.
Hail to reverend Andraka

SCNR.

Regards
Falk
 
Ray Andraka wrote:
Paul Leventis (at home) wrote:

basically, "mine is bigger"

*Sigh*, here we go again.
Ray -- please don't take a job working for any of the FPGA vendors!
Your vendor-independent voice of reason is sorely needed here.

I'm sure I speak for many when I say that I'm bored of the A vs X
pissing match. I don't care which FPGAs were the first ones built on a
90 nm process! I don't care that a part is going to be $2 each when
purchased in quantities of a HALF MILLION in 2007. (Wish I did care,
but that's a different story.) (Is the config PROM gonna be twenty
cents?)

Users want:

a) tools that work
b) parts we can buy in the quantities we need in the timeframe required
by our schedules.

I think the best thing about the availability of free tools is that if
we're not pushing the envelope (read: using the latest/greatest), we
can keep all vendor tools installed on our development machines and
choose the parts that make the most sense.

-a
-----------------------
Andy Peters
Tucson, AZ
devel at latke dot net
 
Ray,

The settings package idea is a good one. Thanks for the tip. I guess
there's always a workaround.

-a
 
