EDK : FSL macros defined by Xilinx are wrong

Steve Knapp (Xilinx Spartan-3 Generation FPGAs) wrote:

Jim Granville wrote:

Steve Knapp (Xilinx Spartan-3 Generation FPGAs) wrote:


[ ... snip ...]


Spartan-3E FPGAs behave differently.

Whilst we are on this subject, at this level of detail,
can you give some info on how the Spartan-3E differs, and why?

-jg


The only difference is in the DLL phase shifter feature included with
the DCM. Most everything else is identical between Spartan-3 and
Spartan-3E DCMs.

There's a summary of the differences in the following Answer Record,
but I'll follow up here with the abbreviated version.
http://www.xilinx.com/xlnx/xil_ans_display.jsp?getPagePath=23004

In FIXED phase shift mode, the difference depends on which version of
ISE you are using, as described in the data sheet and the Answer
Record. Physically, the Spartan-3 DLL performs a fixed phase shift by
as much as a full clock cycle forward or backward. The Spartan-3E DLL
performs a fixed phase shift by as much as _half_ a clock cycle forward
or backward. For nearly all applications, the Spartan-3E half-clock
shift provides the same flexibility as the full clock shift, but with
significantly less silicon.

In VARIABLE phase shift mode, the difference is that the Spartan-3 DLL
performs a variable phase shift in fractions of a clock period, 1/256th
of a full circle. Think degrees, angles, radians, using your favorite
angular unit. Extra logic within the Spartan-3 DLL calculates the
delay line change. The Spartan-3E DLL also performs a variable phase
shift using a delay line. However, in Spartan-3E, you have raw control
over the delay. The shift is always in time, not in some angular unit.

Thanks.
When you say "time" for the 3E, do you mean calibrated time, or some
multiple of a ~30-60 ps delay chain?
I can see that the extra logic in the -3 should (?) track temp/Vcc
changes - or does it grab the multiplier only when the DCM is reset?

How does the -3E manage temp/vcc/process variations, or does the
user do that ?

-jg
 
Jim,

The tap state machine is always trying to keep one entire period in one
of the delay lines. This way, the unit is always self-calibrating: it
always "knows" how many taps equal one period.

So when you ask for 23/256 of a period shift, the arithmetic unit solves
for the closest tap (truncating).
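For example (made-up numbers, just to illustrate the truncation): with a
100 MHz clock (10 ns period) and, say, 200 taps measured across one
period, 23/256 of a period works out to (23/256) * 200 = 17.97 taps,
which truncates to 17 taps - about 850 ps of the requested ~898 ps.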

To make the silicon take up less space, the delay line itself is
optimized to not change over the PVT (as much as it would otherwise).

How this is done is the subject of issued patents, so those who are
curious can look them up.

Austin

 
dale.prather@gmail.com wrote:
John,
Thank you for your input. It has helped tremendously.

Hopefully, just one more question. When I do this:

BUS_INTERFACE SFSL0 = my_fsl_incoming
BUS_INTERFACE MFSL0 = my_fsl_outgoing

I get these two errors, which I think is at the root of the confusion.


ERROR:MDT - fsl_v20 (my_fsl_incoming) -
C:\Xil_Proj\Bal_Con\Current_Sense\PWM2\system.mhs line 197 -
must have atleast 1 master assigned!

ERROR:MDT - fsl_v20 (my_fsl_outgoing) -
C:\Xil_Proj\Bal_Con\Current_Sense\PWM2\system.mhs line 210 -
must have atleast 1 slave assigned!

For now, to fix it, I've done this:

BUS_INTERFACE MFSL0 = my_fsl_incoming
BUS_INTERFACE SFSL0 = my_fsl_incoming
BUS_INTERFACE SFSL1 = my_fsl_outgoing
BUS_INTERFACE MFSL1 = my_fsl_outgoing

Any comments on the errors or my solution?

Thanks,
Dale
Hi,

In your fix you have connected two outgoing FSL ports on MicroBlaze to two
incoming FSL ports, so this MicroBlaze can send data to itself.
Not sure if this is what you want to do.

Since you want to connect to the outside world of EDK, the tools don't
see a Master-Slave pair in your first attempt. It's the DRC in the tools
that errors out.

You can't use BUS_INTERFACE for the connection since there is no
slave/master BUS_INTERFACE on the external side.
You must connect each signal individually, as John proposed in his first
reply - roughly along these lines:
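# Rough MHS sketch, not tested - the exact port names are in fsl_v20's MPD
# for your EDK version. MicroBlaze keeps BUS_INTERFACE SFSL0 = my_fsl_incoming;
# the FSL's master-side signals are made external so that logic outside EDK
# can drive the FIFO.

BEGIN fsl_v20
 PARAMETER INSTANCE = my_fsl_incoming
 PARAMETER HW_VER = 2.00.a
 PORT FSL_Clk = sys_clk_s
 PORT SYS_Rst = sys_rst_s
 PORT FSL_M_Data = ext_fsl_data
 PORT FSL_M_Control = ext_fsl_control
 PORT FSL_M_Write = ext_fsl_write
 PORT FSL_M_Full = ext_fsl_full
END

# ...and in the global PORT section of the MHS, so the signals become ports
# of the EDK submodule (directions and widths per the MPD):
PORT ext_fsl_data = ext_fsl_data, DIR = I, VEC = [0:31]
PORT ext_fsl_control = ext_fsl_control, DIR = I
PORT ext_fsl_write = ext_fsl_write, DIR = I
PORT ext_fsl_full = ext_fsl_full, DIR = O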

Göran
 
"Andy Peters" <Bassman59a@yahoo.com> wrote in message
news:1144276764.165125.290340@z34g2000cwc.googlegroups.com...
Anonymous wrote:
Dumb question: since USB is just a two-wire serial interface, and all the
USB solutions I've seen are simple (though speedy) microcontrollers, why
can't the USB be inside the FPGA? It seems like you could instantiate a
small micro running at 50 MHz or so, with code in a couple of block RAMs,
to do what the FX2, for example, does. Apparently that doesn't exist, so
there must be some reason?

It's a little more complex than simply two wires between two devices.

USB signalling is half-duplex differential, and high-speed signalling
is different than full-speed and low-speed. Also, there are some
instances where single-ended signalling is used and the driver must be
capable of doing this and the receiver has to be able to detect these
states.

-a
I agree the software is complicated. (Way too complicated in my opinion.)
But all the solutions out there seem to be built around a little 8-bit
micro. You don't think it's silly to have a 10 million gate FPGA sitting
next to an 8051?

There must be a real reason for it. Maybe I'll try it when I get some time.

-Clark
 
Jim,

I do not think it is any different in this regard, but Steve will
correct me if I am wrong,

Austin

Jim Granville wrote:

Austin Lesea wrote:

Jim,

The tap state machine is always trying to keep one entire period in
one of the delay lines. This way, the unit is always self-calibrating:
it always "knows" how many taps equal one period.

So when you ask for 23/256 of a period shift, the arithmetic unit
solves for the closest tap (truncating).

To make the silicon take up less space, the delay line itself is
optimized to not change over the PVT (as much as it would otherwise).

How this is done is the subject of issued patents, so those who are
curious can look them up.


Yes, I can follow that for the -3, but Steve was suggesting the 3E is
slightly different, so I wanted to clarify the details.

-jg



Steve Knapp (Xilinx Spartan-3 Generation FPGAs) wrote:

The Spartan-3E DLL also performs a variable phase
shift using a delay line. However, in Spartan-3E, you have raw control
over the delay. The shift is always in time, not in some angular unit.
 
PeterC wrote:
Thank you Brian - pointers to answer records and the past thread
greatly appreciated.
Sure; I can't find my folder of DCM simulation notes right now, but
searching the Xilinx Answer Records for "DCM" or "DCM simulation"
will turn up a boatload of the DCM simulation quirks; I've listed some
more of them below.

You may also want to try running a post-PAR timing simulation to see
what the DCM delay looks like with the back-annotated delays.

11067 SimPrim - ModelSim Simulations: Input and Output clocks
of the DCM and CLKDLL models do not appear to be de-skewed

13213 UniSim, SimPrim, Simulation - How do I simulate the DCM
without connecting the CLK Feedback (CLKFB) port? (VHDL)

11344 UniSim - Variables passed to GENERICs in functional simulation
are not working properly (VHDL)

18390 7.1i Timing Analyzer/TRACE - Changing the DESKEW_ADJUST
parameter does not affect the DCM value (Tdcmino)

20845 6.3i UniSim, Simulation- There is a Delta-cycle difference
between clk0 and clk2x in the DCM model

22064 7.1i UniSim, Simulation - There is a Delta-cycle difference
between CLK0 and CLKDV in the DCM model

6362 UniSim, SimPrim, Simulation - When I simulate a DCM or CLKDLL,
the LOCKED signal does not activate unless simulation is run in
ps time resolution

18115 8.1i/7.1i Simulation - DCM outputs are "0" and the DCM does not
lock (UniSim and SimPrim VHDL models) (DCM reset requirement)

19005 Virtex-II/Virtex-II Pro, Clocking Wizard - The LOCKED signal
does not go high for cascaded DCMs when CLKDV is used

have fun,
Brian
 
Hi

This example (xapp807) doesn't use the lwIP stack. It uses the uIP stack, which is also free.
 
I don't think I can help with your problem, but I'm happy to complain
about the option along with you ;-). I've seen the same problem with
registers intended for output IOBs being removed. It seems that the
option fixes some of the removal but not all of it, because my output
files definitely have more registers when the removal is disabled. You
turn it on thinking that it will optimize your code, and then you are
left without enough registers to put one in each IOB. Hence, your OFFSET
timing all fails and you wonder what the heck happened. I think this
optimization should happen after mapping, and I'm glad that Xilinx is
finally realizing that most optimizations should happen in the mapper.

Perhaps some other XST option overrides some of the removal or makes
the registers unusable in the IOB, such as the optimize-for-speed
parameter. Here is my option set, which is removing more than I think
it should, or at least not allowing IOB use (see the note after the
option list).

set -tmpdir ./xst/projnav.tmp
set -xsthdpdir ./xst
run
-ifn ql5064_interface.prj
-ifmt mixed
-ofn ql5064_interface
-ofmt NGC
-p xc4vlx100-10-ff1513
-top ql5064_interface
-opt_mode Speed
-opt_level 2
-iuc NO
-lso ql5064_interface.lso
-keep_hierarchy SOFT
-rtlview Yes
-glob_opt AllClockNets
-read_cores YES
-write_timing_constraints NO
-cross_clock_analysis NO
-hierarchy_separator /
-bus_delimiter <>
-case maintain
-slice_utilization_ratio 100
-dsp_utilization_ratio 100
-verilog2001 YES
-fsm_extract YES -fsm_encoding Auto
-safe_implementation Yes
-fsm_style lut
-ram_extract Yes
-ram_style Auto
-rom_extract Yes
-mux_style Auto
-decoder_extract YES
-priority_extract YES
-shreg_extract YES
-shift_extract YES
-xor_collapse YES
-rom_style Auto
-mux_extract YES
-resource_sharing YES
-use_dsp48 auto
-iobuf NO
-max_fanout 10000
-bufg 32
-bufr 48
-register_duplication YES
-register_balancing No
-slice_packing YES
-optimize_primitives YES
-use_clock_enable Auto
-use_sync_set Auto
-use_sync_reset Auto
-iob auto
-equivalent_register_removal NO
-slice_utilization_ratio_maxmargin 5
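One workaround that sometimes helps (just a sketch - "dout_reg" is a
made-up signal name, and it only works if the register actually survives
into the netlist) is to pin the flip-flop into the IOB explicitly with
the IOB attribute in the VHDL:

attribute IOB : string;
attribute IOB of dout_reg : signal is "TRUE";

or the equivalent INST "dout_reg" IOB = TRUE; constraint in the UCF.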
 
Hi Mitch


Mich wrote:

This is what I have done. First I added this in the VHDL:
test_I : in std_logic_vector (7 downto 0);
test_O : out std_logic_vector (7 downto 0);
test_T : out std_logic_vector (7 downto 0);
and this
s_test <= test_I;
test_O <= s_test;
test_T <= "00001111";
The VHDL seems OK, except for the endianness (the EDK convention is 0 to 7!)


then I have added this in the MPD file
PORT test = "", DIR = INOUT, ENABLE=SINGLE, THREE_STATE=TRUE, VEC =
[7:0]
ERROR: this should be ENABLE = MULTI !!!!!!! And maybe the endianness should be [0:7] as well.

As an example take a look at OPB_DDR MPD:
PORT DDR_DQS = "", DIR = IO, VEC = [0:((C_DDR_DWIDTH/8)-1)],
THREE_STATE = TRUE, ENABLE = MULTI, PERMIT = BASE_USER, DESC = 'DDR
Data Strobe', IO_IF = ddr_0, IO_IS = data_strobe
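So the "test" port line would presumably end up something like this
(untested, and the exact parameter set may differ for your EDK version):

PORT test = "", DIR = IO, VEC = [0:7], THREE_STATE = TRUE, ENABLE = MULTI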

DELETE THIS PART!!
PORT test_O = "", DIR = OUT, VEC = [7:0]
PORT test_T = "", DIR = OUT, VEC = [7:0]

and this in the ucf file
Net IO_0_test_pin Loc = "N6";
Net IO_0_test_pin IOSTANDARD = LVTTL;
The UCF seems fine, but only for ONE test pin; for multiple pins look at
this example:

NET "gmii_txd<0>" LOC = "M4";
NET "gmii_txd<1>" LOC = "N4";
NET "gmii_txd<2>" LOC = "L1";
NET "gmii_txd<3>" LOC = "M1";
NET "gmii_txd<4>" LOC = "P2";
NET "gmii_txd<5>" LOC = "N5";
NET "gmii_txd<6>" LOC = "P4";
NET "gmii_txd<7>" LOC = "P5";

Cheers,

Guru
 
simon.stockton@baesystems.com schrieb:
Dear All,

I am a little confused with regards to the clocking arrangement
associated with the Xilinx Rocket IO MGT.

I want to use the MGT in Half Rate Mode with no 8B/10B encoding /
decoding with a byte wide interface (actually 10-bit wide due to not
using the 8B/10B).

I have the following clocks (as per page 54 of the Rocket IO User Guide
[Virtex-II Pro]):

REFCLK > tied to the pre-DCM input clock (clkin)
RXUSRCLK & TXUSRCLK > tied to the DCM output clock (div2)
RXUSRCLK2 & TXUSRCLK2 > tied to the DCM output clock (clk0)

My question is which clock do I use to clock my data TO the MGT and
conversely FROM the MGT?

The user guide says "Each edge of the slower clock must align with the
falling edge of the faster clock"; as a result, it suggests inverting
TXUSRCLK2 & RXUSRCLK2 so that clk0 can be used instead of clk180.

"Since clk0 is needed for feedback, it can be used instead of clk180 to
clock USRCLK or USRCLK2 of the transceiver with the use of the
transceiver's local inverter, saving a global buffer (BUFG)."

My second question is: if the answer to question 1 is TXUSRCLK2 &
RXUSRCLK2, as suggested in the User Guide, is it permissible to invert
RXUSRCLK & TXUSRCLK instead of inverting TXUSRCLK2 & RXUSRCLK2, to
assist with clock alignment in other areas of my design?

Many Thanks,

Simon
Hello Simon,

1st question: for your user interface you should use USRCLK2. Its
frequency depends on the data width.
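As a rough VHDL sketch (the signal names here are made up), the
fabric-side data registers then sit in the USRCLK2 domains:

-- transmit side: data handed to the MGT is registered on TXUSRCLK2
process (txusrclk2)
begin
  if rising_edge(txusrclk2) then
    txdata_to_mgt <= tx_payload;
  end if;
end process;

-- receive side: data from the MGT is captured on RXUSRCLK2
process (rxusrclk2)
begin
  if rising_edge(rxusrclk2) then
    rx_payload <= rxdata_from_mgt;
  end if;
end process;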

2nd question: the relationship between REFCLK and USRCLK is not so
critical, because you can use the FIFOs and the clock correction codes.
There is generally a delta time between REFCLK and USRCLK across the
DCM.

Helmut
 
lecroy7200@chek.com wrote:
I have the full license for Quartus and can't compile a simple project
targeting the GX device until I enable the talkback feature. What
gives? Suppose the PC is not on the internet, you can't use the GX
parts?
Try it and see.
Section 9 of the License Agreement says
that it stores the xml files on disk
in that case. It doesn't say it stops working.

-- Mike Treseler
 
Hi David,

I have a similar project - a camera on a Virtex4FX12. I have Memec's FX12
MiniModule, and the TEMAC reference design uses the hard LL_TEMAC core
(which has a large FIFO - about 50% of the FX12's BRAMs) and the open
source lwIP stack version 1.01.a (with some kind of workaround to work
with the TEMAC). I did not test the performance of this system, but I
believe it could be comparable to your design. I don't like my design
because it uses too much BRAM, and I cannot make it work in EDK 8.1 (for
now). I also wanted to add a ChipScope core to it, but there was simply
not enough BRAM :( GRRRR.
I took a look at xapp807 (UC-II), but I don't know how to add other
peripherals (of which I have a lot), and its FIFOs are only 1 BRAM block
deep - resulting in poor performance (reported 50 Mbps).
A hard TEMAC with sizable FIFOs and an open source lwIP stack would be a
very handy solution.

How do you stream the data to the processor?
I am trying to make an OPB master peripheral that writes the data to the
OPB DDR, so the processor can then access it (maybe not suitable for
real-time streaming, but necessary to get all of the 100-200 frames
needed).

I would like to get any solution for this problem, so HELP WANTED!

Guru
 
Anonymous wrote:
"Andy Peters" <Bassman59a@yahoo.com> wrote in message
news:1144276764.165125.290340@z34g2000cwc.googlegroups.com...
[ ... snip ...]


I agree the software is complicated. (Way too complicated in my opinion.)
Where did I say anything about SOFTWARE? I pointed out that the
hardware interface is more than simply two wires.

But all the solutions out there seem to be built around a little 8-bit
micro. You don't think it's silly to have a 10 million gate FPGA sitting
next to an 8051?

There must be a real reason for it. Maybe I'll try it when I get some time.
You could get a PHY and put that next to your FPGA.

-a
 
Anonymous wrote:
I agree the software is complicated. (Way too complicated in my opinion.)
But all the solutions out there seem to be built around a little 8-bit
micro. You don't think it's silly to have a 10 million gate FPGA sitting
next to an 8051?
That depends on your mindset.
If you really want a single chip "at all costs", then yes, pull the USB
into the FPGA - the FPGA vendors will love you :)

but if you want a reliable, cheap, easy-to-fault-find system, then a
little distributed intelligence can be a very good thing.
Keep the expensive FPGA fabric for what it is best at.....

There must be a real reason for it. Maybe I'll try it when I get some time.
Try this :
The USB uC's out there can directly, and correctly, drive the USB
cable, and are proven to do so.

-jg
 
Helmut

Thanks for the reply.

Are you implying from your answer 2 that it is permissible to invert
USRCLK instead of USRCLK2, still conforming to "Each edge of the slower
clock must align with the falling edge of the faster clock" but not to
"Since clk0 is needed for feedback, it can be used instead of clk180 to
clock USRCLK2 of the transceiver with the use of the transceiver's local
inverter, saving a global buffer (BUFG)."?

Simon
 
Hello David,

You don't need to use jumbo frames to utilize the full bandwidth of
Gigabit Ethernet. With the Virtex-4 EMACs we got more than 120 MB/s of
data throughput with plain Ethernet frames, and the PC was able to
handle this data stream without any problems. We built the streaming
hardware within a Virtex4FX20 and developed a custom protocol driver for
the MS Windows Ethernet stack. During the testing phase we saw no
errors/retries. With a TCP/IP stack you will get less bandwidth, since
each frame needs a TCP/IP header, but I still don't think it is worth
moving to a jumbo-frame solution.
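As a rough sanity check on those numbers: gigabit Ethernet is 125 MB/s on
the wire, and each standard 1500-byte payload carries about 38 bytes of
overhead (preamble, header, FCS, inter-frame gap), so the best case with
standard frames is roughly 1500/1538 * 125 = ~122 MB/s. 120 MB/s is
already close to that ceiling, which is why jumbo frames buy relatively
little here.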

With best regards,
Vladimir S. Mirgorodsky

David wrote:
Hi

I'm evaluating a Gigabit Ethernet design that uses the hard TEMAC
embedded in the Virtex-4 FX (ML403 evaluation board) for fast image
transmission.

The GSRD reference design (xapp546) is my best option, but it occupies
79% of the slices, I need space for more components, and the Treck
TCP/IP stack it uses is an evaluation version. There is also the TEMAC
UltraController-II option, but it seems the PowerPC processor of the
Virtex-4 can then not be used for anything else, and I don't know
whether the uIP TCP/IP stack used in that design supports jumbo frames
like the Treck stack does. These jumbo frames are needed for maximum
performance at gigabit rates.

Does somebody have an easy solution to this problem? Another design?

Thank you very much.
 
Jim,

Agreed. If you have to have a substantial core of soft logic to support
that interface in the FPGA, it had better be worth it. If the FPGA has
hardened cores for most, or part of that interface, it makes it a
slightly better proposition.

It is all systems engineering.

I admit that when I sit down to do that job (very rarely now), I pick
the components that will:

- do the best job
- cause me the least grief (both in software/coding and signal
integrity/support)
- meet the cost objectives

When all the marketing hype is said and done, the hard work is just
begun for the engineer.

Successful completion and testing of the prototype is an important step.

And then manufacturing can be another real trial.

So it ain't over till the customer is paying (and happy).

Anything that gets you to market faster is a real plus. The systems
engineer can make or break a project by their decisions.

Austin

Jim Granville wrote:

Austin Lesea wrote:

All,

When we recently did a USB interface to the FPGA, we looked at the USB
interface parts that were out there, their features, and their costs.

We decided on a complete module (connector, and all) just because it
was - 1. incredibly cheap, 2. useful (it has its own 8 bit uP to
take care of everything we would ever need), and 3. it is done, and
working (one less thing to do).

Tightly integrating the USB into the FPGA has about 0 benefit. It is
not like having an ethernet port, or a 6.25 Gbs serial link, or PCI
express, or any of a number of high bandwidth interfaces where tight
coupling just makes sense.


True, tho I'd say that Ethernet is moving into the same category as
you have placed USB. Not GBit ethernet, but certainly vanilla 10/100,
where there are smarter/cheaper PHY included options to choose from.

-jg
 
"Anonymous" <someone@microsoft.com> wrote in message
news:NvfZf.78923$%84.17918@tornado.southeast.rr.com...
"Felix Bertram" <flx@bertram-family.com> wrote in message
news:49l64gFp3eu7U1@individual.net...
I agree the software is complicated. (Way too complicated in my
opinion.)

if "software" is referring to the firmware: this is really not too
complicated. Have a look here:
* www.usb-by-example.com
* www.lvr.com

But all the solutions out there seem to be built around a little 8-bit
micro. You don't think it's silly to have a 10 million gate FPGA
sitting
next to an 8051?

disagreed. There are two types of data to be very clearly separated:

* asynchronous data: this is all the USB device enumeration and control
stuff. This is low bandwidth, most of it happens only during device
attachment, and this is quite simple. An 8051 is still too complex to
handle this; there are designs out there using a simple state machine.

* isochronous data: this is all the traffic your application requires.
In case you are streaming high bandwidth data and you need to do some
processing on it, an FPGA might be a good solution. You will usually not
want to pass 480Mbps of data through a CPU. Think of audio or video
applications, USB protocol analyzers, ...


Any comments welcome,
best regards,


Felix
--
Dipl.-Ing. Felix Bertram
http://www.bertram-family.com/felix

I guess my point was, if you look at an FX2, for example, all I see is an
8-bit micro, a little bit of memory, and some relatively simple FIFO
hardware. All of this seems trivial inside a Virtex-4, yet most V4 designs
I've seen have the USB outside the FPGA. Maybe that's so they can load the
FPGA at power up, but it seems like if they have flash memory anyway,
there's no real advantage to USB outside the FPGA.

-Clark
I have to admit that I find some of the replies to what I thought was a dumb
question a little odd. It's almost like some folks take personal offense at
the notion of using a couple hundred slices and a couple of block RAMs to
implement a USB interface inside the FPGA. (The smallest V4FX has 12,000
logic cells and 36 block RAMs.) I think most of the counterarguments could
apply to any other peripheral: why a UART? Why Ethernet? Why even a PPC
core when you can get a better processor discretely?

If I had a solution that:
1. Integrated into EDK such that it adds like any other peripheral. The IP
should contain the micro, firmware, and the interface to the CPU.
2. Integrated into the board support package so that I can build my chip,
carry a few files over to my linux source and be able to compile it into the
linux kernel.
3. Stock linux and windows driver support.
4. Netusb compatibility such that I can take my usb master port and connect
it to any other port to create a network link.
5. USB1.1 initially but eventually 2.0.

I would gladly use it over any discrete solution that I constantly have to
worry about obsolescence or other supply problems with, not to mention the
extra size, cost (part plus ordering plus handling), and power. Who hasn't
had a board build delayed because of a back order on some small part like
a USB chip?

-Clark
 
Salil Raje wrote:
Hi -

Hmm.. Speaking for PlanAhead, this is a setup that we definitely do not test
on.
Not sure if I can help you, but your answers to a couple of questions may
give us a clue:

1. Are you running on a notebook?
No, it's a workstation: a dual-core Intel CPU with 2 GB of RAM.

2. Does it hang consistently or semi-repeatably? If so, how? CPU remains
idle or is it pegged?
It's not fully repeatable. I mean it always crashes, but not always at
the same point, and it happens quite fast. I think it's always related
to a GUI action. I mean, if I don't do anything with it it will stay
fine (i.e. it still refreshes the window when another window passes
over), but possibly at the next button click or action on a text field,
it will just stop responding.

AFAIK, the CPU is idle when it's frozen, but I'll need to confirm that
on Monday.


BTW, can you download 8.1.5 at www.xilinx.com/planahead and see if you still
get this problem?
I'll try that on Monday.


Sylvain
 
Amit,

All you have to do is declare the pins related to the other port as
external. I am assuming that your top-level design is in ISE and your
EDK subsystem is an instantiation in the top-level design. In the MHS,
that looks roughly like the sketch below.
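# Rough sketch only - the port and parameter names come from bram_block's
# MPD, so check them against your EDK version. Port A stays on the DSOCM
# side; port B is brought out of the EDK subsystem for your own fabric
# logic.

BEGIN bram_block
 PARAMETER INSTANCE = docm_bram
 PARAMETER HW_VER = 1.00.a
 BUS_INTERFACE PORTA = docm_porta
 PORT BRAM_Rst_B = net_gnd
 PORT BRAM_Clk_B = user_bram_clk
 PORT BRAM_EN_B = user_bram_en
 PORT BRAM_WEN_B = user_bram_we
 PORT BRAM_Addr_B = user_bram_addr
 PORT BRAM_Din_B = user_bram_din
 PORT BRAM_Dout_B = user_bram_dout
END

# ...plus matching entries in the global PORT section of the MHS (with the
# DIR/VEC settings from the MPD) so the user_bram_* signals become ports of
# the EDK submodule that you can wire up in your ISE top level.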

/Mikhail


"amit" <amitshukla1979@gmail.com> wrote in message
news:1144410286.775287.66600@i39g2000cwa.googlegroups.com...
Hi

I am trying to implement a shared memory interface between PPC and FPGA
fabric. I am using EDK to create a dual port RAM and connect it to a
DSOCM controller. I have been able to write to BlockRAM from my
application code.
My question is: how do I connect the other port of the BRAM to my FPGA
design?
Should the HDL module be added as a core with the "Import Peripheral"
utility?
If so, then which bus should it connect to?

Thanks
Amit
 
