EDK : FSL macros defined by Xilinx are wrong

you may also reply to nara_chak45@yahoo.com.

Thanks and Regards,
Chakra.
 
Is it a hard and fast rule that we map the signals/ports of the user
logic to the ports of the OPB/PLB, or is it enough if we can map them
to the IPIF signals?

Say the user logic has some ports, namely clk and reset of std_logic, and
input and output of std_logic_vector. Now is it enough if we map these
ports to the IPIF, which is the negotiator between the OPB and the user
logic?

The question is: is it necessary to map reset to OPB_Rst, clk to
OPB_CLK, etc., or is it enough if we map clk to Bus2IP_clk, reset to
Bus2IP_Reset, and input and output to Bus2IP_Dbus and
IP2Bus_Dbus?

I would greatly appreciate it if someone could paint a clear picture of
this issue.

Thanks and Regards,
Chak.
 
Peter Alfke wrote:

Remember, any circuit that does not work close to its speed limit
represents waste.
Peter Alfke



Peter, while this is true from a device utilization standpoint, there are
also development time, life cycle costs, etc. to consider. For someone
who is not well versed in the nuances, this sometimes significant cost
can weigh in favor of a larger design clocked at a relatively slow clock.

--
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930 Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com

"They that give up essential liberty to obtain a little
temporary safety deserve neither liberty nor safety."
-Benjamin Franklin, 1759
 
On 4 May 2005 10:28:15 -0700, "Peter Alfke" <peter@xilinx.com> wrote:

Remember, any circuit that does not work close to its speed limit
represents waste.
Peter Alfke
Designing close to the limit is a nice idea. But unless the part has
been completely and correctly characterized by the vendor, designing
too close to its speed limit can be fatal. Having been burnt by speed
files that changed for the worse after I'd completed a design, I now
try to keep a healthy margin between my design requirements and the
speed limit du jour.

Bob Perlman
Cambrian Design Works
 
Maybe I misspoke. I meant to say that a circuit that runs at a fraction
of its speed capability can be made to do multiple jobs sequentially.
That obviously only applies when the designer runs the circuitry at
half or quarter speed or less. Only then can you seriously think about
time-sharing or time multiplexing.

it's good to have friends who watch over me :)
Peter Alfke
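Peter's time-sharing point can be illustrated with a toy model (a Python sketch; the channel data and coefficients are invented for the example): a circuit that can clock four times faster than the sample rate lets one multiplier do the work of four.

```python
# Time-multiplexing sketch: a datapath capable of 4x the sample rate
# can serve 4 independent channels with a single shared multiplier.
# Channel samples and coefficients below are made up for illustration.

samples = {0: [1, 2], 1: [3, 4], 2: [5, 6], 3: [7, 8]}
coeff = {0: 10, 1: 20, 2: 30, 3: 40}

results = {ch: [] for ch in samples}
mult_ops = 0

for t in range(2):            # one sample period per t
    for ch in range(4):       # 4 fast-clock cycles per sample period
        mult_ops += 1         # the single shared multiplier fires once
        results[ch].append(samples[ch][t] * coeff[ch])

# All 8 products flowed through one "hardware" multiplier, yet each
# channel sees its own full-rate output stream.
```

At half or quarter speed the same interleaving works with two or four cycles per sample; below that, time-sharing stops being possible without dropping samples.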
 
bart wrote:

I need 1000 frequency "bins", where each bin is a discrete frequency.
As Thomas Womack pointed out above, it is better defined as an N-point
DFT with 1000 frequency bins, where N = 1024. For each sample, every
microsecond, there are 24 bits of data; let's call that x(n). During that
microsecond there must be 1000 MACs in parallel to calculate the N=1024
DFT. This would happen for 1024 samples to calculate the N-point DFT.
I hope that is a better description. Thanks for the input.



Bart, as others have pointed out, it sounds like you are doing a brute
force DFT. The FFT reduces the computations by exploiting symmetry
present in the evenly spaced bins. Most FFTs are done with a variation
of the Cooley-Tukey algorithm which factors DFTs with a power of 2
number of points by successively breaking the DFT into half sized DFTs
and combining the results with a phase rotation. Your post seems to
indicate that you are looking instead for a 1000 point transform. You
can either use a 1024 point FFT by padding the input data to fill out
the size and accepting the slightly smaller bin size, or if you need the
1000 point DFT, you can use some of the other FFT algorithms to arrive
at a 1000 point transform. Either way, you'll greatly reduce the number
of multiplies by using a Fast Fourier Transform instead of the DFT. The
Smith and Smith book (
http://www.amazon.com/exec/obidos/ASIN/0780310918/andraka/102-8981403-3626538
) provides a pretty good coverage of the various FFT algorithms that
you'd need for either approach. It is presented more from a software
perspective than from hardware, but nevertheless it provides a
comprehensive background to permit you to build a hardware
implementation that is far more efficient than what you are proposing.

The other point I should make is that you can use a process clock that
is faster than your sample clock, which I think you said is only 1 MHz.
Our FFT cores will run at over 300 MS/sec in current FPGA devices, and
they don't use anywhere near the 1000 multiplies you are looking at.
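To put numbers on the savings, here is a small pure-Python sketch (illustrative only, not any vendor core): a brute-force 1024-point DFT costs N² ≈ 1.05 million complex multiplies, while a radix-2 Cooley-Tukey FFT costs about (N/2)·log2(N) = 5120 twiddle multiplies. The code also checks that the two transforms agree on a small input.

```python
import cmath
import math

def dft(x):
    """Brute-force N-point DFT: N*N complex multiplies."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def fft(x):
    """Radix-2 Cooley-Tukey FFT: split into half-size DFTs and combine
    the results with a phase rotation (twiddle factor)."""
    N = len(x)
    if N == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * N
    for k in range(N // 2):
        t = cmath.exp(-2j * cmath.pi * k / N) * odd[k]  # one twiddle multiply
        out[k] = even[k] + t
        out[k + N // 2] = even[k] - t
    return out

# Multiply counts for N = 1024
N = 1024
direct_mults = N * N                         # brute-force DFT
fft_mults = (N // 2) * int(math.log2(N))     # radix-2 FFT twiddles

# Sanity check on a small input: both transforms agree
x = [complex(n % 5, 0) for n in range(16)]
a, b = dft(x), fft(x)
assert max(abs(a[k] - b[k]) for k in range(16)) < 1e-6
```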

--
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930 Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com

"They that give up essential liberty to obtain a little
temporary safety deserve neither liberty nor safety."
-Benjamin Franklin, 1759
 
Duane Clark ha scritto:

I modified a little the ddr_clocks reference design. I added that diff
to the same location as the other files. Notice that it is against an
EDK6.2 version of that file. Also, I found and fixed one bug in
read_data_path.vhd, though this only affects the external interface.

The bd_top.vhd file shows one example of how to connect everything. You
probably should run this simulation to make sure everything works, then
modify it to zero out the external interface and try it again.

I also added an example system.mhs file to show how they are connected
in a real system. And finally, an example system_top.vhd file, to show
the top level structure of how they connect to the pins.
Hi and thanks for the new files,
I worked on it again yesterday. It still doesn't run properly, but at
least now when I write a 16-bit value the system stalls (before, it
couldn't write anything from 32 bit to 8 bit: I always got back 0), so
it's clear that something has changed (although I don't know if for the
better :) ). I'll see if I can finally make it work!

Thank you for the great support!
 
"Subroto Datta" <sdatta@altera.com> writes:

Hi Subroto,

Chapter 9 of the MAX II handbook explains how to use the ALTUFM
megafunction to add UFM data with mif or hex file. The user has to
recompile if they want to change the hex file data, as this is how you
convert it to POF.

This can be found at
http://www.altera.com/literature/hb/max2/max2_mii51010.pdf
(page 9-34 thru 9-38.)
I read the databook some time ago. I tried to simulate the UFM
using $QUARTUS/eda/sim_lib/maxii_atoms.v, but this model does not seem
to use the MIF file at all. The verilog model is using defparams to
initialize the UFM contents during simulation and Quartus is using MIF
to initialize the UFM during implementation. I was somewhat confused
by this fact.

Anyway, what I would like to do is to program the UFM during
production. The MAX II is supposed to replace some older PLDs, a
couple of I2C PROMs and some other logic. The I2C PROMs are programmed
during production and I was hoping I could do the same for the MAX II
UFM.

Is there a way I can generate a SVF file for the UFM only?

If not, I'll have to install and run Quartus for each card at the
production site, pregenerate POF files for all serial numbers and
product variations, or make the UFM control signals available
externally so I can write some software to program the UFM at the
production site.


Petter
--
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
 
OK,

I have received three votes for the spreadsheet.

That isn't going to convince anyone!

I am sure there are more of you out there who would like the
spreadsheet, but perhaps you are just not inclined to email me?

Please don't clutter up the newsgroup, mail me directly at
austin@xilinx.com.

Austin

Ray Andraka wrote:

Austin Lesea wrote:

How many folks out there want to have the local spreadsheet version
for estimating?

I vote for a spreadsheet. Using the web thing to present power numbers
to a customer is a real PITA.
 
Antony wrote:
Hi and thanks for the new files,
I worked on it again yesterday. It still doesn't run properly, but at
least now when I write a 16-bit value the system stalls (before, it
couldn't write anything from 32 bit to 8 bit: I always got back 0), so
it's clear that something has changed (although I don't know if for the
better :) ). I'll see if I can finally make it work!
Well, I'll admit to only using it with 32-bit values. There may very well
be problems with other data widths, though I expect a fix would not be
terribly difficult. That is definitely something that would be a lot
easier to check in simulation rather than in hardware.
 
"Pete" <padudle@sandia.gov> wrote in message news:4278ffe3$1@news3.es.net...
Hello

I noticed the University of Queensland distribution of uClinux for the
Xilinx Microblaze soft processor core.

Does anyone know of an open source embedded linux distribution for the PPC
405 cores in V2Pro and V4?

Thank you,

Pete
From a few posts in the microblaze uclinux mail list
__________________________________________________________________________________
"We are working with the PPC on virtex2p. You do not need Monta Vista Linux.
All you need is denx eldk: http://www.denx.de/twiki/bin/view/DULG/ELDK
And the penguin ppc linux distribution:
http://www.penguinppc.org/kernel/#developers
(we are using the 2.4 Kernel)
To get started: http://www.klingauf.de/v2p/index.phtml might be helpful. The
PPC (ML300 board) is supported in the standard kernel.org kernel since
v2.6.10. Or
http://www.crhc.uiuc.edu/IMPACT/gsrc/hardwarelab/docs/kernel-HOWTO.html

I've already used these helpful references to get a Linux kernel running on
the PowerPC405 on the Digilent XUP-V2Pro board. I'm new to ucLinux but it
doesn't appear to be much more complicated than the ucLinux steps.
X running on the ML300 w/PPC? Yes - it was quite a painful build
process, but I got it running. Obviously MontaVista has, too."
__________________________________________________________________________________
Alex
 
Well, still need help if anyone has advice.

I may be able to get things going if I know what to enter for the
"Simulation Libraries Path" under the "Project Options" dialog.
Assuming I have BFM stuff installed and ready to go, I found the bfmsim
folder generated for my IP with the XPS project that is supposed to
allow me to test it out. Now, where should these paths (EDK library
and Xilinx library) lead?

Thanks...
Joey
 
Why not use Linux for the PPC? It's a standard port :)

Simon

"Alex Gibson" <news@alxx.net> wrote in message
news:3e32ceFvc1cU1@individual.net...
"Pete" <padudle@sandia.gov> wrote in message
news:4278ffe3$1@news3.es.net...
Hello

I noticed the University of Queensland distribution of uClinux for the
Xilinx Microblaze soft processor core.

Does anyone know of an open source embedded linux distribution for the
PPC
405 cores in V2Pro and V4?

Thank you,

Pete



[quoted reply snipped - see Alex's post above]
 
Chapter 6: Simulation Model Generator of Embedded System
Tools Reference Manual has all the info you need.
http://www.xilinx.com/ise/embedded/est_rm.pdf

Joseph wrote:
Still unsure about those paths... can anyone offer some guidance?
 
Great! Thanks Paul. I am using EDK 6.3, so I will peruse the
equivalent manual for that version (though at first glance, looks like
7.1 didn't change this section too much). I appreciate the response!
 
Hello Dave--

The AccelChip DSP Synthesis product uses MATLAB M-files as its input
language. MATLAB in general can support multirate systems that are
integral fractions of a base rate -- alternatively you could say that
MATLAB can be used to model multi-rate systems with the restriction
that the clocks must be synchronous and integer multiples of each
other.

This can be accomplished with AccelChip using the streaming loop coding
style as shown below. In this example the design function
"design_func1" runs at the clock frequency, "design_func2" runs
at 1/2 the clock frequency, and "design_func3" runs at 1/3 the clock
frequency.

for n = 1:NUM_ITER

    outdata3 = design_func1(indata);   % freq

    if mod(n,2) == 0
        outdata2 = design_func2(indata);   % freq / 2
    end;

    if mod(n,3) == 0
        outdata1 = design_func3(indata);   % freq / 3
    end;

end;
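The same rate-splitting structure, sketched in Python for anyone without MATLAB (the three functions are placeholders standing in for the design_func1/2/3 above):

```python
# Streaming loop: design_func1 runs every iteration (full rate),
# design_func2 every 2nd iteration (freq/2), design_func3 every
# 3rd iteration (freq/3). The functions are illustrative stubs.

calls = {"func1": 0, "func2": 0, "func3": 0}

def design_func1(x):
    calls["func1"] += 1
    return x

def design_func2(x):
    calls["func2"] += 1
    return x

def design_func3(x):
    calls["func3"] += 1
    return x

NUM_ITER = 12
for n in range(1, NUM_ITER + 1):
    outdata3 = design_func1(n)        # freq
    if n % 2 == 0:
        outdata2 = design_func2(n)    # freq / 2
    if n % 3 == 0:
        outdata1 = design_func3(n)    # freq / 3
```

Over 12 iterations the call counts come out 12, 6, and 4, which is exactly the synchronous integer-ratio clock relationship the text describes.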

With regard to Simulink -- it would be best to check with The MathWorks
but to my knowledge there is no need for any integral relationship
between rates in Simulink. If you have access to Simulink
documentation, just look for the section titled "Modeling and
Simulating Discrete Systems."

AccelChip can be used in combination with Xilinx System Generator for
DSP. Each subsystem can be synthesized with AccelChip and then
assembled in System Generator in order to achieve multiple rates.

Information regarding Xilinx's support for multiple clocks is described
at
http://www.xilinx.com/products/software/sysgen/app_docs/user_guide_Chapter_7_Section_2.htm


--Eric
 
Michael Dreschmann wrote:
Hello,

in my current design I'm using a few PicoBlazes. Now I wonder if it is
possible to update the code in the bitstream without a new
implementation run, like it is possible with the MicroBlaze. I checked
data2bram but it allows only an update of 16-bit wide BRAMs, not the
necessary 18 bit.

Thanks,
Michael
Check out Ken Chapman's reply to this in the PicoBlaze forum:

http://toolbox.xilinx.com/cgi-bin/forum?50@171.5RzZaQ3QgOp.0@.ee8a991
 
"dani" <user100@bluewin.ch> schrieb im Newsbeitrag
news:1116347285.812416.305220@g14g2000cwa.googlegroups.com...
Hallo all,

I'm trying to implement a Jam Byte-Code player using the source code (8051
Jam byte code player) provided by Altera. The code only supports
version 1 Jam byte-code, but the Quartus II tool generates version 2
Jam byte-code. What should I do?

Thanks a lot in advance.

Dani
There are even more 'variants' of the JAM/STAPL code, and even more issues.
Unfortunately, it seems that while Altera is still using JAM internally, it has
completely dropped any support of JAM for 3rd party developers. I guess they
were annoyed when the attempt to promote JAM as a JEDEC standard failed. So
Altera decided to "show off" and stopped publishing up-to-date versions of the
JAM tools.

So basically you are on your own, with no support from Altera. Grab the
latest they have for download (which is ages old, AFAIK) and start
updating the player to support whatever Quartus thinks the current JAM
should support. You may even have to reverse engineer Quartus-generated JAM
files along the way.

Of course, it all depends on what you need to do. I implemented a simple JTAG
bitstream loader for the AVR; the whole program occupies less than 200 code
words and uses a very simple bytecode player.

my 3 cents
Antti
 
Hi,
I saw your post about ncsim. I was used to modelsim and really liked it
but I have to use ncsim now. I wonder how you use it. You say the GUI
is very good, so I wonder how you launch it and run it. I assume you
launch it from the command line. What arguments are you using? It would
help me if you can simply give me a general idea of your flow.

Thank you very much,
David

John McGrath wrote:
I think the best commercial simulator is by far Cadence's ncsim. This
can support verilog or vhdl or both. I know it is relatively new to the
FPGA simulation world, but is supported in Xilinx's ISE now. I have
used it extensively for verilog HDL development, and found it extremely
fast, with good, intelligent syntax/error messages, and a fantastic GUI
(modelsim's gui really gets on my nerves!). It also allows features
such as tracing the source of an X on a net (schematically), which is
not one I have seen in other simulators.
I don't know if it is faster than modelsim (I've never compared them),
but it definitely feels slicker. As for feature complete - I'm guessing
you mean language coverage? - I don't know about VHDL, but I always code
in verilog-2001, and have never seen it unable to handle these "new"
constructs.
I've tried ModelSim, Virsim, Verilog-XL, ncsim (ncverilog), and without
doubt ncverilog wins. It does take a little getting used to, but it's
more than worth it!



gallen wrote:
I'm sure this kind of thing has come up in the past, but given that
things change, I'd like to throw this out there.

Which simulators do people like to use for their HDL purposes?

I have tried a couple of simulators and I was curious about people's
recommendations.

I have used Modelsim XE starter for my purposes (I am just a hobbyist
now), icarus verilog and GPL cver. I have used the built-in quartus
simulator as well.

So a couple of questions regarding these. Which simulators do people
consider feature complete? Why do I never hear about cver in this
group? Does nobody use it? If not, why? What's really wrong with
Modelsim? People seem fairly opposed to it. They say the error
messages are bad, but I certainly feel that icarus error messages are
worse.

Also, I haven't really discussed VHDL. Which are best for this? I've
heard GHDL is pretty good.

I've mostly discussed free simulators, but I'm also interested in how
expensive simulators compare to the free sims.

-Arlen
 
