EDK : FSL macros defined by Xilinx are wrong

Thank you for your reply. More details are listed here: the ADI chip works at a clock of 200 MHz, and the FPGA works at anywhere from 40 MHz up to 65 MHz. The FPGA writes and reads the internal registers of the ADI chip through a parallel port, which is the main job of the interface.

The ADI chip uses the WR# signal as the write data latch, and the RD# signal as the read latch.

Write timing graph:

 __________________
|     Address      |
|__________________|
|                  |
|---8ns---|              ______________
|         |             |     Data     |
|         |             |______________|
|         |                    |
|         | 3ns                |
|         |                    |
Write
   __________          _________
__|          |________|         |__
             |--7ns---|

I would like the maximum write frequency to be the same as the FPGA clock. So, if the FPGA works at 65 MHz, I must drive the Write signal with combinatorial logic formed from a synchronized enable signal and the FPGA main clock; in other words, Write and Read are gated clocks.
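
For illustration, the structure I mean is roughly the following (just a sketch; the signal names are placeholders):

// Roughly the current approach: WR# is formed in a LUT from the main
// clock and a synchronized enable, i.e. a gated clock.
module gated_wr (
    input  wire clk,     // FPGA main clock, 40-65 MHz
    input  wire wr_en,   // write enable, synchronized to clk
    output wire wr_n     // WR# to the ADI chip, active low
);
    assign wr_n = ~(clk & wr_en);  // combinatorial gating in one LUT4
endmodule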

The timing of Address and Data, which are synchronized to the FPGA clock, relative to the Write signal is troublesome. It varies with the physical design. And sometimes there are additional pulses on the Write signal, which come from the delay difference between the two inputs of the LUT4 (one is the clock, one is the control signal) varying greatly. Could I use any timing constraint to limit the delay difference to an acceptable level?

Thanks for your advice.
 
Thank you for your advice. I'll try to use a DLL to double the FPGA clock.
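
What I have in mind is roughly the following (only a sketch; the user signal names are placeholders, and CLKDLL is the Virtex/Spartan-II primitive - newer families would use a DCM instead):

// Doubling the system clock with a Xilinx CLKDLL.
module clk_doubler (
    input  wire clk_in,   // e.g. 65 MHz from the board
    input  wire reset,
    output wire clk_1x,   // deskewed 1x clock
    output wire clk_2x,   // 130 MHz, for strobe generation
    output wire locked
);
    wire clk0_raw, clk2x_raw;

    CLKDLL dll_i (
        .CLKIN (clk_in),
        .CLKFB (clk_1x),   // feedback from the global buffer
        .RST   (reset),
        .CLK0  (clk0_raw),
        .CLK90 (), .CLK180 (), .CLK270 (),
        .CLK2X (clk2x_raw),
        .CLKDV (),
        .LOCKED(locked)
    );

    BUFG bufg_1x (.I(clk0_raw),  .O(clk_1x));
    BUFG bufg_2x (.I(clk2x_raw), .O(clk_2x));
endmodule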

But another question: can I use any timing constraint to limit the delay difference between the two inputs of a LUT to an acceptable level? How?
 
In my application, the interface being designed is somewhat like a RAM interface. The FPGA generates the write signal, read signal, address signals, and data signals (when writing), and the ADI chip only drives the data signals (when reading). Reading data is not as fast as writing, up to 30 MHz.

Since I couldn't draw a timing diagram correctly, the timing requirements are:

When writing: Write is active low.
- minimum low time of the write signal: 2.5 ns
- minimum high time of the write signal: 7 ns
- address setup time to write active (falling edge): ns
- data setup time to write inactive (rising edge): 3 ns

When reading: Read is active low.
- maximum data delay after address: 15 ns
- maximum data delay after read active (falling edge): 15 ns
- minimum address hold time after read inactive (rising edge): 5 ns
- maximum data hold time after read inactive (rising edge): 10 ns
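
With the doubled clock mentioned before, one way to meet these numbers is a small state machine that asserts WR# for exactly one 2x-clock cycle (a sketch only; port names and widths are illustrative, and it covers the write side only):

// Write-cycle generator on the doubled clock (130 MHz, ~7.7 ns/cycle):
// WR# is low for one cycle (> 2.5 ns min low) and high for at least
// three cycles between writes (> 7 ns min high); the address gets one
// full cycle of setup before the falling edge, and data is held well
// past the rising edge.
module adi_write (
    input  wire       clk2x,    // doubled FPGA clock
    input  wire       start,    // pulse to launch one write
    input  wire [5:0] addr_in,
    input  wire [7:0] data_in,
    output reg  [5:0] addr,     // to the ADI parallel port
    output reg  [7:0] data,
    output reg        wr_n,     // WR#, active low
    output reg        busy
);
    localparam IDLE = 2'd0, SETUP = 2'd1, STROBE = 2'd2, HOLD = 2'd3;
    reg [1:0] state = IDLE;

    always @(posedge clk2x) begin
        case (state)
            IDLE: begin
                wr_n <= 1'b1;
                busy <= 1'b0;
                if (start) begin
                    addr  <= addr_in;  // address/data valid one cycle
                    data  <= data_in;  // before WR# falls
                    busy  <= 1'b1;
                    state <= SETUP;
                end
            end
            SETUP:  begin wr_n <= 1'b0; state <= STROBE; end // WR# falls
            STROBE: begin wr_n <= 1'b1; state <= HOLD;   end // WR# low one cycle
            HOLD:   begin state <= IDLE; end // address/data held after rising edge
        endcase
    end

    initial begin wr_n = 1'b1; busy = 1'b0; end
endmodule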

If it is still not clear, please refer to the AD9854 data sheet.

Thanks, all!
 
Markus Meng wrote:

Hi all,

we are facing a strange problem with our synchronized reset signal coming
from the ISA bus. It seems that some part of the logic is not
functioning correctly after reset release. However, I am not sure.

I would like to implement digital debounce logic for this reset
signal, and for this reason I would like to have part of the
logic reset ONCE after power-up and configuration. Is there
a way to connect to this internal power-up reset signal, or shall I
leave the reset connection of such a debounce block permanently negated
by connecting it to a constant '0', for an active-'1' reset?
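
What I have in mind is something like the following (a sketch only; all names and the counter width are invented). It relies purely on the flip-flops' configuration-time INIT values - which the global GSR applies once after power-up and configuration - so the block has no reset input at all:

// Reset debouncer whose only "reset" is the configuration-time
// initialization of its registers.
module reset_debounce (
    input  wire clk,
    input  wire noisy_rst,   // raw reset from the ISA bus
    output reg  clean_rst    // debounced reset for the rest of the logic
);
    // All of these power up as zero after configuration (GSR).
    reg [3:0] count = 4'd0;
    reg       sync0 = 1'b0, sync1 = 1'b0;

    initial clean_rst = 1'b0;

    always @(posedge clk) begin
        // two-stage synchronizer for the bus signal
        sync0 <= noisy_rst;
        sync1 <= sync0;

        // require 16 consecutive samples that differ from the current
        // output before propagating the change
        if (sync1 == clean_rst)
            count <= 4'd0;
        else if (&count)
            clean_rst <= sync1;
        else
            count <= count + 4'd1;
    end
endmodule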

Best Regards
Markus



Coming one step further, we now have the following situation. A very
similar Spartan-II design is working perfectly in a clean 3.3 V ISA
bus environment. Several thousand test runs using the synchronized
reset logic never produced an error. The card works as expected.

The very same card in an old-fashioned 5 V ISA bus system with some
overshoot on the signals crashes from time to time after RESET. Once
this crash occurs, there is NO WAY to reset the FPGA again. It remains
"kind of dead". Only powering off and on again can 'solve' this
deadlock situation...

The configuration is loaded from an external PROM only once after
power-up. The ISA bus reset does not reload the bitstream but resets
the internal FFs to their initial state ...

Has somebody else seen this kind of strange behavior, where the FPGA
cannot be reset anymore by subsequent resets? It remains in this
state until power-off.

Is this a 5 V compatibility issue of the Spartan-II?

Best Regards
Markus

Hi all,

to complete what I started: it has NOT been a Spartan-II I/O issue
concerning 5 V compatibility. The Spartan-II works perfectly in our
legacy ISA bus environment.

Just in case this comes up somewhere else.

Have a nice Day
Markus
 
Hi Ben,

thank you very much for your measurements. I would say that the TCK
signals on my board look identical.

However, I think these signals look a bit strange because of the
drop from 3.3 V to 2 V. To me it seems that the level converter in the
cable is poorly designed.

I have sent the waveforms I measured to Altera. As soon as
I get an answer from them, I will post it here in the newsgroup.

The solution for me at the moment is using the ByteBlaster MV instead of
the USB Blaster :-(

I can't check the cable with the Nios board because I have only got
the schematics and not the board.

Best regards

Markus


Does this look familiar or is it way off?

Also, if you have an Altera NIOS board lying around, could you try to
compare the waveforms between your USB Blaster and the diagram attached?

If they are way off, you may indeed have a problem with the USB Blaster.

Best regards,


Ben


 
Austin,

I am probably not planning on using any of the DSP48 blocks, but I
noticed the same for them (no change over temperature). Apologies for
the skepticism, but it seems strange to me that I could load up my
device with DCMs, DSP48s, PPC405s and measure the power at room temp
and at high temp and have the difference between those two values be
the same difference I would get if I were using NONE of those resources
in the first place.

Anyway, is there a spreadsheet for power calculations that I can get
that would allow me to do this analysis offline?

Thanks,

JD
JDDC
 
Hi John,

the clock is actually 360 MHz divided by 7. It's an asynchronous clock
like this "--___--". However, the data goes at 360 MHz SDR. I have to use
SDR, because the LCD doesn't support DDR.

regards,
Benjamin
 
JD,

The only difference for power at high temp versus low temp is
leakage current. All of these blocks (DCM, DSP48, PPC405) are
in the silicon, and whether or not you are actively using them,
they are sitting there and leaking current from the VCCINT
supply.

Adding these into the power tool as a "used" element will change
the dynamic portion of the power consumption for the device, but
it won't change the leakage current component, as this is a function
of the device that you selected, which includes leakage from
everything that is in the silicon.

Ed

JD_Design wrote:
Austin,

I am probably not planning on using any of the DSP48 blocks, but I
noticed the same for them (no change over temperature). Apologies for
the skepticism, but it seems strange to me that I could load up my
device with DCMs, DSP48s, PPC405s and measure the power at room temp
and at high temp and have the difference between those two values be
the same difference I would get if I were using NONE of those resources
in the first place.

Anyway, is there a spreadsheet for power calculations that I can get
that would allow me to do this analysis offline?

Thanks,

JD
JDDC
 
The LCD has to support a data stream with a divide-by-7 reference clock.
Good info.

Using the DDR registers you can have an effective divide-by-3.5 applied to
your 360 MHz clock and DDR generated from the same 360 MHz clock. This
looks from the outside as if it's a 720 MHz clock providing the divide-by-7
and the data running at 720 Mbits/s/pin just as if it were clocked with a
720 MHz clock.

Your LCD doesn't have to understand DDR to receive 720 Mbit/s data streams
and a 102.86 MHz clock.
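
As a sketch of the clock side (names invented; ODDR is the Virtex-4 primitive name, older families use FDDRRSE instead), a 14-bit circular pattern shifted out two bits per 360 MHz cycle gives exactly two 102.86 MHz periods, each seven bit-slots at 720 Mbit/s:

// Emit a 720 MHz / 7 = 102.86 MHz "clock" as a 720 Mbit/s DDR pattern.
// Each output period is 7 slots; since 7 is odd, two periods (14 bits)
// are needed before the pattern repeats relative to the 360 MHz clock.
module lvds_clk_div7 (
    input  wire clk360,   // 360 MHz fabric clock
    output wire lvds_clk  // behaves like 720 MHz divided by 7
);
    // two periods of "3 low, 4 high"; edit the pattern to tune the duty
    reg [13:0] pattern = 14'b1111000_1111000;

    always @(posedge clk360)
        pattern <= {pattern[1:0], pattern[13:2]};  // rotate right by two

    ODDR oddr_i (
        .Q  (lvds_clk),
        .C  (clk360),
        .CE (1'b1),
        .D1 (pattern[0]),  // driven during the first half-cycle
        .D2 (pattern[1]),  // driven during the second half-cycle
        .R  (1'b0),
        .S  (1'b0)
    );
endmodule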


"Benjamin Menküc" <benjamin@menkuec.de> wrote in message
news:d5dar6$nms$00$2@news.t-online.com...
Hi John,

the clock is actually 360 MHz divided by 7. It's an asynchronous clock
like this "--___--". However, the data goes at 360 MHz SDR. I have to use
SDR, because the LCD doesn't support DDR.

regards,
Benjamin
 
Hi John,

Using the DDR registers you can have an effective divide-by-3.5 applied to
your 360 MHz clock and DDR generated from the same 360 MHz clock. This
looks from the outside as if it's a 720 MHz clock providing the divide-by-7
and the data running at 720 Mbits/s/pin just as if it were clocked with a
720 MHz clock.

Your LCD doesn't have to understand DDR to receive 720 Mbit/s data streams
and a 102.86 MHz clock.
the max. pixel clock of the LCD is 80 MHz; that would be an LVDS clock of
560 MHz. Maybe when everything works (the DVI link is the next challenge
now), I will try to tune the LCD up to 80 MHz using your DDR methodology.

Thanks for the information.

regards,
Benjamin
 
JD,

As I have said before (and also explained it to you), the differences
are swamped by the margin for process.

As for a local version of the predictor, we no longer support the
downloadable Excel spreadsheet version. We are reconsidering that decision.
Initially, there was only a spreadsheet for download. Then we had both
the spreadsheet and the web-based version.

How many folks out there want to have the local spreadsheet version for
estimating?

Austin

JD_Design wrote:

Austin,

I am probably not planning on using any of the DSP48 blocks, but I
noticed the same for them (no change over temperature). Apologies for
the skepticism, but it seems strange to me that I could load up my
device with DCMs, DSP48s, PPC405s and measure the power at room temp
and at high temp and have the difference between those two values be
the same difference I would get if I were using NONE of those resources
in the first place.

Anyway, is there a spreadsheet for power calculations that I can get
that would allow me to do this analysis offline?

Thanks,

JD
JDDC
 
NET "YOU_CRAZY_MADMAN" MAXSKEW=100ps;
"Wenjun Fu" <fwj@nmrs.ac.cn> wrote in message
news:ee8dffe.9@webx.sUN8CHnE...
Thank you for your advice. I'll try to use a DLL to double the FPGA clock.

But another question: can I use any timing constraint to limit the delay
difference between the two inputs of a LUT to an acceptable level? How?
 
Thanks for the reply, digi.

The "BFM" acronym you gave reminded me that I had this package
installed. When creating the custom peripheral, there was an option to
generate simulation files (I think), and this prompted me to download
edk_bfm_6_3.exe. I ran this, and I suppose I am ready to run these BFM
simulations. I found a .pdf on the Xilinx site about BFM simulations,
but it isn't exhaustive (10 pages). Assuming I have this piece of custom
IP (built with the wizard to act as a slave on the PLB bus), do I start
a new project in XPS to test it out? I see the BFM modules under the
"add/edit cores" dialog but don't know how to go about using them. Do
I do something like drop a PLB bus and hook my IP to it along with one
of these BFM modules? If so, how do I get anything useful to play with
in ModelSim? Any further advice is appreciated!
 
Thanks, Mike.

Now if only there were a way to turn off the DCache register in C rather
than in assembly.....


"Mike Lewis" <someone@micrsoft.com> wrote in message
news:p5qdneneUqv2TOXfRVn-gQ@magma.ca...
You have that area of memory mapped as cached ... you are seeing a cache
line burst for the first access and nothing after that because it is
manipulating the cache memory ... turn off the cache for this memory region.

Mike

"bta3" <bta3@iname.com> wrote in message
news:4JTde.11307$o32.1391@fe09.lga...
Hi,
I seem to have a problem talking to a MAC chip that is connected as a
memory-mapped device on the EBI bus of an EPXA1-672 chip (EBI2 for CS,
no split reads and no prefetch). EBI1 is connected to a flash chip. I am
using the GNU toolset to develop code and no OS (as yet).

Apparently, if I read a single register (in my code), a series of 16
read accesses are made by the chip and cached. Subsequent reads do not
access the MAC, but rather return values from the cache - I do not see
any CS transitions at the chip pins. The write operations function
perfectly well if I do not perform a read - one CS for every write
request. Once a read is performed, the writes also cease to be
"executed" and change the register value in the cache only.

Has anyone seen a similar problem? The Altera folks have not responded
to my trouble tickets - their support is not what it used to be.

Thanks, bta3
 
Austin Lesea wrote:

How many folks out there want to have the local spreadsheet version
for estimating?

I vote for a spreadsheet. Using the web thing to present power numbers
to a customer is a real PITA.

--
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930 Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com

"They that give up essential liberty to obtain a little
temporary safety deserve neither liberty nor safety."
-Benjamin Franklin, 1759
 
Benjamin Menküc wrote:

Hi,

after I set the optimization effort to "high", I can get the whole
270 MHz now (that's all the DCM can generate out of 100 MHz).

Any comments on how to make my design better are still welcome, though :)

regards,
Benjamin
Yup,

use DDR clocking of your IOBs and double the internal path width. Just a
suggestion without studying your code.
This should give more than 600 MHz.
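
As a sketch of what I mean, per output pin (names invented, not based on your code; ODDR is the Virtex-4 primitive name, older families use FDDRRSE):

// The fabric runs at the 1x clock with a two-bit-wide path; the DDR
// output register serializes the pair, doubling the bit rate per pin.
module ddr_out_bit (
    input  wire clk,       // fabric clock
    input  wire bit_even,  // first bit of the pair
    input  wire bit_odd,   // second bit of the pair
    output wire pad        // toggles at twice the fabric clock rate
);
    ODDR oddr_i (
        .Q (pad), .C (clk), .CE (1'b1),
        .D1(bit_even),  // output on the rising edge
        .D2(bit_odd),   // output on the falling edge
        .R (1'b0), .S (1'b0)
    );
endmodule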

Regards,
Thomas
 
