EDK : FSL macros defined by Xilinx are wrong

Bob,

The startup clock problem didn't occur to me because I am actually
programming a Platform Flash rather than the FPGA, so CCLK is the correct
clock in my case...

Looking at the Properties dialog for the "Generate Programming File"
process, I see "FPGA Start-Up Clock" under "Startup Options", which
is indeed set to CCLK - it seems this is the default setting; is that
true? So you are suggesting I change this to "JTAG Clock", right?
That's what you have to do.

And just in case, where do I "check the startup options to check how
many clocks you need"? Is this in the data sheet?
Under the same "Start Options" you can see Done (output Events) set to
probably 4 and Release Write Enable set to probably 6. The last number I
believe is the number of extra clocks required after the bitstream has been
shifted in the chip. However, I don't think this is your problem as I am
sure iMPACT adds all the required cycles to your xsvf file. I am pretty sure
your problem is with the startup clock as Gabor noticed.
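
For reference, the same settings can also be applied from the command line
when regenerating the bitstream. A rough sketch, assuming the standard ISE
bitgen switches (StartupClk, DONE_cycle, GWE_cycle) and with
system.ncd/system.bit/system.pcf standing in for your own file names - check
the bitgen documentation for your tool version:

  bitgen -g StartupClk:JtagClk -g DONE_cycle:4 -g GWE_cycle:6 system.ncd system.bit system.pcf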


/Mikhail
 
"Bob" <rsg.uClinux@gmail.com> wrote in message
news:87f1d4cf-1a31-4218-8c60-7da009716ee1@j22g2000hsf.googlegroups.com...
Double argh! This still doesn't work. I set the JTAG clock and it
still doesn't work!

Any _other_ ideas?
I guess Antti is right... Anyway, which version of the tools are you using?
What is your microcontroller? Do you have any external pullups? Do you
disconnect the cable when running your player? There are a number of Answer
Records on the Xilinx site that might be relevant to your problem, e.g.
http://www.xilinx.com/support/answers/22255.htm

/Mikhail
 
austin <austin@xilinx.com> wrote:

Alain,

Yes, that is a neat feature (allowing a set or reset with small overhead
while still using the SRL); however, I have seen a thread where it isn't
working as intended in some cases. There is a CR filed on it, so we
will see if it is a real bug, or a weird corner case (which still needs
to get fixed).

Automatic use of SRL16 (and/or SRL64 in V5) by the synthesis tool is not
consistent, as third-party tools don't always make the many cases where
the SRL is advantageous a priority. XST pioneers the use, so we may
then show third parties how we did it (and give it to them).
Which version of XST supports this? I always infer SRL16 (and similar)
from primitives out of laziness.

--
Programmeren in Almere?
E-mail naar nico@nctdevpuntnl (punt=.)
 
On 2008-05-04, Brian Drummond <brian_drummond@btconnect.com> wrote:
The effort spent generating optimal code for such a DSP engine is probably
better spent generating optimal hardware to realize the same function; you are
likely to gain a lot in performance.

(I realize there are probably exceptions to this principle, but my perception is
they are relatively small niches).
In principle I agree with you, but in practice I can see quite a few advantages
to an FPGA-optimized DSP processor. Consider, for example, a video conferencing
application with DVD-quality video and audio. You probably want to have custom
hardware for most of the video encoding and perhaps decoding as well. However,
I don't think there is any need for custom hardware for audio encoding/decoding.
But it might make sense to use a specialized DSP processor for that encoding and
decoding so that more CPU time is available for other parts of the system (part
of the video decoder for example).

So in this case, instead of making custom hardware for both video encoding/
decoding and audio encoding/decoding, a generic DSP processor block can be used
for many tasks. This will most probably reduce the total logic area and it will
certainly reduce the design time of the application.

/Andreas
 
Hi,

It's hard to tell with so little information.

Did MicroBlaze get out of reset in the simulation?
Did MicroBlaze start to execute code from your application?

If MicroBlaze gets to the instruction that writes data to FSL, the only
thing stopping it is if the fsl_full flag is asserted.
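
For illustration, here is a minimal sketch of what such a blocking write
looks like on the software side. This assumes the macro name used in Chris's
code below (the exact FSL macro names differ between EDK versions, so check
your mb_interface.h) and the Xuint32 type from xbasic_types.h:

#include "xbasic_types.h"
#include "mb_interface.h"

// Blocking write of one word to FSL link 0. The bwrite macro stalls the
// MicroBlaze pipeline for as long as the full flag on the master FSL port
// (the fsl_full flag mentioned above) stays asserted, so the write strobe
// only pulses in simulation once there is room in the FSL FIFO.
void write_word_to_fsl0(Xuint32 data)
{
    microblaze_bwrite_fsl(data, 0);
}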

Göran

<chrisdekoh@gmail.com> wrote in message
news:9ae5b725-6219-4ac9-b5ff-af7ce4d50df8@w1g2000prd.googlegroups.com...
Hi

1) I am having some EDK simulation problems. I am using EDK 9.2i with
MicroBlaze 7.
I have attached a peripheral to the FSL bus using EDK's Configure
Coprocessor and written the corresponding drivers for the peripheral,
which include commands like the one below, i.e.:

#include "mb_interface.h"

....
microblaze_bwrite_fsl(data,0);



I tried generating the simulation libraries to test my drivers
interfacing with the attached peripheral. I created a testbench,
system_tb, which instantiates system.vhd. In addition, I also
added the following lines:

configuration system_tb_conf of system_tb is
  for behavioral  -- replace "behavioral" with the architecture name of system_tb
    for all : system
      use configuration work.system_conf;
    end for;
  end for;
end system_tb_conf;

to ensure that the BRAM contents initialised by data2mem are picked up
correctly. I then run the simulation with the command:

vsim -t ps system_tb_conf

I have also ensured that MicroBlaze and its peripherals come out of
reset correctly.

However, when I probe the FSL bus from the MicroBlaze processor, only
FSL0_M_data carries data. The write signal FSL0_M_write
is never asserted. In addition, none of the other signals from the MicroBlaze
driving the FSL bus are asserted.

Did I miss anything? I have set up EDK for simulation before,
with EDK 9.1i, and did exactly the same things to get the
simulation up and running. However, it just does not seem to work in
this case.

The system I am using is the default base system for the xsb provided by
Avnet, with the exception of the FSL buses, which are required to link to
the peripheral. I am using FSL link 0.



Thanks in advance for your help!

Chris
 
"Bob" <rsg.uClinux@gmail.com> wrote in message
news:286aab22-ec7d-4c46-b232-049bd306e451@a23g2000hsc.googlegroups.com...
On May 2, 11:34 pm, "MM" <mb...@yahoo.com> wrote:

Do you have any external pullups?
No.
I am not sure if this could be your problem, but... normally JTAG signals
require pullups. Bitgen can (and probably does) enable them internally, but
personally I've never relied upon them and always designed in external
resistors... The behaviour with iMPACT might be different because there
might be some pullups inside of the programming cable and/or simply the
drivers/receivers are different.


/Mikhail
 
"Bob" <rsg.uClinux@gmail.com> wrote in message
news:286aab22-ec7d-4c46-b232-049bd306e451@a23g2000hsc.googlegroups.com...
On May 2, 11:34 pm, "MM" <mb...@yahoo.com> wrote:

Do you have any external pullups?
No.
I am not sure if this could be your problem, but... normally JTAG signals
require pullups. Bitgen can (and probably does) enable them internally, but
personally I've never relied upon them and always designed in external
resistors... The behaviour with iMPACT might be different because there
might be some pullups inside of the programming cable and/or simply the
drivers/receivers are different.


/Mikhail
 
Hi,

The easiest way to find out what is happening is to disassemble the program.
Just run "mb-objdump -S" on the .elf file.

Göran

<chrisdekoh@gmail.com> wrote in message
news:6a6c57ff-bc59-4477-83e3-c85ac0e6664d@a9g2000prl.googlegroups.com...
Hi Goran,
The FSL_Full flag is not asserted. Also, the MicroBlaze came out of
reset. I know because I probed the address and data bus signals, and there is
activity on the bus in the ModelSim simulator, in contrast to when the
MicroBlaze is still held in reset.

Anyway, I found something else. This is what I wrote in my
firmware code running on MicroBlaze:

#include "float.h"
#include "mb_interface.h"


typedef unsigned long long uint_64; //64 bits wide
typedef union {
    uint_64 long_t;
    double double_t;
} Union_double_t;


int main(){
    Union_double_t a;
    Xuint32 temp;
    a.double_t = 3.0;

    //extract the lower word to put into the peripheral
    temp = (Xuint32) a.long_t & 0xffffffff;
    microblaze_bwrite_fsl(temp,0);
    //extract the upper word to put into the peripheral
    temp = ((Xuint32) a.long_t >>32) | 0xffffffff;
    microblaze_bwrite_fsl(temp,0);
    return 1;
}

The code above does not work. In short, when I try to send a double-
precision word onto the FSL bus in the manner described above, by
breaking it into the lower word and the upper word, it fails to work.

However, a single-precision word sent in exactly the same way
works just fine.

any idea? :)
Chris
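
For what it's worth, here is a minimal sketch of the 64-bit split described
above, assuming the blocking-write macro name from Chris's code
(microblaze_bwrite_fsl), the Xuint32 type from xbasic_types.h, and the same
low-word-first ordering. The key differences from the snippet above are that
the shift is applied to the full 64-bit value before truncating to 32 bits,
and the upper word is masked rather than ORed with 0xffffffff:

#include "xbasic_types.h"
#include "mb_interface.h"

typedef unsigned long long uint_64; //64 bits wide

typedef union {
    uint_64 long_t;
    double double_t;
} Union_double_t;

// Send one double over the 32-bit FSL link 0 as two blocking writes.
void send_double_over_fsl0(double value)
{
    Union_double_t u;
    Xuint32 lo, hi;

    u.double_t = value;

    lo = (Xuint32)(u.long_t & 0xffffffffULL); // lower 32 bits
    hi = (Xuint32)(u.long_t >> 32);           // upper 32 bits

    microblaze_bwrite_fsl(lo, 0); // lower word first, as in the code above
    microblaze_bwrite_fsl(hi, 0); // then the upper word
}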
 
"Bob" <rsg.uClinux@gmail.com> wrote in message
news:736a625f-4421-4309-a98f-4e645f2b1d7f@t54g2000hsg.googlegroups.com...
Okay, but I don't know how to do this. Where do I "pick the code"
that you refer to?
Martin is talking about the source code for the microcontroller...


/Mikhail
 
"Bob" <rsg.uClinux@gmail.com> wrote in message
news:736a625f-4421-4309-a98f-4e645f2b1d7f@t54g2000hsg.googlegroups.com...
Okay, but I don't know how to do this. Where do I "pick the code"
that you refer to?
Martin is talking about the source code for the microcontroller...


/Mikhail
 
On 2008-05-06, jraj.thakkar@gmail.com <jraj.thakkar@gmail.com> wrote:
Hi all,

My background is in software engineering: C, C++, Java, and Unix. I am
getting started with VHDL and Verilog. What is a good way (books/
websites/training) to get started? I have a B.S. and M.S. in Computer
Engineering. Also, what is the learning curve for VHDL and Verilog?
Have you ever taken a course in digital hardware? If not you should
probably read a little bit about that before doing anything else.
Unfortunately I don't really know of good books in English in this
area because we are mainly teaching these subjects in Swedish.

Once you know a little bit about digital hardware you can draw a
little schematic and translate it into VHDL or Verilog. The learning
curve of VHDL and Verilog is actually quite low _if_ you know what
hardware you are planning to design.

May I ask why you are interested in learning about VHDL or Verilog?
Do you have a particular project in mind? Hobby or professional
interest?

/Andreas
 
climber.tim@gmail.com wrote:
It can be Xilinx Spartan, but the board should contain 4-6 or 8 Spartan
chips with roughly the same logic capacity.
Um, what would be wrong with 4 Spartan starter kits and some duct tape?

G.
 
climber.tim@gmail.com wrote:
It is most cost-optimal for crypto-tasks, if I'm correct, of course.
Like it was done there:
http://www.copacobana.org/faq.html
Ah, ok then, 120 Spartan starter kits and a bigger roll of duct tape :)

For something so massively parallel and where each FPGA presumably
spends almost all of its time operating independently, I would think
some cheap off-the-shelf single-FPGA module in quantity might actually
be the easiest way to go. Probably not a problem to hook up 120 USB
devices to a PC to control them or whatever. Might get a little warm
though.

G.
 
climber.tim@gmail.com wrote:
It is most cost-optimal for crypto-tasks, if I'm correct, of course.
Like it was done there: http://www.copacobana.org/faq.html
Just a couple more thoughts on this (I'm not a crypto person but hey,
this is Usenet):

I'm not sure how interesting having such a device (as described at
that URL) would actually be. They talk about being able to crack
symmetric cyphers with "roughly" 64-bit keys. Well, for standard
DES, "roughly" is only 56 bits as I recall, so when they say the
average time to break a DES key was 6.4 days, that means a real
64-bit cypher would take an average of 4.5 years, with a worst case
of 9 years, at the same speed.
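
As a quick check on that scaling (plain arithmetic, taking the 6.4-day
average figure from the FAQ at face value):

  2^(64-56) = 256 times as many keys
  6.4 days x 256 = 1638.4 days, i.e. roughly 4.5 years on average and
  about 9 years worst case.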

Cracking plain DES is a clever demonstration of parallel computing,
but these days it just isn't all that interesting any more, I think.

The 120 FPGA device described is unlikely to be good for anything
other than brute-force parallel tasks, and in the crypto world I
don't know that there are many other interesting things you can
do that are of similar complexity to cruddy old single DES.

It seems to me (again, not being a crypto guy) that all these cracking
machines may be somewhat uninteresting for many real-world
applications because they must be known-plaintext attacks: they
rely not only on being able to do fast decrypt operations to
test each possible key, but also on being able to determine whether
the key was correct or not in a similarly short period of time.

This might not be so easy to do, depending on how much knowledge you
have about the plaintext of the message you're trying to decrypt.

Anyhow, I can't really think of anything interesting to do with a
device such as the one you're asking about.

G.
 
Bob,

Congratulations! I am curious whether the second method will work... I think
there is a good chance that it will...


/Mikhail
 
"Bob" <rsg.uClinux@gmail.com> wrote in message
news:c8e6fe8d-6a36-4cb7-81f3-a665207b9208@b64g2000hsa.googlegroups.com...
Anyway, I was going to say I don't have time to try, but since you've
been so good to me, I figure I could return the favor. Yes, the
second method does indeed work! It seems to suggest there are two
separate uses of this function: one for which the FPGA requires pulses, for
whatever reason, and the other just time (erasing flash?)...
Thanks Bob. There was no need to return the favor, but I think this was time
not wasted :) And, yes, Spartan-3 is based on the Virtex-II architecture.

/Mikhail
 
