EDK : FSL macros defined by Xilinx are wrong

"TonyF" <not@valid.address> wrote in message
news:rl1Sd.1774$%F6.772@newsfe4-gui.ntli.net...
newman5382 wrote:

There is a school of thought that all off chip IO should be
inferred/instantiated at the top level, and not in sub-modules.


In the end, everything is flattened and becomes top-level, but in your HDL
code it is useful to have sub-modules for clarity, code maintenance and
reusability. It should be obvious, or possible to tell a synthesis tool,
that the inout port in your sub-module really is an external port.

TonyF

It is not my HDL code.

Lots of things are judgement calls, and different people will choose
differently. If I look at regular HDL (non-EDK) targeted code, if I see
that all the primary I/O are defined in the top level, and not buried at
some unknown level of the hierarchy, it gives me a warm fuzzy that the other
person made some effort for other people to understand the flow of the
design.

As far as your complaint about the XST synthesis tool, since I own a bunch
of Synplicity stock, I think it would be best for me to not address that
issue.

-Newman
 
Actually what I gave you is like a state machine, and the 2 MSBs of the
counter are the state of the machine.


"fpgawizz" <bhaskarstays@yahoo.com> a écrit dans le message de news:
57900d3bbffa517d7810a22c49caced3@localhost.talkaboutelectronicequipment.com...
Thanks KCL. I used a state machine to model that part of the design. Seems
like it's working. It's only a piece of a bigger part. I am trying to have
this display module be one of the modules for a VHDL vending machine. Do
you know of any materials on the internet that can help me design this
vending machine? It has the following features (a rough sketch follows the list):
1) 5 product prices - 55/60/65/70/75c
2) 3 different coin inputs - 25c/10c/5c
3) Need to display the product price and the amount entered via the 3 coin
inputs.
4) When the value of the product selected is reached, it should be dispensed
and any change displayed.
5) The system should reset after this, and also support an asynchronous reset.
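
As a starting point only, here is a rough VHDL sketch of a controller for
these requirements (every name, width and the one-pulse-per-coin interface
are my assumptions; product selection and the display itself are left out):

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity vending_ctrl is
  port (
    clk, arst : in  std_logic;               -- clock, asynchronous reset (feature 5)
    coin_25   : in  std_logic;               -- one-clock pulse per 25c coin
    coin_10   : in  std_logic;
    coin_5    : in  std_logic;
    price     : in  unsigned(6 downto 0);    -- selected product price in cents (55..75)
    dispense  : out std_logic;               -- one-clock pulse when product is vended
    change    : out unsigned(6 downto 0)     -- change to display after vending
  );
end entity;

architecture rtl of vending_ctrl is
  type state_t is (IDLE, ACCUM, VEND);
  signal state  : state_t;
  signal credit : unsigned(6 downto 0);      -- running total of coins entered
begin
  process (clk, arst)
  begin
    if arst = '1' then                       -- asynchronous reset
      state    <= IDLE;
      credit   <= (others => '0');
      dispense <= '0';
      change   <= (others => '0');
    elsif rising_edge(clk) then
      dispense <= '0';
      case state is
        when IDLE =>                         -- clear credit, wait for coins
          credit <= (others => '0');
          state  <= ACCUM;
        when ACCUM =>                        -- add up the 3 coin types (feature 2)
          if    coin_25 = '1' then credit <= credit + 25;
          elsif coin_10 = '1' then credit <= credit + 10;
          elsif coin_5  = '1' then credit <= credit + 5;
          end if;
          if credit >= price then
            state <= VEND;
          end if;
        when VEND =>                         -- dispense and show change (feature 4)
          dispense <= '1';
          change   <= credit - price;
          state    <= IDLE;                  -- self-reset after vending (feature 5)
      end case;
    end if;
  end process;
end architecture;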
 
newman5382 wrote:

It is not my HDL code.

Lots of things are judgement calls, and different people will choose
differently. If I look at regular HDL (non-EDK) targeted code, if I see
that all the primary I/O are defined in the top level, and not buried at
some unknown level of the hierarchy, it gives me a warm fuzzy that the other
person made some effort for other people to understand the flow of the
design.
The designer's HDL code should not target such a low level. An inout is
just that, an inout; not much to understand. In Xilinx FPGAs this will
be inferred as an IOBUF that will provide the *_I, *_O and *_T ports.
With other vendors or for an ASIC it might be inferred as something else
(though equivalent).
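
For illustration only, a minimal VHDL sketch of that idiom (the entity and
signal names are invented here): a plain inout port written with the usual
'Z' tristate pattern, which the synthesis tool maps onto an IOBUF once the
port reaches an external pin, with the three internal signals playing the
_I/_O/_T roles.

library ieee;
use ieee.std_logic_1164.all;

entity bidir_pin is
  port (
    pad     : inout std_logic;   -- the bidirectional pin itself
    drive_o : in    std_logic;   -- value to drive onto the pad (the _O role)
    drive_t : in    std_logic;   -- '1' = release the pad, high-Z (the _T role)
    sense_i : out   std_logic    -- value seen on the pad (the _I role)
  );
end entity;

architecture rtl of bidir_pin is
begin
  pad     <= drive_o when drive_t = '0' else 'Z';  -- output/tristate half of the buffer
  sense_i <= pad;                                  -- input half of the buffer
end architecture;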

As far as your complaint about the XST synthesis tool, since I own a
bunch
of Synplicity stock, I think it would be best for me to not address that
issue.
I did try my code with Synplify (outside EDK) and I didn't have this
problem. Nevertheless I still think EDK/ISE is a nice tool for project
management/implementation and great value for money.

TonyF
 
"newman5382" <newman5382@yahoo.com> wrote in message
news:GP1Sd.96890$qB6.89122@tornado.tampabay.rr.com...
"TonyF" <not@valid.address> wrote in message
news:rl1Sd.1774$%F6.772@newsfe4-gui.ntli.net...
newman5382 wrote:

There is a school of thought that all off chip IO should be
inferred/instantiated at the top level, and not in sub-modules.


In the end, everything is flattened and becomes top-level, but in your
HDL code it is useful to have sub-modules for clarity, code maintenance
and reusability. It should be obvious, or possible to tell a synthesis
tool, that the inout port in your sub-module really is an external port.

TonyF


It is not my HDL code.

Lots of things are judgement calls, and different people will choose
differently. If I look at regular HDL (non-EDK) targeted code, if I see
that all the primary I/O are defined in the top level, and not buried at
some unknown level of the hierarchy, it gives me a warm fuzzy that the
other person made some effort for other people to understand the flow of
the design.

As far as your complaint about the XST synthesis tool, since I own a bunch
of Synplicity stock, I think it would be best for me to not address that
issue.

-Newman
TonyF,
I looked at the code section in question. It appeared to be two IO lines,
SDA and SCL, that were broken out into input, output, and tristate control. I
did an I2C design a while back, and I found it convenient to break out the
signals in a similar manner.
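
For illustration, a sketch of that break-out style in VHDL (the entity and
port names are invented; only the port split and the open-drain idea matter).
The core never drives a '1'; it only pulls the line low or releases it via
the _T signal:

library ieee;
use ieee.std_logic_1164.all;

entity i2c_pins is
  port (
    clk, rst : in  std_logic;
    -- SDA broken out into input, output and tristate control ('1' = release)
    sda_I    : in  std_logic;
    sda_O    : out std_logic;
    sda_T    : out std_logic;
    -- SCL broken out the same way
    scl_I    : in  std_logic;
    scl_O    : out std_logic;
    scl_T    : out std_logic
  );
end entity;

architecture rtl of i2c_pins is
begin
  -- Open-drain behaviour: only ever drive '0'; the _T signals (tied off
  -- here, driven by the core's state machine in a real design) decide
  -- when the line is pulled low.
  sda_O <= '0';
  scl_O <= '0';
  sda_T <= '1';
  scl_T <= '1';
end architecture;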

-Newman
 
I'm also interested in it

I'm writing a VGA display with an incorporated ROM font. For testing I just
implemented part of the font, but in the future I will have to implement the
whole font, so knowing how to generate a file for the ROM would be interesting.
(I have already made a ROM for an Altera component with DSP Builder, but I
don't know how to reference a file in HDL and have it recognized by XST as a ROM.)


"Marco" <marcotoschi@email.it> a écrit dans le message de news:
ee8bfb0.2@webx.sUN8CHnE...
To print text on the display I need to have the font in a ROM.

Could you explain how to create a file to copy into the ROM?
 
"TonyF" <not@valid.address> schrieb im Newsbeitrag
news:ZX0Sd.1675$%F6.1428@newsfe4-gui.ntli.net...

Nonsense. XST can handle inouts quite well.

Only if they are at the top level. If they are in a sub-module, XST will
complain about not finding the *_I, *_O and *_T ports in your sub-module
(see my other post).
???? If you have inouts between modules that do not go off-chip, XST can handle
them too. But I wouldn't use inout inside the FPGA; there is no reason to do
so, and after all it will not translate into "real" tristates in newer FPGA
families and uses up more resources than separate ins and outs.

Regards
Falk
 
I use Xilinx ISE Foundation.

The ROM is declared in VHDL.
I declare an array that contains each symbol of the font;
specifically, each symbol is represented by 8 std_logic_vectors of 8 bits.
I address the ROM with the concatenation of the symbol code & the 3 LSBs of
the vertical pointer address, then I pick the right bit with the 3 LSBs of the
horizontal pointer.
For example, the 'A' symbol looks like:

--A
"00000000", --<< for vertical separation
"00011000",
"00011000",
"00100100",
"00111100",
"01000010",
"01000010",
"00000000",--<< for vertical separation
^ ^ for horizontal separation

here is the code: http://kclo4.free.fr/fpga/memoire_alphabet.vhdl
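
For readers who skip the linked file, a compact sketch of that addressing
scheme (the type, constant and port names below are my own, and only the 'A'
glyph is filled in):

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity font_rom is
  port (
    symbol : in  unsigned(4 downto 0);   -- which character ('A' = 0 in this sketch)
    v_ptr  : in  unsigned(2 downto 0);   -- 3 LSBs of the vertical pointer (row in glyph)
    h_ptr  : in  unsigned(2 downto 0);   -- 3 LSBs of the horizontal pointer (column)
    pixel  : out std_logic
  );
end entity;

architecture rtl of font_rom is
  type rom_t is array (natural range <>) of std_logic_vector(7 downto 0);
  -- Only the 'A' glyph is shown; a full font fills 8 rows per symbol.
  constant FONT : rom_t(0 to 255) := (
    0 => "00000000",   -- vertical separation
    1 => "00011000",
    2 => "00011000",
    3 => "00100100",
    4 => "00111100",
    5 => "01000010",
    6 => "01000010",
    7 => "00000000",   -- vertical separation
    others => (others => '0'));
  signal row : std_logic_vector(7 downto 0);
begin
  -- address = symbol code & vertical row, then pick one bit with the horizontal pointer
  row   <= FONT(to_integer(symbol & v_ptr));
  pixel <= row(to_integer(h_ptr));
end architecture;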

I'm actually remodeling my VGA display; when it is finished I will share
the code with you if you want (in 2 or 3 days I think).

alexis

"Marco" <marcotoschi@email.it> a écrit dans le message de news:
ee8bfb0.4@webx.sUN8CHnE...
> In what way have you implemented the ROM fonts? (I'm using Xilinx EDK)
 
Martin wrote:

You could not allow your engineers to go home or use
the phone or use the internet


"Symon" wrote:

Perhaps that's why one of them ripped him off in the past... ;-)


The only shackles and chains I had around were already chaining me to my
desk, so that couldn't be it! :)


I gather from the responses that design work security either isn't a
significant issue (BTW, it has NOT been for me) or that no sensible approach
exists. By "sensible" I mean anything that does not adversely affect work
and creativity.

It is an issue. However, you first have to define whom you
want to deter. There are chips with a security bit. I'd
assume it to be crackable with sufficient effort, say
1 man-month and gear for $10k.

This means you're back at square one.

Rene
--
Ing.Buero R.Tschaggelar - http://www.ibrtses.com
& commercial newsgroups - http://www.talkto.net
 
"Falk Brunner" <Falk.Brunner@gmx.de> wrote in message
news:37s7tfF5h3oanU1@individual.net...
"TonyF" <not@valid.address> schrieb im Newsbeitrag
news:ZX0Sd.1675$%F6.1428@newsfe4-gui.ntli.net...

Nonsense. XST can handle inouts quite well.

Only if they are at the top level. If they are in a sub-module, XST will
complain about not finding the *_I, *_O and *_T ports in your sub-module
(see my other post).

???? If you have inouts between modules that do not go off-chip, XST can
handle them too. But I wouldn't use inout inside the FPGA; there is no reason
to do so, and after all it will not translate into "real" tristates in newer
FPGA families and uses up more resources than separate ins and outs.

Regards
Falk
Falk,

From what I have seen, the problem is that platgen.exe (part of EDK, not
XST) is auto-generating wrappers for the IP blocks. The auto-generated
wrappers break out "inout" into *_I, *_O, *_T ports. It also generates a top
level file where these signals are fed into instantiated IO blocks at the
top level. My guess is that the program that auto-generates these files
assumes that the user-defined IP blocks conform to the _I, _O, _T
convention, and XST complains when it sees incompatible connections.
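
For reference, the Unisim IOBUF primitive that such a top-level file
instantiates takes exactly those three fabric-side signals plus the pad. A
hand-written equivalent for an SDA pin might look roughly like this (the
names are mine, not platgen's actual output):

library ieee;
use ieee.std_logic_1164.all;
library unisim;
use unisim.vcomponents.all;

entity sda_pad is
  port (
    SDA   : inout std_logic;   -- the external pin
    sda_I : out   std_logic;   -- to the IP core: value seen on the pin
    sda_O : in    std_logic;   -- from the IP core: value to drive
    sda_T : in    std_logic    -- from the IP core: '1' = high-impedance
  );
end entity;

architecture rtl of sda_pad is
begin
  -- Xilinx IOBUF: I = fabric drive, O = pin readback, T = tristate, IO = pad
  sda_iobuf : IOBUF
    port map (I => sda_O, O => sda_I, T => sda_T, IO => SDA);
end architecture;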

-Newman
 
"newman5382" <newman5382@yahoo.com> schrieb im Newsbeitrag
news:Xn8Sd.104376$JF2.1558@tornado.tampabay.rr.com...

From what I have seen, the problem is that platgen.exe (part of EDK, not
XST) is auto-generating wrappers for the IP blocks. The auto-generated
wrappers break out "inout" into *_I, *_O, *_T ports. It also generates a
top level file where these signals are fed into instantiated IO blocks at the
top level. My guess is that the program that auto-generates these files
assumes that the user-defined IP blocks conform to the _I, _O, _T
convention, and XST complains when it sees incompatible connections.
OK, this sounds like a different story. I haven't used EDK yet.

Regards
Falk
 
On Sat, 19 Feb 2005 21:46:46 -0800, AL wrote:

Hi,

Thanks Bart for that answer. Yeah, that's actually what I am working on right
now, and I'm stuck on one part: how do you know when it fails, and how do you
know what the bit error rate is? In simulation I can see everything, but when
I actually download the code to the FPGA, I don't know what's going on in
there. I tried reading the result back via a JTAG register, but it didn't
work; BSCAN JTAG only allows me to read back a register with a very simple
program. With a program this complicated, it didn't work.

In addition to this bit error rate measurement, my boss wants a DNL and INL
measurement, so as soon as I get done with the bit error rate measurement I
have to work on the DNL and INL part. I would greatly appreciate it if anyone
can help!

Thanks,
Ann

The bit error rate measurement is fairly simple once you have the transmit
and receive blocks with a compare. Build a counter large enough for your
fail count (with an overflow indication in case it exceeds your expected fail
count) and increment the counter on every miscompare. Wire the counter output to
a register that you could read with your JTAG interface and read the fail
count. The bit error rate is simply the ratio of the number of bad compares
to the number of bits transmitted. There are some variations to the
calculation if you want log error rate etc but it should be a quick
calculation. If you are having trouble reading out the result you could
build a serial or parallel interface of your choosing to the PC. Something
like SPI, I2C, 4 bit parallel (or just wire out to a bunch of 7 segment
displays.) I have had good results building a serial 3 wire custom SPI
interface on the FPGA and using the PC printer port to clock the FPGA
register bits out.
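
A minimal VHDL sketch of such an error counter (the widths and port names are
placeholders; hook fail_count and bit_count into whatever readback path you
end up using):

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity ber_counter is
  port (
    clk, rst   : in  std_logic;
    bit_valid  : in  std_logic;                  -- one compare per clock when high
    tx_bit     : in  std_logic;                  -- what was sent (local reference)
    rx_bit     : in  std_logic;                  -- what came back
    fail_count : out unsigned(31 downto 0);      -- read this out (JTAG/SPI/parallel)
    bit_count  : out unsigned(47 downto 0);      -- total bits compared
    overflow   : out std_logic                   -- fail counter saturated
  );
end entity;

architecture rtl of ber_counter is
  signal fails : unsigned(31 downto 0);
  signal bits  : unsigned(47 downto 0);
  signal ovf   : std_logic;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if rst = '1' then
        fails <= (others => '0');
        bits  <= (others => '0');
        ovf   <= '0';
      elsif bit_valid = '1' then
        bits <= bits + 1;
        if tx_bit /= rx_bit then
          if fails = (fails'range => '1') then
            ovf <= '1';                          -- saturate instead of wrapping
          else
            fails <= fails + 1;
          end if;
        end if;
      end if;
    end if;
  end process;

  fail_count <= fails;
  bit_count  <= bits;
  overflow   <= ovf;
  -- BER = fail_count / bit_count, computed offline on the PC side.
end architecture;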

For your INL and DNL measurements I am not sure where they would fit in with
what I understand about your project. From the work I have done with INL and
DNL measurements there is usually a DAC or an ADC that is the subject of the
measurement. If you can give some more details about the functional blocks
you are trying to test I might be able to propose a few options.

Regards,
Bart
 
Hi Johan,

Johan Bernspång wrote:

I was able both to convert to PDF and to send directly to the printer
from the printing wizard in ChipScope. I've installed service pack 3 for
ChipScope 6.3; I don't know if that has anything to do with it... Might
be worth a try if you haven't.
Hmmm, installed SP3 but no change. Time to open a webcase I suppose...

I can email you some of my waveforms as PDFs if that makes you
happier. =)
A generous offer, but probably won't help me debug my DMA bandwidth
issues :)

John
 
Hi Bart,

For the DNL and INL test, all I know so far is that I have an 8-bit ramp coming
in from a transmitter, and I need to make the FPGA do DNL and INL measurements.
What other information do you need? Please let me know. Also, have you ever used
JTAG to read a register result before?

Thanks,
Ann
 
Hi,

Thanks for all the helpful responses. But I still have another question. Has
anyone ever used JTAG to read back a register's contents? I ran into some
problems doing this and need some help. When I put a constant number into the
register and read it back, it works, but when I have that number change
depending on an if-else statement, it doesn't work anymore. For example, in
the following code:

always @(posedge CLK_IN) begin
  if(RESET) begin
    num = 20+1;
  end else begin
    num = 1+1;
  end
end

It would give me 00010011 or 21 even though the RESET signal has changed. I
tried using the CASE statement instead:

always @(posedge CLK_IN) begin
  case(RESET)
    2'd0: num = 20+1;
    2'd1: num = 2+2;
    default: num = 3+4;
  endcase
end

Now it always gives me 00000000 when I try to read it back. Do you have any
idea why?

Thanks,
Ann
 
"Thomas Entner" <aon.912710880@aon.at> schrieb im Newsbeitrag
news:42176496$0$33864$91cee783@newsreader01.highway.telekom.at...
Hello Piotr,

With a 1C20, speedgrade 6 and using Quartus physical synthesis, I achieve
116 MHz (using fast-fit, in contrast: 92 MHz); with speedgrade 8 (a bit
cheaper...) this drops to 89 MHz (typical design, with SDRAM controller). The
real fmax of course depends on your design, e.g. which peripherals you are
using, how full your chip is, etc.

If you need the CPU only for simple control tasks, you might also consider
using our ERIC5 (www.entner-electronics.com). However, there is no support
for fast multiplications and divisions; it is more comparable to an ATMEL AVR
in performance (but with higher fmax).
Hi Thomas,

could you please give the Quartus resource utilization for ERIC5 when
targeting an EPM240 and executing from UFM?
On your website you claim it would be 50% and that ERIC5 was
initially targeted at MAX2. I am just curious to see that report :)

Antti
PS: the two other companies that used to offer 9-bit processor
IP cores are now dead and vanished; hope you have better luck!
 
"Falk Brunner" <Falk.Brunner@gmx.de> schrieb im Newsbeitrag
news:37rj4hF5ea8fvU1@individual.net...
"TonyF" <not@valid.address> schrieb im Newsbeitrag
news:Wp%Rd.1219$%F6.1075@newsfe4-gui.ntli.net...

Just noticed that in your VHDL code you don't use inout ports, resulting
in 200% bloating of a normal inout port declaration. I presume this is
because XST is too lazy to parse inouts so that we have to do some kind

Nonsense. XST can handle inouts quite well.

Regards
Falk
Yes and no.
XST can handle inout, YES, but:

1) for EDK, the _I _O _T usage is required to be "EDK compliant" - this
issue has nothing to do with XST inout handling
2) inout use with Xilinx tools is sometimes an issue: the control port of
the ChipScope cores is a single port that is kind of an inout, as one wire
has a different direction, and that very often causes problems. The ChipScope
cores are delivered as a netlist and used with a Verilog/VHDL wrapper where
the inout port is declared as unidirectional; this works, usually... sometimes
it works better when the wrapper is defined as inout. I am just pointing out
that there are cases where 'inout' or not inout is an issue within the Xilinx
toolchain.

Antti
 
TonyF wrote:

Just noticed that in your VHDL code you don't use inout ports, resulting
in 200% bloating of a normal inout port declaration. I presume this is
because XST is too lazy to parse inouts, so that we have to do some kind
of back-end annotation alongside HDL programming, resulting in not very
elegant code.
Usually it is good coding style not to use inouts inside the design,
especially if the design should be portable to different architectures
and tools.

Usually IO pads are implemented at the top level, especially in ASIC-based
designs where the IO ring is usually generated with automatic tools. Also,
I have sometimes seen formal tools choke on internal inout ports during
RTL-to-gate verification.

--Kim
 
Jedi <me@aol.com> wrote in message news:<hgMMd.1754$zk.836@read3.inet.fi>...
Does anybody have an idea why the NIOSII 1.1 toolchain build fails
on Linux/BSD systems with:

*** ld does not support target nios2-elf
*** see ld/configure.tgt for supported targets
make: *** [configure-ld] Error 1

Somehow it loses "nios2-unknown-elf"...


It builds fine on Win2k under Cygwin...
Have you used the same value for $prefix for both binutils and gcc? Is
$prefix/bin in your path?

Cheers,
Jon
 
newman5382 wrote:
This is probably the price to pay for such a cheap tool, so I should not
really complain. Synplify will allow you to use inouts in sub-modules, but
it costs much more than XST.

There is a school of thought that all off chip IO should be
inferred/instantiated at the top level, and not in sub-modules.

-Newman
Is there a good reason for this school of thought?

Using that concept, when I go to take that top level and create a 4x
version of it, I can't just create a new top level with a generate
statement. Now I have to go edit a completely working design and
convert all the inouts to separate ins and outs. And if that
original block is still being used in the original design, I now have
two different versions of the exact same thing that I have to maintain.
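
To make the point concrete, a hedged sketch with invented names: if the
working block keeps its own inout pin, the 4x top level really is just a
generate loop around it.

library ieee;
use ieee.std_logic_1164.all;

entity working_block is                        -- stand-in for the original, unmodified design
  port (clk : in std_logic; pad : inout std_logic);
end entity;

architecture rtl of working_block is
  signal drive, enable : std_logic := '0';     -- placeholder internal logic
begin
  pad <= drive when enable = '1' else 'Z';     -- the block owns its own pin
end architecture;

library ieee;
use ieee.std_logic_1164.all;

entity quad_top is
  port (
    clk  : in    std_logic;
    pads : inout std_logic_vector(3 downto 0)  -- four copies of the block's pin
  );
end entity;

architecture rtl of quad_top is
begin
  -- Replicate the untouched block four times; no edits to the original needed.
  gen4 : for i in 0 to 3 generate
    u : entity work.working_block port map (clk => clk, pad => pads(i));
  end generate;
end architecture;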

Have fun,

Marc
 
