EDK : FSL macros defined by Xilinx are wrong

Yes Allan, I am sure I am using different bits.


E.g.:

I tried this VHDL code, but in the PAR file there is no pin assignment for bits 13 down to 7 of the signal:

P_SRAM2LED : process(CLK_2X)  -- clocked process: only the clock belongs in the sensitivity list
begin
    if RISING_EDGE(CLK_2X) then
        if SR_IVCS_V3_int = '0' then
            if SR_IWR_int = '0' then
                if SR_ADDR_IO_int = "001100" then
                    LED_V3_int <= SR_DATA_IO_int(13 downto 7);
                end if;
            end if;
        end if;
    end if;
end process P_SRAM2LED;
 
Uwe Bonnes <bon@elektron.ikp.physik.tu-darmstadt.de> wrote in message news:<behcjc$stq$1@news.tu-darmstadt.de>...
Udayan <udayan@jhu.edu> wrote:
...
: So I checked my design and removed the extra BUFG that was getting
: tagged to my divided clock output. However, since I would like to
: buffer my divided clock and the Synthesis engine automatically
: attaches a BUFG to the GCLKIOB pin input I am still left with one
: extra GCLK in my design being used up. Is there some way to get around
: that?

Can't you use a Clock Enable and the original clock instead of the divided
clock?

: Would a divide using the clk_dll still use up the extra GCLK
: resource?

The DLL output will use a GCLK too, to my knowledge.

: Also when I do an edge test with my signal lines that arrive at
: regular IOBs and pass them through a BUFG the MAP report gives a
: warning that:

: WARNING:NgdBuild:483 - Attribute "LOC" on "c_req" is on the wrong type of
: object. Please see the "Attributes, Constraints, and Carry Logic" section of
: the Libraries Guide for more information on this attribute.

: What does this mean? I have tried looking up the Libraries Guide but
: without success.

Perhaps you have an Output pin on a dedicated input or such. Recheck with
the data sheet.

Bye
 
Hi Uwe,

I sorted the GCLK problem out. Apparently if you use a clkdll it does
not put a BUFG in front but an IBUFG and only the locked signal passes
through a BUFG. That way the synthesizer does not infer a GCLK item
for the input.

Still having trouble with the IOB warnings though... Somehow the
synthesiser has now concluded that they are GCLKIOBs, but since they
are not on the pins assigned for GCLKIOBs they are probably causing
the trouble.

If you feel inspired and find an answer to my present query drop me a
line.

Thanks

Udayan
 
The JEDEC standard name for "Jam" is "STAPL".
If you download the very latest Altera STAPL interpreter, it will
support JEDEC-standard STAPL files without issue.
Earlier versions only supported Altera's proprietary formats.

Antti Lukats wrote:

christoph.grundner@agfa.com (Christoph Grundner) wrote in message news:<8a172a75.0307030527.40c28b2d@posting.google.com>...


Hi there

I'm currently trying to configure Altera's JAM Player for a Mitsubishi
M16 controller to program multi-vendor-device JTAG chains. Input files
are either *.jam (JAM file) or *.jbc (JAM byte-code file).
Is there a (preferably free) Xilinx tool to produce either one of
these file types? Where can I download the tool?



iMPACT generates JAM files (well, they are called STAPL, but it is the same
thing). There are some problems though: most of the generated files will not
work with a non-patched JAM player :(

antti
 
If you are programming the Spartan2 device after the xc18v01 has been
configured, then initiating the boundary-scan configuration of the
Spartan2 (program -p 2) also initiates configuration of the Spartan2
from the xc18v01 and results in a configuration data conflict within the
Spartan2.

There are several ways to work around this problem, any one of which
should work:
(a) erase the PROM before configuring the Spartan2
(b) set the mode pins on the Spartan2 device to boundary-scan mode
(c) set the preference (Edit->Preferences) to Use HIGHZ instead of BYPASS
(d) generate an mcs file from the bit file and configure the PROM only

Jimy wrote:

Hi,
I have an Avnet Spartan2 board. If I download the .mcs (PROM
file) that came with the board, everything seems OK. Then I
built my design, went through the implementation flow, and ran a
script to drive impact.exe:

setMode -bscan
setCable -p lpt1
addDevice -p 1 -part xc18v01
addDevice -p 2 -file download.bit
program -p 2
quit


but what I see at the end of the run is this:


'2':programming device...done.
INFO:iMPACT:579 - '2': Completed downloading bit file to device.
INFO:iMPACT:580 - '2':Checking done pin ....done.
'2': Programming terminated, Done did not go high.
----------------------------------------------------------------------
----------------------------------------------------------------------
Done.



Do you know what might be the problem? Note that I have the same P4
download cable, JTAG cable and power supply.

Thanks,
Jim
 
On Tue, 8 Jul 2003 14:33:20 +0200, "Marc Battyani"
<Marc.Battyani@fractalconcept.com> wrote:

Hello,

I want to make a phase measurement at 100 MHz with an NCO running at 200+ MHz.
The NCO will have a 32-bit phase accumulator and a 32-bit phase offset. The
output will be only one bit.
I will use a phase comparator followed by an integrator (digital or analog
if needed).
At 100 MHz the NCO output will be very noisy, but if I integrate it for a
rather long time (10 ms) will it have a zero mean?
Can I implement this in an FPGA, or should I use a DDS chip (AD9854)?
Where can I find some maths on this subject?

Thanks

Marc Battyani
I've used the AD9954 DDS and the phase noise is about -140 dBc/Hz at
10 kHz offset. Your clock reference needs to be better than that if you want an
accurate measurement. I used a DRO for the reference and an Agilent
E5500 phase noise test set to measure it. Download the Analog Devices
DDS tutorial for all the math involved.
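On the zero-mean question itself, a quick behavioral sketch may help: model the 32-bit accumulator, take its MSB as the 1-bit output, and measure the duty cycle over a long window. The output frequency below is an arbitrary illustrative choice, not a value from the thread.

```python
# Sketch of the 1-bit NCO question: a 32-bit phase accumulator clocked at
# 200 MHz, with the accumulator MSB as the single output bit. The bit
# jitters by up to one clock period, but its duty cycle averaged over a
# long integration window converges to 50%, i.e. the jitter has zero mean.
ACC_BITS = 32
MASK = (1 << ACC_BITS) - 1
FCLK = 200e6                          # NCO clock, per the post
FOUT = 33.1e6                         # assumed test frequency
STEP = round(FOUT / FCLK * (1 << ACC_BITS))

acc = 0
ones = 0
N = 1_000_000                         # 5 ms of samples at 200 MHz
for _ in range(N):
    acc = (acc + STEP) & MASK
    ones += acc >> (ACC_BITS - 1)     # MSB = 1-bit NCO output

print(f"duty cycle over {N} samples: {ones / N:.4f}")  # close to 0.500
```

The same argument is why integrating the phase-comparator output over ~10 ms averages the one-clock jitter down to its mean.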
 
"t hicks" <hicksthe@egr.msu.edu> wrote in message
news:beqgl8$8m8$1@msunews.cl.msu.edu...
Hi,
I am in the process of doing a re-design on a multi-board system that I
have working right now. We are redesigning to convert to USB 2.0 and to add

[SNIP]

Thanks,
Theron Hicks
Just some anecdotal evidence: we had a fairly simple multi-card PCI
setup where we had to use very short line lengths to an on-card PCI bridge
for each card, because the signals with simple daughter cards were unusable.
Known impedance and proper termination are critical even at PCI's 33/66
MHz; 200 MHz would be many times worse.

Norm
 
Second, can I get the bus working correctly for the data stuff? Note: the
bus uses standard LVTTL (3.3 V) for logic signals at a minimum pulse width of
about 40 ns.
As a straw man, consider PCI. It runs a bit faster than 40 ns, but
it doesn't get anywhere near 18 slots.


The bus has a total of 19 connectors spaced about .75" apart.
Some of the connectors could be left open, and there will also be a short stub
on the individual cards that must be terminated in some fashion.
Usual practice with multidrop backplane busses is to make that stub
as short as possible and live with it (no termination). It screws
things up, generally by looking like a small cap which reduces the
effective impedance of the backplane. (Same math as a row of
memory chips on a bus.)
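That "row of memory chips" math is the loaded-transmission-line model: distributed load capacitance lowers the bus's effective impedance and speed by the same factor. A hedged numeric sketch, with all component values being illustrative assumptions rather than numbers from this thread:

```python
import math

# Loaded-backplane sketch: per-slot load capacitance (stub, connector,
# input pins) adds to the trace's intrinsic capacitance per unit length,
# dividing the effective impedance by sqrt(1 + Cload/(C0 * pitch)).
Z0 = 65.0         # unloaded trace impedance, ohms (assumed)
C0 = 3.3e-12      # intrinsic trace capacitance per inch, F (assumed)
CLOAD = 10e-12    # per-slot load: stub + connector + input cap, F (assumed)
PITCH = 0.75      # slot spacing, inches (from the post)

k = math.sqrt(1 + CLOAD / (C0 * PITCH))
print(f"loaded impedance: {Z0 / k:.1f} ohms (down from {Z0:.0f})")
print(f"propagation delay grows by the same factor {k:.2f}")
```

With numbers in this ballpark the impedance drops by more than half, which is why the terminators at the ends need to match the loaded, not the unloaded, impedance.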

Sometimes with things like this, you can gain a factor of 2 by
putting the master card in the middle and splitting the bus into
two. Or you split it into 4 and interlace the cards on each side.

One thing to consider is putting terminators at each end of the
backplane and using something other than LVTTL.

I expect you will be doing lots of simulations. Please let us
know what you decide to build and/or how well it works.

--
The suespammers.org mail server is located in California. So are all my
other mailboxes. Please do not send unsolicited bulk e-mail or unsolicited
commercial e-mail to my suespammers.org address or any of my other addresses.
These are my opinions, not necessarily my employer's. I hate spam.
 
Jay wrote:

I am designing a digital PLL and I'm trying to figure out what DAC
resolution I need to drive my VCO, given a 0.1 degree phase accuracy
requirement (that is, reference and output need to be within 0.1 degrees
of each other).

The DPLL output operates over a range of 1 Hz to 50 Hz (not too tough),
and my VCO gain is 11.8333 Hz/Volt. My phase detector is the standard
two-DFF type and can detect a minimum time difference of 20 ns between the
two waveforms. The DAC output (which controls the VCO) range is 0 - 5 V.

My problem is that I'm having trouble relating my phase requirement to
DAC voltage step-size (number of bits).
Assuming that the DAC is updated once each cycle of the output
frequency, you want your frequency to be within f (1 +- 1/3600), which
would generate the maximum phase error, assuming that the phase was
exactly matched at the beginning. That suggests that you want at least
a 12-bit converter.

If you did external filtering, you could use fewer bits and toggle the
DAC setting more frequently such that the average voltage is for the
correct frequency.

Thad
 
Thad Smith wrote:

I'm following up my own post for a correction.

Assuming that the DAC is updated once each cycle of the output
frequency, you want your frequency to be within f (1 +- 1/3600), which
would generate the maximum phase error, assuming that the phase was
exactly matched at the beginning. That suggests that you want at least
a 12-bit converter.
12 bits should be sufficient for the full-scale frequency. Since the OP said
he needed to track 1 to 50 Hz with 0.1 degree max phase error, he will need
an additional 6 bits to get the required resolution at the low end (1 Hz).
Note that 18 bits of accuracy aren't needed -- only 18 bits of resolution,
because of the feedback. The resolution could be achieved with two 10-bit or
12-bit converters.
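The bit counts above can be checked in a couple of lines (the 3600 comes from 360 degrees / 0.1 degree):

```python
import math

# 0.1 degree out of 360 is 1 part in 3600, so the per-cycle frequency step
# must resolve 1/3600 -> ceil(log2(3600)) = 12 bits. Keeping that relative
# resolution at 1 Hz when full scale is 50 Hz costs ceil(log2(50)) ~ 6 more.
phase_bits = math.ceil(math.log2(360 / 0.1))        # 12
range_bits = math.ceil(math.log2(50 / 1))           # 6
print(phase_bits, range_bits, phase_bits + range_bits)  # 12 6 18
```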

If you did external filtering, you could use fewer bits and toggle the
DAC setting more frequently such that the average voltage is for the
correct frequency.
This is still true and could be used instead of another converter.

Thad
 
Thad -

sounds good in theory - but there has never been a 12-bit DAC with 12
bits of reality - the LSB is junk with power supply noise and
non-linearity. The 12-bit DACs are really capable of 11 - 11.3 bits
with GOOD (excellent) ground planes and linear regulators and a cold
plate for temperature stability.

The phase error means that his load into the m and n integer will not do
well in this application - they only increment on cross. I would
suggest going to a fully digital output using the DAC and
integrating it a little (time constant ~ 1/2 the update rate). The
phase error budget here will kill him.

Andrew
Thad Smith wrote:

Thad Smith wrote:

I'm following up my own post for a correction.



Assuming that the DAC is updated once each cycle of the output
frequency, you want your frequency to be within f (1 +- 1/3600), which
would generate the maximum phase error, assuming that the phase was
exactly matched at the beginning. That suggests that you want at least
a 12-bit converter.



12 bits should be sufficient for the full scale frequency. Since the OP said he needed to
track 1 to 50 Hz with 0.1 degree max phase error, he will need an additional 6 bits to get
the required resolution at the low end (1 Hz). Note that 18 bits of accuracy aren't
needed -- only 18 bits of resolution, because of the feedback. The resolution could be
achieved with 2 10 bit or 12 bit converters.



If you did external filtering, you could use fewer bits and toggle the
DAC setting more frequently such that the average voltage is for the
correct frequency.



This is still true and could be used instead of another converter.

Thad
 
Thanks for the responses everyone. I'm still a little puzzled, but
it's gone from dark to murky =).

news:<3f28efba.636665@news.wwnet.net>...
On Wed, 30 Jul 2003 19:45:56 -0600, Thad Smith <ThadSmith@
wrote:

Thad Smith wrote:

I'm following up my own post for a correction.

Assuming that the DAC is updated once each cycle of the output
frequency, you want your frequency to be within f (1 +- 1/3600), which
would generate the maximum phase error, assuming that the phase was
exactly matched at the beginning. That suggests that you want at least
a 12-bit converter.

12 bits should be sufficient for the full scale frequency. Since the OP said he needed to
track 1 to 50 Hz with 0.1 degree max phase error, he will need an additional 6 bits to get
the required resolution at the low end (1 Hz).
no-one@nowhere.com (Robert Scott) wrote in message

Not true. The OP quoted the phase error spec in terms of degrees, not
microseconds. .1 deg is 1 out of 3600 at any frequency. So 12 bits
(which gives 1 out of 4096) is good enough at any frequency.
I'm on the same page with regard to the 1/3600 part at a frequency.

I think what Thad is saying is that I need 12 bits at a given
frequency, but to get to a given frequency I need more bits. If my
range is 1-50 Hz, just to get to a frequency in that range (if I could
do it in integer multiples of Hz) I'd need 6 bits, assuming a 1:1
correlation between a bit and output Hz. Then I would need an
additional 12 bits of sub-frequency control to meet my phase
requirement.

Also, I think that the VCO's gain factor comes into play and will
affect the number of bits, given I don't have a 1:1 correlation
between a bit and a Hz.

Thanks again for the responses...

-- Jay.
 
Andrew Paule wrote:
Thad -

sounds good in theory - but there has never been a 12 bit DAC with 12
bits of reality - the LSB is junk with power supply noise and
non-linearity. The 12 bit DACs are really capable of 11 - 11.3 bits
with GOOD (excellent) ground planes and linear regulators and a cold
plate for temperature stability.
For this application, the OP has error feedback, which removes the need
for long term stability, accuracy, and good linearity. Noise would be
more of a problem, though.

The phase error means that his load into the m and n integer will not do
well in this application - they only increment on cross.
The m and n integer? What do you mean here?

Thad
 
Hello,

In article <3F2A9077.12B7170F@xilinx>, peter@xilinx says...
I have been watching this thread for a while...
Why would anybody do this design in analog, when it is so easy to get
close to perfection doing the whole thing digitally ? For the price of
one DAC you can get thousands of flip-flops. Use a 50 or 100 MHz clock
and achieve any accuracy you want. Use multi-phase 200 MHz clocks if
you need better than one nanosecond precision...
Am I missing something ?
Well, I cheated just a bit, I don't have a VCO, I have a motor control
unit. I'm varying the voltage to the motor control unit to get a
"frequency" out of it. My VCO "gain" is really the motor gain (RPM/Volt
translated to Hz / volt). I didn't want to complicate the situation by
bringing that in (being a motor and not a VCO doesn't alter the number
of bits question).

Using PWM would work for my application up to 11-12 bits (given a 40-60 MHz
input clock), but beyond that my PWM output frequency drops too low. I
haven't solved this problem yet... dithering may work here.
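That ceiling is easy to check: the PWM carrier frequency is the clock divided by 2^bits. The 50 MHz below is an assumed mid-range value for the 40-60 MHz clock mentioned above.

```python
# PWM resolution vs. carrier frequency: carrier = clock / 2**bits, so
# pushing past ~12 bits drags the carrier into the low kHz or below.
FCLK = 50e6
for bits in (11, 12, 16, 18):
    print(f"{bits} bits -> {FCLK / 2**bits:9.1f} Hz carrier")
```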

I also considered using a real DAC, buffering the output and driving an
SMPS in voltage-control mode (to drive the motor) but as others have
pointed out, the DAC noise problems will probably kill me.

My news server has been acting funny (read: not letting me post), otherwise
I would have tried to clarify a bit earlier.

But I'm curious Peter, assuming I used a 50 to 100MHz clock, the only
way to get the delays would be to make a (big) shift-register and delay
my signal by clock cycles, right? Wouldn't that mean a huge multiplexor
on the output to select which tap I use?

How exactly would multi-phase 200MHz clocks work out here? Generate a
0deg and a 90deg signal using the DCM on a Xilinx part or something?

Thanks again for all the helpful responses everyone.

-- Jay.
 
In PLLs, you have either an M divide register, or both M and N divide registers.

Thad Smith wrote:

Andrew Paule wrote:

Thad -

sounds good in theory - but there has never been a 12 bit DAC with 12
bits of reality - the LSB is junk with power supply noise and
non-linearity. The 12 bit DACs are really capable of 11 - 11.3 bits
with GOOD (excellent) ground planes and linear regulators and a cold
plate for temperature stability.


For this application, the OP has error feedback, which removes the
need for long term stability, accuracy, and good linearity. Noise
would be more of a problem, though.

The phase error means that his load into the m and n integer will not
do well in this application - they only increment on cross.


The m and n integer? What do you mean here?

Thad
 
I think that you should just listen to Peter, and go straight digital
out to either a DAC or some integration circuit - trying to model this
as a PLL sounds easy, but going straight digital is the safer route.

Andrew

John_H wrote:

"Jay" <se10110@yahoo.com> wrote in message
news:MPG.19944c08c648a2109896bb@news.surfcity.net...
snip


Well, I cheated just a bit, I don't have a VCO, I have a motor control
unit. I'm varying the voltage to the motor control unit to get a
"frequency" out of it. My VCO "gain" is really the motor gain (RPM/Volt
translated to Hz / volt).


/snip

Rather than using an external phase comparator, could you sample the motor
signal to give you how "far" you are from your desired zero phase? This
error magnitude would work to give a better frequency match. If you know
how far off you are in phase, the integral of the frequency difference over
your period (1Hz to 50Hz) can be calculated to regain zero phase. The motor
control slew rate will need to be part of the overall phase-locked-loop
design.

snip


Using PWM would work for my application upto 11-12 bits(given a 40-60MHz
input clock), but beyond that my PWM output frequency drops too low. I
haven't solved this problem yet...dithering may work here.


/snip

You could go to a sigma-delta style converter rather than a simple PWM. I'm
not sure if someone has convenient reference code, but you can get extreme
precision in your control voltage as long as you filter out the high
frequency noise that the converter produces. I've wanted to do something
with this approach using a single-package D-flipflop with nice analog rails
to give me a clean voltage (since FPGA outputs are affected by what else is
going on in the I/O or the core).
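A first-order sigma-delta modulator of the kind described above is only a few lines. This is a behavioral sketch (the function name and the 16-bit width are my own choices, not reference code): the average duty cycle of the 1-bit stream tracks a high-resolution setpoint.

```python
# First-order sigma-delta (accumulator-overflow) modulator: each cycle the
# 16-bit setpoint is added to an accumulator, and the carry-out is the
# 1-bit output. Averaged (e.g. by an RC filter on a clean-railed
# flip-flop), the output tracks the setpoint to far finer resolution than
# PWM at the same clock, at the cost of high-frequency noise to filter out.
BITS = 16
FULL = 1 << BITS

def sigma_delta_ones(setpoint_frac, n):
    """Return how many 1s a setpoint in [0, 1) produces in n samples."""
    target = int(setpoint_frac * FULL)
    acc = 0
    ones = 0
    for _ in range(n):
        acc += target
        if acc >= FULL:        # carry-out is the output bit
            acc -= FULL
            ones += 1
    return ones

n = 100_000
duty = sigma_delta_ones(0.3125, n) / n
print(f"output duty cycle: {duty:.5f}")   # 0.31250
```

Unlike PWM, the 1s are spread as evenly as possible in time, so the noise sits at high frequency where a simple RC filter can remove it.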

Given that the motor control may not react as quickly as one cycle, you may
not need high resolution and the PWM output may be fine.
 
Jay wrote:

Well, I cheated just a bit, I don't have a VCO, I have a motor control
unit. I'm varying the voltage to the motor control unit to get a
"frequency" out of it. My VCO "gain" is really the motor gain (RPM/Volt
translated to Hz / volt).
That introduces some additional considerations. A VCO would be fairly
stable for a given control voltage. Is this true of the motor? If
there are load variations, can the controller keep it within the narrow
speed window needed to maintain your phase margin?
increases the drive to compensate for a load increase, it probably won't
do anything to recover accumulated phase error. If you absolutely need
0.1 degree maximum phase error while the load changes, you might need a
much stiffer motor drive, as well as immediate feedback from the motor
to the controller, probably a high resolution rotary encoder. If there
is very little load change or your phase error limit can be exceeded at
times, it won't be as bad.

Thad
 
zhengyu wrote:
I've got two quick questions. I don't have an FPGA yet, but I want someone to
offer me some quick comments.

1. I have to do some 64-bit integer comparisons - actually I have to do up
to 64 comparisons at the same time - and the output is whether any pair
is equal.
This is not a question... :)

Equality compares are easy. They use a two-input XOR for each bit, with
all the results being OR'd together. This will take 32 LUTs for the XORs
and the first OR gate, and 11 more LUTs to combine the rest, for a total
of 43 LUTs in four levels. If the design uses the "special" features
that most chips have (ORing of LUTs within a CLB), you can use the LUTs
in pairs or even groups of four and reduce the number of levels for
speed.
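The 43-LUT count can be reproduced by walking the 4-input LUT tree; this is a sketch of the arithmetic only, not a vendor-specific mapping:

```python
import math

# 64-bit equality compare with 4-input LUTs: each first-level LUT absorbs
# two XOR bit-compares plus their OR (64/2 = 32 LUTs), then the 32 partial
# results are OR-reduced 4-at-a-time.
def or_reduce_luts(n):
    """LUTs and levels to OR-reduce n signals with 4-input LUTs."""
    luts, levels = 0, 0
    while n > 1:
        n = math.ceil(n / 4)
        luts += n
        levels += 1
    return luts, levels

first_level = 64 // 2                      # 32 LUTs
tree_luts, tree_levels = or_reduce_luts(first_level)  # 8 + 2 + 1 = 11, 3 levels
print(first_level + tree_luts, 1 + tree_levels)       # 43 LUTs, 4 levels
```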


2. If I want to create a 16-bit address space, that would translate to 512
kbits. Does Virtex-II give enough block RAM so I don't have to use external
SRAM to do that? What kind of latency should I expect from typical SRAM - is
a 5 ns read access reasonable? And what is the performance of block RAM?
Is that a 16-bit address space (64k words) of 8-bit words? Because 64k x 8 =
512 kbits.

You can get this much RAM in the VirtexII if you use the XC2V500 part.
Or in the new Spartan3 you could use a XC3S1500. I am not sure which
will be cheaper, but I bet it is the Spartan3.

The speed of the block RAM will be much faster than anything external to
the FPGA. The block ram will be synchronous and lends itself well to
pipelined operations.

A lot of how your design will be implemented will depend on your data
flow, which you have said nothing about. Think about how the storage
will be organized and accessed. Obviously one large block of memory
with one interface will not let you do 64 compares at one time. If your
rate of performing these compares is not fast, you can use one compare
logic block and run the different data through it sequentially. Then
one memory could easily do the job.

--

Rick "rickman" Collins

rick.collins@XYarius.com
Ignore the reply address. To email me use the above address with the XY
removed.

Arius - A Signal Processing Solutions Company
Specializing in DSP and FPGA design URL http://www.arius.com
4 King Ave 301-682-7772 Voice
Frederick, MD 21701-3110 301-682-7666 FAX
 
Yes, the chip is out there, but how many are using it? How long will
this still be an available product?


Marc Van Riet wrote:
Anyone any experience with the FPSLIC devices ? They have several packages
with low pin count (84 PLCC, 100 VQFP, 144 TQFP). Only up to 40Kgates FPGA
(2800 registers), but you do have a processor core, and several peripherals,
and 32Kbytes + 16 Kbytes of memory already built-in.

Marc

"Rob Judd" <judd@ob-wan.com> wrote in message
news:3F2A9152.CE7DFE2A@ob-wan.com...
Nicholas,

No, manufacturability is the main concern. I don't have easy access to
high volume production machinery, which is almost guaranteed to be
necessary for most of the newer packages. If I can plug it in, great. If
not, I need to be able to hand-solder it with a standard Weller iron.

Rob


Nicholas C. Weaver wrote:

In article <3F2A4153.66C411AD@ob-wan.com>, Rob Judd <judd@ob-wan.com
wrote:
Hi,

My application requires a lot of core but few physical i/o lines. Can
anyone suggest a modern fpga that is delivered in a 68-pin plcc and/or
80-pin pqfp package?

Is your concern board area? Hand soldering? Cost?

A small BGA package might be appropriate, as a .5mm spacing BGA for a
small pincount is really tiny, if the concerns are board area and
cost.
--
Nicholas C. Weaver
nweaver@cs.berkeley.edu
--

Rick "rickman" Collins

rick.collins@XYarius.com
Ignore the reply address. To email me use the above address with the XY
removed.

Arius - A Signal Processing Solutions Company
Specializing in DSP and FPGA design URL http://www.arius.com
4 King Ave 301-682-7772 Voice
Frederick, MD 21701-3110 301-682-7666 FAX
 
40K gates is way too small for anything I'm considering, and the "added
value" stuff just wastes internal space. We all know where to find AVR
core and serial if we want it.

Rob


Marc Van Riet wrote:
Anyone any experience with the FPSLIC devices ? They have several packages
with low pin count (84 PLCC, 100 VQFP, 144 TQFP). Only up to 40Kgates FPGA
(2800 registers), but you do have a processor core, and several peripherals,
and 32Kbytes + 16 Kbytes of memory already built-in.

Marc

"Rob Judd" <judd@ob-wan.com> wrote in message
news:3F2A9152.CE7DFE2A@ob-wan.com...
Nicholas,

No, manufacturability is the main concern. I don't have easy access to
high volume production machinery, which is almost guaranteed to be
necessary for most of the newer packages. If I can plug it in, great. If
not, I need to be able to hand-solder it with a standard Weller iron.

Rob


Nicholas C. Weaver wrote:

In article <3F2A4153.66C411AD@ob-wan.com>, Rob Judd <judd@ob-wan.com
wrote:
Hi,

My application requires a lot of core but few physical i/o lines. Can
anyone suggest a modern fpga that is delivered in a 68-pin plcc and/or
80-pin pqfp package?

Is your concern board area? Hand soldering? Cost?

A small BGA package might be appropriate, as a .5mm spacing BGA for a
small pincount is really tiny, if the concerns are board area and
cost.
--
Nicholas C. Weaver
nweaver@cs.berkeley.edu
 
