EDK : FSL macros defined by Xilinx are wrong

Does anybody know if this condition
affects only simulation?
If you see it in the simulation then there is a good chance that it is going
to happen in the hardware. If you see it in hardware you may be able to
reproduce it in the simulator.

That is life.

Steve
 
You haven't really given us enough information - you should post a little
more about the problem; are you getting an error or warning from the
simulation, and if so, what exactly is happening around the time of the
error? We also need to know a little more about your application.

Are you using the same clock for both ports of the RAM (i.e. are CLKA and
CLKB driven from the same BUFG)? If so, there is one condition where reading
and writing to the same address is legal - if the port doing the write is
set to "READ_FIRST" (an attribute set on the RAM), then the data returned
from the read port is the previous contents of the RAM. You also don't
specify if the two ports of the RAM are the same widths.
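The READ_FIRST behavior described above can be modeled in a few lines (a sketch of the semantics, not a Xilinx simulation model; the function name is illustrative):

```python
# Model one clock edge of a dual-port RAM port set to READ_FIRST:
# on a simultaneous read+write to the same address, the read data
# is the OLD contents; the write lands afterwards.
def read_first_port(ram, addr, write_enable, write_data):
    """Simulate one clock edge; return the registered read data."""
    read_data = ram[addr]          # old contents come out first
    if write_enable:
        ram[addr] = write_data     # then the write takes effect
    return read_data

ram = {0: 0xAA}
dout = read_first_port(ram, 0, write_enable=True, write_data=0x55)
print(hex(dout))      # 0xaa  (previous contents)
print(hex(ram[0]))    # 0x55  (new contents after the edge)
```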

That being said, you say that you have "logic to avoid hits over the same
address", meaning you are (trying to) guarantee that you never attempt a
simultaneous read and write to the same address. In the past, we have had
some problems with the simulation models of the CoreGen RAMs; we definitely
had situations where the model would erroneously declare a read/write
conflict (i.e. there have been bugs in the simulation models in the past).
So, the question is: are you sure there is no conflict? It should be easy to
verify by looking at the waveforms (capturing all the inputs and outputs of
the Dual Port RAM in question). If you are getting an error, look at the
waveforms around this error to determine if you are indeed writing and
reading to the same address (or overlapping addresses) at the same time.
Remember, if the port widths of the two ports are not the same, then the
"same address" from the point of view of the RAM is not simply ADDRA ==
ADDRB; if portA is 16 bits wide and portB is 4 bits wide then a contention
exists when ADDRB/4 == ADDRA.
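The width-aliasing point can be sketched in a few lines (a hedged illustration using the 16-bit/4-bit widths from the example above, not Xilinx's conflict model):

```python
# Check whether a write on the 16-bit port A and a read on the 4-bit
# port B touch the same physical BRAM bits. Port B has 4x as many
# address locations, so four consecutive B addresses alias one A word.
WIDTH_A = 16
WIDTH_B = 4
RATIO = WIDTH_A // WIDTH_B   # narrow words per wide word

def ports_conflict(addra: int, addrb: int) -> bool:
    """True if A-word addra overlaps B-word addrb."""
    return addrb // RATIO == addra

# ADDRA == ADDRB is not the right test:
print(ports_conflict(5, 5))   # False: B word 5 lives inside A word 1
print(ports_conflict(1, 5))   # True:  B words 4..7 alias A word 1
```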

Avrum

"Nap" <nap@4plus.com> wrote in message
news:4ca417fe.0307040701.52e71a46@posting.google.com...
Hi guys.

I'm working on a project using a HW platform with an XC2V300FG676-5.
In my design I used Dual Ports BRAM produced by Coregen 4.2i (in Fndtn
4.2i). In simulation of the logic I faced the following problem:
I wrote data on Port#A, but when I tried to read it back using Port#B I
realized that it had been altered. Xilinx claims in Answer Record 10462
that this condition has to do with conflict resolution over the BRAM
(same read address on Port#B as write address on Port#A). The answer
they provide does not help me at all because I have a lot of complicated
logic in order to avoid hits over the same address. Does anybody know if
this condition affects only simulation?

Thanks in advance
Nap
 
Hi Lan,

you should not need to do any of this. After opening a V2PDK shell (Windows) or sourcing
v2pro_setup (Solaris) change into your project directory. After editing flow.cfg (if necessary)
type "make synth". Synplify will open (if you didn't choose to add -batch in flow.cfg) and you can
synthesize the design. After leaving synplify type "make fpga" to start the implementation
followed by "make bit" to generate the bitstream.

- Peter


Lan Nguyen wrote:

Hi,

I did the synthesis with Synplify 7.3 and got the output "top.edif".
Then I used "edif2ngd" to convert "top.edif" to "top.ngo". This is
where I got the problem. The "top.ngo" was not properly produced, so
the program stopped and popped up the error:

"ERROR:XdmHelpers:828: File "top.ngo" is not in NGD or XDB format".

I could not figure out what the problem is.

Any help would be very appreciated.

Lan

Peter Ryser <ryserp@xilinx.com> wrote in message news:<3F276433.EFFC08BE@xilinx.com>...
Lan,

while XST is not supported for the ml300_embedded_* design shipping with ML300/V2PDK 1.5, it
will work with the Verilog version but not with VHDL. However, you will have to remove
some peripherals from the system by modifying flow.cfg and changing the yes/no table at the
end of the file.

The default setup for the ml300_embedded_verilog design is the one that is part of the
ML300 ACE files, i.e. Linux will boot even if there are devices like AC97 and others that
are not directly supported by Linux.

flow.cfg is the central file for all configurations, tools, SW, peripherals, etc.

- Peter


Lan Nguyen wrote:

Hi Peter,

I've got the Developer's Kit V2PDK VP4. I wanted to run the reference
designs and test the results via the serial port. I tried and got
nothing in the HyperTerminal.

Does XST work for the synthesis? If so, what modifications do I have
to make?

(I was told that the only way is to get Synplify synthesis tool)

Thanks

Lan

Peter Ryser <ryserp@xilinx.com> wrote in message news:<3F1F1FE4.B2A8FCB1@xilinx.com>...
Yes, it does. The reference design actually comes with the MLD (Microprocessor
Library Definition) technology that allows you to automatically generate a BSP
for Linux consisting of Xilinx layer 0 and 1 drivers according to the hardware
definition (MHS). When you generate the libraries from the system_linux.xmp
project file you will get this BSP.

The BSP will also contain necessary patches to the Linux kernel to make the
design work with MontaVista Linux 3.0 (FYI: the only thing that needs to be
patched is the code for the Xilinx interrupt driver since the interrupt
controller from V2PDK and EDK are different)

- Peter


tk wrote:

Hi Peter,

I would like to ask if the reference design supports
MontaVista Linux Pro 3.0?

Thanks very much!

tk

Peter Ryser wrote:

Antti,

the EDK reference design for ML300 contains
- 1 PPC 405
- 1 PLB DDR
- 1 PLB bus with arbiter
- 1 PLB2OPB bridge
- 1 PLB BRAM controller with 32 KB BRAM attached
- 1 OPB Uart
- 2 OPB GPIO
- 1 OPB 10/100 Ethernet (interrupt driven)
- 1 OPB IIC
- 1 OPB System ACE CF

There is no touchscreen, PS/2, TFT, parallel port, or AC97. Adding these
peripherals to the design is planned for a later release that will most
likely happen towards the end of the year.

There is some documentation in the zip file that lists the peripherals and
explains the design.
Again, please contact your Xilinx FAE if you would like to get access to
this design.

Thanks,
- Peter



Antti Lukats wrote:

Peter Ryser <ryserp@xilinx.com> wrote in message
news:<3F1846C0.776CD1F5@xilinx.com>...

If you want to work with EDK please contact your FAE and ask him to get
you access to the EDK reference design for ML300. He will be able to
get you access to the design.

Hi Peter,

when we received the EDK + DDR project, I also asked to be notified
when a better EDK ref. design will be available, and so far have not
got any more info, could you please enlight us what additional cores
are available in the EDK ref. design you mentioned?

AFAIK TFT and touchscreen are not implemented (or hopefully are now?).
I am still having trouble getting EDK to work correctly using the
obsolete TFT ref. design - e.g. the display appears in stripes, 8 pixels
missing after 8 good pixels. It would be great if the problem is fixed
and a ref design is available.

antti
 
thanks Peter,

it's working :p

Peter Ryser <ryserp@xilinx.com> wrote in message news:<3F2FE7B1.3C47DABD@xilinx.com>...
 
kjaram_junk@cox.net (Ken Jaramillo) wrote in message news:<75cebfa6.0308071755.37541a3@posting.google.com>...
I'm using Quartus II version 3.0 and am having trouble meeting setup
and hold timing. This is a large PCI design in the
Cyclone 12C device. The routing I'm getting is really bad so my
setup time violations are pretty bad. I can fix the setup times by
inserting 2 LCELL buffers on the PCI clock and placing the buffers
in such a way to get a lot of clock insertion (around 8 ns). If I synthesize the
design without hold time fixing enabled then I can get Tsu and Tco
to pass (just barely). I have the PCI logic back annotated (placement
not routing). If I then synthesize while enabling hold time fixing
Quartus fixes most of the hold time violations but breaks the setup
timing even though the PCI logic is back annotated. I think Quartus
must be pretty dumb as far as fixing hold timing. If the worst
case setup time is around 15 ns (8 ns clock insertion + 7 ns PCI setup
time requirement) then those worst case paths should have no problem
with hold times. If Quartus just placed delays on the short paths
it could fix hold timing. But I suspect that it's placing the delays
around the pin and affecting both long and short paths.
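The worst-case arithmetic above works out like this (a hedged sketch; the 8 ns insertion and 7 ns PCI setup figures are from the post, the variable names are illustrative):

```python
# Worst-case input-path timing with deliberate clock insertion.
clock_insertion_ns = 8.0   # delay added via the two LCELL buffers
pci_tsu_ns = 7.0           # PCI setup requirement at the pin

# Data arrives at least pci_tsu_ns before the external clock edge,
# and the internal sampling edge is a further clock_insertion_ns
# later, so the internal path has roughly this setup budget:
effective_setup_ns = clock_insertion_ns + pci_tsu_ns
print(effective_setup_ns)  # 15.0

# A path that barely meets a 15 ns setup budget has ample hold
# margin, so adding delay to it to "fix" hold is counterproductive.
```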

Has anyone else seen this? Does anyone have any idea of how to fix hold
timing while not breaking the setup paths?

Ken Jaramillo
Hi Ken,

The Optimize Hold Time algorithm in Quartus 3.0 is smart enough to
know where to add delay to fix Th without violating Tsu, if possible.
So the behaviour you're seeing where fixing the hold violations causes
setup violations is not generally expected.

That said, there are cases where we cannot fix Th without causing
problems on Tsu.

1. Your circuit is such that the connections to which we have to add
delay to fix a Th violation are also on another, Tsu-critical
connection. If we add delay to resolve Th, we will violate Tsu. The
only solution here is to re-work your circuit.

2. A sub-case of #1 above: the only part of a timing path to which
we could add delay to meet a Th constraint at a register, without
violating some Tsu constraint, is on the last LUT->reg connection on
the Th path. If Quartus register packs this LUT and register together
into one logic cell, a dedicated routing connection is used to make
the connection from the LUT to the register, so we can't slow it down
by adding more routing. The workaround to this is to set a constraint
to force the LUT & register to be kept in separate logic cells. In
our in-house PCI core, we have to turn off register packing on two
banks of registers to allow the router the flexibility to fix the hold
time problem. The constraint we set (in the <project.esf> file) is:

AUTO_PACKED_REGISTERS_STRATIX = OFF on low_ad_or_fb[] and
high_ad_or_fb[]

If your PCI core uses similar names / has a similar structure this may
give you an idea of what to register unpack. Looking at whether or
not the registers at which you're having Tsu / Th problems were packed
with LUTs in the Quartus floorplan editor will also let you see what's
going on.

3. The same as #2, but the register and LUT were packed together by
your synthesis tool. Depending on your synthesis tool, you may be
able to tell it not to do this (but it'll be tool specific).

4. I've never tested the optimize hold time algorithm in cases where
someone has inserted logic cells to slow down the clock (this isn't a
common technique). It is possible Quartus is mis-estimating the delay
at the point where the algorithm kicks in, and hence Quartus thinks
it's OK on Tsu and Th, but is not. If you send me your circuit, I can
see if this is the case, and if necessary upgrade Quartus to model
this better.

5. You can buy Altera's PCI core. It works with Cyclone, and there
are 32-bit and 64-bit versions. You can try it out (instantiate,
simulate, place and route, timing analyze -- basically see that it
works) for free, but have to buy it to generate programming files.
See the Interfaces & Peripherals category on
http://www.altera.com/products/ip/ipm-index.html

The best way for me to give you more guidance on what went wrong and
how you can fix it is for you to send a Quartus archive of your design
to me. All such designs are treated as confidential -- we will use it
in house only as a test case to improve our software.

Regards,
Vaughn

Altera
 
As far as cost goes, it again comes down to what you are trying to achieve.
Many times, the cost of the tools could be justified easily by showing the
savings - for example, saving a speedgrade (use a slower speedgrade part) by
the use of physical synthesis has a great impact on the total cost of your
board.

Using placement constraints is okay if you are trying to meet timing on
certain small sections of the designs AND you know if these are going to be
the bottleneck. It is not feasible to do this if you have a big design that
is using 90+% of the slices/LEs etc.

As FPGAs are getting bigger and can accommodate complex designs, there are
newer physical synthesis tools in the market that have different approaches
to solving the timing closure issue. It is definitely worthwhile to check
them out.


"Christian Schneider" <cgs-news@cgschneider.com> wrote in message
news:bidt5i$83ehs$1@ID-68826.news.uni-berlin.de...
IMHO these tools are very expensive and for my needs I am better off with
placement constraints by hand than using these tools. This gives me even
better control and saves me time and money. But your needs may be
different. To be honest I don't see the benefit of e.g. Amplify which
would be worth the money.

BR Chris


Alfredo wrote:

Thanks Neeraj, I have some notion of the benefits of these tools but I
do not fully understand how to differentiate them and how to choose one
for a specific task: which one is better for IP immersion, which one
helps manage stitching blocks at the top level (modular design) better,
which one interfaces better with STA tools for timing closure, ...
I think I'll need to run some testbenches to get a good grasp on these
tools and how to better use them.

Does anyone have a book or documentation that I can read to learn more
about this subject?

The only documentation I have now is what the vendors are providing.

Thanks,

***
Alfredo.

"Neeraj Varma" <neeraj@cg-coreel.com> wrote in message
news:bidh5u$8762p$1@ID-159439.news.uni-berlin.de...

The physical synthesis and SVP tools from 3rd parties are not just mere
floorplanners. In a nutshell, Amplify and Precision Physical do something
called "placement-aware synthesis", which is the most important thing to
do to bring down the routing delays... whereas SVP tools like the one
from Hier help provide a final representation of the PLD design early in
the design stage.

Floorplanning is just one of the many features built into these tools,
though an important one, to help achieve performance goals and reduce the
compile time of the design by working closely with the P&R tools...

Feel free to correct me if I am wrong...

--Neeraj

"Alfredo" <alherrer@nortelnetworks.com> wrote in message
news:bid6ib$mfa$1@zcars0v6.ca.nortel.com...

Hi,
physical synthesis tools for FPGAs are being introduced by a few
vendors.

Have

you read, evaluated or seen a presentation about these tools:
Synplicity's Amplify:

http://www.synplicity.com/products/amplify/index.html

Mentor's Precision: http://www.mentor.com/precisionphysical/
Hier Design's Plan Ahead: http://www.hierdesign.com

Do you have any thoughts about what you would be looking for in an FPGA
floorplanner from a 3rd party (not the FPGA manufacturer)?

Do you use Altera's or Xilinx floorplanner now?

Thanks for your input,

***
Alfredo.
 
Hi Bob:

guess I'm flattered that Hal is getting to defend some of the aspects
here - what I am pointing out is that in a standard flop design, there
are two balanced P/N nodes (assuming you're building a MOS flop - very
few done otherwise anymore) that are used to jam another pair that sets
the output. The balancing that takes place will only force the output
if there is thermal noise - this is how Fairchild, Intel, and most
others build the things. Take a poke around the net for articles by
Dike and Burton - they have done more work on metastability than almost
anyone else out there, and IEEE uses their work as a standard for this
stuff. I'm not the author, just parroting what others have done.

The amount of noise is not a factor (this is thermal noise, and all
gates exhibit about 9nV per root kT) - thanks, Boltzmann - but the effects
of Miller coupling are as of yet not well understood, at least by me.

Andrew

Bob Perlman wrote:

On Sun, 31 Aug 2003 10:55:19 -0400, rickman <spamgoeshere4@yahoo.com>
wrote:



Bob Perlman wrote:


On Sat, 30 Aug 2003 23:15:18 -0400, rickman <spamgoeshere4@yahoo.com>
wrote:



Hal Murray wrote:


this has nothing to do with quantization, until you get into QED, but is
a matter of statistical thermal noise on two cells that are used to jam
the outputs of a flop. You need the noise, but that has nothing to do
with undergrad quantum mechanics. Read Peter's stuff - he's quite good
and knowledgeable.


Do I need noise? Why? I thought the normal exponential decay
was well modeled (Spice?) without noise. Perhaps you need
it if the FF is "perfectly" balanced but that has a vanishingly
small probability in the real world.


I think you are right. There is only one point on a continuous range
that will be perfectly balanced. The probability of that is in essence
the inverse of infinity which I don't know that it even has meaning.

If you require noise to shift you out of metastability, then the people
who argue that more noise will get you out quicker could then be right.


A metastable failure doesn't require that you land exactly on the
balance point. There may be only one point that keeps you in the
metastable state forever, but there's a range of points that will
delay FF settling long enough to make your design fail. The more time
you give the design to settle, the shorter that range of points is.

Accordingly, noise doesn't have to kick the FF to that perfect balance
point. It need only force you close enough that the FF output
transition is sufficiently delayed to hose over the circuit.


I don't think you understand the point. We are not saying that balance
or noise are required to demonstrate metastability. It was pointed out
that in a simulation of the effect, something would be needed to move
the FF off the balance point and noise was suggested. But in the real
world the "balance point" is so vanishing small, it would never actually
happen. That is not saying that the FF can not go metastable without
being balanced.



Whoever said, "If you require noise to shift you out of metastability,
then the people who argue that more noise will get you out quicker
could then be right," could you explain further? Are you saying that
noise is required to resolve the metastable state, or is this a
counter-argument to the "noise may get you out faster" claim? Or is
it something else entirely?

Bob Perlman
Cambrian Design Works
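The settle-time argument in this thread is usually quantified with the standard two-parameter MTBF model (a sketch with made-up constants; tau and T0 must be measured for a real device):

```python
import math

# Textbook metastability model (generic, not vendor data):
#   MTBF = exp(t_r / tau) / (T0 * f_clk * f_data)
#   tau    -- regeneration time constant of the flip-flop
#   T0     -- metastability aperture parameter
#   t_r    -- resolution time allowed before the output is sampled
def mtbf_seconds(t_r, tau, T0, f_clk, f_data):
    return math.exp(t_r / tau) / (T0 * f_clk * f_data)

# Illustrative numbers only: each extra tau of settling time
# multiplies the MTBF by e, which is the "more time to settle"
# point made above.
m1 = mtbf_seconds(t_r=2e-9, tau=0.1e-9, T0=1e-10, f_clk=100e6, f_data=1e6)
m2 = mtbf_seconds(t_r=3e-9, tau=0.1e-9, T0=1e-10, f_clk=100e6, f_data=1e6)
print(m2 / m1)  # about e**10, i.e. roughly 22000x better
```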
 
Andrew -

Thanks for the reply. The Burton/Dike 1999 paper on Miller Effect
sounds interesting; I'll try to get a copy.

Bob Perlman

On Sun, 31 Aug 2003 22:31:39 -0500, Andrew Paule <lsboogy@qwest.net>
wrote:

 
Electron spin has all the same measurement issues that a FF has. If the
state of the electron spin is changing as the measurement is made, then
what state is it in? What will be the result of the measurement?
Rick, the electron spin is +1/2 or -1/2, there is no in between state,
it changes instantaneously (in one fundamental clock tick, ~10^-43
seconds).

Luiz Carlos
 
Hi Austin.

First, as Andrey pointed out, I took the wrong table.
The best values are for GTL:
Vout = low, if Vin <= Vref - 0.05
Vout = high, if Vin >= Vref + 0.05

There is some small offset voltage from the mismatch between the
differential pairs (both nmos and pmos to cover the voltage range). I
do not know what this offset might be, but I suspect it is less than a
few tens of millivolts, worst case from the transistor models.

The comparator will switch as soon as the voltage is greater than the
offset (we spec 100 mV for speed reasons, not because it needs > 100 mV
to function).

So with 50 mV it will switch, just more slowly than if it was 100 mV.
I really would like to know what this offset is and to have a speed
versus offset formula.

But, let's suppose we didn't reach the offset value. If we range Vin
from Vref-Voffset to Vref+Voffset, I think Vout will range from Vlow
to Vhigh monotonically, maybe almost linearly. Am I right?

Now, if we sample Vout with the input data flip-flop, FF DOUT will be
0 or 1 (forget about metastability for now). Can I say there is Vthr
where: if Vout>Vthr then DOUT=1 and if Vout<Vthr then DOUT=0? What
does happen when we feed a data flip-flop with an analog signal?
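The picture of Vout moving monotonically between the rails matches a simple saturating-gain model (a sketch with invented gain, rails, and offset values, not device data):

```python
# Toy saturating-gain model of an input comparator (illustrative
# only -- the gain, rails, and offset below are invented numbers).
V_LOW, V_HIGH = 0.0, 1.8   # assumed output rails
GAIN = 200.0               # assumed small-signal gain
V_OFFSET = 0.02            # assumed input-referred offset, 20 mV

def vout(vin, vref):
    """Clip the linear region to the rails: monotonic in vin."""
    mid = (V_LOW + V_HIGH) / 2
    v = mid + GAIN * (vin - vref - V_OFFSET)
    return max(V_LOW, min(V_HIGH, v))

# Within roughly +/-(V_HIGH - V_LOW)/(2*GAIN) of the trip point the
# output is analog; outside that band it saturates to a clean level.
print(vout(1.00, vref=0.80))  # well above threshold -> 1.8
print(vout(0.60, vref=0.80))  # well below threshold -> 0.0
```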

I also would like to know if when we define an input as LVTTL (for
example), the same input comparator is used (Vref connected to an
internal reference), or if it is bypassed.

Luiz Carlos
 
Consider using... an analog comparator!
Yes John.
But, if I can have one for free, why not use it?
Even if it doesn't fit my needs, the knowledge remains.

Luiz Carlos
 
Luiz,

Last things first, the LVTTL input does not use the comparator(s) (there are
three different comparators, as well as other circuits for the various input
standards).

The comparator is designed to have a relatively high gain, so that it
switches quickly.

As I said, the offset voltage is due to the Vt mismatch on the pmos and nmos
diff pairs, and since these are built with .35u (VII) or .25u (VII Pro)
transistors, they are pretty darn fast diff-amps. There is a classic gain
stage after the cmos diff-amp (similar to the ones in "CMOS Circuit Design,
Layout & Simulation" by Baker, Li, and Boyce). The offset voltage is
typically less than a few 10's of mV (say 10 to 20 mV worst case). I am sure
that if you vary the voltage difference slowly enough, you could measure the
gain of the diff-amp. It was designed for HSTL and SSTL IO standards, which
as someone already pointed out, are pretty sloppy. What I will point out
here, is that I am not aware of any monolithic separate comparator that is as
fast as the one that is in the input circuit. This comparator is good for
400 Mb/s+ speeds, which is a lot faster than most separate IC comparators....

Austin

Luiz Carlos wrote:

 
John,

Uh, it is an analog comparator... just one optimized for HSTL and SSTL inputs.

Comments about noise are well put, and need to be considered if the comparator
is on the same chip, on the same board.

But to imply that we could somehow be sloppy, and design a crummy comparator is
a bit unfair, the comparator is probably much better than any single device you
could name, it is just that we did not characterize any more than we had to.

Austin

John_H wrote:

"Luiz Carlos" <oen_br@yahoo.com.br> wrote in message
news:8471ba54.0309020217.567a3f03@posting.google.com...
Peter, Austin!
Nobody can help me?

Luiz Carlos

Maybe they can shed some light, but using a digital circuit for analog
functions is like trying to use a car as a tractor. It might work but it
sure as heck wasn't designed to plow fields - you're bound to have problems.

Since digital logic has fixed thresholds and large noise margins (difference
beteween Vih and Vil) there's no need for the designers to be detailed about
keeping the internal noise to the sub-millivolt level when there's so much
activity in the adjacent I/O cells or internal logic.

The Vref pins on the bank are designed for I/O signalling where the
threshold does not change, so changing this value dynamically and
dramatically can have unforeseen effects.

LVDS signalling produces the best differential capability, allowing a
dynamic "Vref" for your doomed analog comparator in the digital device but
the noise margin for LVDS is still a rather large value.

If you put nothing else in the FPGA, I imagine you could get good noise-free
results with a consistent transition (though subject to an offset voltage in
the many 10s of millivolts). My guess is you want more than just the analog
comparator in there.

Consider using... an analog comparator!

- John_H
 
Silly question: I don't see why an ANALOG flip-flop couldn't determine
that it is in the intermediate state at some fixed interval after the
clock, and then force the flop one way or another. Of course, it
might double-glitch in the meantime (flop goes up before logic forces
it down), but it would make a flop with a fixed-maximum metastable
interval.
DING DING DING

Nobody has been able to fix metastability yet. If you really have
a fix, it's worth a Nobel Prize.

The usual problem with that sort of approach is that you get
a runt pulse on the fix-it signal in cases that would have worked
correctly without it.

--
The suespammers.org mail server is located in California. So are all my
other mailboxes. Please do not send unsolicited bulk e-mail or unsolicited
commercial e-mail to my suespammers.org address or any of my other addresses.
These are my opinions, not necessarily my employer's. I hate spam.
 
In article <3F559A3F.91184B1A@yahoo.com>,
rickman <spamgoeshere4@yahoo.com> wrote:
There is no way to determine when a circuit is metastable or not. Async
circuits are not magic, they just depend on predictable delays, just
like any other circuit. The design is different because there is no
common clock so each circuit can run with its own delay. Since the next
circuit will take the output when it is ready, there is no problem with
synchronization.
Silly question: I don't see why an ANALOG flip-flop couldn't determine
that it is in the intermediate state at some fixed interval after the
clock, and then force the flop one way or another. Of course, it
might double-glitch in the meantime (flop goes up before logic forces
it down), but it would make a flop with a fixed-maximum metastable
interval.

Of course, it seems like such a massive headache to do.

One problem I do see is that if each stage has a different delay, then
it can not accept a new input from the preceding stage until the output
has been taken by the following stage. It seems in the end the async
circuit will run no faster than the slowest stage, which is what the
sync clocked circuit will do.
The trick with asynchronous logic is that the longest stage WHICH IS
IN THE COMPUTATION is the critical path. EG, if 9 of the stages take
1 ns, and the last takes 2ns, but the last only affects 1/2 the data,
with the asynchronous circuitry doing the shortcutting, it will be
considerably faster than the synchronous one.
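The arithmetic behind that claim can be sketched with a toy latency model. The stage delays and the 50/50 split are the hypothetical numbers from the paragraph above, and the model ignores the handshaking overhead mentioned later in the thread:

```python
import random

random.seed(0)

STAGE_NS = [1.0] * 9   # nine fast stages at 1 ns each
SLOW_LAST = 2.0        # last stage, worst case
FAST_LAST = 1.0        # last stage when the slow path isn't exercised

def async_latency(uses_slow_path):
    # Asynchronous: each datum completes as soon as its own path is done.
    return sum(STAGE_NS) + (SLOW_LAST if uses_slow_path else FAST_LAST)

def sync_latency():
    # Synchronous: every stage waits for the worst-case stage delay.
    return 10 * max(STAGE_NS + [SLOW_LAST])

# Half the data exercises the slow path:
items = [random.random() < 0.5 for _ in range(10000)]
avg_async = sum(async_latency(x) for x in items) / len(items)
print(avg_async, sync_latency())   # ~10.5 ns vs 20.0 ns
```

In this idealized model the average-case shortcutting wins by nearly 2x; real asynchronous designs give some of that back to completion detection and handshaking, which is part of why the promise has been hard to realize.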

The problem is: You can automatically balance the pipelining
(retiming) for a synchronous circuit, while asynchronous logic really plays
havoc with the CAD flow and testing.
--
Nicholas C. Weaver nweaver@cs.berkeley.edu
 
In article <vlc0kv1m4flm5f@corp.supernews.com>,
Hal Murray <hmurray@suespammers.org> wrote:
Silly question: I don't see why an ANALOG flip-flop couldn't determine
that it is in the intermediate state at some fixed interval after the
clock, and then force the flop one way or another. Of course, it
might double-glitch in the meantime (flop goes up before logic forces
it down), but it would make a flop with a fixed-maximum metastable
interval.

DING DING DING

Nobody has been able to fix metastability yet. If you really have
a fix, it's worth a Nobel Prize.

The usual problem with that sort of approach is that you get
a runt pulse on the fix-it signal in cases that would have worked
correctly without it.
Yes, but the point is it would fix the maximum metastability window,
which is the key requirement, as "did the data come before or after
the clock edge" is really an irrelevant question at the metastable
capture point; you just want it to go to ONE or the other (not stick
around and make up its mind a half clock cycle later), and possibly to
tell you.

This isn't FIXING metastability; this is detecting and responding to
it, changing an exponentially decaying bound into a fixed bound.
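The "exponentially decaying bound" is the classic synchronizer MTBF relation: the probability of remaining unresolved shrinks exponentially with the settling time allowed, but never reaches zero, so there is no hard bound. A sketch, with illustrative process parameters (tau, T0) rather than any vendor's data:

```python
import math

def mtbf_seconds(t_resolve_s, tau_s=50e-12, t0_s=1e-9,
                 f_clk=100e6, f_data=10e6):
    """Classic synchronizer model: MTBF = e^(t/tau) / (T0 * f_clk * f_data).
    tau and T0 are illustrative process parameters, not measured values."""
    return math.exp(t_resolve_s / tau_s) / (t0_s * f_clk * f_data)

# Each extra nanosecond of settling time multiplies MTBF by e^(1ns / 50ps):
print(mtbf_seconds(2e-9))   # with 2 ns of resolution time
print(mtbf_seconds(5e-9))   # with 5 ns: astronomically larger
```

The detect-and-force scheme trades this ever-improving-but-unbounded behaviour for a fixed worst-case resolution time, at the cost of occasionally forcing an answer (and possibly a runt pulse) where waiting would have resolved correctly.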

Of course, it seems like way too big a headache to bother with.
--
Nicholas C. Weaver nweaver@cs.berkeley.edu
 
The problem is with your three level "analog ff", instead of one metastable
region, you now have two, one at each side of your middle state. All it
accomplishes is a redistribution of the metastable state at a cost of
considerably more complexity. It doesn't fix anything at all. As rickman and
others have stated, it comes down to a fundamental limitation in measuring a
quantity. Measurement takes a finite amount of time to do. If the transition
happens within that measurement window, you have a metastable event, which is
to say the measurement was indeterminate. Works for digital logic, works for
electron spins and so on. If you really believe you have a work-around, then
you should publish it. Be prepared to nurse your wounds, though. The path
to the holy metastability grail is littered with bloodied bodies, many of whom
have followed the same trail you are considering.

"Nicholas C. Weaver" wrote:

In article <3F56292C.E9F89B7F@yahoo.com>,
rickman <spamgoeshere4@yahoo.com> wrote:
"Nicholas C. Weaver" wrote:
If you don't see the problem it is because you are not looking hard
enough. The issue with metastability comes from trying to measure the
state of a FF or other digital voltage. When a signal is at an
intermediate value a measurement can be inconclusive. Your ANALOG FF
can be just as inconclusive as the digital FF. Besides, the result is
always digital (or more accurately, discrete instead of continuous).

You are suggesting that you add a third state to the measurement, but
you get the same inconclusive measurement between the metastable state
and either the one or the zero states. The result is that the output of
your ANALOG FF would be indeterminate which could result in
metastability in the next stage.

If I am not grasping your idea, then please provide more details.

The flip flop core exists in one of THREE states: Vdd, Vss (the stable
points) and Vms +/- epsilon (the metastable range, in between Vdd and
Vss).

The analog circuitry in the flip flop measures the flip-flop state at
Tdelay after the clock edge.

If it is within Vms +/- a large epsilon (that is, metastable at this
point in time), the analog circuitry forces the flip-flop to Vss, and
also signals that a metastable capture/correction was performed.

This may cause a spurious transition (e.g., the metastable state is
measured, it goes high, and then the post-measurement kick drags it
back down to Vss).
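That three-state scheme can be made concrete with a toy model: the latch node diverges exponentially from Vms, and at Tdelay a kicker circuit forces any node still inside the window to Vss and raises a flag. All voltages and time constants below are made-up illustrative numbers:

```python
import math
import random

VDD, VSS, VMS = 1.0, 0.0, 0.5   # stable rails and the metastable point
TAU = 50e-12                    # regeneration time constant (illustrative)
EPS = 0.05                      # detection window around Vms

def node_voltage(v0, t):
    # Near Vms the latch diverges exponentially toward whichever rail
    # the initial perturbation favors, then clamps at the rails.
    dv = (v0 - VMS) * math.exp(t / TAU)
    return min(VDD, max(VSS, VMS + dv))

def sample_with_kicker(v0, t_delay=200e-12):
    """Returns (resolved voltage, metastable-event flag)."""
    v = node_voltage(v0, t_delay)
    if abs(v - VMS) < EPS:               # still metastable at Tdelay:
        return VSS, True                 # force low and flag the event
    return (VDD if v > VMS else VSS), False

# Captures that start very close to Vms mostly get forced and flagged:
random.seed(1)
flagged = sum(sample_with_kicker(VMS + random.uniform(-1e-3, 1e-3))[1]
              for _ in range(1000))
```

As Ray points out above, this does not remove the problem: the comparison against the Vms ± epsilon window is itself a measurement with its own two decision boundaries, so the indeterminacy is redistributed, not eliminated.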

The trick with asynchronous logic is that the longest stage WHICH IS
IN THE COMPUTATION is the critical path. EG, if 9 of the stages take
1 ns, and the last takes 2ns, but the last only affects 1/2 the data,
with the asynchronous circuitry doing the shortcutting, it will be
considerably faster than the synchronous one.

Except that sync designs can do the same thing. A multiply accumulate
typically takes twice as long as the other ops in a calculation. So
they split it into two stages running at full speed. Or if only half
the data needs the MAC, then they can do nothing since this will run at
full speed.

The promise of the asynchronous is that this can occur on a much finer
grain, eg if all the ops are 1 ns, but the final one is either 1 or
1.5 ns.

Of course, it has never really lived up to this promise, mostly because
the handshaking overhead can be severe, as well as the other problems
(CAD, testing).
--
Nicholas C. Weaver nweaver@cs.berkeley.edu
--
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930 Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com

"They that give up essential liberty to obtain a little
temporary safety deserve neither liberty nor safety."
-Benjamin Franklin, 1759
 
Rider,

When configuring the FPGA via JTAG, iMPACT will provide TCK. When
configuring the FPGA via PROM with Master Serial Setup, CCLK is provided
by the FPGA.

At the end of configuration, the FPGA will enter the startup sequence.
Treat this sequence as a state machine that the FPGA needs to go through
before "waking up". This sequence is where you can set the options of
when you want the DONE pin to go high, the IO tri-state to be released,
etc.

To get through this startup sequence, you'll need to provide clocks.
Generally, when configuring via JTAG, you would select startup clock to
be JTAGCLK since you're already providing TCK. So when configuring with
PROM, you would want to set startup clock to CCLK unless you'll be
separately providing TCK (JTAGCLK) or even a user clock to clock through
the startup sequence.

And as for PROM file generation or ACE file generation, the iMPACT help
topic has been vastly improved in 6.1i. If anything isn't clear, please
do contact the Xilinx Hotline Support.

As for support for Platform Flash PROMs, please make sure you use 5.2i
Service Pack 3 for file generation.

Regards, Wei
Xilinx Applications

rider wrote:
Hi!
Thanks to the group for replying to my previous query, "Xilinx Parallel Cable
4 (PC4) and Platform Flash JTAG", especially to Antti, Lorenzo and
Aurelian Lazarut. Continuing with the same topic of configuration, I
have a few more queries:

1)In the Xilinx's latest document "Configuration Quick Start
Guidelines" http://www.xilinx.com/bvdocs/appnotes/xapp501.pdf page 13,
the author shows a screenshot from the iMPACT software (Fig. 9: Startup
options for Virtex and Spartan-II). The author states:

"Start-Up Clock – The bitstream must be generated with the appropriate
startup clock option for the PART to be configured properly. The
"Start-Up Clock" option by default is set to "CCLK" for Master Serial
Mode. When generating a bitstream for Boundary Scan (JTAG) Mode the
option must be set to "JTAGCLK" in the pull-down menu of the GUI or
using bitgen's command line:
• For configuring using Boundary Scan (JTAG):
bitgen –g startupclk:jtagclk designName.ncd
• For configuring via Master-Serial:
bitgen –g startupclk:cclk designName.ncd"

My question is: when she talks of Master Serial Mode and CCLK, does she
mean she is creating a file for the PROM only [the PART is the PROM here],
because Master Serial mode requires a PROM? The file cannot be loaded
directly into the FPGA. And when she talks of JTAG and jtagclk, the PART
could be either the PROM or the FPGA? Am I right?

2) I have Xilinx ISE 5.1; does it support the configuration of the latest
Xilinx Platform Flash PROM XCF02S via JTAG?

Thanks
 
Lorenzo Lutti wrote:
Yes! But beware of the iMPACT user interface, which is the worst
nightmare ever invented. It took me more than ten minutes to understand
how to use a PROM with iMPACT...
Lorenzo, you must be pretty smart if you can solve the "worst nightmare
ever invented" in a mere ten minutes... :)
Peter Alfke
 
Hi all, I am new to the group. I am an Italian computer science student,
and I have electronics as a hobby too... so I have used PIC and ST6/7
microcontrollers, etc. Now my dream is to develop some circuits with an
FPGA (or similar) and the VHDL language. I have studied VHDL only a little
(and only theoretically) at my university, but now I would REALLY like to
program some chips and develop some simple and medium-sized projects.
I have no money (and I don't want to spend any :) ) to buy an original
development system, so I would like to home-build a free "programmer"
(in-circuit JTAG?), as I have done in the past for PIC / ST6/7
programmers :)


You'd be better off on comp.arch.fpga, for the actual hardware
questions - I've crossposted to there and set the followups to go
there also.
OK, now I am here :))

Regarding programming hardware, Altera have the Byteblaster schematics
downloadable from their site, in the ByteBlaster datasheet. I can't
recall if Xilinx has anything similar.
OK, thanks. I have found the ByteBlasterMV datasheet
(http://www.altera.com/literature/ds/dsbytemv.pdf); is that the correct one?
I have also found a PCB and more at http://c.combaret.free.fr/projects.html,
which I think may interest somebody... and I have found an old article,
"Build your own ByteBlaster", from an old mirror of the "FreeCore Library" at
http://opencollector.org/history/freecore/What%20Altera%20didn't%20tell%20you.htm

But I am also interested in Xilinx CPLDs and FPGAs, so does somebody know if
a similar free programmer exists for Xilinx chips?

Thank you very much ALL

Thank you very much, all my friends, and sorry for my very bad and poor
English :)


It's better than my Italian!
ahahahah :)

ciao ciao :)
Cheers,
Martin

--
martin.j.thompson@trw.com
TRW Conekt, Solihull, UK
http://www.trw.com/conekt
 
