EDK : FSL macros defined by Xilinx are wrong

On Mon, 27 Mar 2006 22:35:31 +0200, "Antti Lukats"
<antti@openchip.org> wrote:

"John Larkin" <jjlarkin@highNOTlandTHIStechnologyPART.com> wrote in message news:6jgg221p6iuffrbbb6dtml39fn3u9sdu4k@4ax.com...
We have a perfect-storm clock problem. A stock 16 MHz crystal
oscillator drives a CPU and two Spartan3 FPGAs. The chips are arranged
linearly in that order (xo, cpu, Fpga1, Fpga2), spaced about 1.5"
apart. The clock trace is 8 mils wide, mostly on layer 6 of the board,
the bottom layer. We did put footprints for a series RC at the end (at
Fpga2) as terminators, just in case.

Now it gets nasty: for other reasons, the ground plane was moved to
layer 5, so we have about 7 mils of dielectric under the clock
microstrip, which calcs to roughly 60 ohms. Add the chips, a couple of
tiny stubs, and a couple of vias, and we're at 50 ohms, or likely
less.

And the crystal oscillator turns out to be both fast and weak. On its
rise, it puts a step into the line of about 1.2 volts in well under 1
ns, and doesn't drive to the Vcc rail until many ns later. At Fpga1,
the clock has a nasty flat spot on its rising edge, just about halfway
up. And it screws up, of course. The last FPGA, at the termination, is
fine, and the CPU is ancient 99-micron technology or something and
couldn't care less.

Adding termination at Fpga2 helps a little, but Fpga1 still glitches
now and then. If it's not truly double-clocking then the noise margin
must be zilch during the plateau, and the termination can't help that.

One fix is to replace the xo with something slower, or kluge a series
inductor, 150 nH works, just at the xo output pin, to slow the rise.
Unappealing, as some boards are in the field; they tested fine, but we're
concerned they may be marginal.

So we want to deglitch the clock edges *in* the FPGAs, so we can just
send the customers an upgrade rom chip, and not have to kluge any
boards.

Some ideas:

1. Use the DCM to multiply the clock by, say, 8. Run the 16 MHz clock
as data through a dual-rank d-flop resynchronizer, clocked at 128 MHz
maybe, and use the second flop's output as the new clock source. A
Xilinx fae claims this won't work. As far as we can interpret his
English, the DCM is not a true PLL (ok, then what is it?) and will
propagate the glitches, too. He claims there *is* no solution inside
the chip.

2. Run the clock in as a regular logic pin. That drives a delay chain,
a string of buffers, maybe 4 or 5 ns worth; call the input and output
of the string A and B. Next, set up an RS flipflop; set it if A and B
are both high, and clear it if both are low. Drive the new clock net
from that flop. Maybe include a midpoint tap or two in the logic, just
for fun.

3. Program the clock logic threshold to be lower. It's not clear to us
if this is possible without changing Vccio on the FPGAs. Marginal at
best.
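Option #2 above could be sketched roughly like this (a behavioral sketch only; the delayed assignments stand in for a constrained buffer chain, and all names are illustrative):

```verilog
module deglitch (
    input  wire clk_in,    // noisy 16 MHz clock from the pin
    output reg  clk_clean  // deglitched internal clock net
);
    wire a = clk_in;   // input end of the delay string
    wire m;            // midpoint tap, ~2.5 ns in
    wire b;            // far end, ~5 ns in

    // Simulation stand-ins for the buffer string.
    assign #2.5 m = a;
    assign #2.5 b = m;

    // RS behavior: set when all taps are high, clear when all taps
    // are low, hold otherwise -- so a flat spot halfway up the edge
    // cannot double-clock the internal net.
    always @* begin
        if (a & m & b)
            clk_clean = 1'b1;
        else if (~a & ~m & ~b)
            clk_clean = 1'b0;
        // otherwise hold the previous value
    end
endmodule
```

In real fabric the #2.5 delays would be a string of buffers or LUTs carrying KEEP/syn_keep attributes so the tools don't collapse them, and the RS function would map to cross-coupled LUTs.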


Any other thoughts/ideas? Has anybody else fixed clock glitches inside
an FPGA?

John


You can run a genlocked NCO clocked from an in-fabric on-chip oscillator. Your
internal recovered clock will have jitter of +/-1 clock period of the ring
oscillator (which could be as high as about 370 MHz in Spartan-3). You might
need some sync logic to ensure the 16 MHz clock edges are only used to
adjust the NCO.
Nice idea. But I do need the 16 MHz to be long-term correct, although
duty cycle and edges could jitter a bit and not cause problems. So I
could build an internal ring oscillator and use that to resync the
incoming 16 MHz clock (dual-rank d-flops again) on the theory that the
input glitches will never last anything like the 300-ish MHz resync
clock period. And that's even easier.

Thanks for the input,

John
 
I have been playing around with opensparc some using xst. I did manage
to get a build without errors, but since I did not have a clock defined
everything got optimized away. So no gate counts. One thing I did
learn is not to import the files using Project Navigator - it just locks
up.

A build using an xst script worked. There was one file that had a
function defined that was causing xst to fail. However when I removed
it I got no more complaints.

It may not be possible to synthesize with 8 cores, but I thought I saw
something in the docs about being able to specify fewer cores.

I will probably play around with it some more as free time allows and
see how far I can get with it.




Shyam wrote:
Allan Herriman wrote:

It appears openSparc Verilog is written to target an ASIC, not an
FPGA. Whilst it might be possible to get it to compile and even fit
into an FPGA, the performance would probably not be stunning.

In that sense, a different soft-cpu designed to be used on an FPGA
would probably be better.

It's interesting to see "SoftCores for Multicore FPGA
implementations" listed as an example research area that can be
explored with OpenSPARC technology, at
http://opensparc.sunsource.net/nonav/research.html. Not sure what
area/delay/power one would end up with if this core is implemented on
an FPGA as is. Perhaps certain enhancements/simplifications may be
carried out to the present core in order to make it useful within an
FPGA. Since they have released a variety of hardware/software tools
(and their sources) I guess it becomes possible to study the
performance impact of any architectural modifications.

Has anybody already started working on implementing this on an FPGA? I
would be very interested to know the results. I want to try to do this
but am presently hampered because I don't have Synopsys DC (which is
the recommended synthesis environment) appropriately set up.

Thanks,
Shyam
 
Thank you!
That actually makes sense - there's no need to keep multiple clock
signals and then your sensitivity list shrinks down to two signals:
process(reset, clk)




mk wrote:
On 27 Mar 2006 13:21:20 -0800, "bobrics" <bobrics@gmail.com> wrote:

...
MY_PROCESS: process(reset, clk, state)
begin
...
elsif (rising_edge(clk) and state = 0) then


Remove state from the sensitivity list and from the elsif expression;
you don't need the state dependency. Move the state = 0 test into an
if statement inside the elsif branch.
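Applied to the snippet above, the fix would look something like this (a sketch; the reset branch and the body of the state machine are assumed):

```vhdl
MY_PROCESS: process(reset, clk)   -- state removed from the list
begin
    if reset = '1' then
        -- asynchronous reset actions here
    elsif rising_edge(clk) then
        if state = 0 then
            -- what used to be guarded inside the elsif condition
        end if;
    end if;
end process;
```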

HTH.
 
dumak23@yahoo.com wrote:
Hi all,

I'm currently verifying an OPB master i/f using IBM's OPB monitor. I'm
currently getting an error 1.11.3, which says I didn't increment the
ABus correctly during seqAddr bus access. The particular case I'm
looking at is this: ABus = 32'h00000E69, and BE = 4'b0111, using
sequential address, byte-enable transfer, and xfer_size is "byte." The
slave I wrote into is a full-word device that supports byte-enable. So,
there shouldn't be any need for conversion cycles, and so my next ABus
is 0x00000E6C. But apparently the OPB monitor flags this as an error. I
think it expects the ABus to be incremented only by 1, since the
previous xfer_size is byte. I think this is correct _if_ I'm using the
basic dynamic sizing and _not_ the byte-enable architecture.

Digging into the OPB monitor code, it seems that the process that
checks for this particular scenario only checks for the xfer_sizes and
xfer_acks - there are no references to byte-enable signals. The OPB
monitor version I have is 2.0.1, and it seems to be the latest version
that you can get from Xilinx.

Any help / input is greatly appreciated.

Thanks,

dumak23

Hi dumak,

Sorry I cannot help you, but I have some questions since you are doing
the thing that I want to do by myself. I have about the same problems.
I am exploring OPB Master capabilities designed with EDK's Create -
Import peripheral wizard. I inserted a ChipScope IBA/OPB core into my
design to monitor the OPB bus. The problem is the signal names in
ChipScope Analyzer - there are 80 of them with NO names attached. How
to correctly assign signal names (or should I make a custom core with
ChipScope Core Generator)?

Cheers, Guru
 
On 28 Mar 2006 01:16:39 -0800, "Guru" <ales.gorkic@email.si> wrote:

...

Sorry I cannot help you, but I have some questions since you are doing
the thing that I want to do by myself. I have about the same problems.
I am exploring OPB Master capabilities designed with EDK's Create -
Import peripheral wizard. I inserted a ChipScope IBA/OPB core into my
design to monitor the OPB bus. The problem is the signal names in
ChipScope Analyzer - there are 80 of them with NO names attached. How
to correctly assign signal names (or should I make a custom core with
ChipScope Core Generator)?

Look in a directory with a name of the sort
"implementation\chipscope_opb_iba_0_wrapper"; there you will see a .cdc
file. This is the file to load from ChipScope Analyzer to get the
signal name assignments.

Best luck,

Zara
 
I haven't tried this, but I know that the COREgen system creates new
schematic symbols even when the I/O ports don't change. I would
recommend looking at the file list for your COREgen modules and
figuring out which one is the schematic symbol (it should be easy to
find if you edit it after generating the core - just look by date).
Make a copy of this file before you re-generate, and restore it
afterwards. I don't think you should have a problem as long as the
port names and sizes don't change...

PeterC wrote:
Hi All,

My design flow includes a top-level schematic into which I place
symbols which represent lower-level modules, including coregen cores
(ISE7.1).

I have modified the schematic symbols for things like adders,
multipliers etc, to look like symbols typically found in arithmetic
texts.

Now, when one of the coregen cores' parameters is changed (say a bus
width) and the core is re-generated, the symbol for that core in my
schematic reverts to the nasty rectangle.

How can I avoid this, i.e. how do I avoid having to re-draw my symbols
every time I change core parameters or add a port (say an ACLR)?

Hope someone can help - seems like this should be possible.

Cheers,
PeterC.
 
John-

a;
reg [7:0] b [31:0];
reg [2:0] bit;
integer i;
always @*
for( i=0; i<32; i=i+1 )
a = b[bit];

Well... I need a to be a wire. I guess there may not be a way to take
advantage of for-loops with assign statements.

-Jeff
 
Hi Guru,

I think you have to use the FPGA editor and extract the names of the
signals.

- dumak23
 
On Mon, 27 Mar 2006 16:48:20 -0800, "Symon" <symon_brewer@hotmail.com>
wrote:

"John Larkin" <jjlarkin@highNOTlandTHIStechnologyPART.com> wrote in message


The unbonded pad thing sounds slick. I argued to use a real pin
in-and-out as the delay element, but certain stingy engineers around
here are unwilling to give up one of their two available test points.

John

I sent you some stuff from my Hotmail account. If your spam filter blocks
it, let me know.
Best, Syms.
No, I got it, thanks hugely. That raises all sorts of possibilities
for the future, what with pad capacitance, tristate drivers, strong
and weak pullups, LVDS, all sorts of possible tricks.

John
 
On Tue, 28 Mar 2006 11:37:44 +1200, Jim Granville
<no.spam@designtools.co.nz> wrote:

John Larkin wrote:
On Tue, 28 Mar 2006 08:55:50 +1200, Jim Granville
Enable the schmitt option on the pin :)


Don't I wish! There is a programmable delay element in the IO block,
but it's probably a string of inverters, not an honest R-C delay, so
it likely can't be used to lowpass the edge. We're not sure.

I wish they'd tell us a little more about the actual electrical
behavior of the i/o bits. I mean, Altera and Actel and everybody else
has snooped all this out already.


Since the issue is 'local', I'd fix it locally, and 2. sounds
preferable. You know the CLK freq, so can choose the delay banding.


That's looking promising; we're testing that one now. Gotta figure how
many cells it takes to delay 5 ns. (We'll just xor the ends and bring
that out to a test point.)

Yes, your main challenge will then be to persuade the tools to keep
your delay elements...
What is the pin-delay on the part - you could use that feature,
enable it on your pin, drive another nearby pin(s) (non bonded?)
and then use those as the S/R time-shutters.
-jg

What's working now is my option #2, delay line with ANDs driving an
r-s flop. The delay is eight internal buffers, giving us about 10 ns.
We're using the midpoint tap, 5ns out, in the logic too, just in case.
We ran tests on two boards, overnight, and it seems to work fine.

We don't care about absolute delay or jitter here; if we did, we'd
probably use Peter's circuit, which propagates clock edges but
suppresses additional transitions for a while. I shoulda thunk of that.

Thanks to everybody for some great ideas. Several of the suggestions
(internal ring oscillator, using unbonded pads, Peter's double-edge
circuit) are good to know about.


John
 
Subhasri krishnan wrote:
The design that I am working on is becoming increasingly difficult to
debug and so I am trying to use chipscope pro. I tried some basic
modules to familiarize myself with the tool and I have some basic
questions about it. Can I capture more than 16384 samples? Can I
capture more samples on a device with more gates, e.g. an XC3S1500
vs. an XC3S200?
The number of samples is limited by how many free RAMs you have in your
device. I believe that the ChipScope docs make that point perfectly
clear.

I do not understand the
concept of trigger ports. Suppose the whole design is based on the
clock input to the fpga and the fpga is used as a memory controller,
then what are the trigger ports?
You trigger on whatever signals interest you. You might want to
trigger when the memory controller writes to a particular address.

And should the output be triggered?
If you want to see what happens when an output changes to a particular
state, sure.

I understand these are very basic questions, and I tried to use Google
and the user guide to find the answers. If there are any basic
documents that I can refer to, please point me to them.
Try reading Agilent's guides to using logic analyzers.

-a
 
John-

a;
reg [7:0] b [31:0];
reg [2:0] bit;
integer i;
always @*
for( i=0; i<32; i=i+1 )
a = b[bit];

A doesn't need to be declared a wire to be a combinatorial value. Because
the always block is a combinatorial block, the reg value is a combinatorial
result, not implemented as a flip-flop or "register" primitive. The always
constructs need reg-declared variables to work.

Ok, got it. I suppose I can think of it as a latch, always enabled.
But to let you know XST doesn't like the "*". To synthesize, I had to
use:

always begin
for (i=0; i<32; i=i+1)
a = b[bit];
end

Is it equivalent? XST complains that b and bit are missing from the
sensitivity list.

-Jeff
 
John-

Many thanks.

The "always @*" or "always @(*)" equivalent construct is a Verilog2001
catchall for the sensitivity list. If you don't choose Verilog2001 support
(it's optional in the Synplify compiler) or if XST 7.1.04i doesn't support
that construct, you would do best to include the full sensitivity list -
"always @(b or bit)"
XST burps up "Unexpected event in always block sensitivity list." with
that syntax.

Even going sans list, the result is still not equivalent to the lengthy
assign statement. XST changes routing enough to add 1 nsec on a local
clock net that I use as my canary.

Also - since the "for" statement is a single statement, the begin/end
constructs are superfluous; they do no harm but they add nothing. It's only
when there are multiple lines that the begin/end are needed.
Ya know. Sorry... I've been trying so many darn things I kept 'em
there for experimenting.

-Jeff
 
The 8.1i version of webpack includes core generator. (don't know about
MIG)
I got burnt with a space in the install path name - core generator
wouldn't run. Well, that's not quite true: core gen would run, but nothing
would get generated, or I got a particular error message. It was
covered by a Xilinx answer record. I posted the link to it in a recent
post to this newsgroup.

Regards
Andrew

maxascent wrote:
I am using ISE 8.1.02i webedition and have just installed the MIG 1.5
coregen. When I try to use it nothing is generated. Do I need the full
version of ISE or should it work in the webedition?

Thanks

Jon
 
The Viterbi decoder code I have written gives no errors during synthesis, but produces about 100 warnings about some ports being disconnected.

Could this be the reason why the outputs on the Spartan starter kit hardware are not working?

iMPACT shows that the programming succeeded, but the outputs are dead - nothing is toggling. I used LEDs for outputs and switches for inputs, but I have locked one switch to the CLOCK pin on the kit itself and try to toggle the clock by hand.

Would driving the clock this way work, or is it bad because of the frequency mismatch and the low frequency?

Please guide.
 
Hello Guido,

A short piece of HDL is shown below to explain this:
module simple_bidir( clk, oe, bidir_pin, in_pin, out_pin );

input  clk;
input  oe;
input  in_pin;
inout  bidir_pin;
output out_pin;

reg in_reg, out_reg;

assign bidir_pin = oe ? in_reg : 1'bZ;

always @ (posedge clk)
begin
  in_reg  <= in_pin;
  out_reg <= bidir_pin;
end

assign out_pin = out_reg;

endmodule



A short answer is that three nodes - one data input, the oe control line,
and one data output - are required to acquire the signal activity on one
bidir pin. In the pre-synthesis node set, the bidir pin name is marked as
tappable, but the connection is made only on the input side. The output
direction needs to be tapped at the driver of this pin. In the example
above, you will need to tap the following nodes:

|simple_bidir|bidir_pin <- This will be the input direction.
|simple_bidir|in_reg <- This will be the output direction.
|simple_bidir|oe <- This is the oe control signal to indicate
the direction to pick.

In the post-fitting node set, the bidir pin name represents the IO pad,
which is not tappable. Nonetheless, the same principle applies. You will
need to find the names of the three nodes to tap. First, find the IO cell
and figure out the names associated with its input (DATAIN and OE) and
output (COMBOUT) ports. One easy and visual way to find the node names, I
found, is to use the Resource Property Editor: in the connectivity view of
the IO cell, the signal names are listed. Depending on your version of
Quartus you can drag-and-drop the name into the SignalTap II editor. In this
example, the node names are:

|simple_bidir|bidir_pin~1 <- This will be the input direction.
|simple_bidir|in_reg <- This will be the output direction.
|simple_bidir|oe <- This is the oe control signal to indicate
the direction to pick.

Hope this helps,
Subroto Datta
Altera Corp


"Guido" <gvaglia@gmail.com> wrote in message
news:1143537651.051352.64800@i39g2000cwa.googlegroups.com...
Dear all,
I am trying to use SignalTap to debug a design in which I have some
bidirectional ports.
Using both the pre-synthesis and post-fitting node sets (while using
incremental routing) in SignalTap, I am not able to find the IO port in
order to include it in the acquisition.
I tried to find an answer on the Altera website and in the manual, but I
found no reference anywhere to bidirectional ports and SignalTap.
Is it a limitation of the software?
Is there a way to overcome it, apart from introducing two more input
and output ports for watching the signal in both phases?

Thank you all

Guido
 
Have you simulated this design? The Xilinx tools come with a free
version of ModelSim (a simulator). Write a testbench in VHDL/Verilog to
generate stimulus for your design, then have a look in the waveform
window to see whether the outputs and internal signals are as you were
expecting.

Using a switch as a clock is like lending your wife the credit card.
Anything could happen. Get a scope and have a look at the switch output
when you press the switch. Most likely you will see it glitch up and
down quickly as you depress the switch.

Get it working in the simulator first, then think about getting it to
work on real hardware.
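A minimal testbench along those lines might look like this (a sketch; viterbi_top and its port names are placeholders for the actual design):

```verilog
`timescale 1ns/1ps
module tb;
    reg        clk = 0;
    reg  [3:0] switches = 0;
    wire [7:0] leds;

    // Device under test -- replace with your Viterbi top level.
    viterbi_top dut (.clk(clk), .sw(switches), .led(leds));

    always #10 clk = ~clk;   // 50 MHz clock

    initial begin
        $dumpfile("tb.vcd");
        $dumpvars(0, tb);
        #100  switches = 4'b0101;   // apply some stimulus
        #1000 $finish;
    end
endmodule
```

Load the resulting waveform in ModelSim and check that the internal state registers and the outputs actually toggle before worrying about the board.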
 
Manpreet wrote:
Please help!

The Viterbi decoder code I have written gives no errors during synthesis, but produces about 100 warnings about some ports being disconnected.

Could this be the reason why the outputs on the Spartan starter kit hardware are not working?

iMPACT shows that the programming succeeded, but the outputs are dead - nothing is toggling. I used LEDs for outputs and switches for inputs, but I have locked one switch to the CLOCK pin on the kit itself and try to toggle the clock by hand.

Would driving the clock this way work, or is it bad because of the frequency mismatch and the low frequency?

How can I actually drive the clock and synchronize the inputs with the positive edge of the clock to see the outputs?

Please guide.

Preet
Hi Preet,

have you provided any debouncing circuitry for your clock (and inputs)?
Otherwise, every press and release of the clock button results in an
unknown number of clock pulses. (Keep in mind that a debouncing circuit
needs to be clocked at less than 100 Hz!)
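One common shape for such a debouncer, as a sketch (the board clock frequency and counter width are assumptions; adjust the divider for your clock):

```verilog
module debounce (
    input  wire clk,       // free-running board clock, e.g. 50 MHz
    input  wire btn_raw,   // bouncy push-button input
    output reg  btn_clean  // debounced level
);
    reg [18:0] div = 0;            // 2^19 / 50 MHz ~ 10 ms per tick
    wire tick = (div == 0);
    reg  sample;

    always @(posedge clk) begin
        div <= div + 1;
        if (tick) begin
            // Only accept the button level once it has been stable
            // across two consecutive slow samples.
            if (btn_raw == sample)
                btn_clean <= sample;
            sample <= btn_raw;
        end
    end
endmodule
```

The debounced output (or a single-cycle pulse derived from it) can then serve as the manual clock or clock enable.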

For the disconnected ports:
Do a simulation! Does it work? Do the outputs toggle?
Check your synthesis report. Is there a warning about some signals
(especially enables) becoming tied to VCC or GND? This causes XST to
eliminate the downstream circuits, since their outputs would remain
constant as well; and because the other inputs of these circuits are no
longer used, they become disconnected.

Do you feed the clock into a DLL/DCM? This might not work with a manual
clock; at the very least, DLLs need a minimum clock frequency!


Have a nice synthesis
Eilert
 
Symon wrote:
"hongyan" <hy34@njit.edu> wrote in message
news:1143591454.616221.68380@u72g2000cwu.googlegroups.com...
Then when I do the synthesis, I get 33 logic levels in the critical
path, with a propagation time of 5.043 (for another, bigger design
including the adder, an even smaller number, e.g. 27). I don't
understand why the number of logic levels is less than for the ALU
alone; I supposed they should be higher, adding up the logic of both.
I am using Synplify Pro 8.1 for the synthesis.

Hi,
If this is a Xilinx design, try looking at the design in the timing analyser
tool. It will show the logic levels, and you should be able to work out
what's going on.
HTH, and good luck, Syms.


Hi Symon
two more tips:
1) How about the wire delays at the inputs? Are they reduced when you
use registers?

2) Is your tool performing some sort of register balancing for timing
improvement? This may cause portions of the adder to be placed before
the registers thus reducing the logic levels of the adder itself.

have a nice synthesis
Eilert
 
