ping pong buffer overflow issue

salimbaba

Hi,
I am working on Gigabit MAC RTL, and I am using a Spartan-3 xc3s4000 in my
design. I have a ping-pong buffer implemented in the pipeline stage, but
the problem is that my buffers overflow when I send bursty traffic. On
paper and in simulation they never overrun, but when I use IxChariot to
measure throughput, they overrun after a while.
Now I need advice: should I add more buffers, like ping pong king kong, or
what?
Also, I start reading ping/pong after 50 writes. My buffers are using
different read and write clocks, the write clock being twice the read clock.



If anyone has any idea, kindly help me out.

thanks


---------------------------------------
Posted through http://www.FPGARelated.com
 
On Mar 14, 2:35 pm, "salimbaba"
<a1234573@n_o_s_p_a_m.n_o_s_p_a_m.owlpic.com> wrote:
> Hi,
> I am working on Gigabit MAC RTL, and i am using spartan3 xc3s4000 in my
> design. I have a Ping Pong buffer implemented in the pipeline stage, but
> the problem is that my buffers overflow when i send bursty traffic. Well,
> on paper and simulations, they never overrun,
Then your paper calculations and your simulations are not modelling
reality.

> Now i need an advice, should i add more buffers like ping pong king kong or
> what?
First you need to go back and create a more accurate spreadsheet model
for your paper calculations, then run some simulations to see if you
missed something in your spreadsheet model.

> Also, i start reading Ping/Pong after 50 writes.
Is that included in your spreadsheet model? Presumably it is in your
simulations if you're simulating your actual design.

> My buffers are using
> different read and write clocks, write clock being twice of read clock.
That only means that the idle time must be at least equal to the
transmit time. If the burst size is too big, it will still overflow a
buffer that is not large enough.

> If anyone has any idea, kindly help me out.
Go back to a simple spreadsheet model first and make sure buffer sizes
are large enough to handle the actual burst sizes. With a
spreadsheet, clock rates (input/output), burst size and latency values
you should be able to calculate worst-case buffer sizes. Then ensure
that your buffers are that size or larger.
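The spreadsheet approach above can be sketched in a few lines of script. A minimal model, assuming the reader only starts draining after a fixed number of writes (the 50-write threshold mentioned in the original post); every number in the example is a hypothetical placeholder, not a measurement from the actual design:

```python
import math

def worst_case_depth(wr_rate, rd_rate, burst_words, read_delay_words):
    """Rough worst-case FIFO occupancy for one write burst.

    wr_rate, rd_rate  -- words per second on the write and read sides
    burst_words       -- longest back-to-back burst on the write side
    read_delay_words  -- writes that occur before reading starts
    """
    burst_time = burst_words / wr_rate        # how long the burst lasts
    read_start = read_delay_words / wr_rate   # reader wakes up here
    drain_time = max(0.0, burst_time - read_start)
    drained = drain_time * rd_rate            # words removed during the burst
    peak = burst_words - drained              # occupancy when the burst ends
    return math.ceil(max(peak, read_delay_words))

# Hypothetical numbers: 125 Mwords/s in, 62.5 Mwords/s out (write clock
# twice the read clock), a 1500-word burst, reads starting after 50 writes.
print(worst_case_depth(125e6, 62.5e6, 1500, 50))   # -> 775
```

If the depth that falls out is larger than the ping and pong buffers combined, no amount of ChipScope triggering will help; only deeper buffers, smaller bursts, or back-pressure will.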

Kevin Jennings
 
Hello,

There are lots of reasons why your simulations might not match your
results in hardware, but it might be easiest to use Chipscope to look
at fifo counts and flags during your tests to see what's really going
on. This of course assumes that using Chipscope is an option for you
(you have internal memory to spare, access to JTAG, etc.).

Regards,
--
Mike Shogren
Director of FPGA Development
Epiq Solutions
http://www.epiq-solutions.com


Yes, I do have access to Chipscope and I have been using it all along, but
I have not been able to capture the moment where everything goes haywire
(my fifos overrun). Anyway, I'll try something tomorrow and will post the
update.

thanks


 
 
> Yes, i do have access to Chipscope and i have been using it all along, but
> i have not been able to capture the moment where everything goes haywire
> (my fifos overrun). Anyway, i'll try something tomorrow and will post the
> update.
If you can't "catch it in the act" then you may need to write some
additional debug logic to help, and then add the debug signals to
Chipscope as well. But, sounds like you are on the right track and
hopefully you make some progress with the additional testing.
Hopefully your build times are not terribly long.
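One cheap piece of extra debug logic is a sticky high-water-mark register on the FIFO count: it survives even when the trigger misses the event. A behavioural sketch (Python standing in for what would be a comparator plus a register in RTL; the names here are made up):

```python
class HighWaterMark:
    """Sticky peak-occupancy tracker, sampled once per FIFO clock."""

    def __init__(self):
        self.peak = 0              # deepest fill level ever observed
        self.overflowed = False    # sticky overflow flag

    def sample(self, count, full):
        if count > self.peak:
            self.peak = count
        if full:
            self.overflowed = True # stays set until an explicit reset

    def reset(self):
        self.peak = 0
        self.overflowed = False
```

Bring `peak` and `overflowed` out to ChipScope (or even spare LEDs) and you can see after the fact how close to the edge the buffers ran, without having to catch the overflow live.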

--
Mike Shogren
Director of FPGA Development
Epiq Solutions
http://www.epiq-solutions.com
 
Two points regarding Xilinx FIFOs, assuming that you are generating them
with CoreGen:
1. There are lots of flags available to trigger on in ChipScope, e.g.
'Full'. You may need a KEEP attribute to use it in ChipScope.
2. Depending on whether your FIFOs are common-clock or not, and which
version of ISE you are using, the FIFO behavioural model may be slightly
wrong. The structural models seem to be good, though. See, for instance:
http://www.xilinx.com/support/answers/20414.htm

If you have home-brewed your FIFO, stop now, and use CoreGen!


 
> If you have home-brewed your FIFO, stop now, and use CoreGen!
I don't think you can say that you shouldn't use your own FIFO. From my
point of view, I have used my own now for quite a while and haven't had any
problems. CoreGen is quite a good product, but I find it a bit of a pain to
have to regenerate it if I just want to change one thing. I would rather have
a piece of Verilog code that I can just instantiate and let the synthesizer
work out how to create it.
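For what it's worth, the classic bug in a hand-rolled dual-clock FIFO is passing a binary pointer straight across the clock boundary: several bits can change in one increment, and a mid-transition sample gives garbage, which shows up exactly as mysterious overruns under bursty traffic. Gray-coded pointers change one bit per increment, so a mis-sampled pointer is off by at most one count. A quick sanity check of the conversion (Python as a stand-in for the RTL):

```python
def bin_to_gray(b):
    """Binary to Gray: adjacent values differ in exactly one bit."""
    return b ^ (b >> 1)

def gray_to_bin(g):
    """Inverse conversion, used after the pointer has crossed domains."""
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

# Neighbouring Gray codes differ in a single bit, so a pointer sampled
# mid-transition is at worst one count stale -- never wildly wrong.
assert all(bin(bin_to_gray(i) ^ bin_to_gray(i + 1)).count("1") == 1
           for i in range(255))
assert all(gray_to_bin(bin_to_gray(i)) == i for i in range(256))
```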

Jon



 
