Video Scan Conversion Rate - Camera Input to DVI Display Out

Scott Connors

Hello all.

I am involved in a project where I need to provide a DVI display of a
camera input. Here are the technical specs of the matter:

I have a 125 Hz frame rate camera coming in to my FPGA. The camera
resolution is 640x480. It has the usual vsync and hsync signals, with
dead time in some spots. A typical camera.


On the other end, I have to feed a DVI display with a native
resolution of 1280x1024 and an optimal refresh rate of 60 Hz. A
typical DVI display.

At the moment I simply want to display the camera data in the upper
left corner (640x480) and just write black pixels for the rest of the
screen. So I am not worried about image resizing functions.

The main problem lies in crossing the clock domains of the system.
For video buffering I have two 2 MB SRAMs external to the FPGA. I
currently have an implementation where I read from one memory while
writing to the other. When I have read an entire frame of data, I
switch the memory operations: I begin reading at the beginning of
the new read memory and write to what would be the next memory
location in the new write memory. Therefore, part of the memory is
one frame behind the current frame.
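
Roughly, the swap logic boils down to something like this (a
simplified sketch with invented signal names, not the exact code):

-- Bank-swap flag, toggled in the read (display) clock domain each time a
-- full frame has been read out. Note: the write side also needs this flag,
-- so it still has to be synchronized into the camera clock domain.
library ieee;
use ieee.std_logic_1164.all;

entity bank_swap is
  port (
    read_clk       : in  std_logic;  -- display-side (read) clock
    read_frame_end : in  std_logic;  -- one-cycle pulse after a full frame is read
    bank_select    : out std_logic   -- '0': read SRAM0 / write SRAM1, '1': the opposite
  );
end entity bank_swap;

architecture rtl of bank_swap is
  signal bank : std_logic := '0';
begin
  process (read_clk)
  begin
    if rising_edge(read_clk) then
      if read_frame_end = '1' then
        bank <= not bank;  -- swap which SRAM is read and which is written
      end if;
    end if;
  end process;
  bank_select <= bank;
end architecture rtl;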

This does not cause a problem with the display until I move the
camera. At that point the image tears (an ugly white line runs
through the image) until the motion stops. This can be very
annoying, and I need to prevent it from happening.

I am wondering if anyone has completed a camera to display conversion
successfully using simple frame buffers or if anyone has any
suggestions on techniques to try.

Logic use is not a problem, as I am currently only using 5% of the
V-II Pro.

Thank you for your assistance.


~Scott
 
Is the line present in your 640x480 source signal when the
camera moves?

Are you moving the camera with a motor?
Could the line come from crosstalk from the motor?
Are your cables shielded?
Try moving the camera manually (you may need to
disconnect the motor); is the line still there?
If not, you'd better find someone who is good at
EMI/crosstalk issues.

Are you moving the camera manually?
Do the vents get blocked so that the camera control
unit heats up? (This happened to me once - someone set
paper on top of the camera control unit.)


Are you synchronizing the signals that handshake the
buffer swap, since they cross clock domains?
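
If they are not synchronized, a two-flop synchronizer in the
destination clock domain is the usual fix - a minimal sketch,
with invented signal names:

library ieee;
use ieee.std_logic_1164.all;

entity sync_2ff is
  port (
    clk_dst  : in  std_logic;  -- destination (e.g. display) clock
    async_in : in  std_logic;  -- single-bit signal from the other clock domain
    sync_out : out std_logic   -- safe to use in the destination clock domain
  );
end entity sync_2ff;

architecture rtl of sync_2ff is
  signal meta, stable : std_logic := '0';
begin
  process (clk_dst)
  begin
    if rising_edge(clk_dst) then
      meta   <= async_in;  -- first flop may go metastable
      stable <= meta;      -- second flop gives it a clock cycle to settle
    end if;
  end process;
  sync_out <= stable;
end architecture rtl;

This only works for single-bit signals; multi-bit values need Gray
coding or a handshake.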


Cheers,
Jim
--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Jim Lewis
Director of Training mailto:Jim@SynthWorks.com
SynthWorks Design Inc. http://www.SynthWorks.com
1-503-590-4787

Expert VHDL Training for Hardware Design and Verification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
Hi Scott,
I think the problem is that when you swap RAM banks, your camera may
have written only a partial frame to the write memory. This is fine
when the camera is still, because the data are identical to the last
frame, but when the camera is moving it will go wrong and leave
artifacts: you will have half of an old frame and half of a new frame
in the buffer.
Try this. Since the write frame rate is more than twice the read
frame rate, when you swap read banks, wait for the start of the next
camera frame and write that whole frame into the write buffer. Then
stop writing until the frame read cycle is complete. Then swap banks
and start all over again.
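
A rough sketch of that sequencing in the camera (write) clock
domain - assuming a 'swap_done' pulse already synchronized into
this domain and frame start/end pulses from your camera sync
decoder (names invented for illustration):

library ieee;
use ieee.std_logic_1164.all;

entity write_gate is
  port (
    cam_clk         : in  std_logic;
    swap_done       : in  std_logic;  -- read side finished a frame and swapped banks
    cam_frame_start : in  std_logic;  -- first pixel of a new camera frame
    cam_frame_end   : in  std_logic;  -- last pixel of the current camera frame
    write_enable    : out std_logic   -- gates writes into the SRAM write bank
  );
end entity write_gate;

architecture rtl of write_gate is
  type state_t is (WAIT_SWAP, WAIT_FRAME, WRITING);
  signal state : state_t := WAIT_SWAP;
begin
  process (cam_clk)
  begin
    if rising_edge(cam_clk) then
      case state is
        when WAIT_SWAP =>             -- hold off until the read side swaps banks
          if swap_done = '1' then
            state <= WAIT_FRAME;
          end if;
        when WAIT_FRAME =>            -- then wait for a clean start of frame
          if cam_frame_start = '1' then
            state <= WRITING;
          end if;
        when WRITING =>               -- capture exactly one whole camera frame
          if cam_frame_end = '1' then
            state <= WAIT_SWAP;
          end if;
      end case;
    end if;
  end process;
  write_enable <= '1' when state = WRITING else '0';
end architecture rtl;
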
HTH, Syms.

"Scott Connors" <scott.a.connors@boeing.com> wrote in message
news:5f825c95.0311041425.58c6ac6e@posting.google.com...
Hello all.

I am involved in a project where I need to provide a DVI display of a
camera input. Here are the technical specs of the matter:

I have a 125 Hz frame rate camera coming in to my FPGA. The camera
resolution is 640x480. It has the usual vsync and hsync signals, with
dead time in some spots. A typical camera.


On the other end, I have to feed a DVI display with a native
resolution of 1280x1024 and an optimal refresh rate of 60 Hz. A
typical DVI display.

At the moment I simply want to display the camera data in the upper
left corner (640x480) and just write black pixels for the rest of the
screen. So I am not worried about image resizing functions.

The main problem lies in crossing the clock domain for the system.
For use as video buffers I have 2-2MB SRAM's external to the FPGA. I
currently have an implementation where I read from one memory while
writing to the other memory. When I have read an entire frame of
data, I switch the memory operations. I begin reading at the
beginning of the new read memory and write to the what would be the
next memory location in the new write memory. Therefore, part of the
memory is one frame behind the current frame.

This does not cause a problem with the display until I move the
camera. At this point the image tears (a bad white line) throughout
the image until the motion stops. This can be very annoying and I
need to prevent this from happening.

I am wondering if anyone has completed a camera to display conversion
successfully using simple frame buffers or if anyone has any
suggestions on techniques to try.

Logic use is not a problem, as I am currently only using 5% of the
V-II Pro.

Thank you for your assistance.


~Scott
 
Another approach would be to construct a big frame FIFO using the two
2 MB SRAMs you have. Since you cross clock boundaries, you need to
construct an asynchronous FIFO.

Hints for the FIFO construction:

1. Ping-pong between the two SRAMs.
2. An input FIFO (Xilinx internal) for the camera input.
3. An output FIFO (Xilinx internal) for the DVI output.
4. Counters on each end to keep track of the pixel flow (a sketch
   follows below).
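
For item 4, here is a minimal sketch of a pixel counter that can be
sampled safely from the other clock domain, using a Gray-code
conversion and a two-flop synchronizer (widths and names are
illustrative only):

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity gray_pixel_counter is
  generic ( WIDTH : natural := 20 );  -- 2**20 > 640*480 = 307200 pixels
  port (
    src_clk    : in  std_logic;                          -- counting clock domain
    pixel_tick : in  std_logic;                          -- one pulse per pixel
    dst_clk    : in  std_logic;                          -- sampling clock domain
    count_gray : out std_logic_vector(WIDTH-1 downto 0)  -- synchronized Gray count
  );
end entity gray_pixel_counter;

architecture rtl of gray_pixel_counter is
  signal count_bin : unsigned(WIDTH-1 downto 0) := (others => '0');
  signal gray_src  : std_logic_vector(WIDTH-1 downto 0) := (others => '0');
  signal gray_meta : std_logic_vector(WIDTH-1 downto 0) := (others => '0');
  signal gray_sync : std_logic_vector(WIDTH-1 downto 0) := (others => '0');
begin
  -- count pixels and register the Gray-coded value in the source domain
  process (src_clk)
  begin
    if rising_edge(src_clk) then
      if pixel_tick = '1' then
        count_bin <= count_bin + 1;
      end if;
      gray_src <= std_logic_vector(count_bin xor shift_right(count_bin, 1));
    end if;
  end process;

  -- two-flop synchronizer in the destination domain; only one bit of a Gray
  -- code changes per increment, so the sampled value is always either the
  -- current count or the previous one
  process (dst_clk)
  begin
    if rising_edge(dst_clk) then
      gray_meta <= gray_src;
      gray_sync <= gray_meta;
    end if;
  end process;

  count_gray <= gray_sync;
end architecture rtl;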

---Bob
"Scott Connors" <scott.a.connors@boeing.com> wrote in message
news:5f825c95.0311041425.58c6ac6e@posting.google.com...
Hello all.

I am involved in a project where I need to provide a DVI display of a
camera input. Here are the technical specs of the matter:

I have a 125 Hz frame rate camera coming in to my FPGA. The camera
resolution is 640x480. It has the usual vsync and hsync signals, with
dead time in some spots. A typical camera.


On the other end, I have to feed a DVI display with a native
resolution of 1280x1024 and an optimal refresh rate of 60 Hz. A
typical DVI display.

At the moment I simply want to display the camera data in the upper
left corner (640x480) and just write black pixels for the rest of the
screen. So I am not worried about image resizing functions.

The main problem lies in crossing the clock domain for the system.
For use as video buffers I have 2-2MB SRAM's external to the FPGA. I
currently have an implementation where I read from one memory while
writing to the other memory. When I have read an entire frame of
data, I switch the memory operations. I begin reading at the
beginning of the new read memory and write to the what would be the
next memory location in the new write memory. Therefore, part of the
memory is one frame behind the current frame.

This does not cause a problem with the display until I move the
camera. At this point the image tears (a bad white line) throughout
the image until the motion stops. This can be very annoying and I
need to prevent this from happening.

I am wondering if anyone has completed a camera to display conversion
successfully using simple frame buffers or if anyone has any
suggestions on techniques to try.

Logic use is not a problem, as I am currently only using 5% of the
V-II Pro.

Thank you for your assistance.


~Scott
 
"Scott Connors" <scott.a.connors@boeing.com> wrote in message
news:5f825c95.0311041425.58c6ac6e@posting.google.com...
Hello all.

I am involved in a project where I need to provide a DVI display of a
camera input. Here are the technical specs of the matter:

I have a 125 Hz frame rate camera coming in to my FPGA. The camera
resolution is 640x480. It has the usual vsync and hsync signals, with
dead time in some spots. A typical camera.
Others have already described methods to ensure that your
read and write pointers don't cross.

I have a couple of additional comments though.

I don't understand why you get a white line when you pan the camera.
You should get a horizontal line of dislocation, where the image is
shifted sideways, that moves up or down the screen, but there should
not be anything on screen that is not in the original image. If there
is a white line, for example when the image is dark, then something
else is going wrong.

The fact that the camera is running at 125 Hz suggests someone was
concerned about motion. Going down to 60 Hz is going to do nasty
things to your motion. Most of the time you will be dropping one
input frame, but sometimes you will be dropping two in a row.
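
(To put numbers on it: 125 / 60 is roughly 2.08, so on average a
little over two camera frames arrive for every frame you display.
Showing one of them means dropping usually one, and occasionally
two in a row.)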

If the monitor will handle 62.5 Hz, you will get better motion
if you lock the output frame rate to the input. Of course, if the
monitor also has a frame buffer in it, that defeats the purpose.

If I were you, I would first make a free-running 62.5 Hz output
raster and generate a video pattern with horizontally moving
vertical bars. If this displays without tearing on your monitor,
then you can lock the output raster to the input and do the scan
conversion. (This also makes your double buffering easier.)
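
Something like this would do for the moving-bars pattern - a rough
sketch assuming an existing raster generator that supplies the
current column and a frame-start pulse (names and widths are
illustrative):

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity moving_bars is
  port (
    pix_clk     : in  std_logic;
    x           : in  unsigned(10 downto 0);  -- current pixel column, 0..1279
    frame_start : in  std_logic;              -- one pulse per output frame
    pixel_white : out std_logic               -- '1' = white pixel, '0' = black
  );
end entity moving_bars;

architecture rtl of moving_bars is
  signal offset : unsigned(10 downto 0) := (others => '0');
  signal pos    : unsigned(10 downto 0);
begin
  -- advance the pattern two pixels per output frame
  process (pix_clk)
  begin
    if rising_edge(pix_clk) then
      if frame_start = '1' then
        offset <= offset + 2;
      end if;
    end if;
  end process;

  pos         <= x + offset;  -- column shifted by the per-frame offset
  pixel_white <= pos(5);      -- bit 5 toggles every 32 pixels: 32-pixel-wide bars
end architecture rtl;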
 
