Graphics rendering revisited

Vinh Pham
Since I was responsible for derailing the original thread, let me be the
one to beat the horse back to life, since it has interesting potential.

The original question was: What algorithms can you use to generate live
video, that contains only line art (lines, rectangles, curves, circles,
etc.), if you can't use a frame buffer.

The benefit of using a frame buffer is flexibility. Namely you get random
access to any pixel on the screen. This opens up a wide range of algorithms
you can use to play the performance-area-complexity tradeoff game.

Without a frame buffer, you only have sequential access to your pixels. No
going back, no going forward. Quite Zen I suppose. Anyways, you lose access
to a lot of frame buffer algorithms, but some can still be used.

The conceptually easy ones to understand are math based algorithms, but
often they're expensive hardware wise. In the first section, I'll go over
implementation issues of the ideas that other people gave. Nothing super
complex.

The second section contains a more novel (maybe) approach based on pixel
spacing. It's conceptually harder to get a handle on, but has the potential
to require less resources. Unfortunately there are problems with the idea
that I haven't fleshed out. Perhaps someone will have some ideas, or maybe
it'll inspire something better.

Oh yeah, I was too lazy to double check what I wrote, so there might be
problems. I also left things unfinished towards the end, I've got other
things to think about. Hopefully it gets the ball rolling though.

Regards,
Vinh


MATH ALGORITHMS
================

Lines
-----
There was a math based algorithm mentioned by Peter Wallace, where you use
y - (mx + c) = 0 and minx<x<maxx to decide whether to light up a pixel or
not. This algorithm works on a per-pixel basis, so it doesn't need random
access.

I like how Peter formatted the equation as y - (mx + c) = 0 rather than y =
(mx + c) since a compare against zero uses less logic than a compare against
any arbitrary number. On the other hand, y = (mx + c) might produce a design
that can be clocked faster, since you only have one adder. But Xilinx (and I
would assume the other vendors also) have pretty fast carry chains, so it
might not be an issue.

I would recommend using the form x - (my + c) = 0 instead. The reason is
because we're scanning through the pixels left-right, top-bottom. x is
always changing while y takes many clock cycles to change. Therefore you can
use a multicycle multiplier that trades off time for area savings. Also the
(my) term can be implemented with a constant-multiplier (a * K) which uses
less area than a regular multiplier (a * b).

Also because y changes slowly, it's better to use miny<y<maxy. But I guess
if you have a horizontal line, you'll need to use minx<x<maxx.
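To make the per-pixel idea concrete, here's a quick software model of the
x - (m*y + c) = 0 test in scan order. The tolerance eps, the line
parameters, and the tiny 16x16 raster are my own illustrative choices,
not values from the thread:

```python
# Software model of the scan-order line test: one pixel per "clock",
# no random access. eps, m, c, and the raster size are illustrative.
W, H = 16, 16
m, c = 1.0, 2.0            # line x = m*y + c
miny, maxy = 2, 12         # extent test on y
eps = 0.5                  # "close enough to zero" threshold

frame = []
for y in range(H):                     # top to bottom...
    row = ""
    for x in range(W):                 # ...left to right
        lit = miny < y < maxy and abs(x - (m * y + c)) < eps
        row += "#" if lit else "."
    frame.append(row)
print("\n".join(frame))
```

In hardware the inner expression would be the adder/comparator; the loops
are just the pixel clock.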

Oh yeah, (x) and (y) are integers, but (m) and (c) have a fractional part.
I'm not sure if this thinking is correct, but if we assume the resolution of
a TV screen is 512x512 (yes I know it's never that good), then (m) could be
as small as 1/512 and as large as 512, so we'd need 9 bits of integer and 9
bits of fraction (9.9). I suppose (c) would need the same thing. Cool,
that's 18-bits, just right for the dedicated multipliers in Xilinx and
Altera.
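A minimal sketch of that 9.9 fixed-point representation, doing the math on
scaled integers (the sample slope and intercept are illustrative):

```python
# Sketch of the 9.9 fixed-point idea: keep m and c as integers scaled
# by 2**9 and do the arithmetic in the scaled domain. Sample values
# are illustrative.
FRAC = 9                      # 9 fractional bits

def to_fix(v):
    """Convert a real value to 9.9 fixed point (an integer)."""
    return int(round(v * (1 << FRAC)))

m_fix = to_fix(0.5)           # 0.5  -> 256
c_fix = to_fix(3.25)          # 3.25 -> 1664
y = 10

# x - (m*y + c) = 0 becomes (x << FRAC) - (m_fix*y + c_fix) = 0,
# so the matching pixel column is the scaled product, truncated:
x = (m_fix * y + c_fix) >> FRAC
print(x)  # 0.5*10 + 3.25 = 8.25 -> truncates to column 8
```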

Curves
------
Instead of thinking of curves as smooth lines, you can imagine them being a
string of straight lines connected together (piece-wise linear). So
theoretically you can draw any curve "just" by varying (m) and (c) over
time. Of course the prospect of doing this inside the FPGA doesn't look
promising. You could use embedded RAMs to contain a table of these values,
but that could be a large table. Don't forget you would need a table for
each curve in your image. Also, you have to calculate those tables
somewhere. Naturally you'd use an external processor/host computer but
depending on the application, you might not have that luxury.

So that's probably a bad idea, and it might be better to use a math based
solution to curves.

Circles
-------
Roger Larsson's idea for a circle is also math based:

(x - x0)^2 + (y - y0)^2 - r^2 = 0

We can expand this to:

x^2 -2x0(x) + x0^2 + y^2 -2y0(y) + y0^2 - r^2 = 0

x0^2, y0^2, and r^2 are constants so they can be combined into one big
constant K:

x^2 -2x0(x) + y^2 -2y0(y) + K = 0

(x^2 + y^2) is independent of x0, y0, and r, so it can be pre-computed into
a table K(x,y) that can be used for all circles:

-2x0(x) + -2y0(y) + K(x,y) + K = 0

Of course if you don't have the RAM for it, you can break up K(x,y) into two
separate, smaller tables and do K(x) + K(y). But do keep in mind that K(x,y)
can be used for all circles, so it might be a worthwhile use of RAM.

-2y0(y) can use a multicycle-constant-multiplier while -2x0(x) would
need a pipelined-constant-multiplier.

Now if you've got more RAM to spare, you could take advantage of the fact
that -2x0(x) is independent of y, and vice versa for -2y0(y). Therefore you
could pre-compute them into their own tables and get:

X0(x) + Y0(y) + K(x,y) + K = 0

Unfortunately you'd need a pair of those tables for every circle in your
image, and more importantly you'd also have to recompute them whenever your
parameters change.

So you can save some logic and improve performance if your parameters don't
change often. If they're pretty dynamic, you'll have to bite the bullet and
use up a lot of logic.
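To pin down the bookkeeping, here's a rough software model of the expanded
test with a shared K(x,y) = x^2 + y^2 table. The raster size, eps, and
circle parameters are illustrative values I picked:

```python
# Per-pixel circle test using the expanded form with a precomputed
# K(x,y) = x^2 + y^2 table shared by every circle in the image.
W, H = 32, 32
Kxy = [[x * x + y * y for x in range(W)] for y in range(H)]  # one table, all circles

def circle_pixel(x, y, x0, y0, r, eps=8):
    """Light the pixel if -2*x0*x - 2*y0*y + K(x,y) + K is near zero."""
    K = x0 * x0 + y0 * y0 - r * r        # per-circle constant
    return abs(-2 * x0 * x - 2 * y0 * y + Kxy[y][x] + K) < eps

# Count the outline pixels of a circle centered at (16, 16), radius 8.
lit = sum(circle_pixel(x, y, 16, 16, 8) for y in range(H) for x in range(W))
print(lit)
```

In hardware, only the two -2*x0*x and -2*y0*y products change per circle;
the table lookup and the final add/compare are shared.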

Dealing With Rational Numbers
-----------------------------
Our x and y values are integer only, but the lines and circles described by
the math formulas exist in a rational number space. I haven't given it much
thought, but you might have to be a little careful when comparing values
against zero. Close enough to zero would be more like it. But it might not
be a big problem.


PIXEL SPACING ALGORITHM
=======================

So far we've viewed things as a 2D array. If we think of things as a 1D
vector, an alternative algorithm presents itself, though I'm not sure how
fruitful it may be in the long run.

BTW, 0 degrees = East, 90 = South, 180 = West, 270 = North.

If you drew a vertical line on the screen, and "rolled out" the pixels into
a vector, what you would see is a bunch of black pixels, evenly spaced. The
spacing would be equal to the width of the screen. Advance apologies for
the clumsy notation.

The position of line-pixel i, p[i], is equal to:

p[i] = p[i-1] + W

W = width of screen

To relate the value of p to the x,y space:

p = y*W + x
x = p%W
y = (p - x)/W
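A quick worked example of that mapping (W = 640 is an arbitrary screen
width chosen for illustration):

```python
# Round-trip between the 1D offset p and the 2D coordinates (x, y).
W = 640
x, y = 25, 3
p = y * W + x              # 3*640 + 25 = 1945
assert p % W == x          # x recovered from p
assert (p - x) // W == y   # y recovered from p
print(p)
```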

If you drew a 45 degrees diagonal, you would notice that the spacing was W +
1:

p[i] = p[i-1] + W + 1

A 135 degrees diagonal would be:

p[i] = p[i-1] + W - 1

So you would think you can draw a line of arbitrary angle simply by
following the formula:

p[0] = starting point of line
p[i] = p[i-1] + W + m
m = function of the x/y slope

Unfortunately you run into problems when abs(m)>1. I'll go into it later.
For now, let's assume this algorithm works perfectly for all occasions.

The nice thing about this is there's no multiplication involved, just
addition. You use a down counter and when it reaches zero, you create a
pixel and put a new value in the down counter. You'll need to take into
consideration that (m) can have a fractional part though.
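A software sketch of that down-counter idea, with a fractional accumulator
for (m). The frame size, slope, and start point are illustrative, and only
the |m| <= 1 case is handled, matching the caveat above:

```python
# 1D "pixel spacing" model: walk the rolled-out frame with a down
# counter and emit a pixel each time it expires. A fractional
# accumulator carries the fractional part of m between scanlines.
W, H = 16, 16
m = 0.5                    # x drifts by m per scanline (|m| <= 1 only)
p = 3                      # p[0]: starting offset in the 1D vector
acc = 0.0                  # accumulated fractional part of m

pixels = [0] * (W * H)
while p < W * H:
    pixels[p] = 1          # down counter reached zero: light a pixel
    acc += m
    step = W + int(acc)    # counter reload: W plus integer part of acc
    acc -= int(acc)        # keep only the fraction
    p += step

xs = [i % W for i, v in enumerate(pixels) if v]  # x of each lit pixel
print(xs)
```

The x positions step up by one every other scanline, i.e. a slope-0.5
line, and only an adder and a counter were needed.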

Also, as I said earlier, you can think of a curve as a straight line whose
slope changes as you draw it. The nice thing here is we only have (m) to
worry about, and no (c). To create a circular arc, you'd only need to
increment (m) by a constant. The constant would control the radius of the
circle that the arc belongs to.

But alas, this algorithm doesn't work for all (m). Actually, angles from 0
to 45 degrees aren't so bad; 135 to 180 is trickier. I also have a feeling
it'd be difficult to get "pretty" visual results with this.
 
Lots of good stuff. I'll have to read it later tonight. I just wanted to
modify one assumption you made: resolution. I'll be working at 4K x 2.5K
and maybe as high as 4K x 4K and 60 frames per second soon. My current work
is at 2K x 1.5K, 60 fps though.

Here's a product I finished recently that's working at 1920 x 1200 and
60fps.
http://www.ecinemasys.com/products/display/edp100/pdf/edp100_preliminary.pdf

The design is 100% mine, electrical, board layout, mechanical, FPGA,
firmware, GUI, etc.

Some of the highlights: Two 1.485GHz inputs, two 1.485GHz outputs, 165MHz
DVI output, USB, lots of interesting real-time processing going on.

Yes, it has a frame buffer (four frames actually). No, it shouldn't be used
to render graphics primitives.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Martin Euredjian

To send private email:
0_0_0_0_@pacbell.net
where
"0_0_0_0_" = "martineu"



"Vinh Pham" <a@a.a> wrote in message
news:a_2fb.40558$5z.28069@twister.socal.rr.com...
 
Martin,

Looked at the specs of the EDP100. Looking very nice indeed. So to convert
the HDSDI into DVI you would need a deinterlacer and a frame rate converter.
Guess that's where your 4 framestores come from. If you don't mind, I'd like
to know how many fieldstores are actually used in the deinterlacer.
Normally, you'd need two stores for doing the frame rate conversion (double
buffered). So that would leave you with 2 stores left to do deinterlacing,
which allows for some nice 3-field algorithms.

Sorry to go off topic with this. I'm just curious since I'm roughly in the
same business.

regards,
Jan

"Martin Euredjian" <0_0_0_0_@pacbell.net> schreef in bericht
news:d_3fb.8955$N67.802@newssvr27.news.prodigy.com...
 
Vinh Pham wrote:

Interesting theoretical enterprise, but I really don't see the point.



Someone just had a rare situation where they couldn't use a frame buffer.
You can think of it as an intellectual exercise :-)



I remember quite some years ago talking to a guy who had invested millions
of $ in developing

Hahaha no wonder he refused to believe you. Sort of like when you buy a
crappy product, but you make yourself believe it's great, because of all the
money you spent on it.

Did E&S's vector display draw only outlines of spheres, or shaded? Shading
with x-y vectors doesn't sound too fun.


Oh, no, it painted them in very nicely. I don't remember whether it had a
variable-width electron beam. They use this in the Rediffusion flight
simulators and some other gear that I think had E&S image generators at the
end of the processing chain. It looked much like Gouraud shading. Yes,
that's why the thing cost several hundred K$.

What do you think was the main reason why people switched to pixel/raster?
Simplicity? Scales better?


Plain cost. Imagine how insanely difficult it would be to have a color CRT
with a variable beam width, able to deflect from one side of the screen to
the other in a couple of µs, and maintain focus and purity while doing all
that! Then, you need a geometry engine and have to solve all the occlusion
and clipping problems while flying through the graphics database one time
only. With raster, you can push a lot of that work into the logic such that
it all gets sorted out when the most foreground pixel is rewritten. With
vector, you better not write an occluded background mark, because the CRT
can't erase what it has already drawn. Larger, faster, cheaper memory made
raster POSSIBLE! When E&S designed this stuff, you just couldn't do
read-modify-write cycles fast enough to make a usable raster system without
making something like a 1024-bit wide memory word, and doing all the
read-modify-write work at 1024-bit word width. There actually were some
late 1970's imaging systems that did this; they cost about $3 million per
viewport and filled five 6-foot rack cabinets. Obviously, only for the
absolute highest-end flight simulator systems and such.

Jon
 
Martin Euredjian wrote:

"Jon Elson" <jmelson@artsci.wustl.edu> wrote:

I guess you are talking about raster-scan displays without a pixel to pixel
frame buffer behind it, and not about vector-drawing displays (like an
oscilloscope in X-Y mode).

Interesting theoretical enterprise, but I really don't see the point.

And you wouldn't outside of a contextual reference frame that allowed you to
understand where/why this might be important. It's a very narrow field of
application. Not mainstream at all.

Well, I'm still not sure I understand it, after reading all the above. The
reason for this is to convert from one video format (HD broadcast?) to
another (high-end computer LCD monitor - DVI) without introducing a one (or
more) frame delay? But, apparently, you ARE forced to delay the 2nd field,
to make it show on a non-interlaced display.

Or, do the different scan rates come into play, as the output frame rate
has no relationship to the input frame rate?

Jon
 
"Jon Elson" wrote:

Well, I'm still not sure I understand it, after reading all the above.
Because of the nature of the work I can't get into the sort of detail that
would paint the whole picture for you. I apologize for that.

One way to look at it might be from the point of view of resources, data
rates, etc. As you hike up in resolution/frame rate (say, 4K x 4K at 60
frames per second, which is what I'm working on) you need some pretty
massive frame store widths to be able to slow things down to where the
processing is manageable. I was looking into the idea of not having to add
yet another frame buffer for something as "simple" as drawing very basic
graphic primitives (let's just call them cursors used to mark things). If
this could be done in real time, as the actual display data is being output
it would/could make an important difference in the design.

I also have a requirement to have a 1-to-1 correspondence between the input
image and the corresponding sampled data which will appear on screen as
these graphic primitives. No big deal. The display data actually goes to
another processor at the same time it goes to a display system. If you are
rendering your graphics to a separate frame buffer you will have to add one
more frame of delay to the output image in order to guarantee coincidence.
The memory required is not as much of an issue as the added frame delay. I
truly can't get into it much farther than this.

Again, just a look-see for a better way to do it in real time. I'm already
doing it in real time. So, I know it is possible. Just looking for a
better way, if it existed and was publicly available.


--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Martin Euredjian

To send private email:
0_0_0_0_@pacbell.net
where
"0_0_0_0_" = "martineu"
 
It looked much like Gouraud shading. Yes, that's why the thing cost
several hundred K $.
Eeesh, shading with vectors. If there's a will, and a wallet, there's a way
I suppose.

flying through the graphics data base one time only. With raster, you
can push a lot of that work into the logic such that it all gets sorted
out when

I guess it's like software defined radios where more and more of the analog
processing gets pushed into the digital world, for the flexibility.

Larger, faster, cheaper memory made raster POSSIBLE! When E&S
designed this stuff, you just couldn't do read-modify-write cycles fast
enough to make a usable raster system without making something like a
1024-bit wide memory word
So back then vector graphics was quite viable, but they underestimated how
quickly memory technology would advance. One of those fabled "paradigm
shifts?"

Thanks for the insights Jon. Looks like E&S is still chugging along, on the
raster bandwagon: http://www.xilinx.com/company/success/evans.htm


--Vinh
 
Martin Euredjian wrote:

One way to look at it might be from the point of view of resources, data
rates, etc. As you hike up in resolution/frame rate (say, 4K x 4K at 60
frames per second, which is what I'm working on) you need some pretty
massive frame store widths to be able to slow things down to where the
processing is manageable. I was looking into the idea of not having to add
yet another frame buffer for something as "simple" as drawing very basic
graphic primitives (let's just call them cursors used to mark things). If
this could be done in real time, as the actual display data is being output
it would/could make an important difference in the design.



Oh, if you just want to superimpose cursors, selection boxes, and things
like that onto a live video signal, I think that may be very easy to do
without a frame buffer. There are many systems that do this sort of thing,
and have been doing it since the 70's.

Jon
 
