shared graphics in notebook

suz
I'm planning to buy a notebook to run Verilog simulations. Some of
them come with "shared graphics memory" and offer a better price. I am
wondering whether this kind of architecture has any impact on
simulation performance.
I think the simulation process itself is memory-access intensive, and
if there is display activity going on, the two sides may fight for the
memory bus.
Does anyone have ideas or experience with this?
 
suz wrote:

I'm planning to buy a notebook to run Verilog simulations. Some of
them come with "shared graphics memory" and offer a better price. I am
wondering whether this kind of architecture has any impact on
simulation performance.
I think the simulation process itself is memory-access intensive, and
if there is display activity going on, the two sides may fight for the
memory bus.
Does anyone have ideas or experience with this?
I have no experience with your situation, but aside from the fact
that shared-memory graphics chew up some of your main memory, I think
there's little to worry about regarding the graphics subsystem
fighting the CPU for memory: if you are running a simulation, all the
graphical work happens after the number-crunching is done. I'm not
sure about gaming performance, but think about it: would you really
count on playing Unreal 2003 smoothly with a cheap Intel graphics chip
(with shared memory)? Probably not even on a desktop PC, and not even
if it had its own memory.
 
Jason Zheng wrote:

suz wrote:

I'm planning to buy a notebook to run Verilog simulations. Some of
them come with "shared graphics memory" and offer a better price. I am
wondering whether this kind of architecture has any impact on
simulation performance.
I think the simulation process itself is memory-access intensive, and
if there is display activity going on, the two sides may fight for the
memory bus.
Does anyone have ideas or experience with this?

I have no experience with your situation, but aside from the fact
that shared-memory graphics chew up some of your main memory, I think
there's little to worry about regarding the graphics subsystem
fighting the CPU for memory: if you are running a simulation, all the
graphical work happens after the number-crunching is done. I'm not
sure about gaming performance, but think about it: would you really
count on playing Unreal 2003 smoothly with a cheap Intel graphics chip
(with shared memory)? Probably not even on a desktop PC, and not even
if it had its own memory.
Typically, in a shared-memory architecture, the display controller has
to constantly scan the frame buffer in main memory to repaint the
display. Depending on the display resolution and the available memory
bandwidth, this scanning may use enough of the bus to affect system
performance.

--

Cliff Brake
BEC Systems
cbrake _at_ bec-systems _dot_ com
 
Jason Zheng <jzheng@jpl.nasa.gov> wrote in message news:<cdju0f$fqp$1@nntp1.jpl.nasa.gov>...

I have no experience with your situation, but aside from the fact
that shared-memory graphics chew up some of your main memory, I think
there's little to worry about regarding the graphics subsystem
fighting the CPU for memory: if you are running a simulation, all the
graphical work happens after the number-crunching is done. I'm not
sure about gaming performance, but think about it: would you really
count on playing Unreal 2003 smoothly with a cheap Intel graphics chip
(with shared memory)? Probably not even on a desktop PC, and not even
if it had its own memory.
I think you're missing something fundamental: how does the RAMDAC draw
the display for a plain vanilla 2D framebuffer? By regular fetches to
mainstore.

I suppose you could blank the screen by disabling video during
simulation, but that might not be convenient.

-t
 
Anthony J Bybell wrote:
Jason Zheng <jzheng@jpl.nasa.gov> wrote in message news:<cdju0f$fqp$1@nntp1.jpl.nasa.gov>...


I have no experience with your situation, but aside from the fact
that shared-memory graphics chew up some of your main memory, I think
there's little to worry about regarding the graphics subsystem
fighting the CPU for memory: if you are running a simulation, all the
graphical work happens after the number-crunching is done. I'm not
sure about gaming performance, but think about it: would you really
count on playing Unreal 2003 smoothly with a cheap Intel graphics chip
(with shared memory)? Probably not even on a desktop PC, and not even
if it had its own memory.


I think you're missing something fundamental: how does the RAMDAC draw
the display for a plain vanilla 2D framebuffer? By regular fetches to
mainstore.

I suppose you could blank the screen by disabling video during
simulation, but that might not be convenient.

-t
Yeah, but that happens at a 60-100 Hz refresh rate; that's very little bandwidth.
 
Jason Zheng wrote:
Anthony J Bybell wrote:

Jason Zheng <jzheng@jpl.nasa.gov> wrote in message
news:<cdju0f$fqp$1@nntp1.jpl.nasa.gov>...


I have no experience with your situation, but aside from the fact
that shared-memory graphics chew up some of your main memory, I think
there's little to worry about regarding the graphics subsystem
fighting the CPU for memory: if you are running a simulation, all the
graphical work happens after the number-crunching is done. I'm not
sure about gaming performance, but think about it: would you really
count on playing Unreal 2003 smoothly with a cheap Intel graphics chip
(with shared memory)? Probably not even on a desktop PC, and not even
if it had its own memory.



I think you're missing something fundamental: how does the RAMDAC draw
the display for a plain vanilla 2D framebuffer? By regular fetches to
mainstore.

I suppose you could blank the screen by disabling video during
simulation, but that might not be convenient.

-t

Yeah, but that happens at a 60-100 Hz refresh rate; that's very little bandwidth.
That may be the refresh rate, but it is NOT the refresh bandwidth
requirement. A "typical" display these days is at least 8-bit color at
1024x768 with a 75 Hz refresh, which translates to a minimum bandwidth
requirement of 59 MB/s. A more common CAD setup is 24-bit color at
1280x1024 with an 85 Hz refresh -> 334 MB/s. To me, that doesn't count
as "very little bandwidth".

The performance impact of shared memory depends on many design factors
in addition to the display resolution so the only true measure of
whether or not the setup is acceptable is to test it.

But consider this: you will be spending significant $ on software.
Does it really make sense to then cripple the performance of that
software to save $100 on your hardware?
--
Tim Hubberstey, P.Eng. . . . . . Hardware/Software Consulting Engineer
Marmot Engineering . . . . . . . VHDL, ASICs, FPGAs, embedded systems
Vancouver, BC, Canada . . . . . . . . . . . http://www.marmot-eng.com
 
Tim Hubberstey wrote:
Jason Zheng wrote:

Anthony J Bybell wrote:

Jason Zheng <jzheng@jpl.nasa.gov> wrote in message
news:<cdju0f$fqp$1@nntp1.jpl.nasa.gov>...


I have no experience with your situation, but aside from the fact
that shared-memory graphics chew up some of your main memory, I think
there's little to worry about regarding the graphics subsystem
fighting the CPU for memory: if you are running a simulation, all the
graphical work happens after the number-crunching is done. I'm not
sure about gaming performance, but think about it: would you really
count on playing Unreal 2003 smoothly with a cheap Intel graphics chip
(with shared memory)? Probably not even on a desktop PC, and not even
if it had its own memory.




I think you're missing something fundamental: how does the RAMDAC draw
the display for a plain vanilla 2D framebuffer? By regular fetches to
mainstore.

I suppose you could blank the screen by disabling video during
simulation, but that might not be convenient.

-t


Yeah, but that happens at a 60-100 Hz refresh rate; that's very little bandwidth.


That may be the refresh rate, but it is NOT the refresh bandwidth
requirement. A "typical" display these days is at least 8-bit color at
1024x768 with a 75 Hz refresh, which translates to a minimum bandwidth
requirement of 59 MB/s. A more common CAD setup is 24-bit color at
1280x1024 with an 85 Hz refresh -> 334 MB/s. To me, that doesn't count
as "very little bandwidth".
Just for the sake of argument, a laptop setup is more likely to be
1024x768 at 24-bit color and 60 Hz (active matrix), which is about 142
MB/s. Dual-channel DDR266 gives you about 4.3 GB/s of bandwidth, and
dual-channel DDR400 gives you 6.4 GB/s. Also consider that those
scanout fetches are spread across each 17 ms frame (at 60 Hz) rather
than arriving in one burst.
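
The same math as a rough Python sketch (my assumptions: two 64-bit
channels at peak rate, and no blanking or arbitration overhead):

# Scanout load as a fraction of peak memory bandwidth.
# Assumes two 64-bit (8-byte) channels; ignores blanking/arbitration.
scanout_mb = 1024 * 768 * 3 * 60 / 1e6   # ~142 MB/s for the laptop panel
ddr266_dual_mb = 2 * 8 * 266e6 / 1e6     # ~4256 MB/s peak
ddr400_dual_mb = 2 * 8 * 400e6 / 1e6     # ~6400 MB/s peak
print(scanout_mb / ddr266_dual_mb)       # ~0.033 -> about 3% of DDR266
print(scanout_mb / ddr400_dual_mb)       # ~0.022 -> about 2% of DDR400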

The performance impact of shared memory depends on many design factors
in addition to the display resolution so the only true measure of
whether or not the setup is acceptable is to test it.
Point taken; you'd need a laptop where you could switch between an
independent graphics card and the onboard graphics chip to compare. I
don't think you can do that unless you have a docking station that
provides an AGP slot.

But consider this: you will be spending significant $ on software.
Does it really make sense to then cripple the performance of that
software to save $100 on your hardware?
I agree, but this is for a low-budget setup, so the person making this
purchase probably won't spend big bucks on software to begin with.
 
