CCing virtualgl-users, since some people on that list may have some
additional input.

VirtualGL allows multiple users to share the same GPU, and most modern
nVidia or AMD GPUs should be able to easily support 5-10 simultaneous
sessions, particularly if they're not doing a lot of heavy rendering.

When nVidia specifies the number of displays that a card supports,
they're referring to physical displays. VirtualGL and TurboVNC
virtualize the display of applications in two ways. First, TurboVNC
creates virtual X servers, each tied to a particular user account, so
all of the basic 2D rendering occurs in those virtual X servers and
never touches the graphics hardware. Second, VirtualGL virtualizes the
3D rendering. It does this by intercepting GLX calls
from the OpenGL application and rerouting the OpenGL contexts into
off-screen Pbuffers on the GPU-attached X server (usually display :0.0),
detecting when the OpenGL application has finished rendering a frame,
reading back the pixels from the Pbuffer, and drawing them into the
application's window. Because it's using Pbuffers, there is really no
limit to the number of VirtualGL sessions that can co-exist on a single
GPU, as long as the GPU has enough memory and processing power. My
customers report that main memory and CPU power tend to be more of a
bottleneck than GPU power.
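
In case it helps to picture what that means in practice, below is a
rough stand-alone sketch, in C, of the kind of off-screen Pbuffer
setup and readback that VirtualGL performs under the hood. To be
clear, this is not VirtualGL's actual code; the display name, the
hard-coded 1024x768 size, and the minimal error handling are all just
illustrative assumptions:

/* Sketch of off-screen rendering via a GLX Pbuffer, the mechanism
 * VirtualGL uses to keep 3D rendering on the GPU without a visible
 * display.  Illustrative only.  Build with:  cc pbuffer.c -lGL -lX11 */
#include <X11/Xlib.h>
#include <GL/glx.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Open the GPU-attached X server directly, as VirtualGL does. */
    Display *dpy = XOpenDisplay(":0.0");
    if (!dpy) { fprintf(stderr, "Can't open display :0.0\n"); return 1; }

    /* Ask for an RGBA framebuffer config that supports Pbuffers. */
    int fbAttribs[] = {
        GLX_DRAWABLE_TYPE, GLX_PBUFFER_BIT,
        GLX_RENDER_TYPE, GLX_RGBA_BIT,
        GLX_RED_SIZE, 8, GLX_GREEN_SIZE, 8, GLX_BLUE_SIZE, 8,
        GLX_DEPTH_SIZE, 24,
        None
    };
    int nConfigs = 0;
    GLXFBConfig *configs =
        glXChooseFBConfig(dpy, DefaultScreen(dpy), fbAttribs, &nConfigs);
    if (!configs || nConfigs < 1) {
        fprintf(stderr, "No suitable FB config\n");
        return 1;
    }

    /* Allocate a 1024x768 Pbuffer (GPU memory, no on-screen window.) */
    int pbAttribs[] = {
        GLX_PBUFFER_WIDTH, 1024, GLX_PBUFFER_HEIGHT, 768, None
    };
    GLXPbuffer pbuffer = glXCreatePbuffer(dpy, configs[0], pbAttribs);

    /* Bind an OpenGL context to the Pbuffer rather than to a window. */
    GLXContext ctx =
        glXCreateNewContext(dpy, configs[0], GLX_RGBA_TYPE, NULL, True);
    glXMakeContextCurrent(dpy, pbuffer, pbuffer, ctx);

    /* A real application would render its frame here. */
    glClearColor(0.0f, 0.5f, 1.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glFinish();

    /* Read back the finished frame, as VirtualGL does before drawing
     * the pixels into the application's window on the virtual X
     * server. */
    unsigned char *pixels = malloc(1024 * 768 * 4);
    glReadPixels(0, 0, 1024, 768, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    /* Clean up. */
    free(pixels);
    glXMakeContextCurrent(dpy, None, None, NULL);
    glXDestroyContext(dpy, ctx);
    glXDestroyPbuffer(dpy, pbuffer);
    XFree(configs);
    XCloseDisplay(dpy);
    return 0;
}

Since every session gets its own Pbuffer(s) rather than a slice of a
physical display, the display count that nVidia quotes for a given
card simply doesn't enter into it.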
I would suggest a Quadro card rather than a GeForce card, just because
the readback performance is going to be much better (thus there is less
chance that multiple users banging on the card would create a
bottleneck). At least the last time I checked, nVidia was artificially
limiting the readback performance of GeForce cards, although that was a
couple of years ago. I'm hoping other users will chime in with their
personal experiences, but my gut says that something like a Quadro K620
(< $200) or K1200 (< $400) would be more than adequate for the number of
lightweight sessions you need to run. I personally use a K5000, and it
can handle 10 simultaneous VirtualGL sessions blasting 1 million
polygons each at 40 fps. From your description, it doesn't sound like
you'll be doing anything even close to that intense.

On 12/4/15 12:07 AM, Jordan Poppenk wrote:
> Dear VGL devs / users,
>
> I am configuring a headless Linux server to be used by a neuroimaging
> group. I would like to host many TurboVNC + VirtualGL sessions on this
> server - say, 5-10 at a time. While none of these sessions will do
> anything too GPU-intensive - occasional rendering of 3D figures and
> such - I have read that nVidia GeForce cards will typically only
> support 1 "display" at a time, and even many Quadro cards support only
> 2 displays. Does this mean that I can only run 1 TurboVNC+VGL session
> at a time with a GeForce - or am I able to divide video memory evenly
> among the various sessions? Any advice you can offer regarding a
> suitable GPU would be much appreciated.