
#37 Multi-GPU compatibility

Status: closed-wont-fix
Owner: DRC
Labels: VirtualGL (44)
Priority: 5
Updated: 2014-04-28
Created: 2011-12-22
Private: No

Not sure if it's a bug, but:

I have an application that uses multiple GPUs. There are nGPU + 1 display connections: the nGPU connections use :0.n, and the remaining one uses SSH X11 forwarding (node:10.0). Basically, the n GPUs render off-screen, and the extra connection receives the pixels from these GPUs for display.

In such a configuration, vglrun selects proxy compression, I guess because it sees the local display connections. I have to manually run 'vglrun -v -q 100 -c jpeg ./release/bin/eqPly' to get jpeg compression for the visible window on node:10.0.
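
A minimal sketch of the setup described above, assuming two render GPUs (the display names match the report; N_GPUS and the control flow are illustrative, not taken from eqPly):

    // One X connection per GPU for off-screen rendering, plus one
    // SSH-forwarded connection for display.
    #include <X11/Xlib.h>
    #include <cstdio>

    int main()
    {
        const int N_GPUS = 2;                               // assumption: two render GPUs
        Display* render[N_GPUS];
        for (int i = 0; i < N_GPUS; ++i)
        {
            char name[16];
            std::snprintf(name, sizeof(name), ":0.%d", i);  // local screens :0.0, :0.1, ...
            render[i] = XOpenDisplay(name);                 // off-screen render connections
        }
        Display* view = XOpenDisplay("node:10.0");          // SSH-forwarded display for output
        // ... render on render[i], read the pixels back, draw them on 'view' ...
        for (int i = 0; i < N_GPUS; ++i)
            if (render[i]) XCloseDisplay(render[i]);
        if (view) XCloseDisplay(view);
        return 0;
    }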

Discussion

  • DRC

    DRC - 2011-12-22

    From the documentation:

    http://virtualgl.svn.sourceforge.net/viewvc/virtualgl/vgl/trunk/doc/index.html#VGL_COMPRESS

    "If the DISPLAY environment variable begins with : or unix:, then VirtualGL assumes that the X display connection is local. If it detects that the 2D X server is a Sun Ray X Server instance, then it will default to using xv compression. Otherwise, it will default to proxy compression. If VirtualGL detects that the 2D X server is remote, then it will default to using yuv compression if that X server is a Sun Ray X Server instance or jpeg compression otherwise."

    However, this assumes that the app always makes a connection to the same display. If it makes connections to multiple displays, then VirtualGL will choose an image transport based on the first display that is made active in OpenGL (via a call to glXMake[Context]Current()). If that display belongs to one of your sub-renderers, which use local displays, then VGL will choose the X11 transport as a default. Thus, if you want the default behavior to be triggered off of the main renderer rather than the sub-renderers, you need to initialize the main renderer first.
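
    A hedged sketch of that initialization order, assuming one remote main renderer and one local sub-renderer (all names here are placeholders for the application's own objects, not VirtualGL's API):

        #include <GL/glx.h>

        // Because VirtualGL picks its image transport from the first display
        // made current, making the remote display current before any local
        // sub-renderer should trigger the jpeg default.
        void initRenderers(Display* view, Window viewWin, GLXContext viewCtx,
                           Display* local, GLXDrawable localDrawable, GLXContext localCtx)
        {
            glXMakeCurrent(view, viewWin, viewCtx);          // main renderer first: jpeg default
            glXMakeCurrent(local, localDrawable, localCtx);  // sub-renderers afterwards
        }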

    Closing, since my understanding is that this is intended behavior in VirtualGL, but please feel free to add comments if you have further insight or if my understanding is incorrect.

  • DRC

    DRC - 2011-12-22
    • status: open --> closed-invalid
  • DRC

    DRC - 2012-08-23
    • status: closed-invalid --> open
  • DRC

    DRC - 2012-08-23

    Re-opening because I really do want to fix this, but in thinking about it further, there are a couple of issues. The most straightforward way of doing this would be to just query the window's visible state whenever pbwin::readback() is called and return from that function immediately if the window is invisible. The issue that creates, however, is that it introduces a round-trip to the X server on every frame, and I really want to avoid that in VirtualGL because of overhead.
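
    For concreteness, here is a minimal sketch of that per-frame query, assuming a helper called at the top of readback (the names mirror the comment above; this is not VirtualGL's actual code):

        #include <X11/Xlib.h>

        // XGetWindowAttributes() is a synchronous round trip to the X server,
        // which is exactly the per-frame cost described above.
        static bool windowIsViewable(Display* dpy, Window win)
        {
            XWindowAttributes attr;
            if (!XGetWindowAttributes(dpy, win, &attr))
                return false;
            return attr.map_state == IsViewable;
        }

        // Hypothetically, at the top of pbwin::readback():
        //     if (!windowIsViewable(dpy, win)) return;  // skip readback when hidden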

    The other possibility involves interposing XMapWindow(), XMapRaised(), and XMapSubwindows(), walking the window tree below the window being mapped, examining each window to see if it has a corresponding pbwin, and setting a "visible" bit if so. I'm hoping there might be a cleaner way to do that.
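
    A hedged sketch of that interposer, with findPbwin() and setVisible() as hypothetical stand-ins (stubbed here) for VirtualGL's internal bookkeeping:

        #include <X11/Xlib.h>
        #include <dlfcn.h>  // RTLD_NEXT needs _GNU_SOURCE on glibc (g++ defines it by default)

        struct Pbwin;                                           // opaque stand-in
        Pbwin* findPbwin(Display*, Window) { return nullptr; }  // stub: real code consults a hash
        void setVisible(Pbwin*, bool) {}                        // stub: "visible" bit setter

        // Walk the subtree rooted at 'win' and flag every window that already
        // has a corresponding pbwin.
        static void markSubtreeVisible(Display* dpy, Window win)
        {
            if (Pbwin* pb = findPbwin(dpy, win)) setVisible(pb, true);
            Window root, parent, *children = nullptr;
            unsigned int n = 0;
            if (XQueryTree(dpy, win, &root, &parent, &children, &n))
            {
                for (unsigned int i = 0; i < n; ++i)
                    markSubtreeVisible(dpy, children[i]);
                if (children) XFree(children);
            }
        }

        // Interposed entry point; XMapRaised() and XMapSubwindows() would be analogous.
        extern "C" int XMapWindow(Display* dpy, Window win)
        {
            using RealFn = int (*)(Display*, Window);
            static RealFn real = (RealFn)dlsym(RTLD_NEXT, "XMapWindow");
            markSubtreeVisible(dpy, win);
            return real(dpy, win);
        }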

  • DRC

    DRC - 2013-09-29
    • status: open --> closed-wont-fix
  • DRC

    DRC - 2013-09-29

    After further investigation, this appears to be a can of worms. Unfortunately, many apps like to call glXMakeCurrent() with every frame, and of course they usually call glXSwapBuffers() with every frame as well, so checking the window's visibility from either of those functions would still incur a round trip to the X server on every frame. Interposing XMapWindow() and similar functions is therefore the only way to set a "visible" bit in the pbwin instance corresponding to an OpenGL window without incurring such a round trip.

    The problem with interposing XMapWindow() is that it often gets called before the first call to glXMakeCurrent(), so a pbwin instance has not yet been assigned to the window. At that point, we don't really know whether the window will be used for 3D rendering, so it wouldn't be appropriate to assign a pbwin instance to it yet.
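
    Purely as a hypothetical sketch of one way around that ordering problem (none of these names exist in VirtualGL): map state could be tracked independently of pbwin assignment and consulted later, when a pbwin is eventually created.

        #include <X11/Xlib.h>
        #include <unordered_set>

        // Record window IDs at XMapWindow()/XUnmapWindow() time, whether or not
        // a pbwin exists yet; a real implementation would need locking.
        static std::unordered_set<Window> mappedWindows;

        void noteMapped(Window win)   { mappedWindows.insert(win); }
        void noteUnmapped(Window win) { mappedWindows.erase(win); }

        // Later, when glXMakeCurrent() first assigns a pbwin to 'win', the
        // "visible" bit can be initialized without an X server round trip:
        bool windowWasMapped(Window win)
        {
            return mappedWindows.count(win) != 0;
        }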

    For the moment, closing as Won't Fix. Feel free to re-open if any new ideas come to light.

