Faking quad buffers for 3D TV

Help
2012-06-08
2013-11-26
  • Philip Ashmore

    Philip Ashmore - 2012-06-08

    Hi there.
Has anyone tried creating an interlaced (left/right) 2D output by faking/simulating quad buffer support?
    For a 3D HD TV, each buffer would be 1920×540, so the client program would request a stereo visual with this size.
    The other alternative is above/below instead of interlacing, which would simplify reading back raster output.
I'm not an OpenGL expert by any means, but I've seen e.g. stereoquake, a mod for Quake3/ioquake3 which can do side-by-side stereo 3D.

     
  • DRC

    DRC - 2012-06-08

I don't think anyone has tried this, but it seems straightforward enough.  Can you find some source code demonstrating, for instance, how to draw a simple stereo image to such a device?  I presume it would have to be done using OpenGL?  If so, then it seems odd that the client program would have to have explicit knowledge of the device's capabilities, as OpenGL is generally supposed to abstract that.

     
  • Philip Ashmore

    Philip Ashmore - 2012-06-08

    It really is as simple as I'm saying.
    Given any input, the TV can
    * do 2D to 3D conversion (results vary widely)
    * convert 2D interlaced input (left row, right row, left row,…) to 3D
       This halves the vertical resolution so the client program needs to correct the aspect ratio for this.
    * convert 2D "side by side" to 3D
       This halves the horizontal resolution so the client program needs to correct the aspect ratio for this.
    * convert 2D "top/bottom" to 3D
       This halves the vertical resolution so the client program needs to correct the aspect ratio for this.
    * convert checkerboard 2D to 3D
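
    The three spatial packings in the list above can be sketched as plain row/column rearrangements.  This is an illustrative sketch only (frames modeled as lists of pixel rows), not anything from VirtualGL:

    ```python
    # Sketch of the three spatial 3D packings described above.  A frame is
    # modeled as a list of rows, each row a list of pixel values.  For
    # top/bottom and interleaved, the eye views are assumed to be
    # pre-squashed to half height; for side-by-side, to half width.

    def top_bottom(left, right):
        """Stack the two eye views vertically."""
        return left + right

    def side_by_side(left, right):
        """Place the two eye views next to each other, row by row."""
        return [l_row + r_row for l_row, r_row in zip(left, right)]

    def interleaved(left, right):
        """Alternate rows: left eye on even lines, right eye on odd lines."""
        out = []
        for l_row, r_row in zip(left, right):
            out.append(l_row)
            out.append(r_row)
        return out
    ```

    All three produce a single 2D frame of the original resolution, which is why the client program has to correct the aspect ratio as noted above.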

    If you go to YouTube, you can filter 3D videos with yt3d:enable=true
    When YouTube detects this tag it adds a "3D" button that allows you to choose several ways to view the video
    Here are some examples

    YouTube in 3D: http://www.youtube.com/watch?v=5ANcspdYh_U&feature=plcp

    StereoQuake:  http://www.youtube.com/watch?v=tXvirxRK-Ww

    Philip

     
  • DRC

    DRC - 2012-06-08

    OK, that answers my question about OpenGL, i.e. it doesn't require it.  In that case, this would be something that could be handled by the server, in much the same way that VirtualGL currently handles red/cyan stereo.  That is, in the process of reading back the stereo images from the 3D graphics card, the server would combine them into one interleaved mono image.  Very convenient, because that means it should theoretically work with any X server, including TurboVNC, whereas if I had to send stereo image pairs to the client, the only way to do that is to have a 3D accelerator on the client and to use the VGL Transport.
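
    The red/cyan combine described above amounts to a per-pixel channel merge during readback.  A minimal sketch of that idea (pixels as (r, g, b) tuples; the function names are illustrative, not VirtualGL's actual internals):

    ```python
    # Sketch of red/cyan anaglyph merging: take the red channel from the
    # left-eye image and the green/blue channels from the right-eye image,
    # producing one mono frame viewable with red/cyan glasses.

    def anaglyph(left_px, right_px):
        """Combine one left-eye and one right-eye pixel."""
        return (left_px[0], right_px[1], right_px[2])

    def combine_frames(left, right):
        """Merge two equal-sized frames (lists of rows of pixels)."""
        return [[anaglyph(lp, rp) for lp, rp in zip(l_row, r_row)]
                for l_row, r_row in zip(left, right)]
    ```

    An interleaved-row combine for a 3D TV would work the same way structurally, just selecting whole rows instead of color channels.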

    I guess I'm still confused as to how the TV separates left and right eyes.  Presumably it has to set hard boundaries in terms of its real estate?  That is, anything on the left-hand side of the screen, or anything occupying the even lines gets translated into the left image, etc.?  Thus, I suppose it would only work right if the 3D application window was full-screen, or else the window borders and such would screw things up.

    The major concern in terms of 3D applications would be that, effectively, we would have to throw away half of the resolution, so probably the only real market for this would be games.  It's a pretty easy mod on the server, though, and if you have the hardware to try it out, I would be willing to throw it into a pre-release build for you to play with.

     
  • Philip Ashmore

    Philip Ashmore - 2012-06-08

    Your guesses about how the TV separates left and right eyes are correct.

    It technically doesn't throw away half the resulotion - both eyes (when combined) result in 1080p.

    There are parts where the client and the server are involved.

    I'm talking about two things
    * allowing an ordinary PC to connect to a 3D TV and show YouTube 3D videos
    * VirtualGL making client apps think it's talking to 3D hardware through quad-buffers but displaying it as side by side,
       top/bottom or interlaced on the screen or HDMI port, like YouTube 3D

    I'm using VirtualGL now with nVidia - "optirun".
    Maybe if there was another program like that, say

       3d-2d <3d-2d-args> program args

    that made the program think that it was talking to an OpenGL with quad buffers and rendered them on the screen as per
    3d-2d-args
    * -top-bottom
    * -left-right
    * -right-left
    * -interlaced
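
    A hypothetical sketch of what such a 3d-2d launcher might look like, mirroring how vglrun injects an interposer via LD_PRELOAD.  The library name (lib3d2dfaker.so) and environment variable (FAKE_STEREO_MODE) are invented for illustration; no such library exists:

    ```python
    # Hypothetical "3d-2d" launcher sketch.  It parses an output-mode flag,
    # then would exec the wrapped program with a (fictional) interposer
    # library preloaded, passing the mode through the environment.
    import os

    MODES = {"-top-bottom": "tb", "-left-right": "lr",
             "-right-left": "rl", "-interlaced": "i"}

    def parse(argv):
        """Split [mode flags] program args... into (mode, command)."""
        mode = "i"  # default: interlaced
        i = 0
        while i < len(argv) and argv[i] in MODES:
            mode = MODES[argv[i]]
            i += 1
        return mode, argv[i:]

    def launch(argv):
        """Exec the wrapped program with the interposer preloaded."""
        mode, command = parse(argv)
        env = dict(os.environ)
        env["FAKE_STEREO_MODE"] = mode          # hypothetical variable
        env["LD_PRELOAD"] = "lib3d2dfaker.so"   # hypothetical interposer
        os.execvpe(command[0], command, env)
    ```

    Usage would be e.g. `3d-2d -interlaced glxspheres -s`.  The hard part, of course, is the interposer library itself, not the launcher.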

    Philip

     
  • Philip Ashmore

    Philip Ashmore - 2012-06-09

    You don't need the hardware to try it out.
    All that is needed is the 3d-2d program that makes the program passed as a parameter think it's talking to an OpenGL implementation that has quad buffer support.
    The display would show left/right views as left/right, top/bottom or interlaced just like the YouTube 3D videos do.

     
  • DRC

    DRC - 2012-06-10

    To be clear, the 3d-2d program you're proposing would require the same sort of interposer architecture as VirtualGL, so why not use VirtualGL?  What I meant in terms of testing was that it would be nice to verify that the images actually appear in stereo as intended.

     
  • Philip Ashmore

    Philip Ashmore - 2012-06-10

    Yes, VirtualGL looks like a good fit to me too.

    I'm using "bumblebee" for nVidia Optimus support, which uses VirtualGL.
    I have VirtualGL 2.3.1-3, from the suwako.nomanga.net Debian repository, as per the bumblebee install instructions.

    I'm not familiar with it development-wise, so if/when you have that "pre-release build for you to play with" do let me know.

    After finally downloading and reading the VirtualGL documentation, it looks like this could be done with some more options to the vglrun program: -left-right, -left-right-interlaced, -right-left-interlaced, -top-bottom, -top-bottom-interlaced, -bottom-top-interlaced, although any one of these would do for me.

       optirun vglrun +v -st rc glxspheres -s

    reports:
    NOTICE: Replacing dlopen("/lib/x86_64-linux-gnu/libdl.so.2") with dlopen("libdlfaker.so")
    NOTICE: Replacing dlopen("/lib/x86_64-linux-gnu/libdl.so.2") with dlopen("libdlfaker.so")
    NOTICE: Replacing dlopen("/lib/x86_64-linux-gnu/libdl.so.2") with dlopen("libdlfaker.so")
    NOTICE: Replacing dlopen("/lib/x86_64-linux-gnu/libdl.so.2") with dlopen("libdlfaker.so")
    NOTICE: Replacing dlopen("/lib/x86_64-linux-gnu/libdl.so.2") with dlopen("libdlfaker.so")
    NOTICE: Replacing dlopen("/lib/x86_64-linux-gnu/libdl.so.2") with dlopen("libdlfaker.so")
    Polygons in scene: 62464
    Shared memory segment ID for vglconfig: 15237128
    VirtualGL v2.3.1 64-bit (Build 20111220)
    Opening local display :8
    WARNING: VirtualGL attempted and failed to obtain a Pbuffer-enabled
        24-bit visual on the 3D X server :8.  This is normal if
        the 3D application is probing for visuals with certain capabilities,
        but if the app fails to start, then make sure that the 3D X server is
        configured for 24-bit color and has accelerated 3D drivers installed.
    ERROR (596): Could not obtain RGB visual with requested properties

    I believe optirun already replaces dlopen("/lib/x86_64-linux-gnu/libdl.so.2") with dlopen("libdlfaker.so").

    I'll check this page daily for updates.

     
  • DRC

    DRC - 2012-06-13

    The code has been checked into trunk and documented.  The VGL_STEREO option (or 'vglrun -st') now takes additional options:  i for interleaved, tb for top/bottom, and ss for side-by-side.  Look forward to your feedback.

     
  • Philip Ashmore

    Philip Ashmore - 2012-06-14

    I built + installed virtualgl from trunk using libjpeg-turbo_1.2.0_amd64.deb from SourceForge.

    Here's what I tried, and the result:

    $ /opt/VirtualGL/bin/vglrun +v -st left /opt/VirtualGL/bin/glxspheres -s
    Polygons in scene: 62464
    Shared memory segment ID for vglconfig: 26640390
    VirtualGL v2.3.1 64-bit (Build 20111220)
    Opening local display :0
    NOTICE: Replacing dlopen("libGL.so.1") with dlopen("librrfaker.so")
    WARNING: VirtualGL attempted and failed to obtain a Pbuffer-enabled
        24-bit visual on the 3D X server :0.  This is normal if
        the 3D application is probing for visuals with certain capabilities,
        but if the app fails to start, then make sure that the 3D X server is
        configured for 24-bit color and has accelerated 3D drivers installed.
    ERROR (593): Could not obtain RGB visual with requested properties

    Am I using it wrong?

     
  • Philip Ashmore

    Philip Ashmore - 2012-06-14

    I created an ldconfig file /etc/ld.so.conf.d/virtualgl.conf containing /opt/VirtualGL/lib

    Here's that test again:

    $ /opt/VirtualGL/bin/vglrun +v -st left /opt/VirtualGL/bin/glxspheres -s
    Polygons in scene: 62464
    Shared memory segment ID for vglconfig: 9568261
    VirtualGL v2.3.80 64-bit (Build 20120614)
    Opening local display :0
    NOTICE: Replacing dlopen("libGL.so.1") with dlopen("librrfaker.so")
    WARNING: VirtualGL attempted and failed to obtain a Pbuffer-enabled
        24-bit visual on the 3D X server :0.  This is normal if
        the 3D application is probing for visuals with certain capabilities,
        but if the app fails to start, then make sure that the 3D X server is
        configured for 24-bit color and has accelerated 3D drivers installed.
    ERROR (593): Could not obtain RGB visual with requested properties

     
  • DRC

    DRC - 2012-06-14

    Your 3D graphics card has to support stereo.  Try running glxspheres -s on your root display.  As soon as that works properly, then the above will work properly.

     
  • Philip Ashmore

    Philip Ashmore - 2012-06-14

    There's the problem. I knew there was some assumption hiding between the lines.

    This is the exact point of my post - "Faking quad buffers for 3D TV" - using a 2D video card - one you can watch YouTube 3D with.

    If I can watch 3D videos this way, and VirtualGL can fake a 3D stereo video card, then objective achieved.

    Are you saying I still need a stereo video card even for "vglrun -st left" ?

     
  • DRC

    DRC - 2012-06-15

    Ugh.  Well, there went 6 hours of my life that I'll never get back.  It was not clear from your posts that that was what you really wanted.  You have to understand that, in order for a stereo OpenGL application to work, it has to have somewhere in which to render both the left and right eye views.  If you're talking about genuinely "faking it" into thinking it has a left and a right buffer when the rendering hardware doesn't in fact support that, then the question arises:  OK, so where do you actually put the left and right buffers if not on the 3D hardware?  The only answer is in main memory, which means that 3D rendering could not be hardware-accelerated.  We could also, for instance, store the right buffer in a separate Pbuffer or FBO.  OK, but then we'd end up having to interpose a ton of new OpenGL commands in order to always intercept rendering that was intended for the right buffer and send it to our fake right buffer.  We'd also end up having to fake the visuals on the server side to make the 3D application believe that one is available with stereo support, when in fact none is.  Complete nastiness.  I wouldn't touch it with a 100-foot pole.

    Sun actually tried to do this sort of stereo fakery in their proprietary system that preceded their adoption of VirtualGL.  It was fairly complicated and still didn't fully solve the problem.  Ultimately, we just decided to require stereo Pbuffers rather than mess with faking them.

    What I thought you were talking about was faking quad-buffered rendering on the *client*, which is in fact what VirtualGL already does to support anaglyphic stereo (and how the passive stereo stuff is now implemented as well.)  This takes away any requirement for quad-buffered stereo on the client, but stereo is still required on the 3D X Server.

    It may seem non-intuitive that -st left would still require a stereo visual on the server, but understand that the app is behaving fundamentally differently when it has a stereo visual available vs. not.  The left eye buffer is not the same thing it would render if it was rendering a monographic image.  -st left and -st right are mainly testing tools.  No one really uses those in production.

     
  • Philip Ashmore

    Philip Ashmore - 2012-06-15

    I'm sorry we got off on the wrong foot, but I assumed you read "Has anyone tried creating an interlaced(left/right) 2D output by faking/simulating quad buffer support?" - the second line of my first post.

    Time wasted here too, but let's move on.

    My understanding of OpenGL is rusty/incomplete, but what I was thinking was that, instead of using Pbuffers/FBOs, we simply make the 2D front/back buffers twice as high, or twice as wide.
    When the client requested a buffer switch, VirtualGL would swap the existing buffer with its non-writable/hidden part and let the app carry on - no need to interpose lots of calls.
    When it came time to render, you've got all the data you need - time to interlace, or for side-by-side or top-bottom visuals, you've already got the buffer data in its finished form.
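
    A toy model of that double-height-buffer idea, purely to make the proposal concrete (it deliberately ignores the readback and clipping problems raised elsewhere in this thread):

    ```python
    # Toy model of the idea above: one buffer twice the normal height, with
    # a visible half and a hidden half that trade places on each "buffer
    # switch".  Illustrative only - not VirtualGL code.

    class DoubleHeightBuffer:
        def __init__(self, height, width):
            self.rows = [[0] * width for _ in range(2 * height)]
            self.height = height
            self.writing_top = True  # which half the app currently draws into

        def writable_rows(self):
            """Row indices the app may currently render into."""
            start = 0 if self.writing_top else self.height
            return range(start, start + self.height)

        def swap(self):
            """The 'buffer switch': visible and hidden halves trade places."""
            self.writing_top = not self.writing_top
    ```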

     
  • DRC

    DRC - 2012-06-15

    Huh?  No, you can't just double the size of a monographic image and magically make it a stereographic image.  The 3D images are built up from left and right eye buffers, and the application has to render each view separately.  Thus, the application has to be specifically stereo-aware.  The application decides which buffer it wants to draw to (front-left, front-right, back-left, or back-right, which is where the term "quad-buffered" comes from.)  Thus, the drawable (window, pixmap, or Pbuffer) must have the correct number of buffers.  I've already explained the pitfalls of trying to simulate this.
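
    To make the buffer layout DRC describes concrete, here is a minimal conceptual model of a quad-buffered drawable: four distinct buffers, with the application selecting which one receives rendering, much as an OpenGL app does with glDrawBuffer(GL_BACK_LEFT) / glDrawBuffer(GL_BACK_RIGHT).  This is a model of the concept, not OpenGL code:

    ```python
    # Conceptual model of a quad-buffered drawable: front/back x left/right.
    # The application is stereo-aware and picks a draw target explicitly.

    class QuadBufferedDrawable:
        BUFFERS = ("front_left", "front_right", "back_left", "back_right")

        def __init__(self):
            self.buffers = {name: [] for name in self.BUFFERS}
            self.draw_target = "back_left"

        def set_draw_buffer(self, name):
            """Analogue of glDrawBuffer(): choose where rendering goes."""
            assert name in self.BUFFERS
            self.draw_target = name

        def draw(self, primitive):
            self.buffers[self.draw_target].append(primitive)

        def swap(self):
            """Back buffers become front buffers, one pair per eye."""
            for eye in ("left", "right"):
                self.buffers["front_" + eye] = self.buffers["back_" + eye]
                self.buffers["back_" + eye] = []
    ```

    The point is that the left and right views are separate render targets the app must fill independently; a doubled-up mono buffer gives it nowhere to direct the second eye's rendering.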

    Suffice it to say that your original idea is dead in the water.  VirtualGL can now theoretically output to a 3D TV, although I have no way to test that.  Maybe that will be useful to someone (probably gamers more so than anyone else.)  If that doesn't help you, then there isn't anything else I can do here.

     
  • Philip Ashmore

    Philip Ashmore - 2012-06-17

    Doubling the size doesn't mean twice the resolution if you're delivering half the resolution to the left and right eyes.
    As for "VirtualGL can now theoretically output to a 3D TV", YouTube can do that now.

    I had a look on the web about the graphics chip in my laptop - lspci shows " NVIDIA Corporation GF108  (rev ff)".
    It's in a Sony 3D laptop
    http://www.zdnet.com/blog/computers/ces-2011-sony-debuts-vaio-f-series-3d-laptop-with-new-nvidia-geforce-gt-540m-graphics/4690

    So quad buffer support has nothing to do with the graphics chip - it's all about the software.

    For the last time, here's what I want to do

    [diagram: inline images missing from the archived post]
    and this is how YouTube 3D works, for reference
    [diagram: inline images missing from the archived post]

    You can substitute "l+r" (left and right) with "t+b" (top and bottom) or "i" (interlaced); they just carve up the same 2D real estate in different ways.

    My original idea is based on what you can do with YouTube 3D - I'm sorry you think that's dead in the water.

    I've wasted enough time on this - and you can read that any way you like too.

     
  • DRC

    DRC - 2012-06-17

    Yes, YouTube can output to a 3D TV, but that's not the same thing as running an actual stereo 3D application.  YouTube videos are pre-generated content.  Stereo 3D applications, on the other hand, are generating the stereographic views in real time.  To do that, the OpenGL implementation has to present the application with a separate left and right buffer, because the application is well within its rights to demand that every time it renders to the left or right buffer, it can immediately read back a pixel-accurate representation of what it just rendered.  Further, it's well within its rights to copy pixels between the left and right eyes, etc.  That's why the left and right eyes have to remain separate on the graphics hardware.

    Either that, or the software (VirtualGL, in the hypothetical case) would have to jump through a serious number of hoops to simulate separate left and right buffers when, behind the scenes, only one buffer is being used (reference previous comments regarding Sun's solution that faked stereo Pbuffers.)

    If all you want to do is just generate a 3D TV-compatible view straight from an OpenGL program, nVidia's drivers can already do that, but of course you have to have nVidia hardware, and of course that hardware does quad buffering behind the scenes.  If you want to generate a 3D TV-compatible view and don't have a driver set that supports it, then you can now use VGL to do that (remotely, even), but you still need a quad-buffered back-end renderer so that the application has a left and right eye buffer to render to.  Quad buffering is not necessarily a function of the graphics chip, but it certainly is a function of the 3D driver and is implemented at a level below VirtualGL.

    I'm really not trying to pull rank on you here or to sound arrogant.  I'm just saying that I have 16 years of experience in this solution space and have spent 8 of those years developing VirtualGL.  13 of those 16 years were spent working for Fortune 500 companies, and VirtualGL was actively sold as a product by one of those companies (Sun.)  I don't know everything, nor am I infallible, but I do know OpenGL and I really know VirtualGL, and as an expert in that solution space, I understand what you're trying to do, but I don't think what you're trying to do is a good fit for VirtualGL.  If you think I'm wrong, well, the code is there for you to review, and I welcome you to review it and submit a patch that does what you need.

     
  • Philip Ashmore

    Philip Ashmore - 2012-06-17

    Wow, so this flogged horse may not be quite dead after all.

    From "If you want to generate a 3D TV-compatible view and don't have a driver set that supports it, then you can now use VGL to do that (remotely, even), but you still need a quad-buffered back-end renderer",

    if VGL caches the buffers locally then one could use VGL as the back-end renderer too - it would only need to fake quad buffers without the read-back ability using FBOs or pbuffers.

    Yes it would be relatively slow, but for the likes of Quake3 or simpler 3D stereo OpenGL programs, it would work.

    I'll stop making suggestions about VGL at this point, as
    1. I don't have the detailed knowledge about OpenGL/VGL you do
    2. I should review the code in more detail
    3. things that sound possible from a bird's-eye view don't always turn out that way

     
  • Philip Ashmore

    Philip Ashmore - 2012-06-18

    PS - If VGL created two windows to handle left/right then it could read front/back pixel data all it liked.

    I just like the idea of using a single window so that when the app was finished rendering you would already have the image in its final form for left/right or top/bottom, although that would raise clipping issues.

     
  • DRC

    DRC - 2012-06-18

    Sorry, but your understanding is fundamentally flawed, and my attempts to explain the problem to you don't seem to be getting through, so I give up.

     
  • Philip Ashmore

    Philip Ashmore - 2012-06-18

    Sorry if I didn't specify it clearly - I meant that VGL should create two 2D contexts every time it's asked for a 3D visual, one for the left, one for the right.

    If you don't understand that then I give up too.

     
  • DRC

    DRC - 2012-06-18

    I have already explained why that is not a tenable proposition for VirtualGL.

     
