virtualgl-users Mailing List for VirtualGL (Page 8)
3D Without Boundaries
Brought to you by: dcommander
Archive message counts by month:

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2007 | 1 | 5 | 7 | 7 | 1 | 10 | 5 | 4 | 16 | 2 | 8 | 3 |
| 2008 | 6 | 18 | 6 | 5 | 15 | 6 | 23 | 5 | 9 | 15 | 7 | 3 |
| 2009 | 22 | 13 | 15 | 3 | 19 | 1 | 44 | 16 | 13 | 32 | 34 | 6 |
| 2010 | 5 | 27 | 28 | 29 | 19 | 30 | 14 | 5 | 17 | 10 | 13 | 13 |
| 2011 | 18 | 34 | 57 | 39 | 28 | 2 | 7 | 17 | 28 | 25 | 17 | 15 |
| 2012 | 15 | 47 | 40 | 15 | 15 | 34 | 44 | 66 | 34 | 8 | 37 | 23 |
| 2013 | 14 | 26 | 38 | 27 | 33 | 67 | 14 | 39 | 24 | 59 | 29 | 16 |
| 2014 | 21 | 17 | 21 | 11 | 10 | 2 | 10 | | 23 | 16 | 7 | 2 |
| 2015 | 7 | | 26 | | 2 | 16 | 1 | 5 | 6 | 10 | 5 | 6 |
| 2016 | | 6 | | 2 | | 6 | 5 | | 17 | 6 | 2 | 4 |
| 2017 | 3 | 25 | 4 | 3 | 4 | 10 | 1 | 8 | | | | |
From: DRC <dco...@us...> - 2015-03-10 12:33:17
|
I can't say I'm terribly surprised. Maybe I'm wrong and there is some good technical reason why they can't enable stereo Pbuffers, but nVidia also has a habit of disabling features in their drivers purely for marketing reasons. Theoretically, it might be possible to make VirtualGL emulate stereo using two Pbuffers or even two FBOs on the same Pbuffer, but it would make the solution significantly more complex. Getting it right would cost a lot more in labor than a QuadroFX costs, and quad-buffered stereo is just not popular enough to justify re-architecting VirtualGL.
Perhaps others can chime in as to whether Grid adapters work properly.
Out of curiosity, what were you planning to use as a client? That's the other hell of quad-buffered stereo-- you need a card on the client as well that can draw in stereo, which probably means another Quadro.
On 3/10/15 5:26 AM, Antony Cleave wrote:
> Thanks for this, looking at the options it looks like for stereo we are
> indeed stuck with the Quadro cards. All stereo options I've tried seem
> to get disabled on the M2090 and I don't have a Grid card to try and
> it's not exactly cheap so I'm not going to pop down to PC world and pick
> one up on the off chance it works.
|
|
From: Antony C. <ant...@cl...> - 2015-03-10 10:34:45
|
Thanks for this, looking at the options it looks like for stereo we are
indeed stuck with the Quadro cards. All stereo options I've tried seem
to get disabled on the M2090 and I don't have a Grid card to try and
it's not exactly cheap so I'm not going to pop down to PC world and pick
one up on the off chance it works.
Antony
On 04/03/2015 18:33, DRC wrote:
> I personally use a Quadro FX 5000, so my experience with headless
> configurations is limited. What I will say, however, is that there
> shouldn't be any reason why nVidia couldn't provide stereo Pbuffer
> support in a headless configuration, since VirtualGL doesn't actually
> need stereo display on the server-- it just needs a quad-buffered
> Pbuffer. Now whether or not nVidia's drivers actually allow that is a
> different story. If they don't, then that's an arbitrary decision on
> their part that isn't based on any specific technical requirement.
>
> The only thing I could suggest would be to fool with various settings
> for 'Option "Stereo"' in the xorg.conf file. The default option
> ("onboard stereo") is going to check for hardware stereo support (i.e.
> the stereo DIN connector) on your graphics card, but I think you should
> be able to set it to one of the passive stereo options to get around
> that (see /usr/share/doc/NVIDIA_GLX-1.0/README.txt under Appendix B.)
> For instance, my xorg.conf has 'Option "Stereo" "8"'.
>
> On 3/4/15 11:51 AM, Antony Cleave wrote:
>> Hi
>>
>> I'm wondering how to do quad buffered Stereoscopic rendering with
>> virtualgl and which GPU I'd need.
>>
>> I will be configuring virtualgl on a server in a machine room which is
>> part of a HPC system. Some of the nodes in the cluster will have K80M
>> cards in and one of those would be ideal. If that will not work then
>> there are alternative servers which can be used to accommodate suitable GPUs
>>
>> All of these servers will be headless and I've been told by nvidia that
>> their GPUs do not support stereo when no physical heads are available
>> and I'm waiting for them to confirm if this is a hardware limitation or
>> not i.e. if I HAVE to use a quadro card or whether a GRID K2 card might
>> be usable. I was wondering if there was anyone on the list who has ever
>> tried it and might already know.
>>
>> I currently don't have the exact hardware to test, I've been using good
>> old Tesla M2090 GPUs for the majority of my VirtualGL testing and they
>> work perfectly for everything else but when I tried to configure these
>> to accept some kind of stereo option it refuses point blank with "Stereo
>> not supported with NoScanout; disabling Stereo" which I assume is a
>> show-stopper.
|
|
From: DRC <dco...@us...> - 2015-03-06 17:46:00
|
On 3/6/15 6:48 AM, Patric Schmitz wrote:
>>> yuv: does not work, failing with
>>> [VGL] ERROR: in operator=--
>>> [VGL] 437: Invalid compression type
>>> Any idea what's happening there?
>>>
>>> I want to know whether these are reasonable rates/limitations,
>>> especially the jpeg transport which seems CPU-bound (using 140% cpu
>>> in top). Increasing -np did not change anything significantly.
>>
>> JPEG probably will be CPU-limited on a fast network, but you're also
>> seeing a lot of unnecessary CPU time because you're running an
>> unrealistic benchmark.
>
> I see, this is on a very fast network so YUV might be an option.
> Any idea about the issue above?
Were you building the VirtualGL faker from source? In looking at the
source, this seems to be the culprit:
switch(f.hdr.compress)
{
	case RRCOMP_RGB: compressRGB(f); break;
	case RRCOMP_JPEG: compressJPEG(f); break;
#ifdef USEXV
	case RRCOMP_YUV: compressYUV(f); break;
#endif
	default: _throw("Invalid compression type");
}
I will have to investigate further, but I think that "#ifdef USEXV" is
probably erroneous in this case, since you can build the server-side
components with YUV compression capability even if you aren't building
the client with XV support. But the easiest workaround for now is just
to make sure you are building the whole solution with XV support
(install the libXv development package. VirtualGL will automatically
disable XV support in the build if it can't find the headers/libs for
libXv.)
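As a rough sketch of that workaround (package and directory names here are assumptions that vary by distribution; the Debian/Ubuntu equivalent of the package would be libxv-dev):
# Red Hat/CentOS: install the libXv headers/libs so the build enables XV/YUV support
yum install libXv-devel
# then rebuild VirtualGL out of tree ("VirtualGL" is a placeholder for the source directory)
cd VirtualGL && mkdir build && cd build
cmake -G "Unix Makefiles" ..
make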
> Well the Gentoo ebuild as well as the Arch Linux package deploy the
> package as usual below the /usr hierarchy. There were some
> hardcoded /opt paths in the cmake files as well as the scripts
> themselves which needed to be made a bit more generic. Also for
> some reason I had to supply the absolute paths for the .so names in
> LD_PRELOAD, I am not sure why yet, but I will find that out.
>
> Is there any particular reason why you chose to deploy stuff in /opt
> instead of with any other regular package below the /usr hierarchy?
> FHS says on opt "This directory should contain add-on packages that
> contain static files". So the question would be why virtualgl is
> considered an "add-on" package. Normally this is used for 3rd party
> (ie not belonging to the distro) tools which are kind of
> self-contained folders not split up into bin/ include/ lib/ etc.
> But since the cmake install goes nicely in that fashion one might
> consider dropping the /opt specialities altogether. Well or just
> make both install locations work more generically. I will send you
> the patches I needed to do.
There shouldn't be any hard-coding in the latest release, and the latest
version of vglrun can accommodate installing the faker under any
arbitrary directory.
Historically, VirtualGL was distributed under /opt because of Sun
packaging requirements (VGL used to be a Sun product.) On Solaris, only
operating system packages were allowed to install anything under /usr.
We had to install under /opt/SUNWvgl (because of package naming
requirements) and put a symlink from /opt/VirtualGL --> /opt/SUNWvgl.
On Linux, we provided a similar interface to /opt/VirtualGL so it could
work similarly to the Solaris version, but the faker has always been
installed under /usr/lib* on Linux, because that's the only way it can
be loaded into setuid binaries (such as VirtualBox, for instance.)
I just examined the build system and can find no hard-coded instances of
/opt that aren't easily overridden using CMake variables. There should
be no reason why a build that overrides CMAKE_INSTALL_PREFIX to /usr
wouldn't work fine. The only thing that would still refer to
/opt/VirtualGL in such a build would be the documentation.
Please send me a specific list of files that you think should be
changed. In all likelihood, there's a good reason why they're doing
what they're doing. I've visited this issue several times before.
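For what it's worth, a /usr-prefix build along the lines described above comes down to overriding the standard CMake variable (directory names are placeholders; this is a sketch, not the officially supported packaging layout):
cd VirtualGL && mkdir build && cd build
cmake -G "Unix Makefiles" -DCMAKE_INSTALL_PREFIX=/usr ..
make
make install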
|
|
From: Patric S. <bz...@ao...> - 2015-03-06 12:48:56
|
On Thu, 05 Mar 2015 15:33:13 -0600 DRC <dco...@us...> wrote: > On 3/5/15 5:55 AM, Patric Schmitz wrote: > Do not use GLXgears as a benchmark. It isn't an OpenGL benchmark. It > uses such a tiny geometry and screen size that, really, all it's > measuring when you run it on a local display is the CPU speed and bus > bandwidth. ... > > Secondly, always disable frame spoiling (vglrun -sp) when running > benchmarks in VirtualGL. Otherwise, you are just measuring how fast > VirtualGL can read back frames on the server, not how fast it can > actually send them to the client. (This is all in the VirtualGL User's > Guide, by the way.) Well my intent was actually to get a benchmark of the VGL transport and not the rendering performance, so that's fine. Didn't get the part about frame spoiling though, that's of course what I actually want to measure. And now I also found the section in the manual, thanks for the hint! > > X11 forwarding (no virtualgl): ~4500 - ~11000 > > here the window stays black. Might that be related to the remote > > GLX implementation being nvidia and the local is AMD? If I test > > with another NVIDIA client, it doesn't work at all, failing with > > X Error of failed request: BadValue (integer parameter out of > > range for operation). Is that expected behavior? > > Obviously diagnosing bugs in remote display solutions that don't use > VirtualGL is out of scope for a VirtualGL support list, but I can tell > you that, yes, it's entirely possible that this is due to a mismatch in > client vs. server GLX. It is unclear from your description whether you > are using SSH to forward X11, but if so, that could be the issue as > well. SSH won't let you forward certain X extensions (including GLX) > unless you establish the connection with 'ssh -Y' instead of 'ssh -X'. I was using -Y so I suspect it is the different GLX versions. Will try with a machine sharing the same hardware and driver versions and see if it works from there. > > yuv: does not work, failing with > > [VGL] ERROR: in operator=-- > > [VGL] 437: Invalid compression type > > Any idea what's happening there? > > > > I want to know wether these are reasonable rates/limitations, > > especially the jpeg transport which seems CPU-bound (using 140% cpu > > in top). Increasing -np did not change anything significantly. > > JPEG probably will be CPU-limited on a fast network, but you're also > seeing a lot of unnecessary CPU time because you're running an > unrealistic benchmark. I see, this is on a very fast network so YUV might be an option. Any idea about the issue above? > ... > Let me simplify it-- ultimately, what you want is "local-like" > performance. You can probably achieve that on a gigabit pipe using > either RGB or YUV encoding with the VGL Transport, but on 100 Mbit or > less, you will absolutely never get more than a few frames/second unless > you use JPEG. The difference between JPEG and RGB or YUV on a 100 Mbit > pipe will be the difference between 50 Megapixels/second and 3-5 > Megapixels/second. Thanks for the clarification. > > Also, what is the preferred way of contributing, just send patches > > here? I had to fix some stuff in the build system as well as the > > scripts to get everything up and running. > > If you're just talking about one or two patches, then you can send > patches to the virtualgl-devel list. Otherwise, please create a > SourceForge patch tracker item. What specifically wasn't working? 
Bear > in mind that a bunch of companies and research institutions are > building/testing VirtualGL actively, so if there was anything major > wrong with the build system, it would have surfaced before now. Well the Gentoo ebuild as well as the Arch Linux package deploy the package as usual below the /usr hierarchy. There were some hardcoded /opt pathes in the cmake files as well as the scripts themselves which needed to be made a bit more generic. Also for some reason I had to supply the absolute paths for the .so names in LD_PRELOAD, I am not sure why yet, but I will find that out. Is there any particular reason why you chose to deploy stuff in /opt instead of with any other regular package below the /usr hierarchy? FHS says on opt "This directory should contain add-on packages that contain static files". So the question would be why virtualgl is considered an "add-on" package. Normally this is used for 3rd party (ie not belonging to the distro) tools which are kind of self-contained folders not split up into bin/ include/ lib/ etc. But since the cmake install goes nicely in that fashion one might consider dropping the /opt specialities altogether. Well or just make both install locations work more generically. I will send you the patches I needed to do. Thanks for your help! -- Patric Schmitz <bz...@ao...> |
|
From: DRC <dco...@us...> - 2015-03-05 21:33:28
|
On 3/5/15 5:55 AM, Patric Schmitz wrote: > I am trying VirtualGL for remote rendering on our visualization > cluster and have it running by now. I am streaming to a GNU/Linux > workstation using the VGL transports as of yet. The remote has a > NVIDIA GeForce GTX 780 GPU. > > I measured the following framerates with glxgears: > > Locally, on my client machine: ~4500 fps > > Remote, without any forwarding or transport: ~27500 Do not use GLXgears as a benchmark. It isn't an OpenGL benchmark. It uses such a tiny geometry and screen size that, really, all it's measuring when you run it on a local display is the CPU speed and bus bandwidth. And the results have absolutely no meaning in a remote display setting. I mean, 27,000 fps?! What does that even mean?! Most people can't perceive more than 30 fps and definitely no more than 60. Secondly, always disable frame spoiling (vglrun -sp) when running benchmarks in VirtualGL. Otherwise, you are just measuring how fast VirtualGL can read back frames on the server, not how fast it can actually send them to the client. (This is all in the VirtualGL User's Guide, by the way.) The correct benchmark to use when measuring frame rates for VirtualGL is: /opt/VirtualGL/bin/glxspheres64 If you want a fair comparison of remote X11 vs. VirtualGL, then I suggest the following comparisons: Run this in Remote X11: /opt/VirtualGL/bin/glxspheres64 -m Run this in VirtualGL: vglrun -sp /opt/VirtualGL/bin/glxspheres64 -m > X11 forwarding (no virtualgl): ~4500 - ~11000 > here the window stays black. Might that be related to the remote > GLX implementation being nvidia and the local is AMD? If I test > with another NVIDIA client, it doesn't work at all, failing with > X Error of failed request: BadValue (integer parameter out of > range for operation). Is that expected behavior? Obviously diagnosing bugs in remote display solutions that don't use VirtualGL is out of scope for a VirtualGL support list, but I can tell you that, yes, it's entirely possible that this is due to a mismatch in client vs. server GLX. It is unclear from your description whether you are using SSH to forward X11, but if so, that could be the issue as well. SSH won't let you forward certain X extensions (including GLX) unless you establish the connection with 'ssh -Y' instead of 'ssh -X'. > Anyways using VirtualGL with the different compression types yields: > > proxy (X11): ~240 (with 370Mb/s of the 1000Mb/s LAN connection used) > > rgb: ~3300 (880Mb/s used) > > jpeg: ~3100 (110Mb/s used) > > xv: ~19300 BUT the client window stays black again. What can be > reasons for this, and can I expect such a framerate realistically > in case it works (on a high bandwidth connection)? Again, the frame rates in GLXgears are completely bogus. Use a real benchmark. You may be running into the same SSH permission issue with XV as described above, so try 'ssh -Y'. However, please note that the XV Transport is not intended for remote usage. It was developed for the (now discontinued) Sun Ray ultra-thin client, which included an X proxy (called the Sun Ray Server Software or SRSS) that supported the X Video extension as a means of sending pre-encoded YUV frames through the Sun Ray Server and to the client without the server re-encoding them. The idea was that you would run VirtualGL and SRSS on the same machine. For remote usage with the VGL Transport, you want to use YUV encoding instead. It is similar, but it delivers the encoded YUV frames using the VGL Transport and displays them using X Video on the client side. 
I really do recommend JPEG, though, as the best means of delivering frames in VirtualGL. We're using libjpeg-turbo, which accelerates JPEG encoding/decoding by as much as 5x relative to libjpeg. > yuv: does not work, failing with > [VGL] ERROR: in operator=-- > [VGL] 437: Invalid compression type > Any idea what's happening there? > > I want to know wether these are reasonable rates/limitations, > especially the jpeg transport which seems CPU-bound (using 140% cpu > in top). Increasing -np did not change anything significantly. JPEG probably will be CPU-limited on a fast network, but you're also seeing a lot of unnecessary CPU time because you're running an unrealistic benchmark. Ultimately, the goal of remote display is to produce a solution that the user perceives as local, so really the whole "local vs. remote" comparison isn't valid. If the application generates 50 fps locally and 30 fps remotely, to the user it's going to be perceived very similarly, as long as the 3D rendering keeps up with the mouse movement (which is why the VirtualGL frame spoiling feature exists and why it should always be used with an interactive application but never with a benchmark.) Also, bear in mind that, when using a real application, the server won't be spitting out frames all the time. We have deployments of hundreds of users in various organizations/companies worldwide, and generally these larger deployments are able to provision 2 or more users per CPU core, although it's important to note that not all users will be actively using the server at the same time, nor will any one user be banging away on it 100% for the whole time they are using it. Let me simplify it-- ultimately, what you want is "local-like" performance. You can probably achieve that on a gigabit pipe using either RGB or YUV encoding with the VGL Transport, but on 100 Mbit or less, you will absolutely never get more than a few frames/second unless you use JPEG. The difference between JPEG and RGB or YUV on a 100 Mbit pipe will be the difference between 50 Megapixels/second and 3-5 Megapixels/second. > Also, what is the preferred way of contributing, just send patches > here? I had to fix some stuff in the build system as well as the > scripts to get everything up and running. If you're just talking about one or two patches, then you can send patches to the virtualgl-devel list. Otherwise, please create a SourceForge patch tracker item. What specifically wasn't working? Bear in mind that a bunch of companies and research institutions are building/testing VirtualGL actively, so if there was anything major wrong with the build system, it would have surfaced before now. |
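Restating the two comparison commands from the message above as a copy/paste-able block (paths assume the default /opt/VirtualGL install location):
# baseline: remote X11 (GLX forwarded over the network)
/opt/VirtualGL/bin/glxspheres64 -m
# VirtualGL, with frame spoiling disabled so readback + transport are actually measured
vglrun -sp /opt/VirtualGL/bin/glxspheres64 -m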
|
From: Patric S. <bz...@ao...> - 2015-03-05 11:55:27
|
Hi everyone,
I am trying VirtualGL for remote rendering on our visualization cluster and have it running by now. I am streaming to a GNU/Linux workstation using the VGL transports as of yet. The remote has a NVIDIA GeForce GTX 780 GPU.
I measured the following frame rates with glxgears:
Locally, on my client machine: ~4500 fps
Remote, without any forwarding or transport: ~27500
X11 forwarding (no virtualgl): ~4500 - ~11000
here the window stays black. Might that be related to the remote GLX implementation being nvidia and the local is AMD? If I test with another NVIDIA client, it doesn't work at all, failing with
X Error of failed request: BadValue (integer parameter out of range for operation).
Is that expected behavior?
Anyways using VirtualGL with the different compression types yields:
proxy (X11): ~240 (with 370Mb/s of the 1000Mb/s LAN connection used)
rgb: ~3300 (880Mb/s used)
jpeg: ~3100 (110Mb/s used)
xv: ~19300 BUT the client window stays black again. What can be reasons for this, and can I expect such a framerate realistically in case it works (on a high bandwidth connection)?
yuv: does not work, failing with
[VGL] ERROR: in operator=--
[VGL] 437: Invalid compression type
Any idea what's happening there?
I want to know whether these are reasonable rates/limitations, especially the jpeg transport which seems CPU-bound (using 140% cpu in top). Increasing -np did not change anything significantly.
Also, what is the preferred way of contributing, just send patches here? I had to fix some stuff in the build system as well as the scripts to get everything up and running.
Thanks in advance!
--
Patric Schmitz <bz...@ao...>
|
|
From: DRC <dco...@us...> - 2015-03-04 18:33:49
|
I personally use a Quadro FX 5000, so my experience with headless
configurations is limited. What I will say, however, is that there
shouldn't be any reason why nVidia couldn't provide stereo Pbuffer
support in a headless configuration, since VirtualGL doesn't actually
need stereo display on the server-- it just needs a quad-buffered
Pbuffer. Now whether or not nVidia's drivers actually allow that is a
different story. If they don't, then that's an arbitrary decision on
their part that isn't based on any specific technical requirement.
The only thing I could suggest would be to fool with various settings
for 'Option "Stereo"' in the xorg.conf file. The default option
("onboard stereo") is going to check for hardware stereo support (i.e.
the stereo DIN connector) on your graphics card, but I think you should
be able to set it to one of the passive stereo options to get around
that (see /usr/share/doc/NVIDIA_GLX-1.0/README.txt under Appendix B.)
For instance, my xorg.conf has 'Option "Stereo" "8"'.
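As an illustration only (a hypothetical excerpt, not a complete config; the Identifier/Device names are placeholders, the option is typically placed in the Screen section, and "8" is simply the passive-stereo value mentioned above):
Section "Screen"
    Identifier "Screen0"
    Device     "Device0"
    Option     "Stereo" "8"
EndSection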
On 3/4/15 11:51 AM, Antony Cleave wrote:
> Hi
>
> I'm wondering how to do quad buffered Stereoscopic rendering with
> virtualgl and which GPU I'd need.
>
> I will be configuring virtualgl on a server in a machine room which is
> part of a HPC system. Some of the nodes in the cluster will have K80M
> cards in and one of those would be ideal. If that will not work then
> there are alternative servers which can be used to accommodate suitable GPUs
>
> All of these servers will be headless and I've been told by nvidia that
> their GPUs do not support stereo when no physical heads are available
> and I'm waiting for them to confirm if this is a hardware limitation or
> not i.e. if I HAVE to use a quadro card or whether a GRID K2 card might
> be usable. I was wondering if there was anyone on the list who has ever
> tried it and might already know.
>
> I currently don't have the exact hardware to test, I've been using good
> old Tesla M2090 GPUs for the majority of my VirtualGL testing and they
> work perfectly for everything else but when I tried to configure these
> to accept some kind of stereo option it refuses point blank with "Stereo
> not supported with NoScanout; disabling Stereo" which I assume is a
> show-stopper.
|
|
From: Antony C. <ant...@cl...> - 2015-03-04 18:21:03
|
Hi
I'm wondering how to do quad buffered Stereoscopic rendering with virtualgl and which GPU I'd need.
I will be configuring virtualgl on a server in a machine room which is part of a HPC system. Some of the nodes in the cluster will have K80M cards in and one of those would be ideal. If that will not work then there are alternative servers which can be used to accommodate suitable GPUs.
All of these servers will be headless and I've been told by nvidia that their GPUs do not support stereo when no physical heads are available and I'm waiting for them to confirm if this is a hardware limitation or not i.e. if I HAVE to use a quadro card or whether a GRID K2 card might be usable. I was wondering if there was anyone on the list who has ever tried it and might already know.
I currently don't have the exact hardware to test, I've been using good old Tesla M2090 GPUs for the majority of my VirtualGL testing and they work perfectly for everything else but when I tried to configure these to accept some kind of stereo option it refuses point blank with "Stereo not supported with NoScanout; disabling Stereo" which I assume is a show-stopper.
Thanks
Antony
|
|
From: DRC <dco...@us...> - 2015-01-26 22:30:04
|
http://sourceforge.net/projects/virtualgl/files/2.4/ Significant changes since 2.4 beta1: [1] Fixed an issue that prevented recent versions of Google Chrome/Chromium from running properly in VirtualGL. [2] VGL_SYNC now affects glFlush(). Although this does not strictly conform to the OpenGL spec (glFlush() is supposed to be an asynchronous command), it was necessary in order to make certain features of Cadence Allegro work properly. Since virtually no applications require VGL_SYNC, it is believed that this change is innocuous. [3] Fixed a bug in vglconnect introduced in VirtualGL 2.3 that prevented 'vglconnect -x' from working properly if the user did not have access to the current directory (vglconnect was erroneously creating a temporary file in the current directory instead of in /tmp.) [4] GLXspheres now warns if the specified polygon count would exceed the limit of 57600 polygons per sphere imposed by GLU and prints the actual polygon count with this limit taken into account. Also, a new option (-n) has been introduced to increase the sphere count. [5] VirtualGL will now only enable color index rendering emulation if a color index context is current. This specifically fixes an interaction issue with MSC Mentat, which occasionally calls glIndexi() when an RGBA context is current, but the fix may affect other applications as well. [6] VirtualGL can now interpose enough of the XCB API to make Qt 5 work properly. Qt 5 does not use XCB to perform 3D rendering (there is no suitable XCB replacement for GLX yet), but it does use XCB to detect whether the GLX extension is available and to handle the application's event queue(s). Thus, when attempting to run Qt 5 applications in VirtualGL, previously the OpenGL portion of the window would fail to resize when the window was resized, or the application would complain that OpenGL was not available and fail to start, or the application would fall back to non-OpenGL rendering. Currently, enabling XCB support in VirtualGL requires building VirtualGL from source and adding -DVGL_FAKEXCB=1 to the CMake command line. The XCB interposer is also disabled by default at run time. It must be enabled by setting the VGL_FAKEXCB environment variable to 1 or passing +xcb to vglrun. [7] Fixed a deadlock that occurred when running compiz 0.9.11 (and possibly other versions as well) with VirtualGL. The issue occurred when compiz called XGrabServer(), followed by glXCreatePixmap() and glXDestroyPixmap(). In VirtualGL, a GLX pixmap resides on the 3D X server, but the corresponding X11 pixmap resides on the 2D X server. Thus, VirtualGL has to synchronize pixels between the two pixmaps in response to certain operations, such as XCopyArea() and XGetImage(), or when the GLX pixmap is destroyed. VirtualGL was previously opening a new connection to the 2D X server in order to perform this synchronization, and because the 2D X server was grabbed, compiz locked up when VirtualGL called XCloseDisplay() on the new display connection. In fact, however, the new display connection was unnecessary, since the GLX/X11 pixmap synchronization occurs within the 3D rendering thread. Thus, VirtualGL now simply reuses the same display connection that is passed to glXCreate[GLX]Pixmap(). [8] NetTest and TCBench for Windows are now supplied in a package called VirtualGL-Utils, which can be built from the VirtualGL source. 
When the VirtualGL Client for Exceed was discontinued, these utilities ceased to have a home, but they are still useful tools to have, irrespective of the thin client solution that is being used. The Windows build of TCBench was temporarily moved into the Windows TurboVNC Viewer packages, but it proved to be a pain to keep the source code synchronized between the two projects. The VirtualGL-Utils package additionally contains a WGL version of GLXspheres, which is a useful tool to have when benchmarking Windows virtual machines that are running in a VirtualGL environment. [9] Worked around an issue in recent versions of SPECviewperf and FEMFAT visualizer that caused them to segfault when used with VirtualGL. Those applications apparently use a dynamic loading mechanism for OpenGL extension functions, and this mechanism defines symbols such as "glGenBuffers" at file scope. Any symbol exported by an application will override a symbol of the same name exported by a shared library, so when VirtualGL tried to call glGenBuffers(), glBindBuffer(), etc., it was picking up the symbols from the application, not from libGL (and those symbols from the application were not necessarily defined.) VirtualGL now obtains the function pointers it needs for PBO readback directly from libGL using glXProcAddress() rather than relying on the dynamic linker to resolve them. Note that this issue could be worked around in previous versions of VirtualGL by setting VGL_READBACK=sync. |
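In practical terms, the XCB support described in item [6] above boils down to something like this (a sketch based on the flags named in the announcement; the application name is a placeholder):
# build time: compile the XCB interposer into VirtualGL
cmake -G "Unix Makefiles" -DVGL_FAKEXCB=1 ..
make
# run time: the interposer is still disabled by default, so enable it per application
vglrun +xcb ./qt5app
# ...or equivalently:
VGL_FAKEXCB=1 vglrun ./qt5app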
|
From: DRC <dco...@us...> - 2015-01-19 20:11:05
|
Good to know. I will release 2.4 as soon as I receive confirmation of the other bug fix (the one related to a commercial application overriding glGenBuffers and other PBO-related symbols.) On 1/19/15 6:20 AM, Tim Biedert wrote: > That worked! Performance is also great. Thank you very much! > > >> On 17 Jan 2015, at 16:41, DRC <dco...@us...> wrote: >> >> Ah, sorry, this was my fault. I totally forgot that you now have to do >> 'vglrun +xcb' in order to enable the XCB faker. This was so existing >> VirtualGL shops could rebuild their installation to support Qt5 without >> having to worry about potential interference with non-Qt5 applications. >> At some point, I may become confident enough to allow the XCB faker to >> run all the time, but right now, I'm still nervous about that, because >> we've already discovered a couple of issues caused by the fact that Xlib >> is built on top of xcb (the issues have been fixed, but there may be >> others lurking in the shadows.) |
|
From: Tim B. <bi...@cs...> - 2015-01-19 12:21:09
|
That worked! Performance is also great. Thank you very much!
> On 17 Jan 2015, at 16:41, DRC <dco...@us...> wrote:
>
> Ah, sorry, this was my fault. I totally forgot that you now have to do
> 'vglrun +xcb' in order to enable the XCB faker. This was so existing
> VirtualGL shops could rebuild their installation to support Qt5 without
> having to worry about potential interference with non-Qt5 applications.
> At some point, I may become confident enough to allow the XCB faker to
> run all the time, but right now, I'm still nervous about that, because
> we've already discovered a couple of issues caused by the fact that Xlib
> is built on top of xcb (the issues have been fixed, but there may be
> others lurking in the shadows.)
>
>
> On 1/16/15 5:09 PM, Tim Biedert wrote:
>> Thank you for the quick response!
>>
>> Here's a quite minimal example:
>>
>> -----------------------------------------------------------------------------------------------
>>
>> main.cpp:
>> -----------------------------------------------------------------------------------------------
>>
>>
>> #include <QApplication>
>> #include <QMainWindow>
>> #include <QGLWidget>
>> #include <QDebug>
>>
>>
>> static QGLFormat defaultFormat()
>> {
>> QGLFormat fmt;
>>
>> fmt.setProfile( QGLFormat::CoreProfile );
>> fmt.setAlpha( true );
>> fmt.setSamples( 4 );
>> fmt.setSampleBuffers( true );
>>
>> return fmt;
>> }
>>
>>
>> class Example : public QGLWidget
>> {
>> public:
>> Example(QWidget* parent) : QGLWidget( defaultFormat(), parent )
>> {
>> }
>>
>> virtual void resizeGL( int w, int h )
>> {
>> qDebug() << w << "x" << h;
>> glViewport( 0, 0, w, h );
>> }
>>
>> virtual void paintGL()
>> {
>> glClearColor(0.1f, 0.4f, 0.0f, 0.0f);
>> glClear(GL_COLOR_BUFFER_BIT);
>> }
>>
>> virtual QSize sizeHint() const
>> {
>> return QSize(800, 600);
>> }
>>
>> };
>>
>>
>> int main(int argc, char* argv[])
>> {
>> QApplication application(argc, argv);
>>
>> QMainWindow window;
>> window.setWindowTitle("Example");
>> window.setCentralWidget(new Example(&window));
>> window.show();
>>
>> return application.exec();
>> }
>>
>>
>>
>> -----------------------------------------------------------------------------------------------
>>
>> example.pro
>> -----------------------------------------------------------------------------------------------
>>
>> QT += core gui opengl
>>
>> TARGET = example
>> TEMPLATE = app
>>
>> SOURCES += main.cpp
>>
>> -----------------------------------------------------------------------------------------------
>>
>>
>>
>> To test, I simply do a "vglconnect localhost" and "vglrun
>> ./example". Using the above code, the window is resized to 800x600,
>> but the green frame is only very small in the top left corner.
>>
>>
>> Best,
>> Tim
>>
>>
>>
>>
>> On 01/16/2015 11:31 PM, DRC wrote:
>>> Can you send me the source code for your widget?
>>>
>>> On 1/16/15 4:02 PM, Tim Biedert wrote:
>>>> Dear VirtualGL team,
>>>>
>>>> I’m trying to vglrun a simple Qt 5.4 application which only contains an
>>>> OpenGL widget as the central widget.
>>>>
>>>> There is still a resize issue:
>>>> - When the window is resized, the overridden resize() handler of the GL
>>>> widget even prints out the correct new window size and the
>>>> camera/perspective changes accordingly
>>>> - However, the rendered frame stays fixed at 640x480 resolution
>>>> - Mouse coordinates within the OpenGL widget are also incorrect if the
>>>> inherited widget size is not 640x480.
>>>>
>>>> VirtualGL version is 2.3.91 compiled from SVN trunk (accessed: Jan 16th
>>>> 2015) with -DVGL_FAKEXCB=1.
>>>>
>>>> For better understanding, I’ve created two screen recordings:
>>>>
>>>> 1) Reference how it should behave:
>>>> http://vis.uni-kl.de/~biedert/direct.ogv
>>>>
>>>> 2) Bug demonstration using a local vgl connection:
>>>> http://vis.uni-kl.de/~biedert/vgl.ogv
>>>>
>>>> (Videos are in Theora video format; I use VLC for playback)
>>>>
>>>> Thank you very much for your support!
>>>>
>>>> Best,
>>>> Tim
|
|
From: DRC <dco...@us...> - 2015-01-17 15:41:29
|
Ah, sorry, this was my fault. I totally forgot that you now have to do
'vglrun +xcb' in order to enable the XCB faker. This was so existing
VirtualGL shops could rebuild their installation to support Qt5 without
having to worry about potential interference with non-Qt5 applications.
At some point, I may become confident enough to allow the XCB faker to
run all the time, but right now, I'm still nervous about that, because
we've already discovered a couple of issues caused by the fact that Xlib
is built on top of xcb (the issues have been fixed, but there may be
others lurking in the shadows.)
On 1/16/15 5:09 PM, Tim Biedert wrote:
> Thank you for the quick response!
>
> Here's a quite minimal example:
>
> -----------------------------------------------------------------------------------------------
>
> main.cpp:
> -----------------------------------------------------------------------------------------------
>
>
> #include <QApplication>
> #include <QMainWindow>
> #include <QGLWidget>
> #include <QDebug>
>
>
> static QGLFormat defaultFormat()
> {
> QGLFormat fmt;
>
> fmt.setProfile( QGLFormat::CoreProfile );
> fmt.setAlpha( true );
> fmt.setSamples( 4 );
> fmt.setSampleBuffers( true );
>
> return fmt;
> }
>
>
> class Example : public QGLWidget
> {
> public:
> Example(QWidget* parent) : QGLWidget( defaultFormat(), parent )
> {
> }
>
> virtual void resizeGL( int w, int h )
> {
> qDebug() << w << "x" << h;
> glViewport( 0, 0, w, h );
> }
>
> virtual void paintGL()
> {
> glClearColor(0.1f, 0.4f, 0.0f, 0.0f);
> glClear(GL_COLOR_BUFFER_BIT);
> }
>
> virtual QSize sizeHint() const
> {
> return QSize(800, 600);
> }
>
> };
>
>
> int main(int argc, char* argv[])
> {
> QApplication application(argc, argv);
>
> QMainWindow window;
> window.setWindowTitle("Example");
> window.setCentralWidget(new Example(&window));
> window.show();
>
> return application.exec();
> }
>
>
>
> -----------------------------------------------------------------------------------------------
>
> example.pro
> -----------------------------------------------------------------------------------------------
>
> QT += core gui opengl
>
> TARGET = example
> TEMPLATE = app
>
> SOURCES += main.cpp
>
> -----------------------------------------------------------------------------------------------
>
>
>
> To test, I simply do a "vglconnect localhost" and "vglrun
> ./example". Using the above code, the window is resized to 800x600,
> but the green frame is only very small in the top left corner.
>
>
> Best,
> Tim
>
>
>
>
> On 01/16/2015 11:31 PM, DRC wrote:
>> Can you send me the source code for your widget?
>>
>> On 1/16/15 4:02 PM, Tim Biedert wrote:
>>> Dear VirtualGL team,
>>>
>>> I’m trying to vglrun a simple Qt 5.4 application which only contains an
>>> OpenGL widget as the central widget.
>>>
>>> There is still a resize issue:
>>> - When the window is resized, the overridden resize() handler of the GL
>>> widget even prints out the correct new window size and the
>>> camera/perspective changes accordingly
>>> - However, the rendered frame stays fixed at 640x480 resolution
>>> - Mouse coordinates within the OpenGL widget are also incorrect if the
>>> inherited widget size is not 640x480.
>>>
>>> VirtualGL version is 2.3.91 compiled from SVN trunk (accessed: Jan 16th
>>> 2015) with -DVGL_FAKEXCB=1.
>>>
>>> For better understanding, I’ve created two screen recordings:
>>>
>>> 1) Reference how it should behave:
>>> http://vis.uni-kl.de/~biedert/direct.ogv
>>>
>>> 2) Bug demonstration using a local vgl connection:
>>> http://vis.uni-kl.de/~biedert/vgl.ogv
>>>
>>> (Videos are in Theora video format; I use VLC for playback)
>>>
>>> Thank you very much for your support!
>>>
>>> Best,
>>> Tim
|
|
From: Tim B. <bi...@cs...> - 2015-01-16 23:07:56
|
Thank you for the quick response!
Here's a quite minimal example:
-----------------------------------------------------------------------------------------------
main.cpp:
-----------------------------------------------------------------------------------------------
#include <QApplication>
#include <QMainWindow>
#include <QGLWidget>
#include <QDebug>
static QGLFormat defaultFormat()
{
QGLFormat fmt;
fmt.setProfile( QGLFormat::CoreProfile );
fmt.setAlpha( true );
fmt.setSamples( 4 );
fmt.setSampleBuffers( true );
return fmt;
}
class Example : public QGLWidget
{
public:
Example(QWidget* parent) : QGLWidget( defaultFormat(), parent )
{
}
virtual void resizeGL( int w, int h )
{
qDebug() << w << "x" << h;
glViewport( 0, 0, w, h );
}
virtual void paintGL()
{
glClearColor(0.1f, 0.4f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);
}
virtual QSize sizeHint() const
{
return QSize(800, 600);
}
};
int main(int argc, char* argv[])
{
QApplication application(argc, argv);
QMainWindow window;
window.setWindowTitle("Example");
window.setCentralWidget(new Example(&window));
window.show();
return application.exec();
}
-----------------------------------------------------------------------------------------------
example.pro
-----------------------------------------------------------------------------------------------
QT += core gui opengl
TARGET = example
TEMPLATE = app
SOURCES += main.cpp
-----------------------------------------------------------------------------------------------
To test, I simply do a "vglconnect localhost" and "vglrun
./example". Using the above code, the window is resized to 800x600,
but the green frame is only very small in the top left corner.
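(For completeness, building and launching the example as described amounts to the following, assuming Qt's qmake is on the PATH:)
qmake example.pro
make
vglconnect localhost
vglrun ./example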
Best,
Tim
On 01/16/2015 11:31 PM, DRC wrote:
> Can you send me the source code for your widget?
>
> On 1/16/15 4:02 PM, Tim Biedert wrote:
>> Dear VirtualGL team,
>>
>> I’m trying to vglrun a simple Qt 5.4 application which only contains an
>> OpenGL widget as the central widget.
>>
>> There is still a resize issue:
>> - When the window is resized, the overridden resize() handler of the GL
>> widget even prints out the correct new window size and the
>> camera/perspective changes accordingly
>> - However, the rendered frame stays fixed at 640x480 resolution
>> - Mouse coordinates within the OpenGL widget are also incorrect if the
>> inherited widget size is not 640x480.
>>
>> VirtualGL version is 2.3.91 compiled from SVN trunk (accessed: Jan 16th
>> 2015) with -DVGL_FAKEXCB=1.
>>
>> For better understanding, I’ve created two screen recordings:
>>
>> 1) Reference how it should behave:
>> http://vis.uni-kl.de/~biedert/direct.ogv
>>
>> 2) Bug demonstration using a local vgl connection:
>> http://vis.uni-kl.de/~biedert/vgl.ogv
>>
>> (Videos are in Theora video format; I use VLC for playback)
>>
>> Thank you very much for your support!
>>
>> Best,
>> Tim
|
|
From: DRC <dco...@us...> - 2015-01-16 22:31:18
|
Can you send me the source code for your widget? On 1/16/15 4:02 PM, Tim Biedert wrote: > Dear VirtualGL team, > > I’m trying to vglrun a simple Qt 5.4 application which only contains an > OpenGL widget as the central widget. > > There is still a resize issue: > - When the window is resized, the overridden resize() handler of the GL > widget even prints out the correct new window size and the > camera/perspective changes accordingly > - However, the rendered frame stays fixed at 640x480 resolution > - Mouse coordinates within the OpenGL widget are also incorrect if the > inherited widget size is not 640x480. > > VirtualGL version is 2.3.91 compiled from SVN trunk (accessed: Jan 16th > 2015) with -DVGL-FAKEXCB=1. > > For better understanding, I’ve created two screen recordings: > > 1) Reference how it should behave: > http://vis.uni-kl.de/~biedert/direct.ogv > > 2) Bug demonstration using a local vgl connection: > http://vis.uni-kl.de/~biedert/vgl.ogv > > (Videos are in Theora video format; I use VLC for playback) > > Thank you very much for your support! > > Best, > Tim |
|
From: Tim B. <bi...@cs...> - 2015-01-16 22:03:05
|
Dear VirtualGL team, I’m trying to vglrun a simple Qt 5.4 application which only contains an OpenGL widget as the central widget. There is still a resize issue: - When the window is resized, the overridden resize() handler of the GL widget even prints out the correct new window size and the camera/perspective changes accordingly - However, the rendered frame stays fixed at 640x480 resolution - Mouse coordinates within the OpenGL widget are also incorrect if the inherited widget size is not 640x480. VirtualGL version is 2.3.91 compiled from SVN trunk (accessed: Jan 16th 2015) with -DVGL-FAKEXCB=1. For better understanding, I’ve created two screen recordings: 1) Reference how it should behave: http://vis.uni-kl.de/~biedert/direct.ogv <http://vis.uni-kl.de/~biedert/direct.ogv> 2) Bug demonstration using a local vgl connection: http://vis.uni-kl.de/~biedert/vgl.ogv <http://vis.uni-kl.de/~biedert/vgl.ogv> (Videos are in Theora video format; I use VLC for playback) Thank you very much for your support! Best, Tim |
|
From: DRC <dco...@us...> - 2014-12-03 18:15:50
|
On 12/3/14 6:59 AM, gri...@ep... wrote: > Hello, > > I am not sure if it is a bug but we have an issue: > > Our setup is a 2-GPU 'VGL-served' machine. With VGL present, we want: > > - two offscreen renderers using :0.0 and :0.1 contributing to: > - one on-screen renderer rendering and assembling :0.0 and :0.1 using > the forwarded DISPLAY (:10.0, redirected to :0.0 by VGL) > > This setup, when run under VirtualGL, should use the VGL redirect for > the on-screen renderer and have no interference on the offscreen > renderers. > > We use Red Hat 6.5 on our server and Ubuntu 14.04 on our client. The > GPUs are Geforce GTX 580 with 3GB memory. > > The last time we tested it, everything was working. But now, it seems > that there is some problems. > When we launch our openGL application using vglrun, only one GPU is > used, even though the offscreen renderers pass :0.0 and :0.1 to > XOpenDisplay(). > We used nvidia-smi to monitor video memory of each GPU and only one > GPU is allocating memory. > > Has something changed in VirtualGL on how the context creation is handled? Not that I'm aware of. If you can verify that an older version of VGL works but the latest one doesn't, then I can treat this as a possible bug, but otherwise I'm not sure what could be causing the problem. |
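As a standalone sanity check outside of VirtualGL, a minimal program like the following (a sketch only; build with e.g. 'g++ probe.cpp -o probe -lX11 -lGL') can confirm that both screens (:0.0 and :0.1, as in the setup quoted above) are reachable and that a Pbuffer and context can be created on each; each successful probe should show up as a small allocation on the corresponding GPU in nvidia-smi:
#include <cstdio>
#include <X11/Xlib.h>
#include <GL/glx.h>

// Probe one X screen: create a small Pbuffer and an OpenGL context on it.
static bool probe(const char *display)
{
    Display *dpy = XOpenDisplay(display);
    if (!dpy) { fprintf(stderr, "Cannot open display %s\n", display); return false; }
    int fbAttribs[] = { GLX_DRAWABLE_TYPE, GLX_PBUFFER_BIT,
                        GLX_RENDER_TYPE, GLX_RGBA_BIT, None };
    int n = 0;
    GLXFBConfig *configs = glXChooseFBConfig(dpy, DefaultScreen(dpy), fbAttribs, &n);
    if (!configs || n < 1) { fprintf(stderr, "No FB configs on %s\n", display); return false; }
    int pbAttribs[] = { GLX_PBUFFER_WIDTH, 256, GLX_PBUFFER_HEIGHT, 256, None };
    GLXPbuffer pbuf = glXCreatePbuffer(dpy, configs[0], pbAttribs);
    GLXContext ctx = glXCreateNewContext(dpy, configs[0], GLX_RGBA_TYPE, NULL, True);
    bool ok = pbuf && ctx && glXMakeCurrent(dpy, pbuf, ctx);
    printf("%s: %s\n", display, ok ? "Pbuffer + context OK" : "FAILED");
    XFree(configs);
    return ok;
}

int main()
{
    bool ok0 = probe(":0.0");  // first GPU/screen
    bool ok1 = probe(":0.1");  // second GPU/screen
    return (ok0 && ok1) ? 0 : 1;
}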
|
From: <gri...@ep...> - 2014-12-03 12:59:10
|
Hello, I am not sure if it is a bug but we have an issue: Our setup is a 2-GPU 'VGL-served' machine. With VGL present, we want: - two offscreen renderers using :0.0 and :0.1 contributing to: - one on-screen renderer rendering and assembling :0.0 and :0.1 using the forwarded DISPLAY (:10.0, redirected to :0.0 by VGL) This setup, when run under VirtualGL, should use the VGL redirect for the on-screen renderer and have no interference on the offscreen renderers. We use Red Hat 6.5 on our server and Ubuntu 14.04 on our client. The GPUs are Geforce GTX 580 with 3GB memory. The last time we tested it, everything was working. But now, it seems that there is some problems. When we launch our openGL application using vglrun, only one GPU is used, even though the offscreen renderers pass :0.0 and :0.1 to XOpenDisplay(). We used nvidia-smi to monitor video memory of each GPU and only one GPU is allocating memory. Has something changed in VirtualGL on how the context creation is handled? Cheers, Grigori PS: Please see http://sourceforge.net/p/virtualgl/mailman/virtualgl-devel/thread/2B770B96-306D-4094-A347-6B651577BBCE%40gmail.com/#msg29184059 for the discussion when we implemented this feature for more background. |
|
From: Ben A. <bav...@ca...> - 2014-11-24 15:15:54
|
Hi All, Thanks everyone for feedback. DRC, given time, I will try TurboVNC 2.0 next. I am aware of challenges of benchmarking remote 3D viz. solutions, including TurboVNC+VGL. I was just hoping to run Heaven benchmark as a demo, as you suggested. The only way I've found to get a measure of "FPS on thin client" is to run the glxspheres64 benchmark and then measure the client-side FPS via tcbech tool, included with VGL. As for Heaven benhcmark, and pretty much every other 3D application, there is no easy way to measure client-side FPS. Thanks again for all the suggestions. From: DRC <dco...@us...> To: vir...@li... Date: 11/21/2014 10:31 PM Subject: Re: [VirtualGL-Users] Xinerama inside TurboVNC Even though this is really a TurboVNC issue, I'm continuing the thread here, because the same would apply to other X proxies. I have verified that: -- As Nathan pointed out, the XINERAMA warning is innocuous. The actual fatal error is because of the RANDR extension. -- As I suspected, the error is due to the RANDR extension in TurboVNC 1.2.x being too old. Using the TurboVNC 2.0 pre-release fixes the issue. Also please note that benchmarking 3D applications in an X proxy environment is a tricky proposition. The benchmark will be measuring the number of frames that are actually rendered by the GPU, but not all of those frames will actually make it to the client. So the benchmark should only be used as a demo. The results it reports in a VirtualGL/TurboVNC environment are basically bogus and should not in any way be used as a measure of the thin client performance. On 11/11/14 10:09 AM, Ben Avdicevic wrote: > > Hi All, > > I have following setup: > > 1 x GPU server (remote) w/ two NVIDIA Tesla cards running CentOS 6.6 > NVIDIA 240.30 drivers installed > VGL 2.3.3 installed > TurboVNC 1.2.2 installed > > My thin client is Windows 7 with TurboVNC client running. > > I have tested that VGL runs Ok by running glxspheres64 and Google Earth. > > The application I'm having problem running is Unigine Heaven benchmark. > > I launch the application like this: > > # vglrun ./heaven > > The error I get is following: > > Xlib: extension "XINERAMA" missing on display ":1.0". > X Error of failed request: BadRequest (invalid request code or no such > operation) > Major opcode of failed request: 138 (RANDR) > Minor opcode of failed request: 8 (RRGetScreenResources) > Serial number of failed request: 10 > Current serial number in output stream: 10 > AL lib: ALc.c:1879: exit(): closing 1 Device > AL lib: ALc.c:1808: alcCloseDevice(): destroying 1 Context(s) > > Display 1:0 is my TurboVNC session. It seems XINERAMA extension is missing > inside my VNC X11 display. I don't know how to enable it. > > I have tried to enable XINERAMA inside of my system X11 display ":0" by > adding following to xorg.conf > > option "xinerama" "on" > > This works fine. I can see in Xorg.0.log file that Xinerama is enabled. > > But, even after restarting TurboVNC session, the Xinerama extension is > still missing inside VNC. There does not seem to be any obvious way to > include the xinerama extension with "vncserver" command when creating the > VNC X11 display. > > I'd appreciate any help/suggestions to help me resolve this issue. > > Thanks in advance, > Ben ------------------------------------------------------------------------------ Download BIRT iHub F-Type - The Free Enterprise-Grade BIRT Server from Actuate! 
Instantly Supercharge Your Business Reports and Dashboards with Interactivity, Sharing, Native Excel Exports, App Integration & more Get technology previously reserved for billion-dollar corporations, FREE http://pubads.g.doubleclick.net/gampad/clk?id=157005751&iu=/4140/ostg.clktrk _______________________________________________ VirtualGL-Users mailing list Vir...@li... https://lists.sourceforge.net/lists/listinfo/virtualgl-users |
|
From: DRC <dco...@us...> - 2014-11-22 03:31:13
|
Even though this is really a TurboVNC issue, I'm continuing the thread here, because the same would apply to other X proxies. I have verified that: -- As Nathan pointed out, the XINERAMA warning is innocuous. The actual fatal error is because of the RANDR extension. -- As I suspected, the error is due to the RANDR extension in TurboVNC 1.2.x being too old. Using the TurboVNC 2.0 pre-release fixes the issue. Also please note that benchmarking 3D applications in an X proxy environment is a tricky proposition. The benchmark will be measuring the number of frames that are actually rendered by the GPU, but not all of those frames will actually make it to the client. So the benchmark should only be used as a demo. The results it reports in a VirtualGL/TurboVNC environment are basically bogus and should not in any way be used as a measure of the thin client performance. On 11/11/14 10:09 AM, Ben Avdicevic wrote: > > Hi All, > > I have following setup: > > 1 x GPU server (remote) w/ two NVIDIA Tesla cards running CentOS 6.6 > NVIDIA 240.30 drivers installed > VGL 2.3.3 installed > TurboVNC 1.2.2 installed > > My thin client is Windows 7 with TurboVNC client running. > > I have tested that VGL runs Ok by running glxspheres64 and Google Earth. > > The application I'm having problem running is Unigine Heaven benchmark. > > I launch the application like this: > > # vglrun ./heaven > > The error I get is following: > > Xlib: extension "XINERAMA" missing on display ":1.0". > X Error of failed request: BadRequest (invalid request code or no such > operation) > Major opcode of failed request: 138 (RANDR) > Minor opcode of failed request: 8 (RRGetScreenResources) > Serial number of failed request: 10 > Current serial number in output stream: 10 > AL lib: ALc.c:1879: exit(): closing 1 Device > AL lib: ALc.c:1808: alcCloseDevice(): destroying 1 Context(s) > > Display 1:0 is my TurboVNC session. It seems XINERAMA extension is missing > inside my VNC X11 display. I don't know how to enable it. > > I have tried to enable XINERAMA inside of my system X11 display ":0" by > adding following to xorg.conf > > option "xinerama" "on" > > This works fine. I can see in Xorg.0.log file that Xinerama is enabled. > > But, even after restarting TurboVNC session, the Xinerama extension is > still missing inside VNC. There does not seem to be any obvious way to > include the xinerama extension with "vncserver" command when creating the > VNC X11 display. > > I'd appreciate any help/suggestions to help me resolve this issue. > > Thanks in advance, > Ben |
|
From: DRC <dco...@us...> - 2014-11-13 22:57:01
|
TurboVNC 1.2 and later support RANDR. It might be fruitful to try the TurboVNC 2.0 pre-release, however, since it supports the latest version of RANDR. I am out of the office but will try to reproduce this when I get back, assuming it isn't resolved by then. > On Nov 11, 2014, at 3:18 PM, Nathan Kidd <nat...@sp...> wrote: > >> On 11/11/14 11:09 AM, Ben Avdicevic wrote: >> # vglrun ./heaven >> >> The error I get is following: >> >> Xlib: extension "XINERAMA" missing on display ":1.0". >> X Error of failed request: BadRequest (invalid request code or no such >> operation) >> Major opcode of failed request: 138 (RANDR) >> Minor opcode of failed request: 8 (RRGetScreenResources) >> Serial number of failed request: 10 >> Current serial number in output stream: 10 >> AL lib: ALc.c:1879: exit(): closing 1 Device >> AL lib: ALc.c:1808: alcCloseDevice(): destroying 1 Context(s) > > 1) This is really about TurboVNC, not VirtualGL, and should be on the > turbovnc-users list > > 2) The error is not about XINERAMA, but RANDR. It appears heaven tries > to use RANDR without first checking its availability. You need an X > proxy that supports RANDR. Perhaps you can add an option to your > TurboVNC startup to enable RANDR. > > -Nathan > > ------------------------------------------------------------------------------ > Comprehensive Server Monitoring with Site24x7. > Monitor 10 servers for $9/Month. > Get alerted through email, SMS, voice calls or mobile push notifications. > Take corrective actions from your mobile device. > http://pubads.g.doubleclick.net/gampad/clk?id=154624111&iu=/4140/ostg.clktrk > _______________________________________________ > VirtualGL-Users mailing list > Vir...@li... > https://lists.sourceforge.net/lists/listinfo/virtualgl-users |
|
From: Nathan K. <nat...@sp...> - 2014-11-11 21:35:27
|
On 11/11/14 11:09 AM, Ben Avdicevic wrote: > # vglrun ./heaven > > The error I get is following: > > Xlib: extension "XINERAMA" missing on display ":1.0". > X Error of failed request: BadRequest (invalid request code or no such > operation) > Major opcode of failed request: 138 (RANDR) > Minor opcode of failed request: 8 (RRGetScreenResources) > Serial number of failed request: 10 > Current serial number in output stream: 10 > AL lib: ALc.c:1879: exit(): closing 1 Device > AL lib: ALc.c:1808: alcCloseDevice(): destroying 1 Context(s) 1) This is really about TurboVNC, not VirtualGL, and should be on the turbovnc-users list 2) The error is not about XINERAMA, but RANDR. It appears heaven tries to use RANDR without first checking its availability. You need an X proxy that supports RANDR. Perhaps you can add an option to your TurboVNC startup to enable RANDR. -Nathan |
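A quick way to confirm what the X proxy actually exports is plain xdpyinfo (the :1 below assumes the TurboVNC session is display :1):

# List the extensions the TurboVNC X server advertises on display :1
xdpyinfo -display :1 | grep -iE 'RANDR|XINERAMA'

If RANDR does not show up there, no client-side setting will help; the X proxy itself has to provide it.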
|
From: Ben A. <bav...@ca...> - 2014-11-11 20:30:34
|
After further reading, I understand that 'nvidia-xconfig' can be used to
generate a "headless" configuration. I did NOT do this because I was not
aware of it. However, I am not sure that would help me resolve the
original issue with the XINERAMA extension.
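For reference, a typical headless invocation looks something like the following (untested here; the exact options depend on the nvidia-xconfig/driver version, and the virtual screen size is just an example):

# Write an xorg.conf with one X screen per GPU, no attached monitor required
nvidia-xconfig -a --use-display-device=None --virtual=1920x1200

The -a option is supposed to emit a Device section (with BusID) for every GPU automatically, which would replace the hand-edited stanzas shown below.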
To generate my xorg.conf, I did two things:
1. Run "nvidia-xconfig" without any parameters.
2. Modify xorg.conf manually to add 4 "Device" stanzas with the BusID for each
device. Xorg would fail to start with an "unknown device" error before that.
Now Xorg starts fine.
In case it's helpful, here is my complete xorg.conf:
# cat /etc/X11/xorg.conf
# nvidia-xconfig: X configuration file generated by nvidia-xconfig
# nvidia-xconfig: version 340.32 (buildmeister@swio-display-x64-rhel04-01) Tue Aug 5 21:15:33 PDT 2014
Section "DRI"
Mode 0666
EndSection
Section "ServerLayout"
Identifier "Layout0"
Screen 0 "Screen0" 0 0
InputDevice "Keyboard0" "CoreKeyboard"
InputDevice "Mouse0" "CorePointer"
option "Xinerama" "on"
EndSection
Section "Files"
EndSection
Section "InputDevice"
# generated from default
Identifier "Mouse0"
Driver "mouse"
Option "Protocol" "auto"
Option "Device" "/dev/input/mice"
Option "Emulate3Buttons" "no"
Option "ZAxisMapping" "4 5"
EndSection
Section "InputDevice"
# generated from data in "/etc/sysconfig/keyboard"
Identifier "Keyboard0"
Driver "kbd"
Option "XkbLayout" "us"
Option "XkbModel" "pc105"
EndSection
Section "Monitor"
Identifier "Monitor0"
VendorName "Unknown"
ModelName "Unknown"
HorizSync 28.0 - 33.0
VertRefresh 43.0 - 72.0
Option "DPMS"
EndSection
Section "Device"
Identifier "device_1"
Driver "nvidia"
VendorName "NVIDIA Corporation"
BusID "PCI:5:0:0"
EndSection
Section "Device"
Identifier "device_2"
Driver "nvidia"
VendorName "NVIDIA Corporation"
BusID "PCI:6:0:0"
EndSection
Section "Device"
Identifier "device_3"
Driver "nvidia"
VendorName "NVIDIA Corporation"
BusID "PCI:132:0:0"
EndSection
Section "Device"
Identifier "device_4"
Driver "nvidia"
VendorName "NVIDIA Corporation"
BusID "PCI:133:0:0"
EndSection
Section "Screen"
Identifier "Screen0"
Device "device_1"
Device "device_2"
Device "device_3"
Device "device_4"
Monitor "Monitor0"
DefaultDepth 24
SubSection "Display"
Depth 24
EndSubSection
EndSection
And here is my Xorg.0.log file:
# cat /var/log/Xorg.0.log
[ 83233.260]
X.Org X Server 1.15.0
Release Date: 2013-12-27
[ 83233.260] X Protocol Version 11, Revision 0
[ 83233.260] Build Operating System: c6b8 2.6.32-220.el6.x86_64
[ 83233.260] Current Operating System: Linux visnode 2.6.32-504.el6.x86_64
#1 SMP Wed Oct 15 04:27:16 UTC 2014 x86_64
[ 83233.260] Kernel command line: ro
root=UUID=5842e528-3fdc-4822-ba2a-c91593101c09 nomodeset rd_NO_LUKS
KEYBOARDTYPE=pc KEYTABLE=us LANG=en_US.UTF-8 rd_NO_MD
SYSFONT=latarcyrheb-sun16 crashkernel=137M@0M rd_NO_LVM rd_NO_DM rhgb
pcie_aspm=off biosdevname=0 intel_idle.max_cstate=0 i915.i915_enable_rc6=0
[ 83233.260] Build Date: 18 October 2014 11:46:15AM
[ 83233.260] Build ID: xorg-x11-server 1.15.0-22.el6.centos
[ 83233.260] Current version of pixman: 0.32.4
[ 83233.260] Before reporting problems, check https://wiki.centos.org/
to make sure that you have the latest version.
[ 83233.260] Markers: (--) probed, (**) from config file, (==) default
setting,
(++) from command line, (!!) notice, (II) informational,
(WW) warning, (EE) error, (NI) not implemented, (??) unknown.
[ 83233.260] (==) Log file: "/var/log/Xorg.0.log", Time: Tue Nov 11
09:20:50 2014
[ 83233.260] (==) Using config file: "/etc/X11/xorg.conf"
[ 83233.260] (==) Using system config directory
"/usr/share/X11/xorg.conf.d"
[ 83233.260] (==) ServerLayout "Layout0"
[ 83233.260] (**) |-->Screen "Screen0" (0)
[ 83233.260] (**) | |-->Monitor "Monitor0"
[ 83233.260] (==) No device specified for screen "Screen0".
Using the first device section listed.
[ 83233.260] (**) | |-->Device "K80_0"
[ 83233.260] (**) |-->Input Device "Keyboard0"
[ 83233.260] (**) |-->Input Device "Mouse0"
[ 83233.260] (**) Option "Xinerama" "on"
[ 83233.260] (==) Automatically adding devices
[ 83233.260] (==) Automatically enabling devices
[ 83233.260] (==) Not automatically adding GPU devices
[ 83233.260] (**) Xinerama: enabled
[ 83233.260] (==) FontPath set to:
catalogue:/etc/X11/fontpath.d,
built-ins
[ 83233.260] (==) ModulePath set to "/usr/lib64/xorg/modules"
[ 83233.260] (WW) Hotplugging is on, devices using drivers 'kbd', 'mouse'
or 'vmmouse' will be disabled.
[ 83233.260] (WW) Disabling Keyboard0
[ 83233.260] (WW) Disabling Mouse0
[ 83233.260] (II) Loader magic: 0x822020
[ 83233.260] (II) Module ABI versions:
[ 83233.260] X.Org ANSI C Emulation: 0.4
[ 83233.260] X.Org Video Driver: 15.0
[ 83233.260] X.Org XInput driver : 20.0
[ 83233.260] X.Org Server Extension : 8.0
[ 83233.263] (--) PCI: (0:5:0:0) 10de:102d:10de:106c rev 161, Mem @
0xbd000000/16777216, 0x38f800000000/17179869184, 0x38fc00000000/33554432,
BIOS @ 0x????????/65536
[ 83233.263] (--) PCI: (0:6:0:0) 10de:102d:10de:106c rev 161, Mem @
0xbc000000/16777216, 0x38f000000000/17179869184, 0x38f400000000/33554432,
BIOS @ 0x????????/65536
[ 83233.263] (--) PCI:*(0:11:1:0) 102b:0532:15d9:0626 rev 10, Mem @
0xbb000000/16777216, 0xbf000000/16384, 0xbe800000/8388608, BIOS @
0x????????/131072
[ 83233.263] (--) PCI: (0:132:0:0) 10de:102d:10de:106c rev 161, Mem @
0xfa000000/16777216, 0x39f800000000/17179869184, 0x39fc00000000/33554432,
BIOS @ 0x????????/65536
[ 83233.263] (--) PCI: (0:133:0:0) 10de:102d:10de:106c rev 161, Mem @
0xf9000000/16777216, 0x39f000000000/17179869184, 0x39f400000000/33554432,
BIOS @ 0x????????/65536
[ 83233.263] Initializing built-in extension Generic Event Extension
[ 83233.263] Initializing built-in extension SHAPE
[ 83233.263] Initializing built-in extension MIT-SHM
[ 83233.263] Initializing built-in extension XInputExtension
[ 83233.263] Initializing built-in extension XTEST
[ 83233.263] Initializing built-in extension BIG-REQUESTS
[ 83233.263] Initializing built-in extension SYNC
[ 83233.263] Initializing built-in extension XKEYBOARD
[ 83233.263] Initializing built-in extension XC-MISC
[ 83233.263] Initializing built-in extension SECURITY
[ 83233.263] Initializing built-in extension XINERAMA
[ 83233.263] Initializing built-in extension XFIXES
[ 83233.263] Initializing built-in extension RENDER
[ 83233.263] Initializing built-in extension RANDR
[ 83233.263] Initializing built-in extension COMPOSITE
[ 83233.263] Initializing built-in extension DAMAGE
[ 83233.263] Initializing built-in extension MIT-SCREEN-SAVER
[ 83233.263] Initializing built-in extension DOUBLE-BUFFER
[ 83233.263] Initializing built-in extension RECORD
[ 83233.263] Initializing built-in extension DPMS
[ 83233.263] Initializing built-in extension Present
[ 83233.263] Initializing built-in extension X-Resource
[ 83233.263] Initializing built-in extension XVideo
[ 83233.263] Initializing built-in extension XVideo-MotionCompensation
[ 83233.263] Initializing built-in extension SELinux
[ 83233.263] Initializing built-in extension XFree86-VidModeExtension
[ 83233.263] Initializing built-in extension XFree86-DGA
[ 83233.263] Initializing built-in extension XFree86-DRI
[ 83233.263] Initializing built-in extension DRI2
[ 83233.263] (II) LoadModule: "glx"
[ 83233.264] (II) Loading /usr/lib64/xorg/modules/extensions/libglx.so
[ 83233.274] (II) Module glx: vendor="NVIDIA Corporation"
[ 83233.274] compiled for 4.0.2, module version = 1.0.0
[ 83233.274] Module class: X.Org Server Extension
[ 83233.274] (II) NVIDIA GLX Module 340.32 Tue Aug 5 20:32:43 PDT 2014
[ 83233.274] Loading extension GLX
[ 83233.274] (II) LoadModule: "nvidia"
[ 83233.274] (II) Loading /usr/lib64/xorg/modules/drivers/nvidia_drv.so
[ 83233.275] (II) Module nvidia: vendor="NVIDIA Corporation"
[ 83233.275] compiled for 4.0.2, module version = 1.0.0
[ 83233.275] Module class: X.Org Video Driver
[ 83233.275] (II) NVIDIA dlloader X Driver 340.32 Tue Aug 5 20:13:04 PDT
2014
[ 83233.275] (II) NVIDIA Unified Driver for all Supported NVIDIA GPUs
[ 83233.275] (++) using VT number 1
[ 83233.280] (II) Loading sub module "fb"
[ 83233.280] (II) LoadModule: "fb"
[ 83233.280] (II) Loading /usr/lib64/xorg/modules/libfb.so
[ 83233.280] (II) Module fb: vendor="X.Org Foundation"
[ 83233.280] compiled for 1.15.0, module version = 1.0.0
[ 83233.280] ABI class: X.Org ANSI C Emulation, version 0.4
[ 83233.280] (WW) Unresolved symbol: fbGetGCPrivateKey
[ 83233.280] (II) Loading sub module "wfb"
[ 83233.280] (II) LoadModule: "wfb"
[ 83233.280] (II) Loading /usr/lib64/xorg/modules/libwfb.so
[ 83233.280] (II) Module wfb: vendor="X.Org Foundation"
[ 83233.280] compiled for 1.15.0, module version = 1.0.0
[ 83233.280] ABI class: X.Org ANSI C Emulation, version 0.4
[ 83233.280] (II) Loading sub module "ramdac"
[ 83233.280] (II) LoadModule: "ramdac"
[ 83233.280] (II) Module "ramdac" already built-in
[ 83233.280] (WW) NVIDIA: The Composite and Xinerama extensions are both
enabled, which
[ 83233.280] (WW) NVIDIA: is an unsupported configuration. The driver
will continue
[ 83233.280] (WW) NVIDIA: to load, but may behave strangely.
[ 83233.280] (WW) NVIDIA: Xinerama is enabled, so RandR has likely been
disabled by the
[ 83233.281] (WW) NVIDIA: X server.
[ 83233.281] (**) NVIDIA(0): Depth 24, (--) framebuffer bpp 32
[ 83233.281] (==) NVIDIA(0): RGB weight 888
[ 83233.281] (==) NVIDIA(0): Default visual is TrueColor
[ 83233.281] (==) NVIDIA(0): Using gamma correction (1.0, 1.0, 1.0)
[ 83233.281] (**) NVIDIA(0): Enabling 2D acceleration
[ 83235.873] (II) NVIDIA(0): NVIDIA GPU Tesla K80 (GK210) at PCI:5:0:0
(GPU-0)
[ 83235.873] (--) NVIDIA(0): Memory: 11796480 kBytes
[ 83235.873] (--) NVIDIA(0): VideoBIOS: 80.21.1b.00.01
[ 83235.873] (II) NVIDIA(0): Detected PCI Express Link width: 16X
[ 83235.873] (--) NVIDIA(0): Valid display device(s) on Tesla K80 at
PCI:5:0:0
[ 83235.873] (--) NVIDIA(0): none
[ 83235.873] (II) NVIDIA(0): Validated MetaModes:
[ 83235.873] (II) NVIDIA(0): "NULL"
[ 83235.873] (II) NVIDIA(0): Virtual screen size determined to be 640 x 480
[ 83235.873] (WW) NVIDIA(0): Unable to get display device for DPI
computation.
[ 83235.873] (==) NVIDIA(0): DPI set to (75, 75); computed from built-in
default
[ 83235.873] (--) Depth 24 pixmap format is 32 bpp
[ 83238.394] (II) NVIDIA(GPU-3): NVIDIA GPU Tesla K80 (GK210) at
PCI:133:0:0 (GPU-3)
[ 83238.394] (--) NVIDIA(GPU-3): Memory: 11796480 kBytes
[ 83238.394] (--) NVIDIA(GPU-3): VideoBIOS: 80.21.1b.00.02
[ 83238.394] (II) NVIDIA(GPU-3): Detected PCI Express Link width: 16X
[ 83238.394] (--) NVIDIA(GPU-3): Valid display device(s) on Tesla K80 at
PCI:133:0:0
[ 83238.394] (--) NVIDIA(GPU-3): none
[ 83240.869] (II) NVIDIA(GPU-2): NVIDIA GPU Tesla K80 (GK210) at
PCI:132:0:0 (GPU-2)
[ 83240.869] (--) NVIDIA(GPU-2): Memory: 11796480 kBytes
[ 83240.869] (--) NVIDIA(GPU-2): VideoBIOS: 80.21.1b.00.01
[ 83240.869] (II) NVIDIA(GPU-2): Detected PCI Express Link width: 16X
[ 83240.869] (--) NVIDIA(GPU-2): Valid display device(s) on Tesla K80 at
PCI:132:0:0
[ 83240.869] (--) NVIDIA(GPU-2): none
[ 83243.383] (II) NVIDIA(GPU-1): NVIDIA GPU Tesla K80 (GK210) at PCI:6:0:0
(GPU-1)
[ 83243.383] (--) NVIDIA(GPU-1): Memory: 11796480 kBytes
[ 83243.383] (--) NVIDIA(GPU-1): VideoBIOS: 80.21.1b.00.02
[ 83243.383] (II) NVIDIA(GPU-1): Detected PCI Express Link width: 16X
[ 83243.383] (--) NVIDIA(GPU-1): Valid display device(s) on Tesla K80 at
PCI:6:0:0
[ 83243.383] (--) NVIDIA(GPU-1): none
[ 83243.383] (II) NVIDIA: Using 3072.00 MB of virtual memory for indirect
memory
[ 83243.383] (II) NVIDIA: access.
[ 83243.389] (II) NVIDIA(0): Setting mode "NULL"
[ 83243.405] (II) NVIDIA(0): Built-in logo is bigger than the screen.
[ 83243.405] Loading extension NV-GLX
[ 83243.415] (==) NVIDIA(0): Disabling shared memory pixmaps
[ 83243.415] (==) NVIDIA(0): Backing store enabled
[ 83243.415] (==) NVIDIA(0): Silken mouse enabled
[ 83243.415] (**) NVIDIA(0): DPMS enabled
[ 83243.415] Loading extension NV-CONTROL
[ 83243.415] (II) Loading sub module "dri2"
[ 83243.415] (II) LoadModule: "dri2"
[ 83243.415] (II) Module "dri2" already built-in
[ 83243.415] (II) NVIDIA(0): [DRI2] Setup complete
[ 83243.415] (II) NVIDIA(0): [DRI2] VDPAU driver: nvidia
[ 83243.415] (WW) NVIDIA(0): Not registering RandR
[ 83243.415] (==) RandR enabled
[ 83243.419] (II) SELinux: Disabled on system
[ 83243.419] (II) Initializing extension GLX
[ 83243.455] AUDIT: Tue Nov 11 09:21:00 2014: 31259: client 1 connected
from local host ( uid=0 gid=0 pid=31256 )
Auth name: MIT-MAGIC-COOKIE-1 ID: 645
[ 83243.461] (II) config/hal: Adding input device Winbond Electronics Corp
Hermon USB hidmouse Device
[ 83243.461] (II) LoadModule: "evdev"
[ 83243.461] (WW) Warning, couldn't open module evdev
[ 83243.461] (II) UnloadModule: "evdev"
[ 83243.461] (II) Unloading evdev
[ 83243.461] (EE) Failed to load module "evdev" (module does not exist, 0)
[ 83243.461] (EE) No input driver matching `evdev'
[ 83243.461] (EE) config/hal: NewInputDeviceRequest failed (15)
[ 83243.464] (II) config/hal: Adding input device Power Button
[ 83243.464] (II) LoadModule: "evdev"
[ 83243.464] (WW) Warning, couldn't open module evdev
[ 83243.464] (II) UnloadModule: "evdev"
[ 83243.464] (II) Unloading evdev
[ 83243.464] (EE) Failed to load module "evdev" (module does not exist, 0)
[ 83243.464] (EE) No input driver matching `evdev'
[ 83243.464] (EE) config/hal: NewInputDeviceRequest failed (15)
[ 83243.467] (II) config/hal: Adding input device Power Button
[ 83243.467] (II) LoadModule: "evdev"
[ 83243.467] (WW) Warning, couldn't open module evdev
[ 83243.467] (II) UnloadModule: "evdev"
[ 83243.467] (II) Unloading evdev
[ 83243.467] (EE) Failed to load module "evdev" (module does not exist, 0)
[ 83243.467] (EE) No input driver matching `evdev'
[ 83243.467] (EE) config/hal: NewInputDeviceRequest failed (15)
[ 83243.470] (II) config/hal: Adding input device Macintosh mouse button
emulation
[ 83243.470] (II) LoadModule: "evdev"
[ 83243.470] (WW) Warning, couldn't open module evdev
[ 83243.470] (II) UnloadModule: "evdev"
[ 83243.470] (II) Unloading evdev
[ 83243.470] (EE) Failed to load module "evdev" (module does not exist, 0)
[ 83243.470] (EE) No input driver matching `evdev'
[ 83243.470] (EE) config/hal: NewInputDeviceRequest failed (15)
[ 83243.473] (II) config/hal: Adding input device Winbond Electronics Corp
Hermon USB hidmouse Device
[ 83243.473] (II) LoadModule: "evdev"
[ 83243.473] (WW) Warning, couldn't open module evdev
[ 83243.473] (II) UnloadModule: "evdev"
[ 83243.473] (II) Unloading evdev
[ 83243.473] (EE) Failed to load module "evdev" (module does not exist, 0)
[ 83243.473] (EE) No input driver matching `evdev'
[ 83243.473] (EE) config/hal: NewInputDeviceRequest failed (15)
[ 83243.479] AUDIT: Tue Nov 11 09:21:00 2014: 31259: client 2 connected
from local host ( uid=0 gid=0 pid=31269 )
Auth name: MIT-MAGIC-COOKIE-1 ID: 645
[ 83243.479] AUDIT: Tue Nov 11 09:21:00 2014: 31259: client 2 disconnected
[ 83243.482] AUDIT: Tue Nov 11 09:21:00 2014: 31259: client 2 connected
from local host ( uid=0 gid=0 pid=31271 )
Auth name: MIT-MAGIC-COOKIE-1 ID: 645
[ 83243.482] AUDIT: Tue Nov 11 09:21:00 2014: 31259: client 2 disconnected
[ 83243.482] AUDIT: Tue Nov 11 09:21:00 2014: 31259: client 2 connected
from local host ( uid=0 gid=0 pid=31272 )
Auth name: MIT-MAGIC-COOKIE-1 ID: 645
[ 83243.485] AUDIT: Tue Nov 11 09:21:00 2014: 31259: client 2 disconnected
[ 83243.486] AUDIT: Tue Nov 11 09:21:00 2014: 31259: client 2 connected
from local host ( uid=0 gid=0 pid=31275 )
Auth name: MIT-MAGIC-COOKIE-1 ID: 645
[ 83243.486] AUDIT: Tue Nov 11 09:21:00 2014: 31259: client 2 disconnected
[ 83243.517] AUDIT: Tue Nov 11 09:21:00 2014: 31259: client 2 connected
from local host ( uid=42 gid=42 pid=31281 )
Auth name: MIT-MAGIC-COOKIE-1 ID: 645
[ 83243.519] AUDIT: Tue Nov 11 09:21:00 2014: 31259: client 3 connected
from local host ( uid=42 gid=42 pid=31284 )
Auth name: MIT-MAGIC-COOKIE-1 ID: 645
[ 83243.523] AUDIT: Tue Nov 11 09:21:00 2014: 31259: client 4 connected
from local host ( uid=42 gid=42 pid=31286 )
Auth name: MIT-MAGIC-COOKIE-1 ID: 645
[ 83243.571] AUDIT: Tue Nov 11 09:21:00 2014: 31259: client 5 connected
from local host ( uid=42 gid=42 pid=31293 )
Auth name: MIT-MAGIC-COOKIE-1 ID: 645
[ 83243.572] AUDIT: Tue Nov 11 09:21:00 2014: 31259: client 6 connected
from local host ( uid=42 gid=42 pid=31292 )
Auth name: MIT-MAGIC-COOKIE-1 ID: 645
[ 83243.584] AUDIT: Tue Nov 11 09:21:00 2014: 31259: client 7 connected
from local host ( uid=42 gid=42 pid=31297 )
Auth name: MIT-MAGIC-COOKIE-1 ID: 645
[ 83243.584] AUDIT: Tue Nov 11 09:21:00 2014: 31259: client 7 disconnected
[ 83243.613] AUDIT: Tue Nov 11 09:21:00 2014: 31259: client 7 connected
from local host ( uid=42 gid=42 pid=31286 )
Auth name: MIT-MAGIC-COOKIE-1 ID: 645
[ 83243.613] AUDIT: Tue Nov 11 09:21:00 2014: 31259: client 8 connected
from local host ( uid=42 gid=42 pid=31293 )
Auth name: MIT-MAGIC-COOKIE-1 ID: 645
[ 83243.622] AUDIT: Tue Nov 11 09:21:00 2014: 31259: client 9 connected
from local host ( uid=42 gid=42 pid=31304 )
Auth name: MIT-MAGIC-COOKIE-1 ID: 645
[ 83243.628] AUDIT: Tue Nov 11 09:21:00 2014: 31259: client 10 connected
from local host ( uid=42 gid=42 pid=31304 )
Auth name: MIT-MAGIC-COOKIE-1 ID: 645
[ 83243.644] AUDIT: Tue Nov 11 09:21:00 2014: 31259: client 11 connected
from local host ( uid=42 gid=42 pid=31308 )
Auth name: MIT-MAGIC-COOKIE-1 ID: 645
[ 83243.646] AUDIT: Tue Nov 11 09:21:00 2014: 31259: client 12 connected
from local host ( uid=42 gid=42 pid=31309 )
Auth name: MIT-MAGIC-COOKIE-1 ID: 645
[ 83243.647] AUDIT: Tue Nov 11 09:21:00 2014: 31259: client 13 connected
from local host ( uid=42 gid=42 pid=31310 )
Auth name: MIT-MAGIC-COOKIE-1 ID: 645
[ 83243.653] AUDIT: Tue Nov 11 09:21:00 2014: 31259: client 14 connected
from local host ( uid=42 gid=42 pid=31307 )
Auth name: MIT-MAGIC-COOKIE-1 ID: 645
[ 83243.657] AUDIT: Tue Nov 11 09:21:00 2014: 31259: client 15 connected
from local host ( uid=42 gid=42 pid=31312 )
Auth name: MIT-MAGIC-COOKIE-1 ID: 645
[ 83243.658] AUDIT: Tue Nov 11 09:21:00 2014: 31259: client 16 connected
from local host ( uid=42 gid=42 pid=31308 )
Auth name: MIT-MAGIC-COOKIE-1 ID: 645
[ 83243.659] AUDIT: Tue Nov 11 09:21:00 2014: 31259: client 17 connected
from local host ( uid=42 gid=42 pid=31307 )
Auth name: MIT-MAGIC-COOKIE-1 ID: 645
[ 83243.660] AUDIT: Tue Nov 11 09:21:00 2014: 31259: client 18 connected
from local host ( uid=42 gid=42 pid=31309 )
Auth name: MIT-MAGIC-COOKIE-1 ID: 645
[ 83243.661] AUDIT: Tue Nov 11 09:21:00 2014: 31259: client 19 connected
from local host ( uid=42 gid=42 pid=31310 )
Auth name: MIT-MAGIC-COOKIE-1 ID: 645
[ 83243.662] AUDIT: Tue Nov 11 09:21:00 2014: 31259: client 20 connected
from local host ( uid=42 gid=42 pid=31307 )
Auth name: MIT-MAGIC-COOKIE-1 ID: 645
[ 83243.663] AUDIT: Tue Nov 11 09:21:00 2014: 31259: client 20 disconnected
[ 83243.663] AUDIT: Tue Nov 11 09:21:00 2014: 31259: client 20 connected
from local host ( uid=42 gid=42 pid=31312 )
Auth name: MIT-MAGIC-COOKIE-1 ID: 645
[ 83243.678] AUDIT: Tue Nov 11 09:21:00 2014: 31259: client 16 disconnected
[ 83243.679] AUDIT: Tue Nov 11 09:21:00 2014: 31259: client 11 disconnected
[ 83243.704] AUDIT: Tue Nov 11 09:21:00 2014: 31259: client 11 connected
from local host ( uid=42 gid=42 pid=31293 )
Auth name: MIT-MAGIC-COOKIE-1 ID: 645
[ 83243.705] AUDIT: Tue Nov 11 09:21:00 2014: 31259: client 11 disconnected
[ 83243.724] AUDIT: Tue Nov 11 09:21:00 2014: 31259: client 16 connected
from local host ( uid=42 gid=42 pid=31327 )
Auth name: MIT-MAGIC-COOKIE-1 ID: 645
[ 83243.726] AUDIT: Tue Nov 11 09:21:00 2014: 31259: client 17 disconnected
[ 83243.727] AUDIT: Tue Nov 11 09:21:00 2014: 31259: client 14 disconnected
[ 83243.730] AUDIT: Tue Nov 11 09:21:00 2014: 31259: client 14 connected
from local host ( uid=42 gid=42 pid=31327 )
Auth name: MIT-MAGIC-COOKIE-1 ID: 645
[ 83274.421] AUDIT: Tue Nov 11 09:21:31 2014: 31259: client 14 disconnected
[ 83274.421] AUDIT: Tue Nov 11 09:21:31 2014: 31259: client 16 disconnected
[ 83318.069] AUDIT: Tue Nov 11 09:22:14 2014: 31259: client 14 connected
from local host ( uid=0 gid=0 pid=31537 )
Auth name: MIT-MAGIC-COOKIE-1 ID: 0
[ 83323.116] AUDIT: Tue Nov 11 09:22:19 2014: 31259: client 16 connected
from local host ( uid=0 gid=0 pid=31546 )
Auth name: MIT-MAGIC-COOKIE-1 ID: 0
[ 83323.144] AUDIT: Tue Nov 11 09:22:19 2014: 31259: client 16 disconnected
[ 83382.169] AUDIT: Tue Nov 11 09:23:19 2014: 31259: client 14 disconnected
Regards,
Benjamin (B.) Avdicevic
IBM Platform SaaS - Developer
Software Defined Systems, STG
E-mail: bav...@ca...
3600 Steeles Ave East
Markham, ON L3R 9Z7
Canada
From: Ben Avdicevic/Ontario/IBM@IBMCA
To: vir...@li...
Date: 11/11/2014 11:42 AM
Subject: [VirtualGL-Users] Xinerama inside TurboVNC
Hi All,
I have the following setup:
1 x GPU server (remote) w/ two NVIDIA Tesla cards running CentOS 6.6
NVIDIA 240.30 drivers installed
VGL 2.3.3 installed
TurboVNC 1.2.2 installed
My thin client is Windows 7 with TurboVNC client running.
I have tested that VGL runs OK by running glxspheres64 and Google Earth.
The application I'm having problems running is the Unigine Heaven benchmark.
I launch the application like this:
# vglrun ./heaven
The error I get is the following:
Xlib: extension "XINERAMA" missing on display ":1.0".
X Error of failed request: BadRequest (invalid request code or no such
operation)
Major opcode of failed request: 138 (RANDR)
Minor opcode of failed request: 8 (RRGetScreenResources)
Serial number of failed request: 10
Current serial number in output stream: 10
AL lib: ALc.c:1879: exit(): closing 1 Device
AL lib: ALc.c:1808: alcCloseDevice(): destroying 1 Context(s)
Display :1.0 is my TurboVNC session. It seems the XINERAMA extension is missing
inside my VNC X11 display. I don't know how to enable it.
I have tried to enable XINERAMA inside my system X11 display ":0" by
adding the following to xorg.conf:
option "xinerama" "on"
This works fine. I can see in the Xorg.0.log file that Xinerama is enabled.
But even after restarting the TurboVNC session, the Xinerama extension is
still missing inside VNC. There does not seem to be any obvious way to
include the Xinerama extension with the "vncserver" command when creating the
VNC X11 display.
I'd appreciate any help/suggestions to help me resolve this issue.
Thanks in advance,
Ben
|
|
From: Ben A. <bav...@ca...> - 2014-11-11 16:41:27
|
Hi All, I have the following setup: 1 x GPU server (remote) w/ two NVIDIA Tesla cards running CentOS 6.6 NVIDIA 240.30 drivers installed VGL 2.3.3 installed TurboVNC 1.2.2 installed My thin client is Windows 7 with TurboVNC client running. I have tested that VGL runs OK by running glxspheres64 and Google Earth. The application I'm having problems running is the Unigine Heaven benchmark. I launch the application like this: # vglrun ./heaven The error I get is the following: Xlib: extension "XINERAMA" missing on display ":1.0". X Error of failed request: BadRequest (invalid request code or no such operation) Major opcode of failed request: 138 (RANDR) Minor opcode of failed request: 8 (RRGetScreenResources) Serial number of failed request: 10 Current serial number in output stream: 10 AL lib: ALc.c:1879: exit(): closing 1 Device AL lib: ALc.c:1808: alcCloseDevice(): destroying 1 Context(s) Display :1.0 is my TurboVNC session. It seems the XINERAMA extension is missing inside my VNC X11 display. I don't know how to enable it. I have tried to enable XINERAMA inside my system X11 display ":0" by adding the following to xorg.conf: option "xinerama" "on" This works fine. I can see in the Xorg.0.log file that Xinerama is enabled. But even after restarting the TurboVNC session, the Xinerama extension is still missing inside VNC. There does not seem to be any obvious way to include the Xinerama extension with the "vncserver" command when creating the VNC X11 display. I'd appreciate any help/suggestions to help me resolve this issue. Thanks in advance, Ben |
|
From: DRC <dco...@us...> - 2014-11-02 00:41:29
|
At least in my testing, the 3D window managers in Ubuntu 14.04 (Unity) and RHEL 7 (Gnome 3.8.x) now work when you use the latest pre-releases of VirtualGL (2.4 post-beta: http://www.virtualgl.org/DeveloperInfo/PreReleases) and TurboVNC (2.0 evolving: http://www.turbovnc.org/DeveloperInfo/PreReleases). Pass an argument of -3dwm to /opt/TurboVNC/bin/vncserver when starting TurboVNC in order to activate 3D window manager support (all this really does is run the xstartup.turbovnc file with 'vglrun +wm'. You can achieve the same thing less automatically with other X proxies, if you choose.) Be sure to delete ~/.vnc/xstartup.turbovnc and let vncserver create a new one for you, because there are some fixes in xstartup.turbovnc that are necessary to make Unity start properly under Ubuntu 14.04. Specifically, the new xstartup.turbovnc will automatically launch the gnome-fallback session if 3D window manager support isn't activated, and it will launch Unity otherwise. Using -3dwm also allows you to run older versions of Gnome (for instance, the one that ships with RHEL 6) with desktop effects enabled. As a side effect, running a 3D window manager in this way allows you to launch 3D applications without using vglrun. I don't consider this a "feature", though. It's actually more of a bug, because there are likely some hidden issues that will spring up because of it-- certain non-3D applications that won't enjoy having VirtualGL preloaded into them. I don't know of a way around it at the moment, though, so please do your own testing and file a bug report or post a message to VirtualGL-Users if you encounter any issues. DRC |
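Putting that recipe in one place (a sketch only; it assumes the TurboVNC 2.0 pre-release and the default install path mentioned above):

# Let vncserver regenerate the startup script so it picks up the Unity/fallback fixes
rm ~/.vnc/xstartup.turbovnc

# Start a TurboVNC session with 3D window manager support
# (-3dwm effectively runs xstartup.turbovnc under 'vglrun +wm')
/opt/TurboVNC/bin/vncserver -3dwm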
|
From: DRC <dco...@us...> - 2014-10-27 17:49:13
|
On 10/27/14 3:16 AM, Peter Astrand wrote: > I totally agree with you on this one. I guess people thinks it's more fun > to work with new projects and new code, than to maintain old stuff. But > this idea of "new" = "always better" is present in the entire industry; > not just in the open source community. Indeed it is, and although I still prefer Mac to Windows, Apple is actually one of the worst culprits here lately in terms of shoving new stuff down the user's throat instead of fixing long-standing bugs. They force you to upgrade your O/S in order to run the latest versions of their applications (and often the latest versions of 3rd party applications as well, because each new version of Xcode drops support for all but the latest O/S.) But they are lax about fixing driver bugs with older hardware in their new O/S's, thus forcing you to upgrade your hardware as well. A literally 2-year-old driver bug in Mavericks that Apple, in their typical fashion, won't own up to has prevented me from upgrading (the bug basically prevents firewire drives from going to sleep.) I read an article a while ago -- probably written in response to some epic software-related failure in a NASA space probe or something like that -- that discussed why software should, from the point of view of testing, be treated like a physical machine. We would never accept "bugs" in physical engines, yet we consider buggy software to be an inevitable fact of life. It doesn't have to be, though. I go one step further and consider not only the stability of the "engine" but also its performance. If your car can only go 30 miles per hour, then it doesn't matter how bug-free the engine is. The fact that its performance is severely limited is, from the end user's point of view, a bug as well. |