virtualgl-users Mailing List for VirtualGL

From: Peter A. <as...@ce...> - 2014-10-27 08:48:40

On Sat, 25 Oct 2014, DRC wrote:
> ...I have noticed a trend recently whereby developers seem to want to
> change things just for the sake of changing things (systemd, anyone?)
> and new code seems to be valued more than old, regardless of whether
> the new code actually represents an improvement for the end user. [...]

I totally agree with you on this one. I guess people think it's more fun to work with new projects and new code than to maintain old stuff. But this idea of "new" = "always better" is present in the entire industry, not just in the open source community.

/Still running Windows XP64 on my laptop.

---
Peter Astrand                ThinLinc Chief Developer
Cendio AB                    http://cendio.com
Teknikringen 8               http://twitter.com/ThinLinc
583 30 Linkoping             http://facebook.com/ThinLinc
Phone: +46-13-214600         http://google.com/+CendioThinLinc

From: DRC <dco...@us...> - 2014-10-25 16:36:25

This was a tricky problem to solve, but the good news is that it didn't require interposing all of XCB, and at least in the near term, I don't think that will become a requirement. Referring to http://xcb.freedesktop.org/opengl/ : "XCB-GLX only communicates with the X server. It does not perform any hardware initialization or touch the OpenGL client-side state. Because of this, XCB-GLX cannot be used as a replacement for the GLX API." Thus, as long as Qt 5 is built with GLX support, it has to use the Xlib-XCB hybrid interface, which means that it will use XCB for 2D stuff and GLX for 3D stuff. That's good news for VirtualGL, because it means that I simply (I say "simply" somewhat ironically, because it was still a tedious job) had to interpose the XCB-GLX functions associated with querying the GLX extension (in order to make Qt 5 believe that GLX exists, even when the 2D X server doesn't support it) and the XCB functions associated with event queue handling (to intercept window resizes.)

To wax philosophical, I am frankly not sure why the Qt developers chose to migrate to XCB. There may have been a legitimate reason (performance, perhaps), but after 18 years as a professional developer and 10 years as a professional open source developer, I have noticed a trend recently whereby developers seem to want to change things just for the sake of changing things (systemd, anyone?) and new code seems to be valued more than old, regardless of whether the new code actually represents an improvement for the end user. I see projects re-inventing the whole wheel when just changing one of the spokes would be sufficient, or re-inventing the wheel so it can handle terrain that will only be encountered 1% of the time, or even worse, re-inventing the wheel because of the possibility of encountering terrain that, in reality, will likely never be encountered. When I see this, it doesn't surprise me when I also see large-scale software projects (both open source and proprietary) that have major bugs outstanding for years that are just being ignored. The modern philosophy seems to be, "Who cares about bugs? Let's hack. We can just re-design everything from the ground up, and then all of the old bugs will be gone." Rebuilding the engine instead of changing the oil. At this point, I start shaking my fist at the sky, yelling "Khhhaaaaannnnn", and telling the kids to stop playing on my lawn, etc.

XCB can certainly do some things that Xlib can't, but it is undeniably a lower-level and less user-friendly interface-- something that is appropriate as an underlying interface for Xlib but not as a replacement for it. Thus, I'm not really sure why Qt 5 chose to use it. With few applications being written with remote X11 in mind these days, the ability to do asynchronous requests (one of the main selling points of XCB) seems like a pretty weak argument. I'm also noticing that the XCB developers have, over time, allowed some synchronous functions to creep into the API, functions that have 1:1 Xlib equivalents, thus making the argument in favor of XCB even weaker. I would be interested to see a post from the Qt developers regarding why they chose to make the switch, because it isn't obvious to me.

At any rate, it wouldn't surprise me if Qt 6 breaks VirtualGL again, or even Qt 5.4.x, but as long as there is no wholesale movement toward using XCB to replace GLX, it should be possible to support these future versions by making incremental modifications to VirtualGL.
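Conceptually, faking the GLX extension query at the XCB level looks something like the sketch below. This is illustrative only, not VirtualGL's actual faker code (the real interposer also keeps X11 sequence numbers in sync and routes calls through its own dispatch tables); it shows how an LD_PRELOAD override of one XCB-GLX symbol can convince an application that GLX 1.4 is present:

/* Minimal sketch (not VirtualGL's actual code): fabricate a successful
 * GLX version reply so the application believes the GLX extension is
 * present, even if the 2D X server lacks it. */
#include <stdlib.h>
#include <xcb/glx.h>

xcb_glx_query_version_reply_t *
xcb_glx_query_version_reply(xcb_connection_t *conn,
                            xcb_glx_query_version_cookie_t cookie,
                            xcb_generic_error_t **error)
{
    (void)conn; (void)cookie;  /* a production interposer would consume
                                  the pending request so that sequence
                                  numbers stay in sync */
    /* Per XCB convention, the caller frees this reply with free(). */
    xcb_glx_query_version_reply_t *reply = calloc(1, sizeof(*reply));
    if (reply) {
        reply->response_type = 1;  /* X11 "reply" type code */
        reply->major_version = 1;
        reply->minor_version = 4;  /* report GLX 1.4 */
    }
    if (error) *error = NULL;
    return reply;
}

Compiled into a shared library and loaded via LD_PRELOAD, this overrides the libxcb-glx symbol of the same name; the sequence-number caveat in the comment is one reason the real implementation is considerably more involved.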
The new XCB interposer code has been checked into SVN trunk and will be part of the final 2.4 release. Given that XCB is not available on all of the platforms we support, this functionality can currently be enabled only by doing a custom build of VirtualGL and passing -DVGL_FAKEXCB=1 on the CMake command line. I've tested it on CentOS 6, and it should work on systems of similar vintage and newer. Let me know if you run into any issues.

On 7/29/14 2:32 AM, DRC wrote:
> Ugh. The more I look into this, the nastier it gets. A lot of the
> other Qt5 OpenGL demos don't work because VirtualGL isn't intercepting
> xcb_glx_query_version(), which the application calls to check whether
> the GLX extension is available. But there are other xcb_glx* functions
> as well, and it seems like VirtualGL should intercept and reroute those
> to avoid future problems.
>
> I unfortunately don't think that a simple hack is going to be very
> meaningful here. It's going to require a full-blown XCB interposer.
> Developing this interposer is straightforward, but it's going to be
> time-consuming. It will require introducing new hashes solely for the
> XCB structures, as well as a new dynamic loader mechanism (since libxcb
> isn't available on older platforms, we'll have to check for its
> presence before trying to use it.) Extensive testing will be required,
> which will probably include porting some of the GLX demos and/or other
> test applications in the VirtualGL source to the XCB API.
>
> This is unfortunately outside of the scope of pro bono work. I will
> post it as a hot topic on the web site and see if I can attract funding.
>
> On 7/29/14 12:41 AM, Pet...@cs... wrote:
>> An in-house app fails to display any 3D content with VirtualGL since
>> it was migrated to Qt5 from Qt4. According to the developer, no code
>> was changed other than what was required to migrate the app to the
>> Qt5 namespace.
>>
>> -----Original Message-----
>> From: DRC [mailto:dco...@us...]
>> Sent: Tuesday, 29 July 2014 3:29 PM
>> To: vir...@li...
>> Subject: Re: [VirtualGL-Users] Issues with Qt 5
>>
>> Have you been experiencing other issues besides just the resize issue?
>>
>> On 7/25/14 5:17 AM, Pet...@cs... wrote:
>>> Hi All,
>>>
>>> We have been having trouble running OpenGL Qt5 apps with VirtualGL,
>>> including 2.4 beta1. One easily reproducible issue is that the OpenGL
>>> examples supplied with Qt5 don't resize with their windows when run
>>> with VirtualGL. This is seen with both the Qt 5.3 we built ourselves
>>> and the Qt 5.2 shipped with Ubuntu 14.04 in the qtbase5-examples
>>> package.
>>>
>>> On Ubuntu, running the following command and resizing the window
>>> produces six lines as per the trace below. Should we see a
>>> glXCreatePbuffer call if all is working properly?
>>>
>>> vglrun +tr /usr/lib/x86_64-linux-gnu/qt5/examples/opengl/cube/cube
>>>
>>> [VGL] glXMakeCurrent (dpy=0x006c75a0(:1) drawable=0x01400006
>>> ctx=0x0089fed0 config=0x00000135(0x135) drawable=0x01200003
>>> renderer=Quadro K2000/PCIe/SSE2 ) 0.036001 ms
>>>
>>> [VGL] glXSwapBuffers (dpy=0x006c75a0(:1) drawable=0x01400006
>>> pbw->getglxdrawable()=0x01200003 ) 1.675844 ms
>>>
>>> [VGL] glXMakeCurrent (dpy=0x006c75a0(:1) drawable=0x01400006
>>> ctx=0x0089fed0 config=0x00000135(0x135) drawable=0x01200003
>>> renderer=Quadro K2000/PCIe/SSE2 ) 0.014782 ms
>>>
>>> [VGL] glViewport (x=0 y=0 width=737 height=590 ) 0.006914 ms
>>>
>>> [VGL] glXMakeCurrent (dpy=0x006c75a0(:1) drawable=0x01400006
>>> ctx=0x0089fed0 config=0x00000135(0x135) drawable=0x01200003
>>> renderer=Quadro K2000/PCIe/SSE2 ) 0.010014 ms
>>>
>>> [VGL] glXSwapBuffers (dpy=0x006c75a0(:1) drawable=0x01400006
>>> pbw->getglxdrawable()=0x01200003 ) 1.435041 ms
>>>
>>> Regards,
>>>
>>> Peter

From: DRC <dco...@us...> - 2014-10-03 22:07:06

Sorry for the delayed reply. I finally got a chance to try this, and I don't observe the issue you're reporting below. With both TurboVNC 1.2.x and the evolving 2.0 X server, I can do '/opt/TurboVNC/bin/vncserver -query localhost -noxstartup', and as long as the display manager has XDMCP enabled, I get a login screen whenever I connect. When I log out of the window manager, however, it doesn't return me to the login screen for some reason. Not sure why.

XDMCP is not very secure, so few people use it in a production environment, and thus it's likely that no one has really played around with it much in TurboVNC. It's better to have each user log in with SSH (or a web portal) and start their own Xvnc session under their own user account. If you can identify a specific bug that prevents it from working in TurboVNC, I'm happy to fix it, but it seems to work for me.

On 3/11/14 6:34 AM, Isamu Yamashita wrote:
> Hi folks,
>
> When I used a newer TurboVNC with the option "-query" to enable XDMCP,
> I got the following errors, and Xvnc did not work normally. This issue
> happened in both 1.2.1 and 1.2.80. I have used 1.1 with XDMCP so far
> and have never seen the issue before.
>
> After executing the following command
>
> /opt/turbovnc/bin/vncserver -query localhost -noxstartup
>
> I got the output message in ~/.vnc/localhost.localdomain\:1.log as
> follows:
>
> -----------------------------
>
> TurboVNC Server (Xvnc) 32-bit v1.2.1 (build 20131122)
> Copyright (C) 1999-2013 The VirtualGL Project and many others (see
> README.txt)
> Visit http://www.virtualgl.org for more information on TurboVNC
>
> 10/03/2014 21:22:45 Using auth configuration file
> /etc/turbovncserver-auth.conf
> 10/03/2014 21:22:45 Enabled authentication method 'vnc'
> 10/03/2014 21:22:45 Enabled authentication method 'otp'
> 10/03/2014 21:22:45 Enabled authentication method 'pam-userpwd'
> 10/03/2014 21:22:45 Desktop name 'TurboVNC: localhost.localdomain:1
> (yamashita)' (localhost.localdomain:1)
> 10/03/2014 21:22:45 Protocol versions supported: 3.3, 3.7, 3.8, 3.7t, 3.8t
> 10/03/2014 21:22:45 Listening for VNC connections on TCP port 5901
> 10/03/2014 21:22:45 Interface 0.0.0.0
> 10/03/2014 21:22:45 Listening for HTTP connections on TCP port 5801
> 10/03/2014 21:22:45 URL http://localhost.localdomain:5801
> 10/03/2014 21:22:45 Interface 0.0.0.0
> 10/03/2014 21:22:45 Framebuffer: BGRX 8/8/8/8
> 10/03/2014 21:22:45 VNC extension running!
> XDM: Alive respose indicates session dead, declaring session dead
> TurboVNC Server (Xvnc) 32-bit v1.2.1 (build 20131122)
> Copyright (C) 1999-2013 The VirtualGL Project and many others (see
> README.txt)
> Visit http://www.virtualgl.org for more information on TurboVNC
>
> 10/03/2014 21:22:45 Using auth configuration file
> /etc/turbovncserver-auth.conf
> 10/03/2014 21:22:45 Enabled authentication method 'vnc'
> 10/03/2014 21:22:45 Enabled authentication method 'otp'
> 10/03/2014 21:22:45 Enabled authentication method 'pam-userpwd'
> 10/03/2014 21:22:45 Desktop name 'TurboVNC: localhost.localdomain:1
> (yamashita)' (localhost.localdomain:1)
> 10/03/2014 21:22:45 Protocol versions supported: 3.3, 3.7, 3.8, 3.7t, 3.8t
> 10/03/2014 21:22:45 Framebuffer: BGRX 8/8/8/8
> 10/03/2014 21:22:45 VNC extension running!
> AUDIT: Mon Mar 10 21:22:45 2014: 2610 Xvnc: client 1 rejected from IP
> 127.0.0.1 port 57791
> Auth name: MIT-MAGIC-COOKIE-1 ID: -1
> AUDIT: Mon Mar 10 21:22:45 2014: 2610 Xvnc: client 1 rejected from IP
> 127.0.0.1 port 57792
> Auth name: MIT-MAGIC-COOKIE-1 ID: -1
> AUDIT: Mon Mar 10 21:22:45 2014: 2610 Xvnc: client 1 rejected from IP
> 127.0.0.1 port 57793
> Auth name: MIT-MAGIC-COOKIE-1 ID: -1
> XDM: Alive respose indicates session dead, declaring session dead
>
> -------------------------
>
> My OS is CentOS 6.2, and all security settings (SELinux and iptables)
> are disabled.
>
> Could anyone give me some advice?
>
> Regards,
>
> Isamu

From: Göbbert, J. H. <goe...@vr...> - 2014-10-03 05:53:54

Yes, that link gives just the answer. Thanks. I was searching the internet with the wrong keywords, like "virtualGL vs. multiple remote 3D X-servers" or similar.

best,
Jens Henrik

________________________________________
From: DRC [dco...@us...]
Sent: Friday, October 03, 2014 7:45 AM
To: vir...@li...
Subject: Re: [VirtualGL-Users] virtualGL vs. multiple remote 3D X-servers

A lot of what I posted here is just a more nuts-and-bolts version of the same information that is provided in the background article: http://www.virtualgl.org/About/Background [...]

From: DRC <dco...@us...> - 2014-10-03 05:45:40

A lot of what I posted here is just a more nuts-and-bolts version of the same information that is provided in the background article: http://www.virtualgl.org/About/Background

The nuts and bolts change over time, and I don't really want to have to re-write that article every time they do. The basic 10,000-foot managerial explanation has not changed in 10 years, and it's basically this: VirtualGL enables multi-user GPU sharing/load balancing, whereas the screen scraping/dedicated GPU approach doesn't.

On 10/3/14 12:13 AM, Göbbert, Jens Henrik wrote:
> Hi VirtualGL,
>
> thanks for your detailed answer - we searched, but could not find a
> good explanation like this:
> [...]

From: Göbbert, J. H. <goe...@vr...> - 2014-10-03 05:14:00

Hi VirtualGL,

thanks for your detailed answer - we searched, but could not find a good explanation like this:

> Efficiency and cost. VirtualGL and TurboVNC are only going to take
> up resources when they are running. A full-blown X server has a much
> larger footprint. The screen scraper will eat up CPU cycles even if the
> 3D application is sitting there doing nothing, because the screen
> scraper is having to poll for changes in the pixels.

best,
Jens Henrik

P.S.: You might want to mention the 'screen scraper' approach in the documentation/introduction/wiki and compare it with VirtualGL.

________________________________________
From: DRC [dco...@us...]
Sent: Thursday, October 02, 2014 10:26 PM
To: vir...@li...
Subject: Re: [VirtualGL-Users] virtualGL vs. multiple remote 3D X-servers

For starters, GLXgears is not a GPU benchmark. It is a CPU benchmark, because its geometry and window size are so small that its frame rate is almost entirely dependent upon CPU overhead. [...]

From: DRC <dco...@us...> - 2014-10-03 04:51:06

On 10/2/14 11:44 PM, DRC wrote:
> To pop the stack on the original poster's questions, at the OpenGL
> level, you can get linear or even super-linear scaling of the GPU
> resource among multiple users. [...]

I should also mention that another constraint you'll have in a real-world environment is reading back the pixels, and you may exhaust your bus bandwidth before you actually exhaust your GPU processing power. But my point is-- people throw quite a few users onto their GPUs. Santos, one of our largest (if not our largest) installations, provisions about 13-16 users per high-end nVidia pipe, although the user workloads vary greatly (oil & gas apps run the gamut of everything from straight 2D X11 all the way to monster 3D visualization.)
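To put a rough number on that readback constraint, here is a back-of-envelope sketch (the resolution, pixel format, and frame rate are assumptions for illustration, not measurements from this thread):

/* Back-of-envelope readback bandwidth: one user's 1920x1200 RGBA
 * framebuffer read back at 30 frames/second. */
#include <stdio.h>

int main(void)
{
    const double width = 1920.0, height = 1200.0;
    const double bytes_per_pixel = 4.0;   /* RGBA */
    const double frames_per_sec = 30.0;

    double mb_per_sec =
        width * height * bytes_per_pixel * frames_per_sec / 1.0e6;
    /* ~276 MB/s per user; multiply by the number of concurrent users
     * and compare against the practical glReadPixels throughput of the
     * bus and driver to see where the readback wall is. */
    printf("readback: %.0f MB/s per user\n", mb_per_sec);
    return 0;
}
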
From: DRC <dco...@us...> - 2014-10-03 04:45:25

We crossed the streams. I discovered exactly the same thing (refer to my previous message.)

On 10/2/14 10:38 PM, Nathan Kidd wrote:
> On 02/10/14 09:16 PM, DRC wrote:
>> Note that there are actually 61 spheres in the default configuration
>> (20 per ring + the 1 in the center), so apparently the polygon limit
>> is around 60,000 per sphere. It might simply be that the polygon
>> count is clamped to a 16-bit value or something.
>
> Ok, I won't be lazy. `apt-get source libglu1-mesa` and a little
> LibreOffice Calc later, the restriction is quite plain: [...]

From: DRC <dco...@us...> - 2014-10-03 04:44:35

Confirmed in the libGLU source-- slices and stacks are clamped at 240, so 57600 polys per sphere is the max-- which very nicely fits your observation (57600 * 61 spheres = 3513600 polys.)

I modified GLXspheres such that it calculates whether this limit will be reached, warns the user, and prints the actual polygon count, taking the limit into account. I also added an option (-n) for increasing the sphere count, which enables polygon counts higher than 3.5 million. For instance:

> glxspheres -n 240 -p 10000000
Polygons in scene: 10029456 (241 spheres * 41616 polys/spheres)
Visual ID of window: 0x2c
Context is Direct
OpenGL Renderer: Quadro K5000/PCIe/SSE2
92.948421 frames/sec - 103.730438 Mpixels/sec
95.584804 frames/sec - 106.672641 Mpixels/sec

> glxspheres -n 2400 -p 100000000
Polygons in scene: 99920016 (2401 spheres * 41616 polys/spheres)
Visual ID of window: 0x2c
Context is Direct
OpenGL Renderer: Quadro K5000/PCIe/SSE2
10.136359 frames/sec - 11.312176 Mpixels/sec
9.982275 frames/sec - 11.140219 Mpixels/sec

That is much more along the lines of what I would expect from the K5000 (about a billion quads/second. The press usually reports it at 1.8 billion tris/sec, so that number makes sense.)

To pop the stack on the original poster's questions, at the OpenGL level, you can get linear or even super-linear scaling of the GPU resource among multiple users. If I run 5 sessions of GLXspheres at a time, each will perform at about 200 million quads/second. If I run 10, each will perform at about 100 million quads/second. If each user is working with a 1-million-polygon model, then that's over 30 users at 30 frames/second. Obviously there will be other constraints on this in a real-world environment-- VirtualGL and TurboVNC have some CPU overhead to compress/deliver the 3D images to the client, users might be dealing with larger models, applications that use a lot of textures won't scale as well because they'll exhaust GPU memory, etc. However, you're also not going to have all 30 users banging away all the time. Some of them will be down the hall, some of them will be reading e-mail, some of them won't even be in the office, and some will be staring at the model and making small changes rather than manipulating the entire scene.

On 10/2/14 5:13 PM, Nathan Kidd wrote:
> FYI I recently was testing the theoretical limit on a card and went
> down the path of:
> `glxspheres -p 1000000` "no difference"
> `glxspheres -p 10000000` "hmmm, not breaking a sweat"
> `glxspheres -p 1000000000` "wow" [...]
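The saturation arithmetic described above can be sketched as follows. The slices = stacks = sqrt(polys/sphere) relationship is inferred from the output above (204^2 = 41616), and the helper name is illustrative, not GLXspheres' actual source:

/* Illustrative reconstruction of the polygon-count saturation: libGLU
 * limits gluSphere() slices/stacks to CACHE_SIZE-1, so the effective
 * polygon count stops growing unless the sphere count rises. */
#include <math.h>
#include <stdio.h>

#define CACHE_SIZE 240  /* from Mesa libGLU's quad.c */

static int effective_polys(int requested, int spheres)
{
    int slices = (int)sqrt((double)requested / spheres);
    if (slices >= CACHE_SIZE) {
        slices = CACHE_SIZE - 1;
        fprintf(stderr, "WARNING: polygon count clamped by libGLU\n");
    }
    return slices * slices * spheres;  /* polygons actually drawn */
}

int main(void)
{
    printf("%d\n", effective_polys(1000000000, 61));  /* saturates */
    printf("%d\n", effective_polys(100000000, 2401)); /* 99920016 */
    return 0;
}

(Build with -lm.)
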
From: Nathan K. <nat...@sp...> - 2014-10-03 03:38:44
On 02/10/14 09:16 PM, DRC wrote:
> Note that there are actually 61 spheres in the default configuration (20
> per ring + the 1 in the center), so apparently the polygon limit is
> around 60,000 per sphere. It might simply be that the polygon count is
> clamped to a 16-bit value or something.
Ok, I won't be lazy. `apt-get source libglu1-mesa` and a little
LibreOffice Calc later, the restriction is quite plain:
quad.c:
#define CACHE_SIZE 240
...
gluSphere(GLUquadric *qobj, GLdouble radius, GLint slices, GLint stacks)
if (slices >= CACHE_SIZE) slices = CACHE_SIZE-1;
if (stacks >= CACHE_SIZE) stacks = CACHE_SIZE-1;
And the solid drawing path is roughly:
(glVertex + glNormal) * (slices*2 + 2 + (stacks - 2)*slices*2)
Thus max ROPs/sphere = 227532, * 61 spheres = 13879452 total ROPs.
Looks like the easiest path to increased poly count is to increase the
number of spheres.
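A quick standalone check of the ROP arithmetic above (CACHE_SIZE and the call-count formula are taken verbatim from the analysis; the program itself is a sketch):

/* Verify the ROP arithmetic: with slices = stacks = CACHE_SIZE-1, each
 * vertex costs 2 GL calls (glVertex + glNormal). */
#include <stdio.h>

#define CACHE_SIZE 240

int main(void)
{
    long slices = CACHE_SIZE - 1, stacks = CACHE_SIZE - 1;
    long per_sphere = 2 * (slices * 2 + 2 + (stacks - 2) * slices * 2);
    printf("ROPs/sphere = %ld, total = %ld\n",
           per_sphere, per_sphere * 61);  /* 227532 and 13879452 */
    return 0;
}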
-Nathan
--
Nathan Kidd OpenText Connectivity Solutions nk...@op...
Software Developer http://connectivity.opentext.com +1 905-762-6001
From: DRC <dco...@us...> - 2014-10-03 01:41:56

Well, gee, that makes a lot more sense. Dean, if you're reading this, the last column in that chart I sent you a year ago should read "3.5 million" and not "10 million." :|

I made a note about that in the usage screen of GLXspheres. I think I should further modify it so that it allows the sphere count to be adjusted. It was kept static in order to guarantee that the same image was always generated.

Note that there are actually 61 spheres in the default configuration (20 per ring + the 1 in the center), so apparently the polygon limit is around 60,000 per sphere. It might simply be that the polygon count is clamped to a 16-bit value or something.

On 10/2/14 5:13 PM, Nathan Kidd wrote:
> FYI I recently was testing the theoretical limit on a card and went
> down the path of:
> `glxspheres -p 1000000` "no difference"
> `glxspheres -p 10000000` "hmmm, not breaking a sweat"
> `glxspheres -p 1000000000` "wow"
> [...]

From: Nathan K. <nat...@sp...> - 2014-10-02 22:14:00

On 02/10/14 04:26 PM, DRC wrote:
> On the K5000 that nVidia was kind enough to send me for testing, I can
> literally max out the geometry size on GLXspheres-- over a billion
> polys-- and it keeps chugging along at 300 fps, because it's using
> display lists by default (and thus, once the geometry is downloaded
> once to the GPU, subsequent frames just instruct the GPU to reuse that
> same geometry.)

FYI, I recently was testing the theoretical limit on a card and went down the path of:

`glxspheres -p 1000000` "no difference"
`glxspheres -p 10000000` "hmmm, not breaking a sweat"
`glxspheres -p 1000000000` "wow"

Then I took a trace and found out that the number of actual ROPs was no different between 10 million and 1 billion. gluSphere() apparently hits a limit on how much geometry it produces and won't go higher (increasing the window size didn't do anything; I didn't read the GLU source).

Bottom line: `glxspheres -p 3500000` (which equates to a little over 14 million ROPs per frame) is the highest load the stock glxspheres/libGLU will produce.

-Nathan

--
Nathan Kidd OpenText Connectivity Solutions nk...@op...
Software Developer http://connectivity.opentext.com +1 905-762-6001

From: DRC <dco...@us...> - 2014-10-02 20:26:55

For starters, GLXgears is not a GPU benchmark. It is a CPU benchmark, because its geometry and window size are so small that its frame rate is almost entirely dependent upon CPU overhead. Please (and I'm saying this to the entire 3D application community, not just you) stop quoting GLXgears frame rates as if they have any relevance to GPU performance.

GLXspheres (which is provided with VirtualGL) is a much better solution if you need something quick & dirty, but you also have to understand what it is you're benchmarking. GLXspheres is designed primarily as an image benchmark for remote display systems, so it is meant to be limited by the drawing speed of the remote display solution, not by the 3D rendering speed. On the K5000 that nVidia was kind enough to send me for testing, I can literally max out the geometry size on GLXspheres-- over a billion polys-- and it keeps chugging along at 300 fps, because it's using display lists by default (and thus, once the geometry is downloaded once to the GPU, subsequent frames just instruct the GPU to reuse that same geometry.)

Not every app uses display lists, though, so if you want to use GLXspheres as a quick & dirty OpenGL pipeline benchmark, then I suggest boosting its geometry size to 500,000 or a million polygons and enabling immediate mode (-m -p 500000). This will give a better sense of what a "busy" immediate-mode OpenGL app might do.

When your benchmark is running at hundreds of frames per second, that's a clue that it isn't testing anything resembling a real-world use case. In the real world, you're never going to see more than 60 fps because of your monitor's refresh rate, and most humans can't perceive any difference after 25-30 fps. In real-world visualization scenarios, if things get too fast, then the engineers will just start using larger (more accurate) models. :)

So why would you use VirtualGL? Several reasons:

(1) The approach you're describing, in which multiple 3D X servers are served up with VNC, requires screen scraping. Screen scraping periodically reads the pixels on the framebuffer and compares them against a snapshot of the pixels taken earlier. There are some solutions-- the RealVNC/TigerVNC X.org module and x11vnc, for instance-- that are a little more sophisticated than just a plain screen scraper. They use the X Damage extension and other techniques to get hints as to which part of the display to read back, but these techniques don't work well (or sometimes at all) with hardware-accelerated 3D. Either the OpenGL pixels don't update at all, or OpenGL drawing is out of sync with the delivery of pixels to the client (and thus you get tearing artifacts.)

I personally tested the version of x11vnc that ships with libvncserver 0.9.9 (libvncserver 0.9.9 has the TurboVNC extensions, so at the library level at least, it's a fast solution.) I observed bad tearing artifacts for a few seconds, and then it would hang because the X server got too busy processing the 3D drawing and couldn't spare any cycles for x11vnc (X servers are single-threaded.) Turning off X Damage support in x11vnc made the solution at least usable, but without X Damage support, x11vnc is mainly just polling the display, so it will incur a lot of overhead. This became particularly evident when using interactive apps (e.g. glxspheres -i).

I couldn't get the TigerVNC 1.3.1 X.org module to work at all, and the TigerVNC 1.1.0 X.org module (the one that ships with RHEL 6) did not display any pixels from the OpenGL app.

(2) The ability to share a GPU among multiple users.
VirtualGL installations often have dozens of users sharing the GPU, because not all of them will be using it simultaneously, and even when they are, they might only need to process a small model that uses 1% of the GPU's capacity. Like I said above, a K5000 pipe can process billions of polys/second. That's the equivalent performance of at least a handful of desktop GPUs (if not more) combined. It's a lot more cost-effective to buy beefy servers with really beefy multi-pipe GPU configurations and provision the servers to handle 40 or 50 users. You can't do that if each user has a dedicated GPU, because you can't install 40 or 50 dedicated GPUs into a single machine.

(3) Efficiency and cost. VirtualGL and TurboVNC are only going to take up resources when they are running. A full-blown X server has a much larger footprint. The screen scraper will eat up CPU cycles even if the 3D application is sitting there doing nothing, because the screen scraper is having to poll for changes in the pixels. TurboVNC/VirtualGL, on the other hand, will not take up CPU cycles unless the 3D application is actually drawing something. Furthermore, if the user goes to lunch, their GPU is now sitting completely idle. If the user only needs to process a 50,000-polygon model, then their dedicated GPU is being grossly underutilized.

On 10/2/14 12:36 PM, Göbbert, Jens Henrik wrote:
> Hi VirtualGL,
>
> I am using virtualGL since some years now to get 3d-accelerated remote
> visualization possible via TurboVNC on the front-end nodes of a
> compute cluster. [...]
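The display-list behavior mentioned above is what makes GLXspheres so cheap for the GPU after the first frame. The pattern, in classic OpenGL 1.x terms, is sketched below (illustrative only, not GLXspheres' actual source):

/* Display-list pattern: geometry crosses the bus once at compile time;
 * every later frame replays it from GPU-side storage with one call. */
#include <GL/gl.h>

static GLuint scene_list;

void build_scene_once(void)
{
    scene_list = glGenLists(1);
    glNewList(scene_list, GL_COMPILE);
    /* ... glBegin()/glNormal3f()/glVertex3f()/glEnd() calls for all of
     * the spheres would go here; this body executes only once ... */
    glEndList();
}

void draw_frame(void)
{
    /* Immediate mode (GLXspheres' -m switch) would re-issue every
     * vertex here instead; with a display list the whole scene is a
     * single call. */
    glCallList(scene_list);
}
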
|
From: Robert G. <ra...@rd...> - 2014-10-02 19:33:33
|
Someone please correct me if I am wrong here, but isn't 600 fps low for 3D? I get that same performance from software GLX in TigerVNC. When running the same tests on an actual hardware X server with 3D acceleration, I get numbers in the thousands. The 600 fps is what I get when CPU resources are simulating the 3D operations instead of a GPU actually doing the rendering.

On Thu, Oct 2, 2014 at 1:36 PM, Göbbert, Jens Henrik <goe...@vr...> wrote:
> Hi VirtualGL,
>
> I have been using VirtualGL for some years now to provide 3D-accelerated
> remote visualization via TurboVNC on the front-end nodes of our
> compute cluster.
>
> Just recently I was asked why a remote 3D-accelerated desktop scenario is
> not possible with multiple 3D-accelerated X servers + VNC, each dedicated
> to a single user.
>
> I cannot answer this question as I would like to, as it seems to run fine:
>
> We tested running multiple 3D-accelerated X servers on the same machine
> with a single GPU without any problems.
>
> glxgears showed 600 frames per second on both at the same time -> both
> X servers were 3D-accelerated.
>
> Why shouldn't I go for multiple 3D X servers (one for each user)
>
> and send their framebuffers via VNC to the workstations
>
> instead of using VirtualGL?
>
> Best,
>
> Jens Henrik
>
> --
> Dipl.-Ing. Jens Henrik Göbbert
>
> IT Center - Computational Science & Engineering
> Computer Science Department - Virtual Reality Group
> Jülich Aachen Research Alliance - JARA-HPC
>
> IT Center
> RWTH Aachen University
> Seffenter Weg 23
> 52074 Aachen, Germany
> Phone: +49 241 80-24381
> goe...@vr...
> www.vr.rwth-aachen.de
> http://www.jara.org |
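As a quick sanity check of this point, GLX can be asked directly whether rendering is hardware-accelerated (a sketch; assumes glxinfo is installed, and the exact renderer strings vary by driver):

    # "direct rendering: Yes" alone does not prove hardware acceleration
    glxinfo | grep "direct rendering"

    # The renderer string is the giveaway: a software rasterizer reports
    # something like "llvmpipe" or "Software Rasterizer", whereas hardware
    # acceleration reports the GPU name (e.g. "Quadro K5000/PCIe/SSE2")
    glxinfo | grep "OpenGL renderer"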
|
From: Göbbert, J. H. <goe...@vr...> - 2014-10-02 18:11:45
|
Hi VirtualGL,

I have been using VirtualGL for some years now to provide 3D-accelerated remote visualization via TurboVNC on the front-end nodes of our compute cluster.

Just recently I was asked why a remote 3D-accelerated desktop scenario is not possible with multiple 3D-accelerated X servers + VNC, each dedicated to a single user.

I cannot answer this question as I would like to, as it seems to run fine: We tested running multiple 3D-accelerated X servers on the same machine with a single GPU without any problems. glxgears showed 600 frames per second on both at the same time -> both X servers were 3D-accelerated.

Why shouldn't I go for multiple 3D X servers (one for each user) and send their framebuffers via VNC to the workstations instead of using VirtualGL?

Best,
Jens Henrik

--
Dipl.-Ing. Jens Henrik Göbbert

IT Center - Computational Science & Engineering
Computer Science Department - Virtual Reality Group
Jülich Aachen Research Alliance - JARA-HPC

IT Center
RWTH Aachen University
Seffenter Weg 23
52074 Aachen, Germany
Phone: +49 241 80-24381
goe...@vr...
www.vr.rwth-aachen.de
http://www.jara.org |
|
From: DRC <dco...@us...> - 2014-09-23 20:16:38
|
Please move this discussion to TurboVNC-Users, as I asked you to do. There are people on this list who have no interest in TurboVNC.

DRC

On 9/23/14 2:57 PM, Dieter Blaas wrote:
> Hi,
> sorry for still posting here, but I did not want to interrupt my thread:
> Before installing on the server running CentOS, I have now installed
> turbovnc_1.2.80_amd64.deb and turbovnc_1.2.80_i386.deb on two laptops
> running Ubuntu 14.04, 64- and 32-bit respectively. Regardless of where
> I connect from, I get a black screen and a message (in a box titled
> "TurboVNC Viewer") saying 'connection closed'.
>
> This is in ./vnc/ubuntu:1.log:
> .............................
> 23/09/2014 21:37:48 Using protocol version 3.8
> 23/09/2014 21:37:48 Enabling TightVNC protocol extensions
> 23/09/2014 21:37:53 Full-control authentication enabled for 192.168.1.9
> 23/09/2014 21:37:53 Pixel format for client 192.168.1.9:
> 23/09/2014 21:37:53 32 bpp, depth 24, little endian
> 23/09/2014 21:37:53 true colour: max r 255 g 255 b 255, shift r 16 g 8 b 0
> 23/09/2014 21:37:53 no translation needed
> 23/09/2014 21:37:53 Enabling Desktop Size protocol extension for client 192.168.1.9
> 23/09/2014 21:37:53 Enabling Extended Desktop Size protocol extension for client 192.168.1.9
> 23/09/2014 21:37:53 rfbProcessClientNormalMessage: ignoring unknown encoding -307 (fffffecd)
> 23/09/2014 21:37:53 Enabling LastRect protocol extension for client 192.168.1.9
> 23/09/2014 21:37:53 Enabling Continuous Updates protocol extension for client 192.168.1.9
> 23/09/2014 21:37:53 Enabling Fence protocol extension for client 192.168.1.9
> 23/09/2014 21:37:53 Using tight encoding for client 192.168.1.9
> 23/09/2014 21:37:53 Using JPEG subsampling 2, Q42 for client 192.168.1.9
> 23/09/2014 21:37:53 Using JPEG quality 30 for client 192.168.1.9
> 23/09/2014 21:37:53 Using JPEG subsampling 1 for client 192.168.1.9
> 23/09/2014 21:37:53 Using Tight compression level 1 for client 192.168.1.9
> Segmentation fault at address (nil)
>
> Fatal server error:
> Caught signal 11 (Segmentation fault). Server aborting
> XIO: fatal IO error 11 (Resource temporarily unavailable) on X server ":1"
> after 14 requests (14 known processed) with 0 events remaining.
>
> What is wrong?
> Thanks, D.
>
> On 17.09.2014 15:52, DRC wrote:
>> TurboVNC now has its own set of mailing lists, so please subscribe to TurboVNC-Users instead of or in addition to VirtualGL-Users. This list is for VirtualGL only.
>>
>> Please try the pre-release build of the TurboVNC server, available on TurboVNC.org. It has a completely overhauled keyboard handler. If something is still not working in that pre-release, then I can fix it, but the ancient keyboard handler in 1.2.x is not supportable anymore.
>>
>>> On Sep 17, 2014, at 2:59 AM, Dieter Blaas <die...@me...> wrote:
>>>
>>> Hi,
>>> I am using TurboVNC (in Ubuntu 14.04 and in Win7) to connect to
>>> VirtualGL running on a CentOS 6.5 machine. Everything is fine except for
>>> the lack of transmission of special characters like backtick, acute, and
>>> caret (but @ and | do work!). When connecting to the same machine via
>>> ssh, all characters are displayed correctly. How can I fix this?
>>> Thanks, D. |
|
From: Dieter B. <die...@me...> - 2014-09-23 19:57:48
|
Hi,
sorry for still posting here, but I did not want to interrupt my thread:
Before installing on the server running CentOS, I have now installed
turbovnc_1.2.80_amd64.deb and turbovnc_1.2.80_i386.deb on two laptops
running Ubuntu 14.04, 64- and 32-bit respectively. Regardless of where
I connect from, I get a black screen and a message (in a box titled "TurboVNC
Viewer") saying 'connection closed'.
This is in ./vnc/ubuntu:1.log:
.............................
23/09/2014 21:37:48 Using protocol version 3.8
23/09/2014 21:37:48 Enabling TightVNC protocol extensions
23/09/2014 21:37:53 Full-control authentication enabled for 192.168.1.9
23/09/2014 21:37:53 Pixel format for client 192.168.1.9:
23/09/2014 21:37:53 32 bpp, depth 24, little endian
23/09/2014 21:37:53 true colour: max r 255 g 255 b 255, shift r 16 g 8 b 0
23/09/2014 21:37:53 no translation needed
23/09/2014 21:37:53 Enabling Desktop Size protocol extension for client 192.168.1.9
23/09/2014 21:37:53 Enabling Extended Desktop Size protocol extension for client 192.168.1.9
23/09/2014 21:37:53 rfbProcessClientNormalMessage: ignoring unknown encoding -307 (fffffecd)
23/09/2014 21:37:53 Enabling LastRect protocol extension for client 192.168.1.9
23/09/2014 21:37:53 Enabling Continuous Updates protocol extension for client 192.168.1.9
23/09/2014 21:37:53 Enabling Fence protocol extension for client 192.168.1.9
23/09/2014 21:37:53 Using tight encoding for client 192.168.1.9
23/09/2014 21:37:53 Using JPEG subsampling 2, Q42 for client 192.168.1.9
23/09/2014 21:37:53 Using JPEG quality 30 for client 192.168.1.9
23/09/2014 21:37:53 Using JPEG subsampling 1 for client 192.168.1.9
23/09/2014 21:37:53 Using Tight compression level 1 for client 192.168.1.9
Segmentation fault at address (nil)
Fatal server error:
Caught signal 11 (Segmentation fault). Server aborting
XIO: fatal IO error 11 (Resource temporarily unavailable) on X server ":1"
after 14 requests (14 known processed) with 0 events remaining.
What is wrong?
Thanks, D.
------------------------------------------------------------------------
Dieter Blaas,
Max F. Perutz Laboratories
Medical University of Vienna,
Inst. Med. Biochem., Vienna Biocenter (VBC),
Dr. Bohr Gasse 9/3,
A-1030 Vienna, Austria,
Tel: 0043 1 4277 61630,
Fax: 0043 1 4277 9616,
e-mail: die...@me...
------------------------------------------------------------------------
On 17.09.2014 15:52, DRC wrote:
> TurboVNC now has its own set of mailing lists, so please subscribe to TurboVNC-Users instead of or in addition to VirtualGL-Users. This list is for VirtualGL only.
>
> Please try the pre-release build of the TurboVNC server, available on TurboVNC.org. It has a completely overhauled keyboard handler. If something is still not working in that pre-release, then I can fix it, but the ancient keyboard handler in 1.2.x is not supportable anymore.
>
>> On Sep 17, 2014, at 2:59 AM, Dieter Blaas <die...@me...> wrote:
>>
>> Hi,
>> I am using TurboVNC (in Ubuntu 14.04 and in Win7) to connect to
>> VirtualGL running on a CentOS 6.5 machine. Everything is fine except for
>> the lack of transmission of special characters like backtick, acute, and
>> caret (but @ and | do work!). When connecting to the same machine via
>> ssh, all characters are displayed correctly. How can I fix this?
>> Thanks, D.
>>
>> ------------------------------------------------------------------------
>> Dieter Blaas,
>> Max F. Perutz Laboratories
>> Medical University of Vienna,
>> Inst. Med. Biochem., Vienna Biocenter (VBC),
>> Dr. Bohr Gasse 9/3,
>> A-1030 Vienna, Austria,
>> Tel: 0043 1 4277 61630,
>> Fax: 0043 1 4277 9616,
>> e-mail: die...@me...
>> ------------------------------------------------------------------------
|
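Given the Xvnc segfault in the log above, the next diagnostic step would be a stack trace; one generic way to capture it (a sketch: assumes bash, that gdb is installed, that TurboVNC is in its default /opt/TurboVNC prefix, and that the system's core_pattern leaves core files in the working directory):

    # Enable core dumps in the shell that launches the session, then restart it
    ulimit -c unlimited
    /opt/TurboVNC/bin/vncserver :1

    # After the crash, print the backtrace from the core file
    gdb /opt/TurboVNC/bin/Xvnc core -ex bt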
|
From: Dieter B. <die...@me...> - 2014-09-18 14:46:38
|
Sorry, it was set by a mistake.....

------------------------------------------------------------------------
Dieter Blaas,
Max F. Perutz Laboratories
Medical University of Vienna,
Inst. Med. Biochem., Vienna Biocenter (VBC),
Dr. Bohr Gasse 9/3,
A-1030 Vienna, Austria,
Tel: 0043 1 4277 61630,
Fax: 0043 1 4277 9616,
e-mail: die...@me...
------------------------------------------------------------------------

On 18.09.2014 16:22, DRC wrote:
> Please translate your message. I do not speak German.
>
> On Sep 18, 2014, at 8:16 AM, Dieter Blaas <die...@me...> wrote:
>
>> Thanks, I just received the mail below! I will do it myself! If there are
>> problems, I will get back to you.
>> I assume CentOS uses rpm!
>> Best regards, Dieter
>>
>> ----------------------------------------
>> Hello Dieter,
>>
>> I have added your request to our task list. As soon as I have more
>> information for you, I will get back to you.
>>
>> Regards, M
>>
>> On 18.09.2014 15:19, Kevin Van Workum wrote:
>>> http://www.turbovnc.org/DeveloperInfo/PreReleases
>>>
>>> On Thu, Sep 18, 2014 at 5:54 AM, Dieter Blaas <die...@me...> wrote:
>>>> Many thanks for the hint, but on TurboVNC.org I do not find any
>>>> pre-release. The newest version on this page is turbovnc 1.2.1-20131122,
>>>> which is already installed! Am I missing anything?
>>>> D. |
|
From: DRC <dco...@us...> - 2014-09-18 14:22:43
|
Please translate your message. I do not speak German.

> On Sep 18, 2014, at 8:16 AM, Dieter Blaas <die...@me...> wrote:
>
> Thanks, I just received the mail below! I will do it myself! If there are
> problems, I will get back to you.
> I assume CentOS uses rpm!
> Best regards, Dieter
>
> ----------------------------------------
> Hello Dieter,
>
> I have added your request to our task list. As soon as I have more
> information for you, I will get back to you.
>
> Regards, M
> ------------------------------------------------------------------------
> Dieter Blaas,
> Max F. Perutz Laboratories
> Medical University of Vienna,
> Inst. Med. Biochem., Vienna Biocenter (VBC),
> Dr. Bohr Gasse 9/3,
> A-1030 Vienna, Austria,
> Tel: 0043 1 4277 61630,
> Fax: 0043 1 4277 9616,
> e-mail: die...@me...
> ------------------------------------------------------------------------
>
> On 18.09.2014 15:19, Kevin Van Workum wrote:
>> http://www.turbovnc.org/DeveloperInfo/PreReleases
>>
>> On Thu, Sep 18, 2014 at 5:54 AM, Dieter Blaas <die...@me...> wrote:
>>> Many thanks for the hint, but on TurboVNC.org I do not find any
>>> pre-release. The newest version on this page is turbovnc 1.2.1-20131122,
>>> which is already installed! Am I missing anything?
>>> D. |
|
From: Dieter B. <die...@me...> - 2014-09-18 14:16:57
|
Thanks, I just received the mail below! I will do it myself! If there are problems, I will get back to you.
I assume CentOS uses rpm!
Best regards, Dieter

----------------------------------------
Hello Dieter,

I have added your request to our task list. As soon as I have more information for you, I will get back to you.

Regards, M
------------------------------------------------------------------------
Dieter Blaas,
Max F. Perutz Laboratories
Medical University of Vienna,
Inst. Med. Biochem., Vienna Biocenter (VBC),
Dr. Bohr Gasse 9/3,
A-1030 Vienna, Austria,
Tel: 0043 1 4277 61630,
Fax: 0043 1 4277 9616,
e-mail: die...@me...
------------------------------------------------------------------------

On 18.09.2014 15:19, Kevin Van Workum wrote:
> http://www.turbovnc.org/DeveloperInfo/PreReleases
>
> On Thu, Sep 18, 2014 at 5:54 AM, Dieter Blaas <die...@me...> wrote:
>> Many thanks for the hint, but on TurboVNC.org I do not find any
>> pre-release. The newest version on this page is turbovnc 1.2.1-20131122,
>> which is already installed! Am I missing anything?
>> D.
>>
>> On 17.09.2014 15:52, DRC wrote:
>>> TurboVNC now has its own set of mailing lists, so please subscribe to
>>> TurboVNC-Users instead of or in addition to VirtualGL-Users. This list
>>> is for VirtualGL only.
>>>
>>> Please try the pre-release build of the TurboVNC server, available on
>>> TurboVNC.org. It has a completely overhauled keyboard handler. If
>>> something is still not working in that pre-release, then I can fix it,
>>> but the ancient keyboard handler in 1.2.x is not supportable anymore. |
|
From: Kevin V. W. <va...@sa...> - 2014-09-18 13:42:33
|
http://www.turbovnc.org/DeveloperInfo/PreReleases

On Thu, Sep 18, 2014 at 5:54 AM, Dieter Blaas <die...@me...> wrote:
> Many thanks for the hint, but on TurboVNC.org I do not find any
> pre-release. The newest version on this page is turbovnc 1.2.1-20131122,
> which is already installed! Am I missing anything?
> D.
>
> On 17.09.2014 15:52, DRC wrote:
>> TurboVNC now has its own set of mailing lists, so please subscribe to
>> TurboVNC-Users instead of or in addition to VirtualGL-Users. This list
>> is for VirtualGL only.
>>
>> Please try the pre-release build of the TurboVNC server, available on
>> TurboVNC.org. It has a completely overhauled keyboard handler. If
>> something is still not working in that pre-release, then I can fix it,
>> but the ancient keyboard handler in 1.2.x is not supportable anymore.
>>
>>> On Sep 17, 2014, at 2:59 AM, Dieter Blaas <die...@me...> wrote:
>>>
>>> Hi,
>>> I am using TurboVNC (in Ubuntu 14.04 and in Win7) to connect to
>>> VirtualGL running on a CentOS 6.5 machine. Everything is fine except for
>>> the lack of transmission of special characters like backtick, acute, and
>>> caret (but @ and | do work!). When connecting to the same machine via
>>> ssh, all characters are displayed correctly. How can I fix this?
>>> Thanks, D.

--
Kevin Van Workum, PhD
Sabalcore Computing Inc.
"Where Data Becomes Discovery"
http://www.sabalcore.com
877-492-8027 ext. 1011 |
|
From: Dieter B. <die...@me...> - 2014-09-18 09:57:11
|
Many thanks for the hint, but on TurboVNC.org I do not find any pre-release. The newest version on this page is turbovnc 1.2.1-20131122, which is already installed! Am I missing anything?
D.

------------------------------------------------------------------------
Dieter Blaas,
Max F. Perutz Laboratories
Medical University of Vienna,
Inst. Med. Biochem., Vienna Biocenter (VBC),
Dr. Bohr Gasse 9/3,
A-1030 Vienna, Austria,
Tel: 0043 1 4277 61630,
Fax: 0043 1 4277 9616,
e-mail: die...@me...
------------------------------------------------------------------------

On 17.09.2014 15:52, DRC wrote:
> TurboVNC now has its own set of mailing lists, so please subscribe to
> TurboVNC-Users instead of or in addition to VirtualGL-Users. This list
> is for VirtualGL only.
>
> Please try the pre-release build of the TurboVNC server, available on
> TurboVNC.org. It has a completely overhauled keyboard handler. If
> something is still not working in that pre-release, then I can fix it,
> but the ancient keyboard handler in 1.2.x is not supportable anymore.
>
>> On Sep 17, 2014, at 2:59 AM, Dieter Blaas <die...@me...> wrote:
>>
>> Hi,
>> I am using TurboVNC (in Ubuntu 14.04 and in Win7) to connect to
>> VirtualGL running on a CentOS 6.5 machine. Everything is fine except for
>> the lack of transmission of special characters like backtick, acute, and
>> caret (but @ and | do work!). When connecting to the same machine via
>> ssh, all characters are displayed correctly. How can I fix this?
>> Thanks, D. |
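For reference, installing the pre-release packages discussed in this thread is a single command per platform (a sketch: the .deb filename is the one cited earlier in the thread, while the .rpm filename is an assumption patterned after it; CentOS does use rpm, as guessed elsewhere in the thread):

    # Debian/Ubuntu laptops
    sudo dpkg -i turbovnc_1.2.80_amd64.deb

    # CentOS/RHEL server (hypothetical filename)
    sudo rpm -U turbovnc-1.2.80.x86_64.rpm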
|
From: DRC <dco...@us...> - 2014-09-17 13:52:54
|
TurboVNC now has its own set of mailing lists, so please subscribe to TurboVNC-Users instead of or in addition to VirtualGL-Users. This list is for VirtualGL only.

Please try the pre-release build of the TurboVNC server, available on TurboVNC.org. It has a completely overhauled keyboard handler. If something is still not working in that pre-release, then I can fix it, but the ancient keyboard handler in 1.2.x is not supportable anymore.

> On Sep 17, 2014, at 2:59 AM, Dieter Blaas <die...@me...> wrote:
>
> Hi,
> I am using TurboVNC (in Ubuntu 14.04 and in Win7) to connect to
> VirtualGL running on a CentOS 6.5 machine. Everything is fine except for
> the lack of transmission of special characters like backtick, acute, and
> caret (but @ and | do work!). When connecting to the same machine via
> ssh, all characters are displayed correctly. How can I fix this?
> Thanks, D.
>
> ------------------------------------------------------------------------
> Dieter Blaas,
> Max F. Perutz Laboratories
> Medical University of Vienna,
> Inst. Med. Biochem., Vienna Biocenter (VBC),
> Dr. Bohr Gasse 9/3,
> A-1030 Vienna, Austria,
> Tel: 0043 1 4277 61630,
> Fax: 0043 1 4277 9616,
> e-mail: die...@me...
> ------------------------------------------------------------------------ |
|
From: Dieter B. <die...@me...> - 2014-09-17 09:00:11
|
Hi,
I am using TurboVNC (in Ubuntu 14.04 and in Win7) to connect to
VirtualGL running on a CentOS 6.5 machine. Everything is fine except for
the lack of transmission of special characters like backtick, acute, and
caret (but @ and | do work!). When connecting to the same machine via
ssh, all characters are displayed correctly. How can I fix this?
Thanks, D.
------------------------------------------------------------------------
Dieter Blaas,
Max F. Perutz Laboratories
Medical University of Vienna,
Inst. Med. Biochem., Vienna Biocenter (VBC),
Dr. Bohr Gasse 9/3,
A-1030 Vienna, Austria,
Tel: 0043 1 4277 61630,
Fax: 0043 1 4277 9616,
e-mail: die...@me...
------------------------------------------------------------------------
|
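Because problems like this usually come down to which keysyms the VNC server synthesizes for dead keys, a useful first diagnostic is to watch the keyboard events inside the session (a sketch; xev is part of the standard X11 utilities):

    # Run inside the TurboVNC session; press the problem keys (backtick,
    # acute, caret) and compare the reported keysyms with what an ssh/local
    # session shows for the same keys
    xev | grep --line-buffered keysym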
|
From: DRC <dco...@us...> - 2014-09-12 01:03:17
|
Has the driver been upgraded recently? Does setting VGL_READBACK=sync work around the problem? Does using a different version of VGL work around it?

Suffice it to say that no one else is experiencing this, so I strongly suspect that you'll have better luck filing it as a support request with nVidia. They are familiar with VGL.

On Sep 11, 2014, at 8:46 AM, Stephan Josel <s....@ax...> wrote:
>
> Hi!
>
> Thanks for your quick reply. Yes, I forgot to mention the card. It's an
> nVidia GT 430 GPU with driver version 334.21. We didn't experience any
> problem of this kind for a long time; it's very strange.
>
> Thanks,
> Stephan
>
>> I have never seen issues like this with nVidia drivers, and in fact VGL
>> is deployed commercially on machines where dozens of users (sometimes
>> more) are sharing a GPU.
>>
>> I have, however, seen multi-instance problems like this with Intel's
>> drivers. You didn't specify which GPU and driver revision you are using,
>> and that's probably the most relevant piece of information pertaining to
>> this issue.
>>
>>> On Sep 11, 2014, at 6:55 AM, Stephan Josel <s.josel@...> wrote:
>>>
>>> Hi!
>>>
>>> Has anyone experienced the following and found a solution?
>>>
>>> On our server (I set it up as a headless X server on display :0), where
>>> VGL is installed, user A starts an OpenGL application. Another user
>>> (user B) connects to the server and also tries to start the same OpenGL
>>> application. However, user B does not get a correct display of the
>>> OpenGL application; he gets a "damaged" framebuffer or even a black one.
>>> If I change the order in which the application is started (user B first,
>>> then user A), the same behavior occurs.
>>>
>>> I did the following to exclude errors:
>>>
>>> * Checked that X :0 has direct rendering: yes
>>> * Found no suspicious log entries in the VGL client logs
>>> * Reproduced the error with different OpenGL applications
>>>
>>> It's very strange, since this behavior appeared suddenly.
>>>
>>> Thanks in advance,
>>> Stephan |
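Testing the suggested workaround is a one-liner (a sketch: glxspheres64 merely stands in for the affected OpenGL application, and the path is the VirtualGL default):

    # Force synchronous readback for a single run, per the suggestion above
    VGL_READBACK=sync vglrun /opt/VirtualGL/bin/glxspheres64

    # And re-verify, per the reporter's checklist, that :0 has direct rendering
    DISPLAY=:0 glxinfo | grep "direct rendering"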