virtualgl-users Mailing List for VirtualGL (Page 7)
3D Without Boundaries
Brought to you by:
dcommander
From: Marco M. <mar...@gm...> - 2015-06-12 21:13:35

Ok, thanks. Is there any technical documentation for VirtualGL developers? I need to understand the code if I am going to write a plugin, and the code is not trivial. I have no experience with it, and I have never read the VirtualGL code before. However, thanks again for your answers.

MM
|
From: Morgan R. <mor...@ro...> - 2015-06-12 21:10:39

Have you considered migrating from SourceForge to GitHub, in light of SourceForge's recent behavior? Sorry if this has already been addressed.
|
From: DRC <dco...@us...> - 2015-06-12 20:42:36

OK, then yes, with the qualifications that you mention (a workload that updates a predictable area of the screen in a predictable way, i.e. games and video applications; low bandwidth; reasonable image quality), a video codec such as H.264 is definitely going to be the best solution.

I am working with some engineers at nVidia to look into how to integrate H.264 with VirtualGL and TurboVNC, but we are still just bouncing ideas around at this point. I don't know if we'll come up with anything concrete in the timeframe you need. And since you're a student, writing a transport plugin would be a good exercise.

Study testplugin.cpp. Some things you'll have to consider:

-- How to handle the delivery of images over the network. The VGL Transport is really designed for use along with a remote X server, so it's probably not the right protocol for you to use. What would be Really Cool(TM) is if you could leverage an existing streaming video protocol.

-- How to handle the delivery of keyboard/mouse events from the client to the server. When you're dealing with an immersive application, such as a game, for which window and GUI management are irrelevant, you could do something clever, such as using an Xvfb instance on the server side to act as a 2D X server. Xvfb would be used solely for event management. Your plugin would intercept keyboard/mouse events from the client and inject them into the Xvfb instance so that the 3D application receives them via its X event queue. No actual pixels would be drawn into that X server; the pixels would be intercepted by the VGL plugin and sent directly to the client.

You are not required to open source any of your work (VGL plugins can be proprietary), but if you choose to do so, I'll be happy to review it for possible inclusion in VirtualGL.

Also, you should investigate whether libx264 is a faster solution than FFmpeg for compressing H.264.

Please feel free to ask follow-up questions on the virtualgl-devel list.
|
From: Marco M. <mar...@gm...> - 2015-06-12 17:17:58

Thanks for your answer. As you said, "H.264 doesn't benefit all applications-- mainly it will only improve the situation for games and other very video-like workloads". And that is my purpose! I need a reliable "transport" that is compatible with high-latency networks. I'm interested in gaming and video playback, but I want (!) to use Linux, and I thought that VirtualGL could be an option. Using a low JPEG quality is not a solution for me; I need good image quality. Could writing a transport plugin be a solution? What do I have to study before I can write a transport plugin? Do you have some example (yes, I know testplugin.cpp)?

Thank you,
MM
|
From: DRC <dco...@us...> - 2015-06-12 15:19:39

http://sourceforge.net/projects/virtualgl/files/2.4.1/

Packaging changes:

These packages were built with libjpeg-turbo 1.4.1:
http://sourceforge.net/projects/libjpeg-turbo/files/1.4.1/
This improves performance on 64-bit Mac clients, relative to VirtualGL 2.4.

Significant changes since 2.4:

[1] When an application doesn't explicitly specify its visual requirements by calling glXChooseVisual()/glXChooseFBConfig(), the default GLX framebuffer config that VirtualGL assigns to it now contains a stencil buffer. This eliminates the need to specify VGL_DEFAULTFBCONFIG=GLX_STENCIL_SIZE,8 with certain applications (previously necessary when running Abaqus v6 and MAGMA5.)

[2] VirtualGL will no longer advertise that it supports the GLX_ARB_create_context and GLX_ARB_create_context_profile extensions unless the underlying OpenGL library exports the glXCreateContextAttribsARB() function.

[3] Fixed "Invalid MIT-MAGIC-COOKIE-1" errors that would prevent VirtualGL from working when vglconnect was used to connect to a VirtualGL server from a client running Cygwin/X.

[4] If a 3D application is rendering to the front buffer and one of the end-of-frame trigger functions (glFlush()/glFinish()/glXWaitGL()) is called, VirtualGL will no longer read back the framebuffer unless the render mode is GL_RENDER. Reading back the front buffer when the render mode is GL_SELECT or GL_FEEDBACK is not only unnecessary, but it was known to cause a GLXBadContextState error with newer nVidia drivers (340.xx and later) in certain cases.

[5] Fixed a deadlock that occurred in the multi-threaded rendering test of fakerut when it was run with the XCB interposer enabled. This was due to VirtualGL attempting to handle XCB events when Xlib owned the event queue. It is possible that this issue affected or would have affected real-world applications as well.

[6] Fixed an issue that caused certain 3D applications (observed with CAESES/FFW, although others were possibly affected as well) to abort with "ERROR: in TempContext-- Could not bind OpenGL context to window (window may have disappeared)". When the 3D application called glXChooseVisual(), VirtualGL was choosing a corresponding FB config with GLX_DRAWABLE_TYPE=GLX_PBUFFER_BIT (assuming VGL_DRAWABLE=pbuffer, which is the default.) This is incorrect, however, because regardless of the value of VGL_DRAWABLE, VirtualGL still uses Pixmaps on the 3D X server to represent GLX Pixmaps (necessary in order to make GLX_EXT_texture_from_pixmap work properly.) Thus, VGL needs to choose an FB config that supports both Pbuffers and Pixmaps. This was generally only a problem with nVidia drivers, because they export different FB configs for GLX_PBUFFER_BIT and GLX_PBUFFER_BIT|GLX_PIXMAP_BIT.
|
From: DRC <dco...@us...> - 2015-05-27 07:42:51

Depending on what you really need to do, it may not be necessary to get that fancy. Most users of VirtualGL use it with an X11 proxy. TurboVNC is the X proxy that our project provides, with features and performance specifically targeted at 3D applications, but there are other X proxies that work with VGL as well (although usually not as fast as TurboVNC.) The X proxy, when combined with VirtualGL, will cause all of the 2D (X11) as well as 3D (OpenGL) rendering commands from the application to be rendered on the server and sent to the client as image updates. TurboVNC specifically has a feature called "automatic lossless refresh", which allows you to use a very low JPEG quality for most updates; when the server senses that you have stopped moving things around (such as when you stop rotating a 3D scene), it sends a lossless update of the screen. X proxies also do a much better job of handling high-latency networks, since they send coarse-grained image updates instead of very fine-grained and chatty X11 updates over the network.

VirtualGL can also operate without an X proxy, but this is less useful on high-latency networks, because it requires that the non-3D-related X11 commands be sent over the network to the client machine. Even though the 3D-related X11 commands are redirected by VirtualGL and rendered on the server, the 2D-related X11 commands used to draw the application GUI, etc. are still sent over the network, and that can create performance problems on wide-area networks.

Now, to more specifically answer your question: you can reduce the JPEG quality in VirtualGL, which will significantly reduce the bandwidth. Using TurboVNC will likely reduce the bandwidth as well, not only because the X11 stuff is rendered on the server but because TurboVNC has a more adaptive compression scheme, whereas VirtualGL uses plain motion-JPEG. I'm currently working with nVidia to research possible ways of leveraging their NVENC library to do H.264 compression in VirtualGL, but there are a lot of technical hurdles we haven't figured out yet vis-a-vis how that would interact with X proxies such as TurboVNC. Also, H.264 doesn't benefit all applications; mainly it will only improve the situation for games and other very video-like workloads. More traditional 3D applications (CAD, visualization, etc.) are already quite optimal with TurboVNC's existing compression scheme.

VirtualGL does have a transport plugin API, so you could conceivably use that to write your own transport plugin based on the existing VGL Transport code (see server/testplugin.cpp for an example) but which uses ffmpeg instead of libjpeg-turbo to do the 3D image compression. Whether or not that would really improve bandwidth usage would depend a lot on your specific workload.
|
From: Marco M. <mar...@gm...> - 2015-05-27 06:32:06

Hi, I'm a student and I'm trying VirtualGL for educational purposes. From my tests with vglrun, I saw that a huge amount of bandwidth is required for acceptable image quality (e.g., -c jpeg with quality 90 requires peaks of 50-60 Mbit/s). Is there a way to compress the video stream? Can I do something like: "VGLSOCKET" -> my compression library (like ffmpeg or similar) -> TCP/UDP socket? In practice, I want to try to compress the output of VGL with my own code. Is this possible? I'm sorry if this is a stupid question, but I don't have much knowledge about VirtualGL.
Thanks
|
From: DRC <dco...@us...> - 2015-03-30 04:16:27

On 3/24/15 9:49 AM, Dr. Roman Grothausmann wrote:
> One problem that I've observed with VirtualGL-2.4 (+XCB) is that the app crashes with
>
> [VGL] ERROR: in getGLXDrawable--
> [VGL] 186: Window has been deleted by window manager

That isn't a crash. It's VirtualGL being nice and preventing a crash by catching the fact that the X window disappeared without the application explicitly calling XDestroyWindow(). In other words, it's a feature, not a bug. From VirtualGL's point of view, it actually is an error, because if the application never calls XDestroyWindow(), then VirtualGL never has a chance to shut down the virtual window instance (which, if using the VirtualGL transport, would also shut down the connection to vglclient.) In order to prevent it, the application should catch the WM_DELETE_WINDOW message from the window manager and shut down its window cleanly. See glxspheres.c in the VirtualGL source for an example of how to do this (although that source code shows how to do it with Xlib, not xcb, I assume it would be similar with xcb.)
|
From: DRC <dco...@us...> - 2015-03-30 04:05:12
|
After looking through your document, I'm not sure I fully understand where you're proposing to plug in VirtualGL. I also don't understand why you couldn't use VirtualGL as-is if you are already translating EGL into GLX. VirtualGL is little more than a GLX interposer, so once you've established the context on the server-side GPU, VirtualGL doesn't care whether you're using OpenGL or OpenGL ES. On 3/27/15 7:44 AM, Jonathan Rivalan wrote: > Hello, > > thank you for the answer, to give you some details we are currently > working on a cloud version of the AOSP emulator from Google. Usually, > the emulator uses locally a Qemu pipe to redirect the framebuffer from > the emulated Android to the host gpu, encoding it in the Android system > and decoding it in the Qemu window in order to use the host gpu. We are > trying to reproduce this mechanism through TCP in order to have the > emulator running in a hypervisor and the X11 window on the client > side. The problem we have now is losing packets due to bandwidth > limitations. > > //Some documentation from the Google implementation > https://android.googlesource.com/platform/sdk/+/android-4.1.2_r2/emulator/opengl/DESIGN > > In terms of time, if we have precise specifications we may be able to > spend some in contributing to the VirtualGL project. It all depends on > the complexity of the task and the time it would take. > > Thanks a lot in any case, > Best, > Jonathan > > > > 2015-03-18 20:21 GMT+01:00 DRC <dco...@us... > <mailto:dco...@us...>>: > > Nothing new. I still think it's feasible, but it will take some time to > implement, and I would need access to equipment that supports OpenGL ES. > That all requires money. > > Which platform do you need to support with OpenGL ES? > > > On 3/18/15 7:48 AM, Jonathan Rivalan wrote: > > Hello everyone, > > > > I'm a French project manager, currently working on a cloudified mobile > > emulator project. 
We ran some tests with VirtualGL that were pretty > > impressive, but fall on the limit that the solution was not supporting > > OpenGL ES at all. > > > > I found this mailing discussion archive and was wondering if there were > > new enhancements to the discussion / thinking on this subject. > > > >http://sourceforge.net/p/virtualgl/mailman/message/32190480/ > > > > Thank you in advance for any feedbacks, > > Best, > > Jonathan > > -- > > Cordialement - Best Regards, > > *Jonathan Rivalan* > > Ingénieur d'études R&D - R&D Engineer > > Porteur d'offre HTML5 - HTML5 Officer > > Alter Way - Holding > > *tél :*+33 (0)6 79 02 03 98 <tel:%2B33%20%280%296%2079%2002%2003%2098> > > 1 rue royale 92213 Saint-Cloud Cedex > > ------------------------------------------------------------------------------ > Dive into the World of Parallel Programming The Go Parallel Website, > sponsored > by Intel and developed in partnership with Slashdot Media, is your > hub for all > things parallel software development, from weekly thought leadership > blogs to > news, videos, case studies, tutorials and more. Take a look and join the > conversation now. http://goparallel.sourceforge.net/ > _______________________________________________ > VirtualGL-Users mailing list > Vir...@li... 
> <mailto:Vir...@li...> > https://lists.sourceforge.net/lists/listinfo/virtualgl-users > > > > > -- > Cordialement - Best Regards, > *Jonathan Rivalan* > Ingénieur d'études R&D - R&D Engineer > Porteur d'offre HTML5 - HTML5 Officer > Alter Way - Holding > *tél :* +33 (0)6 79 02 03 98 > 1 rue royale 92213 Saint-Cloud Cedex > * > * > *<http://www.alterway.fr/signatures/url/1> > * > > > ------------------------------------------------------------------------------ > Dive into the World of Parallel Programming The Go Parallel Website, sponsored > by Intel and developed in partnership with Slashdot Media, is your hub for all > things parallel software development, from weekly thought leadership blogs to > news, videos, case studies, tutorials and more. Take a look and join the > conversation now. http://goparallel.sourceforge.net/ > > > > _______________________________________________ > VirtualGL-Users mailing list > Vir...@li... > https://lists.sourceforge.net/lists/listinfo/virtualgl-users > |
|
From: Jonathan R. <jon...@al...> - 2015-03-27 12:44:35
|
Hello, thank you for the answer, to give you some details we are currently working on a cloud version of the AOSP emulator from Google. Usually, the emulator uses locally a Qemu pipe to redirect the framebuffer from the emulated Android to the host gpu, encoding it in the Android system and decoding it in the Qemu window in order to use the host gpu. We are trying to reproduce this mechanism through TCP in order to have the emulator running in a hypervisor and the X11 window on the client side. The problem we have now is losing packets due to bandwidth limitations. //Some documentation from the Google implementation https://android.googlesource.com/platform/sdk/+/android-4.1.2_r2/emulator/opengl/DESIGN In terms of time, if we have precise specifications we may be able to spend some in contributing to the VirtualGL project. It all depends on the complexity of the task and the time it would take. Thanks a lot in any case, Best, Jonathan 2015-03-18 20:21 GMT+01:00 DRC <dco...@us...>: > Nothing new. I still think it's feasible, but it will take some time to > implement, and I would need access to equipment that supports OpenGL ES. > That all requires money. > > Which platform do you need to support with OpenGL ES? > > > On 3/18/15 7:48 AM, Jonathan Rivalan wrote: > > Hello everyone, > > > > I'm a French project manager, currently working on a cloudified mobile > > emulator project. We ran some tests with VirtualGL that were pretty > > impressive, but fall on the limit that the solution was not supporting > > OpenGL ES at all. > > > > I found this mailing discussion archive and was wondering if there were > > new enhancements to the discussion / thinking on this subject. 
> > > > http://sourceforge.net/p/virtualgl/mailman/message/32190480/ > > > > Thank you in advance for any feedbacks, > > Best, > > Jonathan > > -- > > Cordialement - Best Regards, > > *Jonathan Rivalan* > > Ingénieur d'études R&D - R&D Engineer > > Porteur d'offre HTML5 - HTML5 Officer > > Alter Way - Holding > > *tél :* +33 (0)6 79 02 03 98 > > 1 rue royale 92213 Saint-Cloud Cedex > > > ------------------------------------------------------------------------------ > Dive into the World of Parallel Programming The Go Parallel Website, > sponsored > by Intel and developed in partnership with Slashdot Media, is your hub for > all > things parallel software development, from weekly thought leadership blogs > to > news, videos, case studies, tutorials and more. Take a look and join the > conversation now. http://goparallel.sourceforge.net/ > _______________________________________________ > VirtualGL-Users mailing list > Vir...@li... > https://lists.sourceforge.net/lists/listinfo/virtualgl-users > -- Cordialement - Best Regards, *Jonathan Rivalan* Ingénieur d'études R&D - R&D Engineer Porteur d'offre HTML5 - HTML5 Officer Alter Way - Holding *tél :* +33 (0)6 79 02 03 98 1 rue royale 92213 Saint-Cloud Cedex * <http://www.alterway.fr/signatures/url/1>* |
|
From: Dr. R. G. <gro...@mh...> - 2015-03-24 14:49:45
|
On 23/03/15 21:13, DRC wrote: > On 3/23/15 6:48 AM, Dr. Roman Grothausmann wrote: >> On 23/03/15 12:40, Dr. Roman Grothausmann wrote: >>> With -DVGL_FAKEXCB=1 programmatical resizes now work (using vglconnect or within >>> VNC session), but resizing the Qt5 window with the mouse or using maximize still >>> leads to artefacts. Do You have any idea what could be the reason for this? >> >> I just found on >> http://svn.code.sf.net/p/virtualgl/code/tags/2.4/doc/index.html#hd0015 >> that vglrun has a +xcb option which is needed for this to work (an option vglrun >> -h does not list). With +xcb resizing the Qt5 window with the mouse or using >> maximize works as well. > > Yes, sorry about the bad game of Fizbin. The next release will make Qt5 > support automatic, but it's going to require re-architecting the way > VirtualGL handles the loading of symbols from the underlying libraries. No problem, it was just meant as a hint for others that might stumble into this problem. We are very grateful for VirtualGL, your maintenance and your support. Our whole server set-up would hardly be useful without it. Many thanks for that. One problem that I've observed with VirtualGL-2.4 (+XCB) is that the app crashes with [VGL] ERROR: in getGLXDrawable-- [VGL] 186: Window has been deleted by window manager if the window or a sub-window is closed with the close button in the window frame created by the window manager. Can that be related to VirtualGL or XCB? This wasn't a problem before, nor is it when running the Qt5 app without VirtualGL. Thanks again Roman -- Dr. Roman Grothausmann Tomographie und Digitale Bildverarbeitung Tomography and Digital Image Analysis Institut für Funktionelle und Angewandte Anatomie, OE 4120 Medizinische Hochschule Hannover Carl-Neuberg-Str. 1 D-30625 Hannover Tel. +49 511 532-9574 |
|
From: DRC <dco...@us...> - 2015-03-23 20:13:17
|
On 3/23/15 6:48 AM, Dr. Roman Grothausmann wrote: > On 23/03/15 12:40, Dr. Roman Grothausmann wrote: >> With -DVGL_FAKEXCB=1 programmatical resizes now work (using vglconnect or within >> VNC session), but resizing the Qt5 window with the mouse or using maximize still >> leads to artefacts. Do You have any idea what could be the reason for this? > > I just found on > http://svn.code.sf.net/p/virtualgl/code/tags/2.4/doc/index.html#hd0015 > that vglrun has a +xcb option which is needed for this to work (an option vglrun > -h does not list). With +xcb resizing the Qt5 window with the mouse or using > maximize works as well. Yes, sorry about the bad game of Fizbin. The next release will make Qt5 support automatic, but it's going to require re-architecting the way VirtualGL handles the loading of symbols from the underlying libraries. |
|
From: Dr. R. G. <gro...@mh...> - 2015-03-23 11:48:39
|
On 23/03/15 12:40, Dr. Roman Grothausmann wrote: > With -DVGL_FAKEXCB=1 programmatical resizes now work (using vglconnect or within > VNC session), but resizing the Qt5 window with the mouse or using maximize still > leads to artefacts. Do You have any idea what could be the reason for this? I just found on http://svn.code.sf.net/p/virtualgl/code/tags/2.4/doc/index.html#hd0015 that vglrun has a +xcb option which is needed for this to work (an option vglrun -h does not list). With +xcb resizing the Qt5 window with the mouse or using maximize works as well. Cheers, Roman > On 17/03/15 21:17, DRC wrote: >> Rebuild VirtualGL 2.4 from source with -DVGL_FAKEXCB=1. >> >> This will be automatic in the next major release, but since the issue >> was discovered after 2.4 went into beta, I felt it best to isolate the >> new code. Furthermore, libxcb is a more rapidly-changing API/ABI than >> Xlib or GLX, so an XCB faker built for one platform won't work on all >> platforms. We are going to have to develop some sort of dynamic loading >> mechanism for it, which will take some time. For now, please build your >> own custom version for whichever server platform you need to support. >> >> >> On 3/17/15 9:19 AM, Dr. Roman Grothausmann wrote: >>> Dear mailing list members, >>> >>> >>> Are there any known problems of VirtualGL with Qt5? >>> I'm working on a project (extensions to ITKSnap: >>> https://github.com/pyushkevich/itksnap/pull/1) which has just recently >>> moved to Qt5 and since then it seems that windows resizes do not work >>> properly any more (new regions in the program stay black or contain >>> artefacts, see *01.png) if the program is run with vglrun. Neither >>> resizing the main window with the mouse, nor maximize, nor >>> programmatical resizing works, this being the biggest problem, because >>> the program does so for specific modes, one of these is the extension I >>> work on. 
>>> If I compile and run the application on my desktop PC everything works >>> as expected (see *02.png). However, if run from a server with otherwise >>> functioning VirtualGL it does not work, nor if recompiled on the server. >>> It does work on the server for older versions of ITKSnap that build with >>> Qt4. >>> All together it gives me the impression the problem comes from the >>> combination of VirtualGL with Qt5. >>> >>> Many thanks for any help or hints how to solve this. >>> Roman >> >> ------------------------------------------------------------------------------ >> Dive into the World of Parallel Programming The Go Parallel Website, sponsored >> by Intel and developed in partnership with Slashdot Media, is your hub for all >> things parallel software development, from weekly thought leadership blogs to >> news, videos, case studies, tutorials and more. Take a look and join the >> conversation now. http://goparallel.sourceforge.net/ >> _______________________________________________ >> VirtualGL-Users mailing list >> Vir...@li... >> https://lists.sourceforge.net/lists/listinfo/virtualgl-users >> > -- Dr. Roman Grothausmann Tomographie und Digitale Bildverarbeitung Tomography and Digital Image Analysis Institut für Funktionelle und Angewandte Anatomie, OE 4120 Medizinische Hochschule Hannover Carl-Neuberg-Str. 1 D-30625 Hannover Tel. +49 511 532-9574 |
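To spell out the working configuration for anyone skimming the thread: the XCB interposer must be compiled in (-DVGL_FAKEXCB=1) and then enabled per invocation with the +xcb toggle. The application name below is only a placeholder:

```shell
# Enable VirtualGL's XCB interposer for a Qt5 application.
# Requires a VirtualGL 2.4 build configured with -DVGL_FAKEXCB=1;
# "itksnap" here stands in for whatever Qt5 app you are running.
vglrun +xcb itksnap
```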
|
From: Dr. R. G. <gro...@mh...> - 2015-03-23 11:41:03
|
Dear DRC, Many thanks for the quick reply and Your help. I got the chance now to reorganize and reboot our imaging server and trying Your suggestions. With -DVGL_FAKEXCB=1 programmatical resizes now work (using vglconnect or within VNC session), but resizing the Qt5 window with the mouse or using maximize still leads to artefacts. Do You have any idea what could be the reason for this? Many thanks for Your help. Roman On 17/03/15 21:17, DRC wrote: > Rebuild VirtualGL 2.4 from source with -DVGL_FAKEXCB=1. > > This will be automatic in the next major release, but since the issue > was discovered after 2.4 went into beta, I felt it best to isolate the > new code. Furthermore, libxcb is a more rapidly-changing API/ABI than > Xlib or GLX, so an XCB faker built for one platform won't work on all > platforms. We are going to have to develop some sort of dynamic loading > mechanism for it, which will take some time. For now, please build your > own custom version for whichever server platform you need to support. > > > On 3/17/15 9:19 AM, Dr. Roman Grothausmann wrote: >> Dear mailing list members, >> >> >> Are there any known problems of VirtualGL with Qt5? >> I'm working on a project (extensions to ITKSnap: >> https://github.com/pyushkevich/itksnap/pull/1) which has just recently >> moved to Qt5 and since then it seems that windows resizes do not work >> properly any more (new regions in the program stay black or contain >> artefacts, see *01.png) if the program is run with vglrun. Neither >> resizing the main window with the mouse, nor maximize, nor >> programmatical resizing works, this being the biggest problem, because >> the program does so for specific modes, one of these is the extension I >> work on. >> If I compile and run the application on my desktop PC everything works >> as expected (see *02.png). However, if run from a server with otherwise >> functioning VirtualGL it does not work, nor if recompiled on the server. 
>> It does work on the server for older versions of ITKSnap that build with >> Qt4. >> All together it gives me the impression the problem comes from the >> combination of VirtualGL with Qt5. >> >> Many thanks for any help or hints how to solve this. >> Roman > > ------------------------------------------------------------------------------ > Dive into the World of Parallel Programming The Go Parallel Website, sponsored > by Intel and developed in partnership with Slashdot Media, is your hub for all > things parallel software development, from weekly thought leadership blogs to > news, videos, case studies, tutorials and more. Take a look and join the > conversation now. http://goparallel.sourceforge.net/ > _______________________________________________ > VirtualGL-Users mailing list > Vir...@li... > https://lists.sourceforge.net/lists/listinfo/virtualgl-users > -- Dr. Roman Grothausmann Tomographie und Digitale Bildverarbeitung Tomography and Digital Image Analysis Institut für Funktionelle und Angewandte Anatomie, OE 4120 Medizinische Hochschule Hannover Carl-Neuberg-Str. 1 D-30625 Hannover Tel. +49 511 532-9574 |
|
From: DRC <dco...@us...> - 2015-03-18 19:35:20
|
Not really. There would be issues with such a solution: -- Compatibility: Indirect OpenGL will not have access to a lot of OpenGL extensions that applications might need, and you could run into even more serious compatibility problems if the X proxy server and the 3D server are not running similar OpenGL stacks (for instance, if the X proxy server was running Mesa and the 3D server was running nVidia.) When you run an application in VirtualGL, VirtualGL passes through most of the OpenGL calls unaltered, so the application behaves as if it is running locally-- with no OpenGL call overhead. -- Performance: The performance issues related to indirect OpenGL are precisely what VirtualGL was designed to circumvent. Read the background article on VirtualGL.org to get a feel for what some of these might be, but basically, you're going to be sending all of the 3D data and commands (including textures, which can be a killer) across a network and receiving uncompressed frames back. There is a tremendous amount of overhead associated with this. Even a single user could monopolize all of the bandwidth of a gigabit pipe in this scenario, and even then, the performance would not come anywhere close to that of the direct rendering scenario. In order to support such a solution in any reasonable fashion, we'd have to come up with some way of compressing the 3D data (which would require immense complexity-- instead of just intercepting GLX, VirtualGL would have to become a full OpenGL interposer. No. Not gonna happen) and compressing the images read back from the remote 3D server (which would require implementing some sort of daemon that would run on that server.) There are other solutions that do some or all of that (DCV comes to mind), but even those solutions don't perform particularly well with a remote 3D server, in my testing. 
I've been doing this for a while now, and it's been my experience that, in most cases, whenever someone says they want to host an X proxy server on a different machine than a 3D server, it's a problem that can be solved with hardware. We have many deployments of VirtualGL and TurboVNC in corporations, academia, supercomputer centers, etc., and all of those are hosting the application and the X proxy on the same machine without any major hurdles. They use beefy multi-core, multi-processor servers with high-end headless nVidia adapters. On 3/13/15 8:43 PM, Jonathan Wong wrote: > Is there a configuration where it's possible to have the X Proxy and > application together while using a remote 3D X server for VirtualGL? > Sort of a cross between using VirtualGL with an X Proxy on a Different > Machine and the example with VirtualGL and chromium. > > Basically I have a headless, GPU-less server that I'd like to VNC to, > but I'd also like GLX acceleration such that vgl will go through the > remote X server. Something akin to a VMGL-like setup but with VirtualGL. > > Thanks, > Jon |
|
From: DRC <dco...@us...> - 2015-03-18 19:21:41
|
Nothing new. I still think it's feasible, but it will take some time to implement, and I would need access to equipment that supports OpenGL ES. That all requires money. Which platform do you need to support with OpenGL ES? On 3/18/15 7:48 AM, Jonathan Rivalan wrote: > Hello everyone, > > I'm a french project manager, currently working on a cloudified mobile > emulator project. We ran some tests with VirtualGL that were pretty > impressive, but fall on the limit that the solution was not supporting > OpenGL ES at all. > > I found this mailing discussion archive and was wondering if there were > new enhancements to the discussion / thinking on this subject. > > http://sourceforge.net/p/virtualgl/mailman/message/32190480/ > > Thank you in advance for any feedbacks, > Best, > Jonathan > -- > Cordialement - Best Regards, > *Jonathan Rivalan* > Ingénieur d'études R&D - R&D Engineer > Porteur d'offre HTML5 - HTML5 Officer > Alter Way - Holding > *tél :* +33 (0)6 79 02 03 98 > 1 rue royale 92213 Saint-Cloud Cedex |
|
From: Jonathan R. <jon...@al...> - 2015-03-18 12:48:39
|
Hello everyone, I'm a french project manager, currently working on a cloudified mobile emulator project. We ran some tests with VirtualGL that were pretty impressive, but fall on the limit that the solution was not supporting OpenGL ES at all. I found this mailing discussion archive and was wondering if there were new enhancements to the discussion / thinking on this subject. http://sourceforge.net/p/virtualgl/mailman/message/32190480/ Thank you in advance for any feedbacks, Best, Jonathan -- Cordialement - Best Regards, *Jonathan Rivalan* Ingénieur d'études R&D - R&D Engineer Porteur d'offre HTML5 - HTML5 Officer Alter Way - Holding *tél :* +33 (0)6 79 02 03 98 1 rue royale 92213 Saint-Cloud Cedex * <http://www.alterway.fr/signatures/url/1>* |
|
From: DRC <dco...@us...> - 2015-03-17 20:17:26
|
Rebuild VirtualGL 2.4 from source with -DVGL_FAKEXCB=1. This will be automatic in the next major release, but since the issue was discovered after 2.4 went into beta, I felt it best to isolate the new code. Furthermore, libxcb is a more rapidly-changing API/ABI than Xlib or GLX, so an XCB faker built for one platform won't work on all platforms. We are going to have to develop some sort of dynamic loading mechanism for it, which will take some time. For now, please build your own custom version for whichever server platform you need to support. On 3/17/15 9:19 AM, Dr. Roman Grothausmann wrote: > Dear mailing list members, > > > Are there any known problems of VirtualGL with Qt5? > I'm working on a project (extensions to ITKSnap: > https://github.com/pyushkevich/itksnap/pull/1) which has just recently > moved to Qt5 and since then it seems that windows resizes do not work > properly any more (new regions in the program stay black or contain > artefacts, see *01.png) if the program is run with vglrun. Neither > resizing the main window with the mouse, nor maximize, nor > programmatical resizing works, this being the biggest problem, because > the program does so for specific modes, one of these is the extension I > work on. > If I compile and run the application on my desktop PC everything works > as expected (see *02.png). However, if run from a server with otherwise > functioning VirtualGL it does not work, nor if recompiled on the server. > It does work on the server for older versions of ITKSnap that build with > Qt4. > All together it gives me the impression the problem comes from the > combination of VirtualGL with Qt5. > > Many thanks for any help or hints how to solve this. > Roman |
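For anyone following along, a from-source rebuild with that flag might look like the following. The directory names and out-of-tree CMake invocation are illustrative; consult the build instructions shipped with the VirtualGL source:

```shell
# Illustrative out-of-tree CMake build of VirtualGL 2.4 with the
# XCB faker compiled in; paths and version are assumptions.
cd virtualgl-2.4
mkdir build && cd build
cmake -DVGL_FAKEXCB=1 ..
make
sudo make install
```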
|
From: Dr. R. G. <gro...@mh...> - 2015-03-17 14:43:31
|
Dear mailing list members, Are there any known problems of VirtualGL with Qt5? I'm working on a project (extensions to ITKSnap: https://github.com/pyushkevich/itksnap/pull/1) which has just recently moved to Qt5 and since then it seems that window resizes do not work properly any more (new regions in the program stay black or contain artefacts, see *01.png) if the program is run with vglrun. Neither resizing the main window with the mouse, nor maximize, nor programmatic resizing works, this being the biggest problem, because the program does so for specific modes, one of these is the extension I work on. If I compile and run the application on my desktop PC everything works as expected (see *02.png). However, if run from a server with otherwise functioning VirtualGL it does not work, nor if recompiled on the server. It does work on the server for older versions of ITKSnap that build with Qt4. Altogether it gives me the impression the problem comes from the combination of VirtualGL with Qt5. Many thanks for any help or hints on how to solve this. Roman -- Dr. Roman Grothausmann Tomographie und Digitale Bildverarbeitung Tomography and Digital Image Analysis Institut für Funktionelle und Angewandte Anatomie, OE 4120 Medizinische Hochschule Hannover Carl-Neuberg-Str. 1 D-30625 Hannover Tel. +49 511 532-9574 |
|
From: Jonathan W. <jon...@gm...> - 2015-03-14 01:43:27
|
Is there a configuration where it's possible to have the X Proxy and application together while using a remote 3D X server for VirtualGL? Sort of a cross between using VirtualGL with an X Proxy on a Different Machine and the example with VirtualGL and chromium. Basically I have a headless, GPU-less server that I'd like to VNC to, but I'd also like GLX acceleration such that vgl will go through the remote X server. Something akin to a VMGL-like setup but with VirtualGL. Thanks, Jon |
|
From: DRC <dco...@us...> - 2015-03-13 18:44:41
|
On 3/13/15 10:25 AM, Alex Xu wrote: > I get the aforementioned error when running almost anything through > VirtualGL through various methods (SSH X11 forwarding and local VNC). > > Specifically, running "vglconnect -s <host> <<< 'vglrun glxinfo'" > results in this error. > > However, running "ssh <host> <<< 'DISPLAY=:0 glxinfo'" shows that > GLX_ARB_create_context is in fact provided by the graphics driver and > should be available. > > Furthermore, running "vglconnect -s <host> <<< 'vglrun glxgears'" does > work (i.e. output to the connecting machine) and is much faster than > indirect rendering. > > I tried Googling for it, but all I found was "update your graphics > drivers" and "use the proprietary drivers". I am using the latest > drivers, xf86-video-intel-2.99.217, and there are no proprietary drivers. > > What troubleshooting steps could I take to narrow down the problem? Try the latest pre-release: http://www.virtualgl.org/DeveloperInfo/PreReleases I think one of the patches OpenText contributed in the past couple of days should fix this. Not sure why Mesa advertises support for that extension but doesn't actually provide the function symbol, but VirtualGL now checks for the presence of glXCreateContextAttribsARB() in the underlying OpenGL library before it reports to the application that GLX_ARB_create_context and GLX_ARB_create_context_profile are available. |
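The essence of the fix described above is "trust the symbol, not the extension string," which can be sketched with a plain dlopen()/dlsym() probe. The library name is an assumption for illustration, and VirtualGL's actual check is internal to the faker:

```c
/* Probe for glXCreateContextAttribsARB directly via dlsym().  This
 * mirrors the idea behind the fix (don't advertise
 * GLX_ARB_create_context unless the symbol is really present); the
 * library name "libGL.so.1" is an assumption for illustration. */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    void *libgl = dlopen("libGL.so.1", RTLD_LAZY);
    if (!libgl) {
        printf("libGL.so.1 not found; nothing to probe\n");
        return 0;
    }
    void *sym = dlsym(libgl, "glXCreateContextAttribsARB");
    printf("glXCreateContextAttribsARB %s\n",
           sym ? "present" : "missing despite the extension string");
    dlclose(libgl);
    return 0;
}
```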
|
From: Nathan K. <nat...@sp...> - 2015-03-13 16:41:52
|
On 13/03/15 11:25 AM, Alex Xu wrote: > I get the aforementioned error when running almost anything through > VirtualGL through various methods (SSH X11 forwarding and local VNC). > > Specifically, running "vglconnect -s <host> <<< 'vglrun glxinfo'" > results in this error. > > However, running "ssh <host> <<< 'DISPLAY=:0 glxinfo'" shows that > GLX_ARB_create_context is in fact provided by the graphics driver and > should be available. > > Furthermore, running "vglconnect -s <host> <<< 'vglrun glxgears'" does > work (i.e. output to the connecting machine) and is much faster than > indirect rendering. > > I tried Googling for it, but all I found was "update your graphics > drivers" and "use the proprietary drivers". I am using the latest > drivers, xf86-video-intel-2.99.217, and there are no proprietary drivers. > > What troubleshooting steps could I take to narrow down the problem? Funny timing: I just posted a patch for that the other day. The VGL nightly page that my search engine pulls up doesn't have that change, so you'll either have to wait for a build with that patch included or if that's something you can do, build the latest trunk yourself. I haven't personally tested VirtualGL with Intel drivers, so I don't know if you'll hit any other issue after solving that one. I have used Mesa + nouveau. -Nathan |
|
From: Alex Xu <ale...@ya...> - 2015-03-13 15:25:51
|
I get the aforementioned error when running almost anything through VirtualGL through various methods (SSH X11 forwarding and local VNC). Specifically, running "vglconnect -s <host> <<< 'vglrun glxinfo'" results in this error. However, running "ssh <host> <<< 'DISPLAY=:0 glxinfo'" shows that GLX_ARB_create_context is in fact provided by the graphics driver and should be available. Furthermore, running "vglconnect -s <host> <<< 'vglrun glxgears'" does work (i.e. output to the connecting machine) and is much faster than indirect rendering. I tried Googling for it, but all I found was "update your graphics drivers" and "use the proprietary drivers". I am using the latest drivers, xf86-video-intel-2.99.217, and there are no proprietary drivers. What troubleshooting steps could I take to narrow down the problem? |
|
From: Antony C. <ant...@cl...> - 2015-03-13 13:03:23
|
On 10/03/2015 12:33, DRC wrote: > I can't say I'm terribly surprised. Maybe I'm wrong and there is some > good technical reason why they can't enable stereo Pbuffers, but nVidia > also has a habit of disabling features in their drivers purely for > marketing reasons. Theoretically, it might be possible to make > VirtualGL emulate stereo using two Pbuffers or even two FBOs on the same > Pbuffer, but it would make the solution significantly more complex. > Getting it right would cost a lot more in labor than a QuadroFX costs, > and quad-buffered stereo is just not popular enough to justify > re-architecting VirtualGL. Yeah this was my thought too, "possible but not worth the effort" > > Perhaps others can chime in as to whether Grid adapters work properly. > > Out of curiosity, what were you planning to use as a client? That's the > other hell of quad-buffered stereo-- you need a card on the client as > well that can draw in stereo, which probably means another Quadro. There is already a machine connected to the quad buffered display with a quadro card which works fine. The customer is a university who is purchasing a cluster from us and would like to use Virtual GL to directly view datasets from one of the cluster nodes. This would have direct access to the large and fast parallel scratch filesystem and would avoid the need to copy all the data out over ethernet to the local storage in the node which can take a long time and could easily just not fit. > On 3/10/15 5:26 AM, Antony Cleave wrote: >> Thanks for this, looking at the options it looks like for stereo we are >> indeed stuck with the Quadro cards. All stereo options I've tried seem >> to get disabled on the M2090 and I don't have a Grid card to try and >> it's not exactly cheap so I'm not going to pop down to PC world and pick >> one up on the off chance it works. 
|
|
From: Paul M. <pau...@mo...> - 2015-03-11 22:55:34
|
All,
I was halfway through writing an email when I solved the problem - so I'm sharing it anyway in case it saves someone else some pain ;)
We have MATLAB r2014a running fine but I get the following when running r2014b the same way and using an opengl command (e.g. "patch")...
vglrun matlab
MATLAB is selecting SOFTWARE OPENGL rendering.
[VGL] WARNING: The OpenGL rendering context obtained on X display
[VGL] :0 is indirect, which may cause performance to suffer.
[VGL] If :0 is a local X display, then the framebuffer device
[VGL] permissions may be set incorrectly.
[VGL] ERROR: glXCreateContextAttribsARB symbol not loaded
The problem is that MATLAB r2014b tries to detect the GPU and gets confused by the multiple displays. The solution was found by reading the MATLAB startup script and finding the -nosoftwareopengl option. So the following works :)
vglrun matlab -nosoftwareopengl
Cheers,
Paul
--
Dr Paul McIntosh
Senior HPC Consultant, Technical Lead,
Multi-modal Australian ScienceS Imaging and Visualisation Environment (www.massive.org.au)
Monash University, Ph: 9902 0439 Mob: 0434 524935
|