From: Kumar, S. <shr...@hp...> - 2010-03-23 11:30:20
Hi Simon,

> I'm in the process of setting up a small (3x2) tiled display
> system using a pair of nvidia NVS 420 cards. This isn't intended
> for locally rendered 3D viz, we're just using VizStack as a way
> to drive a tiled display, though we are also separately looking
> at using VizStack to help manage remote visualisation resources,
> along with VGL and related stuff. This is the pilot system for
> a reasonably large number of similar systems that we're planning
> to roll out over the next few months, and to develop further
> after that.

Thanks for this info.

> I've gotten the tiled display up and running successfully
> (after creating a GPU config for the nvs420 and the Dell
> 3008wfp monitors we're using), however there are a few things
> that are currently problematic. The big one is the bezels - there
> doesn't appear to be any way to create a tiled display where the
> bezels hide part of the display, rather than the display simply
> skipping over them. This is quite problematic for us, since most
> of the applications we're looking at for these displays will work
> best with the hidden pixels approach. Is there any way that the
> current code can support this display mode? If not, is there any
> chance it could be implemented, or that you could provide some
> advice on how we could implement it here?

We haven't implemented bezels yet - more out of a lack of immediate
need than anything else. The current code can be modified to support
bezels; I did some experiments to make sure. I would like your
feedback on what kind of interface you would like w.r.t. bezels.

The nvidia card has support for "hidden pixels", and VizStack can use
that. The number of pixels to skip for the bezel of a particular
display device depends on:

a. the physical size of the bezel. Some displays have a larger
   vertical spacing compared to the horizontal spacing.
b. the display area of the display device
c. the resolution

The information needed for (a), (b) and (c) would come from the
display template. Do you think this would be sufficient for your
needs? To improve manual control, we could also have a provision for
a per-resolution bezel, specified in pixels.
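To make the arithmetic concrete, here is a rough sketch of how the
hidden-pixel count could be derived from (a), (b) and (c). The
function name and the example numbers are only illustrative - they
are not taken from VizStack or from an actual display template:

    # Illustrative sketch only: derive the number of pixels hidden
    # behind a bezel from the panel's physical and pixel dimensions.
    def hidden_pixels(bezel_mm, active_mm, resolution_px):
        """Pixels to skip for a bezel, given the physical bezel gap (mm),
        the panel's active display area along the same axis (mm), and
        its resolution along that axis (pixels)."""
        pixels_per_mm = resolution_px / float(active_mm)
        return int(round(bezel_mm * pixels_per_mm))

    # Rough numbers for a 2560x1600 30" panel with an active width of
    # about 641 mm, assuming roughly 40 mm of bezel between two
    # adjacent panels (all figures are assumptions, not measured).
    print(hidden_pixels(40.0, 641.0, 2560))   # ~160 pixels hidden

Whether the bezel sizes come from the display template or from a
per-resolution pixel override, the calculation would reduce to
something like the above.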
> The second issue we have is that the display as currently set
> up doesn't seem to support OpenGL/GLX - xdpyinfo reports that
> the GLX and NV-GLX extensions are supported, but the displays
> (in various configurations, including one where only a single
> GPU is driving two monitors) end up without any working GLX
> support. Attempting to run any GLX programs (including glxinfo)
> fails to create a context. This is less of an issue than the
> bezels, since we aren't planning to use the tiled display to do
> 3D rendering, but we'd like to figure out what the issue is in
> case we run into it again later, on other configurations.

This should not happen - VizStack should set up the server for proper
3D rendering at all times. Can you tell me what happens when you run
/opt/vizstack/sbin/vs-test-gpus ? Can you send me the
/var/log/Xorg.<display_number>.log file ? Also, send me the config
file for the server (VizStack creates configuration files at runtime):

/var/run/vizstack/xorg-<display_number>.conf

(replace <display_number> to correspond to your DISPLAY)

> Finally, are there any particular constraints on the GPUs
> and/or drivers that we can use? From getting VizStack running
> on the NVS420 I suspect not, but I wanted to find out if there
> are any less obvious constraints/requirements. We're currently
> using nvidia workstation hardware, but we'd like to be able to
> use VizStack with both nvidia desktop/gaming cards and ATI cards,
> since we have a pretty heterogeneous environment and we'd also
> like to avoid being locked in to nvidia as a supplier.

For now, VizStack supports only nvidia cards. We, the developers, have
access only to nvidia cards, and hence have implemented support for
them. VizStack tries to use generic GPU concepts rather than anything
very specific to nvidia, with the intent of supporting other cards.
However, to work with other cards, somebody will have to do the hard
work of generating the right config files for them. We will also need
to ensure that our abstraction fits the ATI cards.

> Oh, one final question: should we be tracking the svn trunk, or
> the 'shree' branch (which seems to be seeing active development)?

It has to be the svn trunk for now. The 'shree' branch should be
treated as experimental and unstable.

Regards
--
Shree