From: Kumar, S. <shr...@hp...> - 2010-04-23 05:33:34
No problem; you can test it at your convenience. I need one more piece of information to solve this problem. If you look in your Xorg log file, you will find a line which says "Supported display device(s)". Can you send me a few lines around it?

Thanks
-- Shree
From: <Sim...@cs...> - 2010-04-23 04:46:57
I probably won't be able to test it until Tuesday Australian time (ANZAC Day public holiday Monday), but I'll get onto it as soon as I can. Thanks for the quick response ;-)

Simon
From: Kumar, S. <shr...@hp...> - 2010-04-23 04:27:20
Hi Simon,

Thanks for the lspci dump and the info about the display outputs. I'm glad this won't break any of my internal designs :-)

Give me a day to remove the hardcoding. Then I'll let you know so you can test.

Thanks
-- Shree
From: <Sim...@cs...> - 2010-04-22 23:29:03
The relevant output of lspci:

03:00.0 PCI bridge: nVidia Corporation PCI express bridge for Quadro Plex S4 / Tesla S870 / Tesla S1070 (rev a3)
04:00.0 PCI bridge: nVidia Corporation PCI express bridge for Quadro Plex S4 / Tesla S870 / Tesla S1070 (rev a3)
04:02.0 PCI bridge: nVidia Corporation PCI express bridge for Quadro Plex S4 / Tesla S870 / Tesla S1070 (rev a3)
05:00.0 VGA compatible controller: nVidia Corporation G98 [Quadro NVS 420] (rev a1)
06:00.0 3D controller: nVidia Corporation G98 [Quadro NVS 420] (rev a1)
07:00.0 PCI bridge: nVidia Corporation PCI express bridge for Quadro Plex S4 / Tesla S870 / Tesla S1070 (rev a3)
08:00.0 PCI bridge: nVidia Corporation PCI express bridge for Quadro Plex S4 / Tesla S870 / Tesla S1070 (rev a3)
08:02.0 PCI bridge: nVidia Corporation PCI express bridge for Quadro Plex S4 / Tesla S870 / Tesla S1070 (rev a3)
09:00.0 VGA compatible controller: nVidia Corporation G98 [Quadro NVS 420] (rev a1)
0a:00.0 3D controller: nVidia Corporation G98 [Quadro NVS 420] (rev a1)

So a set of PCI-E bridges, and two G98 GPUs per card (this machine has two NVS420s, of course). The outputs used on each GPU are the same - it's either DFP-0/1 on both, or DFP-2/3 on both.

Would you like me to grab the shree branch and do some testing?

Simon
From: Kumar, S. <shr...@hp...> - 2010-04-22 08:52:16
That's good news.

Your observation about remap_display_outputs and the hardcoding is correct (I wanted to avoid going to DFP-3 for that reason). The hardcoding was done to prevent spurious errors, and some of it has been removed in the current codebase. In fact, the current codebase (close to rel 1.1) actually creates templates for unknown GPUs and unknown display devices.

To fix the hardcoding, I need to understand some things about this NVS420. The NVIDIA specs say that it is capable of driving 4 display devices. The NVS420 is a single card, but it has 2 GPUs inside it; these show up as separate devices (check "lspci | grep -i nvidia").

Let's say you configure the NVS 420 to drive all 4 display devices. Will it then use DFP-0 and DFP-1 on the first GPU and DFP-2 and DFP-3 on the second GPU?

-- Shree
From: <Sim...@cs...> - 2010-04-22 07:24:51
That partially resolves the problem - in my test 1x2 config it brings up the first display, but not the second. Here's the equivalent output from the logfile:

(==) NVIDIA(0): Depth 24, (==) framebuffer bpp 32
(==) NVIDIA(0): RGB weight 888
(==) NVIDIA(0): Default visual is TrueColor
(==) NVIDIA(0): Using gamma correction (1.0, 1.0, 1.0)
(**) NVIDIA(0): Option "ConnectedMonitor" ",DFP-1,DFP-2"
(**) NVIDIA(0): Option "TwinView"
(**) NVIDIA(0): Option "MetaModes" "DFP-1: 2560x1600_60 @2560x1600 +0+0,DFP-2: 2560x1600_60 @2560x1600 +0+1600"
(**) NVIDIA(0): Option "CustomEDID" ";DFP-1:/etc/vizstack/templates/displays/edids/Dell-3008wfp.bin;DFP-2:/etc/vizstack/templates/displays/edids/Dell-3008wfp.bin"
(**) NVIDIA(0): Option "UseDisplayDevice" "DFP-1,DFP-2"
(**) NVIDIA(0): Option "IncludeImplicitMetaModes" "False"
(**) NVIDIA(0): Option "ProbeAllGpus" "False"
(**) NVIDIA(0): Enabling RENDER acceleration
(**) NVIDIA(0): ConnectedMonitor string: ",DFP-1,DFP-2"
(WW) NVIDIA(0): Invalid ConnectedMonitor string token: ""; discarding token.
(WW) NVIDIA(0): No display device specified for CustomEDID ""; ignoring.
(II) NVIDIA(0): NVIDIA GPU Quadro NVS 420 (G98GL) at PCI:5:0:0 (GPU-0)
(--) NVIDIA(0): Memory: 524288 kBytes
(--) NVIDIA(0): VideoBIOS: 62.98.68.00.06
(II) NVIDIA(0): Detected PCI Express Link width: 4X
(--) NVIDIA(0): Interlaced video modes are supported on this GPU
(--) NVIDIA(0): Connected display device(s) on Quadro NVS 420 at PCI:5:0:0:
(--) NVIDIA(0):     DELL 3008WFP (DFP-1)
(--) NVIDIA(0):     DELL 3008WFP (DFP-2)
(--) NVIDIA(0): DELL 3008WFP (DFP-1): 165.0 MHz maximum pixel clock
(--) NVIDIA(0): DELL 3008WFP (DFP-1): Internal Single Link TMDS
(--) NVIDIA(0): DELL 3008WFP (DFP-2): 330.0 MHz maximum pixel clock
(--) NVIDIA(0): DELL 3008WFP (DFP-2): Internal DisplayPort
(**) NVIDIA(0): TwinView enabled
(II) NVIDIA(0): Assigned Display Devices: DFP-1, DFP-2
(II) NVIDIA(0): Validated modes:
(II) NVIDIA(0):     "DFP-1:2560x1600_60@2560x1600+0+0,DFP-2:2560x1600_60@2560x1600+0+1600"
(**) NVIDIA(0): Virtual screen size configured to be 2560 x 3200
(WW) NVIDIA(0): Cannot find size of first mode for DELL 3008WFP (DFP-1);
(WW) NVIDIA(0):     cannot compute DPI from DELL 3008WFP (DFP-1)'s EDID.
(==) NVIDIA(0): DPI set to (75, 75); computed from built-in default
(==) NVIDIA(0): Disabling 32-bit ARGB GLX visuals.

I tried putting [2,3] in as the remap_display_outputs (which, if I'm understanding what's going on correctly, would resolve the problem), but that errored out - it's hard-coded not to support anything beyond a scanout index of 2, despite my putting in scanout_caps entries up to 3.

Probably worth removing the hard limit, given there are single cards with six outputs on the market these days ;-)

Simon
From: Kumar, S. <shr...@hp...> - 2010-04-22 07:06:44
Hi Simon,

I just had a look at that card; it had me confused once.

> One odd thing is that running natively nvidia-settings sees the monitors as DFP-2 and DFP-3, rather than DFP-0 and DFP-1.

Very good observation!

See /opt/vizstack/share/templates/gpus/fx5800.xml. Note that this defines a "scanout_caps" entry for index "2". You need to do something similar for your GPU template too - to ensure VizStack can use that port.

Can you try this: add a "remap_display_outputs" parameter in your tiled display definition.

<handler_params>
    block_type="gpu";
    num_blocks=[1,1];
    block_display_layout=[1,2];
    display_device="your_device";
    remap_display_outputs=[1,2]; # <-- this is the line you need to add
</handler_params>

After that, restart the SSM, then try launching the X server using VizStack again.

Regards
-- Shree
From: <Sim...@cs...> - 2010-04-22 05:02:57
I've just switched between a single-link DVI adapter and a DisplayPort adapter for the NVS420 - this allows me to run the monitors at their full native resolution (2560x1600). However, after this change I can't get VizStack to bring up an X server.

The portion of the Xorg.log file from the nvidia driver is here:

(==) NVIDIA(0): Depth 24, (==) framebuffer bpp 32
(==) NVIDIA(0): RGB weight 888
(==) NVIDIA(0): Default visual is TrueColor
(==) NVIDIA(0): Using gamma correction (1.0, 1.0, 1.0)
(**) NVIDIA(0): Option "ConnectedMonitor" "DFP-0,DFP-1"
(**) NVIDIA(0): Option "TwinView"
(**) NVIDIA(0): Option "MetaModes" "DFP-0: 2560x1600_60 @2560x1600 +0+0,DFP-1: 2560x1600_60 @2560x1600 +0+1600"
(**) NVIDIA(0): Option "CustomEDID" "DFP-0:/etc/vizstack/templates/displays/edids/Dell-3008wfp.bin;DFP-1:/etc/vizstack/templates/displays/edids/Dell-3008wfp.bin"
(**) NVIDIA(0): Option "UseDisplayDevice" "DFP-0,DFP-1"
(**) NVIDIA(0): Option "IncludeImplicitMetaModes" "False"
(**) NVIDIA(0): Option "ProbeAllGpus" "False"
(**) NVIDIA(0): Enabling RENDER acceleration
(**) NVIDIA(0): ConnectedMonitor string: "DFP-0,DFP-1"
(II) NVIDIA(0): NVIDIA GPU Quadro NVS 420 (G98GL) at PCI:5:0:0 (GPU-0)
(--) NVIDIA(0): Memory: 524288 kBytes
(--) NVIDIA(0): VideoBIOS: 62.98.68.00.06
(II) NVIDIA(0): Detected PCI Express Link width: 4X
(--) NVIDIA(0): Interlaced video modes are supported on this GPU
(--) NVIDIA(0): Connected display device(s) on Quadro NVS 420 at PCI:5:0:0:
(--) NVIDIA(0):     DELL 3008WFP (DFP-0)
(--) NVIDIA(0):     DELL 3008WFP (DFP-1)
(--) NVIDIA(0): DELL 3008WFP (DFP-0): 165.0 MHz maximum pixel clock
(--) NVIDIA(0): DELL 3008WFP (DFP-0): Internal Single Link TMDS
(--) NVIDIA(0): DELL 3008WFP (DFP-1): 165.0 MHz maximum pixel clock
(--) NVIDIA(0): DELL 3008WFP (DFP-1): Internal Single Link TMDS
(**) NVIDIA(0): TwinView enabled
(II) NVIDIA(0): Assigned Display Devices: DFP-0, DFP-1
(WW) NVIDIA(0): No valid modes for
(WW) NVIDIA(0):     "DFP-0:2560x1600_60@2560x1600+0+0,DFP-1:2560x1600_60@2560x1600+0+1600";
(WW) NVIDIA(0):     removing.
(WW) NVIDIA(0):
(WW) NVIDIA(0): Unable to validate any modes; falling back to the default mode
(WW) NVIDIA(0):     "nvidia-auto-select".
(WW) NVIDIA(0):
(II) NVIDIA(0): Validated modes:
(II) NVIDIA(0):     "nvidia-auto-select"
(**) NVIDIA(0): Virtual screen size configured to be 2560 x 3200
(WW) NVIDIA(0): Mode "nvidia-auto-select" is larger than virtual size 2560 x
(WW) NVIDIA(0):     3200; discarding mode
(EE) NVIDIA(0): Failure to construct a valid mode list: no modes remaining.
(EE) NVIDIA(0): *** Aborting ***

I'm pretty sure I have an updated EDID binary (i.e. collected /after/ connecting the monitors via DisplayPort), and when it's run on a native X server the nvidia-settings program correctly detects that the monitors can run at 2560x1600 at 60Hz, so I'm rather at a loss to know what's going on here.

One odd thing is that running natively nvidia-settings sees the monitors as DFP-2 and DFP-3, rather than DFP-0 and DFP-1.

Simon Fowler
Technical Specialist, eResearch Visualisation team, CSIRO IM&T
Yarralumla, 2600  Desk 02 6124 1453  Mob 0409 245 871
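[Editor's note: the "updated EDID binary" doubt above can be sanity-checked offline. The sketch below is a hypothetical standalone checker - the edidcheck name and the program are illustrative, not part of VizStack or the NVIDIA tools. It verifies the two structural properties every EDID binary must satisfy: the fixed 8-byte header at the start of block 0, and a checksum making each 128-byte block sum to 0 modulo 256. A stale, truncated, or corrupted capture usually fails one of these.]

#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    /* Fixed 8-byte header that every EDID base block must begin with. */
    static const unsigned char magic[8] =
        { 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00 };
    unsigned char block[128];
    FILE *f;
    int n = 0;

    if (argc != 2 || (f = fopen(argv[1], "rb")) == NULL) {
        fprintf(stderr, "usage: edidcheck <edid.bin>\n");
        return 2;
    }
    while (fread(block, 1, sizeof block, f) == sizeof block) {
        unsigned sum = 0;
        if (n == 0 && memcmp(block, magic, sizeof magic) != 0) {
            fprintf(stderr, "bad EDID header\n");
            return 1;
        }
        /* Each 128-byte EDID block carries a checksum byte that
         * makes the whole block sum to 0 mod 256. */
        for (int i = 0; i < 128; i++)
            sum += block[i];
        if (sum % 256 != 0) {
            fprintf(stderr, "block %d has a bad checksum\n", n);
            return 1;
        }
        n++;
    }
    fclose(f);
    printf("%d block(s), header and checksums OK\n", n);
    return n > 0 ? 0 : 1;
}

A file that passes both checks can still describe the wrong monitor, but a file that fails them was definitely not a clean capture.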
From: Kumar, S. <shr...@hp...> - 2010-04-16 04:35:05
|
Hi Glenn, > Shree, > > Why is there a need to relate the CUDA device index to a PCI bus ID? > > Is the issue that some GPUs on the same node are allocated by bus ID for a vis app and others by CUDA device ID? VizStack can allocate GPUs and keep track of what is allocated to whom. There is no way to predict in advance which app will get which GPU. I also assume that any GPU can be used for either compute or viz. We need to be able to tell a CUDA app to run on a specific GPU allocated to it. VizStack by itself can distinguish GPUs only by their device ID. To make things easier, VizStack assigns an index to each GPU, in the range 0 to (n-1) where n is the number of GPUs on the system. This number is assigned in ascending order of PCI ID. CUDA also assigns numbers in the range 0 to (n-1). However, I have not been able to figure out how CUDA numbers the GPUs. For some systems and simple configurations, one could hope that the number assigned by CUDA matches the number assigned by VizStack. When there is only 1 GPU in a system, this is certain. If there are two GPUs in a system, can we guarantee that the VizStack ordering will match the CUDA ordering? I was tempted to say "yes", but then I stumbled on this link: http://www.ks.uiuc.edu/Research/namd/mailing_list/namd-l/10866.html . If we can reliably say that CUDA device ID <n> maps to the GPU at PCI ID <m>, then we can make the app run on that GPU by setting an environment variable. PS: The "Hardware Tip" section in http://www.ncsa.illinois.edu/UserInfo/Training/Workshops/CUDA/presentations/tutorial-CUDA.html says that "Newer Nvidia driver and hardware also map GPU numbers to PCI locations". However, I haven't seen that to be the case. Regards -- Shree -----Original Message----- From: Kumar, Shree Sent: Thursday, April 15, 2010 9:38 AM To: viz...@li... Subject: Re: [vizstack-users] Bug report/misbehaviour under Ubuntu <snipped> |
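A small host program can settle the numbering question empirically. Newer CUDA runtimes (3.2 and later, to the best of my knowledge) expose pciBusID and pciDeviceID fields in cudaDeviceProp, so a sketch along the following lines - illustrative only, not part of VizStack - prints each CUDA device index next to its PCI location, for comparison against lspci:

/* cuda-pci-map.cu: print each CUDA device index with its PCI location.
 * Sketch only; assumes a CUDA runtime new enough to fill in the
 * pciBusID/pciDeviceID fields of cudaDeviceProp.
 * Build: nvcc cuda-pci-map.cu -o cuda-pci-map
 */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        fprintf(stderr, "no CUDA devices found\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        /* bus/device numbers are comparable with lspci output */
        printf("CUDA device %d: %s at PCI %02x:%02x.0\n",
               i, prop.name, prop.pciBusID, prop.pciDeviceID);
    }
    return 0;
}

Run once per node, the output would be enough to write the static file mapping VizStack's PCI-ordered index to the CUDA device number.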
From: Lupton, G. <Gle...@hp...> - 2010-04-15 15:22:40
|
Shree, Why is there a need to relate the CUDA device index to a PCI bus ID? Is the issue that some GPUs on the same node are allocated by bus ID for a vis app and others by CUDA device ID? Glenn -----Original Message----- From: Kumar, Shree Sent: Thursday, April 15, 2010 9:38 AM To: viz...@li... Subject: Re: [vizstack-users] Bug report/misbehaviour under Ubuntu <snipped> |
From: Kumar, S. <shr...@hp...> - 2010-04-15 13:39:31
|
So you are essentially running a "multi head" configuration. Nice! If you have feedback w.r.t the documentation, then that would be very useful for us. VizStack will do well for remote sessions. VizStack could be used to manage GPUs for general purpose computation (CUDA), but we haven't done anything specific for this. In fact, there is a potential unsolved problem around this. I'll assume that CUDA would be the API your applications would use. CUDA applications can run on one or more GPUs. However, there is no standard way of expressing which GPU to run an application on. Note that X/OpenGL applications use the environment variable DISPLAY to tell the app where to run. However, not all is lost. VizStack sets an environment variable GPU_INDEX. If your application can be recompiled from source to pick this up, then all is fine. This will work in some cases - e.g. when there is a single GPU in the system OR when there are two GPUs in the system. However, this may not work completely reliably in all cases when there are multiple GPUs on a system. With the CUDA API, there is no precise way to associate a specific GPU (identified by, say, its PCI ID) with the device number used by CUDA. With some work, it _may_ be possible to find which GPU device number (in CUDA terms) corresponds to which physical GPU. This could be used to write out a static configuration file that maps the GPU_INDEX exposed by VizStack to the device number used by CUDA. HTH -- Shree -----Original Message----- From: Sim...@cs... [mailto:Sim...@cs...] Sent: Thursday, April 15, 2010 9:02 AM To: viz...@li... Subject: Re: [vizstack-users] Bug report/misbehaviour under Ubuntu <snipped> |
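Honouring GPU_INDEX from source is a one-liner at startup; a minimal sketch, assuming the CUDA device numbering happens to match VizStack's PCI-ordered index (the very assumption that breaks down on some multi-GPU systems, as noted above):

/* Sketch: pick the CUDA device from VizStack's GPU_INDEX variable.
 * Assumes CUDA's ordering matches VizStack's, which holds for simple
 * configurations but is not guaranteed in general.
 */
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

int main(void)
{
    const char *idx = getenv("GPU_INDEX");  /* set by VizStack */
    int dev = idx ? atoi(idx) : 0;          /* fall back to device 0 */

    if (cudaSetDevice(dev) != cudaSuccess) {
        fprintf(stderr, "cudaSetDevice(%d) failed\n", dev);
        return 1;
    }
    printf("running on CUDA device %d\n", dev);
    /* ... allocations and kernel launches now target this GPU ... */
    return 0;
}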
From: <Sim...@cs...> - 2010-04-15 03:32:28
|
Mostly as a physical desktop in this case. Part of the reason for using vizstack rather than the native physical desktop is that it makes it much easier for us to partition the display - we just start up separate vizstack sessions for each partition, rather than having to jump through hoops to swap between X configurations. It's also partly that this is the configuration that we managed to get working reliably, while we had serious issues getting other things to behave the way we wanted them to. We're also hoping to use vizstack to help manage GPUs for general purpose computation, and for managing GPU resources for remote visualisation (GPU sharing will help enormously with that). Thanks for implementing our request! Simon > -----Original Message----- > From: Kumar, Shree [mailto:shr...@hp...] > Sent: Thursday, 15 April 2010 1:02 PM > To: viz...@li... > Subject: Re: [vizstack-users] Bug report/misbehaviour under Ubuntu > <snipped> |
From: Kumar, S. <shr...@hp...> - 2010-04-15 03:11:40
|
Hi Simon, Yes, support for bezels will be included. Will you be using the tiled display as a physical desktop? Bezels will work fine if you don't use the tiled display for a physical desktop. This is due to an issue with the current version of the nvidia driver. Sometimes, I have observed that the desktop pans when hidden pixels are configured, but this happens only when you move the mouse to an invisible location. Regards -- Shree -----Original Message----- From: Sim...@cs... [mailto:Sim...@cs...] Sent: Wednesday, April 14, 2010 11:45 AM To: viz...@li... Subject: Re: [vizstack-users] Bug report/misbehaviour under Ubuntu <snipped> |
From: <Sim...@cs...> - 2010-04-14 06:15:35
|
Any chance v1.1 will include support for bezels, as discussed previously? We're looking to roll out systems in the next month, and it'd be nice if we could have this feature at the time we rolled them out. Simon ________________________________________ From: Kumar, Shree [shr...@hp...] Sent: Monday, April 12, 2010 2:44 PM To: viz...@li... Subject: Re: [vizstack-users] Bug report/misbehaviour under Ubuntu <snipped> |
From: Kumar, S. <shr...@hp...> - 2010-04-12 04:46:26
|
Hi Simon, Thanks for filing a ticket. I had filed one too, and fixed it. So I've marked the new one as a duplicate. We'll "soon" be releasing a v1.1 that will include this fix and a host of other features and fixes. Thanks -- Shree -----Original Message----- From: Kumar, Shree Sent: Friday, March 26, 2010 12:57 PM To: viz...@li... Subject: Re: [vizstack-users] Bug report/misbehaviour under Ubuntu <snipped> |
From: Kumar, S. <shr...@hp...> - 2010-03-26 07:28:13
|
Hi Simon, Thanks for finding this out. Yes, it can be added to the startup process. vs-X is probably the right place to create this directory. VizStack can run on multiple nodes, and /var/run/vizstack is needed on all nodes. I'll also have to remove /var/run/vizstack from the RPM packaging, I think. Can you file this as an issue on the SourceForge bug trackers? Thanks -- Shree -----Original Message----- From: Sim...@cs... [mailto:Sim...@cs...] Sent: Thursday, March 25, 2010 7:30 AM To: viz...@li... Subject: [vizstack-users] Bug report/misbehaviour under Ubuntu <snipped> |
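The fix amounts to a check-and-create step early in startup. In C it is a single mkdir call that tolerates the directory already existing - a sketch of the idea, not the actual VizStack change:

/* Sketch: recreate /var/run/vizstack at startup, for distributions
 * (e.g. Ubuntu) that clear /var/run on reboot.
 */
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <errno.h>

static int ensure_run_dir(void)
{
    if (mkdir("/var/run/vizstack", 0755) == 0)
        return 0;                /* created */
    if (errno == EEXIST)
        return 0;                /* already present: nothing to do */
    perror("mkdir /var/run/vizstack");
    return -1;
}

int main(void)
{
    return ensure_run_dir() == 0 ? 0 : 1;
}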
From: <Sim...@cs...> - 2010-03-25 01:59:49
|
With Ubuntu everything under /var/run is cleared out after a reboot, and /var/run/vizstack isn't recreated when vs-ssm starts up - is there any chance that could be added to the startup process? Thanks, Simon Fowler Technical Specialist eResearch Visualisation team CSIRO IM&T Yarralumla, 2600 Desk 02 6124 1453 Mob 0409 245 871 |
From: <Sim...@cs...> - 2010-03-25 01:51:35
|
> -----Original Message----- > From: Kumar, Shree [mailto:shr...@hp...] > Sent: Wednesday, 24 March 2010 5:18 PM > To: viz...@li... > Subject: Re: [vizstack-users] A couple of questions related > to tiled display setups > <snipped> > > So if I hacked up some configs for ATI cards I could try > them out, and > > it /should/ theoretically work, but if it doesn't it won't > be high on > > your list of things to fix? > > The configuration files are generated by vs-generate-xconfig. > The complexities of configuring an X server (including > determining invalid > Configs) are completely handled here. > > You may need to generate configs for ATI cards to correspond > to many cases: > 1. X server with a single ATI card > 2. X server with two ATI cards (PCI IDs for both cards > needed) 3. Configuring multiple heads independently. How do > we force modelines ? > Can we use custom EDID files ? > 4. Configuring "speciality items" if needed > - Crossfire > - Stereo. What stereo modes are supported ? > - Synchronization modules (ATI equivalent of G-Sync) > 5. Do the ATI cards support framebuffers larger than the > display resolution > (hidden areas)? Does it support "panning domains" ? If yes, how > to configure them. > 6. Can multiple X servers run, each controlling a different > ATI card ? > If so, how ? > > We may find that we need to add additional items to our abstraction. > I'll give you an example for this. Nvidia Quadro cards > support a "noscanout" mode - you can run an X server without > it driving a display. > This mode is not supported on GeForce. To validate the > configuration of a GPU, I am adding a "supportsNoScanout" > property to each GPU. > > For the configurations script to work, we'll need ways to > determine what ATI cards are installed on the system. For the > nvidia cards, we can get this info from nvidia-xconfig, > nvidia-settings and by parsing the X server's log file. > > Nvidia's drivers offer the same interface for GeForce, > Quadro, and Tesla products. Is this the case with ATI too ? > > Adding complete support for ATI GPUs would take a significant > amount of work. Finding out the right configuration file > formats will definitely be a good start... Support for ATI > could be added in phases to avoid problems. > > Support for ATI GPUs will definitely make VizStack more > complete, and I'm all for it. But it is not very high on my > list of things to fix. I dont have access to any ATI > hardware, and that will also slow down inclusion of ATI support. > Ah, there's /much/ more to it than I was hoping . . . I still don't have access to any ATI hardware, so it's way down my list of priorities - I'd much rather see bezels handled properly. That said I'll see if I can collect some of the info you need to start with. Simon |
From: Kumar, S. <shr...@hp...> - 2010-03-24 06:18:50
|
Hi Simon, >> Information needed for (a), (b) and (c) would come from the >> display template. >> >> Do you think this would be sufficient for your needs? >> > I think that would be fine - the other tiled display systems that > I've used (SAGE and CGLX) use the dot pitch combined with the > physical size of the bezel to decide how many hidden pixels to > 'draw'; your suggestion seems pretty much the same. Ok. >> To improve manual control, we could have a provision to have >> a per-resolution bezel, specified in pixels. >> > Always nice to have extra knobs to twiddle ;-) Right. Will keep this too. >> This should not happen - VizStack should set up the server for >> proper 3D rendering at all times. > > This actually looks like it might be unrelated to VizStack - > I can't get GLX working at the moment even when a single GPU > is driving a single monitor. No idea what's going on with > that, but I'll hold off blaming VizStack for the problem until > I can get something functional on a plain X desktop. Ok. Let me know what went wrong, though. > > So if I hacked up some configs for ATI cards I could try them out, and > it /should/ theoretically work, but if it doesn't it won't be high on > your list of things to fix? The configuration files are generated by vs-generate-xconfig. The complexities of configuring an X server (including determining invalid configs) are completely handled here. You may need to generate configs for ATI cards to cover many cases: 1. X server with a single ATI card 2. X server with two ATI cards (PCI IDs for both cards needed) 3. Configuring multiple heads independently. How do we force modelines? Can we use custom EDID files? 4. Configuring "speciality items" if needed - Crossfire - Stereo. What stereo modes are supported? - Synchronization modules (ATI equivalent of G-Sync) 5. Do the ATI cards support framebuffers larger than the display resolution (hidden areas)? Do they support "panning domains"? If yes, how to configure them. 6. Can multiple X servers run, each controlling a different ATI card? If so, how? We may find that we need to add additional items to our abstraction. I'll give you an example of this. Nvidia Quadro cards support a "noscanout" mode - you can run an X server without it driving a display. This mode is not supported on GeForce. To validate the configuration of a GPU, I am adding a "supportsNoScanout" property to each GPU. For the configuration script to work, we'll need ways to determine what ATI cards are installed on the system. For the nvidia cards, we can get this info from nvidia-xconfig, nvidia-settings and by parsing the X server's log file. Nvidia's drivers offer the same interface for GeForce, Quadro, and Tesla products. Is this the case with ATI too? Adding complete support for ATI GPUs would take a significant amount of work. Finding out the right configuration file formats will definitely be a good start... Support for ATI could be added in phases to avoid problems. Support for ATI GPUs will definitely make VizStack more complete, and I'm all for it. But it is not very high on my list of things to fix. I don't have access to any ATI hardware, and that will also slow down inclusion of ATI support. Regards -- Shree VizStack : http://vizstack.sourceforge.net/ ParaComp : http://paracomp.sourceforge.net/ HP Internal Blog : http://blogs.hp.com/shree Personal Blog : http://www.shreekumar.in/ |
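Both descriptions above reduce to the same arithmetic: the display template supplies a dot pitch (active width in millimetres divided by horizontal resolution), and the hidden-pixel count is the bezel width divided by that pitch. A sketch with made-up numbers - none of them taken from a real template:

/* Sketch: pixels to hide behind a bezel edge, from the physical
 * sizes a display template would carry. Sample figures are
 * hypothetical.
 */
#include <stdio.h>

static int bezel_pixels(double bezel_mm, double active_mm, int resolution)
{
    double dot_pitch = active_mm / resolution;   /* mm per pixel */
    return (int)(bezel_mm / dot_pitch + 0.5);    /* round to nearest */
}

int main(void)
{
    /* e.g. ~641 mm of active width at 2560 pixels, with 20 mm of
       bezel between adjacent panels */
    printf("hidden pixels: %d\n", bezel_pixels(20.0, 641.3, 2560));
    return 0;
}

The per-resolution override mentioned above would simply bypass this computation with an explicit pixel count.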
From: <Sim...@cs...> - 2010-03-24 02:36:33
|
> -----Original Message----- > From: Kumar, Shree [mailto:shr...@hp...] > Sent: Tuesday, 23 March 2010 10:30 PM > To: viz...@li... > Subject: Re: [vizstack-users] A couple of questions related > to tiled display setups <snip> > We haven't implemented bezels yet. More out of a lack of > immediate need, I'd say. > > The current code can be modified to support bezels; I did > some experiments to make sure. > > I would like your feedback on what kind of interface you > would like w.r.t bezels. The nvidia card has support for > "hidden pixels", and VizStack can use that. > > The number of pixels to skip for the bezel for a particular > display device depends on > a. the physical size of the bezel. Some displays > have a larget vertical spacing compared to the horizontal > spacing. > b. the display area of the display device > c. the resolution > > Information needed for (a), (b) and (c) whould come from the > display template. > > Do you think this would be sufficient for your needs ? > I think that would be fine - the other tiled display systems that I've used (SAGE and CGLX) use the dot pitch combined with the physical size of the bezel to decide how many hidden pixels to 'draw'; your suggestion seems pretty much the same. > To improve manual control, we could have a provision to have > a per-resolution bezel, specified in pixels. > Always nice to have extra knobs to twiddle ;-) > > The second issue we have is that the display as currently set up > > doesn't seem to support OpenGL/GLX - xdpyinfo reports that > the GLX and > > NV-GLX extensions are supported, but the displays (in various > > configurations, including one where only a single GPU is > driving two > > monitors) end up without any working GLX support. Attempting to run > > any GLX programs (including glxinfo) fails to create a > context. This > > is less of an issue than the bezels, since we aren't > planning to use > > the tiled display to do 3D rendering, but we'd like to > figure out what > > the issue is in case we run into it again later, on other > > configurations. > > This should not happen - VizStack should setup the server for > proper 3D rendering at all times. > This actually looks like it might be unrelated to VizStack - I can't get GLX working at the moment even when a single GPU is driving a single monitor. No idea what's going on with that, but I'll hold off blaming VizStack for the problem until I can get something functional on a plain X desktop. > > Finally, are there any particular constraints on the GPUs and/or > > drivers that we can use? From getting VizStack running on > the NVS420 I > > suspect not, but I wanted to find out if there are any less obvious > > constraints/requirements. We're currently using nvidia workstation > > hardware, but we'd like to be able to use VizStack with both nvidia > > desktop/gaming cards and ATI cards, since we have a pretty > > heterogenous environment and we'd also like to avoid being > locked in > > to nvidia as a supplier. > > For now, VizStack supports only nvidia cards. We the > developers have acccess to only nvidia cards, and hence have > implemented support for them. > > VizStack tries to use generic GPU concepts rather than > anything very specific to nvidia, with the intent to support > other cards. However, to work with other cards, somebody will > have to do the hard work of generating the right config files > for them. We will also need to ensure that our abstraction > will fit the ATI cards. 
> So if I hacked up some configs for ATI cards I could try them out, and it /should/ theoretically work, but if it doesn't it won't be high on your list of things to fix? I'm okay with that - I was mostly looking for clarification. > > Oh, one final question: should we be tracking the svn trunk, or the > > 'shree' branch (which seems to be seeing active development)? > > The svn trunk it has to be for now. The 'shree' branch should > be treated as experimental and unstable. > Okay, trunk it is. Thanks. Simon |
From: Kumar, S. <shr...@hp...> - 2010-03-23 14:40:53
|
Hi Simon, In my email below, I wasn't very clear on VizStack's support for nvidia cards. The version available on SourceForge as release 1.0-2 supports GeForce and Quadro cards, as well as QuadroPlexes. However, you need to create templates for each card - as you well know by now. That version also may not work very well with GeForce cards. The next version of VizStack (part of the "shree" branch) - expected availability next month - will support all nvidia cards supported by the nvidia driver. This will include GeForce, Quadro GPUs, QuadroPlex D & S series, as well as the Tesla line of GPUs. There will be no need to create explicit templates either. HTH -- Shree VizStack : http://vizstack.sourceforge.net/ ParaComp : http://paracomp.sourceforge.net/ HP Internal Blog : http://blogs.hp.com/shree Personal Blog : http://www.shreekumar.in/ -----Original Message----- From: Kumar, Shree Sent: Tuesday, March 23, 2010 5:00 PM To: viz...@li... Subject: Re: [vizstack-users] A couple of questions related to tiled display setups <snipped> |
From: Kumar, S. <shr...@hp...> - 2010-03-23 11:30:20
|
Hi Simon, > I'm in the process of setting up a small (3x2) tiled display > system using a pair of nvidia NVS 420 cards. This isn't intended > for locally rendered 3D viz, we're just using VizStack as a way > to drive a tiled display, though we are also separately looking > at using VizStack to help manage remote visualisation resources, > along with VGL and related stuff. This is the pilot system for > a reasonably large large of similar systems that we're planning > to roll out over the next few months, and to develop further > after that. Thanks for this info. > I've gotten the tiled display up and running successfully > (after creating a GPU config for the nvs420 and the Dell > 3008wfp monitors we're using), however there are a few things > that are currently problematic. The big one is the bezels - there > doesn't appear to be any way to create a tiled display where the > bezels hide part of the display, rather than the display simply > skipping over them. This is quite problematic for us, since most > of the applications we're looking at for these displays will work > best with the hidden pixels approach. Is there any way that the > current code can support this display mode? If not, is there any > chance it could be implemented, or that you could provide some > advice on how we could implement it here? We haven't implemented bezels yet. More out of a lack of immediate need, I'd say. The current code can be modified to support bezels; I did some experiments to make sure. I would like your feedback on what kind of interface you would like w.r.t bezels. The nvidia card has support for "hidden pixels", and VizStack can use that. The number of pixels to skip for the bezel for a particular display device depends on a. the physical size of the bezel. Some displays have a larget vertical spacing compared to the horizontal spacing. b. the display area of the display device c. the resolution Information needed for (a), (b) and (c) whould come from the display template. Do you think this would be sufficient for your needs ? To improve manual control, we could have a provision to have a per-resolution bezel, specified in pixels. > The second issue we have is that the display as currently set > up doesn't seem to support OpenGL/GLX - xdpyinfo reports that > the GLX and NV-GLX extensions are supported, but the displays > (in various configurations, including one where only a single > GPU is driving two monitors) end up without any working GLX > support. Attempting to run any GLX programs (including glxinfo) > fails to create a context. This is less of an issue than the > bezels, since we aren't planning to use the tiled display to do > 3D rendering, but we'd like to figure out what the issue is in > case we run into it again later, on other configurations. This should not happen - VizStack should setup the server for proper 3D rendering at all times. Can you tell me what happens when you run /opt/vizstack/sbin/vs-test-gpus ? Can you send me the /var/log/Xorg.<display_number>.log file ? Also, send me the config file for the server ( VizStack creates configuration files at runtime). /var/run/vizstack/xorg-<display_number>.conf (replace <display_number> to correspond to your DISPLAY) > Finally, are there any particular constraints on the GPUs > and/or drivers that we can use? From getting VizStack running > on the NVS420 I suspect not, but I wanted to find out if there > are any less obvious constraints/requirements. 
> We're currently
> using nvidia workstation hardware, but we'd like to be able to
> use VizStack with both nvidia desktop/gaming cards and ATI cards,
> since we have a pretty heterogeneous environment and we'd also
> like to avoid being locked in to nvidia as a supplier.

For now, VizStack supports only nvidia cards. We, the developers, have
access only to nvidia cards, and hence have implemented support for
them. VizStack tries to use generic GPU concepts rather than anything
very specific to nvidia, with the intent to support other cards.
However, to work with other cards, somebody will have to do the hard
work of generating the right config files for them. We will also need
to ensure that our abstraction will fit the ATI cards.

> Oh, one final question: should we be tracking the svn trunk, or
> the 'shree' branch (which seems to be seeing active development)?

The svn trunk, it has to be for now. The 'shree' branch should be
treated as experimental and unstable.

Regards
-- Shree
|
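A minimal sketch of the hidden-pixel computation Shree outlines
above, driven by his inputs (a) physical bezel size, (b) display area
and (c) resolution. The function name and the panel figures are
hypothetical, not VizStack code or real Dell specifications.

# Hypothetical illustration of the bezel -> hidden-pixels mapping
# described above; not part of VizStack.

def bezel_pixels(bezel_mm, active_mm, resolution_px):
    """Pixels hidden behind bezel_mm of bezel, for a display whose
    active area spans active_mm at resolution_px pixels."""
    px_per_mm = resolution_px / float(active_mm)
    return int(round(bezel_mm * px_per_mm))

# Assumed figures for a 30" panel: 641 mm active width at 2560 px,
# with roughly 20 mm of bezel on each adjoining edge.
horizontal_gap_px = bezel_pixels(2 * 20.0, 641.0, 2560)
print(horizontal_gap_px)  # ~160 px skipped between adjacent tiles

A per-resolution override specified in pixels, as suggested in the
message, would simply replace the computed value for that mode.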
From: <Sim...@cs...> - 2010-03-23 02:05:49
|
I'm in the process of setting up a small (3x2) tiled display system
using a pair of nvidia NVS 420 cards. This isn't intended for locally
rendered 3D viz, we're just using VizStack as a way to drive a tiled
display, though we are also separately looking at using VizStack to
help manage remote visualisation resources, along with VGL and
related stuff. This is the pilot system for a reasonably large number
of similar systems that we're planning to roll out over the next few
months, and to develop further after that.

I've gotten the tiled display up and running successfully (after
creating a GPU config for the nvs420 and the Dell 3008wfp monitors
we're using); however, there are a few things that are currently
problematic. The big one is the bezels - there doesn't appear to be
any way to create a tiled display where the bezels hide part of the
display, rather than the display simply skipping over them. This is
quite problematic for us, since most of the applications we're
looking at for these displays will work best with the hidden pixels
approach. Is there any way that the current code can support this
display mode? If not, is there any chance it could be implemented, or
that you could provide some advice on how we could implement it here?

The second issue we have is that the display as currently set up
doesn't seem to support OpenGL/GLX - xdpyinfo reports that the GLX
and NV-GLX extensions are supported, but the displays (in various
configurations, including one where only a single GPU is driving two
monitors) end up without any working GLX support. Attempting to run
any GLX programs (including glxinfo) fails to create a context. This
is less of an issue than the bezels, since we aren't planning to use
the tiled display to do 3D rendering, but we'd like to figure out
what the issue is in case we run into it again later, on other
configurations.

Finally, are there any particular constraints on the GPUs and/or
drivers that we can use? From getting VizStack running on the NVS420
I suspect not, but I wanted to find out if there are any less obvious
constraints/requirements. We're currently using nvidia workstation
hardware, but we'd like to be able to use VizStack with both nvidia
desktop/gaming cards and ATI cards, since we have a pretty
heterogeneous environment and we'd also like to avoid being locked in
to nvidia as a supplier.

Oh, one final question: should we be tracking the svn trunk, or the
'shree' branch (which seems to be seeing active development)?

Simon Fowler
Technical Specialist
eResearch Visualisation team
CSIRO IM&T
Yarralumla, 2600
Desk 02 6124 1453
Mob 0409 245 871
|
From: Kumar, S. <shr...@hp...> - 2010-02-23 14:32:03
|
Folks,

I'm pleased to announce Release 1.0-2 of VizStack. This is the new
stable release of VizStack. The new release is available from:

http://sourceforge.net/projects/vizstack/files/

VizStack 1.0-2 consists of bugfixes and small enhancements over
1.0-1:

1. Modified: The "-m" option for the scripts vs-configure-system and
   vs-configure-standalone has been dropped. It is replaced by a "-r"
   option which allows specification of a network value.
2. Fixed: The configuration commands vs-configure-system and
   vs-configure-standalone now work with nvidia 190 series drivers.
3. Fixed: SLURM support. Scripts now work for users whose uid and gid
   are not equal.
4. Fixed: Framelock-related errors from the master node, and when
   SLURM was used as the scheduler.
5. Added: a "-v" option to vs-test-gpus. This shows startup and error
   messages, making it easier to diagnose setup issues.
6. Fixed: Documentation is now consistent with release files.
7. Many updates to the VizStack manual. Added additional setup tests
   and a "Troubleshooting" section.
8. Added: If viz-vgl allocates a GPU on the node where the desktop is
   running, then it starts a vglclient on the same node. Using the
   environment variables VGL_CLIENT and VGL_PORT, the user can point
   the script to a running vglclient.

Thanks to the following people for reporting issues & trying out the
earlier release:
- Philippe Garnier
- Phil Laxton
- Robin Jansohn
- Benjamin Brunzel

And the following folks for testing & fixes:
- Manju
- Sunil

I look forward to more of you trying out this new release!

Cheers
-- Shree
|
From: Kumar, S. <shr...@hp...> - 2010-02-22 08:36:14
|
Hi Ben,

VizStack parses driver prints from X's log file. The 190 series
drivers altered the format of certain prints, causing this problem.

Please take the attached file and replace your existing file
/opt/vizstack/sbin/vs-generate-node-config. After that, run
vs-configure-standalone again. This should fix the problem.

The change in the file can be seen at
http://sourceforge.net/apps/trac/vizstack/changeset/12

Regards
-- Shree

-----Original Message-----
From: Mc...@we... [mailto:Mc...@we...]
Sent: Friday, February 19, 2010 3:52 PM
To: viz...@li...
Subject: [vizstack-users] vizstack standalone configuration error

Hello,

I tried to configure vizstack 1.0-1.x86_64 on an HP z400 workstation
in standalone mode. The system is running Fedora 12 with kernel
2.6.31.12 x86_64. I have two Nvidia Quadro FX 1800 cards with the
latest driver installed (195.22), so I switched to runlevel 3 and
executed the /opt/vizstack/sbin/vs-configure-standalone script.

Here is the error I got:

[root@vgldemo ~]# /opt/vizstack/sbin/vs-configure-standalone
/opt/vizstack/python/vsapi.py:26: DeprecationWarning: The popen2 module is deprecated. Use the subprocess module.
  import popen2
Processing Node 'localhost'...
Errors happened while trying to get the configuration of node 'localhost'. Reason:
Traceback (most recent call last):
  File "/opt/vizstack/sbin/vs-generate-node-config", line 260, in <module>
    gpuIndices.sort(lambda x,y:int(gpuInfo[x]['BusID'].split(":")[1])-int(gpuInfo[y]['BusID'].split(":")[1]))
  File "/opt/vizstack/sbin/vs-generate-node-config", line 260, in <lambda>
    gpuIndices.sort(lambda x,y:int(gpuInfo[x]['BusID'].split(":")[1])-int(gpuInfo[y]['BusID'].split(":")[1]))
KeyError: 'BusID'
Please fix the above errors & run this tool again
Failed to configure standalone configuration

I think maybe there is a problem with my graphics driver, because the
script couldn't get the number of GPUs. It would be great if anyone
knows what to do.

Greetings
Ben

------------------------------------------------------------------------------
_______________________________________________
vizstack-users mailing list
viz...@li...
https://lists.sourceforge.net/lists/listinfo/vizstack-users
|
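For context on the traceback above: the failing line sorts GPUs by
PCI bus number and assumes every parsed record carries a 'BusID'
value such as "PCI:5:0:0". Below is a minimal sketch of that
assumption plus a defensive variant, with made-up records; this is my
own illustration, not the actual changeset-12 fix (which repairs the
log parsing itself).

# Illustration only - mimics the sort from vs-generate-node-config
# line 260, using hypothetical GPU records.

def bus_number(record):
    # "PCI:5:0:0" -> 5; records missing 'BusID' (the symptom of the
    # changed 190-series driver prints) sort last instead of raising
    # KeyError and aborting the whole configuration run.
    busid = record.get('BusID')
    if busid is None:
        return float('inf')
    return int(busid.split(':')[1])

gpuInfo = {0: {'BusID': 'PCI:9:0:0'}, 1: {'BusID': 'PCI:5:0:0'}, 2: {}}
gpuIndices = sorted(gpuInfo, key=lambda i: bus_number(gpuInfo[i]))
print(gpuIndices)  # -> [1, 0, 2]; GPU 2 is visible instead of fatal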