From: Jason G. <kil...@gm...> - 2018-04-03 17:30:10
On Sat, Mar 31, 2018 at 10:59 AM, James Pearson <ja...@mo...> wrote:
> The page at
> <http://linuxwacom.sourceforge.net/wiki/index.php/Dual_and_Multi-Monitor_Set_Up>
> states:
>
> "Provided you have the latest git version, you can also use MapToOutput
> with the NVIDIA binary driver. In this case, the monitor must be
> specified with "HEAD-0", "HEAD-1", etc."
>
> Also, the xsetwacom source states in need_xinerama():
>
>  * A server bug causes the NVIDIA driver to report RandR 1.3 support
>  * despite not exposing RandR CRTCs. We need to fall back to Xinerama
>  * for this case as well.
>
> and the code has:
>
> 	if (!XQueryExtension(dpy, "RANDR", &opcode, &event, &error) ||
> 	    !XRRQueryVersion(dpy, &maj, &min) || (maj * 1000 + min) < 1002 ||
> 	    XQueryExtension(dpy, "NV-CONTROL", &opcode, &event, &error))
> 	{
>
> However, I'm using the Nvidia driver 384.69 (on CentOS 7.4) and
> 'xrandr -v' reports:
>
>   xrandr program version 1.5.0
>   Server reports RandR version 1.5
>
> If I remove the above 'NV-CONTROL' check in need_xinerama(), then
> MapToOutput appears to work fine with the 'monitor names' as given in
> the xrandr output ...
>
> So, I guess this limitation of the Nvidia driver is no longer valid
> with more recent driver versions?
>
> Also, unless I'm missing something, the comments for need_xinerama()
> and the code don't agree - the comments state:
>
>   "We depend on RandR 1.3 or better in order to work"
>
> but the (above) code checks for versions less than 1.2 - shouldn't it
> be checking for versions less than 1.3, i.e. checking
> '(maj * 1000 + min) < 1003' instead?
>
> Anyway, I think the following patch will allow Nvidia drivers with
> RandR 1.4 or better to work (assuming the Nvidia bug is fixed in 1.4
> and above?)
>
> --- ./tools/xsetwacom.c.dist	2016-08-08 01:06:24.000000000 +0100
> +++ ./tools/xsetwacom.c	2018-03-31 18:25:18.533865665 +0100
> @@ -2169,10 +2169,13 @@ static Bool need_xinerama(Display *dpy)
>  {
>  	int opcode, event, error;
>  	int maj, min;
> +	int randrv;
> +
> +	randrv = XQueryExtension(dpy, "NV-CONTROL", &opcode, &event, &error) ?
> +		1004 : 1002;
>
>  	if (!XQueryExtension(dpy, "RANDR", &opcode, &event, &error) ||
> -	    !XRRQueryVersion(dpy, &maj, &min) || (maj * 1000 + min) < 1002 ||
> -	    XQueryExtension(dpy, "NV-CONTROL", &opcode, &event, &error))
> +	    !XRRQueryVersion(dpy, &maj, &min) || (maj * 1000 + min) < randrv)
>  	{
>  		TRACE("RandR extension not found, too old, or NV-CONTROL "
>  		      "extension is also present.\n");
>
> Thanks
>
> James Pearson

Thanks for checking up on this -- I don't think anyone has re-evaluated
the state of nVidia xrandr support since that code was written way back
in 2011. The "we depend on RandR 1.3" comment definitely appears to be
inaccurate.

As for the version check, I believe the problem is that XRRQueryVersion
returns the version supported by the server, but there's no way to know
if the driver actually supports it or not (see
https://bugzilla.freedesktop.org/show_bug.cgi?id=16741). An old driver
on a new server wouldn't work as expected.

Do you know if the oldest still-supported version of the binary nVidia
drivers can use the RandR codepath successfully? If so, then we can
probably just get rid of the check entirely. Otherwise, we'd probably
have to find a way to get the nVidia driver version to see if the RandR
codepath can be used or not.

Jason
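[For readers following along: the thread above turns on how need_xinerama()
packs the RandR major.minor version into a single integer and compares it
against a threshold. The sketch below is illustrative only -- it is not the
xsetwacom source, and pack_randr_version()/randr_too_old() are hypothetical
helper names -- but it reproduces the same maj * 1000 + min arithmetic to
show why the code's '< 1002' check admits RandR 1.2 while the comment claims
a RandR 1.3 dependency.]

	#include <assert.h>
	#include <stdio.h>

	/* Pack a RandR major.minor version the same way the code in the
	 * thread does: RandR 1.2 -> 1002, RandR 1.3 -> 1003, etc. */
	static int pack_randr_version(int maj, int min)
	{
		return maj * 1000 + min;
	}

	/* Returns 1 ("fall back to Xinerama") when the reported RandR
	 * version is below the required threshold.  The threshold would be
	 * 1003 if the "we depend on RandR 1.3" comment were taken at face
	 * value, rather than the 1002 the current code uses. */
	static int randr_too_old(int maj, int min, int required)
	{
		return pack_randr_version(maj, min) < required;
	}

	int main(void)
	{
		/* RandR 1.2 passes the code's '< 1002' check ... */
		assert(!randr_too_old(1, 2, 1002));
		/* ... but fails the check the comment implies ('< 1003'). */
		assert(randr_too_old(1, 2, 1003));
		/* RandR 1.5, as James's server reports, passes either way. */
		assert(!randr_too_old(1, 5, 1003));
		printf("ok\n");
		return 0;
	}

(Indented rather than fenced, to keep with the plain-text email format.)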