This patch:
1. Refactors the code that chooses the best visual (see the
sketch after this list). The code now prefers a visual that
allows bitwise copying of bitmaps from Windows to X pixmaps
over all others. Otherwise, it chooses the best available
visual, as long as it's not a problematic >24bpp visual. If
no TrueColor visual is available, it picks an 8-plane,
256-colormap-entry PseudoColor visual, which is the only
pseudocolor (colormapped) visual we support. All in all, we
support color depths of 8, 15, 16 and 24. In all other
cases, we refuse to run at all, presenting an intelligible
error message (instead of being unable to draw and just
displaying a blank window).
2. Rephrases error messages to be more precise.
3. Makes a clear distinction between the terms "depth" and
"bpp": depth counts the meaningful bits, while bpp is the
actual bits per pixel, including alignment padding (an
example query follows the test list below). The patch makes
sure the correct term is used throughout variable names and
messages.
4. Eliminates some compilation warnings along the way :)
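For reference, here is a minimal sketch of the selection
order item 1 describes, written against plain Xlib. The
function name select_visual() and the error message are my
illustration, not the patch's actual code, and it omits the
"prefer a bitwise-copyable visual" step, which needs the
RDP depth as input:

#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/Xutil.h>

static int
select_visual(Display *dpy, int screen, XVisualInfo *out)
{
    XVisualInfo template, *matches;
    int i, count, found = 0;

    /* First choice: the deepest TrueColor visual, skipping
       the problematic >24bpp ones. */
    template.screen = screen;
    template.class = TrueColor;
    matches = XGetVisualInfo(dpy, VisualScreenMask | VisualClassMask,
                             &template, &count);
    for (i = 0; i < count; i++)
        if (matches[i].depth <= 24
            && (!found || matches[i].depth > out->depth))
        {
            *out = matches[i];
            found = 1;
        }
    if (matches != NULL)
        XFree(matches);
    if (found)
        return 1;

    /* Fallback: the only colormapped visual we support, an
       8-plane PseudoColor visual (256 colormap entries). */
    template.class = PseudoColor;
    template.depth = 8;
    matches = XGetVisualInfo(dpy, VisualScreenMask | VisualClassMask
                             | VisualDepthMask, &template, &count);
    if (count > 0)
    {
        *out = matches[0];
        found = 1;
    }
    if (matches != NULL)
        XFree(matches);
    if (!found)
        fprintf(stderr, "No suitable TrueColor or 8-plane "
                "PseudoColor visual on this display\n");
    return found;
}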
The patch was tested on:
- 24-plane R8G8B8 visual with 32bpp pixmaps
- 16-plane R5G6B5 visual with 16bpp pixmaps
- 15-plane R5G5B5 visual with 16bpp pixmaps
- 8-plane 256-colormap-size visual
Whenever available, I made sure to test the optimized
(no-bitmap-translation) code path.
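As an aside, the depth/bpp pairing mentioned in item 3 and
visible in the test list (e.g. depth 24 with 32bpp pixmaps)
can be read straight off the X server; a small standalone
snippet (my illustration, not part of the patch):

#include <stdio.h>
#include <X11/Xlib.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    XPixmapFormatValues *formats;
    int i, count;

    if (dpy == NULL)
        return 1;
    /* One entry per supported pixmap depth, with the actual
       bits per pixel and scanline padding used for images. */
    formats = XListPixmapFormats(dpy, &count);
    for (i = 0; i < count; i++)
        printf("depth %2d -> %2d bpp (scanline pad %d)\n",
               formats[i].depth, formats[i].bits_per_pixel,
               formats[i].scanline_pad);
    XFree(formats);
    XCloseDisplay(dpy);
    return 0;
}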
Logged In: YES
user_id=335423
Peter, to quote you:
"If you are able to come up with a patch that solves real
problems and that makes, I think we should apply it."
:)
Screenshot of distorted graphics against Windows 2000 server
Logged In: YES
user_id=344921
With this patch, the graphics are distorted when running on
a 16-bit display against a server with 8bpp. Also, valgrind
reports errors in this case. The attached screenshot shows
the problem.
Valgrind log when running against Windows 2000 server
Updated patch
Logged In: YES
user_id=344921
The attached patch is slightly modified:
* All the visual selection code has been lifted out into a
separate function; ui_init had become very long...
* Indentation fixes, from running indent on everything.
I'm also wondering: rdesktop currently uses depth 8 as the
default. Does this really make sense? I suggest we change
it to 16, 24 or 32, provided the fallback mechanism works
correctly.
Logged In: YES
user_id=335423
Peter, thanks for the response. I'll apply your fixed patch
to my code, then look into your 8-bit-on-16-bit issue and
run the code under Valgrind.
As to depth 8, how come? We first try TrueColor visuals,
picking the one with the best characteristics (but nothing
>24, since that spells trouble). Then, if that fails, we
take a depth-8 PseudoColor visual.
Logged In: YES
user_id=344921
What I mean is that we should perhaps change the default
value of g_server_bpp (or g_server_depth, as it's called in
the patch) to something higher than 8. Currently, users
regularly do "rdesktop -a 24", because you very seldom
actually want depth 8 even when the server supports more.
Logged In: YES
user_id=335423
Re the display corruption of 8bpp RDP on a 16bpp display:
I'm unable to reproduce it. What's the endianness of your
client machine? What's the endianness of your X server?
(Can be checked with xdpyinfo | grep 'image byte order'.)
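The same property can also be queried programmatically; a
tiny standalone example (mine, not rdesktop code):

#include <stdio.h>
#include <X11/Xlib.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (dpy == NULL)
        return 1;
    /* ImageByteOrder() reports the X server's image byte
       order: LSBFirst or MSBFirst. */
    printf("image byte order: %s\n",
           ImageByteOrder(dpy) == LSBFirst ? "LSBFirst" : "MSBFirst");
    XCloseDisplay(dpy);
    return 0;
}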
Logged In: YES
user_id=335423
I'm also unable to reproduce the same error (8bpp Win2K on
16bpp display) with Valgrind's memcheck. The closest I've
got seems to be an error within Xlib itself:
==14696== Syscall param writev(vector[...]) points to
uninitialised byte(s)
==14696== at 0x1BBB81C3: writev (in /lib/tls/libc-2.3.5.so)
==14696== by 0x1BA76616: _X11TransSocketWritev
(Xtranssock.c:2147)
==14696== by 0x1BA768C6: _X11TransWritev (Xtrans.c:907)
==14696== by 0x1BA5AFFC: _XSend (XlibInt.c:1411)
==14696== by 0x1BA4EA6B: PutSubImage (PutImage.c:838)
...
==14696== Address 0x1BF6436A is 834 bytes inside a block of
size 201476 alloc'd
==14696== at 0x1B90459D: malloc (vg_replace_malloc.c:130)
==14696== by 0x1BA5C077: _XAllocScratch (XlibInt.c:2923)
==14696== by 0x1BA4EDE2: PutSubImage (PutImage.c:811)
==14696== by 0x1BA4F1D2: XPutImage (PutImage.c:1024)
==14696== by 0x8054278: ui_desktop_restore (xwin.c:2697)
...
Logged In: YES
user_id=344921
>What's the endianness of your client machine? What's the
>endianness of your X server?
LSBFirst on both.
Logged In: YES
user_id=344921
Re display corruption: The problem only happens when falling
back from depth 16 to 8:
$ ./rdesktop brie -a 16
WARNING: Remote desktop does not support colour depth 16;
falling back to 8
The server is Windows 2000.
Logged In: YES
user_id=335423
Just tried exactly that:
cvs update -C
apply your patch
build
./rdesktop win2kserver -a 16
WARNING: Remote desktop does not support colour depth 16;
falling back to 8
and saw no display problems; everything is just as usual.
The valgrind log is clean as well. I've checked on both my
x86 and x86-64 machines so far, and all seems fair and
square.
Are you sure you didn't have some experimental code in your
run? Which video driver? (I tried NVidia's non-free one and
the free Radeon one.)
Logged In: YES
user_id=335423
The new patch (rdesktop-visuals-take2.patch):
1. Fixes the issue described by Peter.
2. Splits the bitmap format optimization into two cases
(see the sketch below):
   a. The RDP depth and the X11 visual depth match in
format, so we can do direct bitmap copies.
   b. The depths differ, but the X11 visual has the same
color masks as RDP uses at that depth, which allows some
optimization of the bitmap translation.
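Roughly, the distinction between the two cases looks like
this; a hedged sketch in which choose_mode() and the rdp_*
mask parameters are my names, not the patch's:

#include <X11/Xlib.h>
#include <X11/Xutil.h>

typedef enum
{
    TRANSLATE_GENERIC,     /* full pixel-by-pixel translation */
    TRANSLATE_SAME_MASKS,  /* case b: cheaper translation */
    COPY_DIRECT            /* case a: bitwise copy */
} xlat_mode;

static xlat_mode
choose_mode(int rdp_depth, XVisualInfo *vi, unsigned long rdp_red,
            unsigned long rdp_green, unsigned long rdp_blue)
{
    int masks_match = (vi->red_mask == rdp_red
                       && vi->green_mask == rdp_green
                       && vi->blue_mask == rdp_blue);

    if (masks_match && rdp_depth == vi->depth)
        return COPY_DIRECT;
    if (masks_match)
        return TRANSLATE_SAME_MASKS;
    return TRANSLATE_GENERIC;
}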
Update over rdesktop-visuals-2.patch
Logged In: YES
user_id=344921
The new patch seems to work well, and I have applied it.
Thanks for your contribution!
Logged In: YES
user_id=344921
I've just discovered that with this patch, colors are now
displayed correctly when running rdesktop on Solaris, using
this configuration:
RDP depth: 16, display depth: 24, display bpp: 32, X server
BE: 0, host BE: 1
Can you take a look?
Logged In: YES
user_id=335423
You mean colors are displayed INcorrectly?
Logged In: YES
user_id=344921
>You mean colors are displayed INcorrectly?
Yes.
Logged In: NO
Hi Ilya,
Thanks for the patch. At least the current cvs checkout is
working on my SGI Octane2 now.
--
Jeremy Meng
Logged In: YES
user_id=335423
Sorry it took me some time to get back to this. It turns
out g_arch_match also implied an LE host, whereas my
newly-introduced g_compatible_depth didn't imply that. So,
there - I've renamed it to g_compatible_arch and made it
depend on the host being LE.
The value of the optimization g_compatible_arch triggers is
questionable anyway, so I didn't bother solving this in a
better way (making the code within the optimization add a
BSWAP when running on BE) -- instead we simply use the
optimization as rarely as we did before.
P.S. I didn't have a BE machine to test on, so I
reverse-tested it on my LE machine :)
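In case it helps review, the LE check itself can be tiny; a
sketch of the idea (the patch may well detect this
differently, e.g. at compile time):

#include <stdbool.h>

/* Hypothetical helper: true on little-endian hosts, where
   the low-order byte of an int is stored first. */
static bool
host_is_little_endian(void)
{
    const unsigned int one = 1;
    return *(const unsigned char *) &one == 1;
}

/* e.g.: g_compatible_arch = formats_match && host_is_little_endian(); */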