I'm using tilesort with a custom read-back-and-send SPU
(inherits from renderspu) to put the image back into the
app window. A problem arises when the window width
or height is not an exact multiple of the number of
tile columns or rows: some of the tiles end up
incorrectly empty.
On the client, in defaultNewTiling, the extents are
computed by dividing muralWidth (muralHeight) by
tileCols (tileRows), using integer division, to get the
increment tileWidth (tileHeight). The extent starting
points are then integer multiples of those increments.
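A minimal sketch of that client-side computation, in Python with hypothetical names (the real code is C, in defaultNewTiling):

```python
def client_extents(mural_width, mural_height, tile_cols, tile_rows):
    # The increment is muralWidth / tileCols with integer division,
    # as in defaultNewTiling; extents start at multiples of it.
    tile_w = mural_width // tile_cols
    tile_h = mural_height // tile_rows
    extents = []
    for row in range(tile_rows):
        for col in range(tile_cols):
            x, y = col * tile_w, row * tile_h
            # The last column/row extends to the mural edge, so it
            # absorbs any remainder from the integer division.
            w = mural_width - x if col == tile_cols - 1 else tile_w
            h = mural_height - y if row == tile_rows - 1 else tile_h
            extents.append((x, y, w, h))
    return extents
```

With a 111-pixel-wide mural and 2 columns this yields extents starting at x = 0 and x = 55, the second one 56 pixels wide.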
On the servers, in fillBucketingHash, each extent is
tested to determine whether it intersects a hash
bucket and is therefore relevant to the current server.
This is done by comparing a computed (x,y) corner of
each bucket region against the corresponding corner of
each of the server's extents. The bucket-region corners
are computed as multiples of xinc and yinc, which are
taken from the size of the server's output image.
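A sketch of that server-side test under the same assumptions (hypothetical names; the real check lives in fillBucketingHash):

```python
def extent_bucketed(extent_x, extent_y, xinc, yinc, cols, rows):
    # Bucket-region corners are multiples of xinc/yinc, which come
    # from the size of this server's output image. An extent is
    # kept only if its corner lines up with some bucket corner.
    corners = [(c * xinc, r * yinc)
               for r in range(rows) for c in range(cols)]
    return (extent_x, extent_y) in corners
```

If xinc does not match the increment the client used, an extent corner can fall between bucket corners and the extent is silently dropped.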
Now if the image width is, say, 111, and there are
2 columns of tiles, then the tileWidth used on the client
to compute the extent corners is 111/2 = 55 (integer
division), so the extents start at x = 0 and x = 55.
However, the imagewindow of the second tile will be
111-55 = 56 wide, so xinc on that server will be 56. It
therefore decides the buckets start at 0 and 56, and no
bucket matches its extent, which starts at 55.
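The arithmetic of that failure, spelled out:

```python
mural_width, cols = 111, 2

# Client side: integer division gives the extent increment.
tile_width = mural_width // cols                       # 55
extent_starts = [c * tile_width for c in range(cols)]  # [0, 55]

# Server side (second tile): xinc comes from its imagewindow width.
xinc = mural_width - tile_width                        # 56
bucket_starts = [c * xinc for c in range(cols)]        # [0, 56]

# The second extent starts at 55, but no bucket corner is at 55,
# so the second server drops the extent and its tile comes up empty.
print(extent_starts[1] in bucket_starts)               # False
```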
I think the fix might be to send the tileWidth and
tileHeight the client used to the servers, rather than
deriving the increment from each server's imagewindow
size.
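A sketch of that proposed fix, assuming the client's increment gets forwarded (the parameter name client_tile_width is hypothetical):

```python
def bucket_starts(image_width, cols, client_tile_width=None):
    # Proposed fix (sketch): use the increment the client computed,
    # when available, so bucket corners land exactly on the extent
    # corners; fall back to the server's imagewindow width otherwise
    # (the current, broken behavior for uneven widths).
    inc = client_tile_width if client_tile_width is not None else image_width
    return [c * inc for c in range(cols)]
```

With client_tile_width = 55 the second server's buckets start at 0 and 55, so the extent at x = 55 is matched; without it they start at 0 and 56 and the extent is missed.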
Comment from user_id=983:
Which bucketing mode are you using? I think this problem
depends on the bucketing mode. See "uniform" vs.
"non-uniform" modes.