I ran into problems when downloading large files (more than 78KB) or listing large directories. The culprit is DynamicBuffer.verifyBufferSize(). This method is supposed to ensure the buffer is big enough for the current write operation, but it only grows the buffer by a fixed quantum, once; if that single increase is still not enough, it proceeds anyway.
So I put in a loop that keeps increasing the buffer size until it is sufficient. The patch file is attached; I hope it can be incorporated into the next official release.
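Roughly, the idea is the following (a sketch only; the attached patch is authoritative, and the method signature and copy step here are assumptions, with the field names taken from the existing code quoted later in this thread):

    private void verifyBufferSize(int count) {
        int newSize = buf.length;
        // keep growing by the fixed quantum until the pending write fits
        while (count > (newSize - writepos)) {
            newSize += DEFAULT_BUFFER_SIZE;
        }
        if (newSize > buf.length) {
            byte[] tmp = new byte[newSize];
            System.arraycopy(buf, 0, tmp, 0, writepos); // preserve data already buffered
            buf = tmp;
        }
    }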
This bug has been described as relating to file size, but it is actually about how much data the server sends back at a time (i.e., it depends on the kind of server and/or how it is configured).
It appears most servers transmit data in chunks of 32k or less, in which case DynamicBuffer's default buffer size of 32k creates conditions where verifyBufferSize works reliably.
However, logging indicated that a client's CuteFTP server was sending data across in 64k blocks. In this case the bug in verifyBufferSize's logic induces a java.lang.ArrayIndexOutOfBoundsException: a single fixed 32k growth can still leave the buffer too small (for example, when some data is already buffered), so the write that follows overruns the array.
I haven't looked at the patch provided, though it sounds like it should work. The patch I put in just allocates the right amount in one shot. It does this by replacing lines 112-113:
    if (count > (buf.length - writepos)) {
        byte[] tmp = new byte[buf.length + DEFAULT_BUFFER_SIZE];

...with this:

    if (count > (buf.length - writepos)) {
        int growBy = count - (buf.length - writepos);
        byte[] tmp = new byte[buf.length + growBy];
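In context, with the surrounding copy step, the corrected block presumably looks something like the following (the arraycopy line is an assumption based on the field names above, not a quote from the source):

    if (count > (buf.length - writepos)) {
        // grow by exactly the shortfall so a single allocation is always enough
        int growBy = count - (buf.length - writepos);
        byte[] tmp = new byte[buf.length + growBy];
        System.arraycopy(buf, 0, tmp, 0, writepos); // preserve data already buffered
        buf = tmp;
    }

One trade-off: growing by exactly the shortfall leaves the buffer full after the write, so the very next write will trigger another allocation; rounding growBy up to a multiple of DEFAULT_BUFFER_SIZE would cut down on reallocations.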
Note that there are many ways to improve this code beyond these fixes, including capping the size of the buffer or avoiding buffering altogether by using NIO direct channel transfers, as in the sketch below.
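For example, a download path that bypasses the intermediate buffer entirely could look roughly like this (a sketch only; the ChannelDownload class, downloadTo method, and its parameters are hypothetical and not part of the library's API):

    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.channels.Channels;
    import java.nio.channels.FileChannel;
    import java.nio.channels.ReadableByteChannel;

    // Hypothetical helper, not part of the library: stream an FTP data connection
    // straight to a local file without an intermediate byte[] buffer.
    class ChannelDownload {
        static void downloadTo(InputStream dataConnection, String localPath) throws IOException {
            try (ReadableByteChannel src = Channels.newChannel(dataConnection);
                 FileChannel dest = new FileOutputStream(localPath).getChannel()) {
                long pos = 0;
                long n;
                // transferFrom may move fewer bytes than requested, so loop until end of stream
                while ((n = dest.transferFrom(src, pos, 64 * 1024)) > 0) {
                    pos += n;
                }
            }
        }
    }

Whether that fits the library's API is another question, but it sidesteps the resizing logic entirely for downloads.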
THIS PATCH IS IMPORTANT AND NEEDS TO BE RELEASED!
Thanks.