This bug has been incorrectly reported, from about 2004 on, as a problem with transferring large files, and is related to patch 1740290.
It has been described as relating to file size, but it is actually about how much data the server sends back at a time (i.e., the kind of server and/or how it is configured).
So in practice, this bug is 100% reproducible if you try to transfer files larger than 64k from the right kind of server.
It appears most servers transmit data in chunks of 32k or less, in which case DynamicBuffer's default buffer size of 32k creates conditions where verifyBufferSize works reliably.
However, logging indicated that when connected to a client's CuteFTP server, the server was sending data across in 64k blocks. In that case, the bug in verifyBufferSize's logic induces a java.lang.ArrayIndexOutOfBoundsException.
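To make the failure mode concrete (the 32k and 64k figures come from the default buffer size and the observed CuteFTP block size; the 20k write position is just an assumed example): suppose the 32k buffer already holds 20k of data, leaving 12k free, and a 64k block arrives. The old code grows the buffer by only one DEFAULT_BUFFER_SIZE (32k, per the excerpt below) to 64k total, which still leaves only 44k of room past the write position, so copying the 64k block runs off the end of the array and triggers the exception.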
I haven't looked at the patch provided in 1740290, though it sounds like it should work. The patch I put in just allocates the right amount in one shot. It does this by replacing lines 112-113:
if (count > (buf.length - writepos)) {
    byte[] tmp = new byte[buf.length + DEFAULT_BUFFER_SIZE];

... with this:

if (count > (buf.length - writepos)) {
    int growBy = count - (buf.length - writepos);
    byte[] tmp = new byte[buf.length + growBy];
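For completeness, here is a hypothetical stand-alone sketch (not part of the library; the 20k write position and 64k block size are assumed purely for illustration) showing why growing by the exact shortfall accommodates the 64k blocks described above:

public class GrowBySketch {
    static final int DEFAULT_BUFFER_SIZE = 32 * 1024;

    public static void main(String[] args) {
        byte[] buf = new byte[DEFAULT_BUFFER_SIZE];
        int writepos = 20 * 1024;               // 20k already buffered (assumed)
        byte[] incoming = new byte[64 * 1024];  // a 64k block, as seen from CuteFTP
        int count = incoming.length;

        if (count > (buf.length - writepos)) {
            // Grow by exactly the shortfall instead of a fixed DEFAULT_BUFFER_SIZE.
            int growBy = count - (buf.length - writepos);
            byte[] tmp = new byte[buf.length + growBy];
            System.arraycopy(buf, 0, tmp, 0, writepos);
            buf = tmp;
        }
        // With the old fixed +32k growth, buf would only be 64k here and this copy
        // would overflow, throwing the ArrayIndexOutOfBoundsException described above.
        System.arraycopy(incoming, 0, buf, writepos, count);
        System.out.println("copied " + count + " bytes into a " + buf.length + "-byte buffer");
    }
}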
Note that there are a number of ways to improve this code beyond this fix, including capping the size of the buffer or avoiding buffering altogether with NIO direct channel transfers (see the sketch below).
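For reference, a rough sketch of what an NIO-based transfer might look like (hypothetical names, not from the library; it assumes the data connection is available as a plain InputStream and the bytes are written straight to a local file, so no growable byte[] is needed at all):

import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.channels.Channels;
import java.nio.channels.FileChannel;
import java.nio.channels.ReadableByteChannel;

public class NioTransferSketch {
    // Copies everything from the data-connection stream directly into a file,
    // letting the channel move the bytes instead of buffering them in the client.
    static void download(InputStream dataConnection, String localPath) throws IOException {
        ReadableByteChannel src = Channels.newChannel(dataConnection);
        FileChannel dest = new FileOutputStream(localPath).getChannel();
        try {
            long position = 0;
            long transferred;
            // transferFrom may move fewer bytes than requested, so loop until the
            // blocking source reaches end-of-file (indicated by a return of 0).
            while ((transferred = dest.transferFrom(src, position, 64 * 1024)) > 0) {
                position += transferred;
            }
        } finally {
            dest.close();
            src.close();
        }
    }
}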
THIS PATCH IS IMPORTANT AND NEEDS TO BE RELEASED!
Thanks.
DynamicBuffer.java