- priority: 5 --> 3
I guess that depends on whether the archive nodes
are going to store the files in a compressed format on
their local storage. I was intending that the nodes
could choose the compression format for themselves, but
that might well entail decompressing the whole file
into memory and then compressing a segment on the fly to
send to the endpoint.
As the server would have no way of knowing when
it might be asked for another part of the file, there
would be some complexity here.
We could put the uncompressed file and
recompressed segments into a memory cache and flush
them according to some policy.
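To make the cache idea concrete, here is a minimal sketch of that policy in Python. The class name `SegmentCache` and the byte-budget eviction rule are my assumptions, not anything decided yet: decompressed files and recompressed segments share one cache, and the least-recently-used entries are flushed once a size budget is exceeded.

```python
import zlib
from collections import OrderedDict

class SegmentCache:
    """Hypothetical sketch: cache whole decompressed files and
    recompressed segments, flushing least-recently-used entries
    once a byte budget is exceeded."""

    def __init__(self, max_bytes=64 * 1024 * 1024):
        self.max_bytes = max_bytes
        self.used = 0
        self.entries = OrderedDict()  # key -> bytes, oldest first

    def _put(self, key, data):
        if key in self.entries:
            self.used -= len(self.entries.pop(key))
        self.entries[key] = data
        self.used += len(data)
        while self.used > self.max_bytes and self.entries:
            _, evicted = self.entries.popitem(last=False)  # flush oldest
            self.used -= len(evicted)

    def get_segment(self, name, stored, offset, length):
        """Return a zlib-compressed segment of a file whose stored
        (zlib-compressed) bytes are passed in as `stored`."""
        seg_key = (name, offset, length)
        if seg_key in self.entries:
            self.entries.move_to_end(seg_key)  # mark recently used
            return self.entries[seg_key]
        file_key = (name, "raw")
        if file_key in self.entries:
            self.entries.move_to_end(file_key)
            raw = self.entries[file_key]
        else:
            raw = zlib.decompress(stored)  # whole file into memory
            self._put(file_key, raw)
        segment = zlib.compress(raw[offset:offset + length])
        self._put(seg_key, segment)
        return segment
```

A second request for the same segment is then served straight from the cache, which is the whole point: the server never knows whether another part of the file will be asked for next.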
The alternative would be to specify the
compression algorithm. Hmm. I am starting to lean
towards the former solution with the cache. I feel that
this needs much more thought.
Also, it is usually worth compressing for HTTP
because the network is normally the bottleneck. Jar
files are already zlib (DEFLATE) compressed. I wonder
how much further an average jar file can be compressed -
perhaps try with 100 jar files and a few compression
algorithms?
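That experiment is easy to sketch. The snippet below is one possible harness, not a decided design: it recompresses each file's bytes with three stdlib algorithms and averages the size ratios, so a ratio near 1.0 would mean the already-deflated jar content gains little from a second pass.

```python
import bz2
import lzma
import zlib
from pathlib import Path

def compression_ratios(path):
    """Recompressed-size / original-size for one file, per algorithm."""
    data = Path(path).read_bytes()
    return {
        "zlib": len(zlib.compress(data, 9)) / len(data),
        "bz2": len(bz2.compress(data, 9)) / len(data),
        "lzma": len(lzma.compress(data)) / len(data),
    }

def survey(paths):
    """Average the per-algorithm ratio over a collection of files,
    e.g. the proposed sample of 100 jar files."""
    totals = {}
    for p in paths:
        for algo, ratio in compression_ratios(p).items():
            totals[algo] = totals.get(algo, 0.0) + ratio
    return {algo: total / len(paths) for algo, total in totals.items()}
```

Pointing `survey()` at a directory of jars (`survey(Path("libs").glob("*.jar"))`) would give a quick first answer before committing to any algorithm.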