Arvid -- thanks so much for your response (see below) on the PP algorithm.
Per your instructions, I am posting this to the mailing list as well.
For everyone else: the piece-picking issue and the thread on slow
downloads prompted me to do some very basic network testing on my own
home ISP (Time Warner/Roadrunner here in the US).
From what I know of my service, we are rate-limited to around 40 KB/sec
upload and 512 KB/sec download. I have seen faster downloads, but the
upload is definitely throttled.
What is even more surprising (but makes sense) is that if I attempt to
copy multiple files over multiple connections using secure copy ("scp"),
my total upload rate is lower than the single-connection rate. Again,
this is absent any BitTorrent/libtorrent protocol -- the only protocol
involved is the much simpler SCP, which is largely dependent on TCP's
behavior -- which is where I think the problem lies.
What I suspect is that when my ISP throttles my TCP connections, it
probably ends up "picking" the connection that has garnered the most
bandwidth and just forces packet drops at the router using packet-shaping
technology -- (we know, for example, that Shaw in Canada has been doing
this specifically to BitTorrent traffic -- see
http://www.dslreports.com/shownews/56419 from 1+ year ago).
And when TCP loses packets, it quickly shrinks its congestion window
(the window of un-acked packets), which by extension throttles that
connection's throughput.
So, when you add multiple TCP connections, their respective congestion
windows are in constant flux -- unless throttled at the app level (i.e. I
understand the BitTorrent protocol does choking to prevent this behavior).
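To make the loss-driven throttling above concrete, here is a hedged sketch (my own toy model, not anything from libtorrent or a real TCP stack) of TCP's additive-increase/multiplicative-decrease behavior: the window grows by one segment per round trip and is halved on a drop, which is why forced drops at a shaping router cut throughput so quickly.

```python
# Toy AIMD model (assumption: one additive step per RTT, halving on loss).
# This is an illustration of the mechanism discussed above, not real TCP.

def aimd(rounds, drop_rounds, cwnd=1.0):
    """Return the congestion window after each round trip.

    drop_rounds: set of round indices at which a packet drop occurs.
    """
    history = []
    for r in range(rounds):
        if r in drop_rounds:
            cwnd = max(1.0, cwnd / 2)  # multiplicative decrease on loss
        else:
            cwnd += 1.0                # additive increase per RTT
        history.append(cwnd)
    return history

# A single forced drop at round 10 halves the sender's window:
print(aimd(12, {10}))
```

With several connections sharing one shaped pipe, each one is repeatedly knocked back like this at different times, which is the "constant flux" described above.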
But this raises a larger question about the benefit of many simultaneous
TCP connections for P2P download in the face of ISP rate limiting: why do
it at all? That is, if a single connection between a peer pair can
consume the full upload rate, why not let it? Why the need to have
multiple peers all fighting over an already-throttled pipe?
Any insights would be most appreciated.
---------- Forwarded message ----------
Date: Fri, 13 Jan 2006 03:32:12 +0100
From: Arvid Norberg <c99ang@...>
Subject: Re: Piece Picker Algorithm?
On Jan 12, 2006, at 19:16, chrisc@... wrote:
> I am trying to understand the piece picker algorithm for an abstracted
> simulation model of LT. Do you have any docs on this?
Unfortunately the source code is the only documentation (apart from some
possible irc logs or posts I've made).
> More specifically, I wanted to confirm that you allow a "piece" to be split
> at the block level and perform "partial piece" sharing. Here, there are up to
> 256 blocks within a single piece. If I do the math correctly, assume you have
> a 1 GB torrent file -- each piece is 256KB by default, and so that means each
> block is 1KB. Thus, you have 1 million blocks that you can share.
It is correct that libtorrent will keep track of pieces at block level (a
block is a part of a piece; many blocks make up a piece). And for various
reasons, I decided that 256 blocks would be enough (that's a max piece size of
4 MiB); this might change in the future, though.
The thing is that it's not possible to actually share anything from a piece
before the entire piece has been downloaded, for two reasons:
1) the data cannot be verified against the piece-hash before the entire piece
has been downloaded.
2) The protocol only allows clients to announce that they have whole pieces.
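Reason (1) can be illustrated with a short sketch (the function name and shape are mine, not libtorrent's): a torrent carries one SHA-1 hash per whole piece, so a partially downloaded piece simply cannot be checked.

```python
# Hedged illustration: per-piece hashing means a piece is verifiable only
# once every one of its blocks has arrived. Names here are hypothetical.
import hashlib

def piece_complete_and_valid(blocks, expected_sha1):
    """blocks: list of byte strings, with None for blocks not yet downloaded.
    Returns True only for a complete piece matching the torrent's hash."""
    if any(b is None for b in blocks):
        return False  # a partial piece cannot be verified at all
    piece = b"".join(blocks)
    return hashlib.sha1(piece).hexdigest() == expected_sha1

good = hashlib.sha1(b"abcdef").hexdigest()
print(piece_complete_and_valid([b"abc", b"def"], good))  # complete: verifiable
print(piece_complete_and_valid([b"abc", None], good))    # partial: rejected
```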
In your example, a 1 GiB torrent with a piece size of 256 kiB, you'd have 4096
pieces, and that's it. The blocks are just the size of the data chunks that you
download from a client at a time. i.e. you cannot request to download an entire
piece at a time, you have to ask for it in small portions, blocks.
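The arithmetic above can be checked directly. The 16 KiB block size below is an assumption on my part (it is the conventional request size in BitTorrent clients, and matches 256 blocks filling a 4 MiB max piece); the piece count is what the numbers in the example give.

```python
# Worked numbers for the 1 GiB example above.
# Assumption: 16 KiB request (block) size, the conventional BitTorrent value.
TORRENT_SIZE = 1 * 1024**3   # 1 GiB
PIECE_SIZE   = 256 * 1024    # 256 KiB, the default discussed above
BLOCK_SIZE   = 16 * 1024     # 16 KiB (assumed request size)

pieces = TORRENT_SIZE // PIECE_SIZE
blocks_per_piece = PIECE_SIZE // BLOCK_SIZE
print(pieces, blocks_per_piece)  # 4096 pieces, 16 blocks per piece
```

Only the 4096 pieces are announced on the wire; the blocks exist purely as request-sized portions of a piece.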
> Now, do you allow out-of-order block-level sharing? That is, do you
> keep track of which blocks within a piece have been obtained?
I do keep track of that in order to avoid downloading the same block twice, but
as I said, it cannot be shared until the entire piece has been downloaded.
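A minimal sketch of that bookkeeping (my own class, not libtorrent's actual data structure): remember which blocks of a partial piece have arrived so no block is requested twice, and treat the piece as shareable only once every block is in.

```python
# Hypothetical per-piece block bookkeeping, illustrating the idea above.

class PartialPiece:
    def __init__(self, num_blocks):
        self.done = [False] * num_blocks  # one flag per block

    def next_block(self):
        """Index of a block still needed, or None if the piece is complete.
        Asking here (rather than re-requesting blindly) avoids duplicates."""
        for i, have in enumerate(self.done):
            if not have:
                return i
        return None

    def receive(self, index):
        self.done[index] = True

    def complete(self):
        return all(self.done)

p = PartialPiece(4)
p.receive(p.next_block())  # block 0 arrives
p.receive(p.next_block())  # block 1 arrives
print(p.complete())        # blocks 2 and 3 still missing -> False
```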
> Last, when sharing pieces and/or blocks, how is a piece selected in LT? The
> original BT docs call for "random" selection to get a greater diversity of
> which pieces get distributed, thus enhancing sharing and overall better
> file sharing.
The way it's done in libtorrent is that a piece is selected randomly from the
set of pieces that are most rare, i.e. from all the neighbours (peers) the
client is connected to, it counts the occurrence of each piece, and those
that the fewest peers have are prioritized. But within that set it's still
random. I think the mainline client does something similar, actually. It is
referred to as rarest-first.
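The selection rule described here can be sketched as follows (a toy version under my own names, not libtorrent code): count how many connected peers advertise each piece we lack, keep only the rarest, and pick uniformly at random among them.

```python
# Rarest-first piece selection in miniature (hypothetical helper).
import random
from collections import Counter

def pick_piece(peer_bitfields, have):
    """peer_bitfields: list of sets of piece indices each peer advertises.
    have: set of piece indices we already hold.
    Returns a randomly chosen rarest missing piece, or None."""
    counts = Counter()
    for bf in peer_bitfields:
        for piece in bf - have:
            counts[piece] += 1
    if not counts:
        return None
    rarest = min(counts.values())
    candidates = [p for p, c in counts.items() if c == rarest]
    return random.choice(candidates)  # random *within* the rarest set

peers = [{0, 1, 2}, {1, 2}, {2}]
print(pick_piece(peers, have=set()))  # only one peer has piece 0 -> picks 0
```

Randomizing within the rarest set spreads different rare pieces across downloaders instead of having everyone chase the same one.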
> There seem to be some interesting dynamics that come into play here --
> i.e., should a seeder give out each piece/block at least once before
> repeating any piece?
Well, when a torrent is first seeded, before the swarm has been built up, it's
probably wise to use a so-called super seeder, which will basically force the
swarm to share with each other before it continues to upload.
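The super-seeding idea can be sketched roughly like this (a simplified model of my own; real super-seed implementations differ in detail): the initial seed offers each peer only one fresh piece, and withholds the next until it sees that piece announced back from the swarm.

```python
# Hedged sketch of super-seeding: names and structure are hypothetical.

def super_seed_offer(unoffered, confirmed, offered_to_peer):
    """Decide which single piece to advertise to one peer.

    unoffered: set of pieces not yet handed to any peer.
    confirmed: pieces we have since seen other peers announce.
    offered_to_peer: the piece already offered to this peer, or None."""
    if offered_to_peer is not None and offered_to_peer not in confirmed:
        return offered_to_peer   # wait until the swarm echoes it back
    if unoffered:
        return unoffered.pop()   # hand out a fresh, undistributed piece
    return None                  # everything is already circulating

offer = super_seed_offer({3, 7}, confirmed=set(), offered_to_peer=None)
print(offer)  # one of the fresh pieces
```

The effect is what the text describes: peers must trade with each other before the seed spends upload bandwidth repeating a piece.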
> Any information would be most helpful!,
I would prefer if you would direct questions like these to the mailing list in
the future.