Download retry improvements for 0.2.2
Windows executable:
hash_5d6382880b09457958c6242fdf2f85a370339697
Source code changes: see attached, or
hash_e5828b718a287efd8a97375bd687d76c937e0565
Robust retry enhancement by Mycroftxxx, released
February 5, 2004
Changes based on MUTE_fileSharing-0.2.2-rc2_UnixSource,
released January 27, 2004
This version of MUTE should have a much better
download success rate than the release build.
I did a bunch of testing, and it looked to me like the
main reason most downloads failed was that the
downloading program simply gave up too easily. Looking
at the code, I was surprised to see that when a chunk
request fails, the release build does only a single retry
for that chunk; if that solitary retry also fails, you're
dead. The single retry is marked with the FRESH_ROUTE
flag, with the intention that it will erase the outdated
routing info that caused the chunk request to fail,
allowing a new, repaired route to be discovered. If the
retry also fails, the assumption is that the server system
has disconnected, so the download quits.
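In rough C++ sketch form (the ChunkRequest type and
sendChunkRequest call are stand-ins I made up, not the
actual MUTE source), the release behavior amounts to this:

// Sketch of the release-build retry policy described above.
// ChunkRequest and sendChunkRequest are hypothetical stand-ins.
struct ChunkRequest {
    int chunkNumber;
    bool freshRoute;   // asks nodes to discard cached routing info
    };

// Stand-in for the real network call; a real version would send
// the request over the MUTE mesh and wait for the chunk or a timeout.
bool sendChunkRequest( ChunkRequest &inRequest ) {
    (void)inRequest;
    return false;   // stub: pretend every request times out
    }

bool fetchChunk( int inChunkNumber ) {
    ChunkRequest request = { inChunkNumber, false };

    if( sendChunkRequest( request ) ) {
        return true;                // first attempt worked
        }

    // the one and only retry, flagged FRESH_ROUTE to wipe
    // the (presumed stale) routing info
    request.freshRoute = true;
    if( sendChunkRequest( request ) ) {
        return true;
        }

    return false;   // retry failed: assume the server is gone
    }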
However, I think the great majority of retries aren't
caused by routing failures; they're caused by dropped
messages from traffic congestion. Either the server or
some node along the route to the client gets temporarily
overloaded with traffic and drops some messages,
causing a retry. If the overload lasts long enough to
drop the retry request as well, the download fails. This
probably most often happens to popular servers with
several concurrent uploads running. In this case, the
FRESH_ROUTE flag on the retry can do more harm than
good, since it wipes out the valid routing info being used
by all the concurrent downloads, disrupting all the traffic
to the server and worsening the traffic overload by
forcing many broadcast messages. This disrupted
traffic can cause the other downloads to also drop
messages, causing them to send out retries with more
FRESH_ROUTE flags, etc.
I've changed the retry code to do up to 8 retries if
required, instead of just 1. The FRESH_ROUTE flag is
only sent on every 4th retry (retries #4 and #8). This is
enough retries to persist through any temporary traffic
overload, and it allows routine dropped messages to be
recovered without disrupting other downloads. If some
client does end up needing to issue a FRESH_ROUTE
command, other concurrent downloads that get disrupted
will respond with normal retries, which can benefit from
the FRESH_ROUTE just issued instead of piling on with
their own.
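Here is the same kind of sketch for the new schedule
(again with made-up stand-in names, not the actual patch
source):

// Sketch of the new retry schedule: up to 8 retries, with
// FRESH_ROUTE only on retries #4 and #8.
// ChunkRequest and sendChunkRequest are hypothetical stand-ins.
struct ChunkRequest {
    int chunkNumber;
    bool freshRoute;
    };

// Stand-in for the real network call; true means the chunk arrived.
bool sendChunkRequest( ChunkRequest &inRequest ) {
    (void)inRequest;
    return false;   // stub: pretend every request times out
    }

const int MAX_RETRIES = 8;

bool fetchChunk( int inChunkNumber ) {
    ChunkRequest request = { inChunkNumber, false };

    if( sendChunkRequest( request ) ) {
        return true;                // first attempt worked
        }

    for( int retryNumber = 1; retryNumber <= MAX_RETRIES; retryNumber++ ) {
        // plain retries recover routine dropped messages without
        // wiping routes that concurrent downloads are still using;
        // FRESH_ROUTE only goes out on retries #4 and #8
        request.freshRoute = ( retryNumber % 4 == 0 );

        if( sendChunkRequest( request ) ) {
            return true;
            }
        }

    return false;   // all 8 retries failed: now assume a disconnect
    }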
My testing so far has been very positive. I just finished
an unbroken stream of 22 successful downloads. The
23rd apparently failed because the server system
disconnected; I couldn't find the file anymore, even
after several searches. Not bad, especially since this
works without any network abuse or added traffic
burden. The program just doesn't give up so easily.
Note: This source release contains only the changes
listed above, so the devs can examine and test them
selectively, without having to sort them out from other,
unrelated changes. If you want to combine these changes
with others, like my upload status stuff, you'll need to
merge the files by hand.
0.2.2 robust retry changes (source)
Logged In: NO
That is the problem with P2P at the router level; no one
understands this, because they have no clue about routing
problems. It's the hammering a server gets when it has a
popular file. So it is important to get swarming going
soon, so the burden of popular files is spread out as
fast as possible, plus some way to make sure busy servers
are not hammered. A simple back-off timer would do for
now: back off 10 more seconds on each try, and always
send the requester away with info on who else has the
same file chunk.
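A rough sketch of such a back-off timer (sendChunkRequest
here is a made-up stand-in, not a MUTE function):

// Sketch of the suggested linear back-off: wait 10 more seconds
// after each failed try, so a busy server isn't hammered.
#include <unistd.h>   // for sleep()

// Hypothetical stand-in for the real chunk request call.
bool sendChunkRequest( int inChunkNumber ) {
    (void)inChunkNumber;
    return false;   // stub: pretend every request fails
    }

bool fetchChunkWithBackoff( int inChunkNumber, int inMaxTries ) {
    for( int tryNumber = 0; tryNumber < inMaxTries; tryNumber++ ) {
        if( sendChunkRequest( inChunkNumber ) ) {
            return true;
            }
        // 10s after the 1st failure, 20s after the 2nd, and so on
        sleep( 10 * ( tryNumber + 1 ) );
        }
    return false;
    }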
Logged In: NO
tried to install, got the following: WXMSW242.DLL not found