
#4 resume-function

Status: open
Owner: nobody
Labels: None
Priority: 5
Updated: 2004-01-25
Created: 2004-01-25
Creator: Anonymous
Private: No

Hello,
I've added a resume function to the CVS code.
A test with some files was successful (right hash after download).

The attached 'resume.patch' is the output of 'cvs diff -u'.
I don't know how to apply this patch to the files, so I've added
the hashes of the changed files:

DownloadItem.cpp
4e65f0ea29e09d2503c6dffc9bf14a47f66112ce

DownloadItem.h
c93a94815becba66ab23fcdb73cb2be7c0479b31

fileShare.cpp
42cfdfe701db52eb5305015198fe0cfd04167b56

fileShare.h
41bd996d0313b01284ca18beb6d74214b414d85f

dread

Discussion

  • Nobody/Anonymous

    Logged In: NO

    I forgot to upload the attached file.
    Here is the hash of 'resume.patch':
    hash_efbad55cbf318ce4dcb962adeb10c9e43d514152

    dread

     
  • Jason Rohrer

    Jason Rohrer - 2004-02-01

    Logged In: YES
    user_id=61805

    Thanks for working on this.

    Looking at the code, I notice that it only supports resuming individual downloads with a "Resume" button. This is certainly useful, especially in v0.2.1, but it is not really a full-featured resume function.

    What if you download the file again from a different person? MUTE should notice this and only DL the portion that it needs. Also, if you download a file with a different name (but the same SHA1 hash), MUTE should also be able to resume. This should all happen correctly across different MUTE sessions (for example, if MUTE crashes), so all of the status for a particular incoming file must be saved to disk.

    I think that 0.2.2 makes DLs reliable enough that a per-DL resume isn't really as necessary as it used to be.

    Since you seem to be using 0.2.2, you can let me know: how often do you end up using your "Resume" button?
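
Neither resume.patch nor MUTE's internal classes are reproduced in this thread, so the following is only a minimal sketch of the "keyed by SHA1 hash, saved to disk" behavior described above. Every name in it (PartialDownloadState, the ".resume" sidecar file, the chunk layout) is hypothetical and is not taken from MUTE's actual code.

    // Hypothetical sketch only: nothing here comes from MUTE's
    // DownloadItem / fileShare code.
    #include <cstdio>
    #include <string>
    #include <vector>

    // State for one incoming file, keyed by its SHA1 hash so a resume can
    // match the same file coming from a different host or under a
    // different file name.
    struct PartialDownloadState {
        std::string sha1Hash;        // identifies the file, not its name or source
        unsigned long chunkSize;     // bytes per chunk
        std::vector<char> haveChunk; // 1 if that chunk is already on disk

        // Persist to a small sidecar file ("<hash>.resume") so the state
        // survives a crash or a restart of MUTE.
        bool save( const std::string &dir ) const {
            std::string path = dir + "/" + sha1Hash + ".resume";
            FILE *f = fopen( path.c_str(), "w" );
            if( f == NULL ) return false;
            fprintf( f, "%lu %lu\n", chunkSize, (unsigned long)haveChunk.size() );
            for( size_t i = 0; i < haveChunk.size(); i++ )
                fputc( haveChunk[i] ? '1' : '0', f );
            fclose( f );
            return true;
        }

        bool load( const std::string &dir, const std::string &hash ) {
            std::string path = dir + "/" + hash + ".resume";
            FILE *f = fopen( path.c_str(), "r" );
            if( f == NULL ) return false;
            unsigned long numChunks = 0;
            if( fscanf( f, "%lu %lu", &chunkSize, &numChunks ) != 2 ) {
                fclose( f );
                return false;
            }
            fgetc( f );  // consume the newline after the header
            sha1Hash = hash;
            haveChunk.assign( numChunks, 0 );
            for( unsigned long i = 0; i < numChunks; i++ )
                haveChunk[i] = ( fgetc( f ) == '1' );
            fclose( f );
            return true;
        }

        // Index of the first chunk still missing, or -1 if complete.
        long firstMissingChunk() const {
            for( size_t i = 0; i < haveChunk.size(); i++ )
                if( !haveChunk[i] ) return (long)i;
            return -1;
        }
    };

With a record like this, "resume" reduces to "search for sha1Hash, then request firstMissingChunk() from whoever answers", regardless of which host or file name the download originally came from.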

     
  • Nobody/Anonymous

    Logged In: NO

    Some input from someone who's uploading to a lot of people:
    I'm only seeing people's downloads complete successfully
    about 10-15% of the time. Usually they give up partway
    through the download. Some of these failed downloads may
    have paused for a while and then restarted 3 or 4 different
    times, so you can see the timeout code working, but they
    eventually give up before completing. The new timeout code
    seems to help, but still isn't persistent enough.

    Sometimes this is their own fault; they start 10 downloads at
    once and choke the upstream. But often it's just caused by
    heavy demand; they only started one, but so did 15 other
    people, choking the upstream just as badly.

    What I'd love to see would be fairly simple: if a download
    finally times out and fails, it automatically does a search for
    the hash_123... of the file. If it finds a match, it just
    resumes the download with the next needed chunk (maybe
    pick a source randomly from the list if there are multiple). If
    it doesn't find a match, wait 10 minutes and try again. Never
    quit until cancelled or completed. This would recover from
    sources that quit and restart with a new ID, or wait out
    periods of heavy congestion, or maybe it would just find
    another copy of the same file on another system (which
    would be great: automatic use of mirrors). If the user has
    30 downloads going, it should probably rotate the re-
    searching timeouts between them all; wait 10 minutes,
    search for #1; if not found, wait 10 minutes, search for #2,
    etc. to keep from flooding the net with searches.

    To me the main thing is that it should be automatic and never
    give up until cancelled. As long as at least one source is
    there, and you keep trying, you should be guaranteed to
    eventually get the whole file.

    It would be great if partial files were saved across sessions,
    and manually starting the same file from somewhere else
    merged into the existing download, etc. but first I think
    downloads should be stubbornly persistent enough to be able
    to count on them working every time if at all possible. We
    can probably wait a while for the rest of the bells & whistles.

    <obligatory fanboy gushing> By the way, the MUTE protocol
    concept is very cool. Super elegant and pretty damn evil-
    proof. Thanks.
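
Nothing below comes from MUTE either; searchByHash, resumeFromSource, and DownloadSlot are made-up stand-ins, sketched only to illustrate the retry-and-rotate policy proposed in the comment above (re-search one stalled download per interval, resume from a randomly chosen match, and never give up until the user cancels).

    // Illustrative stand-ins only; these are not MUTE functions.
    #include <cstdlib>
    #include <string>
    #include <vector>

    struct DownloadSlot {
        std::string sha1Hash;  // searched on the net as "hash_<sha1>"
        bool stalled;          // true once the transfer has timed out
    };

    // Stand-in for the real network layer: search for the hash and return
    // the IDs of hosts that answered.
    std::vector<std::string> searchByHash( const std::string &hashQuery ) {
        return std::vector<std::string>();  // stub
    }

    // Stand-in: ask sourceID for the next chunk that d still needs.
    void resumeFromSource( DownloadSlot &d, const std::string &sourceID ) {
    }

    // Called once per retry interval (the comment above suggests 10
    // minutes).  Only one stalled download is re-searched per call,
    // rotating through the list, so 30 stalled downloads do not flood
    // the net with 30 simultaneous searches.
    void retryOneStalledDownload( std::vector<DownloadSlot> &downloads,
                                  size_t &nextToRetry ) {
        if( downloads.empty() ) return;

        for( size_t tried = 0; tried < downloads.size(); tried++ ) {
            DownloadSlot &d = downloads[ nextToRetry ];
            nextToRetry = ( nextToRetry + 1 ) % downloads.size();

            if( !d.stalled ) continue;

            std::vector<std::string> sources =
                searchByHash( "hash_" + d.sha1Hash );

            if( !sources.empty() ) {
                // pick one source at random and continue from the next
                // needed chunk
                resumeFromSource( d, sources[ rand() % sources.size() ] );
                d.stalled = false;
            }
            // if nothing answered, the download stays stalled and gets
            // another turn on a later call; it is never dropped unless
            // the user cancels it
            return;  // at most one search per interval
        }
    }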

     
  • Nobody/Anonymous

    Logged In: NO

    A user can use the program "patch" to take a diff file and
    apply it. Also, if you read the man page (or search Google), you
    will find that you can patch against a particular date, for
    example if you only want to patch against the last released
    code version.

    From "man patch"

    patch takes a patch file patchfile containing a difference
    listing produced by the diff program and applies those
    differences to one or more original files, producing
    patched versions.

     
