error on large file transfers?

2008-02-29 (updated 2013-02-19)
  • "s3cmd put" works up to 4.8 MB file; increasing filesize to 48 MB results in following error (Ubuntu 7.10):

    Traceback (most recent call last):
      File "/usr/local/bin/s3cmd", line 766, in <module>
        cmd_func(args)
      File "/usr/local/bin/s3cmd", line 193, in cmd_object_put
        response = s3.object_put_uri(real_filename, uri_final, extra_headers)
      File "/s3cmd-0.9.6/S3/S3.py", line 225, in object_put_uri
      File "/s3cmd-0.9.6/S3/S3.py", line 201, in object_put
      File "/s3cmd-0.9.6/S3/S3.py", line 377, in send_file
      File "/usr/lib/python2.5/httplib.py", line 707, in send
        self.sock.sendall(str)
      File "<string>", line 1, in sendall
    socket.error: (104, 'Connection reset by peer')

    I'm trying to transfer gigabyte-sized files.

    Thanks,

    - Davi

     
    • Michal Ludvig
      2008-03-01

      Hi,

      I can't reproduce that. I uploaded a Linux kernel tarball (44.5 MB) to S3 just a few minutes ago:

      [...]
      INFO: Sent 46731264 bytes (99 % of 46737783)
      INFO: Sent 46735360 bytes (99 % of 46737783)
      INFO: Sent 46737783 bytes (100 % of 46737783)
      File '/share/kernels/linux-2.6.24.tar.bz2' stored as s3://logix.cz-us/linux-2.6.24.tar.bz2 (46737783 bytes) [1 of 1]
      Public URL of the object is: http://logix.cz-us.s3.amazonaws.com/linux-2.6.24.tar.bz2

      It could be something in your network setup.
      Do you use a proxy? Perhaps the proxy limits the maximum upload size?
      Do you run over HTTP or HTTPS?
      Was the bucket created in the US or in Europe?

      Michal

       
    • Same problem here, versions 0.9.5 and 0.9.6, no proxy, regular HTTP.

      Any file over around 50 MB results in the same error the OP posted, 99.9% of the time.
      Oddly, running in debug mode (-d) makes it work OK about 80% of the time with no errors. The remaining 20% still fail.

       
      • Michal Ludvig
        2008-03-03

        What connection type and speed are you on?

        When it fails in debug mode, does it break after the same amount of uploaded data, or after the same elapsed time?

        I can't reproduce it here, and I was recently able to upload a file of over 350 MB, so it's pretty hard to guess what could be wrong. Perhaps it's uploading too fast, and when info() is called it slows down enough to avoid the issue.

        Try putting "time.sleep(0.2)" at line 379 in S3/S3.py, with "import time" near the beginning of the file. Does that help?
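
        In case the patch location isn't obvious, the idea is roughly the sketch below. This is not the actual send_file() from 0.9.6 (the chunk size and names here are made up); it just shows where the sleep goes in the send loop:

            import time

            def send_file(connection, src, size, chunk_size=8192):
                # Push the file to S3 in fixed-size chunks; this loop is
                # where the (104, 'Connection reset by peer') gets raised.
                while size > 0:
                    data = src.read(min(size, chunk_size))
                    connection.send(data)
                    time.sleep(0.2)   # the suggested pause between chunks
                    size -= len(data)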

         
        • Hi Michal,

          I added time.sleep(0.2) to S3.py as you suggested.  Upload rate dropped to ~21 KB/s, from ~1.6 MB/s.  After about 10 minutes of watching the 40 MB file trickle out, I decided to try a simultaneous upload using another tool (trial version of Bucket Explorer), while s3cmd continued to trickle to S3.  The second file (1.3 GB) successfully uploaded at ~1.6 MB/s without issue.  When it completed, upload rate returned to ~21 KB/s as s3cmd continued to trickle.  Upload completed successfully some minutes later.

          I commented out the "time.sleep" call and re-ran s3cmd on the 40 MB file.  This time, upload was successful.  Then I ran it on the 1.3 GB file, and got the same error I posted originally.

          s3cmd -v --> same error, at 0% of file uploaded
          s3cmd -vd --> same error, at 7% of file uploaded

          Uncomment "time.sleep(0.2)", modify it to "time.sleep(0.01)", s3cmd -vd --> runs at reduced upload speed (345 KB/s) and completes succesfully.

          I am on a university network.  My machine:
              - Dell Precision 690
              - two dual-core Xeon 5160 processors running at 3.0 GHz
              - 32 GB of RAM
              - Ubuntu 7.10

           
          • Whoops -- just to be clear, the above post (trying out time.sleep()) is from me, the OP.
            - Davi

             
    • Michal Ludvig
      2008-03-04

      Guys, can you test s3cmd 0.9.7-pre1 for me please?
      Download from here: http://tmp.logix.cz/s3cmd-0.9.7-pre1.tar.gz

      It should automatically lower the speed on a failed upload and retry. Let me know if it helps. I'll try to rework the upload routine from httplib to curl in the mid-term, but for now this "throttling" may be enough.
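
      For the curious, the logic is roughly along the lines of the sketch below (a simplified illustration, not the actual 0.9.7-pre1 code; the names and back-off constants are made up):

          import socket
          import time

          def upload_with_throttle(open_chunks, send_chunk, max_retries=5):
              # open_chunks() returns a fresh iterator over the file's chunks;
              # send_chunk(data) writes one chunk to the socket.
              throttle = 0.0
              for attempt in range(max_retries):
                  try:
                      for chunk in open_chunks():
                          send_chunk(chunk)
                          if throttle:
                              time.sleep(throttle)  # slow down between chunks
                      return True
                  except socket.error:
                      # connection reset: restart the upload at a lower speed
                      throttle = 0.01 if throttle == 0.0 else throttle * 5
              return False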

      Thanks

      Michal

       
      • Seems to work, thanks Michal.  The first time I tried it, it oscillated between 0.5 and 5 MB/s until I stopped watching it.  When I came back, it reported the file had transferred at an average of ~300 KB/s.  The second time I tried it, it throttled down to ~300 KB/s immediately.  Interestingly, using the 'unthrottled' version pegged out at 1 MB/s and did not crash.

        I would suggest throttling back to lower speeds a little more slowly, so as to soak up however much bandwidth is available.

        - Davi

         
    • dsbcpas
      2008-03-31

      Hi Michal,

      Just an update on how s3cmd 0.9.7-pre1 is functioning: very well. We are still at the US S3 site. Any problems resolve on the next day's sync.
      Here is a sample warning. I believe the file is likely in use elsewhere or changing during upload. Would that cause such a sync problem? I have no idea if it continues through the rest of the directory structure.

      WARNING: Upload of '/home/shared/USR/dave/Mail/Sent' failed (104, 'Connection reset by peer')
      WARNING: Retrying on lower speed (throttle=0.01)
      ERROR: S3 error: 403 (Forbidden): RequestTimeTooSkewed

      Scott

       
    • Relatedly, we should probably do something sane if you try to upload files >5 GB (S3's object size limit).
      Easy: refuse, i.e. skip that file with a warning/error message.
      Harder: some kind of auto split/reassemble functionality, e.g. turn the file into a 'directory' and append /1, /2, etc. on the end for each 5 GB piece, and if told to 'get' a dir like that, concatenate the pieces locally. A rough sketch follows.
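
      For the harder option, something along these lines (hypothetical helpers, not existing s3cmd code; only the 5 GB constant comes from S3's documented limit):

          import os

          PART_SIZE = 5 * 1024 ** 3  # S3's per-object size limit

          def iter_parts(path):
              # Yield (suffix, offset, length) for each piece of 'path', so a
              # 12 GB file maps to keys like 'file/1', 'file/2', 'file/3'.
              total = os.path.getsize(path)
              part, offset = 1, 0
              while offset < total:
                  yield (str(part), offset, min(PART_SIZE, total - offset))
                  part += 1
                  offset += PART_SIZE

          def reassemble(part_paths, dest):
              # 'get' side: concatenate downloaded pieces back into one file.
              out = open(dest, "wb")
              for part in part_paths:
                  src = open(part, "rb")
                  while True:
                      block = src.read(1024 * 1024)
                      if not block:
                          break
                      out.write(block)
                  src.close()
              out.close()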