ERROR: S3 error: 500 (Internal Server Error)

dsbcpas
Created: 2008-02-28
Updated: 2013-02-19
  • dsbcpas
    2008-02-28

    ERROR: S3 error: 500 (Internal Server Error): InternalError 

    -Interesting, I have never seen one. What s3cmd command did you run? 

    She bombs every time on the following command:
    /usr/bin/s3cmd sync /etc s3://mybucketname/syncetc  2>&1 >>/var/log/amazons3.log;
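
    Worth checking in that command: with the redirections written as 2>&1 >>file, stderr is duplicated to the terminal before stdout is pointed at the log, so s3cmd's ERROR lines never land in the log. Putting the append first captures both streams, along these lines:

    /usr/bin/s3cmd sync /etc s3://mybucketname/syncetc >>/var/log/amazons3.log 2>&1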

    Due to the nature of this directory, could it be that she bombs if the file being worked on changes during upload?
    Or could it be related to the large number of files? Where are you storing all the md5sum data for each file? In the last run she stopped at file 117 of 1661; the 1661 is what remained after prior attempts on that directory, which contains 3617 files in all.
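
    If the failure does correlate with directory size, one workaround sketch (assuming the same /etc layout as above) is to drive sync one subdirectory at a time from a shell loop, so each run handles a smaller batch:

    for d in /etc/*/; do
        /usr/bin/s3cmd sync "$d" "s3://mybucketname/syncetc/${d#/etc/}" >>/var/log/amazons3.log 2>&1
    done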

    Though likely unrelated, sync does not always delete remote files; the log reads as follows:
    Summary: 0 local files to upload, 12 remote files to delete
    not-deleted s3://.....

    I've gotten around the problem by tarring the directory and uploading that; however, I've found that I can't use md5sums on stagnant tarred directories, on account that a simple change in the order of files within the tar compilation produces a different md5sum. Hence, syncing files rather than tarred directories is the most efficient approach, though it consumes more disk space.
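
    On the tar checksum point: tar records files in whatever order the filesystem returns them, so two archives of identical content can still differ. Feeding tar a sorted file list makes the ordering deterministic (metadata such as mtimes still affects the checksum); a sketch, with hypothetical paths:

    find /home/shared/ATB -type f | sort > /tmp/filelist
    tar -cf /tmp/atb.tar -T /tmp/filelist
    md5sum /tmp/atb.tar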

    Thank you for your assistance.
    Scott

     
    • dsbcpas
      2008-03-03

      Here is more detail regarding this error. I don't always receive a traceback; only 3 of the 11 errors included one.
      Any idea how to better trace this down?

      Example:

      ERROR: S3 error: 500 (Internal Server Error): InternalError
      Traceback (most recent call last):
        File "/usr/bin/s3cmd", line 767, in <module>
          cmd_func(args)
        File "/usr/bin/s3cmd", line 429, in cmd_sync
          response = s3.object_put_uri(src, uri, attr_header)
        File "/usr/bin/S3/S3.py", line 225, in object_put_uri
          return self.object_put(filename, uri.bucket(), uri.object(), extra_headers)
        File "/usr/bin/S3/S3.py", line 201, in object_put
          response = self.send_file(request, file)
        File "/usr/bin/S3/S3.py", line 377, in send_file
          conn.send(data)
        File "/usr/local/lib/python2.5/httplib.py", line 707, in send
          self.sock.sendall(str)
        File "/usr/local/lib/python2.5/httplib.py", line 1104, in send
          return self._ssl.write(stuff)
      socket.error: (32, 'Broken pipe')

       
    • dsbcpas
      2008-03-03

      PS: of the 11 errors, two were 400s with no traceback:
      ERROR: S3 error: 400 (Bad Request): RequestTimeout

       
    • dsbcpas
      2008-03-03

      Michal,
      S3cmd has been syncing various parts of our directory structure for over a day now. It appears that she routinely fails on directories with a large number of files (over 1,400). One directory had 7k files, of which s3cmd sync managed 1400 or so before dumping. Sometimes she exits after processing only 42 of 51 files, for example, and in another instance 14 of 136. It appears that the larger the number of files in the directory, the more likely she is to exit without completing.

      The Debug and Verbose parameters are not very helpful on account that I can't log their output for analysis. Nothing is logged until the sync command completes, which seems strange, and then only the last successful upload, omitting the file it failed on. Is there a way of capturing the file s3cmd is processing when it exits, to see if I can duplicate the result with the put function?
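
      One way to get at the in-flight file, assuming stderr is actually captured in the log (see the redirection note above): the last "Sending file" line written before the error names the file to retry. For example, with hypothetical paths:

      /usr/bin/s3cmd -v sync /home/shared s3://mybucketname/home >>/var/log/amazons3.log 2>&1
      grep 'Sending file' /var/log/amazons3.log | tail -1

      That file can then be re-sent on its own with put to see whether it fails reproducibly.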

      Scott

       
      • Michal Ludvig
        2008-03-04

        Hi, can you test s3cmd 0.9.7-pre1 from http://tmp.logix.cz/s3cmd-0.9.7-pre1.tar.gz please?

        It's got two improvements:
        1) it will retry a failed upload at a lower speed, which is more likely to succeed.
        2) it won't exit on a failed upload; instead the failing file is skipped over.

        Let me know if it helps.

        (and BTW s3cmd is "he" ;-)
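
        Until the new version is in place, an outer retry loop gives older versions similar resilience, since sync only re-uploads what's missing and so makes forward progress on each pass (paths hypothetical):

        for attempt in 1 2 3; do
            /usr/bin/s3cmd sync /home/shared s3://mybucketname/home >>/var/log/amazons3.log 2>&1 && break
            sleep 60   # brief pause before retrying
        done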

         
    • dsbcpas
      2008-03-04

      Michal

      Another thought: could the failure to sync a directory, with no error given, be the result of a URL size limitation?
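
      For reference, S3 caps object key names at 1024 bytes, so that is straightforward to rule out by listing any local paths long enough that the derived key (path plus any bucket prefix) could exceed the limit:

      find /home/shared -type f | awk 'length($0) > 1024'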

      Scott

       
    • dsbcpas
      2008-03-04

      s3cmd 0.9.7-pre1 report: still failing with no indication of the in-process file that caused the problem. The upload "appears" faster and much more stable, but still drops out from time to time with error 500.
      (The dated lines are my script's trap for exit 1.) Hope this info helps.
      Scott

      INFO: Sending file '/home/shared/ATB/WORK.TXT.wpd', please wait...
      INFO: Sent 583 bytes (100 % of 583)
      ERROR: S3 error: 500 (Internal Server Error): InternalError
      Tue Mar 4 09:05:01 EST 2008  S3SYNCUPLOAD FAILED SYNCING /home/shared/ATB INTO S3 AT synchomeshared:ATB
      INFO: ConfigParser: Reading file '/root/.s3cfg'
      INFO: Processing request, please wait...
      INFO: ConfigParser: Reading file '/root/.s3cfg'
      INFO: Processing request, please wait...
      INFO: ConfigParser: Reading file '/root/.s3cfg'
      INFO: Processing request, please wait...
      ERROR: S3 error: 500 (Internal Server Error): InternalError
      Tue Mar 4 09:05:09 EST 2008  S3SYNCUPLOAD FAILED SYNCING /home/shared/COMPUTAX INTO S3 AT synchomeshared:COMPUTAX
      INFO: ConfigParser: Reading file '/root/.s3cfg'
      INFO: Processing request, please wait...
      INFO: ConfigParser: Reading file '/root/.s3cfg'
      INFO: Processing request, please wait...
      INFO: ConfigParser: Reading file '/root/.s3cfg'
      INFO: Processing request, please wait...
      INFO: Sending file '/home/shared/LIB/RPG/TEST.EXE', please wait...
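
      For what it's worth, the dated FAILED lines above suggest a wrapper of roughly this shape (variable names hypothetical), logging a timestamped marker whenever s3cmd exits non-zero:

      /usr/bin/s3cmd sync "$SRC" "s3://$BUCKET/$DST" >>/var/log/amazons3.log 2>&1 ||
          echo "$(date)  S3SYNCUPLOAD FAILED SYNCING $SRC INTO S3 AT $BUCKET:$DST" >>/var/log/amazons3.log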

       
      • Michal Ludvig
        2008-03-05

        Can you create a new bucket in Europe and rerun your test? That would make sure that you're talking to a completely different host. I still believe the Internal Error is a problem on Amazon's side, not s3cmd's. Thanks, Michal

         
    • dsbcpas
      2008-03-05

      Sure! How do I do that? Set up a separate EU account?
      Scott

       
      • Michal Ludvig
        2008-03-05

        You don't need a separate EU account; simply create a bucket with:
        s3cmd mb --bucket-location=EU s3://your-new-EU-bucket-name
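
        Once the bucket exists, the quickest check is to point the failing sync at it, for example (bucket name and path hypothetical):

        s3cmd sync /home/shared/ATB s3://your-new-eu-bucket-name/ATB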

        The EU storage is slightly more expensive ($0.18 per GB instead of $0.15 for US-based buckets); I hope it won't ruin you :) See http://aws.amazon.com/s3 for pricing details.

        Michal

         
    • dsbcpas
      2008-03-05

      Question: could it relate to my configuration? I compiled Python 2.5 due to a 2.4 rpm conflict with 2.3 rpm dependencies. Maybe I should compile 2.4 instead?
      Also, I changed the first line of s3cmd to point to the compiled version: #!/usr/local/bin/python
      Thank you for working with me on this.
      Scott

       
      • Michal Ludvig
        2008-03-05

        It's unlikely to be related to your Python version, as the problems are intermittent. If there were something wrong with your Python you'd probably observe problems on every run. FYI, I regularly run s3cmd with both Python 2.4 and Python 2.5, and both work fine for me.

        Michal

         
    • dsbcpas
      2008-03-05

      Just caught the following traceback when she bombed with no other error messages. Appears to be a socket error.

      Scott

      INFO: Sent 7785 bytes (100 % of 7785)
      Traceback (most recent call last):
        File "/usr/bin/s3cmd", line 785, in <module>
          cmd_func(args)
        File "/usr/bin/s3cmd", line 437, in cmd_sync
          response = s3.object_put_uri(src, uri, attr_header)
        File "/usr/bin/S3/S3.py", line 227, in object_put_uri
          return self.object_put(filename, uri.bucket(), uri.object(), extra_headers)
        File "/usr/bin/S3/S3.py", line 204, in object_put
          response = self.send_file(request, file)
        File "/usr/bin/S3/S3.py", line 403, in send_file
          http_response = conn.getresponse()
        File "/usr/local/lib/python2.5/httplib.py", line 924, in getresponse
          response.begin()
        File "/usr/local/lib/python2.5/httplib.py", line 385, in begin
          version, status, reason = self._read_status()
        File "/usr/local/lib/python2.5/httplib.py", line 343, in _read_status
          line = self.fp.readline()
        File "/usr/local/lib/python2.5/httplib.py", line 1043, in readline
          s = self._read()
        File "/usr/local/lib/python2.5/httplib.py", line 999, in _read
          buf = self._ssl.read(self._bufsize)
      socket.error: (104, 'Connection reset by peer')
      Wed Mar 5 08:59:16 EST 2008  S3SYNCUPLOAD FAILED SYNCING /home/shared/emailarchive INTO S3 AT synchomeshared:emailarchive

       
    • Michal, this may be a clue: --bucket-location=EU does not work. The message is: ERROR: S3 error: 403 (Forbidden): SignatureDoesNotMatch

       
    • dsbcpas
      2008-03-09

      My error; one problem solved: the bucket name must be in lower case!
      I will test EU site and report back.
      Scott
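
      That would fit with how location-constrained buckets are addressed: they are reached in virtual-hosted (DNS) style, so the bucket name has to be a valid DNS host name, which among other things rules out upper-case letters. A rough check on a candidate name (the regex simplifies the full rules):

      echo "mybucketname" | grep -Eq '^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$' && echo OK || echo "not DNS-compliant"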

       
    • dsbcpas
      2008-03-09

      Michal, you are correct: uploaded to EU, NO PROBLEM! It appears to be something with the US site.
      Thank you for the EU suggestion. I'll continue to experiment with both. On to new problems.
      Scott

       
      • Michal Ludvig
        2008-03-11

        I'm glad to hear it's sorted. It might actually be some anomaly in your US bucket, as most other users don't get Internal Errors in the US either. To get cheaper service from S3 you may want to create a new bucket in the US and try again with that. Or if you're happy with EU, go ahead and use that ;-)

         
    • dsbcpas
      2008-03-16

      Hi Michal,
      I haven't deleted my US bucket and started over yet. It's on my to-do list, but I have 20 gig to move. I think I'll wait until after you convert to curl to see what happens. I am syncing daily, so anything missed is generally picked up the next day. It just does not like high volumes.

      The latest 500 error, completely different from the prior errors, came up in del mode; it follows, showing httplib.BadStatusLine.

      Thank you for your help.
      Scott

      ERROR: S3 error: 500 (Internal Server Error): InternalError
      Traceback (most recent call last):
        File "/usr/bin/s3cmd", line 785, in <module>
          cmd_func(args)
        File "/usr/bin/s3cmd", line 267, in cmd_object_del
          response = s3.object_delete_uri(uri)
        File "/usr/bin/S3/S3.py", line 240, in object_delete_uri
          return self.object_delete(uri.bucket(), uri.object())
        File "/usr/bin/S3/S3.py", line 221, in object_delete
          response = self.send_request(request)
        File "/usr/bin/S3/S3.py", line 344, in send_request
          http_response = conn.getresponse()
        File "/usr/local/lib/python2.5/httplib.py", line 924, in getresponse
          response.begin()
        File "/usr/local/lib/python2.5/httplib.py", line 385, in begin
          version, status, reason = self._read_status()
        File "/usr/local/lib/python2.5/httplib.py", line 349, in _read_status
          raise BadStatusLine(line)
      httplib.BadStatusLine