While I was sending the last post, s3cmd generated the following error.  I suspect this isn't the cause of my usual problem but rather a one-off, as I have only seen it in the logs once before.

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    An unexpected error has occurred.
    Please report the following lines to:
   s3tools-bugs@lists.sourceforge.net
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Problem: IOError: (2, 'No such file or directory')
S3cmd:   0.9.9.91

Traceback (most recent call last):
  File "/usr/local/bin/s3cmd", line 1736, in <module>
    main()
  File "/usr/local/bin/s3cmd", line 1681, in main    cmd_func(args)
  File "/usr/local/bin/s3cmd", line 1068, in cmd_sync
    return cmd_sync_local2remote(args)
  File "/usr/local/bin/s3cmd", line 984, in cmd_sync_local2remote
    local_list, remote_list, existing_list = _compare_filelists(local_list, remote_list, True)
  File "/usr/local/bin/s3cmd", line 762, in _compare_filelists
    src_md5 = Utils.hash_file_md5(src_list[file]['full_name'])
  File "/usr/local/lib/python2.6/dist-packages/S3/Utils.py", line 188, in hash_file_md5
    f = open(filename, "rb")
IOError: [Errno 2] No such file or directory: '/home/txoof/.fetchmail.pid'

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    An unexpected error has occurred.
    Please report the above lines to:
   s3tools-bugs@lists.sourceforge.net
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
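
Looking at the traceback, my guess is that s3cmd built its local file list and then fetchmail removed its pid file before the MD5 pass reached it, so the open() failed.  If that's right, the simplest workaround is probably to add '.fetchmail.pid' (or '*.pid' in general) to /etc/s3cmd/exclude/etc.exclude so transient files are skipped entirely.  The alternative would be patching hash_file_md5() in Utils.py to tolerate files that vanish mid-run; here is a minimal, untested sketch against what the traceback shows for 0.9.9.91 (returning None and having the caller skip such files is my own idea, not upstream behavior):

import errno
from hashlib import md5

def hash_file_md5(filename):
    # Hash the file in chunks; return None if it vanished between
    # the directory scan and this read (e.g. a transient pid file).
    try:
        f = open(filename, "rb")
    except IOError, e:
        if e.errno == errno.ENOENT:
            return None   # caller would have to treat None as "skip this file"
        raise
    h = md5()
    try:
        while True:
            chunk = f.read(32768)
            if not chunk:
                break
            h.update(chunk)
    finally:
        f.close()
    return h.hexdigest()

The exclude approach seems safer since it doesn't touch the installed package, so I'll try that first.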

--
Aaron Ciuffo

As I was walkin' - I saw a sign there
And that sign said 'private property'
But on the other side it didn't say nothin!
Now that side was made for you and me!
-This Land Is Your Land, Woody Guthrie


On Wed, Dec 2, 2009 at 8:11 AM, Aaron Ciuffo <aaron.ciuffo@gmail.com> wrote:
I'm using s3cmd to sync around 100K files (~80 GB) to S3.  Currently I run it from a script that fires hourly and restarts the job if it has crashed (it checks for a currently running s3cmd process before starting a new one).  I'm running with the -v option and logging the output.  I see that after around 100-400 files it just craps out and dies with an exit code of 1.
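
For reference, the wrapper amounts to roughly this (a simplified sketch: the pgrep pattern is illustrative, and the real script also logs the -v output):

#!/usr/bin/env python
# Hourly cron job: restart the sync unless one is already running.
import subprocess, sys

# pgrep -f matches against the full command line; exit status 0 = found.
devnull = open("/dev/null", "w")
if subprocess.call(["pgrep", "-f", "bin/s3cmd .* sync"], stdout=devnull) == 0:
    sys.exit(0)   # a previous run is still going; try again next hour

sys.exit(subprocess.call(
    ["/usr/local/bin/s3cmd", "-r", "-p", "--delete-removed", "-v",
     "--exclude-from", "/etc/s3cmd/exclude/etc.exclude",
     "-c", "/etc/s3cmd/s3cfg",
     "sync", "/etc/", "s3://<MYBUCKET>/etc/"]))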

Any ideas what I should try to resolve this?  I've included my configuration file below and the command line that I use to start it.

I haven't tried running with the --debug option on large jobs because it generates HUGE logs (over 100 MB); if that would be helpful, I can do that.

command line:
 /usr/local/bin/s3cmd -r -p --delete-removed -v --exclude-from /etc/s3cmd/exclude/etc.exclude -c /etc/s3cmd/s3cfg sync /etc/ s3://<MYBUCKET>/etc/

Configuration file:
[default]
access_key = <SNIP>
acl_public = False
bucket_location = US
cloudfront_host = cloudfront.amazonaws.com
cloudfront_resource = /2008-06-30/distribution
default_mime_type = binary/octet-stream
delete_removed = False
dry_run = False
encoding = UTF-8
encrypt = False
force = False
get_continue = False
gpg_command = /usr/bin/gpg
gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_passphrase = <SNIP>
guess_mime_type = False
host_base = s3.amazonaws.com
host_bucket = %(bucket)s.s3.amazonaws.com
human_readable_sizes = True
list_md5 = False
preserve_attrs = True
progress_meter = True
proxy_host =
proxy_port = 0
recursive = False
recv_chunk = 4096
secret_key = <SNIP>
send_chunk = 4096
simpledb_host = sdb.amazonaws.com
skip_existing = False
urlencoding_mode = normal
use_https = True
verbosity = WARNING
