I'm using s3cmd to sync around 100K files (about 80 GB) to S3. Currently I run it from a script that fires hourly and restarts the job if it has crashed (it checks for a currently running s3cmd process before starting a new one). I'm running with the -v option and logging the output. After around 100-400 files it just craps out and dies with an exit code of 1.
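For reference, the wrapper is essentially the following (a minimal sketch of the check-then-run logic; the pgrep pattern and log path are illustrative, not verbatim from my script):

#!/bin/sh
# Skip this run if a previous s3cmd is still going
# (matching on the binary path so the wrapper doesn't match itself).
if pgrep -f '/usr/local/bin/s3cmd' > /dev/null; then
    exit 0
fi
# Otherwise start a fresh sync and append the verbose output to a log.
/usr/local/bin/s3cmd -r -p --delete-removed -v \
    --exclude-from /etc/s3cmd/exclude/etc.exclude \
    -c /etc/s3cmd/s3cfg \
    sync /etc/ s3://<MYBUCKET>/etc/ >> /var/log/s3cmd-sync.log 2>&1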
Any ideas what I should try to resolve this? I've included my configuration file below and the command line that I use to start it.
I haven't tried running with --debug on large jobs because it generates huge logs (over 100 MB); if that would be helpful, I can do that.
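If I do capture a debug run, I figure I can pipe it through gzip to keep the log size manageable; something like this (output path illustrative):

/usr/local/bin/s3cmd --debug -r -p --delete-removed \
    --exclude-from /etc/s3cmd/exclude/etc.exclude \
    -c /etc/s3cmd/s3cfg \
    sync /etc/ s3://<MYBUCKET>/etc/ 2>&1 | gzip > /tmp/s3cmd-debug.log.gz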
The command:

/usr/local/bin/s3cmd -r -p --delete-removed -v --exclude-from /etc/s3cmd/exclude/etc.exclude -c /etc/s3cmd/s3cfg sync /etc/ s3://<MYBUCKET>/etc/

And the config file (/etc/s3cmd/s3cfg):
access_key = <SNIP>
acl_public = False
bucket_location = US
cloudfront_host = cloudfront.amazonaws.com
cloudfront_resource = /2008-06-30/distribution
default_mime_type = binary/octet-stream
delete_removed = False
dry_run = False
encoding = UTF-8
encrypt = False
force = False
get_continue = False
gpg_command = /usr/bin/gpg
gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_passphrase = <SNIP>
guess_mime_type = False
host_base = s3.amazonaws.com
host_bucket = %(bucket)s.s3.amazonaws.com
human_readable_sizes = True
list_md5 = False
preserve_attrs = True
progress_meter = True
proxy_port = 0
recursive = False
recv_chunk = 4096
secret_key = <SNIP>
send_chunk = 4096
simpledb_host = sdb.amazonaws.com
skip_existing = False
urlencoding_mode = normal
use_https = True
verbosity = WARNING