I just noticed this error this morning when some of my cron jobs were running, but I really don't know what changed or what is causing it. Has anyone else experienced it, or have any ideas on how to fix it?
Traceback (most recent call last):
File "/usr/local/bin/s3cmd", line 766, in <module>
File "/usr/local/bin/s3cmd", line 193, in cmd_object_put
response = s3.object_put_uri(real_filename, uri_final, extra_headers)
File "/usr/local/lib/python2.5/site-packages/S3/S3.py", line 225, in object_put_uri
return self.object_put(filename, uri.bucket(), uri.object(), extra_headers)
File "/usr/local/lib/python2.5/site-packages/S3/S3.py", line 201, in object_put
response = self.send_file(request, file)
File "/usr/local/lib/python2.5/site-packages/S3/S3.py", line 366, in send_file
File "/usr/local/lib/python2.5/httplib.py", line 667, in connect
socket.gaierror: (4, 'Non-recoverable failure in name resolution')
I checked to just make sure it wasn't something like Amazon's S3 hostname being wrong/changed in my configuration file, but the hostname there resolves just fine.
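For what it's worth, the same resolution check can be reproduced outside of s3cmd with a few lines of Python (Python 3 syntax here; `s3.amazonaws.com` is the default S3 endpoint, so adjust if your config points elsewhere):

```python
import socket

def resolve(hostname):
    """Try to resolve a hostname; return the first address, or None on failure."""
    try:
        # getaddrinfo raises socket.gaierror on resolution failure --
        # the same exception shown in the s3cmd traceback above
        info = socket.getaddrinfo(hostname, 80)
        return info[0][4][0]
    except socket.gaierror as exc:
        print("resolution failed: %s" % exc)
        return None

print(resolve("s3.amazonaws.com"))
```

Running this from the affected host when the cron job fails would tell you whether the problem is in the resolver itself or somewhere inside s3cmd.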
Has it recovered in the meantime? Does running 's3cmd ls' from that host work?
It looks like one of the nameservers involved in resolving the S3 hostname failed, or there was some other issue in the DNS part of the operation. You can use e.g. tcpdump or wireshark to see what's going on on the wire and which nameserver is having a hard time. But I expect it will go away as magically as it appeared.
Okay, it's been 3 days now with the same problem. I can run "s3cmd ls" and see a list of my buckets, but 3 of them (the ones I was attempting to access), all created on the same date, cannot be accessed with s3cmd from that machine. However, using s3browser on my machine at home, I can list the contents of all of my buckets. Any ideas?
What's the bucket name? Can you run the command with --debug and post the results?
Just for grins I created new buckets, and it works just fine. So... really weird. I haven't had this problem happen before. Does it mean that whatever servers Amazon has that my server is hitting are unable to look up the buckets, whereas my machine at home (different ISP entirely, 3000 miles from the aforementioned servers) can get to them?
Could be. Try running "host $BUCKETNAME.s3.amazonaws.com" on both hosts and see whether it gives you an address or any meaningful error message. Alternatively, try "dig" instead of "host" - that may give you more details, like which nameserver was queried, etc.
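If you'd rather check from Python than with host/dig, a minimal sketch of the same lookup against the bucket's virtual-hosted hostname might look like this ("mybucket" is just a placeholder for the failing bucket's name):

```python
import socket

def bucket_hostname(bucket):
    """Virtual-hosted style hostname that S3 uses for a bucket."""
    return "%s.s3.amazonaws.com" % bucket

def bucket_resolves(bucket):
    """Return True if the bucket's hostname resolves from this machine."""
    try:
        socket.getaddrinfo(bucket_hostname(bucket), 80)
        return True
    except socket.gaierror:
        return False

# placeholder bucket name -- substitute the bucket that fails
print(bucket_resolves("mybucket"))
```

Running it on both the server and the home machine should show whether the failing host's resolver is the difference.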