From: <lu...@us...> - 2008-07-29 01:23:17
Revision: 208
http://s3tools.svn.sourceforge.net/s3tools/?rev=208&view=rev
Author: ludvigm
Date: 2008-07-29 01:23:11 +0000 (Tue, 29 Jul 2008)

Log Message:
-----------
* S3/Utils.py (hash_file_md5): Hash files in 32kB chunks
  instead of reading it all up to a memory first to avoid
  OOM on large files.

Modified Paths:
--------------
    s3cmd/trunk/ChangeLog
    s3cmd/trunk/S3/Utils.py

Modified: s3cmd/trunk/ChangeLog
===================================================================
--- s3cmd/trunk/ChangeLog	2008-07-07 09:50:24 UTC (rev 207)
+++ s3cmd/trunk/ChangeLog	2008-07-29 01:23:11 UTC (rev 208)
@@ -1,3 +1,9 @@
+2008-07-29  Michal Ludvig  <mi...@lo...>
+
+	* S3/Utils.py (hash_file_md5): Hash files in 32kB chunks
+	  instead of reading it all up to a memory first to avoid
+	  OOM on large files.
+
 2008-07-07  Michal Ludvig  <mi...@lo...>
 
 	* s3cmd.1: couple of syntax fixes from Mikhail Gusarov

Modified: s3cmd/trunk/S3/Utils.py
===================================================================
--- s3cmd/trunk/S3/Utils.py	2008-07-07 09:50:24 UTC (rev 207)
+++ s3cmd/trunk/S3/Utils.py	2008-07-29 01:23:11 UTC (rev 208)
@@ -139,7 +139,12 @@
 def hash_file_md5(filename):
 	h = md5.new()
 	f = open(filename, "rb")
-	h.update(f.read())
+	while True:
+		# Hash 32kB chunks
+		data = f.read(32*1024)
+		if not data:
+			break
+		h.update(data)
 	f.close()
 	return h.hexdigest()

This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site.
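The chunked-hashing fix above can be sketched in modern Python 3 (the original used the long-deprecated `md5` module; `hashlib` is its stdlib replacement, and the chunk size matches the commit's 32 kB):

```python
import hashlib

def hash_file_md5(filename, chunk_size=32 * 1024):
    """Return the hex MD5 digest of a file, read in 32 kB chunks.

    Fixed-size chunks keep memory usage constant regardless of file
    size, unlike a single f.read() which loads the whole file at once
    (the OOM the commit fixes).
    """
    h = hashlib.md5()
    with open(filename, "rb") as f:
        while True:
            data = f.read(chunk_size)
            if not data:
                break
            h.update(data)
    return h.hexdigest()
```

The digest is identical to hashing the whole file in one call; only the peak memory differs.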
From: <lu...@us...> - 2008-07-29 01:28:03
Revision: 209
http://s3tools.svn.sourceforge.net/s3tools/?rev=209&view=rev
Author: ludvigm
Date: 2008-07-29 01:28:00 +0000 (Tue, 29 Jul 2008)

Log Message:
-----------
* Released version 0.9.8.3

Modified Paths:
--------------
    s3cmd/trunk/ChangeLog
    s3cmd/trunk/NEWS
    s3cmd/trunk/S3/PkgInfo.py

Modified: s3cmd/trunk/ChangeLog
===================================================================
--- s3cmd/trunk/ChangeLog	2008-07-29 01:23:11 UTC (rev 208)
+++ s3cmd/trunk/ChangeLog	2008-07-29 01:28:00 UTC (rev 209)
@@ -1,5 +1,15 @@
 2008-07-29  Michal Ludvig  <mi...@lo...>
 
+	* Released version 0.9.8.3
+	  ------------------------
+
+2008-07-29  Michal Ludvig  <mi...@lo...>
+
+	* S3/PkgInfo.py: Bumped up version to 0.9.8.3
+	* NEWS: Added 0.9.8.3
+
+2008-07-29  Michal Ludvig  <mi...@lo...>
+
 	* S3/Utils.py (hash_file_md5): Hash files in 32kB chunks
 	  instead of reading it all up to a memory first to avoid
 	  OOM on large files.

Modified: s3cmd/trunk/NEWS
===================================================================
--- s3cmd/trunk/NEWS	2008-07-29 01:23:11 UTC (rev 208)
+++ s3cmd/trunk/NEWS	2008-07-29 01:28:00 UTC (rev 209)
@@ -1,3 +1,8 @@
+s3cmd 0.9.8.3 - 2008-07-29
+=============
+* Bugfix release. Avoid running out-of-memory in MD5'ing
+  large files.
+
 s3cmd 0.9.8.2 - 2008-06-27
 =============
 * Bugfix release. Re-upload file if Amazon doesn't send ETag

Modified: s3cmd/trunk/S3/PkgInfo.py
===================================================================
--- s3cmd/trunk/S3/PkgInfo.py	2008-07-29 01:23:11 UTC (rev 208)
+++ s3cmd/trunk/S3/PkgInfo.py	2008-07-29 01:28:00 UTC (rev 209)
@@ -1,5 +1,5 @@
 package = "s3cmd"
-version = "0.9.8.2"
+version = "0.9.8.3"
 url = "http://s3tools.logix.cz"
 license = "GPL version 2"
 short_description = "S3cmd is a tool for managing Amazon S3 storage space."
From: <lu...@us...> - 2008-07-31 12:49:07
Revision: 211
http://s3tools.svn.sourceforge.net/s3tools/?rev=211&view=rev
Author: ludvigm
Date: 2008-07-31 12:49:03 +0000 (Thu, 31 Jul 2008)

Log Message:
-----------
* TODO: Add some items

Modified Paths:
--------------
    s3cmd/trunk/ChangeLog
    s3cmd/trunk/TODO

Modified: s3cmd/trunk/ChangeLog
===================================================================
--- s3cmd/trunk/ChangeLog	2008-07-29 01:51:36 UTC (rev 210)
+++ s3cmd/trunk/ChangeLog	2008-07-31 12:49:03 UTC (rev 211)
@@ -1,3 +1,7 @@
+2008-08-01  Michal Ludvig  <mi...@lo...>
+
+	* TODO: Add some items
+
 2008-07-29  Michal Ludvig  <mi...@lo...>
 
 	* Released version 0.9.8.3

Modified: s3cmd/trunk/TODO
===================================================================
--- s3cmd/trunk/TODO	2008-07-29 01:51:36 UTC (rev 210)
+++ s3cmd/trunk/TODO	2008-07-31 12:49:03 UTC (rev 211)
@@ -1,6 +1,18 @@
 TODO list for s3cmd project
 ===========================
 
+- For 0.9.9
+  - Implement 'cp' and 'mv'
+  - Better upload / download progress display (and remove
+    excessive useless transfer info from verbose/debug
+    output)
+  - Warn when encryption is required (conf/arg) for sync
+    and request for explicit --no-encrypt parameter.
+  - Add --include/--include-from/--rinclude* for sync
+
+- After 1.0.0
+  - Speed up upload / download with multiple threads.
+
 - Treat objects with "/" in their name as directories
   - Will need local cache for bucket listings
 - More user friendly 'del' operation that would work
From: <lu...@us...> - 2008-08-19 13:45:28
Revision: 216
http://s3tools.svn.sourceforge.net/s3tools/?rev=216&view=rev
Author: ludvigm
Date: 2008-08-19 13:45:26 +0000 (Tue, 19 Aug 2008)

Log Message:
-----------
* s3cmd: Always output UTF-8, even on output redirects.

Modified Paths:
--------------
    s3cmd/trunk/ChangeLog
    s3cmd/trunk/s3cmd

Modified: s3cmd/trunk/ChangeLog
===================================================================
--- s3cmd/trunk/ChangeLog	2008-08-19 13:43:46 UTC (rev 215)
+++ s3cmd/trunk/ChangeLog	2008-08-19 13:45:26 UTC (rev 216)
@@ -1,3 +1,7 @@
+2008-08-19  Michal Ludvig  <mi...@lo...>
+
+	* s3cmd: Always output UTF-8, even on output redirects.
+
 2008-08-01  Michal Ludvig  <mi...@lo...>
 
 	* TODO: Add some items

Modified: s3cmd/trunk/s3cmd
===================================================================
--- s3cmd/trunk/s3cmd	2008-08-19 13:43:46 UTC (rev 215)
+++ s3cmd/trunk/s3cmd	2008-08-19 13:45:26 UTC (rev 216)
@@ -14,6 +14,7 @@
 import pwd, grp
 import glob
 import traceback
+import codecs
 
 from copy import copy
 from optparse import OptionParser, Option, OptionValueError, IndentedHelpFormatter
@@ -1037,6 +1038,9 @@
 	from S3 import Utils
 	from S3.Exceptions import *
 
+	## Output UTF-8 in all cases, even on output redirects
+	sys.stdout = codecs.getwriter("utf-8")(sys.stdout)
+
 	main()
 	sys.exit(0)
 except SystemExit, e:
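The `codecs.getwriter` trick above works because the wrapper encodes every unicode string to UTF-8 before it hits the underlying byte stream; without it, redirecting output to a file or pipe could make Python fall back to ASCII and crash on non-ASCII object names. A small sketch of the same technique, demonstrated on an in-memory binary stream standing in for a redirected stdout (the helper name is mine, not s3cmd's):

```python
import codecs
import io

def utf8_writer(binary_stream):
    """Wrap a binary stream so text written to it is always UTF-8,
    mirroring the commit's
        sys.stdout = codecs.getwriter("utf-8")(sys.stdout)
    """
    return codecs.getwriter("utf-8")(binary_stream)

raw = io.BytesIO()                    # stands in for a redirected stdout
out = utf8_writer(raw)
out.write("\u017elu\u0165ou\u010dk\u00fd")  # Czech sample text
out.flush()
```

After the write, `raw.getvalue()` holds the UTF-8 bytes of the string regardless of any locale settings.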
From: <lu...@us...> - 2008-09-01 00:52:41
Revision: 218
http://s3tools.svn.sourceforge.net/s3tools/?rev=218&view=rev
Author: ludvigm
Date: 2008-09-01 00:52:34 +0000 (Mon, 01 Sep 2008)

Log Message:
-----------
* s3cmd, S3/S3.py, S3/Config.py: Allow access to upper-case
  named buckets again with --use-old-connect-method
  (uses http://s3.amazonaws.com/bucket/object instead of
  http://bucket.s3.amazonaws.com/object)

Modified Paths:
--------------
    s3cmd/trunk/ChangeLog
    s3cmd/trunk/NEWS
    s3cmd/trunk/S3/Config.py
    s3cmd/trunk/S3/S3.py
    s3cmd/trunk/s3cmd

Modified: s3cmd/trunk/ChangeLog
===================================================================
--- s3cmd/trunk/ChangeLog	2008-08-30 11:29:13 UTC (rev 217)
+++ s3cmd/trunk/ChangeLog	2008-09-01 00:52:34 UTC (rev 218)
@@ -1,3 +1,10 @@
+2008-09-01  Michal Ludvig  <mi...@lo...>
+
+	* s3cmd, S3/S3.py, S3/Config.py: Allow access to upper-case
+	  named buckets again with --use-old-connect-method
+	  (uses http://s3.amazonaws.com/bucket/object instead of
+	  http://bucket.s3.amazonaws.com/object)
+
 2008-08-19  Michal Ludvig  <mi...@lo...>
 
 	* s3cmd: Always output UTF-8, even on output redirects.

Modified: s3cmd/trunk/NEWS
===================================================================
--- s3cmd/trunk/NEWS	2008-08-30 11:29:13 UTC (rev 217)
+++ s3cmd/trunk/NEWS	2008-09-01 00:52:34 UTC (rev 218)
@@ -1,3 +1,8 @@
+s3cmd 0.9.9 - ???
+===========
+* Allow access to upper-case named buckets with
+  --use-old-connect-method parameter
+
 s3cmd 0.9.8.3 - 2008-07-29
 =============
 * Bugfix release. Avoid running out-of-memory in MD5'ing

Modified: s3cmd/trunk/S3/Config.py
===================================================================
--- s3cmd/trunk/S3/Config.py	2008-08-30 11:29:13 UTC (rev 217)
+++ s3cmd/trunk/S3/Config.py	2008-09-01 00:52:34 UTC (rev 218)
@@ -15,6 +15,7 @@
 	secret_key = ""
 	host_base = "s3.amazonaws.com"
 	host_bucket = "%(bucket)s.s3.amazonaws.com"
+	use_old_connect_method = False
 	simpledb_host = "sdb.amazonaws.com"
 	verbosity = logging.WARNING
 	send_chunk = 4096

Modified: s3cmd/trunk/S3/S3.py
===================================================================
--- s3cmd/trunk/S3/S3.py	2008-08-30 11:29:13 UTC (rev 217)
+++ s3cmd/trunk/S3/S3.py	2008-09-01 00:52:34 UTC (rev 218)
@@ -71,7 +71,7 @@
 		return httplib.HTTPConnection(self.get_hostname(bucket))
 
 	def get_hostname(self, bucket):
-		if bucket:
+		if bucket and not Config().use_old_connect_method:
 			if self.redir_map.has_key(bucket):
 				host = self.redir_map[bucket]
 			else:
@@ -85,10 +85,12 @@
 		self.redir_map[bucket] = redir_hostname
 
 	def format_uri(self, resource):
-		if self.config.proxy_host != "":
-			uri = "http://%s%s" % (self.get_hostname(resource['bucket']), resource['uri'])
+		if resource['bucket'] and Config().use_old_connect_method:
+			uri = "/%s%s" % (resource['bucket'], resource['uri'])
 		else:
 			uri = resource['uri']
+		if self.config.proxy_host != "":
+			uri = "http://%s%s" % (self.get_hostname(resource['bucket']), uri)
 		debug('format_uri(): ' + uri)
 		return uri

Modified: s3cmd/trunk/s3cmd
===================================================================
--- s3cmd/trunk/s3cmd	2008-08-30 11:29:13 UTC (rev 217)
+++ s3cmd/trunk/s3cmd	2008-09-01 00:52:34 UTC (rev 218)
@@ -898,6 +898,7 @@
 	optparser.add_option("-H", "--human-readable-sizes", dest="human_readable_sizes", action="store_true", help="Print sizes in human readable form.")
 	optparser.add_option("-v", "--verbose", dest="verbosity", action="store_const", const=logging.INFO, help="Enable verbose output.")
+	optparser.add_option(      "--use-old-connect-method", dest="use_old_connect_method", action="store_true", help="Use deprecated method for connection to S3. Allows for upper-case bucket names but doesn't allow for buckets in Europe")
 	optparser.add_option("-d", "--debug", dest="verbosity", action="store_const", const=logging.DEBUG, help="Enable debug output.")
 	optparser.add_option(      "--version", dest="show_version", action="store_true", help="Show s3cmd version (%s) and exit." % (PkgInfo.version))

This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site.
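The two addressing styles the commit toggles between can be sketched as a small helper (the function name is mine, not s3cmd's; the trade-off in the docstring is the one the option's help text describes):

```python
def s3_object_url(bucket, key, use_old_connect_method=False):
    """Build an S3 object URL in either addressing style.

    Virtual-hosted style (default): http://bucket.s3.amazonaws.com/key
      - follows per-bucket redirects (e.g. EU buckets), but the bucket
        name must be a valid DNS label, so upper-case names break.
    Path style (--use-old-connect-method): http://s3.amazonaws.com/bucket/key
      - tolerates upper-case bucket names, but cannot reach buckets
        that require a region-specific endpoint.
    """
    if use_old_connect_method:
        return "http://s3.amazonaws.com/%s/%s" % (bucket, key)
    return "http://%s.s3.amazonaws.com/%s" % (bucket, key)
```

For example, a bucket named `MyBucket` is only reachable via the path style, since `MyBucket.s3.amazonaws.com` is not a valid hostname for S3's virtual-hosted scheme.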
From: <lu...@us...> - 2008-09-01 01:45:43
Revision: 219
http://s3tools.svn.sourceforge.net/s3tools/?rev=219&view=rev
Author: ludvigm
Date: 2008-09-01 01:45:35 +0000 (Mon, 01 Sep 2008)

Log Message:
-----------
* S3/PkgInfo.py: Bumped up version to 0.9.9-pre1

Modified Paths:
--------------
    s3cmd/trunk/ChangeLog
    s3cmd/trunk/S3/PkgInfo.py

Modified: s3cmd/trunk/ChangeLog
===================================================================
--- s3cmd/trunk/ChangeLog	2008-09-01 00:52:34 UTC (rev 218)
+++ s3cmd/trunk/ChangeLog	2008-09-01 01:45:35 UTC (rev 219)
@@ -1,5 +1,9 @@
 2008-09-01  Michal Ludvig  <mi...@lo...>
 
+	* S3/PkgInfo.py: Bumped up version to 0.9.9-pre1
+
+2008-09-01  Michal Ludvig  <mi...@lo...>
+
 	* s3cmd, S3/S3.py, S3/Config.py: Allow access to upper-case
 	  named buckets again with --use-old-connect-method
 	  (uses http://s3.amazonaws.com/bucket/object instead of

Modified: s3cmd/trunk/S3/PkgInfo.py
===================================================================
--- s3cmd/trunk/S3/PkgInfo.py	2008-09-01 00:52:34 UTC (rev 218)
+++ s3cmd/trunk/S3/PkgInfo.py	2008-09-01 01:45:35 UTC (rev 219)
@@ -1,5 +1,5 @@
 package = "s3cmd"
-version = "0.9.8.3"
+version = "0.9.9-pre1"
 url = "http://s3tools.logix.cz"
 license = "GPL version 2"
 short_description = "S3cmd is a tool for managing Amazon S3 storage space."
From: <lu...@us...> - 2008-09-01 02:40:14
Revision: 220
http://s3tools.svn.sourceforge.net/s3tools/?rev=220&view=rev
Author: ludvigm
Date: 2008-09-01 02:40:06 +0000 (Mon, 01 Sep 2008)

Log Message:
-----------
* S3/S3.py: removed object_{get,put,delete}_uri() functions
  and made object_{get,put,delete}() accept URI instead of
  bucket/object parameters.

Modified Paths:
--------------
    s3cmd/trunk/ChangeLog
    s3cmd/trunk/S3/S3.py
    s3cmd/trunk/s3cmd

Modified: s3cmd/trunk/ChangeLog
===================================================================
--- s3cmd/trunk/ChangeLog	2008-09-01 01:45:35 UTC (rev 219)
+++ s3cmd/trunk/ChangeLog	2008-09-01 02:40:06 UTC (rev 220)
@@ -1,5 +1,11 @@
 2008-09-01  Michal Ludvig  <mi...@lo...>
 
+	* S3/S3.py: removed object_{get,put,delete}_uri() functions
+	  and made object_{get,put,delete}() accept URI instead of
+	  bucket/object parameters.
+
+2008-09-01  Michal Ludvig  <mi...@lo...>
+
 	* S3/PkgInfo.py: Bumped up version to 0.9.9-pre1
 
 2008-09-01  Michal Ludvig  <mi...@lo...>

Modified: s3cmd/trunk/S3/S3.py
===================================================================
--- s3cmd/trunk/S3/S3.py	2008-09-01 01:45:35 UTC (rev 219)
+++ s3cmd/trunk/S3/S3.py	2008-09-01 02:40:06 UTC (rev 220)
@@ -152,7 +152,12 @@
 		response['bucket-location'] = getTextFromXml(response['data'], "LocationConstraint") or "any"
 		return response
 
-	def object_put(self, filename, bucket, object, extra_headers = None):
+	def object_put(self, filename, uri, extra_headers = None):
+		# TODO TODO
+		# Make it consistent with stream-oriented object_get()
+		if uri.type != "s3":
+			raise ValueError("Expected URI type 's3', got '%s'" % uri.type)
+
 		if not os.path.isfile(filename):
 			raise ParameterError("%s is not a regular file" % filename)
 		try:
@@ -173,42 +178,32 @@
 			headers["content-type"] = content_type
 		if self.config.acl_public:
 			headers["x-amz-acl"] = "public-read"
-		request = self.create_request("OBJECT_PUT", bucket = bucket, object = object, headers = headers)
+		request = self.create_request("OBJECT_PUT", uri = uri, headers = headers)
 		response = self.send_file(request, file)
 		return response
 
-	def object_get_uri(self, uri, stream):
+	def object_get(self, uri, stream):
 		if uri.type != "s3":
 			raise ValueError("Expected URI type 's3', got '%s'" % uri.type)
-		request = self.create_request("OBJECT_GET", bucket = uri.bucket(), object = uri.object())
+		request = self.create_request("OBJECT_GET", uri = uri)
 		response = self.recv_file(request, stream)
 		return response
 
-	def object_delete(self, bucket, object):
-		request = self.create_request("OBJECT_DELETE", bucket = bucket, object = object)
+	def object_delete(self, uri):
+		if uri.type != "s3":
+			raise ValueError("Expected URI type 's3', got '%s'" % uri.type)
+		request = self.create_request("OBJECT_DELETE", uri = uri)
 		response = self.send_request(request)
 		return response
 
-	def object_put_uri(self, filename, uri, extra_headers = None):
-		# TODO TODO
-		# Make it consistent with stream-oriented object_get_uri()
-		if uri.type != "s3":
-			raise ValueError("Expected URI type 's3', got '%s'" % uri.type)
-		return self.object_put(filename, uri.bucket(), uri.object(), extra_headers)
-
-	def object_delete_uri(self, uri):
-		if uri.type != "s3":
-			raise ValueError("Expected URI type 's3', got '%s'" % uri.type)
-		return self.object_delete(uri.bucket(), uri.object())
-
 	def object_info(self, uri):
-		request = self.create_request("OBJECT_HEAD", bucket = uri.bucket(), object = uri.object())
+		request = self.create_request("OBJECT_HEAD", uri = uri)
 		response = self.send_request(request)
 		return response
 
 	def get_acl(self, uri):
 		if uri.has_object():
-			request = self.create_request("OBJECT_GET", bucket = uri.bucket(), object = uri.object(), extra = "?acl")
+			request = self.create_request("OBJECT_GET", uri = uri, extra = "?acl")
 		else:
 			request = self.create_request("BUCKET_LIST", bucket = uri.bucket(), extra = "?acl")
 		acl = {}
@@ -263,8 +258,16 @@
 		debug("String '%s' encoded to '%s'" % (string, encoded))
 		return encoded
 
-	def create_request(self, operation, bucket = None, object = None, headers = None, extra = None, **params):
+	def create_request(self, operation, uri = None, bucket = None, object = None, headers = None, extra = None, **params):
 		resource = { 'bucket' : None, 'uri' : "/" }
+
+		if uri and (bucket or object):
+			raise ValueError("Both 'uri' and either 'bucket' or 'object' parameters supplied")
+		## If URI is given use that instead of bucket/object parameters
+		if uri:
+			bucket = uri.bucket()
+			object = uri.has_object() and uri.object() or None
+
 		if bucket:
 			resource['bucket'] = str(bucket)
 		if object:

Modified: s3cmd/trunk/s3cmd
===================================================================
--- s3cmd/trunk/s3cmd	2008-09-01 01:45:35 UTC (rev 219)
+++ s3cmd/trunk/s3cmd	2008-09-01 02:40:06 UTC (rev 220)
@@ -186,7 +186,7 @@
 		if Config().encrypt:
 			exitcode, real_filename, extra_headers["x-amz-meta-s3tools-gpgenc"] = gpg_encrypt(file)
 		try:
-			response = s3.object_put_uri(real_filename, uri_final, extra_headers)
+			response = s3.object_put(real_filename, uri_final, extra_headers)
 		except S3UploadError, e:
 			error("Upload of '%s' failed too many times. Skipping that file." % real_filename)
 			continue
@@ -250,7 +250,7 @@
 		except IOError, e:
 			error("Skipping %s: %s" % (destination, e.strerror))
 			continue
-		response = s3.object_get_uri(uri, dst_stream)
+		response = s3.object_get(uri, dst_stream)
 		if response["headers"].has_key("x-amz-meta-s3tools-gpgenc"):
 			gpg_decrypt(destination, response["headers"]["x-amz-meta-s3tools-gpgenc"])
 			response["size"] = os.stat(destination)[6]
@@ -268,7 +268,7 @@
 		if uri.type != "s3" or not uri.has_object():
 			raise ParameterError("Expecting S3 URI instead of '%s'" % uri_arg)
 
-		response = s3.object_delete_uri(uri)
+		response = s3.object_delete(uri)
 		output("Object %s deleted" % uri)
 
 def cmd_info(args):
@@ -479,7 +479,7 @@
 			os.open(dst_file, open_flags)
 			# Yeah I know there is a race condition here. Sadly I don't know how to open() in exclusive mode.
 			dst_stream = open(dst_file, "wb")
-			response = s3.object_get_uri(uri, dst_stream)
+			response = s3.object_get(uri, dst_stream)
 			dst_stream.close()
 			if response['headers'].has_key('x-amz-meta-s3cmd-attrs') and cfg.preserve_attrs:
 				attrs = _parse_attrs_header(response['headers']['x-amz-meta-s3cmd-attrs'])
@@ -575,7 +575,7 @@
 	for file in rem_list:
 		uri = S3Uri("s3://" + dst_uri.bucket()+"/"+rem_list[file]['object_key'])
 		if cfg.delete_removed:
-			response = s3.object_delete_uri(uri)
+			response = s3.object_delete(uri)
 			output("deleted '%s'" % uri)
 		else:
 			output("not-deleted '%s'" % uri)
@@ -597,7 +597,7 @@
 			attr_header = _build_attr_header(src)
 			debug(attr_header)
 		try:
-			response = s3.object_put_uri(src, uri, attr_header)
+			response = s3.object_put(src, uri, attr_header)
 		except S3UploadError, e:
 			error("%s: upload failed too many times. Skipping that file." % src)
 			continue
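The shape of this refactor -- one URI object replacing separate bucket/object parameters, with `create_request()` accepting either convention but rejecting a mix -- can be sketched in isolation (this `S3Uri` is a minimal stand-in with only what the sketch needs, not s3cmd's full class, and `create_request` here just builds the resource dict rather than a real HTTP request):

```python
class S3Uri:
    """Minimal stand-in for s3cmd's S3Uri class."""
    def __init__(self, uri):
        if not uri.startswith("s3://"):
            raise ValueError("Expected URI type 's3', got '%s'" % uri)
        self.type = "s3"
        self._bucket, _, self._object = uri[len("s3://"):].partition("/")

    def bucket(self):
        return self._bucket

    def object(self):
        return self._object

    def has_object(self):
        return bool(self._object)

def create_request(operation, uri=None, bucket=None, object=None):
    """Accept either a URI or bucket/object, never both, and unpack
    the URI internally -- the pattern the commit introduces."""
    if uri and (bucket or object):
        raise ValueError("Both 'uri' and 'bucket'/'object' parameters supplied")
    if uri:
        bucket = uri.bucket()
        object = uri.object() if uri.has_object() else None
    resource = {"bucket": bucket, "uri": "/" + object if object else "/"}
    return (operation, resource)
```

Centralizing the unpacking removes the `object_*_uri()` wrapper layer: every caller passes an `S3Uri` and the duplication between `object_put`/`object_put_uri` (and friends) disappears.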
From: <lu...@us...> - 2008-09-01 03:02:39
Revision: 221
http://s3tools.svn.sourceforge.net/s3tools/?rev=221&view=rev
Author: ludvigm
Date: 2008-09-01 03:02:33 +0000 (Mon, 01 Sep 2008)

Log Message:
-----------
* s3cmd: Refuse 'sync' together with '--encrypt'.

Modified Paths:
--------------
    s3cmd/trunk/ChangeLog
    s3cmd/trunk/s3cmd

Modified: s3cmd/trunk/ChangeLog
===================================================================
--- s3cmd/trunk/ChangeLog	2008-09-01 02:40:06 UTC (rev 220)
+++ s3cmd/trunk/ChangeLog	2008-09-01 03:02:33 UTC (rev 221)
@@ -1,5 +1,6 @@
 2008-09-01  Michal Ludvig  <mi...@lo...>
 
+	* s3cmd: Refuse 'sync' together with '--encrypt'.
 	* S3/S3.py: removed object_{get,put,delete}_uri() functions
 	  and made object_{get,put,delete}() accept URI instead of
 	  bucket/object parameters.

Modified: s3cmd/trunk/s3cmd
===================================================================
--- s3cmd/trunk/s3cmd	2008-09-01 02:40:06 UTC (rev 220)
+++ s3cmd/trunk/s3cmd	2008-09-01 03:02:33 UTC (rev 221)
@@ -555,8 +555,15 @@
 		for k in attrs: result += "%s:%s/" % (k, attrs[k])
 		return { 'x-amz-meta-s3cmd-attrs' : result[:-1] }
 
-	s3 = S3(Config())
+	s3 = S3(cfg)
 
+	if cfg.encrypt:
+		error("S3cmd 'sync' doesn't support GPG encryption, sorry.")
+		error("Either use unconditional 's3cmd put --recursive'")
+		error("or disable encryption with --no-encryption parameter.")
+		sys.exit(1)
+
+
 	src_uri = S3Uri(src)
 	dst_uri = S3Uri(dst)

This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site.
From: <lu...@us...> - 2008-09-03 01:48:58
Revision: 223
http://s3tools.svn.sourceforge.net/s3tools/?rev=223&view=rev
Author: ludvigm
Date: 2008-09-03 01:48:55 +0000 (Wed, 03 Sep 2008)

Log Message:
-----------
* s3cmd, S3/Config.py: [rb] Allow removal of non-empty buckets
  with --force.
  [mb, rb] Allow multiple arguments, i.e. create or remove
  multiple buckets at once.
  [del] Perform recursive removal with --recursive (or -r).

Modified Paths:
--------------
    s3cmd/trunk/ChangeLog
    s3cmd/trunk/S3/Config.py
    s3cmd/trunk/s3cmd

Modified: s3cmd/trunk/ChangeLog
===================================================================
--- s3cmd/trunk/ChangeLog	2008-09-01 03:05:34 UTC (rev 222)
+++ s3cmd/trunk/ChangeLog	2008-09-03 01:48:55 UTC (rev 223)
@@ -1,5 +1,13 @@
 2008-09-01  Michal Ludvig  <mi...@lo...>
 
+	* s3cmd, S3/Config.py: [rb] Allow removal of non-empty buckets
+	  with --force.
+	  [mb, rb] Allow multiple arguments, i.e. create or remove
+	  multiple buckets at once.
+	  [del] Perform recursive removal with --recursive (or -r).
+
+2008-09-01  Michal Ludvig  <mi...@lo...>
+
 	* s3cmd: Refuse 'sync' together with '--encrypt'.
 	* S3/S3.py: removed object_{get,put,delete}_uri() functions
 	  and made object_{get,put,delete}() accept URI instead of

Modified: s3cmd/trunk/S3/Config.py
===================================================================
--- s3cmd/trunk/S3/Config.py	2008-09-01 03:05:34 UTC (rev 222)
+++ s3cmd/trunk/S3/Config.py	2008-09-03 01:48:55 UTC (rev 223)
@@ -22,6 +22,7 @@
 	recv_chunk = 4096
 	human_readable_sizes = False
 	force = False
+	recursive = False
 	acl_public = False
 	proxy_host = ""
 	proxy_port = 3128

Modified: s3cmd/trunk/s3cmd
===================================================================
--- s3cmd/trunk/s3cmd	2008-09-01 03:05:34 UTC (rev 222)
+++ s3cmd/trunk/s3cmd	2008-09-03 01:48:55 UTC (rev 223)
@@ -125,34 +125,43 @@
 		))
 
 def cmd_bucket_create(args):
-	uri = S3Uri(args[0])
-	if not uri.type == "s3" or not uri.has_bucket() or uri.has_object():
-		raise ParameterError("Expecting S3 URI with just the bucket name set instead of '%s'" % args[0])
-	try:
-		s3 = S3(Config())
-		response = s3.bucket_create(uri.bucket(), cfg.bucket_location)
-	except S3Error, e:
-		if S3.codes.has_key(e.info["Code"]):
-			error(S3.codes[e.info["Code"]] % uri.bucket())
-			return
-		else:
-			raise
-	output("Bucket '%s' created" % uri.bucket())
+	s3 = S3(Config())
+	for arg in args:
+		uri = S3Uri(arg)
+		if not uri.type == "s3" or not uri.has_bucket() or uri.has_object():
+			raise ParameterError("Expecting S3 URI with just the bucket name set instead of '%s'" % arg)
+		try:
+			response = s3.bucket_create(uri.bucket(), cfg.bucket_location)
+			output("Bucket '%s' created" % uri.uri())
+		except S3Error, e:
+			if S3.codes.has_key(e.info["Code"]):
+				error(S3.codes[e.info["Code"]] % uri.bucket())
+				return
+			else:
+				raise
 
 def cmd_bucket_delete(args):
-	uri = S3Uri(args[0])
-	if not uri.type == "s3" or not uri.has_bucket() or uri.has_object():
-		raise ParameterError("Expecting S3 URI with just the bucket name set instead of '%s'" % args[0])
-	try:
-		s3 = S3(Config())
-		response = s3.bucket_delete(uri.bucket())
-	except S3Error, e:
-		if S3.codes.has_key(e.info["Code"]):
-			error(S3.codes[e.info["Code"]] % uri.bucket())
-			return
-		else:
-			raise
-	output("Bucket '%s' removed" % uri.bucket())
+	def _bucket_delete_one(uri):
+		try:
+			response = s3.bucket_delete(uri.bucket())
+		except S3Error, e:
+			if e.info['Code'] == 'BucketNotEmpty' and (cfg.force or cfg.recursive):
+				warning("Bucket is not empty. Removing all the objects from it first. This may take some time...")
+				subcmd_object_del_uri(uri, recursive = True)
+				return _bucket_delete_one(uri)
+			elif S3.codes.has_key(e.info["Code"]):
+				error(S3.codes[e.info["Code"]] % uri.bucket())
+				return
+			else:
+				raise
+
+	s3 = S3(Config())
+	for arg in args:
+		uri = S3Uri(arg)
+		if not uri.type == "s3" or not uri.has_bucket() or uri.has_object():
+			raise ParameterError("Expecting S3 URI with just the bucket name set instead of '%s'" % arg)
+		_bucket_delete_one(uri)
+		output("Bucket '%s' removed" % uri.uri())
 
 def cmd_object_put(args):
 	s3 = S3(Config())
@@ -260,17 +269,32 @@
 		(uri, destination, response["size"], response["elapsed"], speed_fmt[0], speed_fmt[1]))
 
 def cmd_object_del(args):
-	s3 = S3(Config())
-
 	while (len(args)):
 		uri_arg = args.pop(0)
 		uri = S3Uri(uri_arg)
 		if uri.type != "s3" or not uri.has_object():
 			raise ParameterError("Expecting S3 URI instead of '%s'" % uri_arg)
+		subcmd_object_del_uri(uri)
 
-		response = s3.object_delete(uri)
-		output("Object %s deleted" % uri)
+def subcmd_object_del_uri(uri, recursive = None):
+	s3 = S3(Config())
+	if recursive is None:
+		recursive = cfg.recursive
+	uri_list = []
+	if recursive:
+		filelist = _get_filelist_remote(uri)
+		uri_base = 's3://' + uri.bucket() + "/"
+		for idx in filelist:
+			object = filelist[idx]
+			debug("Adding URI " + uri_base + object['object_key'])
+			uri_list.append(S3Uri(uri_base + object['object_key']))
+	else:
+		uri_list.append(uri)
+	for _uri in uri_list:
+		response = s3.object_delete(_uri)
+		info("Object %s deleted" % _uri)
 
 def cmd_info(args):
 	s3 = S3(Config())
@@ -885,6 +909,7 @@
 	optparser.add_option("-e", "--encrypt", dest="encrypt", action="store_true", help="Encrypt files before uploading to S3.")
 	optparser.add_option(      "--no-encrypt", dest="encrypt", action="store_false", help="Don't encrypt files.")
 	optparser.add_option("-f", "--force", dest="force", action="store_true", help="Force overwrite and other dangerous operations.")
+	optparser.add_option("-r", "--recursive", dest="recursive", action="store_true", help="Recursive upload, download or removal.")
 	optparser.add_option("-P", "--acl-public", dest="acl_public", action="store_true", help="Store objects with ACL allowing read for anyone.")
 	optparser.add_option(      "--acl-private", dest="acl_public", action="store_false", help="Store objects with default ACL allowing access for you only.")
 	optparser.add_option(      "--delete-removed", dest="delete_removed", action="store_true", help="Delete remote objects with no corresponding local file [sync]")

This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site.
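The `_bucket_delete_one()` pattern in this commit -- try the delete, and on a BucketNotEmpty error empty the bucket and retry -- can be modeled in a few self-contained lines. Here `store` is a plain dict (bucket name to set of object keys) standing in for S3 itself; the names are mine, not s3cmd's:

```python
class BucketNotEmpty(Exception):
    """Stand-in for S3's BucketNotEmpty error code."""

def delete_bucket(store, bucket, force=False):
    """Delete a bucket from 'store' (dict: bucket -> set of keys).

    A plain delete of a non-empty bucket fails, mirroring S3.
    With force (the commit's --force/--recursive), every contained
    object is removed first and the bucket delete is retried,
    just like _bucket_delete_one() in the patch.
    """
    if store[bucket]:
        if not force:
            raise BucketNotEmpty(bucket)
        for key in list(store[bucket]):  # recursive object removal
            store[bucket].remove(key)
        return delete_bucket(store, bucket, force)  # retry, now empty
    del store[bucket]
```

The retry-after-emptying structure matters because a real bucket delete can still race with concurrent uploads; re-running the same function handles that uniformly.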
From: <lu...@us...> - 2008-09-03 03:39:39
Revision: 224 http://s3tools.svn.sourceforge.net/s3tools/?rev=224&view=rev Author: ludvigm Date: 2008-09-03 03:39:36 +0000 (Wed, 03 Sep 2008) Log Message: ----------- * s3cmd, S3/S3.py: Make --verbose mode more useful and default mode less verbose. Modified Paths: -------------- s3cmd/trunk/ChangeLog s3cmd/trunk/S3/S3.py s3cmd/trunk/s3cmd Modified: s3cmd/trunk/ChangeLog =================================================================== --- s3cmd/trunk/ChangeLog 2008-09-03 01:48:55 UTC (rev 223) +++ s3cmd/trunk/ChangeLog 2008-09-03 03:39:36 UTC (rev 224) @@ -1,5 +1,10 @@ -2008-09-01 Michal Ludvig <mi...@lo...> +2008-09-03 Michal Ludvig <mi...@lo...> + * s3cmd, S3/S3.py: Make --verbose mode more useful and default + mode less verbose. + +2008-09-03 Michal Ludvig <mi...@lo...> + * s3cmd, S3/Config.py: [rb] Allow removal of non-empty buckets with --force. [mb, rb] Allow multiple arguments, i.e. create or remove Modified: s3cmd/trunk/S3/S3.py =================================================================== --- s3cmd/trunk/S3/S3.py 2008-09-03 01:48:55 UTC (rev 223) +++ s3cmd/trunk/S3/S3.py 2008-09-03 03:39:36 UTC (rev 224) @@ -116,7 +116,7 @@ list = _get_contents(response["data"]) while _list_truncated(response["data"]): marker = list[-1]["Key"] - info("Listing continues after '%s'" % marker) + debug("Listing continues after '%s'" % marker) request = self.create_request("BUCKET_LIST", bucket = bucket, prefix = prefix, marker = self.urlencode_string(marker)) @@ -302,7 +302,7 @@ def send_request(self, request, body = None): method_string, resource, headers = request - info("Processing request, please wait...") + debug("Processing request, please wait...") conn = self.get_connection(resource['bucket']) conn.request(method_string, self.format_uri(resource), body, headers) response = {} @@ -319,7 +319,7 @@ redir_bucket = getTextFromXml(response['data'], ".//Bucket") redir_hostname = getTextFromXml(response['data'], ".//Endpoint") self.set_hostname(redir_bucket, 
redir_hostname) - info("Redirected to: %s" % (redir_hostname)) + warning("Redirected to: %s" % (redir_hostname)) return self.send_request(request, body) if response["status"] < 200 or response["status"] > 299: @@ -361,7 +361,8 @@ size_left -= len(data) if throttle: time.sleep(throttle) - info("Sent %d bytes (%d %% of %d)" % ( + ## Call progress meter from here + debug("Sent %d bytes (%d %% of %d)" % ( (size_total - size_left), (size_total - size_left) * 100 / size_total, size_total)) @@ -383,7 +384,7 @@ redir_bucket = getTextFromXml(response['data'], ".//Bucket") redir_hostname = getTextFromXml(response['data'], ".//Endpoint") self.set_hostname(redir_bucket, redir_hostname) - info("Redirected to: %s" % (redir_hostname)) + warning("Redirected to: %s" % (redir_hostname)) return self.send_file(request, file) # S3 from time to time doesn't send ETag back in a response :-( @@ -395,10 +396,10 @@ if response["headers"]["etag"].strip('"\'') != md5_hash.hexdigest(): warning("MD5 Sums don't match!") if retries: - info("Retrying upload.") + warning("Retrying upload of %s" % (file.name)) return self.send_file(request, file, throttle, retries - 1) else: - debug("Too many failures. Giving up on '%s'" % (file.name)) + warning("Too many failures. Giving up on '%s'" % (file.name)) raise S3UploadError if response["status"] < 200 or response["status"] > 299: @@ -426,7 +427,7 @@ redir_bucket = getTextFromXml(response['data'], ".//Bucket") redir_hostname = getTextFromXml(response['data'], ".//Endpoint") self.set_hostname(redir_bucket, redir_hostname) - info("Redirected to: %s" % (redir_hostname)) + warning("Redirected to: %s" % (redir_hostname)) return self.recv_file(request, stream) if response["status"] < 200 or response["status"] > 299: @@ -444,7 +445,8 @@ stream.write(data) md5_hash.update(data) size_recvd += len(data) - info("Received %d bytes (%d %% of %d)" % ( + ## Call progress meter from here... 
+ debug("Received %d bytes (%d %% of %d)" % ( size_recvd, size_recvd * 100 / size_total, size_total)) Modified: s3cmd/trunk/s3cmd =================================================================== --- s3cmd/trunk/s3cmd 2008-09-03 01:48:55 UTC (rev 223) +++ s3cmd/trunk/s3cmd 2008-09-03 03:39:36 UTC (rev 224) @@ -293,7 +293,7 @@ uri_list.append(uri) for _uri in uri_list: response = s3.object_delete(_uri) - info("Object %s deleted" % _uri) + output("Object %s deleted" % _uri) def cmd_info(args): s3 = S3(Config()) @@ -327,7 +327,7 @@ raise def _get_filelist_local(local_uri): - output("Compiling list of local files...") + info("Compiling list of local files...") local_path = local_uri.path() if os.path.isdir(local_path): loc_base = os.path.join(local_path, "") @@ -358,7 +358,7 @@ return loc_list def _get_filelist_remote(remote_uri): - output("Retrieving list of remote files...") + info("Retrieving list of remote files...") s3 = S3(Config()) response = s3.bucket_list(remote_uri.bucket(), prefix = remote_uri.object()) @@ -377,7 +377,7 @@ return rem_list def _compare_filelists(src_list, dst_list, src_is_local_and_dst_is_remote): - output("Verifying checksums...") + info("Verifying checksums...") cfg = Config() exists_list = {} exclude_list = {} @@ -456,11 +456,11 @@ loc_list = _get_filelist_local(dst_uri) loc_count = len(loc_list) - output("Found %d remote files, %d local files" % (rem_count, loc_count)) + info("Found %d remote files, %d local files" % (rem_count, loc_count)) _compare_filelists(rem_list, loc_list, False) - output("Summary: %d remote files to download, %d local files to delete" % (len(rem_list), len(loc_list))) + info("Summary: %d remote files to download, %d local files to delete" % (len(rem_list), len(loc_list))) for file in loc_list: if cfg.delete_removed: @@ -550,9 +550,15 @@ total_elapsed = time.time() - timestamp_start speed_fmt = formatSize(total_size/total_elapsed, human_readable = True, floating_point = True) - output("Done. 
Downloaded %d bytes in %0.1f seconds, %0.2f %sB/s" % - (total_size, total_elapsed, speed_fmt[0], speed_fmt[1])) + # Only print out the result if any work has been done or + # if the user asked for verbose output + outstr = "Done. Downloaded %d bytes in %0.1f seconds, %0.2f %sB/s" % (total_size, total_elapsed, speed_fmt[0], speed_fmt[1]) + if total_size > 0: + output(outstr) + else: + info(outstr) + def cmd_sync_local2remote(src, dst): def _build_attr_header(src): attrs = {} @@ -584,7 +590,7 @@ if cfg.encrypt: error("S3cmd 'sync' doesn't support GPG encryption, sorry.") error("Either use unconditional 's3cmd put --recursive'") - error("or disable encryption with --no-encryption parameter.") + error("or disable encryption with --no-encrypt parameter.") sys.exit(1) @@ -597,11 +603,11 @@ rem_list = _get_filelist_remote(dst_uri) rem_count = len(rem_list) - output("Found %d local files, %d remote files" % (loc_count, rem_count)) + info("Found %d local files, %d remote files" % (loc_count, rem_count)) _compare_filelists(loc_list, rem_list, True) - output("Summary: %d local files to upload, %d remote files to delete" % (len(loc_list), len(rem_list))) + info("Summary: %d local files to upload, %d remote files to delete" % (len(loc_list), len(rem_list))) for file in rem_list: uri = S3Uri("s3://" + dst_uri.bucket()+"/"+rem_list[file]['object_key']) @@ -640,9 +646,15 @@ total_elapsed = time.time() - timestamp_start speed_fmt = formatSize(total_size/total_elapsed, human_readable = True, floating_point = True) - output("Done. Uploaded %d bytes in %0.1f seconds, %0.2f %sB/s" % - (total_size, total_elapsed, speed_fmt[0], speed_fmt[1])) + # Only print out the result if any work has been done or + # if the user asked for verbose output + outstr = "Done. 
Uploaded %d bytes in %0.1f seconds, %0.2f %sB/s" % (total_size, total_elapsed, speed_fmt[0], speed_fmt[1]) + if total_size > 0: + output(outstr) + else: + info(outstr) + def cmd_sync(args): src = args.pop(0) dst = args.pop(0) This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
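The verbosity rework in r224 follows one pattern throughout: progress chatter drops to debug(), bookkeeping drops to info(), and the final summary is printed loudly only when any bytes actually moved. A minimal sketch of that summary logic, with hypothetical helper names (the real code routes through s3cmd's output()/info() wrappers):

```python
import logging

logging.basicConfig(level=logging.WARNING, format="%(levelname)s: %(message)s")

def report_done(total_size, total_elapsed, emit=print, info=logging.info):
    # Print the transfer summary unconditionally only if work was done;
    # otherwise keep it at the verbose (INFO) level, as r224 does.
    outstr = "Done. Transferred %d bytes in %0.1f seconds" % (total_size, total_elapsed)
    if total_size > 0:
        emit(outstr)   # always visible, like s3cmd's output()
    else:
        info(outstr)   # only visible with --verbose
    return outstr
```

With the default WARNING level, a zero-byte sync finishes silently instead of printing "Done. Transferred 0 bytes ...".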
From: <lu...@us...> - 2008-09-03 03:48:27
Revision: 225 http://s3tools.svn.sourceforge.net/s3tools/?rev=225&view=rev Author: ludvigm Date: 2008-09-03 03:48:23 +0000 (Wed, 03 Sep 2008) Log Message: ----------- Updated tasks Modified Paths: -------------- s3cmd/trunk/NEWS s3cmd/trunk/TODO Modified: s3cmd/trunk/NEWS =================================================================== --- s3cmd/trunk/NEWS 2008-09-03 03:39:36 UTC (rev 224) +++ s3cmd/trunk/NEWS 2008-09-03 03:48:23 UTC (rev 225) @@ -2,6 +2,9 @@ =========== * Allow access to upper-case named buckets with --use-old-connect-method parameter +* Removing of non-empty buckets with --force +* Recursively remove objects from buckets with a given + prefix with --recursive (-r) s3cmd 0.9.8.3 - 2008-07-29 ============= Modified: s3cmd/trunk/TODO =================================================================== --- s3cmd/trunk/TODO 2008-09-03 03:39:36 UTC (rev 224) +++ s3cmd/trunk/TODO 2008-09-03 03:48:23 UTC (rev 225) @@ -3,12 +3,10 @@ - For 0.9.9 - Implement 'cp' and 'mv' - - Better upload / download progress display (and remove - excessive useless transfer info from verbose/debug - output) + - Better upload / download progress display - Add --include/--include-from/--rinclude* for sync - Recursive processing / multiple sources with most commands. - - Force removal of non-empty buckets. + - Document --recursive and --force for buckets - After 1.0.0 - Speed up upload / download with multiple threads. This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
From: <lu...@us...> - 2008-09-03 04:42:20
Revision: 226 http://s3tools.svn.sourceforge.net/s3tools/?rev=226&view=rev Author: ludvigm Date: 2008-09-03 04:42:16 +0000 (Wed, 03 Sep 2008) Log Message: ----------- * s3cmd, S3/S3.py, S3/Config.py: Removed --use-old-connect-method again. Autodetect the need for old connect method instead. Modified Paths: -------------- s3cmd/trunk/ChangeLog s3cmd/trunk/S3/Config.py s3cmd/trunk/S3/S3.py s3cmd/trunk/s3cmd Modified: s3cmd/trunk/ChangeLog =================================================================== --- s3cmd/trunk/ChangeLog 2008-09-03 03:48:23 UTC (rev 225) +++ s3cmd/trunk/ChangeLog 2008-09-03 04:42:16 UTC (rev 226) @@ -1,5 +1,10 @@ 2008-09-03 Michal Ludvig <mi...@lo...> + * s3cmd, S3/S3.py, S3/Config.py: Removed --use-old-connect-method + again. Autodetect the need for old connect method instead. + +2008-09-03 Michal Ludvig <mi...@lo...> + * s3cmd, S3/S3.py: Make --verbose mode more useful and default mode less verbose. Modified: s3cmd/trunk/S3/Config.py =================================================================== --- s3cmd/trunk/S3/Config.py 2008-09-03 03:48:23 UTC (rev 225) +++ s3cmd/trunk/S3/Config.py 2008-09-03 04:42:16 UTC (rev 226) @@ -15,7 +15,6 @@ secret_key = "" host_base = "s3.amazonaws.com" host_bucket = "%(bucket)s.s3.amazonaws.com" - use_old_connect_method = False simpledb_host = "sdb.amazonaws.com" verbosity = logging.WARNING send_chunk = 4096 Modified: s3cmd/trunk/S3/S3.py =================================================================== --- s3cmd/trunk/S3/S3.py 2008-09-03 03:48:23 UTC (rev 225) +++ s3cmd/trunk/S3/S3.py 2008-09-03 04:42:16 UTC (rev 226) @@ -71,7 +71,7 @@ return httplib.HTTPConnection(self.get_hostname(bucket)) def get_hostname(self, bucket): - if bucket and not Config().use_old_connect_method: + if bucket and self.check_bucket_name_dns_conformity(bucket): if self.redir_map.has_key(bucket): host = self.redir_map[bucket] else: @@ -85,7 +85,7 @@ self.redir_map[bucket] = redir_hostname def format_uri(self, resource): - if 
resource['bucket'] and Config().use_old_connect_method: + if resource['bucket'] and not self.check_bucket_name_dns_conformity(resource['bucket']): uri = "/%s%s" % (resource['bucket'], resource['uri']) else: uri = resource['uri'] @@ -480,13 +480,35 @@ debug("SignHeaders: " + repr(h)) return base64.encodestring(hmac.new(self.config.secret_key, h, sha).digest()).strip() - def check_bucket_name(self, bucket): - invalid = re.compile("([^a-z0-9\._-])").search(bucket) - if invalid: - raise ParameterError("Bucket name '%s' contains disallowed character '%s'. The only supported ones are: lowercase us-ascii letters (a-z), digits (0-9), dot (.), hyphen (-) and underscore (_)." % (bucket, invalid.groups()[0])) + def check_bucket_name(self, bucket, dns_strict = True): + if dns_strict: + invalid = re.search("([^a-z0-9\.-])", bucket) + if invalid: + raise ParameterError("Bucket name '%s' contains disallowed character '%s'. The only supported ones are: lowercase us-ascii letters (a-z), digits (0-9), dot (.) and hyphen (-)." % (bucket, invalid.groups()[0])) + else: + invalid = re.search("([^A-Za-z0-9\._-])", bucket) + if invalid: + raise ParameterError("Bucket name '%s' contains disallowed character '%s'. The only supported ones are: us-ascii letters (a-z, A-Z), digits (0-9), dot (.), hyphen (-) and underscore (_)." % (bucket, invalid.groups()[0])) + if len(bucket) < 3: raise ParameterError("Bucket name '%s' is too short (min 3 characters)" % bucket) if len(bucket) > 255: raise ParameterError("Bucket name '%s' is too long (max 255 characters)" % bucket) + if dns_strict: + if len(bucket) > 63: + raise ParameterError("Bucket name '%s' is too long (max 63 characters)" % bucket) + if re.search("-\.", bucket): + raise ParameterError("Bucket name '%s' must not contain sequence '-.' for DNS compatibility" % bucket) + if re.search("\.\.", bucket): + raise ParameterError("Bucket name '%s' must not contain sequence '..' 
for DNS compatibility" % bucket) + if not re.search("^[0-9a-z]", bucket): + raise ParameterError("Bucket name '%s' must start with a letter or a digit" % bucket) + if not re.search("[0-9a-z]$", bucket): + raise ParameterError("Bucket name '%s' must end with a letter or a digit" % bucket) return True + def check_bucket_name_dns_conformity(self, bucket): + try: + return self.check_bucket_name(bucket, dns_strict = True) + except ParameterError: + return False Modified: s3cmd/trunk/s3cmd =================================================================== --- s3cmd/trunk/s3cmd 2008-09-03 03:48:23 UTC (rev 225) +++ s3cmd/trunk/s3cmd 2008-09-03 04:42:16 UTC (rev 226) @@ -942,7 +942,6 @@ optparser.add_option("-H", "--human-readable-sizes", dest="human_readable_sizes", action="store_true", help="Print sizes in human readable form.") optparser.add_option("-v", "--verbose", dest="verbosity", action="store_const", const=logging.INFO, help="Enable verbose output.") - optparser.add_option( "--use-old-connect-method", dest="use_old_connect_method", action="store_true", help="Use deprecated method for connection to S3. Allows for upper-case bucket names but doesn't allow for buckets in Europe") optparser.add_option("-d", "--debug", dest="verbosity", action="store_const", const=logging.DEBUG, help="Enable debug output.") optparser.add_option( "--version", dest="show_version", action="store_true", help="Show s3cmd version (%s) and exit." % (PkgInfo.version)) This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
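The DNS-conformity rules r226 adds can be sketched as a standalone predicate (a simplification of the patch above, not the exact upstream code, which raises ParameterError with per-rule messages):

```python
import re

def check_bucket_name_dns_conformity(bucket):
    """True if 'bucket' works in a virtual-hosted-style hostname
    (bucket.s3.amazonaws.com), per the rules the r226 patch enforces."""
    if re.search(r"[^a-z0-9.-]", bucket):
        return False          # only lowercase letters, digits, dot, hyphen
    if not (3 <= len(bucket) <= 63):
        return False          # DNS-strict length limits from the patch
    if re.search(r"-\.", bucket) or re.search(r"\.\.", bucket):
        return False          # no '-.' or '..' sequences
    if not re.search(r"^[0-9a-z]", bucket) or not re.search(r"[0-9a-z]$", bucket):
        return False          # must start and end with a letter or digit
    return True
```

get_hostname() and format_uri() then branch on this result: conformant buckets get the subdomain-style hostname, the rest fall back to the old path-style request form.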
From: <lu...@us...> - 2008-09-05 03:24:07
Revision: 227 http://s3tools.svn.sourceforge.net/s3tools/?rev=227&view=rev Author: ludvigm Date: 2008-09-05 03:24:04 +0000 (Fri, 05 Sep 2008) Log Message: ----------- * s3cmd: Rework UTF-8 output to keep sys.stdout untouched (or it'd break 's3cmd get' to stdout for binary files). Modified Paths: -------------- s3cmd/trunk/ChangeLog s3cmd/trunk/s3cmd Modified: s3cmd/trunk/ChangeLog =================================================================== --- s3cmd/trunk/ChangeLog 2008-09-03 04:42:16 UTC (rev 226) +++ s3cmd/trunk/ChangeLog 2008-09-05 03:24:04 UTC (rev 227) @@ -1,3 +1,8 @@ +2008-09-04 Michal Ludvig <mi...@lo...> + + * s3cmd: Rework UTF-8 output to keep sys.stdout untouched (or it'd + break 's3cmd get' to stdout for binary files). + 2008-09-03 Michal Ludvig <mi...@lo...> * s3cmd, S3/S3.py, S3/Config.py: Removed --use-old-connect-method Modified: s3cmd/trunk/s3cmd =================================================================== --- s3cmd/trunk/s3cmd 2008-09-03 04:42:16 UTC (rev 226) +++ s3cmd/trunk/s3cmd 2008-09-05 03:24:04 UTC (rev 227) @@ -21,8 +21,12 @@ from logging import debug, info, warning, error from distutils.spawn import find_executable +## Output UTF-8 in all cases, even on output redirects +_unicode_stdout = codecs.getwriter("utf-8")(sys.stdout) +_unicode_stderr = codecs.getwriter("utf-8")(sys.stderr) + def output(message): - print message + _unicode_stdout.write(message + "\n") def check_args_type(args, type, verbose_type): for arg in args: @@ -105,7 +109,7 @@ bucket = uri.bucket() object = uri.object() - output("Bucket '%s':" % bucket) + output("Bucket 's3://%s':" % bucket) if object.endswith('*'): object = object[:-1] try: @@ -957,7 +961,9 @@ ## Some mucking with logging levels to enable ## debugging/verbose output for config file parser on request - logging.basicConfig(level=options.verbosity, format='%(levelname)s: %(message)s') + logging.basicConfig(level=options.verbosity, + format='%(levelname)s: %(message)s', + stream = 
_unicode_stderr) if options.show_version: output("s3cmd version %s" % PkgInfo.version) @@ -1082,9 +1088,6 @@ from S3 import Utils from S3.Exceptions import * - ## Output UTF-8 in all cases, even on output redirects - sys.stdout = codecs.getwriter("utf-8")(sys.stdout) - main() sys.exit(0) except SystemExit, e: This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
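The key idea in r227 is that codecs.getwriter() wraps a byte stream in a UTF-8-encoding writer without replacing sys.stdout itself, so binary payloads ('s3cmd get' to stdout) still reach the raw stream. A sketch using an in-memory stream; the original is Python 2 (where sys.stdout accepts bytes directly), while on Python 3 one would wrap sys.stdout.buffer:

```python
import codecs
import io

def make_unicode_writer(byte_stream):
    # Same call r227 uses: codecs.getwriter("utf-8")(sys.stdout).
    # Text written here is encoded to UTF-8 bytes on the underlying stream.
    return codecs.getwriter("utf-8")(byte_stream)

def output(message, stream):
    stream.write(message + "\n")
```

Messages go through the wrapper; binary downloads keep using the untouched original stream.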
From: <lu...@us...> - 2008-09-08 00:49:48
Revision: 228 http://s3tools.svn.sourceforge.net/s3tools/?rev=228&view=rev Author: ludvigm Date: 2008-09-08 00:49:46 +0000 (Mon, 08 Sep 2008) Log Message: ----------- Added "artwork" directory with site-top logo for now. Added Paths: ----------- s3cmd/trunk/artwork/ s3cmd/trunk/artwork/AtomicClockRadio.ttf s3cmd/trunk/artwork/TypeRa.ttf s3cmd/trunk/artwork/site-top-full-size.xcf s3cmd/trunk/artwork/site-top.png s3cmd/trunk/artwork/site-top.xcf Property changes on: s3cmd/trunk/artwork/AtomicClockRadio.ttf ___________________________________________________________________ Added: svn:mime-type + application/x-font-ttf Property changes on: s3cmd/trunk/artwork/TypeRa.ttf ___________________________________________________________________ Added: svn:mime-type + application/x-font-ttf Property changes on: s3cmd/trunk/artwork/site-top-full-size.xcf ___________________________________________________________________ Added: svn:mime-type + image/x-xcf Property changes on: s3cmd/trunk/artwork/site-top.png ___________________________________________________________________ Added: svn:mime-type + image/png Property changes on: s3cmd/trunk/artwork/site-top.xcf ___________________________________________________________________ Added: svn:mime-type + image/x-xcf This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
From: <lu...@us...> - 2008-09-09 14:42:36
Revision: 230 http://s3tools.svn.sourceforge.net/s3tools/?rev=230&view=rev Author: ludvigm Date: 2008-09-09 14:42:34 +0000 (Tue, 09 Sep 2008) Log Message: ----------- * s3cmd, S3/S3Uri.py, S3/S3.py: All internal representations of S3Uri()s are Unicode (i.e. not UTF-8 but type()==unicode). It still doesn't work on non-UTF8 systems though. Modified Paths: -------------- s3cmd/trunk/ChangeLog s3cmd/trunk/S3/S3.py s3cmd/trunk/S3/S3Uri.py s3cmd/trunk/s3cmd Modified: s3cmd/trunk/ChangeLog =================================================================== --- s3cmd/trunk/ChangeLog 2008-09-08 05:12:59 UTC (rev 229) +++ s3cmd/trunk/ChangeLog 2008-09-09 14:42:34 UTC (rev 230) @@ -1,3 +1,9 @@ +2008-09-10 Michal Ludvig <mi...@lo...> + + * s3cmd, S3/S3Uri.py, S3/S3.py: All internal representations of + S3Uri()s are Unicode (i.e. not UTF-8 but type()==unicode). It + still doesn't work on non-UTF8 systems though. + 2008-09-04 Michal Ludvig <mi...@lo...> * s3cmd: Rework UTF-8 output to keep sys.stdout untouched (or it'd Modified: s3cmd/trunk/S3/S3.py =================================================================== --- s3cmd/trunk/S3/S3.py 2008-09-08 05:12:59 UTC (rev 229) +++ s3cmd/trunk/S3/S3.py 2008-09-09 14:42:34 UTC (rev 230) @@ -222,6 +222,8 @@ ## Low level methods def urlencode_string(self, string): + if type(string) == unicode: + string = string.encode("utf-8") encoded = "" ## List of characters that must be escaped for S3 ## Haven't found this in any official docs Modified: s3cmd/trunk/S3/S3Uri.py =================================================================== --- s3cmd/trunk/S3/S3Uri.py 2008-09-08 05:12:59 UTC (rev 229) +++ s3cmd/trunk/S3/S3Uri.py 2008-09-09 14:42:34 UTC (rev 230) @@ -6,6 +6,7 @@ import re import sys from BidirMap import BidirMap +from logging import debug class S3Uri(object): type = None @@ -32,10 +33,25 @@ def __str__(self): return self.uri() - + + def __unicode__(self): + return self.uri() + def public_url(self): raise ValueError("This S3 URI 
does not have Anonymous URL representation") - + + def _unicodise(self, string): + """ + Convert 'string' to Unicode or raise an exception. + """ + debug("Unicodising %r" % string) + if type(string) == unicode: + return string + try: + return string.decode("utf-8") + except UnicodeDecodeError: + raise UnicodeDecodeError("Conversion to unicode failed: %r" % string) + class S3UriS3(S3Uri): type = "s3" _re = re.compile("^s3://([^/]+)/?(.*)", re.IGNORECASE) @@ -45,7 +61,7 @@ raise ValueError("%s: not a S3 URI" % string) groups = match.groups() self._bucket = groups[0] - self._object = groups[1] + self._object = self._unicodise(groups[1]) def bucket(self): return self._bucket @@ -78,7 +94,7 @@ raise ValueError("%s: not a S3fs URI" % string) groups = match.groups() self._fsname = groups[0] - self._path = groups[1].split("/") + self._path = self._unicodise(groups[1]).split("/") def fsname(self): return self._fsname @@ -97,7 +113,7 @@ groups = match.groups() if groups[0] not in (None, "file://"): raise ValueError("%s: not a file:// URI" % string) - self._path = groups[1].split("/") + self._path = self._unicodise(groups[1]).split("/") def path(self): return "/".join(self._path) Modified: s3cmd/trunk/s3cmd =================================================================== --- s3cmd/trunk/s3cmd 2008-09-08 05:12:59 UTC (rev 229) +++ s3cmd/trunk/s3cmd 2008-09-09 14:42:34 UTC (rev 230) @@ -371,12 +371,12 @@ rem_base_len = len(rem_base) rem_list = {} for object in response['list']: - key = object['Key'][rem_base_len:].encode('utf-8') + key = object['Key'][rem_base_len:] rem_list[key] = { 'size' : int(object['Size']), # 'mtime' : dateS3toUnix(object['LastModified']), ## That's upload time, not our lastmod time :-( 'md5' : object['ETag'][1:-1], - 'object_key' : object['Key'].encode('utf-8'), + 'object_key' : object['Key'] } return rem_list This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
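The convention r230 establishes — unicode everywhere internally, UTF-8 only at the wire — shows up in the two-line guard added to urlencode_string(). A sketch of that boundary (the 'safe' set here is a simplification; upstream keeps its own escape list, noting "Haven't found this in any official docs"):

```python
def urlencode_string(string, safe="/-_.~"):
    # Encode unicode to UTF-8 only at the request boundary, as the
    # r230 guard does, then percent-escape everything non-safe.
    if isinstance(string, str):            # py3 'str' is what 2.x called 'unicode'
        string = string.encode("utf-8")
    encoded = []
    for b in string:
        c = chr(b)
        if b < 128 and (c.isalnum() or c in safe):
            encoded.append(c)
        else:
            encoded.append("%%%02X" % b)   # e.g. 0xC5 -> "%C5"
    return "".join(encoded)
```

A multi-byte character thus becomes one percent-escape per UTF-8 byte, which is what S3 expects in request URIs.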
From: <lu...@us...> - 2008-09-09 15:31:24
Revision: 232 http://s3tools.svn.sourceforge.net/s3tools/?rev=232&view=rev Author: ludvigm Date: 2008-09-09 15:31:20 +0000 (Tue, 09 Sep 2008) Log Message: ----------- * testsuite, run-tests.py: Added testsuite with first few tests. Modified Paths: -------------- s3cmd/trunk/ChangeLog s3cmd/trunk/run-tests.sh Added Paths: ----------- s3cmd/trunk/run-tests.py s3cmd/trunk/testsuite/ s3cmd/trunk/testsuite/binary/ s3cmd/trunk/testsuite/binary/random-crap s3cmd/trunk/testsuite/binary/random-crap.md5 s3cmd/trunk/testsuite/unicode/ s3cmd/trunk/testsuite/unicode/?\197?\170?\197?\134?\208?\135?\208?\140?\197?\147?\196?\145?\208?\151/ s3cmd/trunk/testsuite/unicode/?\197?\170?\197?\134?\208?\135?\208?\140?\197?\147?\196?\145?\208?\151/?\197?\189?\197?\175?\197?\190o s3cmd/trunk/testsuite/unicode/?\197?\170?\197?\134?\208?\135?\208?\140?\197?\147?\196?\145?\208?\151/?\226?\152?\186 unicode ?\226?\130?\172 rocks ?\226?\132?\162 s3cmd/trunk/testsuite/unicode/?\197?\189?\197?\175?\197?\190o Modified: s3cmd/trunk/ChangeLog =================================================================== --- s3cmd/trunk/ChangeLog 2008-09-09 14:55:01 UTC (rev 231) +++ s3cmd/trunk/ChangeLog 2008-09-09 15:31:20 UTC (rev 232) @@ -1,5 +1,9 @@ 2008-09-10 Michal Ludvig <mi...@lo...> + * testsuite, run-tests.py: Added testsuite with first few tests. + +2008-09-10 Michal Ludvig <mi...@lo...> + * s3cmd, S3/S3Uri.py, S3/S3.py: All internal representations of S3Uri()s are Unicode (i.e. not UTF-8 but type()==unicode). It still doesn't work on non-UTF8 systems though. 
Added: s3cmd/trunk/run-tests.py =================================================================== --- s3cmd/trunk/run-tests.py (rev 0) +++ s3cmd/trunk/run-tests.py 2008-09-09 15:31:20 UTC (rev 232) @@ -0,0 +1,93 @@ +#!/usr/bin/env python + +## Amazon S3cmd - testsuite +## Author: Michal Ludvig <mi...@lo...> +## http://www.logix.cz/michal +## License: GPL Version 2 + +import sys +import re +from subprocess import Popen, PIPE, STDOUT + +count_pass = 0 +count_fail = 0 + +def test(label, cmd_args = [], retcode = 0, must_find = [], must_not_find = [], must_find_re = [], must_not_find_re = []): + def failure(message = ""): + global count_fail + if message: + message = " (%s)" % message + print "FAIL%s" % (message) + count_fail += 1 + print "----" + print " ".join([arg.find(" ")>=0 and "'%s'" % arg or arg for arg in cmd_args]) + print "----" + print stdout + print "----" + return 1 + def success(message = ""): + global count_pass + if message: + message = " (%s)" % message + print "OK%s" % (message) + count_pass += 1 + return 0 + def compile_list(_list, regexps = False): + if type(_list) not in [ list, tuple ]: + _list = [_list] + + if regexps == False: + _list = [re.escape(item) for item in _list] + + return [re.compile(item) for item in _list] + + print (label + " ").ljust(30, "."), + sys.stdout.flush() + + p = Popen(cmd_args, stdout = PIPE, stderr = STDOUT, universal_newlines = True) + stdout, stderr = p.communicate() + if retcode != p.returncode: + return failure("retcode: %d, expected: %d" % (p.returncode, retcode)) + + find_list = [] + find_list.extend(compile_list(must_find)) + find_list.extend(compile_list(must_find_re, regexps = True)) + not_find_list = [] + not_find_list.extend(compile_list(must_not_find)) + not_find_list.extend(compile_list(must_not_find_re, regexps = True)) + + for pattern in find_list: + match = pattern.search(stdout) + if not match: + return failure("pattern not found: %s" % match.group()) + for pattern in not_find_list: + match = 
pattern.search(stdout) + if match: + return failure("pattern found: %s" % match.group()) + return success() + +def test_s3cmd(label, cmd_args = [], **kwargs): + if not cmd_args[0].endswith("s3cmd"): + cmd_args.insert(0, "./s3cmd") + return test(label, cmd_args, **kwargs) + +test_s3cmd("Remove test buckets", ['rb', '-r', 's3://s3cmd-autotest-1', 's3://s3cmd-autotest-2', 's3://s3cmd-autotest-3'], + must_find = [ "Bucket 's3://s3cmd-autotest-1/' removed", + "Bucket 's3://s3cmd-autotest-2/' removed", + "Bucket 's3://s3cmd-autotest-3/' removed" ]) + +test_s3cmd("Create one bucket", ['mb', 's3://s3cmd-autotest-1'], + must_find = "Bucket 's3://s3cmd-autotest-1/' created") + +test_s3cmd("Create multiple buckets", ['mb', 's3://s3cmd-autotest-2', 's3://s3cmd-autotest-3'], + must_find = [ "Bucket 's3://s3cmd-autotest-2/' created", "Bucket 's3://s3cmd-autotest-3/' created" ]) + +test_s3cmd("Invalid bucket name", ["mb", "s3://s3cmd-Autotest-.-"], + retcode = 1, + must_find = "ERROR: Parameter problem: Bucket name", + must_not_find_re = "Bucket.*created") + +test_s3cmd("Buckets list", ["ls"], + must_find = [ "autotest-1", "autotest-2", "autotest-3" ], must_not_find_re = "Autotest") + +test_s3cmd("Sync with exclude", ['sync', 'testsuite', 's3://s3cmd-autotest-1/xyz/', '--exclude', '*/thousands/*', '--no-encrypt']) Property changes on: s3cmd/trunk/run-tests.py ___________________________________________________________________ Added: svn:executable + * Modified: s3cmd/trunk/run-tests.sh =================================================================== --- s3cmd/trunk/run-tests.sh 2008-09-09 14:55:01 UTC (rev 231) +++ s3cmd/trunk/run-tests.sh 2008-09-09 15:31:20 UTC (rev 232) @@ -25,19 +25,14 @@ ./s3cmd sync --delete s3cmd-${VER} s3://s3cmd-autotest/sync-test rm -f s3cmd-${VER}/S3/PkgInfo.py rm -f s3cmd-${VER}/s3cmd -./s3cmd sync --delete --exclude "/s3cmd-${VER}/S3/*" s3://s3cmd-autotest/sync-test s3cmd-${VER} +./s3cmd sync --delete --exclude "/s3cmd-${VER}/S3/S3*" 
s3://s3cmd-autotest/sync-test s3cmd-${VER} rm -rf s3cmd-${VER} ./s3cmd rb s3://s3cmd-autotest/ || true # ERROR: S3 error: 409 (Conflict): BucketNotEmpty -# hack to remove all objects from a bucket -mkdir empty -./s3cmd sync --delete empty/ s3://s3cmd-autotest -rm -rf empty +./s3cmd rb --force s3://s3cmd-autotest/ -./s3cmd rb s3://s3cmd-autotest/ - set +x echo; echo Property changes on: s3cmd/trunk/testsuite/binary/random-crap ___________________________________________________________________ Added: svn:mime-type + application/octet-stream Added: s3cmd/trunk/testsuite/binary/random-crap.md5 =================================================================== --- s3cmd/trunk/testsuite/binary/random-crap.md5 (rev 0) +++ s3cmd/trunk/testsuite/binary/random-crap.md5 2008-09-09 15:31:20 UTC (rev 232) @@ -0,0 +1 @@ +cb76ecee9a834eadd96b226493acac28 random-crap Added: s3cmd/trunk/testsuite/unicode/?\197?\170?\197?\134?\208?\135?\208?\140?\197?\147?\196?\145?\208?\151/?\226?\152?\186 unicode ?\226?\130?\172 rocks ?\226?\132?\162 =================================================================== This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
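The matching core of run-tests.py above: must_find entries are re.escape()d into literal patterns, must_find_re entries are compiled as-is, and the harness checks both lists against captured stdout. (Note in passing that the r232 listing calls match.group() inside the "pattern not found" branch where match is None, which would raise AttributeError.) A sketch of the matching logic alone:

```python
import re

def compile_list(_list, regexps=False):
    # Same shape as r232's helper: strings become one-element lists,
    # non-regexp entries are escaped to match literally.
    if not isinstance(_list, (list, tuple)):
        _list = [_list]
    if not regexps:
        _list = [re.escape(item) for item in _list]
    return [re.compile(item) for item in _list]

def check_output(stdout, must_find=(), must_not_find=()):
    for pattern in compile_list(must_find):
        if not pattern.search(stdout):
            return False
    for pattern in compile_list(must_not_find):
        if pattern.search(stdout):
            return False
    return True
```

This is why must_find = "Bucket 's3://...' created" works verbatim despite containing regex metacharacters.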
From: <lu...@us...> - 2008-09-15 02:52:37
Revision: 233 http://s3tools.svn.sourceforge.net/s3tools/?rev=233&view=rev Author: ludvigm Date: 2008-09-15 02:52:31 +0000 (Mon, 15 Sep 2008) Log Message: ----------- * S3/S3.py: "s3cmd mb" can create upper-case buckets again in US. Non-US (e.g. EU) bucket names must conform strict DNS-rules. Modified Paths: -------------- s3cmd/trunk/ChangeLog s3cmd/trunk/S3/S3.py Modified: s3cmd/trunk/ChangeLog =================================================================== --- s3cmd/trunk/ChangeLog 2008-09-09 15:31:20 UTC (rev 232) +++ s3cmd/trunk/ChangeLog 2008-09-15 02:52:31 UTC (rev 233) @@ -1,3 +1,9 @@ +2008-09-15 Michal Ludvig <mi...@lo...> + + * S3/S3.py: "s3cmd mb" can create upper-case buckets again + in US. Non-US (e.g. EU) bucket names must conform strict + DNS-rules. + 2008-09-10 Michal Ludvig <mi...@lo...> * testsuite, run-tests.py: Added testsuite with first few tests. Modified: s3cmd/trunk/S3/S3.py =================================================================== --- s3cmd/trunk/S3/S3.py 2008-09-09 15:31:20 UTC (rev 232) +++ s3cmd/trunk/S3/S3.py 2008-09-15 02:52:31 UTC (rev 233) @@ -126,7 +126,6 @@ return response def bucket_create(self, bucket, bucket_location = None): - self.check_bucket_name(bucket) headers = SortedDict() body = "" if bucket_location and bucket_location.strip().upper() != "US": @@ -134,6 +133,9 @@ body += bucket_location.strip().upper() body += "</LocationConstraint></CreateBucketConfiguration>" debug("bucket_location: " + body) + self.check_bucket_name(bucket, dns_strict = True) + else: + self.check_bucket_name(bucket, dns_strict = False) headers["content-length"] = len(body) if self.config.acl_public: headers["x-amz-acl"] = "public-read" This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
From: <lu...@us...> - 2008-09-15 03:07:39
Revision: 234 http://s3tools.svn.sourceforge.net/s3tools/?rev=234&view=rev Author: ludvigm Date: 2008-09-15 03:07:32 +0000 (Mon, 15 Sep 2008) Log Message: ----------- * S3/S3Uri.py: Display public URLs correctly for non-DNS buckets. Modified Paths: -------------- s3cmd/trunk/ChangeLog s3cmd/trunk/S3/S3.py s3cmd/trunk/S3/S3Uri.py Modified: s3cmd/trunk/ChangeLog =================================================================== --- s3cmd/trunk/ChangeLog 2008-09-15 02:52:31 UTC (rev 233) +++ s3cmd/trunk/ChangeLog 2008-09-15 03:07:32 UTC (rev 234) @@ -1,8 +1,9 @@ 2008-09-15 Michal Ludvig <mi...@lo...> * S3/S3.py: "s3cmd mb" can create upper-case buckets again - in US. Non-US (e.g. EU) bucket names must conform strict + in US. Non-US (e.g. EU) bucket names must conform to strict DNS-rules. + * S3/S3Uri.py: Display public URLs correctly for non-DNS buckets. 2008-09-10 Michal Ludvig <mi...@lo...> Modified: s3cmd/trunk/S3/S3.py =================================================================== --- s3cmd/trunk/S3/S3.py 2008-09-15 02:52:31 UTC (rev 233) +++ s3cmd/trunk/S3/S3.py 2008-09-15 03:07:32 UTC (rev 234) @@ -484,7 +484,8 @@ debug("SignHeaders: " + repr(h)) return base64.encodestring(hmac.new(self.config.secret_key, h, sha).digest()).strip() - def check_bucket_name(self, bucket, dns_strict = True): + @staticmethod + def check_bucket_name(bucket, dns_strict = True): if dns_strict: invalid = re.search("([^a-z0-9\.-])", bucket) if invalid: @@ -511,8 +512,9 @@ raise ParameterError("Bucket name '%s' must end with a letter or a digit" % bucket) return True - def check_bucket_name_dns_conformity(self, bucket): + @staticmethod + def check_bucket_name_dns_conformity(bucket): try: - return self.check_bucket_name(bucket, dns_strict = True) + return S3.check_bucket_name(bucket, dns_strict = True) except ParameterError: return False Modified: s3cmd/trunk/S3/S3Uri.py =================================================================== --- s3cmd/trunk/S3/S3Uri.py 2008-09-15 02:52:31 
UTC (rev 233) +++ s3cmd/trunk/S3/S3Uri.py 2008-09-15 03:07:32 UTC (rev 234) @@ -7,6 +7,7 @@ import sys from BidirMap import BidirMap from logging import debug +from S3 import S3 class S3Uri(object): type = None @@ -79,12 +80,15 @@ return "/".join(["s3:/", self._bucket, self._object]) def public_url(self): - return "http://%s.s3.amazonaws.com/%s" % (self._bucket, self._object) + if S3.check_bucket_name_dns_conformity(self._bucket): + return "http://%s.s3.amazonaws.com/%s" % (self._bucket, self._object) + else: + return "http://s3.amazonaws.com/%s/%s" % (self._bucket, self._object) @staticmethod def compose_uri(bucket, object = ""): return "s3://%s/%s" % (bucket, object) - + class S3UriS3FS(S3Uri): type = "s3fs" _re = re.compile("^s3fs://([^/]*)/?(.*)", re.IGNORECASE) This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
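The r234 public_url() branch, reduced to a sketch: DNS-conformant buckets get the virtual-hosted URL, others fall back to the path-style form (in the real code the flag comes from S3.check_bucket_name_dns_conformity(), hence the @staticmethod change):

```python
def public_url(bucket, object_key, dns_conformant):
    # Virtual-hosted style only works when the bucket name is a valid
    # DNS label; path-style works for any bucket name.
    if dns_conformant:
        return "http://%s.s3.amazonaws.com/%s" % (bucket, object_key)
    return "http://s3.amazonaws.com/%s/%s" % (bucket, object_key)
```

So an upper-case US bucket, re-allowed by r233, still gets a usable public URL via the path-style fallback.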
From: <lu...@us...> - 2008-09-15 12:51:43
Revision: 240 http://s3tools.svn.sourceforge.net/s3tools/?rev=240&view=rev Author: ludvigm Date: 2008-09-15 12:51:41 +0000 (Mon, 15 Sep 2008) Log Message: ----------- * S3/S3.py: Don't run into ZeroDivisionError when speed counter returns 0s elapsed on upload/download file. Modified Paths: -------------- s3cmd/trunk/ChangeLog s3cmd/trunk/S3/S3.py Modified: s3cmd/trunk/ChangeLog =================================================================== --- s3cmd/trunk/ChangeLog 2008-09-15 12:15:08 UTC (rev 239) +++ s3cmd/trunk/ChangeLog 2008-09-15 12:51:41 UTC (rev 240) @@ -1,3 +1,8 @@ +2008-09-16 Michal Ludvig <mi...@lo...> + + * S3/S3.py: Don't run into ZeroDivisionError when speed counter + returns 0s elapsed on upload/download file. + 2008-09-15 Michal Ludvig <mi...@lo...> * s3cmd, S3/S3.py, S3/Utils.py, S3/S3Uri.py, S3/Exceptions.py: Modified: s3cmd/trunk/S3/S3.py =================================================================== --- s3cmd/trunk/S3/S3.py 2008-09-15 12:15:08 UTC (rev 239) +++ s3cmd/trunk/S3/S3.py 2008-09-15 12:51:41 UTC (rev 240) @@ -380,7 +380,7 @@ response["data"] = http_response.read() response["elapsed"] = timestamp_end - timestamp_start response["size"] = size_total - response["speed"] = float(response["size"]) / response["elapsed"] + response["speed"] = response["elapsed"] and float(response["size"]) / response["elapsed"] or float(-1) conn.close() if response["status"] == 307: @@ -460,7 +460,7 @@ response["md5match"] = response["headers"]["etag"].find(response["md5"]) >= 0 response["elapsed"] = timestamp_end - timestamp_start response["size"] = size_recvd - response["speed"] = float(response["size"]) / response["elapsed"] + response["speed"] = response["elapsed"] and float(response["size"]) / response["elapsed"] or float(-1) if response["size"] != long(response["headers"]["content-length"]): warning("Reported size (%s) does not match received size (%s)" % ( response["headers"]["content-length"], response["size"])) This was sent by the 
SourceForge.net collaborative development platform, the world's largest Open Source development site. |
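The r240 guard uses Python 2's pre-ternary "and/or" idiom, which has a known wrinkle: it also yields -1 for a completed 0-byte transfer, because 0.0 is falsy. Both forms side by side (the second is the equivalent real conditional expression):

```python
def speed_and_or(size, elapsed):
    # The exact guard from the r240 patch.
    return elapsed and float(size) / elapsed or float(-1)

def speed(size, elapsed):
    # Same intent with a conditional expression; unlike the and/or
    # form, a 0-byte transfer correctly reports 0.0, not -1.
    return float(size) / elapsed if elapsed else -1.0
```

For the bug being fixed (elapsed == 0 on ultrafast links) both behave the same, so the patch achieves its goal either way.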
From: <lu...@us...> - 2008-09-15 12:59:19
Revision: 243 http://s3tools.svn.sourceforge.net/s3tools/?rev=243&view=rev Author: ludvigm Date: 2008-09-15 12:59:16 +0000 (Mon, 15 Sep 2008) Log Message: ----------- * NEWS: s3cmd 0.9.8.4 released from branches/0.9.8.x SVN branch. Modified Paths: -------------- s3cmd/trunk/ChangeLog s3cmd/trunk/NEWS Modified: s3cmd/trunk/ChangeLog =================================================================== --- s3cmd/trunk/ChangeLog 2008-09-15 12:56:55 UTC (rev 242) +++ s3cmd/trunk/ChangeLog 2008-09-15 12:59:16 UTC (rev 243) @@ -1,5 +1,9 @@ 2008-09-16 Michal Ludvig <mi...@lo...> + * NEWS: s3cmd 0.9.8.4 released from branches/0.9.8.x SVN branch. + +2008-09-16 Michal Ludvig <mi...@lo...> + * S3/S3.py: Don't run into ZeroDivisionError when speed counter returns 0s elapsed on upload/download file. Modified: s3cmd/trunk/NEWS =================================================================== --- s3cmd/trunk/NEWS 2008-09-15 12:56:55 UTC (rev 242) +++ s3cmd/trunk/NEWS 2008-09-15 12:59:16 UTC (rev 243) @@ -6,6 +6,14 @@ * Recursively remove objects from buckets with a given prefix with --recursive (-r) +s3cmd 0.9.8.4 - 2008-09-16 +============= +* Bugfix release: +* Restored access to upper-case named buckets. +* Improved handling of filenames with Unicode characters. +* Avoid ZeroDivisionError on ultrafast links (for instance + on Amazon EC2) + s3cmd 0.9.8.3 - 2008-07-29 ============= * Bugfix release. Avoid running out-of-memory in MD5'ing This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
From: <lu...@us...> - 2008-11-16 09:37:50
|
Revision: 253 http://s3tools.svn.sourceforge.net/s3tools/?rev=253&view=rev Author: ludvigm Date: 2008-11-16 09:37:44 +0000 (Sun, 16 Nov 2008) Log Message: ----------- Merge from 0.9.8.x branch, rel 244: * s3cmd: Unicode brainfuck again. This time force all output in UTF-8, will see how many complaints we'll get... Modified Paths: -------------- s3cmd/trunk/ChangeLog s3cmd/trunk/s3cmd Modified: s3cmd/trunk/ChangeLog =================================================================== --- s3cmd/trunk/ChangeLog 2008-11-06 11:57:56 UTC (rev 252) +++ s3cmd/trunk/ChangeLog 2008-11-16 09:37:44 UTC (rev 253) @@ -1,3 +1,9 @@ +2008-11-16 Michal Ludvig <mi...@lo...> + + Merge from 0.9.8.x branch, rel 244: + * s3cmd: Unicode brainfuck again. This time force all output + in UTF-8, will see how many complaints we'll get... + 2008-09-16 Michal Ludvig <mi...@lo...> * NEWS: s3cmd 0.9.8.4 released from branches/0.9.8.x SVN branch. Modified: s3cmd/trunk/s3cmd =================================================================== --- s3cmd/trunk/s3cmd 2008-11-06 11:57:56 UTC (rev 252) +++ s3cmd/trunk/s3cmd 2008-11-16 09:37:44 UTC (rev 253) @@ -22,10 +22,14 @@ from distutils.spawn import find_executable ## Output native on TTY, UTF-8 otherwise (redirects) -_stdout = sys.stdout.isatty() and sys.stdout or codecs.getwriter("utf-8")(sys.stdout) -_stderr = sys.stderr.isatty() and sys.stderr or codecs.getwriter("utf-8")(sys.stderr) -#_stdout = codecs.getwriter("utf-8")(sys.stdout) -#_stderr = codecs.getwriter("utf-8")(sys.stderr) +#_stdout = sys.stdout.isatty() and sys.stdout or codecs.getwriter("utf-8")(sys.stdout) +#_stderr = sys.stderr.isatty() and sys.stderr or codecs.getwriter("utf-8")(sys.stderr) +## Output UTF-8 in all cases +_stdout = codecs.getwriter("utf-8")(sys.stdout) +_stderr = codecs.getwriter("utf-8")(sys.stderr) +## Leave it to the terminal +#_stdout = sys.stdout +#_stderr = sys.stderr def output(message): _stdout.write(message + "\n") |
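`codecs.getwriter("utf-8")` returns a StreamWriter class that encodes everything written through it before handing the bytes to the wrapped stream, which is why the same text now reaches a TTY and a redirected pipe identically. A sketch with an in-memory byte stream standing in for `sys.stdout` (assumption: a modern Python, where the wrapped object must be a byte stream):

```python
import codecs
import io

# Stand-in for sys.stdout: a byte stream we can inspect afterwards.
buf = io.BytesIO()
_stdout = codecs.getwriter("utf-8")(buf)

# Non-ASCII output is encoded to UTF-8 on the way through,
# whether or not the destination is a terminal.
_stdout.write(u"\u00e9tude\n")
encoded = buf.getvalue()  # b'\xc3\xa9tude\n'
```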
From: <lu...@us...> - 2008-11-16 09:38:56
|
Revision: 254 http://s3tools.svn.sourceforge.net/s3tools/?rev=254&view=rev Author: ludvigm Date: 2008-11-16 09:38:51 +0000 (Sun, 16 Nov 2008) Log Message: ----------- Merge from 0.9.8.x branch, rel 245: * S3/S3.py: Escape parameters in strings. Fixes sync to and ls of directories with spaces. (Thx Lubomir Rintel from Fedora Project) Modified Paths: -------------- s3cmd/trunk/ChangeLog s3cmd/trunk/S3/S3.py Modified: s3cmd/trunk/ChangeLog =================================================================== --- s3cmd/trunk/ChangeLog 2008-11-16 09:37:44 UTC (rev 253) +++ s3cmd/trunk/ChangeLog 2008-11-16 09:38:51 UTC (rev 254) @@ -1,5 +1,8 @@ 2008-11-16 Michal Ludvig <mi...@lo...> + Merge from 0.9.8.x branch, rel 245: + * S3/S3.py: Escape parameters in strings. Fixes sync to and + ls of directories with spaces. (Thx Lubomir Rintel from Fedora Project) Merge from 0.9.8.x branch, rel 244: * s3cmd: Unicode brainfuck again. This time force all output in UTF-8, will see how many complaints we'll get... Modified: s3cmd/trunk/S3/S3.py =================================================================== --- s3cmd/trunk/S3/S3.py 2008-11-16 09:37:44 UTC (rev 253) +++ s3cmd/trunk/S3/S3.py 2008-11-16 09:38:51 UTC (rev 254) @@ -110,6 +110,7 @@ def _get_contents(data): return getListFromXml(data, "Contents") + prefix = self.urlencode_string(prefix) request = self.create_request("BUCKET_LIST", bucket = bucket, prefix = prefix) response = self.send_request(request) #debug(response) |
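The one-line fix percent-encodes the listing prefix before it is embedded in the request URL, so a space in a directory name no longer produces a malformed query. A standalone approximation of what such an `urlencode_string` helper does (the real method lives on the S3 class; this version is an illustration, not the s3cmd source):

```python
from urllib.parse import quote

def urlencode_string(s):
    # Approximation of S3.urlencode_string(): percent-encode the
    # prefix but leave '/' alone so key hierarchies stay readable.
    return quote(s, safe="/")

# "sync" or "ls" of a directory with spaces now builds a valid URL.
prefix = urlencode_string("backup dir/photos 2008/")  # 'backup%20dir/photos%202008/'
```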
From: <lu...@us...> - 2008-11-16 09:43:49
|
Revision: 255 http://s3tools.svn.sourceforge.net/s3tools/?rev=255&view=rev Author: ludvigm Date: 2008-11-16 09:43:44 +0000 (Sun, 16 Nov 2008) Log Message: ----------- Merge from 0.9.8.x branch, rel 246: * s3cmd, S3/S3.py, S3/Exceptions.py: Don't abort 'sync' or 'put' on files that can't be open (e.g. Permision denied). Print a warning and skip over instead. Modified Paths: -------------- s3cmd/trunk/ChangeLog s3cmd/trunk/S3/Exceptions.py s3cmd/trunk/S3/S3.py s3cmd/trunk/s3cmd Modified: s3cmd/trunk/ChangeLog =================================================================== --- s3cmd/trunk/ChangeLog 2008-11-16 09:38:51 UTC (rev 254) +++ s3cmd/trunk/ChangeLog 2008-11-16 09:43:44 UTC (rev 255) @@ -1,5 +1,9 @@ 2008-11-16 Michal Ludvig <mi...@lo...> + Merge from 0.9.8.x branch, rel 246: + * s3cmd, S3/S3.py, S3/Exceptions.py: Don't abort 'sync' or 'put' on files + that can't be open (e.g. Permision denied). Print a warning and skip over + instead. Merge from 0.9.8.x branch, rel 245: * S3/S3.py: Escape parameters in strings. Fixes sync to and ls of directories with spaces. 
(Thx Lubomir Rintel from Fedora Project) Modified: s3cmd/trunk/S3/Exceptions.py =================================================================== --- s3cmd/trunk/S3/Exceptions.py 2008-11-16 09:38:51 UTC (rev 254) +++ s3cmd/trunk/S3/Exceptions.py 2008-11-16 09:43:44 UTC (rev 255) @@ -48,5 +48,8 @@ class S3DownloadError(S3Exception): pass +class InvalidFileError(S3Exception): + pass + class ParameterError(S3Exception): pass Modified: s3cmd/trunk/S3/S3.py =================================================================== --- s3cmd/trunk/S3/S3.py 2008-11-16 09:38:51 UTC (rev 254) +++ s3cmd/trunk/S3/S3.py 2008-11-16 09:43:44 UTC (rev 255) @@ -162,12 +162,12 @@ raise ValueError("Expected URI type 's3', got '%s'" % uri.type) if not os.path.isfile(filename): - raise ParameterError("%s is not a regular file" % filename) + raise InvalidFileError("%s is not a regular file" % filename) try: file = open(filename, "rb") size = os.stat(filename)[ST_SIZE] except IOError, e: - raise ParameterError("%s: %s" % (filename, e.strerror)) + raise InvalidFileError("%s: %s" % (filename, e.strerror)) headers = SortedDict() if extra_headers: headers.update(extra_headers) Modified: s3cmd/trunk/s3cmd =================================================================== --- s3cmd/trunk/s3cmd 2008-11-16 09:38:51 UTC (rev 254) +++ s3cmd/trunk/s3cmd 2008-11-16 09:43:44 UTC (rev 255) @@ -209,6 +209,9 @@ except S3UploadError, e: error("Upload of '%s' failed too many times. Skipping that file." % real_filename) continue + except InvalidFileError, e: + warning("File can not be uploaded: %s" % e) + continue speed_fmt = formatSize(response["speed"], human_readable = True, floating_point = True) output("File '%s' stored as %s (%d bytes in %0.1f seconds, %0.2f %sB/s) [%d of %d]" % (file, uri_final, response["size"], response["elapsed"], speed_fmt[0], speed_fmt[1], @@ -648,6 +651,9 @@ except S3UploadError, e: error("%s: upload failed too many times. Skipping that file." 
% src) continue + except InvalidFileError, e: + warning("File can not be uploaded: %s" % e) + continue speed_fmt = formatSize(response["speed"], human_readable = True, floating_point = True) output("File '%s' stored as %s (%d bytes in %0.1f seconds, %0.2f %sB/s) [%d of %d]" % (src, uri, response["size"], response["elapsed"], speed_fmt[0], speed_fmt[1], |
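The new `InvalidFileError` lets the upload loop distinguish "this file cannot be opened" from harder failures and skip over it with a warning instead of aborting the whole sync. A self-contained sketch of that control flow (the `upload` stub and the file list are invented for illustration):

```python
class S3Exception(Exception):
    pass

class InvalidFileError(S3Exception):
    pass

def upload(filename, readable):
    # Stub: the real code raises InvalidFileError when open() fails,
    # e.g. with "Permission denied".
    if not readable:
        raise InvalidFileError("%s: Permission denied" % filename)
    return filename

uploaded, skipped = [], []
for name, readable in [("a.txt", True), ("secret.key", False), ("b.txt", True)]:
    try:
        uploaded.append(upload(name, readable))
    except InvalidFileError as e:
        # Warn and move on -- the remaining files still get uploaded.
        skipped.append(str(e))
        continue
```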
From: <lu...@us...> - 2008-11-16 09:46:14
|
Revision: 256 http://s3tools.svn.sourceforge.net/s3tools/?rev=256&view=rev Author: ludvigm Date: 2008-11-16 09:46:03 +0000 (Sun, 16 Nov 2008) Log Message: ----------- Merge from 0.9.8.x branch, rel 247: * s3cmd: Re-raise the right exception. Modified Paths: -------------- s3cmd/trunk/ChangeLog s3cmd/trunk/s3cmd Modified: s3cmd/trunk/ChangeLog =================================================================== --- s3cmd/trunk/ChangeLog 2008-11-16 09:43:44 UTC (rev 255) +++ s3cmd/trunk/ChangeLog 2008-11-16 09:46:03 UTC (rev 256) @@ -1,5 +1,7 @@ 2008-11-16 Michal Ludvig <mi...@lo...> + Merge from 0.9.8.x branch, rel 247: + * s3cmd: Re-raise the right exception. Merge from 0.9.8.x branch, rel 246: * s3cmd, S3/S3.py, S3/Exceptions.py: Don't abort 'sync' or 'put' on files that can't be open (e.g. Permision denied). Print a warning and skip over Modified: s3cmd/trunk/s3cmd =================================================================== --- s3cmd/trunk/s3cmd 2008-11-16 09:43:44 UTC (rev 255) +++ s3cmd/trunk/s3cmd 2008-11-16 09:46:03 UTC (rev 256) @@ -536,7 +536,7 @@ if e.errno in (errno.EPERM, errno.EACCES): warning("%s not writable: %s" % (dst_file, e.strerror)) continue - raise + raise e except KeyboardInterrupt: try: dst_stream.close() except: pass |
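In Python 2, a bare `raise` re-raises the most recently handled exception, which after intervening cleanup code may no longer be the one caught at the top of the `except` block; binding it and writing `raise e` pins down exactly which exception propagates. A sketch of the pitfall (Python 3's bare `raise` inside an `except` block already re-raises the bound exception, so this is illustrative, not a present-day bug):

```python
def reraise_specific(exc):
    try:
        raise exc
    except Exception as e:
        try:
            # Some intervening operation fails and is handled...
            raise ValueError("cleanup hiccup")
        except ValueError:
            pass
        # ...so we re-raise the exception we actually care about
        # by name rather than trusting a bare `raise`.
        raise e

try:
    reraise_specific(OSError("Permission denied"))
    result = None
except OSError as err:
    result = str(err)
```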
From: <lu...@us...> - 2008-11-16 09:47:36
|
Revision: 257 http://s3tools.svn.sourceforge.net/s3tools/?rev=257&view=rev Author: ludvigm Date: 2008-11-16 09:47:26 +0000 (Sun, 16 Nov 2008) Log Message: ----------- Merge from 0.9.8.x branch, rel 248: * s3cmd: Don't leak open filehandles in sync. Thx Patrick Linskey for report. Modified Paths: -------------- s3cmd/trunk/ChangeLog s3cmd/trunk/s3cmd Modified: s3cmd/trunk/ChangeLog =================================================================== --- s3cmd/trunk/ChangeLog 2008-11-16 09:46:03 UTC (rev 256) +++ s3cmd/trunk/ChangeLog 2008-11-16 09:47:26 UTC (rev 257) @@ -1,5 +1,7 @@ 2008-11-16 Michal Ludvig <mi...@lo...> + Merge from 0.9.8.x branch, rel 248: + * s3cmd: Don't leak open filehandles in sync. Thx Patrick Linskey for report. Merge from 0.9.8.x branch, rel 247: * s3cmd: Re-raise the right exception. Merge from 0.9.8.x branch, rel 246: Modified: s3cmd/trunk/s3cmd =================================================================== --- s3cmd/trunk/s3cmd 2008-11-16 09:46:03 UTC (rev 256) +++ s3cmd/trunk/s3cmd 2008-11-16 09:47:26 UTC (rev 257) @@ -513,7 +513,7 @@ debug("dst_file=%s" % dst_file) # This will have failed should the file exist - os.open(dst_file, open_flags) + os.close(os.open(dst_file, open_flags)) # Yeah I know there is a race condition here. Sadly I don't know how to open() in exclusive mode. dst_stream = open(dst_file, "wb") response = s3.object_get(uri, dst_stream) |
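`os.open()` returns a raw file descriptor, and the old code discarded it, leaking one descriptor per downloaded file until the process hit its fd limit; wrapping the call in `os.close()` keeps the existence check while releasing the descriptor. A sketch (the path and `open_flags` value are assumptions for illustration; `os.O_EXCL` is, incidentally, the exclusive-create mode the in-code comment wishes for):

```python
import os
import tempfile

dst_file = os.path.join(tempfile.mkdtemp(), "downloaded.dat")
open_flags = os.O_CREAT | os.O_EXCL | os.O_WRONLY

# os.open() fails with OSError if dst_file already exists; close the
# returned descriptor immediately so it does not leak.
os.close(os.open(dst_file, open_flags))

created = os.path.exists(dst_file)
try:
    os.close(os.open(dst_file, open_flags))  # second attempt must fail
    exclusive = False
except OSError:
    exclusive = True
```

With `O_EXCL` in the flags, the create-if-absent check is atomic, sidestepping the race condition the original comment apologizes for.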