From: <lu...@us...> - 2008-12-31 04:20:42
Revision: 315
          http://s3tools.svn.sourceforge.net/s3tools/?rev=315&view=rev
Author:   ludvigm
Date:     2008-12-31 04:20:33 +0000 (Wed, 31 Dec 2008)

Log Message:
-----------
* NEWS: Updated.

Modified Paths:
--------------
    s3cmd/trunk/ChangeLog
    s3cmd/trunk/NEWS

Modified: s3cmd/trunk/ChangeLog
===================================================================
--- s3cmd/trunk/ChangeLog	2008-12-31 04:09:24 UTC (rev 314)
+++ s3cmd/trunk/ChangeLog	2008-12-31 04:20:33 UTC (rev 315)
@@ -1,5 +1,6 @@
 2008-12-31  Michal Ludvig  <mi...@lo...>
+	* NEWS: Updated.
 	* testsuite: reorganised UTF-8 files, added GBK encoding files,
 	  moved encoding-specific files to 'tar.gz' archives.
 	* run-tests.py: Adapted to the above change.
@@ -19,7 +20,6 @@
 	---------------------------
 	* S3/PkgInfo.py: Bumped up version to 0.9.9-pre4
-	* NEWS: Updated.
 2008-12-30  Michal Ludvig  <mi...@lo...>

Modified: s3cmd/trunk/NEWS
===================================================================
--- s3cmd/trunk/NEWS	2008-12-31 04:09:24 UTC (rev 314)
+++ s3cmd/trunk/NEWS	2008-12-31 04:20:33 UTC (rev 315)
@@ -1,4 +1,4 @@
-s3cmd 0.9.9-pre4
+s3cmd 0.9.9-pre4 - 2008-12-30
 ================
 * Support for non-recursive [ls]
 * Support for multiple sources and recursive [get].
@@ -6,14 +6,18 @@
 * New option --skip-existing for [get] and [sync].
 * Improved Progress class (fixes Mac OS X)
 * Fixed installation on Windows and Mac OS X.
+* Don't print nasty backtrace on KeyboardInterrupt.
+* Should work fine on non-UTF8 systems, provided all
+  the files are in current system encoding.
+* System encoding can be overridden using --encoding.
-s3cmd 0.9.9-pre3
+s3cmd 0.9.9-pre3 - 2008-12-01
 ================
 * Bugfixes only
   - Fixed sync from S3 to local
   - Fixed progress meter with Unicode chars
-s3cmd 0.9.9-pre2
+s3cmd 0.9.9-pre2 - 2008-11-24
 ================
 * Implemented progress meter (--progress / --no-progress)
 * Removing of non-empty buckets with --force
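The NEWS entries above describe guessing the system encoding and overriding it with --encoding. A minimal sketch of how such detection can work in modern Python (function name and messages are illustrative, not s3cmd's exact code):

    import locale, sys

    def guess_encoding(override=None):
        """Return the encoding to use for filenames: an explicit
        override (like --encoding) wins, otherwise fall back to the
        locale's preferred encoding."""
        if override:
            return override
        enc = locale.getpreferredencoding()
        if not enc:
            sys.exit("Guessing system encoding failed. Set $LANG or use --encoding.")
        return enc

    print(guess_encoding())        # e.g. 'UTF-8'
    print(guess_encoding("GBK"))   # forced override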
From: <lu...@us...> - 2008-12-31 04:22:38
Revision: 316
          http://s3tools.svn.sourceforge.net/s3tools/?rev=316&view=rev
Author:   ludvigm
Date:     2008-12-31 04:22:29 +0000 (Wed, 31 Dec 2008)

Log Message:
-----------
* S3/PkgInfo.py: Bumped up version to 0.9.9-pre4
* S3/Exceptions.py: Added missing imports.

Modified Paths:
--------------
    s3cmd/trunk/ChangeLog
    s3cmd/trunk/S3/Exceptions.py
    s3cmd/trunk/S3/PkgInfo.py

Modified: s3cmd/trunk/ChangeLog
===================================================================
--- s3cmd/trunk/ChangeLog	2008-12-31 04:20:33 UTC (rev 315)
+++ s3cmd/trunk/ChangeLog	2008-12-31 04:22:29 UTC (rev 316)
@@ -1,5 +1,7 @@
 2008-12-31  Michal Ludvig  <mi...@lo...>
+	* S3/PkgInfo.py: Bumped up version to 0.9.9-pre4
+	* S3/Exceptions.py: Added missing imports.
 	* NEWS: Updated.
 	* testsuite: reorganised UTF-8 files, added GBK encoding files,
 	  moved encoding-specific files to 'tar.gz' archives.
@@ -19,8 +21,6 @@
 	* Released version 0.9.9-pre4
 	---------------------------
-	* S3/PkgInfo.py: Bumped up version to 0.9.9-pre4
-
 2008-12-30  Michal Ludvig  <mi...@lo...>
 	* s3cmd: Replace unknown Unicode characters with '?'

Modified: s3cmd/trunk/S3/Exceptions.py
===================================================================
--- s3cmd/trunk/S3/Exceptions.py	2008-12-31 04:20:33 UTC (rev 315)
+++ s3cmd/trunk/S3/Exceptions.py	2008-12-31 04:22:29 UTC (rev 316)
@@ -3,7 +3,7 @@
 ## http://www.logix.cz/michal
 ## License: GPL Version 2
-from Utils import getRootTagName
+from Utils import getRootTagName, unicodise, deunicodise
 from logging import debug, info, warning, error
 try:

Modified: s3cmd/trunk/S3/PkgInfo.py
===================================================================
--- s3cmd/trunk/S3/PkgInfo.py	2008-12-31 04:20:33 UTC (rev 315)
+++ s3cmd/trunk/S3/PkgInfo.py	2008-12-31 04:22:29 UTC (rev 316)
@@ -1,5 +1,5 @@
 package = "s3cmd"
-version = "0.9.9-pre3"
+version = "0.9.9-pre4"
 url = "http://s3tools.logix.cz"
 license = "GPL version 2"
 short_description = "S3cmd is a tool for managing Amazon S3 storage space."
From: <lu...@us...> - 2008-12-31 04:25:06
Revision: 317
          http://s3tools.svn.sourceforge.net/s3tools/?rev=317&view=rev
Author:   ludvigm
Date:     2008-12-31 04:24:56 +0000 (Wed, 31 Dec 2008)

Log Message:
-----------
* testsuite/exclude.encodings: Added.

Modified Paths:
--------------
    s3cmd/trunk/ChangeLog

Added Paths:
-----------
    s3cmd/trunk/testsuite/exclude.encodings

Modified: s3cmd/trunk/ChangeLog
===================================================================
--- s3cmd/trunk/ChangeLog	2008-12-31 04:22:29 UTC (rev 316)
+++ s3cmd/trunk/ChangeLog	2008-12-31 04:24:56 UTC (rev 317)
@@ -7,6 +7,7 @@
 	  moved encoding-specific files to 'tar.gz' archives.
 	* run-tests.py: Adapted to the above change.
 	* run-tests.sh: removed.
+	* testsuite/exclude.encodings: Added.
 	* run-tests.py: Don't assume utf-8, use preferred
 	  encoding instead.
 	* s3cmd, S3/Utils.py, S3/Exceptions.py, S3/Progress.py,

Added: s3cmd/trunk/testsuite/exclude.encodings
===================================================================
--- s3cmd/trunk/testsuite/exclude.encodings	                        (rev 0)
+++ s3cmd/trunk/testsuite/exclude.encodings	2008-12-31 04:24:56 UTC (rev 317)
@@ -0,0 +1 @@
+encodings/*
From: <lu...@us...> - 2008-12-31 04:29:07
Revision: 318
          http://s3tools.svn.sourceforge.net/s3tools/?rev=318&view=rev
Author:   ludvigm
Date:     2008-12-31 04:28:58 +0000 (Wed, 31 Dec 2008)

Log Message:
-----------
* s3cmd: Print a nice error message when --exclude-from
  file is not readable.

Modified Paths:
--------------
    s3cmd/trunk/ChangeLog
    s3cmd/trunk/s3cmd

Modified: s3cmd/trunk/ChangeLog
===================================================================
--- s3cmd/trunk/ChangeLog	2008-12-31 04:24:56 UTC (rev 317)
+++ s3cmd/trunk/ChangeLog	2008-12-31 04:28:58 UTC (rev 318)
@@ -1,5 +1,7 @@
 2008-12-31  Michal Ludvig  <mi...@lo...>
+	* s3cmd: Print a nice error message when --exclude-from
+	  file is not readable.
 	* S3/PkgInfo.py: Bumped up version to 0.9.9-pre4
 	* S3/Exceptions.py: Added missing imports.
 	* NEWS: Updated.

Modified: s3cmd/trunk/s3cmd
===================================================================
--- s3cmd/trunk/s3cmd	2008-12-31 04:24:56 UTC (rev 317)
+++ s3cmd/trunk/s3cmd	2008-12-31 04:28:58 UTC (rev 318)
@@ -1047,7 +1047,11 @@
         sys.exit(1)

 def process_exclude_from_file(exf, exclude_array):
-    exfi = open(exf, "rt")
+    try:
+        exfi = open(exf, "rt")
+    except IOError, e:
+        error(e)
+        sys.exit(1)
     for ex in exfi:
         ex = ex.strip()
         if re.match("^#", ex) or re.match("^\s*$", ex):
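The committed hunk uses Python 2's `except IOError, e:` syntax. The same pattern in current Python, as a self-contained sketch (the friendly-error behaviour mirrors the commit; the exit message wording is mine):

    import re, sys

    def process_exclude_from_file(path, exclude_array):
        try:
            exfi = open(path, "rt")
        except IOError as e:
            sys.exit("Error: %s" % e)   # clean message instead of a traceback
        for ex in exfi:
            ex = ex.strip()
            # skip comments and blank lines, as in the committed code
            if re.match(r"^#", ex) or re.match(r"^\s*$", ex):
                continue
            exclude_array.append(ex)
        exfi.close()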
From: <lu...@us...> - 2008-12-31 04:33:01
Revision: 319
          http://s3tools.svn.sourceforge.net/s3tools/?rev=319&view=rev
Author:   ludvigm
Date:     2008-12-31 04:32:58 +0000 (Wed, 31 Dec 2008)

Log Message:
-----------
* testsuite/unicode: removed.

Modified Paths:
--------------
    s3cmd/trunk/ChangeLog

Removed Paths:
-------------
    s3cmd/trunk/testsuite/unicode/

Modified: s3cmd/trunk/ChangeLog
===================================================================
--- s3cmd/trunk/ChangeLog	2008-12-31 04:28:58 UTC (rev 318)
+++ s3cmd/trunk/ChangeLog	2008-12-31 04:32:58 UTC (rev 319)
@@ -6,7 +6,8 @@
 	* S3/Exceptions.py: Added missing imports.
 	* NEWS: Updated.
 	* testsuite: reorganised UTF-8 files, added GBK encoding files,
-	  moved encoding-specific files to 'tar.gz' archives.
+	  moved encoding-specific files to 'tar.gz' archives, removed
+	  unicode dir.
 	* run-tests.py: Adapted to the above change.
 	* run-tests.sh: removed.
 	* testsuite/exclude.encodings: Added.
From: <lu...@us...> - 2008-12-31 04:55:36
Revision: 320
          http://s3tools.svn.sourceforge.net/s3tools/?rev=320&view=rev
Author:   ludvigm
Date:     2008-12-31 04:55:32 +0000 (Wed, 31 Dec 2008)

Log Message:
-----------
* s3cmd: Reworked internal handling of unicode vs encoded filenames.
  Should replace unknown characters with '?' instead of bailing out.
* run-tests.py: Display system encoding in use.

Modified Paths:
--------------
    s3cmd/trunk/ChangeLog
    s3cmd/trunk/run-tests.py
    s3cmd/trunk/s3cmd

Modified: s3cmd/trunk/ChangeLog
===================================================================
--- s3cmd/trunk/ChangeLog	2008-12-31 04:32:58 UTC (rev 319)
+++ s3cmd/trunk/ChangeLog	2008-12-31 04:55:32 UTC (rev 320)
@@ -1,5 +1,11 @@
 2008-12-31  Michal Ludvig  <mi...@lo...>
+	* s3cmd: Reworked internal handling of unicode vs encoded filenames.
+	  Should replace unknown characters with '?' instead of bailing out.
+
+2008-12-31  Michal Ludvig  <mi...@lo...>
+
+	* run-tests.py: Display system encoding in use.
 	* s3cmd: Print a nice error message when --exclude-from
 	  file is not readable.
 	* S3/PkgInfo.py: Bumped up version to 0.9.9-pre4

Modified: s3cmd/trunk/run-tests.py
===================================================================
--- s3cmd/trunk/run-tests.py	2008-12-31 04:32:58 UTC (rev 319)
+++ s3cmd/trunk/run-tests.py	2008-12-31 04:55:32 UTC (rev 320)
@@ -37,16 +37,19 @@
 if not encoding:
     print "Guessing current system encoding failed. Consider setting $LANG variable."
     sys.exit(1)
+else:
+    print "System encoding: " + encoding

 have_encoding = os.path.isdir('testsuite/encodings/' + encoding)
 if not have_encoding and os.path.isfile('testsuite/encodings/%s.tar.gz' % encoding):
-    os.system("tar xvz -C testsuite/encodings -f testsuite/encodings/UTF-8.tar.gz")
+    os.system("tar xvz -C testsuite/encodings -f testsuite/encodings/%s.tar.gz" % encoding)
     have_encoding = os.path.isdir('testsuite/encodings/' + encoding)

 if have_encoding:
     enc_base_remote = "s3://s3cmd-autotest-1/xyz/%s/" % encoding
     enc_pattern = patterns[encoding]
-    print "System encoding: " + encoding
+else:
+    print encoding + " specific files not found."

 def test(label, cmd_args = [], retcode = 0, must_find = [], must_not_find = [], must_find_re = [], must_not_find_re = []):
     def failure(message = ""):
@@ -259,7 +262,7 @@

 ## ====== Sync more to S3
-test_s3cmd("Sync more to S3", ['sync', 'testsuite', 's3://s3cmd-autotest-1/xyz/', '--exclude', '*.png', '--no-encrypt', '--exclude-from', 'testsuite/exclude.encodings' ])
+test_s3cmd("Sync more to S3", ['sync', 'testsuite', 's3://s3cmd-autotest-1/xyz/', '--no-encrypt' ])

 ## ====== Rename within S3

Modified: s3cmd/trunk/s3cmd
===================================================================
--- s3cmd/trunk/s3cmd	2008-12-31 04:32:58 UTC (rev 319)
+++ s3cmd/trunk/s3cmd	2008-12-31 04:55:32 UTC (rev 320)
@@ -341,7 +341,8 @@
     for item in remote_keys:
         seq += 1
         uri = item['remote_uri']
-        destination = item['local_filename']
+        ## Encode / Decode destination with "replace" to make sure it's compatible with current encoding
+        destination = unicodise_safe(item['local_filename'])
         seq_label = "[%d of %d]" % (seq, total_count)
         start_position = 0
@@ -478,7 +479,7 @@

 def _get_filelist_local(local_uri):
     info(u"Compiling list of local files...")
-    local_path = local_uri.path()
+    local_path = deunicodise(local_uri.path())
     if os.path.isdir(local_path):
         loc_base = os.path.join(local_path, "")
         filelist = os.walk(local_path)
@@ -497,9 +498,10 @@
                 ## Synchronize symlinks... one day
                 ## for now skip over
                 continue
-            file = full_name[loc_base_len:]
+            file = unicodise(full_name[loc_base_len:])
             sr = os.stat_result(os.lstat(full_name))
             loc_list[file] = {
+                'full_name_unicoded' : unicodise(full_name),
                 'full_name' : full_name,
                 'size' : sr.st_size,
                 'mtime' : sr.st_mtime,
@@ -757,6 +759,7 @@
 def _build_attr_header(src):
     import pwd, grp
     attrs = {}
+    src = deunicodise(src)
     st = os.stat_result(os.stat(src))
     for attr in cfg.preserve_attrs_list:
         if attr == 'uname':
@@ -823,17 +826,17 @@
     file_list.sort()
     for file in file_list:
         seq += 1
-        src = loc_list[file]['full_name']
+        src = loc_list[file]
         uri = S3Uri(dst_base + file)
         seq_label = "[%d of %d]" % (seq, total_count)
         attr_header = None
         if cfg.preserve_attrs:
-            attr_header = _build_attr_header(src)
+            attr_header = _build_attr_header(src['full_name'])
             debug(attr_header)
         try:
-            response = s3.object_put(src, uri, attr_header, extra_label = seq_label)
+            response = s3.object_put(src['full_name'], uri, attr_header, extra_label = seq_label)
         except S3UploadError, e:
-            error(u"%s: upload failed too many times. Skipping that file." % src)
+            error(u"%s: upload failed too many times. Skipping that file." % src['full_name_unicode'])
             continue
         except InvalidFileError, e:
             warning(u"File can not be uploaded: %s" % e)
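This revision routes filenames through unicodise()/deunicodise() so undecodable names degrade gracefully instead of aborting the transfer. A minimal sketch of what such helpers look like, inferred from the calls in the diff (not the project's verbatim code, and written for modern Python where text and bytes are distinct types):

    def unicodise(s, encoding="utf-8"):
        """Bytes -> text; undecodable byte sequences become the
        replacement character rather than raising UnicodeDecodeError."""
        if isinstance(s, str):
            return s
        return s.decode(encoding, errors="replace")

    def deunicodise(s, encoding="utf-8"):
        """Text -> bytes for os.* calls; characters the target encoding
        can't represent become '?' rather than raising UnicodeEncodeError."""
        if isinstance(s, bytes):
            return s
        return s.encode(encoding, errors="replace")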
From: <lu...@us...> - 2008-12-31 04:57:36
Revision: 321
          http://s3tools.svn.sourceforge.net/s3tools/?rev=321&view=rev
Author:   ludvigm
Date:     2008-12-31 04:57:27 +0000 (Wed, 31 Dec 2008)

Log Message:
-----------
* Released version 0.9.9-pre4

Modified Paths:
--------------
    s3cmd/trunk/ChangeLog
    s3cmd/trunk/S3/PkgInfo.py

Modified: s3cmd/trunk/ChangeLog
===================================================================
--- s3cmd/trunk/ChangeLog	2008-12-31 04:55:32 UTC (rev 320)
+++ s3cmd/trunk/ChangeLog	2008-12-31 04:57:27 UTC (rev 321)
@@ -1,5 +1,10 @@
 2008-12-31  Michal Ludvig  <mi...@lo...>
+	* Released version 0.9.9-pre4
+	---------------------------
+
+2008-12-31  Michal Ludvig  <mi...@lo...>
+
 	* s3cmd: Reworked internal handling of unicode vs encoded filenames.
 	  Should replace unknown characters with '?' instead of bailing out.
@@ -28,11 +33,6 @@
 2008-12-30  Michal Ludvig  <mi...@lo...>
-	* Released version 0.9.9-pre4
-	---------------------------
-
-2008-12-30  Michal Ludvig  <mi...@lo...>
-
 	* s3cmd: Replace unknown Unicode characters with '?'
 	  to avoid UnicodeEncodeError's. Also make all output
 	  strings unicode.

Modified: s3cmd/trunk/S3/PkgInfo.py
===================================================================
--- s3cmd/trunk/S3/PkgInfo.py	2008-12-31 04:55:32 UTC (rev 320)
+++ s3cmd/trunk/S3/PkgInfo.py	2008-12-31 04:57:27 UTC (rev 321)
@@ -6,6 +6,7 @@
 long_description = """
 S3cmd lets you copy files from/to Amazon S3
 (Simple Storage Service) using a simple to use
-command line client.
+command line client. Supports rsync-like backup,
+encryption, etc.
 """
From: <w_...@us...> - 2008-12-31 12:20:57
Revision: 322
          http://s3tools.svn.sourceforge.net/s3tools/?rev=322&view=rev
Author:   w_tell
Date:     2008-12-31 12:20:53 +0000 (Wed, 31 Dec 2008)

Log Message:
-----------
* S3/S3.py, S3/Utils.py: Use 'hashlib' instead of md5 and sha
  modules to avoid Python 2.6 warnings.

Modified Paths:
--------------
    s3cmd/trunk/ChangeLog
    s3cmd/trunk/S3/S3.py
    s3cmd/trunk/S3/Utils.py

Modified: s3cmd/trunk/ChangeLog
===================================================================
--- s3cmd/trunk/ChangeLog	2008-12-31 04:57:27 UTC (rev 321)
+++ s3cmd/trunk/ChangeLog	2008-12-31 12:20:53 UTC (rev 322)
@@ -1,3 +1,8 @@
+2009-01-01  W. Tell  <w_tell -at- sourceforge>
+
+	* S3/S3.py, S3/Utils.py: Use 'hashlib' instead of md5 and sha
+	  modules to avoid Python 2.6 warnings.
+
 2008-12-31  Michal Ludvig  <mi...@lo...>
 	* Released version 0.9.9-pre4

Modified: s3cmd/trunk/S3/S3.py
===================================================================
--- s3cmd/trunk/S3/S3.py	2008-12-31 04:57:27 UTC (rev 321)
+++ s3cmd/trunk/S3/S3.py	2008-12-31 12:20:53 UTC (rev 322)
@@ -7,15 +7,19 @@
 import os, os.path
 import base64
 import time
-import md5
-import sha
-import hmac
 import httplib
 import logging
 import mimetypes
 from logging import debug, info, warning, error
 from stat import ST_SIZE

+try:
+    from hashlib import md5, sha1
+except ImportError:
+    import md5
+    import sha as sha1
+import hmac
+
 from Utils import *
 from SortedDict import SortedDict
 from BidirMap import BidirMap
@@ -420,7 +424,7 @@
             else:
                 raise S3UploadError("Upload failed for: %s" % resource['uri'])
         file.seek(0)
-        md5_hash = md5.new()
+        md5_hash = md5()
         try:
             while (size_left > 0):
                 debug("SendFile: Reading up to %d bytes from '%s'" % (self.config.send_chunk, file.name))
@@ -544,7 +548,7 @@
         if start_position == 0:
             # Only compute MD5 on the fly if we're downloading from beginning
             # Otherwise we'd get a nonsense.
-            md5_hash = md5.new()
+            md5_hash = md5()
         size_left = int(response["headers"]["content-length"])
         size_total = start_position + size_left
         current_position = start_position
@@ -626,7 +630,7 @@
             h += "/" + resource['bucket']
         h += resource['uri']
         debug("SignHeaders: " + repr(h))
-        return base64.encodestring(hmac.new(self.config.secret_key, h, sha).digest()).strip()
+        return base64.encodestring(hmac.new(self.config.secret_key, h, sha1).digest()).strip()

     @staticmethod
     def check_bucket_name(bucket, dns_strict = True):

Modified: s3cmd/trunk/S3/Utils.py
===================================================================
--- s3cmd/trunk/S3/Utils.py	2008-12-31 04:57:27 UTC (rev 321)
+++ s3cmd/trunk/S3/Utils.py	2008-12-31 12:20:53 UTC (rev 322)
@@ -8,7 +8,10 @@
 import re
 import string
 import random
-import md5
+try:
+    import hashlib as hash
+except ImportError:
+    import md5 as hash
 import errno
 from logging import debug, info, warning, error
@@ -146,7 +149,7 @@
     return mktmpsomething(prefix, randchars, createfunc)

 def hash_file_md5(filename):
-    h = md5.new()
+    h = hash.md5()
     f = open(filename, "rb")
     while True:
         # Hash 32kB chunks
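The guarded import is the core of this change: prefer hashlib (added in Python 2.5, mandatory by 2.6 where the md5/sha modules warn on import) and fall back to the old modules on older interpreters. The pattern in isolation, as a sketch:

    try:
        from hashlib import md5, sha1    # Python >= 2.5 and all Python 3
    except ImportError:
        from md5 import md5              # pre-hashlib fallback (these modules
        from sha import sha as sha1      # no longer exist in Python 3)

    digest = md5(b"hello").hexdigest()   # same call works via either branch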
From: <w_...@us...> - 2008-12-31 12:47:05
Revision: 323
          http://s3tools.svn.sourceforge.net/s3tools/?rev=323&view=rev
Author:   w_tell
Date:     2008-12-31 12:47:01 +0000 (Wed, 31 Dec 2008)

Log Message:
-----------
* S3/S3.py, S3/Utils.py: Use 'hashlib' instead of md5 and sha
  modules to avoid Python 2.6 warnings.

Modified Paths:
--------------
    s3cmd/trunk/ChangeLog
    s3cmd/trunk/S3/S3.py
    s3cmd/trunk/S3/Utils.py

Modified: s3cmd/trunk/ChangeLog
===================================================================
--- s3cmd/trunk/ChangeLog	2008-12-31 12:20:53 UTC (rev 322)
+++ s3cmd/trunk/ChangeLog	2008-12-31 12:47:01 UTC (rev 323)
@@ -1,7 +1,7 @@
 2009-01-01  W. Tell  <w_tell -at- sourceforge>

 	* S3/S3.py, S3/Utils.py: Use 'hashlib' instead of md5 and sha
-	  modules to avoid Python 2.6 warnings.
+	  modules to avoid Python 2.6 warnings.

 2008-12-31  Michal Ludvig  <mi...@lo...>

Modified: s3cmd/trunk/S3/S3.py
===================================================================
--- s3cmd/trunk/S3/S3.py	2008-12-31 12:20:53 UTC (rev 322)
+++ s3cmd/trunk/S3/S3.py	2008-12-31 12:47:01 UTC (rev 323)
@@ -16,8 +16,8 @@
 try:
     from hashlib import md5, sha1
 except ImportError:
-    import md5
-    import sha as sha1
+    from md5 import md5
+    from sha import sha as sha1
 import hmac

 from Utils import *

Modified: s3cmd/trunk/S3/Utils.py
===================================================================
--- s3cmd/trunk/S3/Utils.py	2008-12-31 12:20:53 UTC (rev 322)
+++ s3cmd/trunk/S3/Utils.py	2008-12-31 12:47:01 UTC (rev 323)
@@ -9,9 +9,9 @@
 import string
 import random
 try:
-    import hashlib as hash
+    from hashlib import md5
 except ImportError:
-    import md5 as hash
+    from md5 import md5
 import errno
 from logging import debug, info, warning, error
@@ -149,7 +149,7 @@
     return mktmpsomething(prefix, randchars, createfunc)

 def hash_file_md5(filename):
-    h = hash.md5()
+    h = md5()
     f = open(filename, "rb")
     while True:
         # Hash 32kB chunks
From: <w_...@us...> - 2008-12-31 12:57:25
Revision: 324
          http://s3tools.svn.sourceforge.net/s3tools/?rev=324&view=rev
Author:   w_tell
Date:     2008-12-31 12:57:19 +0000 (Wed, 31 Dec 2008)

Log Message:
-----------
Fixed Python 2.4 compatibility after the conversion to 'hashlib'.

Modified Paths:
--------------
    s3cmd/trunk/ChangeLog
    s3cmd/trunk/S3/S3.py
    s3cmd/trunk/s3cmd

Modified: s3cmd/trunk/ChangeLog
===================================================================
--- s3cmd/trunk/ChangeLog	2008-12-31 12:47:01 UTC (rev 323)
+++ s3cmd/trunk/ChangeLog	2008-12-31 12:57:19 UTC (rev 324)
@@ -1,7 +1,7 @@
 2009-01-01  W. Tell  <w_tell -at- sourceforge>

 	* S3/S3.py, S3/Utils.py: Use 'hashlib' instead of md5 and sha
-	  modules to avoid Python 2.6 warnings.
+	  modules to avoid Python 2.6 warnings.

 2008-12-31  Michal Ludvig  <mi...@lo...>

Modified: s3cmd/trunk/S3/S3.py
===================================================================
--- s3cmd/trunk/S3/S3.py	2008-12-31 12:47:01 UTC (rev 323)
+++ s3cmd/trunk/S3/S3.py	2008-12-31 12:57:19 UTC (rev 324)
@@ -17,7 +17,7 @@
 try:
     from hashlib import md5, sha1
 except ImportError:
     from md5 import md5
-    from sha import sha as sha1
+    import sha as sha1
 import hmac

 from Utils import *

Modified: s3cmd/trunk/s3cmd
===================================================================
--- s3cmd/trunk/s3cmd	2008-12-31 12:47:01 UTC (rev 323)
+++ s3cmd/trunk/s3cmd	2008-12-31 12:57:19 UTC (rev 324)
@@ -768,14 +768,14 @@
             except KeyError:
                 attr = "uid"
                 val = st.st_uid
-                warning(u"%s: Owner username not known. Storing UID=%d instead." % (src, val))
+                warning(u"%s: Owner username not known. Storing UID=%d instead." % (unicodise(src), val))
         elif attr == 'gname':
             try:
                 val = grp.getgrgid(st.st_gid).gr_name
             except KeyError:
                 attr = "gid"
                 val = st.st_gid
-                warning(u"%s: Owner groupname not known. Storing GID=%d instead." % (src, val))
+                warning(u"%s: Owner groupname not known. Storing GID=%d instead." % (unicodise(src), val))
         else:
             val = getattr(st, 'st_' + attr)
         attrs[attr] = val
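The reason the fallback reverts to `import sha as sha1` (a module, not a constructor) is hmac's digestmod argument: on Python 2.4, hmac.new() called digestmod.new() internally, so it needed a PEP 247 module, while 2.5+ also accepts a bare constructor such as hashlib.sha1 - an assumption based on the old library behaviour, not stated in the commit. On current Pythons only the constructor form applies:

    import hmac
    from hashlib import sha1

    # key and message are illustrative; s3cmd signs its request string this way
    sig = hmac.new(b"secret-key", b"string-to-sign", sha1).digest()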
From: <lu...@us...> - 2009-01-03 08:37:12
Revision: 325
          http://s3tools.svn.sourceforge.net/s3tools/?rev=325&view=rev
Author:   ludvigm
Date:     2009-01-03 08:37:08 +0000 (Sat, 03 Jan 2009)

Log Message:
-----------
* s3cmd: Don't fail when neither $HOME nor %USERPROFILE% is set.
  (fixes #2483388)

Modified Paths:
--------------
    s3cmd/trunk/ChangeLog
    s3cmd/trunk/s3cmd

Modified: s3cmd/trunk/ChangeLog
===================================================================
--- s3cmd/trunk/ChangeLog	2008-12-31 12:57:19 UTC (rev 324)
+++ s3cmd/trunk/ChangeLog	2009-01-03 08:37:08 UTC (rev 325)
@@ -1,3 +1,8 @@
+2009-01-03  Michal Ludvig  <mi...@lo...>
+
+	* s3cmd: Don't fail when neither $HOME nor %USERPROFILE% is set.
+	  (fixes #2483388)
+
 2009-01-01  W. Tell  <w_tell -at- sourceforge>

 	* S3/S3.py, S3/Utils.py: Use 'hashlib' instead of md5 and sha

Modified: s3cmd/trunk/s3cmd
===================================================================
--- s3cmd/trunk/s3cmd	2008-12-31 12:57:19 UTC (rev 324)
+++ s3cmd/trunk/s3cmd	2009-01-03 08:37:08 UTC (rev 325)
@@ -1118,6 +1118,7 @@
     optparser = OptionParser(option_class=OptionMimeType, formatter=MyHelpFormatter())
     #optparser.disable_interspersed_args()

+    config_file = None
     if os.getenv("HOME"):
         config_file = os.path.join(os.getenv("HOME"), ".s3cfg")
     elif os.name == "nt" and os.getenv("USERPROFILE"):
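Initializing config_file to None means a missing $HOME and %USERPROFILE% leaves a well-defined value instead of triggering a NameError later. A sketch of the resolution order (the Windows filename below is an assumption for illustration; the hunk does not show that branch's body):

    import os

    def default_config_file():
        """~/.s3cfg when $HOME is set; a per-user file under
        %USERPROFILE% on Windows; None when neither is set (bug #2483388)."""
        if os.getenv("HOME"):
            return os.path.join(os.getenv("HOME"), ".s3cfg")
        if os.name == "nt" and os.getenv("USERPROFILE"):
            return os.path.join(os.getenv("USERPROFILE"), "s3cmd.ini")  # assumed name
        return None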
From: <lu...@us...> - 2009-01-06 12:29:39
Revision: 327
          http://s3tools.svn.sourceforge.net/s3tools/?rev=327&view=rev
Author:   ludvigm
Date:     2009-01-06 12:02:11 +0000 (Tue, 06 Jan 2009)

Log Message:
-----------
* S3/ACL.py: New object for handling ACL issues.
* S3/S3.py: Moved most of S3.get_acl() to ACL class.

Modified Paths:
--------------
    s3cmd/trunk/ChangeLog
    s3cmd/trunk/S3/S3.py

Added Paths:
-----------
    s3cmd/trunk/S3/ACL.py

Modified: s3cmd/trunk/ChangeLog
===================================================================
--- s3cmd/trunk/ChangeLog	2009-01-06 12:00:29 UTC (rev 326)
+++ s3cmd/trunk/ChangeLog	2009-01-06 12:02:11 UTC (rev 327)
@@ -1,5 +1,7 @@
 2009-01-07  Michal Ludvig  <mi...@lo...>
+	* S3/ACL.py: New object for handling ACL issues.
+	* S3/S3.py: Moved most of S3.get_acl() to ACL class.
 	* S3/Utils.py: Reworked XML helpers - remove XMLNS before
 	  parsing the input XML to avoid having all Tags prefixed
 	  with {XMLNS} by ElementTree.

Added: s3cmd/trunk/S3/ACL.py
===================================================================
--- s3cmd/trunk/S3/ACL.py	                        (rev 0)
+++ s3cmd/trunk/S3/ACL.py	2009-01-06 12:02:11 UTC (rev 327)
@@ -0,0 +1,74 @@
+## Amazon S3 - Access Control List representation
+## Author: Michal Ludvig <mi...@lo...>
+## http://www.logix.cz/michal
+## License: GPL Version 2
+
+from Utils import *
+
+try:
+    import xml.etree.ElementTree as ET
+except ImportError:
+    import elementtree.ElementTree as ET
+
+class ACL(object):
+    EMPTY_ACL = """
+        <AccessControlPolicy>
+            <AccessControlList>
+            </AccessControlList>
+        </AccessControlPolicy>
+        """
+    GRANT_PUBLIC_READ = """
+        <Grant>
+            <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Group">
+                <URI>http://acs.amazonaws.com/groups/global/AllUsers</URI>
+            </Grantee>
+            <Permission>READ</Permission>
+        </Grant>
+        """
+    def __init__(self, xml = None):
+        if not xml:
+            xml = ACL.EMPTY_ACL
+        self.tree = getTreeFromXml(xml)
+
+    def getGrants(self):
+        acl = {}
+        for grant in self.tree.findall(".//Grant"):
+            grantee = grant.find(".//Grantee")
+            grantee = dict([(tag.tag, tag.text) for tag in grant.find(".//Grantee")])
+            if grantee.has_key('DisplayName'):
+                user = grantee['DisplayName']
+            elif grantee.has_key('URI'):
+                user = grantee['URI']
+                if user == 'http://acs.amazonaws.com/groups/global/AllUsers':
+                    user = "*anon*"
+            else:
+                user = grantee[grantee.keys()[0]]
+            acl[user] = grant.find('Permission').text
+        return acl
+
+if __name__ == "__main__":
+    xml = """<?xml version="1.0" encoding="UTF-8"?>
+<AccessControlPolicy xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
+<Owner>
+    <ID>12345678901234567890</ID>
+    <DisplayName>owner-nickname</DisplayName>
+</Owner>
+<AccessControlList>
+    <Grant>
+        <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">
+            <ID>12345678901234567890</ID>
+            <DisplayName>owner-nickname</DisplayName>
+        </Grantee>
+        <Permission>FULL_CONTROL</Permission>
+    </Grant>
+    <Grant>
+        <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Group">
+            <URI>http://acs.amazonaws.com/groups/global/AllUsers</URI>
+        </Grantee>
+        <Permission>READ</Permission>
+    </Grant>
+</AccessControlList>
+</AccessControlPolicy>
+    """
+    acl = ACL(xml)
+    print acl.getGrants()

Modified: s3cmd/trunk/S3/S3.py
===================================================================
--- s3cmd/trunk/S3/S3.py	2009-01-06 12:00:29 UTC (rev 326)
+++ s3cmd/trunk/S3/S3.py	2009-01-06 12:02:11 UTC (rev 327)
@@ -25,6 +25,7 @@
 from BidirMap import BidirMap
 from Config import Config
 from Exceptions import *
+from ACL import ACL

 class S3(object):
     http_methods = BidirMap(
@@ -250,19 +251,10 @@
             request = self.create_request("OBJECT_GET", uri = uri, extra = "?acl")
         else:
             request = self.create_request("BUCKET_LIST", bucket = uri.bucket(), extra = "?acl")
-        acl = {}
+
         response = self.send_request(request)
-        grants = getListFromXml(response['data'], "Grant")
-        for grant in grants:
-            if grant['Grantee'][0].has_key('DisplayName'):
-                user = grant['Grantee'][0]['DisplayName']
-            if grant['Grantee'][0].has_key('URI'):
-                user = grant['Grantee'][0]['URI']
-                if user == 'http://acs.amazonaws.com/groups/global/AllUsers':
-                    user = "*anon*"
-            perm = grant['Permission']
-            acl[user] = perm
-        return acl
+        acl = ACL(response['data'])
+        return acl.getGrants()

     ## Low level methods
     def urlencode_string(self, string):
From: <lu...@us...> - 2009-01-06 12:29:39
Revision: 326
          http://s3tools.svn.sourceforge.net/s3tools/?rev=326&view=rev
Author:   ludvigm
Date:     2009-01-06 12:00:29 +0000 (Tue, 06 Jan 2009)

Log Message:
-----------
* S3/Utils.py: Reworked XML helpers - remove XMLNS before
  parsing the input XML to avoid having all Tags prefixed
  with {XMLNS} by ElementTree.

Modified Paths:
--------------
    s3cmd/trunk/ChangeLog
    s3cmd/trunk/S3/Utils.py

Modified: s3cmd/trunk/ChangeLog
===================================================================
--- s3cmd/trunk/ChangeLog	2009-01-03 08:37:08 UTC (rev 325)
+++ s3cmd/trunk/ChangeLog	2009-01-06 12:00:29 UTC (rev 326)
@@ -1,3 +1,9 @@
+2009-01-07  Michal Ludvig  <mi...@lo...>
+
+	* S3/Utils.py: Reworked XML helpers - remove XMLNS before
+	  parsing the input XML to avoid having all Tags prefixed
+	  with {XMLNS} by ElementTree.
+
 2009-01-03  Michal Ludvig  <mi...@lo...>

 	* s3cmd: Don't fail when neither $HOME nor %USERPROFILE% is set.

Modified: s3cmd/trunk/S3/Utils.py
===================================================================
--- s3cmd/trunk/S3/Utils.py	2009-01-03 08:37:08 UTC (rev 325)
+++ s3cmd/trunk/S3/Utils.py	2009-01-06 12:00:29 UTC (rev 326)
@@ -23,25 +23,7 @@
 except ImportError:
     import elementtree.ElementTree as ET

-def stripTagXmlns(xmlns, tag):
-    """
-    Returns a function that, given a tag name argument, removes
-    eventual ElementTree xmlns from it.
-
-    Example:
-        stripTagXmlns("{myXmlNS}tag") -> "tag"
-    """
-    if not xmlns:
-        return tag
-    return re.sub(xmlns, "", tag)
-
-def fixupXPath(xmlns, xpath, max = 0):
-    if not xmlns:
-        return xpath
-    retval = re.subn("//", "//%s" % xmlns, xpath, max)[0]
-    return retval
-
-def parseNodes(nodes, xmlns = ""):
+def parseNodes(nodes):
     ## WARNING: Ignores text nodes from mixed xml/text.
     ## For instance <tag1>some text<tag2>other text</tag2></tag1>
     ## will ignore the "some text" node
     retval = []
     for node in nodes:
         retval_item = {}
         for child in node.getchildren():
-            name = stripTagXmlns(xmlns, child.tag)
+            name = child.tag
             if child.getchildren():
-                retval_item[name] = parseNodes([child], xmlns)
+                retval_item[name] = parseNodes([child])
             else:
                 retval_item[name] = node.findtext(".//%s" % child.tag)
         retval.append(retval_item)
     return retval
@@ -62,26 +44,36 @@
         return ""
     return re.compile("^(\{[^}]+\})").match(element.tag).groups()[0]

+def stripNameSpace(xml):
+    """
+    removeNameSpace(xml) -- remove top-level AWS namespace
+    """
+    r = re.compile('^(<?[^>]+?>\s?)(<\w+) xmlns=[\'"](http://[^\'"]+)[\'"](.*)', re.MULTILINE)
+    xmlns = r.match(xml).groups()[2]
+    xml = r.sub("\\1\\2\\4", xml)
+    return xml, xmlns
+
 def getTreeFromXml(xml):
+    xml, xmlns = stripNameSpace(xml)
     tree = ET.fromstring(xml)
-    tree.xmlns = getNameSpace(tree)
+    tree.attrib['xmlns'] = xmlns
     return tree

 def getListFromXml(xml, node):
     tree = getTreeFromXml(xml)
-    nodes = tree.findall('.//%s%s' % (tree.xmlns, node))
-    return parseNodes(nodes, tree.xmlns)
+    nodes = tree.findall('.//%s' % (node))
+    return parseNodes(nodes)

 def getTextFromXml(xml, xpath):
     tree = getTreeFromXml(xml)
     if tree.tag.endswith(xpath):
         return tree.text
     else:
-        return tree.findtext(fixupXPath(tree.xmlns, xpath))
+        return tree.findtext(xpath)

 def getRootTagName(xml):
     tree = getTreeFromXml(xml)
-    return stripTagXmlns(tree.xmlns, tree.tag)
+    return tree.tag

 def dateS3toPython(date):
     date = re.compile("\.\d\d\dZ").sub(".000Z", date)
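Stripping the xmlns before parsing is what lets the later find()/findall() calls use plain tag names ("Grant") instead of namespace-qualified ones ("{http://...}Grant"). A standalone demonstration of the same idea (the regex is simplified relative to the committed one):

    import re
    import xml.etree.ElementTree as ET

    def strip_namespace(xml):
        """Remove the top-level xmlns=... declaration; return (xml, xmlns)."""
        m = re.search(r'\sxmlns=["\'](http://[^"\']+)["\']', xml)
        if not m:
            return xml, None
        return xml.replace(m.group(0), "", 1), m.group(1)

    xml = ('<AccessControlPolicy xmlns="http://s3.amazonaws.com/doc/2006-03-01/">'
           '<Owner><ID>abc</ID></Owner></AccessControlPolicy>')
    clean, ns = strip_namespace(xml)
    tree = ET.fromstring(clean)
    print(tree.find(".//Owner/ID").text)   # 'abc' -- no {ns} prefix needed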
From: <lu...@us...> - 2009-01-06 13:07:06
Revision: 328
          http://s3tools.svn.sourceforge.net/s3tools/?rev=328&view=rev
Author:   ludvigm
Date:     2009-01-06 13:07:05 +0000 (Tue, 06 Jan 2009)

Log Message:
-----------
* S3/ACL.py: Keep ACL internally as a list of 'Grantee' objects.

Modified Paths:
--------------
    s3cmd/trunk/ChangeLog
    s3cmd/trunk/S3/ACL.py
    s3cmd/trunk/S3/S3.py

Modified: s3cmd/trunk/ChangeLog
===================================================================
--- s3cmd/trunk/ChangeLog	2009-01-06 12:02:11 UTC (rev 327)
+++ s3cmd/trunk/ChangeLog	2009-01-06 13:07:05 UTC (rev 328)
@@ -1,5 +1,9 @@
 2009-01-07  Michal Ludvig  <mi...@lo...>
+	* S3/ACL.py: Keep ACL internally as a list of 'Grantee' objects.
+
+2009-01-07  Michal Ludvig  <mi...@lo...>
+
 	* S3/ACL.py: New object for handling ACL issues.
 	* S3/S3.py: Moved most of S3.get_acl() to ACL class.
 	* S3/Utils.py: Reworked XML helpers - remove XMLNS before

Modified: s3cmd/trunk/S3/ACL.py
===================================================================
--- s3cmd/trunk/S3/ACL.py	2009-01-06 12:02:11 UTC (rev 327)
+++ s3cmd/trunk/S3/ACL.py	2009-01-06 13:07:05 UTC (rev 328)
@@ -10,6 +10,30 @@
 except ImportError:
     import elementtree.ElementTree as ET

+class Grantee(object):
+    ALL_USERS_URI = "http://acs.amazonaws.com/groups/global/AllUsers"
+
+    xsi_type = None
+    tag = None
+    name = None
+    display_name = None
+    permission = None
+
+    def __str__(self):
+        return '%(name)s : %(permission)s' % { "name" : self.name, "permission" : self.permission }
+
+    def isAllUsers(self):
+        return self.tag == "URI" and self.name == Grantee.ALL_USERS_URI
+
+    def isAnonRead(self):
+        return self.isAllUsers and self.permission == "READ"
+
+class GranteeAnonRead(Grantee):
+    xsi_type = "Group"
+    tag = "URI"
+    name = Grantee.ALL_USERS_URI
+    permission = "READ"
+
 class ACL(object):
@@ -17,35 +41,58 @@
             </AccessControlList>
         </AccessControlPolicy>
         """
-    GRANT_PUBLIC_READ = """
-        <Grant>
-            <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Group">
-                <URI>http://acs.amazonaws.com/groups/global/AllUsers</URI>
-            </Grantee>
-            <Permission>READ</Permission>
-        </Grant>
-        """
+
+    grants = []
+
     def __init__(self, xml = None):
         if not xml:
             xml = ACL.EMPTY_ACL
         self.tree = getTreeFromXml(xml)
+        self.grants = self.parseGrants()

-    def getGrants(self):
+    def parseGrants(self):
+        for grant in self.tree.findall(".//Grant"):
+            grantee = Grantee()
+            g = grant.find(".//Grantee")
+            grantee.xsi_type = g.attrib['{http://www.w3.org/2001/XMLSchema-instance}type']
+            grantee.permission = grant.find('Permission').text
+            for el in g:
+                if el.tag == "DisplayName":
+                    grantee.display_name = el.text
+                else:
+                    grantee.tag = el.tag
+                    grantee.name = el.text
+            self.grants.append(grantee)
+        return self.grants
+
+    def getGrantList(self):
         acl = {}
-        for grant in self.tree.findall(".//Grant"):
-            grantee = grant.find(".//Grantee")
-            grantee = dict([(tag.tag, tag.text) for tag in grant.find(".//Grantee")])
-            if grantee.has_key('DisplayName'):
-                user = grantee['DisplayName']
-            elif grantee.has_key('URI'):
-                user = grantee['URI']
-                if user == 'http://acs.amazonaws.com/groups/global/AllUsers':
-                    user = "*anon*"
+        for grantee in self.grants:
+            if grantee.display_name:
+                user = grantee.display_name
+            elif grantee.isAllUsers():
+                user = "*anon*"
             else:
-                user = grantee[grantee.keys()[0]]
-            acl[user] = grant.find('Permission').text
+                user = grantee.name
+            acl[user] = grantee.permission
         return acl

+    def isAnonRead(self):
+        for grantee in self.grants:
+            if grantee.isAnonRead():
+                return True
+        return False
+
+    def grantAnonRead(self):
+        if not self.isAnonRead():
+            self.grants.append(GranteeAnonRead())
+
+    def revokeAnonRead(self):
+        self.grants = [g for g in self.grants if not g.isAnonRead()]
+
+    def __str__(self):
+        return ET.tostring(self.tree)
+
 if __name__ == "__main__":
     xml = """<?xml version="1.0" encoding="UTF-8"?>
 <AccessControlPolicy xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
@@ -71,4 +118,9 @@
 </AccessControlPolicy>
     """
     acl = ACL(xml)
-    print acl.getGrants()
+    print "Grants:", acl.getGrantList()
+    acl.revokeAnonRead()
+    print "Grants:", acl.getGrantList()
+    acl.grantAnonRead()
+    print "Grants:", acl.getGrantList()
+    #print acl

Modified: s3cmd/trunk/S3/S3.py
===================================================================
--- s3cmd/trunk/S3/S3.py	2009-01-06 12:02:11 UTC (rev 327)
+++ s3cmd/trunk/S3/S3.py	2009-01-06 13:07:05 UTC (rev 328)
@@ -254,7 +254,7 @@

         response = self.send_request(request)
         acl = ACL(response['data'])
-        return acl.getGrants()
+        return acl.getGrantList()
From: <lu...@us...> - 2009-01-06 13:13:04
Revision: 329
          http://s3tools.svn.sourceforge.net/s3tools/?rev=329&view=rev
Author:   ludvigm
Date:     2009-01-06 13:13:03 +0000 (Tue, 06 Jan 2009)

Log Message:
-----------
* S3/Utils.py: Fix crash in stripNameSpace() when the XML has no NS.

Modified Paths:
--------------
    s3cmd/trunk/ChangeLog
    s3cmd/trunk/S3/Utils.py

Modified: s3cmd/trunk/ChangeLog
===================================================================
--- s3cmd/trunk/ChangeLog	2009-01-06 13:07:05 UTC (rev 328)
+++ s3cmd/trunk/ChangeLog	2009-01-06 13:13:03 UTC (rev 329)
@@ -1,6 +1,7 @@
 2009-01-07  Michal Ludvig  <mi...@lo...>

 	* S3/ACL.py: Keep ACL internally as a list of 'Grantee' objects.
+	* S3/Utils.py: Fix crash in stripNameSpace() when the XML has no NS.

 2009-01-07  Michal Ludvig  <mi...@lo...>

Modified: s3cmd/trunk/S3/Utils.py
===================================================================
--- s3cmd/trunk/S3/Utils.py	2009-01-06 13:07:05 UTC (rev 328)
+++ s3cmd/trunk/S3/Utils.py	2009-01-06 13:13:03 UTC (rev 329)
@@ -39,24 +39,23 @@
     retval.append(retval_item)
     return retval

-def getNameSpace(element):
-    if not element.tag.startswith("{"):
-        return ""
-    return re.compile("^(\{[^}]+\})").match(element.tag).groups()[0]
-
 def stripNameSpace(xml):
     """
     removeNameSpace(xml) -- remove top-level AWS namespace
     """
     r = re.compile('^(<?[^>]+?>\s?)(<\w+) xmlns=[\'"](http://[^\'"]+)[\'"](.*)', re.MULTILINE)
-    xmlns = r.match(xml).groups()[2]
-    xml = r.sub("\\1\\2\\4", xml)
+    if r.match(xml):
+        xmlns = r.match(xml).groups()[2]
+        xml = r.sub("\\1\\2\\4", xml)
+    else:
+        xmlns = None
     return xml, xmlns

 def getTreeFromXml(xml):
     xml, xmlns = stripNameSpace(xml)
     tree = ET.fromstring(xml)
-    tree.attrib['xmlns'] = xmlns
+    if xmlns:
+        tree.attrib['xmlns'] = xmlns
     return tree

 def getListFromXml(xml, node):
From: <lu...@us...> - 2009-01-07 03:06:47
Revision: 330
          http://s3tools.svn.sourceforge.net/s3tools/?rev=330&view=rev
Author:   ludvigm
Date:     2009-01-07 03:06:35 +0000 (Wed, 07 Jan 2009)

Log Message:
-----------
* S3/ACL.py: Generate XML from a current list of Grantees

Modified Paths:
--------------
    s3cmd/trunk/ChangeLog
    s3cmd/trunk/S3/ACL.py

Modified: s3cmd/trunk/ChangeLog
===================================================================
--- s3cmd/trunk/ChangeLog	2009-01-06 13:13:03 UTC (rev 329)
+++ s3cmd/trunk/ChangeLog	2009-01-07 03:06:35 UTC (rev 330)
@@ -1,5 +1,9 @@
 2009-01-07  Michal Ludvig  <mi...@lo...>
+	* S3/ACL.py: Generate XML from a current list of Grantees
+
+2009-01-07  Michal Ludvig  <mi...@lo...>
+
 	* S3/ACL.py: Keep ACL internally as a list of 'Grantee' objects.
 	* S3/Utils.py: Fix crash in stripNameSpace() when the XML has no NS.

Modified: s3cmd/trunk/S3/ACL.py
===================================================================
--- s3cmd/trunk/S3/ACL.py	2009-01-06 13:13:03 UTC (rev 329)
+++ s3cmd/trunk/S3/ACL.py	2009-01-07 03:06:35 UTC (rev 330)
@@ -19,14 +19,30 @@
     display_name = None
     permission = None

-    def __str__(self):
-        return '%(name)s : %(permission)s' % { "name" : self.name, "permission" : self.permission }
+    def __repr__(self):
+        return 'Grantee("%(tag)s", "%(name)s", "%(permission)s")' % {
+            "tag" : self.tag,
+            "name" : self.name,
+            "permission" : self.permission
+        }

     def isAllUsers(self):
         return self.tag == "URI" and self.name == Grantee.ALL_USERS_URI

     def isAnonRead(self):
         return self.isAllUsers and self.permission == "READ"
+
+    def getElement(self):
+        el = ET.Element("Grant")
+        grantee = ET.SubElement(el, "Grantee", {
+            'xmlns:xsi' : 'http://www.w3.org/2001/XMLSchema-instance',
+            'xsi:type' : self.xsi_type
+        })
+        name = ET.SubElement(grantee, self.tag)
+        name.text = self.name
+        permission = ET.SubElement(el, "Permission")
+        permission.text = self.permission
+        return el

 class GranteeAnonRead(Grantee):
     xsi_type = "Group"
@@ -35,20 +51,15 @@
     permission = "READ"

 class ACL(object):
-    EMPTY_ACL = """
-        <AccessControlPolicy>
-            <AccessControlList>
-            </AccessControlList>
-        </AccessControlPolicy>
-        """
+    EMPTY_ACL = "<AccessControlPolicy><AccessControlList></AccessControlList></AccessControlPolicy>"

-    grants = []
+    grantees = []

     def __init__(self, xml = None):
         if not xml:
             xml = ACL.EMPTY_ACL
         self.tree = getTreeFromXml(xml)
-        self.grants = self.parseGrants()
+        self.parseGrants()

     def parseGrants(self):
         for grant in self.tree.findall(".//Grant"):
@@ -62,12 +73,11 @@
             else:
                 grantee.tag = el.tag
                 grantee.name = el.text
-            self.grants.append(grantee)
-        return self.grants
+            self.grantees.append(grantee)

     def getGrantList(self):
         acl = {}
-        for grantee in self.grants:
+        for grantee in self.grantees:
             if grantee.display_name:
                 user = grantee.display_name
             elif grantee.isAllUsers():
@@ -78,20 +88,25 @@
         return acl

     def isAnonRead(self):
-        for grantee in self.grants:
+        for grantee in self.grantees:
             if grantee.isAnonRead():
                 return True
         return False

     def grantAnonRead(self):
         if not self.isAnonRead():
-            self.grants.append(GranteeAnonRead())
+            self.grantees.append(GranteeAnonRead())

     def revokeAnonRead(self):
-        self.grants = [g for g in self.grants if not g.isAnonRead()]
+        self.grantees = [g for g in self.grantees if not g.isAnonRead()]

     def __str__(self):
-        return ET.tostring(self.tree)
+        tree = getTreeFromXml(ACL.EMPTY_ACL)
+        tree.attrib['xmlns'] = "http://s3.amazonaws.com/doc/2006-03-01/"
+        acl = tree.find(".//AccessControlList")
+        for grantee in self.grantees:
+            acl.append(grantee.getElement())
+        return ET.tostring(tree)

 if __name__ == "__main__":
     xml = """<?xml version="1.0" encoding="UTF-8"?>
@@ -123,4 +138,4 @@
     print "Grants:", acl.getGrantList()
     acl.grantAnonRead()
     print "Grants:", acl.getGrantList()
-    #print acl
+    print acl
From: <lu...@us...> - 2009-01-07 04:43:38
Revision: 331
          http://s3tools.svn.sourceforge.net/s3tools/?rev=331&view=rev
Author:   ludvigm
Date:     2009-01-07 04:43:33 +0000 (Wed, 07 Jan 2009)

Log Message:
-----------
* s3cmd: Factored remote_keys generation from cmd_object_get()
  to fetch_remote_keys().
* s3cmd: Display Public URL in 'info' for AnonRead objects.

Modified Paths:
--------------
    s3cmd/trunk/ChangeLog
    s3cmd/trunk/S3/S3.py
    s3cmd/trunk/s3cmd

Modified: s3cmd/trunk/ChangeLog
===================================================================
--- s3cmd/trunk/ChangeLog	2009-01-07 03:06:35 UTC (rev 330)
+++ s3cmd/trunk/ChangeLog	2009-01-07 04:43:33 UTC (rev 331)
@@ -1,5 +1,8 @@
 2009-01-07  Michal Ludvig  <mi...@lo...>
+	* s3cmd: Factored remote_keys generation from cmd_object_get()
+	  to fetch_remote_keys().
+	* s3cmd: Display Public URL in 'info' for AnonRead objects.
 	* S3/ACL.py: Generate XML from a current list of Grantees

 2009-01-07  Michal Ludvig  <mi...@lo...>

Modified: s3cmd/trunk/S3/S3.py
===================================================================
--- s3cmd/trunk/S3/S3.py	2009-01-07 03:06:35 UTC (rev 330)
+++ s3cmd/trunk/S3/S3.py	2009-01-07 04:43:33 UTC (rev 331)
@@ -254,7 +254,7 @@

         response = self.send_request(request)
         acl = ACL(response['data'])
-        return acl.getGrantList()
+        return acl

     ## Low level methods
     def urlencode_string(self, string):

Modified: s3cmd/trunk/s3cmd
===================================================================
--- s3cmd/trunk/s3cmd	2009-01-07 03:06:35 UTC (rev 330)
+++ s3cmd/trunk/s3cmd	2009-01-07 04:43:33 UTC (rev 331)
@@ -169,6 +169,65 @@
         _bucket_delete_one(uri)
         output(u"Bucket '%s' removed" % uri.uri())

+def fetch_remote_keys(args):
+    remote_uris = []
+    remote_keys = []
+
+    for arg in args:
+        uri = S3Uri(arg)
+        if not uri.type == 's3':
+            raise ParameterError("Expecting S3 URI instead of '%s'" % arg)
+        remote_uris.append(uri)
+
+    if cfg.recursive:
+        for uri in remote_uris:
+            objectlist = _get_filelist_remote(uri)
+            for key in objectlist.iterkeys():
+                object = S3Uri(objectlist[key]['object_uri_str'])
+                ## Remove leading '/' from remote filenames
+                if key.find("/") == 0:
+                    key = key[1:]
+                download_item = {
+                    'remote_uri' : object,
+                    'key' : key
+                }
+                remote_keys.append(download_item)
+    else:
+        for uri in remote_uris:
+            uri_str = str(uri)
+            ## Wildcards used in remote URI?
+            ## If yes we'll need a bucket listing...
+            if uri_str.find('*') > -1 or uri_str.find('?') > -1:
+                first_wildcard = uri_str.find('*')
+                first_questionmark = uri_str.find('?')
+                if first_questionmark > -1 and first_questionmark < first_wildcard:
+                    first_wildcard = first_questionmark
+                prefix = uri_str[:first_wildcard]
+                rest = uri_str[first_wildcard+1:]
+                ## Only request recursive listing if the 'rest' of the URI,
+                ## i.e. the part after first wildcard, contains '/'
+                need_recursion = rest.find('/') > -1
+                objectlist = _get_filelist_remote(S3Uri(prefix), recursive = need_recursion)
+                for key in objectlist:
+                    ## Check whether the 'key' matches the requested wildcards
+                    if glob.fnmatch.fnmatch(objectlist[key]['object_uri_str'], uri_str):
+                        download_item = {
+                            'remote_uri' : S3Uri(objectlist[key]['object_uri_str']),
+                            'key' : key,
+                        }
+                        remote_keys.append(download_item)
+            else:
+                ## No wildcards - simply append the given URI to the list
+                key = os.path.basename(uri.object())
+                if not key:
+                    raise ParameterError(u"Expecting S3 URI with a filename or --recursive: %s" % uri.uri())
+                download_item = {
+                    'remote_uri' : uri,
+                    'key' : key
+                }
+                remote_keys.append(download_item)
+    return remote_keys
+
 def cmd_object_put(args):
     s3 = S3(Config())

@@ -256,8 +315,6 @@
     # {'remote_uri', 'local_filename'}
     download_list = []

-    remote_uris = []
-
     if len(args) == 0:
         raise ParameterError("Nothing to download. Expecting S3 URI.")

@@ -269,74 +326,22 @@
     if len(args) == 0:
         raise ParameterError("Nothing to download. Expecting S3 URI.")

-    for arg in args:
-        uri = S3Uri(arg)
-        if not uri.type == 's3':
-            raise ParameterError("Expecting S3 URI instead of '%s'" % arg)
-        remote_uris.append(uri)
+    remote_keys = fetch_remote_keys(args)
+    total_count = len(remote_keys)

-    remote_keys = []
-    if cfg.recursive:
-        if not os.path.isdir(destination_base):
-            raise ParameterError("Destination must be a directory for recursive get.")
+    if not os.path.isdir(destination_base) or destination_base == '-':
+        ## We were either given a file name (existing or not) or want STDOUT
+        if total_count > 1:
+            raise ParameterError("Destination must be a directory when downloading multiple sources.")
+        remote_keys[0]['local_filename'] = destination_base
+    elif os.path.isdir(destination_base):
         if destination_base[-1] != os.path.sep:
             destination_base += os.path.sep
-        for uri in remote_uris:
-            objectlist = _get_filelist_remote(uri)
-            for key in objectlist.iterkeys():
-                object = S3Uri(objectlist[key]['object_uri_str'])
-                ## Remove leading '/' from remote filenames
-                if key.find("/") == 0:
-                    key = key[1:]
-                destination = destination_base + key
-                download_item = {
-                    'remote_uri' : object,
-                    'local_filename' : destination
-                }
-                remote_keys.append(download_item)
+        for key in remote_keys:
+            key['local_filename'] = destination_base + key['key']
     else:
-        if not os.path.isdir(destination_base) or destination_base == '-':
-            if len(remote_uris) > 1:
-                raise ParameterError("Destination must be a directory when downloading multiple sources.")
-            download_item = {
-                'remote_uri' : remote_uris[0],
-                'local_filename' : destination_base
-            }
-            remote_keys.append(download_item)
-        else:
-            if os.path.isdir(destination_base) and destination_base[-1] != os.path.sep:
-                destination_base += os.path.sep
-            for uri in remote_uris:
-                uri_str = str(uri)
-                ## Wildcards used on remote side?
-                ## If yes we'll need a bucket listing...
-                if uri_str.find('*') > -1 or uri_str.find('?') > -1:
-                    first_wildcard = uri_str.find('*')
-                    first_questionmark = uri_str.find('?')
-                    if first_questionmark > -1 and first_questionmark < first_wildcard:
-                        first_wildcard = first_questionmark
-                    prefix = uri_str[:first_wildcard]
-                    rest = uri_str[first_wildcard+1:]
-                    ## Only request recursive listing if the 'rest' of the URI,
-                    ## i.e. the part after first wildcard, contains '/'
-                    need_recursion = rest.find('/') > -1
-                    objectlist = _get_filelist_remote(S3Uri(prefix), recursive = need_recursion)
-                    if destination_base[-1] != os.path.sep:
-                        destination_base += os.path.sep
-                    for object in objectlist:
-                        download_item = {
-                            'remote_uri' : S3Uri(objectlist[object]['object_uri_str']),
-                            'local_filename' : destination_base + object,
-                        }
-                        remote_keys.append(download_item)
-                else:
-                    download_item = {
-                        'remote_uri' : uri,
-                        'local_filename' : destination_base + uri.basename(),
-                    }
-                    remote_keys.append(download_item)
+        raise InternalError("WTF? Is it a dir or not? -- %s" % destination_base)

-    total_count = len(remote_keys)
     seq = 0
     for item in remote_keys:
         seq += 1
@@ -468,8 +473,11 @@
             output(u"%s (bucket):" % uri.uri())
             output(u"   Location: %s" % info['bucket-location'])
             acl = s3.get_acl(uri)
-            for user in acl.keys():
-                output(u"   ACL: %s: %s" % (user, acl[user]))
+            acl_list = acl.getGrantList()
+            for user in acl_list:
+                output(u"   ACL: %s: %s" % (user, acl_list[user]))
+            if acl.isAnonRead():
+                output(u"   URL: %s" % uri.public_url())
         except S3Error, e:
             if S3.codes.has_key(e.info["Code"]):
                 error(S3.codes[e.info["Code"]] % uri.bucket())
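The wildcard branch of fetch_remote_keys() lists the bucket from the prefix before the first '*' or '?' and then filters the listing with fnmatch. The core trick in isolation, with the bucket listing replaced by a static list for the sketch:

    import fnmatch

    uri_str = "s3://bucket/logs/2008-12-*.gz"
    first_wild = min(i for i in (uri_str.find("*"), uri_str.find("?")) if i > -1)
    prefix = uri_str[:first_wild]     # listing prefix: 's3://bucket/logs/2008-12-'

    candidates = ["s3://bucket/logs/2008-12-30.gz",
                  "s3://bucket/logs/2008-12-31.gz",
                  "s3://bucket/logs/2009-01-01.gz"]
    matches = [c for c in candidates if fnmatch.fnmatch(c, uri_str)]
    print(matches)                    # only the two December objects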
From: <lu...@us...> - 2009-01-07 10:23:04
Revision: 332
          http://s3tools.svn.sourceforge.net/s3tools/?rev=332&view=rev
Author:   ludvigm
Date:     2009-01-07 10:22:56 +0000 (Wed, 07 Jan 2009)

Log Message:
-----------
* s3cmd: New command 'setacl'.
* S3/S3.py: Implemented set_acl().
* S3/ACL.py: Fill in <Owner/> tag in ACL XML.
* NEWS: Info about 'setacl'.

Modified Paths:
--------------
    s3cmd/trunk/ChangeLog
    s3cmd/trunk/NEWS
    s3cmd/trunk/S3/ACL.py
    s3cmd/trunk/S3/S3.py
    s3cmd/trunk/s3cmd

Modified: s3cmd/trunk/ChangeLog
===================================================================
--- s3cmd/trunk/ChangeLog	2009-01-07 04:43:33 UTC (rev 331)
+++ s3cmd/trunk/ChangeLog	2009-01-07 10:22:56 UTC (rev 332)
@@ -1,5 +1,12 @@
 2009-01-07  Michal Ludvig  <mi...@lo...>
+	* s3cmd: New command 'setacl'.
+	* S3/S3.py: Implemented set_acl().
+	* S3/ACL.py: Fill in <Owner/> tag in ACL XML.
+	* NEWS: Info about 'setacl'.
+
+2009-01-07  Michal Ludvig  <mi...@lo...>
+
 	* s3cmd: Factored remote_keys generation from cmd_object_get()
 	  to fetch_remote_keys().
 	* s3cmd: Display Public URL in 'info' for AnonRead objects.

Modified: s3cmd/trunk/NEWS
===================================================================
--- s3cmd/trunk/NEWS	2009-01-07 04:43:33 UTC (rev 331)
+++ s3cmd/trunk/NEWS	2009-01-07 10:22:56 UTC (rev 332)
@@ -1,3 +1,7 @@
+s3cmd 0.9.9-pre5
+================
+* New command 'setacl' for setting ACL on existing objects.
+
 s3cmd 0.9.9-pre4 - 2008-12-30
 ================
 * Support for non-recursive [ls]

Modified: s3cmd/trunk/S3/ACL.py
===================================================================
--- s3cmd/trunk/S3/ACL.py	2009-01-07 04:43:33 UTC (rev 331)
+++ s3cmd/trunk/S3/ACL.py	2009-01-07 10:22:56 UTC (rev 332)
@@ -51,18 +51,25 @@
     permission = "READ"

 class ACL(object):
-    EMPTY_ACL = "<AccessControlPolicy><AccessControlList></AccessControlList></AccessControlPolicy>"
+    EMPTY_ACL = "<AccessControlPolicy><Owner><ID></ID></Owner><AccessControlList></AccessControlList></AccessControlPolicy>"

     grantees = []
+    owner_id = ""
+    owner_nick = ""

     def __init__(self, xml = None):
         if not xml:
             xml = ACL.EMPTY_ACL
-        self.tree = getTreeFromXml(xml)
-        self.parseGrants()
-
-    def parseGrants(self):
-        for grant in self.tree.findall(".//Grant"):
+        tree = getTreeFromXml(xml)
+        self.parseOwner(tree)
+        self.parseGrants(tree)
+
+    def parseOwner(self, tree):
+        self.owner_id = tree.findtext(".//Owner//ID")
+        self.owner_nick = tree.findtext(".//Owner//DisplayName")
+
+    def parseGrants(self, tree):
+        for grant in tree.findall(".//Grant"):
             grantee = Grantee()
             g = grant.find(".//Grantee")
             grantee.xsi_type = g.attrib['{http://www.w3.org/2001/XMLSchema-instance}type']
@@ -87,6 +94,9 @@
             acl[user] = grantee.permission
         return acl

+    def getOwner(self):
+        return { 'id' : self.owner_id, 'nick' : self.owner_nick }
+
     def isAnonRead(self):
         for grantee in self.grantees:
             if grantee.isAnonRead():
@@ -103,6 +113,8 @@
     def __str__(self):
         tree = getTreeFromXml(ACL.EMPTY_ACL)
         tree.attrib['xmlns'] = "http://s3.amazonaws.com/doc/2006-03-01/"
+        owner = tree.find(".//Owner//ID")
+        owner.text = self.owner_id
         acl = tree.find(".//AccessControlList")
         for grantee in self.grantees:
             acl.append(grantee.getElement())

Modified: s3cmd/trunk/S3/S3.py
===================================================================
--- s3cmd/trunk/S3/S3.py	2009-01-07 04:43:33 UTC (rev 331)
+++ s3cmd/trunk/S3/S3.py	2009-01-07 10:22:56 UTC (rev 332)
@@ -256,6 +256,17 @@
         acl = ACL(response['data'])
         return acl

+    def set_acl(self, uri, acl):
+        if uri.has_object():
+            request = self.create_request("OBJECT_PUT", uri = uri, extra = "?acl")
+        else:
+            request = self.create_request("BUCKET_CREATE", bucket = uri.bucket(), extra = "?acl")
+
+        body = str(acl)
+        debug(u"set_acl(%s): acl-xml: %s" % (uri, body))
+        response = self.send_request(request, body)
+        return response
+
     ## Low level methods
     def urlencode_string(self, string):

Modified: s3cmd/trunk/s3cmd
===================================================================
--- s3cmd/trunk/s3cmd	2009-01-07 04:43:33 UTC (rev 331)
+++ s3cmd/trunk/s3cmd	2009-01-07 10:22:56 UTC (rev 332)
@@ -884,7 +884,34 @@
         return cmd_sync_local2remote(src, dst)
     if S3Uri(src).type == "s3" and S3Uri(dst).type == "file":
         return cmd_sync_remote2local(src, dst)

+def cmd_setacl(args):
+    s3 = S3(cfg)
+
+    set_to_acl = cfg.acl_public and "Public" or "Private"
+
+    remote_keys = fetch_remote_keys(args)
+    total_keys = len(remote_keys)
+    seq = 0
+    for key in remote_keys:
+        seq += 1
+        seq_label = "[%d of %d]" % (seq, total_keys)
+        uri = key['remote_uri']
+        acl = s3.get_acl(uri)
+        if cfg.acl_public:
+            if acl.isAnonRead():
+                info(u"%s: already Public, skipping %s" % (uri, seq_label))
+                continue
+            acl.grantAnonRead()
+        else:
+            if not acl.isAnonRead():
+                info(u"%s: already Private, skipping %s" % (uri, seq_label))
+                continue
+            acl.revokeAnonRead()
+        response = s3.set_acl(uri, acl)
+        if response['status'] == 200:
+            output(u"%s: ACL set to %s  %s" % (uri, set_to_acl, seq_label))
+
 def resolve_list(lst, args):
     retval = []
     for item in lst:
@@ -1085,7 +1112,7 @@
     {"cmd":"info", "label":"Get various information about Buckets or Objects", "param":"s3://BUCKET[/OBJECT]", "func":cmd_info, "argc":1},
     {"cmd":"cp", "label":"Copy object", "param":"s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]", "func":cmd_cp, "argc":2},
     {"cmd":"mv", "label":"Move object", "param":"s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]", "func":cmd_mv, "argc":2},
-    #{"cmd":"setacl", "label":"Modify Access control list for Bucket or Object", "param":"s3://BUCKET[/OBJECT]", "func":cmd_setacl, "argc":1},
+    {"cmd":"setacl", "label":"Modify Access control list for Bucket or Object", "param":"s3://BUCKET[/OBJECT]", "func":cmd_setacl, "argc":1},
 ]
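Because setacl reuses fetch_remote_keys(), it accepts the same wildcard and --recursive forms as 'get'. Expected usage, going by the commit (bucket and object names illustrative):

    s3cmd setacl --acl-public s3://my-bucket/public/index.html
    s3cmd setacl --acl-private --recursive s3://my-bucket/private/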
From: <lu...@us...> - 2009-01-07 12:39:44
Revision: 333
          http://s3tools.svn.sourceforge.net/s3tools/?rev=333&view=rev
Author:   ludvigm
Date:     2009-01-07 12:39:39 +0000 (Wed, 07 Jan 2009)

Log Message:
-----------
* S3/S3.py: Some errors during file upload were incorrectly
  interpreted as MD5 mismatch. (bug #2384990)

Modified Paths:
--------------
    s3cmd/trunk/ChangeLog
    s3cmd/trunk/S3/S3.py

Modified: s3cmd/trunk/ChangeLog
===================================================================
--- s3cmd/trunk/ChangeLog	2009-01-07 10:22:56 UTC (rev 332)
+++ s3cmd/trunk/ChangeLog	2009-01-07 12:39:39 UTC (rev 333)
@@ -1,3 +1,8 @@
+2009-01-08  Michal Ludvig  <mi...@lo...>
+
+	* S3/S3.py: Some errors during file upload were incorrectly
+	  interpreted as MD5 mismatch. (bug #2384990)
+
 2009-01-07  Michal Ludvig  <mi...@lo...>

 	* s3cmd: New command 'setacl'.

Modified: s3cmd/trunk/S3/S3.py
===================================================================
--- s3cmd/trunk/S3/S3.py	2009-01-07 10:22:56 UTC (rev 332)
+++ s3cmd/trunk/S3/S3.py	2009-01-07 12:39:39 UTC (rev 333)
@@ -430,7 +430,7 @@
         md5_hash = md5()
         try:
             while (size_left > 0):
-                debug("SendFile: Reading up to %d bytes from '%s'" % (self.config.send_chunk, file.name))
+                #debug("SendFile: Reading up to %d bytes from '%s'" % (self.config.send_chunk, file.name))
                 data = file.read(self.config.send_chunk)
                 md5_hash.update(data)
                 conn.send(data)
@@ -448,6 +448,7 @@
             response["data"] = http_response.read()
             response["size"] = size_total
             conn.close()
+            debug(u"Response: %s" % response)
         except Exception, e:
             if self.config.progress_meter:
                 progress.done("failed")
@@ -487,6 +488,20 @@
         if not response['headers'].has_key('etag'):
             response['headers']['etag'] = ''

+        if response["status"] < 200 or response["status"] > 299:
+            if response["status"] >= 500:
+                ## AWS internal error - retry
+                if retries:
+                    warning("Upload failed: %s (%s)" % (resource['uri'], S3Error(response)))
+                    warning("Waiting %d sec..." % self._fail_wait(retries))
+                    time.sleep(self._fail_wait(retries))
+                    return self.send_file(request, file, labels, throttle, retries - 1)
+                else:
+                    warning("Too many failures. Giving up on '%s'" % (file.name))
+                    raise S3UploadError
+            ## Non-recoverable error
+            raise S3Error(response)
+
         debug("MD5 sums: computed=%s, received=%s" % (md5_computed, response["headers"]["etag"]))
         if response["headers"]["etag"].strip('"\'') != md5_hash.hexdigest():
             warning("MD5 Sums don't match!")
@@ -497,8 +512,6 @@
                 warning("Too many failures. Giving up on '%s'" % (file.name))
                 raise S3UploadError

-        if response["status"] < 200 or response["status"] > 299:
-            raise S3Error(response)
         return response

     def recv_file(self, request, stream, labels, start_position = 0, retries = _max_retries):
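The essence of the fix: check the HTTP status before the MD5 comparison, so a 5xx AWS error is retried with a growing wait while other non-2xx codes fail fast, instead of both being reported as checksum mismatches. The retry skeleton, reduced to its shape (names follow the diff; the wait schedule is an assumption, since _fail_wait's body is not shown here):

    import time

    class S3UploadError(Exception):
        pass

    def _fail_wait(retries, max_retries=5):
        # back off progressively as the retry budget shrinks (assumed schedule)
        return 3 * (max_retries - retries + 1)

    def send_file(do_upload, retries=5):
        status = do_upload()                  # returns an HTTP status code
        if 200 <= status <= 299:
            return status
        if status >= 500 and retries:         # AWS internal error: retry
            time.sleep(_fail_wait(retries))
            return send_file(do_upload, retries - 1)
        raise S3UploadError("non-recoverable or out of retries (HTTP %d)" % status)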
From: <lu...@us...> - 2009-01-07 12:40:36
Revision: 334 http://s3tools.svn.sourceforge.net/s3tools/?rev=334&view=rev Author: ludvigm Date: 2009-01-07 12:40:31 +0000 (Wed, 07 Jan 2009) Log Message: ----------- * S3/ACL.py: Move attributes from class to instance. * run-tests.py: Tests for ACL. * s3cmd: Minor messages changes. Modified Paths: -------------- s3cmd/trunk/ChangeLog s3cmd/trunk/S3/ACL.py s3cmd/trunk/run-tests.py s3cmd/trunk/s3cmd Modified: s3cmd/trunk/ChangeLog =================================================================== --- s3cmd/trunk/ChangeLog 2009-01-07 12:39:39 UTC (rev 333) +++ s3cmd/trunk/ChangeLog 2009-01-07 12:40:31 UTC (rev 334) @@ -2,6 +2,9 @@ * S3/S3.py: Some errors during file upload were incorrectly interpreted as MD5 mismatch. (bug #2384990) + * S3/ACL.py: Move attributes from class to instance. + * run-tests.py: Tests for ACL. + * s3cmd: Minor messages changes. 2009-01-07 Michal Ludvig <mi...@lo...> Modified: s3cmd/trunk/S3/ACL.py =================================================================== --- s3cmd/trunk/S3/ACL.py 2009-01-07 12:39:39 UTC (rev 333) +++ s3cmd/trunk/S3/ACL.py 2009-01-07 12:40:31 UTC (rev 334) @@ -13,11 +13,12 @@ class Grantee(object): ALL_USERS_URI = "http://acs.amazonaws.com/groups/global/AllUsers" - xsi_type = None - tag = None - name = None - display_name = None - permission = None + def __init__(self): + self.xsi_type = None + self.tag = None + self.name = None + self.display_name = None + self.permission = None def __repr__(self): return 'Grantee("%(tag)s", "%(name)s", "%(permission)s")' % { @@ -45,21 +46,24 @@ return el class GranteeAnonRead(Grantee): - xsi_type = "Group" - tag = "URI" - name = Grantee.ALL_USERS_URI - permission = "READ" + def __init__(self): + Grantee.__init__(self) + self.xsi_type = "Group" + self.tag = "URI" + self.name = Grantee.ALL_USERS_URI + self.permission = "READ" class ACL(object): EMPTY_ACL = "<AccessControlPolicy><Owner><ID></ID></Owner><AccessControlList></AccessControlList></AccessControlPolicy>" - grantees = [] - owner_id = "" - owner_nick = "" - def __init__(self, xml = None): if not xml: xml = ACL.EMPTY_ACL + + self.grantees = [] + self.owner_id = "" + self.owner_nick = "" + tree = getTreeFromXml(xml) self.parseOwner(tree) self.parseGrants(tree) Modified: s3cmd/trunk/run-tests.py =================================================================== --- s3cmd/trunk/run-tests.py 2009-01-07 12:39:39 UTC (rev 333) +++ s3cmd/trunk/run-tests.py 2009-01-07 12:40:31 UTC (rev 334) @@ -238,29 +238,52 @@ # test_s3cmd("Recursive put", ['put', '--recursive', 'testsuite/etc', 's3://s3cmd-autotest-1/xyz/']) -## ====== Put public, guess MIME -test_s3cmd("Put public, guess MIME", ['put', '--guess-mime-type', '--acl-public', 'testsuite/etc/logo.png', 's3://s3cmd-autotest-1/xyz/etc/logo.png'], - must_find = [ "stored as s3://s3cmd-autotest-1/xyz/etc/logo.png" ]) - - ## ====== rmdir local test_rmdir("Removing local target", 'testsuite-out') ## ====== Sync from S3 -must_find = [ "stored as testsuite-out/etc/logo.png " ] +must_find = [ "stored as testsuite-out/binary/random-crap.md5 " ] if have_encoding: must_find.append("stored as testsuite-out/" + encoding + "/" + enc_pattern) test_s3cmd("Sync from S3", ['sync', 's3://s3cmd-autotest-1/xyz', 'testsuite-out'], must_find = must_find) +## ====== Put public, guess MIME +test_s3cmd("Put public, guess MIME", ['put', '--guess-mime-type', '--acl-public', 'testsuite/etc/logo.png', 's3://s3cmd-autotest-1/xyz/etc/logo.png'], + must_find = [ "stored as s3://s3cmd-autotest-1/xyz/etc/logo.png" ]) + + ## ====== Retrieve 
from URL if have_wget: - test("Retrieve from URL", ['wget', 'http://s3cmd-autotest-1.s3.amazonaws.com/xyz/etc/logo.png'], + test("Retrieve from URL", ['wget', '-O', 'testsuite-out/logo.png', 'http://s3cmd-autotest-1.s3.amazonaws.com/xyz/etc/logo.png'], must_find_re = [ 'logo.png.*saved \[22059/22059\]' ]) +## ====== Change ACL to Private +test_s3cmd("Change ACL to Private", ['setacl', '--acl-private', 's3://s3cmd-autotest-1/xyz/etc/l*.png'], + must_find = [ "logo.png: ACL set to Private" ]) + + +## ====== Verify Private ACL +if have_wget: + test("Verify Private ACL", ['wget', '-O', 'testsuite-out/logo.png', 'http://s3cmd-autotest-1.s3.amazonaws.com/xyz/etc/logo.png'], + retcode = 1, + must_find_re = [ 'ERROR 403: Forbidden' ]) + + +## ====== Change ACL to Public +test_s3cmd("Change ACL to Public", ['setacl', '--acl-public', '--recursive', 's3://s3cmd-autotest-1/xyz/etc/', '-v'], + must_find = [ "logo.png: ACL set to Public" ]) + + +## ====== Verify Public ACL +if have_wget: + test("Verify Public ACL", ['wget', '-O', 'testsuite-out/logo.png', 'http://s3cmd-autotest-1.s3.amazonaws.com/xyz/etc/logo.png'], + must_find_re = [ 'logo.png.*saved \[22059/22059\]' ]) + + ## ====== Sync more to S3 test_s3cmd("Sync more to S3", ['sync', 'testsuite', 's3://s3cmd-autotest-1/xyz/', '--no-encrypt' ]) @@ -279,7 +302,7 @@ ## ====== Sync more from S3 test_s3cmd("Sync more from S3", ['sync', '--delete-removed', 's3://s3cmd-autotest-1/xyz', 'testsuite-out'], - must_find = [ "deleted 'testsuite-out/etc/logo.png'", "stored as testsuite-out/etc2/Logo.PNG (22059 bytes", + must_find = [ "deleted 'testsuite-out/logo.png'", "stored as testsuite-out/etc2/Logo.PNG (22059 bytes", "stored as testsuite-out/.svn/format " ], must_not_find_re = [ "not-deleted.*etc/logo.png" ]) Modified: s3cmd/trunk/s3cmd =================================================================== --- s3cmd/trunk/s3cmd 2009-01-07 12:39:39 UTC (rev 333) +++ s3cmd/trunk/s3cmd 2009-01-07 12:40:31 UTC (rev 334) @@ -898,9 +898,10 @@ seq_label = "[%d of %d]" % (seq, total_keys) uri = key['remote_uri'] acl = s3.get_acl(uri) + debug(u"acl: %s - %r" % (uri, acl.grantees)) if cfg.acl_public: if acl.isAnonRead(): - info(u"%s: already Public, skippingi %s" % (uri, seq_label)) + info(u"%s: already Public, skipping %s" % (uri, seq_label)) continue acl.grantAnonRead() else: This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
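The ACL.py change in this revision fixes a classic Python pitfall: attributes assigned in the class body are shared by every instance, and for a mutable list that means all ACL objects were appending to the same 'grantees'. A self-contained demonstration of the bug and the fix:

    class SharedGrants(object):
        grantees = []                  # class attribute: one list for all instances

    class OwnGrants(object):
        def __init__(self):
            self.grantees = []         # instance attribute: a fresh list each time

    a, b = SharedGrants(), SharedGrants()
    a.grantees.append("AllUsers/READ")
    print b.grantees                   # ['AllUsers/READ'] - leaked between objects

    c, d = OwnGrants(), OwnGrants()
    c.grantees.append("AllUsers/READ")
    print d.grantees                   # [] - isolated, as r334 intends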
From: <lu...@us...> - 2009-01-12 11:25:58
Revision: 335 http://s3tools.svn.sourceforge.net/s3tools/?rev=335&view=rev Author: ludvigm Date: 2009-01-12 11:25:52 +0000 (Mon, 12 Jan 2009) Log Message: ----------- * TODO: Updated. * s3cmd: renamed (fetch_)remote_keys to remote_list and a few other renames for consistency. Modified Paths: -------------- s3cmd/trunk/ChangeLog s3cmd/trunk/TODO s3cmd/trunk/s3cmd Modified: s3cmd/trunk/ChangeLog =================================================================== --- s3cmd/trunk/ChangeLog 2009-01-07 12:40:31 UTC (rev 334) +++ s3cmd/trunk/ChangeLog 2009-01-12 11:25:52 UTC (rev 335) @@ -1,3 +1,9 @@ +2009-01-13 Michal Ludvig <mi...@lo...> + + * TODO: Updated. + * s3cmd: renamed (fetch_)remote_keys to remote_list and + a few other renames for consistency. + 2009-01-08 Michal Ludvig <mi...@lo...> * S3/S3.py: Some errors during file upload were incorrectly Modified: s3cmd/trunk/TODO =================================================================== --- s3cmd/trunk/TODO 2009-01-07 12:40:31 UTC (rev 334) +++ s3cmd/trunk/TODO 2009-01-12 11:25:52 UTC (rev 335) @@ -17,8 +17,8 @@ - For 1.0.0 - Add --include/--include-from/--rinclude* for sync - - Add 'setacl' command. - Add commands for CloudFront. + - Add 'geturl' command, both Unicode and urlencoded output. - After 1.0.0 - Speed up upload / download with multiple threads. @@ -29,7 +29,7 @@ - Keep backup files remotely on put/sync-to if requested (move the old 'object' to e.g. 'object~' and only then upload the new one). Could be more advanced to keep, say, last 5 - copies, etc. + copies, etc. - Implement GPG for sync (it's not that easy since it won't be easy to compare Modified: s3cmd/trunk/s3cmd =================================================================== --- s3cmd/trunk/s3cmd 2009-01-07 12:40:31 UTC (rev 334) +++ s3cmd/trunk/s3cmd 2009-01-12 11:25:52 UTC (rev 335) @@ -169,9 +169,9 @@ _bucket_delete_one(uri) output(u"Bucket '%s' removed" % uri.uri()) -def fetch_remote_keys(args): +def fetch_remote_list(args): remote_uris = [] - remote_keys = [] + remote_list = [] for arg in args: uri = S3Uri(arg) @@ -191,7 +191,7 @@ 'remote_uri' : object, 'key' : key } - remote_keys.append(download_item) + remote_list.append(download_item) else: for uri in remote_uris: uri_str = str(uri) @@ -215,7 +215,7 @@ 'remote_uri' : S3Uri(objectlist[key]['object_uri_str']), 'key' : key, } - remote_keys.append(download_item) + remote_list.append(download_item) else: ## No wildcards - simply append the given URI to the list key = os.path.basename(uri.object()) @@ -225,8 +225,8 @@ 'remote_uri' : uri, 'key' : key } - remote_keys.append(download_item) - return remote_keys + remote_list.append(download_item) + return remote_list def cmd_object_put(args): s3 = S3(Config()) @@ -326,29 +326,29 @@ if len(args) == 0: raise ParameterError("Nothing to download. 
Expecting S3 URI.") - remote_keys = fetch_remote_keys(args) - total_count = len(remote_keys) + remote_list = fetch_remote_list(args) + remote_count = len(remote_list) if not os.path.isdir(destination_base) or destination_base == '-': ## We were either given a file name (existing or not) or want STDOUT - if total_count > 1: + if remote_count > 1: raise ParameterError("Destination must be a directory when downloading multiple sources.") - remote_keys[0]['local_filename'] = destination_base + remote_list[0]['local_filename'] = destination_base elif os.path.isdir(destination_base): if destination_base[-1] != os.path.sep: destination_base += os.path.sep - for key in remote_keys: + for key in remote_list: key['local_filename'] = destination_base + key['key'] else: raise InternalError("WTF? Is it a dir or not? -- %s" % destination_base) seq = 0 - for item in remote_keys: + for item in remote_list: seq += 1 uri = item['remote_uri'] ## Encode / Decode destination with "replace" to make sure it's compatible with current encoding destination = unicodise_safe(item['local_filename']) - seq_label = "[%d of %d]" % (seq, total_count) + seq_label = "[%d of %d]" % (seq, remote_count) start_position = 0 @@ -890,12 +890,12 @@ set_to_acl = cfg.acl_public and "Public" or "Private" - remote_keys = fetch_remote_keys(args) - total_keys = len(remote_keys) + remote_list = fetch_remote_list(args) + remote_count = len(remote_list) seq = 0 - for key in remote_keys: + for key in remote_list: seq += 1 - seq_label = "[%d of %d]" % (seq, total_keys) + seq_label = "[%d of %d]" % (seq, remote_count) uri = key['remote_uri'] acl = s3.get_acl(uri) debug(u"acl: %s - %r" % (uri, acl.grantees)) This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
From: <lu...@us...> - 2009-01-15 10:51:39
Revision: 337 http://s3tools.svn.sourceforge.net/s3tools/?rev=337&view=rev Author: ludvigm Date: 2009-01-15 10:51:30 +0000 (Thu, 15 Jan 2009) Log Message: ----------- * s3cmd, S3/S3Uri.py, NEWS: Support for recursive 'put'. Modified Paths: -------------- s3cmd/trunk/ChangeLog s3cmd/trunk/NEWS s3cmd/trunk/S3/S3Uri.py s3cmd/trunk/s3cmd Modified: s3cmd/trunk/ChangeLog =================================================================== --- s3cmd/trunk/ChangeLog 2009-01-15 10:45:40 UTC (rev 336) +++ s3cmd/trunk/ChangeLog 2009-01-15 10:51:30 UTC (rev 337) @@ -1,3 +1,7 @@ +2009-01-15 Michal Ludvig <mi...@lo...> + + * s3cmd, S3/S3Uri.py, NEWS: Support for recursive 'put'. + 2009-01-13 Michal Ludvig <mi...@lo...> * TODO: Updated. Modified: s3cmd/trunk/NEWS =================================================================== --- s3cmd/trunk/NEWS 2009-01-15 10:45:40 UTC (rev 336) +++ s3cmd/trunk/NEWS 2009-01-15 10:51:30 UTC (rev 337) @@ -1,6 +1,7 @@ s3cmd 0.9.9-pre5 ================ * New command 'setacl' for setting ACL on existing objects. +* Recursive [put] with a slightly different semantic. s3cmd 0.9.9-pre4 - 2008-12-30 ================ Modified: s3cmd/trunk/S3/S3Uri.py =================================================================== --- s3cmd/trunk/S3/S3Uri.py 2009-01-15 10:45:40 UTC (rev 336) +++ s3cmd/trunk/S3/S3Uri.py 2009-01-15 10:51:30 UTC (rev 337) @@ -3,6 +3,7 @@ ## http://www.logix.cz/michal ## License: GPL Version 2 +import os import re import sys from BidirMap import BidirMap @@ -117,6 +118,12 @@ def uri(self): return "/".join(["file:/", self.path()]) + def isdir(self): + return os.path.isdir(self.path()) + + def dirname(self): + return os.path.dirname(self.path()) + if __name__ == "__main__": uri = S3Uri("s3://bucket/object") print "type() =", type(uri) Modified: s3cmd/trunk/s3cmd =================================================================== --- s3cmd/trunk/s3cmd 2009-01-15 10:45:40 UTC (rev 336) +++ s3cmd/trunk/s3cmd 2009-01-15 10:51:30 UTC (rev 337) @@ -169,6 +169,29 @@ _bucket_delete_one(uri) output(u"Bucket '%s' removed" % uri.uri()) +def fetch_local_list(args): + local_uris = [] + local_list = {} + + for arg in args: + uri = S3Uri(arg) + if not uri.type == 'file': + raise ParameterError("Expecting filename or directory instead of: %s" % arg) + if uri.isdir() and not cfg.recursive: + raise ParameterError("Use --recursive to upload a directory: %s" % arg) + local_uris.append(uri) + + for uri in local_uris: + filelist = _get_filelist_local(uri) + for key in filelist: + upload_item = { + 'file_info' : filelist[key], + 'relative_key' : key, + } + local_list[key] = filelist[key] + + return local_list + def fetch_remote_list(args): remote_uris = [] remote_list = [] @@ -229,39 +252,48 @@ return remote_list def cmd_object_put(args): - s3 = S3(Config()) + cfg = Config() + s3 = S3(cfg) - uri_arg = args.pop() - check_args_type(args, 'file', 'filename') + ## Each item will be a dict with the following attributes + # {'remote_uri', 'local_filename'} + upload_list = [] - uri = S3Uri(uri_arg) - if uri.type != "s3": - raise ParameterError("Expecting S3 URI instead of '%s'" % uri_arg) + if len(args) == 0: + raise ParameterError("Nothing to upload. 
Expecting a local file or directory.") - if len(args) > 1 and uri.object() != "" and not Config().force: - error(u"When uploading multiple files the last argument must") - error(u"be a S3 URI specifying just the bucket name") - error(u"WITHOUT object name!") - error(u"Alternatively use --force argument and the specified") - error(u"object name will be prefixed to all stored filenames.") - sys.exit(1) - + dst_uri = S3Uri(args.pop()) + if dst_uri.type != 's3': + raise ParameterError("Destination must be S3Uri. Got: %s" % args[-1]) + + if len(args) == 0: + raise ParameterError("Nothing to upload. Expecting a local file or directory.") + + local_list = fetch_local_list(args) + local_count = len(local_list) + + if local_count > 1 and not dst_uri.object() == "" and not dst_uri.object().endswith("/"): + raise ParameterError(u"When uploading multiple files the last argument must be a S3 URI ending with '/' or a bucket name only!") + + sorted_local_keys = local_list.keys() + sorted_local_keys.sort() seq = 0 - total = len(args) - for file in args: + for key in sorted_local_keys: seq += 1 - uri_arg_final = str(uri) - if len(args) > 1 or uri.object() == "": - uri_arg_final += os.path.basename(file) - - uri_final = S3Uri(uri_arg_final) + + if local_count > 1 or dst_uri.object() == "": + uri_final = S3Uri(u"%s%s" % (dst_uri, key)) + else: + uri_final = dst_uri + extra_headers = {} - real_filename = file - seq_label = "[%d of %d]" % (seq, total) + full_name_orig = local_list[key]['full_name'] + full_name = full_name_orig + seq_label = "[%d of %d]" % (seq, local_count) if Config().encrypt: - exitcode, real_filename, extra_headers["x-amz-meta-s3tools-gpgenc"] = gpg_encrypt(file) + exitcode, full_name, extra_headers["x-amz-meta-s3tools-gpgenc"] = gpg_encrypt(full_name_orig) try: - response = s3.object_put(real_filename, uri_final, extra_headers, extra_label = seq_label) + response = s3.object_put(full_name, uri_final, extra_headers, extra_label = seq_label) except S3UploadError, e: error(u"Upload of '%s' failed too many times. Skipping that file." % real_filename) continue @@ -271,14 +303,14 @@ speed_fmt = formatSize(response["speed"], human_readable = True, floating_point = True) if not Config().progress_meter: output(u"File '%s' stored as %s (%d bytes in %0.1f seconds, %0.2f %sB/s) %s" % - (file, uri_final, response["size"], response["elapsed"], speed_fmt[0], speed_fmt[1], - seq_label)) + (unicodise(full_name_orig), uri_final, response["size"], response["elapsed"], + speed_fmt[0], speed_fmt[1], seq_label)) if Config().acl_public: output(u"Public URL of the object is: %s" % (uri_final.public_url())) - if Config().encrypt and real_filename != file: - debug(u"Removing temporary encrypted file: %s" % real_filename) - os.remove(real_filename) + if Config().encrypt and full_name != full_name_orig: + debug(u"Removing temporary encrypted file: %s" % unicodise(full_name)) + os.remove(full_name) def cmd_object_get(args): cfg = Config() @@ -487,29 +519,31 @@ def _get_filelist_local(local_uri): info(u"Compiling list of local files...") - local_path = deunicodise(local_uri.path()) - if os.path.isdir(local_path): - loc_base = os.path.join(local_path, "") + if local_uri.isdir(): + local_base = local_uri.basename() + local_path = deunicodise(local_uri.path()) filelist = os.walk(local_path) else: - loc_base = "." 
+ os.path.sep - filelist = [( '.', [], [local_path] )] - loc_base_len = len(loc_base) + local_base = "" + local_path = deunicodise(local_uri.dirname()) + filelist = [( local_path, [], [deunicodise(local_uri.basename())] )] loc_list = {} for root, dirs, files in filelist: + rel_root = root.replace(local_path, local_base, 1) ## TODO: implement explicit exclude for f in files: full_name = os.path.join(root, f) + rel_name = os.path.join(rel_root, f) if not os.path.isfile(full_name): continue if os.path.islink(full_name): ## Synchronize symlinks... one day ## for now skip over continue - file = unicodise(full_name[loc_base_len:]) + relative_file = unicodise(rel_name) sr = os.stat_result(os.lstat(full_name)) - loc_list[file] = { - 'full_name_unicoded' : unicodise(full_name), + loc_list[relative_file] = { + 'full_name_unicode' : unicodise(full_name), 'full_name' : full_name, 'size' : sr.st_size, 'mtime' : sr.st_mtime, This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
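The heart of recursive 'put' is the relative-key computation in _get_filelist_local(): every file found by os.walk() is keyed by its path relative to the upload root, and that key is later appended to the destination S3 URI. A trimmed-down sketch of the walk (the function name and return shape are simplified from the real code):

    import os

    def list_local_files(local_path, local_base):
        keys = {}
        for root, dirs, files in os.walk(local_path):
            # Rebase 'root' onto the upload root, e.g. with
            # local_path='testsuite', local_base='testsuite' the file
            # testsuite/etc/logo.png gets the key 'testsuite/etc/logo.png'.
            rel_root = root.replace(local_path, local_base, 1)
            for f in files:
                full_name = os.path.join(root, f)
                if not os.path.isfile(full_name) or os.path.islink(full_name):
                    continue           # skip specials and symlinks, as above
                keys[os.path.join(rel_root, f)] = full_name
        return keys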
From: <lu...@us...> - 2009-01-20 06:01:51
Revision: 347 http://s3tools.svn.sourceforge.net/s3tools/?rev=347&view=rev Author: ludvigm Date: 2009-01-20 06:01:45 +0000 (Tue, 20 Jan 2009) Log Message: ----------- * s3cmd: Migrated 'sync' remote->local to the new scheme with fetch_{local,remote}_list(). Changed fetch_remote_list() to return dict() compatible with fetch_local_list(). Re-implemented --exclude / --include processing. * S3/Utils.py: functions for parsing RFC822 dates (for HTTP header responses). * S3/Config.py: placeholders for --include. Modified Paths: -------------- s3cmd/trunk/ChangeLog s3cmd/trunk/S3/Config.py s3cmd/trunk/S3/Utils.py s3cmd/trunk/s3cmd Modified: s3cmd/trunk/ChangeLog =================================================================== --- s3cmd/trunk/ChangeLog 2009-01-18 13:10:23 UTC (rev 346) +++ s3cmd/trunk/ChangeLog 2009-01-20 06:01:45 UTC (rev 347) @@ -1,3 +1,14 @@ +2009-01-20 Michal Ludvig <mi...@lo...> + + * s3cmd: Migrated 'sync' remote->local to the new + scheme with fetch_{local,remote}_list(). + Changed fetch_remote_list() to return dict() compatible + with fetch_local_list(). + Re-implemented --exclude / --include processing. + * S3/Utils.py: functions for parsing RFC822 dates (for HTTP + header responses). + * S3/Config.py: placeholders for --include. + 2009-01-15 Michal Ludvig <mi...@lo...> * s3cmd, S3/S3Uri.py, NEWS: Support for recursive 'put'. Modified: s3cmd/trunk/S3/Config.py =================================================================== --- s3cmd/trunk/S3/Config.py 2009-01-18 13:10:23 UTC (rev 346) +++ s3cmd/trunk/S3/Config.py 2009-01-20 06:01:45 UTC (rev 347) @@ -55,10 +55,14 @@ default_mime_type = "binary/octet-stream" guess_mime_type = False debug_syncmatch = False + # List of checks to be performed for 'sync' + sync_checks = ['size', 'md5'] # 'weak-timestamp' # List of compiled REGEXPs exclude = [] + include = [] # Dict mapping compiled REGEXPs back to their textual form debug_exclude = {} + debug_include = {} encoding = "utf-8" ## Creating a singleton Modified: s3cmd/trunk/S3/Utils.py =================================================================== --- s3cmd/trunk/S3/Utils.py 2009-01-18 13:10:23 UTC (rev 346) +++ s3cmd/trunk/S3/Utils.py 2009-01-20 06:01:45 UTC (rev 347) @@ -8,6 +8,7 @@ import re import string import random +import rfc822 try: from hashlib import md5 except ImportError: @@ -84,6 +85,12 @@ ## treats it as "localtime". Anyway... 
return time.mktime(dateS3toPython(date)) +def dateRFC822toPython(date): + return rfc822.parsedate(date) + +def dateRFC822toUnix(date): + return time.mktime(dateRFC822toPython(date)) + def formatSize(size, human_readable = False, floating_point = False): size = floating_point and float(size) or int(size) if human_readable: @@ -185,9 +192,9 @@ if not encoding: encoding = Config.Config().encoding - debug("Unicodising %r using %s" % (string, encoding)) if type(string) == unicode: return string + debug("Unicodising %r using %s" % (string, encoding)) try: return string.decode(encoding, errors) except UnicodeDecodeError: @@ -202,9 +209,9 @@ if not encoding: encoding = Config.Config().encoding - debug("DeUnicodising %r using %s" % (string, encoding)) if type(string) != unicode: return str(string) + debug("DeUnicodising %r using %s" % (string, encoding)) try: return string.encode(encoding, errors) except UnicodeEncodeError: Modified: s3cmd/trunk/s3cmd =================================================================== --- s3cmd/trunk/s3cmd 2009-01-18 13:10:23 UTC (rev 346) +++ s3cmd/trunk/s3cmd 2009-01-20 06:01:45 UTC (rev 347) @@ -173,52 +173,54 @@ _bucket_delete_one(uri) output(u"Bucket '%s' removed" % uri.uri()) -def fetch_local_list(args): +def fetch_local_list(args, recursive = None): local_uris = [] local_list = {} + if type(args) not in (list, tuple): + args = [args] + + if recursive == None: + recursive = cfg.recursive + for arg in args: uri = S3Uri(arg) if not uri.type == 'file': raise ParameterError("Expecting filename or directory instead of: %s" % arg) - if uri.isdir() and not cfg.recursive: + if uri.isdir() and not recursive: raise ParameterError("Use --recursive to upload a directory: %s" % arg) local_uris.append(uri) for uri in local_uris: - filelist = _get_filelist_local(uri) - for key in filelist: - upload_item = { - 'file_info' : filelist[key], - 'relative_key' : key, - } - local_list[key] = filelist[key] + local_list.update(_get_filelist_local(uri)) return local_list -def fetch_remote_list(args): +def fetch_remote_list(args, require_attribs = False, recursive = None): remote_uris = [] - remote_list = [] + remote_list = {} + if type(args) not in (list, tuple): + args = [args] + + if recursive == None: + recursive = cfg.recursive + for arg in args: uri = S3Uri(arg) if not uri.type == 's3': raise ParameterError("Expecting S3 URI instead of '%s'" % arg) remote_uris.append(uri) - if cfg.recursive: + if recursive: for uri in remote_uris: objectlist = _get_filelist_remote(uri) - for key in objectlist.iterkeys(): - object = S3Uri(objectlist[key]['object_uri_str']) + for key in objectlist: + #object = S3Uri(objectlist[key]['object_uri_str']) ## Remove leading '/' from remote filenames - if key.find("/") == 0: - key = key[1:] - download_item = { - 'remote_uri' : object, - 'key' : key - } - remote_list.append(download_item) + #if key.find("/") == 0: + # key = key[1:] + remote_list[key] = objectlist[key] else: for uri in remote_uris: uri_str = str(uri) @@ -238,21 +240,25 @@ for key in objectlist: ## Check whether the 'key' matches the requested wildcards if glob.fnmatch.fnmatch(objectlist[key]['object_uri_str'], uri_str): - download_item = { - 'remote_uri' : S3Uri(objectlist[key]['object_uri_str']), - 'key' : key, - } - remote_list.append(download_item) + remote_list[key] = objectlist[key] else: ## No wildcards - simply append the given URI to the list key = os.path.basename(uri.object()) if not key: raise ParameterError(u"Expecting S3 URI with a filename or --recursive: %s" % uri.uri()) - 
download_item = { - 'remote_uri' : uri, - 'key' : key + remote_item = { + 'base_uri': uri, + 'object_uri_str': unicode(uri), + 'object_key': uri.object() } - remote_list.append(download_item) + if require_attribs: + response = S3(cfg).object_info(uri) + remote_item.update({ + 'size': int(response['headers']['content-length']), + 'md5': response['headers']['etag'].strip('"\''), + 'timestamp' : Utils.dateRFC822toUnix(response['headers']['date']) + }) + remote_list[key] = remote_item return remote_list def cmd_object_put(args): @@ -362,26 +368,27 @@ if len(args) == 0: raise ParameterError("Nothing to download. Expecting S3 URI.") - remote_list = fetch_remote_list(args) + remote_list = fetch_remote_list(args, require_attribs = False) remote_count = len(remote_list) if not os.path.isdir(destination_base) or destination_base == '-': ## We were either given a file name (existing or not) or want STDOUT if remote_count > 1: raise ParameterError("Destination must be a directory when downloading multiple sources.") - remote_list[0]['local_filename'] = destination_base + remote_list[remote_list.keys()[0]]['local_filename'] = deunicodise(destination_base) elif os.path.isdir(destination_base): if destination_base[-1] != os.path.sep: destination_base += os.path.sep for key in remote_list: - key['local_filename'] = destination_base + key['key'] + remote_list[key]['local_filename'] = destination_base + key else: raise InternalError("WTF? Is it a dir or not? -- %s" % destination_base) seq = 0 - for item in remote_list: + for key in remote_list: seq += 1 - uri = item['remote_uri'] + item = remote_list[key] + uri = S3Uri(item['object_uri_str']) ## Encode / Decode destination with "replace" to make sure it's compatible with current encoding destination = unicodise_safe(item['local_filename']) seq_label = "[%d of %d]" % (seq, remote_count) @@ -606,35 +613,45 @@ break return rem_list -def _compare_filelists(src_list, dst_list, src_is_local_and_dst_is_remote): - info(u"Verifying checksums...") +def _filelist_filter_exclude_include(src_list): + info(u"Applying --exclude/--include") cfg = Config() - exists_list = {} exclude_list = {} - if cfg.debug_syncmatch: - logging.root.setLevel(logging.DEBUG) for file in src_list.keys(): - if not cfg.debug_syncmatch: - debug(u"CHECK: %s" % (os.sep + file)) + debug(u"CHECK: %s" % file) excluded = False for r in cfg.exclude: - ## all paths start with '/' from the base dir - if r.search(os.sep + file): - ## Can't directly 'continue' to the outer loop - ## therefore this awkward excluded switch :-( + if r.search(file): excluded = True - if cfg.debug_syncmatch: - debug(u"EXCL: %s" % (os.sep + file)) - debug(u"RULE: '%s'" % (cfg.debug_exclude[r])) - else: - info(u"%s: excluded" % file) + debug(u"EXCL-MATCH: '%s'" % (cfg.debug_exclude[r])) break if excluded: - exclude_list = src_list[file] + ## No need to check for --include if not excluded + for r in cfg.include: + if r.search(file): + excluded = False + debug(u"INCL-MATCH: '%s'" % (cfg.debug_include[r])) + break + if excluded: + ## Still excluded - ok, action it + debug(u"EXCLUDE: %s" % file) + exclude_list[file] = src_list[file] del(src_list[file]) continue else: - debug(u"PASS: %s" % (os.sep + file)) + debug(u"PASS: %s" % (file)) + return src_list, exclude_list + +def _compare_filelists(src_list, dst_list, src_is_local_and_dst_is_remote): + info(u"Verifying attributes...") + cfg = Config() + exists_list = {} + if cfg.debug_syncmatch: + logging.root.setLevel(logging.DEBUG) + + for file in src_list.keys(): + if not 
cfg.debug_syncmatch: + debug(u"CHECK: %s" % file) if dst_list.has_key(file): ## Was --skip-existing requested? if cfg.skip_existing: @@ -645,9 +662,13 @@ del(dst_list[file]) continue + attribs_match = True ## Check size first - if dst_list[file]['size'] == src_list[file]['size']: - #debug(u"%s same size: %s" % (file, dst_list[file]['size'])) + if 'size' in cfg.sync_checks and dst_list[file]['size'] != src_list[file]['size']: + debug(u"XFER: %s (size mismatch: src=%s dst=%s)" % (file, src_list[file]['size'], dst_list[file]['size'])) + attribs_match = False + + if attribs_match and 'md5' in cfg.sync_checks: ## ... same size, check MD5 if src_is_local_and_dst_is_remote: src_md5 = Utils.hash_file_md5(src_list[file]['full_name']) @@ -655,28 +676,27 @@ else: src_md5 = src_list[file]['md5'] dst_md5 = Utils.hash_file_md5(dst_list[file]['full_name']) - if src_md5 == dst_md5: - #debug(u"%s md5 matches: %s" % (file, dst_md5)) - ## Checksums are the same. - ## Remove from source-list, all that is left there will be transferred - debug(u"IGNR: %s (transfer not needed: MD5 OK, Size OK)" % file) - exists_list[file] = src_list[file] - del(src_list[file]) - else: + if src_md5 != dst_md5: + ## Checksums are different. + attribs_match = False debug(u"XFER: %s (md5 mismatch: src=%s dst=%s)" % (file, src_md5, dst_md5)) - else: - debug(u"XFER: %s (size mismatch: src=%s dst=%s)" % (file, src_list[file]['size'], dst_list[file]['size'])) - + + if attribs_match: + ## Remove from source-list, all that is left there will be transferred + debug(u"IGNR: %s (transfer not needed)" % file) + exists_list[file] = src_list[file] + del(src_list[file]) + ## Remove from destination-list, all that is left there will be deleted - #debug(u"%s removed from destination list" % file) del(dst_list[file]) + if cfg.debug_syncmatch: warning(u"Exiting because of --debug-syncmatch") - sys.exit(0) + sys.exit(1) - return src_list, dst_list, exists_list, exclude_list + return src_list, dst_list, exists_list -def cmd_sync_remote2local(src, dst): +def cmd_sync_remote2local(args): def _parse_attrs_header(attrs_header): attrs = {} for attr in attrs_header.split("/"): @@ -686,45 +706,65 @@ s3 = S3(Config()) - src_uri = S3Uri(src) - dst_uri = S3Uri(dst) + destination_base = args[-1] + local_list = fetch_local_list(destination_base, recursive = True) + remote_list = fetch_remote_list(args[:-1], recursive = True, require_attribs = True) - src_base = src_uri.uri() - dst_base = dst_uri.path() - if not src_base[-1] == "/": src_base += "/" + local_count = len(local_list) + remote_count = len(remote_list) - rem_list = _get_filelist_remote(src_uri) - rem_count = len(rem_list) + info(u"Found %d remote files, %d local files" % (remote_count, local_count)) - loc_list = _get_filelist_local(dst_uri) - loc_count = len(loc_list) - - info(u"Found %d remote files, %d local files" % (rem_count, loc_count)) + remote_list, exclude_list = _filelist_filter_exclude_include(remote_list) - _compare_filelists(rem_list, loc_list, False) + remote_list, local_list, existing_list = _compare_filelists(remote_list, local_list, False) - info(u"Summary: %d remote files to download, %d local files to delete" % (len(rem_list), len(loc_list))) + local_count = len(local_list) + remote_count = len(remote_list) - for file in loc_list: + if not os.path.isdir(destination_base): + ## We were either given a file name (existing or not) or want STDOUT + if remote_count > 1: + raise ParameterError("Destination must be a directory when downloading multiple sources.") + 
remote_list[remote_list.keys()[0]]['local_filename'] = deunicodise(destination_base) + else: + if destination_base[-1] != os.path.sep: + destination_base += os.path.sep + for key in remote_list: + remote_list[key]['local_filename'] = deunicodise(destination_base + key) + + info(u"Summary: %d remote files to download, %d local files to delete" % (remote_count, local_count)) + + for file in local_list: if cfg.delete_removed: - os.unlink(dst_base + file) - output(u"deleted '%s'" % (dst_base + file)) + os.unlink(local_list[file]['full_name']) + output(u"deleted: %s" % local_list[file]['full_name']) else: - output(u"not-deleted '%s'" % file) + info(u"deleted: %s" % local_list[file]['full_name']) + if cfg.verbosity == logging.DEBUG: + for key in exclude_list: + debug(u"excluded: %s" % unicodise(key)) + for key in remote_list: + debug(u"download: %s" % unicodise(key)) + + if cfg.dry_run: + warning(u"Exitting now because of --dry-run") + return + total_size = 0 - total_count = len(rem_list) total_elapsed = 0.0 timestamp_start = time.time() seq = 0 dir_cache = {} - file_list = rem_list.keys() + file_list = remote_list.keys() file_list.sort() for file in file_list: seq += 1 - uri = S3Uri(src_base + file) - dst_file = dst_base + file - seq_label = "[%d of %d]" % (seq, total_count) + item = remote_list[file] + uri = S3Uri(item['object_uri_str']) + dst_file = item['local_filename'] + seq_label = "[%d of %d]" % (seq, remote_count) try: dst_dir = os.path.dirname(dst_file) if not dir_cache.has_key(dst_dir): @@ -734,10 +774,8 @@ continue try: open_flags = os.O_CREAT - if cfg.force: - open_flags |= os.O_TRUNC - else: - open_flags |= os.O_EXCL + open_flags |= os.O_TRUNC + # open_flags |= os.O_EXCL debug(u"dst_file=%s" % dst_file) # This will have failed should the file exist @@ -907,22 +945,15 @@ info(outstr) def cmd_sync(args): - src = args.pop(0) - dst = args.pop(0) - if (len(args)): - raise ParameterError("Too many parameters! Expected: %s" % commands['sync']['param']) + if (len(args) < 2): + raise ParameterError("Too few parameters! Expected: %s" % commands['sync']['param']) - if S3Uri(src).type == "s3" and not src.endswith('/'): - src += "/" + if S3Uri(args[0]).type == "file" and S3Uri(args[-1]).type == "s3": + return cmd_sync_local2remote(args) + if S3Uri(args[0]).type == "s3" and S3Uri(args[-1]).type == "file": + return cmd_sync_remote2local(args) + raise ParameterError("Invalid source/destination: '%s'" % "' '".join(args)) - if not dst.endswith('/'): - dst += "/" - - if S3Uri(src).type == "file" and S3Uri(dst).type == "s3": - return cmd_sync_local2remote(src, dst) - if S3Uri(src).type == "s3" and S3Uri(dst).type == "file": - return cmd_sync_remote2local(src, dst) - def cmd_setacl(args): s3 = S3(cfg) @@ -1326,8 +1357,7 @@ debug(u"processing rule: %s" % ex) exc = re.compile(glob.fnmatch.translate(ex)) cfg.exclude.append(exc) - if options.debug_syncmatch: - cfg.debug_exclude[exc] = ex + cfg.debug_exclude[exc] = ex ## Process REGEXP style excludes if options.rexclude is None: @@ -1343,8 +1373,7 @@ debug(u"processing rule: %s" % ex) exc = re.compile(ex) cfg.exclude.append(exc) - if options.debug_syncmatch: - cfg.debug_exclude[exc] = ex + cfg.debug_exclude[exc] = ex if cfg.encrypt and cfg.gpg_passphrase == "": error(u"Encryption requested but no passphrase set in config file.") This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
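The re-implemented filter gives --include the power to override --exclude: a key matching some exclude pattern is still transferred if any include pattern matches it as well. A self-contained sketch of that precedence, compiling patterns the same way the option parsing above does (fnmatch.translate for shell-style rules):

    import re
    import fnmatch

    def is_excluded(key, excludes, includes):
        if not any(r.search(key) for r in excludes):
            return False               # no --exclude hit: keep the file
        # Excluded - but --include gets the last word.
        return not any(r.search(key) for r in includes)

    excludes = [re.compile(fnmatch.translate("*.png"))]
    includes = [re.compile(fnmatch.translate("*logo*"))]
    print is_excluded("etc/photo.png", excludes, includes)   # True - dropped
    print is_excluded("etc/logo.png", excludes, includes)    # False - re-included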
From: <lu...@us...> - 2009-01-20 14:24:59
Revision: 348 http://s3tools.svn.sourceforge.net/s3tools/?rev=348&view=rev Author: ludvigm Date: 2009-01-20 14:24:42 +0000 (Tue, 20 Jan 2009) Log Message: ----------- * s3cmd: Migrated 'sync' local->remote to the new scheme with fetch_{local,remote}_list(). Enabled --dry-run for 'sync'. Modified Paths: -------------- s3cmd/trunk/ChangeLog s3cmd/trunk/s3cmd Modified: s3cmd/trunk/ChangeLog =================================================================== --- s3cmd/trunk/ChangeLog 2009-01-20 06:01:45 UTC (rev 347) +++ s3cmd/trunk/ChangeLog 2009-01-20 14:24:42 UTC (rev 348) @@ -1,3 +1,9 @@ +2009-01-21 Michal Ludvig <mi...@lo...> + + * s3cmd: Migrated 'sync' local->remote to the new + scheme with fetch_{local,remote}_list(). + Enabled --dry-run for 'sync'. + 2009-01-20 Michal Ludvig <mi...@lo...> * s3cmd: Migrated 'sync' remote->local to the new Modified: s3cmd/trunk/s3cmd =================================================================== --- s3cmd/trunk/s3cmd 2009-01-20 06:01:45 UTC (rev 347) +++ s3cmd/trunk/s3cmd 2009-01-20 14:24:42 UTC (rev 348) @@ -21,10 +21,6 @@ from logging import debug, info, warning, error from distutils.spawn import find_executable -error("This s3cmd from SVN is broken!") -error("Use revision 335 or s3cmd-0.9.9-pre4") -sys.exit(1) - def output(message): sys.stdout.write(message + "\n") @@ -175,7 +171,7 @@ def fetch_local_list(args, recursive = None): local_uris = [] - local_list = {} + local_list = SortedDict() if type(args) not in (list, tuple): args = [args] @@ -198,7 +194,7 @@ def fetch_remote_list(args, require_attribs = False, recursive = None): remote_uris = [] - remote_list = {} + remote_list = SortedDict() if type(args) not in (list, tuple): args = [args] @@ -531,27 +527,27 @@ def _get_filelist_local(local_uri): info(u"Compiling list of local files...") if local_uri.isdir(): - local_base = local_uri.basename() + local_base = deunicodise(local_uri.basename()) local_path = deunicodise(local_uri.path()) filelist = os.walk(local_path) else: local_base = "" local_path = deunicodise(local_uri.dirname()) filelist = [( local_path, [], [deunicodise(local_uri.basename())] )] - loc_list = {} + loc_list = SortedDict() for root, dirs, files in filelist: rel_root = root.replace(local_path, local_base, 1) - ## TODO: implement explicit exclude for f in files: full_name = os.path.join(root, f) - rel_name = os.path.join(rel_root, f) if not os.path.isfile(full_name): continue if os.path.islink(full_name): ## Synchronize symlinks... 
one day ## for now skip over continue - relative_file = unicodise(rel_name) + relative_file = unicodise(os.path.join(rel_root, f)) + if relative_file.startswith('./'): + relative_file = relative_file[2:] sr = os.stat_result(os.lstat(full_name)) loc_list[relative_file] = { 'full_name_unicode' : unicodise(full_name), @@ -589,7 +585,7 @@ rem_base = rem_base[:rem_base.rfind('/')+1] remote_uri = S3Uri("s3://%s/%s" % (remote_uri.bucket(), rem_base)) rem_base_len = len(rem_base) - rem_list = {} + rem_list = SortedDict() break_now = False for object in response['list']: if object['Key'] == rem_base_original and object['Key'][-1] != os.path.sep: @@ -616,7 +612,7 @@ def _filelist_filter_exclude_include(src_list): info(u"Applying --exclude/--include") cfg = Config() - exclude_list = {} + exclude_list = SortedDict() for file in src_list.keys(): debug(u"CHECK: %s" % file) excluded = False @@ -645,7 +641,7 @@ def _compare_filelists(src_list, dst_list, src_is_local_and_dst_is_remote): info(u"Verifying attributes...") cfg = Config() - exists_list = {} + exists_list = SortedDict() if cfg.debug_syncmatch: logging.root.setLevel(logging.DEBUG) @@ -735,23 +731,22 @@ info(u"Summary: %d remote files to download, %d local files to delete" % (remote_count, local_count)) - for file in local_list: - if cfg.delete_removed: - os.unlink(local_list[file]['full_name']) - output(u"deleted: %s" % local_list[file]['full_name']) - else: - info(u"deleted: %s" % local_list[file]['full_name']) - - if cfg.verbosity == logging.DEBUG: + if cfg.dry_run: for key in exclude_list: - debug(u"excluded: %s" % unicodise(key)) + output(u"excluded: %s" % unicodise(key)) + for key in local_list: + output(u"delete: %s" % local_list[key]['full_name_unicode']) for key in remote_list: - debug(u"download: %s" % unicodise(key)) + output(u"download: %s -> %s" % (remote_list[key]['object_uri_str'], remote_list[key]['local_filename'])) - if cfg.dry_run: warning(u"Exitting now because of --dry-run") return + if cfg.delete_removed: + for key in local_list: + os.unlink(local_list[key]['full_name']) + output(u"deleted: %s" % local_list[key]['full_name_unicode']) + total_size = 0 total_elapsed = 0.0 timestamp_start = time.time() @@ -839,7 +834,7 @@ else: info(outstr) -def cmd_sync_local2remote(src, dst): +def cmd_sync_local2remote(args): def _build_attr_header(src): import pwd, grp attrs = {} @@ -870,57 +865,74 @@ s3 = S3(cfg) if cfg.encrypt: - error(u"S3cmd 'sync' doesn't support GPG encryption, sorry.") + error(u"S3cmd 'sync' doesn't yet support GPG encryption, sorry.") error(u"Either use unconditional 's3cmd put --recursive'") error(u"or disable encryption with --no-encrypt parameter.") sys.exit(1) + destination_base = args[-1] + local_list = fetch_local_list(args[:-1], recursive = True) + remote_list = fetch_remote_list(destination_base, recursive = True, require_attribs = True) - src_uri = S3Uri(src) - dst_uri = S3Uri(dst) + local_count = len(local_list) + remote_count = len(remote_list) - loc_list = _get_filelist_local(src_uri) - loc_count = len(loc_list) - - rem_list = _get_filelist_remote(dst_uri) - rem_count = len(rem_list) + info(u"Found %d local files, %d remote files" % (local_count, remote_count)) - info(u"Found %d local files, %d remote files" % (loc_count, rem_count)) + local_list, exclude_list = _filelist_filter_exclude_include(local_list) - _compare_filelists(loc_list, rem_list, True) + local_list, remote_list, existing_list = _compare_filelists(local_list, remote_list, True) - info(u"Summary: %d local files to upload, %d remote files to 
delete" % (len(loc_list), len(rem_list))) + local_count = len(local_list) + remote_count = len(remote_list) - for file in rem_list: - uri = S3Uri("s3://" + dst_uri.bucket()+"/"+rem_list[file]['object_key']) - if cfg.delete_removed: - response = s3.object_delete(uri) - output(u"deleted '%s'" % uri) - else: - output(u"not-deleted '%s'" % uri) + if not destination_base.endswith("/"): + if local_count > 1: + raise ParameterError("Destination S3 URI must end with '/' (ie must refer to a directory on the remote side).") + local_list[local_list.keys()[0]]['remote_uri'] = unicodise(destination_base) + else: + for key in local_list: + local_list[key]['remote_uri'] = unicodise(destination_base + key) + info(u"Summary: %d local files to upload, %d remote files to delete" % (local_count, remote_count)) + + if cfg.dry_run: + for key in exclude_list: + output(u"excluded: %s" % unicodise(key)) + for key in remote_list: + output(u"deleted: %s" % remote_list[key]['object_uri_str']) + for key in local_list: + output(u"upload: %s -> %s" % (local_list[key]['full_name_unicode'], local_list[key]['remote_uri'])) + + warning(u"Exitting now because of --dry-run") + return + + if cfg.delete_removed: + for key in remote_list: + uri = S3Uri(remote_list[key]['object_uri_str']) + s3.object_delete(uri) + output(u"deleted: '%s'" % uri) + total_size = 0 - total_count = len(loc_list) total_elapsed = 0.0 timestamp_start = time.time() seq = 0 - dst_base = dst_uri.uri() - if not dst_base[-1] == "/": dst_base += "/" - file_list = loc_list.keys() + file_list = local_list.keys() file_list.sort() for file in file_list: seq += 1 - src = loc_list[file] - uri = S3Uri(dst_base + file) - seq_label = "[%d of %d]" % (seq, total_count) + item = local_list[file] + src = item['full_name'] + uri = S3Uri(item['remote_uri']) + seq_label = "[%d of %d]" % (seq, local_count) attr_header = None if cfg.preserve_attrs: - attr_header = _build_attr_header(src['full_name']) + attr_header = _build_attr_header(src) debug(attr_header) try: - response = s3.object_put(src['full_name'], uri, attr_header, extra_label = seq_label) + response = s3.object_put(src, uri, attr_header, extra_label = seq_label) except S3UploadError, e: - error(u"%s: upload failed too many times. Skipping that file." % src['full_name_unicode']) + error(u"%s: upload failed too many times. Skipping that file." % item['full_name_unicode']) continue except InvalidFileError, e: warning(u"File can not be uploaded: %s" % e) @@ -928,8 +940,8 @@ speed_fmt = formatSize(response["speed"], human_readable = True, floating_point = True) if not cfg.progress_meter: output(u"File '%s' stored as %s (%d bytes in %0.1f seconds, %0.2f %sB/s) %s" % - (src, uri, response["size"], response["elapsed"], speed_fmt[0], speed_fmt[1], - seq_label)) + (item['full_name_unicode'], uri, response["size"], response["elapsed"], + speed_fmt[0], speed_fmt[1], seq_label)) total_size += response["size"] total_elapsed = time.time() - timestamp_start @@ -1239,7 +1251,7 @@ optparser.add_option("-c", "--config", dest="config", metavar="FILE", help="Config file name. Defaults to %default") optparser.add_option( "--dump-config", dest="dump_config", action="store_true", help="Dump current configuration after parsing config files and command line options and exit.") - #optparser.add_option("-n", "--dry-run", dest="dry_run", action="store_true", help="Only show what should be uploaded or downloaded but don't actually do it. 
May still perform S3 requests to get bucket listings and other information though.") + optparser.add_option("-n", "--dry-run", dest="dry_run", action="store_true", help="Only show what should be uploaded or downloaded but don't actually do it. May still perform S3 requests to get bucket listings and other information though (only for [sync] command)") optparser.add_option("-e", "--encrypt", dest="encrypt", action="store_true", help="Encrypt files before uploading to S3.") optparser.add_option( "--no-encrypt", dest="encrypt", action="store_false", help="Don't encrypt files.") This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
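One structural detail in this migration: fetch_local_list() and fetch_remote_list() now return SortedDict instead of a plain dict. The class is not part of this diff (it presumably lives elsewhere in the S3/ package); the behaviour the sync code relies on is simply that keys come back in a stable, sorted order. A minimal stand-in under that assumption:

    class SortedDict(dict):
        # Assumed stand-in, not the real s3cmd class: all the callers
        # above need is deterministic key ordering for plans and logs.
        def keys(self):
            return sorted(dict.keys(self))
        def __iter__(self):
            return iter(self.keys())

    d = SortedDict()
    d["etc/zz.bin"] = 1
    d["etc/aa.bin"] = 2
    print d.keys()        # ['etc/aa.bin', 'etc/zz.bin']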
From: <lu...@us...> - 2009-01-21 22:14:21
Revision: 349 http://s3tools.svn.sourceforge.net/s3tools/?rev=349&view=rev Author: ludvigm Date: 2009-01-21 22:14:17 +0000 (Wed, 21 Jan 2009) Log Message: ----------- * run-tests.py: Updated paths for the new sync semantics. * s3cmd, S3/S3.py: Small fixes to make testsuite happy. Modified Paths: -------------- s3cmd/trunk/ChangeLog s3cmd/trunk/S3/S3.py s3cmd/trunk/run-tests.py s3cmd/trunk/s3cmd Modified: s3cmd/trunk/ChangeLog =================================================================== --- s3cmd/trunk/ChangeLog 2009-01-20 14:24:42 UTC (rev 348) +++ s3cmd/trunk/ChangeLog 2009-01-21 22:14:17 UTC (rev 349) @@ -1,3 +1,9 @@ +2009-01-22 Michal Ludvig <mi...@lo...> + + * run-tests.py: Updated paths for the new sync + semantics. + * s3cmd, S3/S3.py: Small fixes to make testsuite happy. + 2009-01-21 Michal Ludvig <mi...@lo...> * s3cmd: Migrated 'sync' local->remote to the new Modified: s3cmd/trunk/S3/S3.py =================================================================== --- s3cmd/trunk/S3/S3.py 2009-01-20 14:24:42 UTC (rev 348) +++ s3cmd/trunk/S3/S3.py 2009-01-21 22:14:17 UTC (rev 349) @@ -207,7 +207,7 @@ if uri.type != "s3": raise ValueError("Expected URI type 's3', got '%s'" % uri.type) request = self.create_request("OBJECT_GET", uri = uri) - labels = { 'source' : uri, 'destination' : stream.name, 'extra' : extra_label } + labels = { 'source' : unicodise(uri.uri()), 'destination' : unicodise(stream.name), 'extra' : extra_label } response = self.recv_file(request, stream, labels, start_position) return response Modified: s3cmd/trunk/run-tests.py =================================================================== --- s3cmd/trunk/run-tests.py 2009-01-20 14:24:42 UTC (rev 348) +++ s3cmd/trunk/run-tests.py 2009-01-21 22:14:17 UTC (rev 349) @@ -153,6 +153,9 @@ cmd.append(dir_name) return test(label, cmd) +def test_flushdir(label, dir_name): + test_rmdir(label + "(rm)", dir_name) + return test_mkdir(label + "(mk)", dir_name) argv = sys.argv[1:] while argv: @@ -208,19 +211,18 @@ ## ====== Sync to S3 -test_s3cmd("Sync to S3", ['sync', 'testsuite', 's3://s3cmd-autotest-1/xyz/', '--exclude', '.svn/*', '--exclude', '*.png', '--no-encrypt', '--exclude-from', 'testsuite/exclude.encodings' ]) +test_s3cmd("Sync to S3", ['sync', 'testsuite/', 's3://s3cmd-autotest-1/xyz/', '--exclude', '.svn/*', '--exclude', '*.png', '--no-encrypt', '--exclude-from', 'testsuite/exclude.encodings' ], + must_not_find_re = [ "\.svn/", "\.png$" ]) if have_encoding: ## ====== Sync UTF-8 / GBK / ... 
to S3 - test_s3cmd("Sync %s to S3" % encoding, ['sync', 'testsuite/encodings/' + encoding, enc_base_remote, '--exclude', '.svn/*', '--no-encrypt' ]) + test_s3cmd("Sync %s to S3" % encoding, ['sync', 'testsuite/encodings/' + encoding, 's3://s3cmd-autotest-1/xyz/encodings/', '--exclude', '.svn/*', '--no-encrypt' ], + must_find = [ u"File 'testsuite/encodings/%(encoding)s/%(pattern)s' stored as 's3://s3cmd-autotest-1/xyz/encodings/%(encoding)s/%(pattern)s'" % { 'encoding' : encoding, 'pattern' : enc_pattern } ]) ## ====== List bucket content -must_find_re = [ u"D s3://s3cmd-autotest-1/xyz/binary/$", u"D s3://s3cmd-autotest-1/xyz/etc/$" ] +must_find_re = [ u"DIR s3://s3cmd-autotest-1/xyz/binary/$", u"DIR s3://s3cmd-autotest-1/xyz/etc/$" ] must_not_find = [ u"random-crap.md5", u".svn" ] -if have_encoding: - must_find_re.append(u"D %s$" % enc_base_remote) - must_not_find.append(enc_pattern) test_s3cmd("List bucket content", ['ls', 's3://s3cmd-autotest-1/xyz/'], must_find_re = must_find_re, must_not_find = must_not_find) @@ -229,7 +231,7 @@ ## ====== List bucket recursive must_find = [ u"s3://s3cmd-autotest-1/xyz/binary/random-crap.md5" ] if have_encoding: - must_find.append(enc_base_remote + enc_pattern) + must_find.append(u"s3://s3cmd-autotest-1/xyz/encodings/%(encoding)s/%(pattern)s" % { 'encoding' : encoding, 'pattern' : enc_pattern }) test_s3cmd("List bucket recursive", ['ls', '--recursive', 's3://s3cmd-autotest-1'], must_find = must_find, must_not_find = [ "logo.png" ]) @@ -238,21 +240,25 @@ # test_s3cmd("Recursive put", ['put', '--recursive', 'testsuite/etc', 's3://s3cmd-autotest-1/xyz/']) -## ====== rmdir local -test_rmdir("Removing local target", 'testsuite-out') +## ====== Clean up local destination dir +test_flushdir("Clean testsuite-out/", "testsuite-out") ## ====== Sync from S3 -must_find = [ "stored as testsuite-out/binary/random-crap.md5 " ] +must_find = [ "File 's3://s3cmd-autotest-1/xyz/binary/random-crap.md5' stored as 'testsuite-out/xyz/binary/random-crap.md5'" ] if have_encoding: - must_find.append("stored as testsuite-out/" + encoding + "/" + enc_pattern) + must_find.append(u"File 's3://s3cmd-autotest-1/xyz/encodings/%(encoding)s/%(pattern)s' stored as 'testsuite-out/xyz/encodings/%(encoding)s/%(pattern)s' " % { 'encoding' : encoding, 'pattern' : enc_pattern }) test_s3cmd("Sync from S3", ['sync', 's3://s3cmd-autotest-1/xyz', 'testsuite-out'], must_find = must_find) +## ====== Clean up local destination dir +test_flushdir("Clean testsuite-out/", "testsuite-out") + + ## ====== Put public, guess MIME test_s3cmd("Put public, guess MIME", ['put', '--guess-mime-type', '--acl-public', 'testsuite/etc/logo.png', 's3://s3cmd-autotest-1/xyz/etc/logo.png'], - must_find = [ "stored as s3://s3cmd-autotest-1/xyz/etc/logo.png" ]) + must_find = [ "stored as 's3://s3cmd-autotest-1/xyz/etc/logo.png'" ]) ## ====== Retrieve from URL @@ -285,7 +291,8 @@ ## ====== Sync more to S3 -test_s3cmd("Sync more to S3", ['sync', 'testsuite', 's3://s3cmd-autotest-1/xyz/', '--no-encrypt' ]) +test_s3cmd("Sync more to S3", ['sync', 'testsuite/', 's3://s3cmd-autotest-1/xyz/', '--no-encrypt' ], + must_find = [ "File 'testsuite/.svn/format' stored as 's3://s3cmd-autotest-1/xyz/.svn/format' " ]) ## ====== Rename within S3 @@ -302,8 +309,9 @@ ## ====== Sync more from S3 test_s3cmd("Sync more from S3", ['sync', '--delete-removed', 's3://s3cmd-autotest-1/xyz', 'testsuite-out'], - must_find = [ "deleted 'testsuite-out/logo.png'", "stored as testsuite-out/etc2/Logo.PNG (22059 bytes", - "stored as 
testsuite-out/.svn/format " ], + must_find = [ "deleted: testsuite-out/logo.png", + "File 's3://s3cmd-autotest-1/xyz/etc2/Logo.PNG' stored as 'testsuite-out/xyz/etc2/Logo.PNG' (22059 bytes", + "File 's3://s3cmd-autotest-1/xyz/.svn/format' stored as 'testsuite-out/xyz/.svn/format' " ], must_not_find_re = [ "not-deleted.*etc/logo.png" ]) Modified: s3cmd/trunk/s3cmd =================================================================== --- s3cmd/trunk/s3cmd 2009-01-20 14:24:42 UTC (rev 348) +++ s3cmd/trunk/s3cmd 2009-01-21 22:14:17 UTC (rev 349) @@ -118,8 +118,8 @@ raise for prefix in response['common_prefixes']: - output(u"%s %s" % ( - "D".rjust(28), + output(u"%s %s" % ( + "DIR".rjust(26), uri.compose_uri(bucket, prefix["Prefix"]))) for object in response["list"]: @@ -308,7 +308,7 @@ continue speed_fmt = formatSize(response["speed"], human_readable = True, floating_point = True) if not Config().progress_meter: - output(u"File '%s' stored as %s (%d bytes in %0.1f seconds, %0.2f %sB/s) %s" % + output(u"File '%s' stored as '%s' (%d bytes in %0.1f seconds, %0.2f %sB/s) %s" % (unicodise(full_name_orig), uri_final, response["size"], response["elapsed"], speed_fmt[0], speed_fmt[1], seq_label)) if Config().acl_public: @@ -772,7 +772,7 @@ open_flags |= os.O_TRUNC # open_flags |= os.O_EXCL - debug(u"dst_file=%s" % dst_file) + debug(u"dst_file=%s" % unicodise(dst_file)) # This will have failed should the file exist os.close(os.open(dst_file, open_flags)) # Yeah I know there is a race condition here. Sadly I don't know how to open() in exclusive mode. @@ -818,8 +818,8 @@ continue speed_fmt = formatSize(response["speed"], human_readable = True, floating_point = True) if not Config().progress_meter: - output(u"File '%s' stored as %s (%d bytes in %0.1f seconds, %0.2f %sB/s) %s" % - (uri, dst_file, response["size"], response["elapsed"], speed_fmt[0], speed_fmt[1], + output(u"File '%s' stored as '%s' (%d bytes in %0.1f seconds, %0.2f %sB/s) %s" % + (uri, unicodise(dst_file), response["size"], response["elapsed"], speed_fmt[0], speed_fmt[1], seq_label)) total_size += response["size"] @@ -939,7 +939,7 @@ continue speed_fmt = formatSize(response["speed"], human_readable = True, floating_point = True) if not cfg.progress_meter: - output(u"File '%s' stored as %s (%d bytes in %0.1f seconds, %0.2f %sB/s) %s" % + output(u"File '%s' stored as '%s' (%d bytes in %0.1f seconds, %0.2f %sB/s) %s" % (item['full_name_unicode'], uri, response["size"], response["elapsed"], speed_fmt[0], speed_fmt[1], seq_label)) total_size += response["size"] @@ -977,7 +977,7 @@ for key in remote_list: seq += 1 seq_label = "[%d of %d]" % (seq, remote_count) - uri = key['remote_uri'] + uri = S3Uri(remote_list[key]['object_uri_str']) acl = s3.get_acl(uri) debug(u"acl: %s - %r" % (uri, acl.grantees)) if cfg.acl_public: This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
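For context on the testsuite edits above: every test() / test_s3cmd() call runs one command and asserts over its captured output with the must_find / must_find_re / must_not_find(_re) lists. A condensed, hypothetical version of those assertions:

    import re

    def check_output(stdout, must_find=(), must_find_re=(),
                     must_not_find=(), must_not_find_re=()):
        for s in must_find:
            assert s in stdout, "missing string: %r" % s
        for r in must_find_re:
            assert re.search(r, stdout), "missing pattern: %r" % r
        for s in must_not_find:
            assert s not in stdout, "unexpected string: %r" % s
        for r in must_not_find_re:
            assert not re.search(r, stdout), "unexpected pattern: %r" % r

So, for example, the "Sync more from S3" test now passes only if the output contains the new "deleted: testsuite-out/logo.png" wording introduced in this revision.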