Thread: [cedar-backup-svn] SF.net SVN: cedar-backup:[1024] cedar-backup2/trunk
Brought to you by: pronovic
From: <pro...@us...> - 2011-10-12 14:17:24
Revision: 1024
http://cedar-backup.svn.sourceforge.net/cedar-backup/?rev=1024&view=rev
Author: pronovic
Date: 2011-10-12 14:17:14 +0000 (Wed, 12 Oct 2011)
Log Message:
-----------
Update CREDITS
Modified Paths:
--------------
cedar-backup2/trunk/CREDITS
cedar-backup2/trunk/Changelog
Modified: cedar-backup2/trunk/CREDITS
===================================================================
--- cedar-backup2/trunk/CREDITS 2011-10-11 23:44:50 UTC (rev 1023)
+++ cedar-backup2/trunk/CREDITS 2011-10-12 14:17:14 UTC (rev 1024)
@@ -32,12 +32,15 @@
the optimized media blanking strategy as well as improvements to the DVD
writer implementation.
+The PostgreSQL extension was contributed by Antoine Beaupre, based on
+the existing MySQL extension.
+
+Lukasz K. Nowak helped debug the split functionality and also provided
+patches for parts of the documentation.
+
Zoran Bosnjak contributed changes to collect.py to implement recursive
collect behavior based on recursion level.
-The PostgreSQL extension was contributed by Antoine Beaupre, based on
-the existing MySQL extension.
-
Minor code snippets derived from newsgroup and mailing list postings are
not generally attributed unless I used someone else's source code verbatim.
Modified: cedar-backup2/trunk/Changelog
===================================================================
--- cedar-backup2/trunk/Changelog 2011-10-11 23:44:50 UTC (rev 1023)
+++ cedar-backup2/trunk/Changelog 2011-10-12 14:17:14 UTC (rev 1024)
@@ -1,12 +1,14 @@
Version 2.21.0 unreleased
+ * Update CREDITS file to consistently credit all contributers.
+ * Minor tweaks based on PyLint analysis (mostly config changes).
* Make ISO image unit tests more robust in writersutiltests.py.
- Handle failures with unmount (wait 1 second and try again)
- Programmatically disable (and re-enable) the GNOME auto-mounter
* Implement configurable recursion for collect action.
- Update collect.py to handle recursion (patch by Zoran Bosnjak)
- Add new configuration item CollectDir.recursionLevel
- - Update user manual, CREDITS, etc. for new functionality
+ - Update user manual to discuss new functionality
Version 2.20.1 19 Oct 2010
This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site.
From: <pro...@us...> - 2011-10-12 14:23:01
Revision: 1026
http://cedar-backup.svn.sourceforge.net/cedar-backup/?rev=1026&view=rev
Author: pronovic
Date: 2011-10-12 14:22:54 +0000 (Wed, 12 Oct 2011)
Log Message:
-----------
Release 2.21.0
Modified Paths:
--------------
cedar-backup2/trunk/CedarBackup2/release.py
cedar-backup2/trunk/Changelog
Modified: cedar-backup2/trunk/CedarBackup2/release.py
===================================================================
--- cedar-backup2/trunk/CedarBackup2/release.py 2011-10-12 14:17:51 UTC (rev 1025)
+++ cedar-backup2/trunk/CedarBackup2/release.py 2011-10-12 14:22:54 UTC (rev 1026)
@@ -35,6 +35,6 @@
EMAIL = "pro...@ie..."
COPYRIGHT = "2004-2011"
VERSION = "2.21.0"
-DATE = "unreleased"
+DATE = "12 Oct 2011"
URL = "http://cedar-backup.sourceforge.net/"
Modified: cedar-backup2/trunk/Changelog
===================================================================
--- cedar-backup2/trunk/Changelog 2011-10-12 14:17:51 UTC (rev 1025)
+++ cedar-backup2/trunk/Changelog 2011-10-12 14:22:54 UTC (rev 1026)
@@ -1,4 +1,4 @@
-Version 2.21.0 unreleased
+Version 2.21.0 12 Oct 2011
* Update CREDITS file to consistently credit all contributers.
* Minor tweaks based on PyLint analysis (mostly config changes).
From: <pro...@us...> - 2013-03-21 14:15:12
Revision: 1027
http://cedar-backup.svn.sourceforge.net/cedar-backup/?rev=1027&view=rev
Author: pronovic
Date: 2013-03-21 14:15:05 +0000 (Thu, 21 Mar 2013)
Log Message:
-----------
Apply patches from Jan Medlock
Modified Paths:
--------------
cedar-backup2/trunk/CedarBackup2/extend/split.py
cedar-backup2/trunk/Changelog
cedar-backup2/trunk/doc/cback.1
Modified: cedar-backup2/trunk/CedarBackup2/extend/split.py
===================================================================
--- cedar-backup2/trunk/CedarBackup2/extend/split.py 2011-10-12 14:22:54 UTC (rev 1026)
+++ cedar-backup2/trunk/CedarBackup2/extend/split.py 2013-03-21 14:15:05 UTC (rev 1027)
@@ -482,7 +482,7 @@
(result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=False)
if result != 0:
raise IOError("Error [%d] calling split for [%s]." % (result, sourcePath))
- pattern = re.compile(r"(creating file `)(%s)(.*)(')" % prefix)
+ pattern = re.compile(r"(creating file [`'])(%s)(.*)(')" % prefix)
match = pattern.search(output[-1:][0])
if match is None:
raise IOError("Unable to parse output from split command.")
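[Editor's note: the hunk above widens the opening quote to a character class because newer GNU coreutils `split` prints "creating file 'name'" where older versions printed "creating file `name'". A minimal standalone sketch of the patched pattern against both output styles; the prefix value here is illustrative, not taken from Cedar Backup configuration:

```python
import re

# Hypothetical split prefix, for illustration only
prefix = "prefix_"

# The patched pattern from split.py: [`'] accepts either opening quote style
pattern = re.compile(r"(creating file [`'])(%s)(.*)(')" % prefix)

old_style = "creating file `prefix_00001'"   # older coreutils output
new_style = "creating file 'prefix_00001'"   # newer coreutils output

for line in (old_style, new_style):
    match = pattern.search(line)
    # group(2) + group(3) reassembles the chunk filename
    print(match.group(2) + match.group(3))
```

With the original single-backtick pattern, the second line would fail to match and the code would raise "Unable to parse output from split command."]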
Modified: cedar-backup2/trunk/Changelog
===================================================================
--- cedar-backup2/trunk/Changelog 2011-10-12 14:22:54 UTC (rev 1026)
+++ cedar-backup2/trunk/Changelog 2013-03-21 14:15:05 UTC (rev 1027)
@@ -1,3 +1,9 @@
+Version 2.21.1 unreleased
+
+ * Apply patches provided by Jan Medlock as Debian bugs.
+ * Fix typo in manpage (showed -s instead of -D)
+ * Support output from latest /usr/bin/split (' vs. `)
+
Version 2.21.0 12 Oct 2011
* Update CREDITS file to consistently credit all contributers.
Modified: cedar-backup2/trunk/doc/cback.1
===================================================================
--- cedar-backup2/trunk/doc/cback.1 2011-10-12 14:22:54 UTC (rev 1026)
+++ cedar-backup2/trunk/doc/cback.1 2013-03-21 14:15:05 UTC (rev 1027)
@@ -125,7 +125,7 @@
Under some circumstances, this is useful information to include along with a
bug report.
.TP
-\fB\-s\fR, \fB\-\-diagnostics\fR
+\fB\-D\fR, \fB\-\-diagnostics\fR
Display runtime diagnostic information and then exit. This diagnostic
information is often useful when filing a bug report.
.SH ACTIONS
From: <pro...@us...> - 2013-03-21 14:38:24
Revision: 1029
http://cedar-backup.svn.sourceforge.net/cedar-backup/?rev=1029&view=rev
Author: pronovic
Date: 2013-03-21 14:38:17 +0000 (Thu, 21 Mar 2013)
Log Message:
-----------
Release 2.21.1
Modified Paths:
--------------
cedar-backup2/trunk/CREDITS
cedar-backup2/trunk/CedarBackup2/release.py
cedar-backup2/trunk/Changelog
Modified: cedar-backup2/trunk/CREDITS
===================================================================
--- cedar-backup2/trunk/CREDITS 2013-03-21 14:33:51 UTC (rev 1028)
+++ cedar-backup2/trunk/CREDITS 2013-03-21 14:38:17 UTC (rev 1029)
@@ -22,10 +22,10 @@
Pronovici. Some portions have been based on other pieces of open-source
software, as indicated in the source code itself.
-Unless otherwise indicated, all Cedar Backup source code is Copyright
-(c) 2004-2011 Kenneth J. Pronovici and is released under the GNU General
-Public License. The contents of the GNU General Public License can be
-found in the LICENSE file, or can be downloaded from http://www.gnu.org/.
+Unless otherwise indicated, all Cedar Backup source code is Copyright (c)
+2004-2011,2013 Kenneth J. Pronovici and is released under the GNU General
+Public License, version 2. The contents of the GNU General Public License
+can be found in the LICENSE file, or can be downloaded from http://www.gnu.org/.
Various patches have been contributed to the Cedar Backup codebase by
Dmitry Rutsky. Major contributions include the initial implementation for
@@ -41,6 +41,9 @@
Zoran Bosnjak contributed changes to collect.py to implement recursive
collect behavior based on recursion level.
+Jan Medlock contributed patches to improve the manpage and to support
+recent versions of the /usr/bin/split command.
+
Minor code snippets derived from newsgroup and mailing list postings are
not generally attributed unless I used someone else's source code verbatim.
Modified: cedar-backup2/trunk/CedarBackup2/release.py
===================================================================
--- cedar-backup2/trunk/CedarBackup2/release.py 2013-03-21 14:33:51 UTC (rev 1028)
+++ cedar-backup2/trunk/CedarBackup2/release.py 2013-03-21 14:38:17 UTC (rev 1029)
@@ -33,8 +33,8 @@
AUTHOR = "Kenneth J. Pronovici"
EMAIL = "pro...@ie..."
-COPYRIGHT = "2004-2011"
-VERSION = "2.21.0"
-DATE = "12 Oct 2011"
+COPYRIGHT = "2004-2011,2013"
+VERSION = "2.21.1"
+DATE = "21 Mar 2013"
URL = "http://cedar-backup.sourceforge.net/"
Modified: cedar-backup2/trunk/Changelog
===================================================================
--- cedar-backup2/trunk/Changelog 2013-03-21 14:33:51 UTC (rev 1028)
+++ cedar-backup2/trunk/Changelog 2013-03-21 14:38:17 UTC (rev 1029)
@@ -1,4 +1,4 @@
-Version 2.21.1 unreleased
+Version 2.21.1 21 Mar 2013
* Apply patches provided by Jan Medlock as Debian bugs.
* Fix typo in manpage (showed -s instead of -D)
From: <pro...@us...> - 2013-05-10 02:05:17
Revision: 1041
http://sourceforge.net/p/cedar-backup/code/1041
Author: pronovic
Date: 2013-05-10 02:05:13 +0000 (Fri, 10 May 2013)
Log Message:
-----------
Eject-related kludges
Modified Paths:
--------------
cedar-backup2/trunk/CedarBackup2/actions/util.py
cedar-backup2/trunk/CedarBackup2/config.py
cedar-backup2/trunk/CedarBackup2/writers/cdwriter.py
cedar-backup2/trunk/CedarBackup2/writers/dvdwriter.py
cedar-backup2/trunk/Changelog
cedar-backup2/trunk/testcase/configtests.py
cedar-backup2/trunk/testcase/data/cback.conf.12
Modified: cedar-backup2/trunk/CedarBackup2/actions/util.py
===================================================================
--- cedar-backup2/trunk/CedarBackup2/actions/util.py 2013-05-10 02:01:14 UTC (rev 1040)
+++ cedar-backup2/trunk/CedarBackup2/actions/util.py 2013-05-10 02:05:13 UTC (rev 1041)
@@ -148,14 +148,15 @@
driveSpeed = config.store.driveSpeed
noEject = config.store.noEject
refreshMediaDelay = config.store.refreshMediaDelay
+ ejectDelay = config.store.ejectDelay
deviceType = _getDeviceType(config)
mediaType = _getMediaType(config)
if deviceMounted(devicePath):
raise IOError("Device [%s] is currently mounted." % (devicePath))
if deviceType == "cdwriter":
- return CdWriter(devicePath, deviceScsiId, driveSpeed, mediaType, noEject, refreshMediaDelay)
+ return CdWriter(devicePath, deviceScsiId, driveSpeed, mediaType, noEject, refreshMediaDelay, ejectDelay)
elif deviceType == "dvdwriter":
- return DvdWriter(devicePath, deviceScsiId, driveSpeed, mediaType, noEject, refreshMediaDelay)
+ return DvdWriter(devicePath, deviceScsiId, driveSpeed, mediaType, noEject, refreshMediaDelay, ejectDelay)
else:
raise ValueError("Device type [%s] is invalid." % deviceType)
@@ -270,6 +271,7 @@
raise ValueError("Only rewritable media types can be initialized.")
mediaLabel = buildMediaLabel()
writer = createWriter(config)
+ writer.refreshMedia()
writer.initializeImage(True, config.options.workingDir, mediaLabel) # always create a new disc
tempdir = tempfile.mkdtemp(dir=config.options.workingDir)
try:
Modified: cedar-backup2/trunk/CedarBackup2/config.py
===================================================================
--- cedar-backup2/trunk/CedarBackup2/config.py 2013-05-10 02:01:14 UTC (rev 1040)
+++ cedar-backup2/trunk/CedarBackup2/config.py 2013-05-10 02:05:13 UTC (rev 1041)
@@ -3410,6 +3410,7 @@
- The drive speed must be an integer >= 1
- The blanking behavior must be a C{BlankBehavior} object
- The refresh media delay must be an integer >= 0
+ - The eject delay must be an integer >= 0
Note that although the blanking factor must be a positive floating point
number, it is stored as a string. This is done so that we can losslessly go
@@ -3418,13 +3419,14 @@
@sort: __init__, __repr__, __str__, __cmp__, sourceDir,
mediaType, deviceType, devicePath, deviceScsiId,
driveSpeed, checkData, checkMedia, warnMidnite, noEject,
- blankBehavior, refreshMediaDelay
+ blankBehavior, refreshMediaDelay, ejectDelay
"""
def __init__(self, sourceDir=None, mediaType=None, deviceType=None,
devicePath=None, deviceScsiId=None, driveSpeed=None,
checkData=False, warnMidnite=False, noEject=False,
- checkMedia=False, blankBehavior=None, refreshMediaDelay=None):
+ checkMedia=False, blankBehavior=None, refreshMediaDelay=None,
+ ejectDelay=None):
"""
Constructor for the C{StoreConfig} class.
@@ -3440,6 +3442,7 @@
@param noEject: Indicates that the writer device should not be ejected.
@param blankBehavior: Controls optimized blanking behavior.
@param refreshMediaDelay: Delay, in seconds, to add after refreshing media
+ @param ejectDelay: Delay, in seconds, to add after ejecting media before closing the tray
@raise ValueError: If one of the values is invalid.
"""
@@ -3455,6 +3458,7 @@
self._noEject = None
self._blankBehavior = None
self._refreshMediaDelay = None
+ self._ejectDelay = None
self.sourceDir = sourceDir
self.mediaType = mediaType
self.deviceType = deviceType
@@ -3467,16 +3471,18 @@
self.noEject = noEject
self.blankBehavior = blankBehavior
self.refreshMediaDelay = refreshMediaDelay
+ self.ejectDelay = ejectDelay
def __repr__(self):
"""
Official string representation for class instance.
"""
- return "StoreConfig(%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)" % (
+ return "StoreConfig(%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)" % (
self.sourceDir, self.mediaType, self.deviceType,
self.devicePath, self.deviceScsiId, self.driveSpeed,
self.checkData, self.warnMidnite, self.noEject,
- self.checkMedia, self.blankBehavior, self.refreshMediaDelay)
+ self.checkMedia, self.blankBehavior, self.refreshMediaDelay,
+ self.ejectDelay)
def __str__(self):
"""
@@ -3552,6 +3558,11 @@
return -1
else:
return 1
+ if self.ejectDelay != other.ejectDelay:
+ if self.ejectDelay < other.ejectDelay:
+ return -1
+ else:
+ return 1
return 0
def _setSourceDir(self, value):
@@ -3765,6 +3776,31 @@
"""
return self._refreshMediaDelay
+ def _setEjectDelay(self, value):
+ """
+ Property target used to set the ejectDelay.
+ The value must be an integer >= 0.
+ @raise ValueError: If the value is not valid.
+ """
+ if value is None:
+ self._ejectDelay = None
+ else:
+ try:
+ value = int(value)
+ except TypeError:
+ raise ValueError("Action ejectDelay value must be an integer >= 0.")
+ if value < 0:
+ raise ValueError("Action ejectDelay value must be an integer >= 0.")
+ if value == 0:
+ value = None # normalize this out, since it's the default
+ self._ejectDelay = value
+
+ def _getEjectDelay(self):
+ """
+ Property target used to get the action ejectDelay.
+ """
+ return self._ejectDelay
+
sourceDir = property(_getSourceDir, _setSourceDir, None, "Directory whose contents should be written to media.")
mediaType = property(_getMediaType, _setMediaType, None, "Type of the media (see notes above).")
deviceType = property(_getDeviceType, _setDeviceType, None, "Type of the device (optional, see notes above).")
@@ -3777,6 +3813,7 @@
noEject = property(_getNoEject, _setNoEject, None, "Indicates that the writer device should not be ejected.")
blankBehavior = property(_getBlankBehavior, _setBlankBehavior, None, "Controls optimized blanking behavior.")
refreshMediaDelay = property(_getRefreshMediaDelay, _setRefreshMediaDelay, None, "Delay, in seconds, to add after refreshing media.")
+ ejectDelay = property(_getEjectDelay, _setEjectDelay, None, "Delay, in seconds, to add after ejecting media before closing the tray")
########################################################################
@@ -4587,6 +4624,7 @@
store.noEject = readBoolean(sectionNode, "no_eject")
store.blankBehavior = Config._parseBlankBehavior(sectionNode)
store.refreshMediaDelay = readInteger(sectionNode, "refresh_media_delay")
+ store.ejectDelay = readInteger(sectionNode, "eject_delay")
return store
@staticmethod
@@ -5234,6 +5272,8 @@
checkMedia //cb_config/store/check_media
warnMidnite //cb_config/store/warn_midnite
noEject //cb_config/store/no_eject
+ refreshMediaDelay //cb_config/store/refresh_media_delay
+ ejectDelay //cb_config/store/eject_delay
Blanking behavior configuration is added by the L{_addBlankBehavior}
method.
@@ -5257,6 +5297,7 @@
addBooleanNode(xmlDom, sectionNode, "warn_midnite", storeConfig.warnMidnite)
addBooleanNode(xmlDom, sectionNode, "no_eject", storeConfig.noEject)
addIntegerNode(xmlDom, sectionNode, "refresh_media_delay", storeConfig.refreshMediaDelay)
+ addIntegerNode(xmlDom, sectionNode, "eject_delay", storeConfig.ejectDelay)
Config._addBlankBehavior(xmlDom, sectionNode, storeConfig.blankBehavior)
@staticmethod
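[Editor's note: the new `_setEjectDelay` property target above follows a consistent validation pattern: strings are coerced to int, negatives are rejected, and 0 is normalized to None because "no delay" is the default. A standalone sketch of that logic (the function name is hypothetical; the real code lives in the `StoreConfig` property setter):

```python
def normalize_eject_delay(value):
    """Validate an eject delay the way the new StoreConfig setter does:
    accept None or anything coercible to a non-negative integer, and
    normalize 0 to None, since no delay is the default behavior."""
    if value is None:
        return None
    try:
        value = int(value)
    except (TypeError, ValueError):
        raise ValueError("Action ejectDelay value must be an integer >= 0.")
    if value < 0:
        raise ValueError("Action ejectDelay value must be an integer >= 0.")
    return None if value == 0 else value
```

This matches the behavior exercised by testConstructor_044 through 046 in the configtests.py hunk later in this commit: 4 stays 4, "12" becomes 12, "0" and 0 become None, and "blech" raises ValueError.]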
Modified: cedar-backup2/trunk/CedarBackup2/writers/cdwriter.py
===================================================================
--- cedar-backup2/trunk/CedarBackup2/writers/cdwriter.py 2013-05-10 02:01:14 UTC (rev 1040)
+++ cedar-backup2/trunk/CedarBackup2/writers/cdwriter.py 2013-05-10 02:05:13 UTC (rev 1041)
@@ -414,7 +414,7 @@
def __init__(self, device, scsiId=None, driveSpeed=None,
mediaType=MEDIA_CDRW_74, noEject=False,
- refreshMediaDelay=0, unittest=False):
+ refreshMediaDelay=0, ejectDelay=0, unittest=False):
"""
Initializes a CD writer object.
@@ -458,6 +458,9 @@
@param refreshMediaDelay: Refresh media delay to use, if any
@type refreshMediaDelay: Number of seconds, an integer >= 0
+ @param ejectDelay: Eject delay to use, if any
+ @type ejectDelay: Number of seconds, an integer >= 0
+
@param unittest: Turns off certain validations, for use in unit testing.
@type unittest: Boolean true/false
@@ -473,6 +476,7 @@
self._media = MediaDefinition(mediaType)
self._noEject = noEject
self._refreshMediaDelay = refreshMediaDelay
+ self._ejectDelay = ejectDelay
if not unittest:
(self._deviceType,
self._deviceVendor,
@@ -567,6 +571,12 @@
"""
return self._refreshMediaDelay
+ def _getEjectDelay(self):
+ """
+ Property target used to get the configured eject delay, in seconds.
+ """
+ return self._ejectDelay
+
device = property(_getDevice, None, None, doc="Filesystem device name for this writer.")
scsiId = property(_getScsiId, None, None, doc="SCSI id for the device, in the form C{[<method>:]scsibus,target,lun}.")
hardwareId = property(_getHardwareId, None, None, doc="Hardware id for this writer, either SCSI id or device path.")
@@ -580,6 +590,7 @@
deviceHasTray = property(_getDeviceHasTray, None, None, doc="Indicates whether the device has a media tray.")
deviceCanEject = property(_getDeviceCanEject, None, None, doc="Indicates whether the device supports ejecting its media.")
refreshMediaDelay = property(_getRefreshMediaDelay, None, None, doc="Refresh media delay, in seconds.")
+ ejectDelay = property(_getEjectDelay, None, None, doc="Eject delay, in seconds.")
#################################################
@@ -808,16 +819,48 @@
If the writer was constructed with C{noEject=True}, then this is a no-op.
+ Starting with Debian wheezy on my backup hardware, I started seeing
+ consistent problems with the eject command. I couldn't tell whether
+ these problems were due to the device management system or to the new
+ kernel (3.2.0). Initially, I saw simple eject failures, possibly because
+ I was opening and closing the tray too quickly. I worked around that
+ behavior with the new ejectDelay flag.
+
+ Later, I sometimes ran into issues after writing an image to a disc:
+ eject would give errors like "unable to eject, last error: Inappropriate
+ ioctl for device". Various sources online (like Ubuntu bug #875543)
+ suggested that the drive was being locked somehow, and that the
+ workaround was to run 'eject -i off' to unlock it. Sure enough, that
+ fixed the problem for me, so now it's a normal error-handling strategy.
+
@raise IOError: If there is an error talking to the device.
"""
if not self._noEject:
if self._deviceHasTray and self._deviceCanEject:
args = CdWriter._buildOpenTrayArgs(self._device)
- command = resolveCommand(EJECT_COMMAND)
- result = executeCommand(command, args)[0]
+ result = executeCommand(EJECT_COMMAND, args)[0]
if result != 0:
- raise IOError("Error (%d) executing eject command to open tray." % result)
+ logger.debug("Eject failed; attempting kludge of unlocking the tray before retrying.")
+ self.unlockTray()
+ result = executeCommand(EJECT_COMMAND, args)[0]
+ if result != 0:
+ raise IOError("Error (%d) executing eject command to open tray (failed even after unlocking tray)." % result)
+ logger.debug("Kludge was apparently successful.")
+ if self.ejectDelay is not None:
+ logger.debug("Per configuration, sleeping %d seconds after opening tray." % self.ejectDelay)
+ time.sleep(self.ejectDelay)
+ def unlockTray(self):
+ """
+ Unlocks the device's tray.
+ @raise IOError: If there is an error talking to the device.
+ """
+ args = CdWriter._buildUnlockTrayArgs(self._device)
+ command = resolveCommand(EJECT_COMMAND)
+ result = executeCommand(command, args)[0]
+ if result != 0:
+ raise IOError("Error (%d) executing eject command to unlock tray." % result)
+
def closeTray(self):
"""
Closes the device's tray.
@@ -847,24 +890,25 @@
Sometimes, a device gets confused about the state of its media. Often,
all it takes to solve the problem is to eject the media and then
- immediately reload it. (There is also a configurable refresh media delay
- which can be applied after the tray is closed, for situations where this
- makes a difference.)
+ immediately reload it. (There are also configurable eject and refresh
+ media delays which can be applied, for situations where this makes a
+ difference.)
This only works if the device has a tray and supports ejecting its media.
We have no way to know if the tray is currently open or closed, so we
just send the appropriate command and hope for the best. If the device
does not have a tray or does not support ejecting its media, then we do
- nothing. The configured delay still applies, though.
+ nothing. The configured delays still apply, though.
@raise IOError: If there is an error talking to the device.
"""
self.openTray()
self.closeTray()
+ self.unlockTray() # on some systems, writing a disc leaves the tray locked, yikes!
if self.refreshMediaDelay is not None:
logger.debug("Per configuration, sleeping %d seconds to stabilize media state." % self.refreshMediaDelay)
time.sleep(self.refreshMediaDelay)
- logger.debug("Sleep is complete; hopefully media state is stable now.")
+ logger.debug("Media refresh complete; hopefully media state is stable now.")
def writeImage(self, imagePath=None, newDisc=False, writeMulti=True):
"""
@@ -1127,6 +1171,23 @@
return args
@staticmethod
+ def _buildUnlockTrayArgs(device):
+ """
+ Builds a list of arguments to be passed to a C{eject} command.
+
+ The arguments will cause the C{eject} command to unlock the tray.
+
+ @param device: Filesystem device name for this writer, i.e. C{/dev/cdrw}.
+
+ @return: List suitable for passing to L{util.executeCommand} as C{args}.
+ """
+ args = []
+ args.append("-i")
+ args.append("off")
+ args.append(device)
+ return args
+
+ @staticmethod
def _buildCloseTrayArgs(device):
"""
Builds a list of arguments to be passed to a C{eject} command.
Modified: cedar-backup2/trunk/CedarBackup2/writers/dvdwriter.py
===================================================================
--- cedar-backup2/trunk/CedarBackup2/writers/dvdwriter.py 2013-05-10 02:01:14 UTC (rev 1040)
+++ cedar-backup2/trunk/CedarBackup2/writers/dvdwriter.py 2013-05-10 02:05:13 UTC (rev 1041)
@@ -362,7 +362,7 @@
def __init__(self, device, scsiId=None, driveSpeed=None,
mediaType=MEDIA_DVDPLUSRW, noEject=False,
- refreshMediaDelay=0, unittest=False):
+ refreshMediaDelay=0, ejectDelay=0, unittest=False):
"""
Initializes a DVD writer object.
@@ -398,6 +398,9 @@
@param refreshMediaDelay: Refresh media delay to use, if any
@type refreshMediaDelay: Number of seconds, an integer >= 0
+ @param ejectDelay: Eject delay to use, if any
+ @type ejectDelay: Number of seconds, an integer >= 0
+
@param unittest: Turns off certain validations, for use in unit testing.
@type unittest: Boolean true/false
@@ -413,6 +416,7 @@
self._driveSpeed = validateDriveSpeed(driveSpeed)
self._media = MediaDefinition(mediaType)
self._refreshMediaDelay = refreshMediaDelay
+ self._ejectDelay = ejectDelay
if noEject:
self._deviceHasTray = False
self._deviceCanEject = False
@@ -473,6 +477,12 @@
"""
return self._refreshMediaDelay
+ def _getEjectDelay(self):
+ """
+ Property target used to get the configured eject delay, in seconds.
+ """
+ return self._ejectDelay
+
device = property(_getDevice, None, None, doc="Filesystem device name for this writer.")
scsiId = property(_getScsiId, None, None, doc="SCSI id for the device (saved for reference only).")
hardwareId = property(_getHardwareId, None, None, doc="Hardware id for this writer (always the device path).")
@@ -481,6 +491,7 @@
deviceHasTray = property(_getDeviceHasTray, None, None, doc="Indicates whether the device has a media tray.")
deviceCanEject = property(_getDeviceCanEject, None, None, doc="Indicates whether the device supports ejecting its media.")
refreshMediaDelay = property(_getRefreshMediaDelay, None, None, doc="Refresh media delay, in seconds.")
+ ejectDelay = property(_getEjectDelay, None, None, doc="Eject delay, in seconds.")
#################################################
@@ -612,6 +623,20 @@
does not have a tray or does not support ejecting its media, then we do
nothing.
+ Starting with Debian wheezy on my backup hardware, I started seeing
+ consistent problems with the eject command. I couldn't tell whether
+ these problems were due to the device management system or to the new
+ kernel (3.2.0). Initially, I saw simple eject failures, possibly because
+ I was opening and closing the tray too quickly. I worked around that
+ behavior with the new ejectDelay flag.
+
+ Later, I sometimes ran into issues after writing an image to a disc:
+ eject would give errors like "unable to eject, last error: Inappropriate
+ ioctl for device". Various sources online (like Ubuntu bug #875543)
+ suggested that the drive was being locked somehow, and that the
+ workaround was to run 'eject -i off' to unlock it. Sure enough, that
+ fixed the problem for me, so now it's a normal error-handling strategy.
+
@raise IOError: If there is an error talking to the device.
"""
if self._deviceHasTray and self._deviceCanEject:
@@ -619,8 +644,27 @@
args = [ self.device, ]
result = executeCommand(command, args)[0]
if result != 0:
- raise IOError("Error (%d) executing eject command to open tray." % result)
+ logger.debug("Eject failed; attempting kludge of unlocking the tray before retrying.")
+ self.unlockTray()
+ result = executeCommand(command, args)[0]
+ if result != 0:
+ raise IOError("Error (%d) executing eject command to open tray (failed even after unlocking tray)." % result)
+ logger.debug("Kludge was apparently successful.")
+ if self.ejectDelay is not None:
+ logger.debug("Per configuration, sleeping %d seconds after opening tray." % self.ejectDelay)
+ time.sleep(self.ejectDelay)
+ def unlockTray(self):
+ """
+ Unlocks the device's tray via 'eject -i off'.
+ @raise IOError: If there is an error talking to the device.
+ """
+ command = resolveCommand(EJECT_COMMAND)
+ args = [ "-i", "off", self.device, ]
+ result = executeCommand(command, args)[0]
+ if result != 0:
+ raise IOError("Error (%d) executing eject command to unlock tray." % result)
+
def closeTray(self):
"""
Closes the device's tray.
@@ -647,24 +691,25 @@
Sometimes, a device gets confused about the state of its media. Often,
all it takes to solve the problem is to eject the media and then
- immediately reload it. (There is also a configurable refresh media delay
- which can be applied after the tray is closed, for situations where this
- makes a difference.)
+ immediately reload it. (There are also configurable eject and refresh
+ media delays which can be applied, for situations where this makes a
+ difference.)
This only works if the device has a tray and supports ejecting its media.
We have no way to know if the tray is currently open or closed, so we
just send the appropriate command and hope for the best. If the device
does not have a tray or does not support ejecting its media, then we do
- nothing. The configured delay still applies, though.
+ nothing. The configured delays still apply, though.
@raise IOError: If there is an error talking to the device.
"""
self.openTray()
self.closeTray()
+ self.unlockTray() # on some systems, writing a disc leaves the tray locked, yikes!
if self.refreshMediaDelay is not None:
logger.debug("Per configuration, sleeping %d seconds to stabilize media state." % self.refreshMediaDelay)
time.sleep(self.refreshMediaDelay)
- logger.debug("Sleep is complete; hopefully media state is stable now.")
+ logger.debug("Media refresh complete; hopefully media state is stable now.")
def writeImage(self, imagePath=None, newDisc=False, writeMulti=True):
"""
Modified: cedar-backup2/trunk/Changelog
===================================================================
--- cedar-backup2/trunk/Changelog 2013-05-10 02:01:14 UTC (rev 1040)
+++ cedar-backup2/trunk/Changelog 2013-05-10 02:05:13 UTC (rev 1041)
@@ -1,3 +1,8 @@
+Version 2.22.0 unreleased
+
+ * Add eject-related kludges to work around observed behavior.
+
+
Version 2.21.1 21 Mar 2013
* Apply patches provided by Jan Medlock as Debian bugs.
Modified: cedar-backup2/trunk/testcase/configtests.py
===================================================================
--- cedar-backup2/trunk/testcase/configtests.py 2013-05-10 02:01:14 UTC (rev 1040)
+++ cedar-backup2/trunk/testcase/configtests.py 2013-05-10 02:05:13 UTC (rev 1041)
@@ -7851,13 +7851,14 @@
self.failUnlessEqual(False, store.noEject)
self.failUnlessEqual(None, store.blankBehavior)
self.failUnlessEqual(None, store.refreshMediaDelay)
+ self.failUnlessEqual(None, store.ejectDelay)
def testConstructor_002(self):
"""
Test constructor with all values filled in, with valid values.
"""
behavior = BlankBehavior("weekly", "1.3")
- store = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior, 12)
+ store = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior, 12, 13)
self.failUnlessEqual("/source", store.sourceDir)
self.failUnlessEqual("cdr-74", store.mediaType)
self.failUnlessEqual("cdwriter", store.deviceType)
@@ -7870,6 +7871,7 @@
self.failUnlessEqual(True, store.noEject)
self.failUnlessEqual(behavior, store.blankBehavior)
self.failUnlessEqual(12, store.refreshMediaDelay)
+ self.failUnlessEqual(13, store.ejectDelay)
def testConstructor_003(self):
"""
@@ -8306,8 +8308,42 @@
self.failUnlessAssignRaises(ValueError, store, "refreshMediaDelay", CollectDir())
self.failUnlessEqual(None, store.refreshMediaDelay)
+ def testConstructor_044(self):
+ """
+ Test assignment of ejectDelay attribute, None value.
+ """
+ store = StoreConfig(ejectDelay=4)
+ self.failUnlessEqual(4, store.ejectDelay)
+ store.ejectDelay = None
+ self.failUnlessEqual(None, store.ejectDelay)
+ def testConstructor_045(self):
+ """
+ Test assignment of ejectDelay attribute, valid value.
+ """
+ store = StoreConfig()
+ self.failUnlessEqual(None, store.ejectDelay)
+ store.ejectDelay = 4
+ self.failUnlessEqual(4, store.ejectDelay)
+ store.ejectDelay = "12"
+ self.failUnlessEqual(12, store.ejectDelay)
+ store.ejectDelay = "0"
+ self.failUnlessEqual(None, store.ejectDelay)
+ store.ejectDelay = 0
+ self.failUnlessEqual(None, store.ejectDelay)
+ def testConstructor_046(self):
+ """
+ Test assignment of ejectDelay attribute, invalid value (not an integer).
+ """
+ store = StoreConfig()
+ self.failUnlessEqual(None, store.ejectDelay)
+ self.failUnlessAssignRaises(ValueError, store, "ejectDelay", "blech")
+ self.failUnlessEqual(None, store.ejectDelay)
+ self.failUnlessAssignRaises(ValueError, store, "ejectDelay", CollectDir())
+ self.failUnlessEqual(None, store.ejectDelay)
+
+
############################
# Test comparison operators
############################
@@ -8332,8 +8368,8 @@
"""
behavior1 = BlankBehavior("weekly", "1.3")
behavior2 = BlankBehavior("weekly", "1.3")
- store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior1, 4)
- store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior2, 4)
+ store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior1, 4, 5)
+ store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior2, 4, 5)
self.failUnlessEqual(store1, store2)
self.failUnless(store1 == store2)
self.failUnless(not store1 < store2)
@@ -8362,8 +8398,8 @@
"""
behavior1 = BlankBehavior("weekly", "1.3")
behavior2 = BlankBehavior("weekly", "1.3")
- store1 = StoreConfig("/source1", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior1, 4)
- store2 = StoreConfig("/source2", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior2, 4)
+ store1 = StoreConfig("/source1", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior1, 4, 5)
+ store2 = StoreConfig("/source2", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior2, 4, 5)
self.failIfEqual(store1, store2)
self.failUnless(not store1 == store2)
self.failUnless(store1 < store2)
@@ -8392,8 +8428,8 @@
"""
behavior1 = BlankBehavior("weekly", "1.3")
behavior2 = BlankBehavior("weekly", "1.3")
- store1 = StoreConfig("/source", "cdrw-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior1, 4)
- store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior2, 4)
+ store1 = StoreConfig("/source", "cdrw-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior1, 4, 5)
+ store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior2, 4, 5)
self.failIfEqual(store1, store2)
self.failUnless(not store1 == store2)
self.failUnless(not store1 < store2)
@@ -8436,8 +8472,8 @@
"""
behavior1 = BlankBehavior("weekly", "1.3")
behavior2 = BlankBehavior("weekly", "1.3")
- store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior1, 4)
- store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/hdd", "0,0,0", 4, True, True, True, True, behavior2, 4)
+ store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior1, 4, 5)
+ store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/hdd", "0,0,0", 4, True, True, True, True, behavior2, 4, 5)
self.failIfEqual(store1, store2)
self.failUnless(not store1 == store2)
self.failUnless(store1 < store2)
@@ -8466,8 +8502,8 @@
"""
behavior1 = BlankBehavior("weekly", "1.3")
behavior2 = BlankBehavior("weekly", "1.3")
- store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior1, 4)
- store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "ATA:0,0,0", 4, True, True, True, True, behavior2, 4)
+ store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior1, 4, 5)
+ store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "ATA:0,0,0", 4, True, True, True, True, behavior2, 4, 5)
self.failIfEqual(store1, store2)
self.failUnless(not store1 == store2)
self.failUnless(store1 < store2)
@@ -8496,8 +8532,8 @@
"""
behavior1 = BlankBehavior("weekly", "1.3")
behavior2 = BlankBehavior("weekly", "1.3")
- store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 1, True, True, True, True, behavior1, 4)
- store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior2, 4)
+ store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 1, True, True, True, True, behavior1, 4, 5)
+ store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior2, 4, 5)
...
[truncated message content] |
|
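The new `ejectDelay` tests in the truncated diff above pin down some non-obvious coercion rules: integer strings like `"12"` are accepted and converted, while both `0` and `"0"` collapse back to `None`, and non-numeric values raise `ValueError`. A minimal, hypothetical sketch of the property-setter pattern those tests imply (`StoreConfigSketch` is an illustration only, not the real `StoreConfig` from `CedarBackup2/config.py`, which validates many more fields):

```python
class StoreConfigSketch(object):
   """
   Hypothetical, simplified stand-in for StoreConfig, showing only the
   ejectDelay coercion behavior that the new unit tests exercise.
   """

   def __init__(self, ejectDelay=None):
      self._ejectDelay = None
      self.ejectDelay = ejectDelay

   def _setEjectDelay(self, value):
      """Accept None, integers, or integer strings; treat zero as None."""
      if value is None:
         self._ejectDelay = None
         return
      try:
         value = int(value)   # "12" becomes 12; "blech" raises ValueError
      except (TypeError, ValueError):
         raise ValueError("Eject delay must be an integer, if set.")
      if value < 0:
         raise ValueError("Eject delay must be non-negative.")
      self._ejectDelay = value if value > 0 else None   # 0 collapses to None

   def _getEjectDelay(self):
      return self._ejectDelay

   ejectDelay = property(_getEjectDelay, _setEjectDelay, None, "Eject delay in seconds.")
```

The zero-collapses-to-None choice matches testConstructor_045, which asserts that assigning either `0` or `"0"` leaves the attribute unset.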
From: <pro...@us...> - 2013-05-10 02:16:14
|
Revision: 1044
http://sourceforge.net/p/cedar-backup/code/1044
Author: pronovic
Date: 2013-05-10 02:16:12 +0000 (Fri, 10 May 2013)
Log Message:
-----------
Prepare to release 2.22.0
Modified Paths:
--------------
cedar-backup2/trunk/CedarBackup2/release.py
cedar-backup2/trunk/Changelog
Modified: cedar-backup2/trunk/CedarBackup2/release.py
===================================================================
--- cedar-backup2/trunk/CedarBackup2/release.py 2013-05-10 02:10:25 UTC (rev 1043)
+++ cedar-backup2/trunk/CedarBackup2/release.py 2013-05-10 02:16:12 UTC (rev 1044)
@@ -34,7 +34,7 @@
AUTHOR = "Kenneth J. Pronovici"
EMAIL = "pro...@ie..."
COPYRIGHT = "2004-2011,2013"
-VERSION = "2.21.1"
-DATE = "21 Mar 2013"
+VERSION = "2.22.0"
+DATE = "09 May 2013"
URL = "http://cedar-backup.sourceforge.net/"
Modified: cedar-backup2/trunk/Changelog
===================================================================
--- cedar-backup2/trunk/Changelog 2013-05-10 02:10:25 UTC (rev 1043)
+++ cedar-backup2/trunk/Changelog 2013-05-10 02:16:12 UTC (rev 1044)
@@ -1,8 +1,9 @@
-Version 2.22.0 unreleased
+Version 2.22.0 09 May 2013
* Add eject-related kludges to work around observed behavior.
+ * New config option eject_delay, to slow down open/close
+ * Unlock tray with 'eject -i off' to handle potential problems
-
Version 2.21.1 21 Mar 2013
* Apply patches provided by Jan Medlock as Debian bugs.
This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site.
|
|
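The 2.22.0 changelog entries above name two eject-related kludges: a new `eject_delay` option to slow down tray open/close, and unlocking the tray with `eject -i off`. A hypothetical sketch of how such a work-around might be structured (the `runner` parameter is an injection point so the sketch can run without real hardware; the real code goes through Cedar Backup's `executeCommand()` instead):

```python
import subprocess
import time

def ejectWithKludges(device, ejectDelay=None, runner=subprocess.call):
   """
   Illustrative eject work-around: unlock the tray first via 'eject -i off',
   then honor an optional delay so slow drives have time to settle before
   the tray is opened.  Returns nothing; failures surface via the runner.
   """
   runner(["eject", "-i", "off", device])   # disable the tray lock
   if ejectDelay:
      time.sleep(ejectDelay)                # slow down open/close
   runner(["eject", device])                # now open the tray
```

Injecting the command runner also makes the kludge easy to unit-test with a stub, in the same spirit as the project's other tests.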
From: <pro...@us...> - 2014-10-01 01:44:21
|
Revision: 1047
http://sourceforge.net/p/cedar-backup/code/1047
Author: pronovic
Date: 2014-10-01 01:44:17 +0000 (Wed, 01 Oct 2014)
Log Message:
-----------
Start implementing a new Amazon S3 extension
Modified Paths:
--------------
cedar-backup2/trunk/CREDITS
cedar-backup2/trunk/CedarBackup2/extend/__init__.py
cedar-backup2/trunk/CedarBackup2/release.py
cedar-backup2/trunk/Changelog
cedar-backup2/trunk/testcase/configtests.py
cedar-backup2/trunk/testcase/data/cback.conf.19
cedar-backup2/trunk/util/test.py
Added Paths:
-----------
cedar-backup2/trunk/CedarBackup2/extend/amazons3.py
cedar-backup2/trunk/testcase/amazons3tests.py
cedar-backup2/trunk/testcase/data/amazons3.conf.1
cedar-backup2/trunk/testcase/data/amazons3.conf.2
Modified: cedar-backup2/trunk/CREDITS
===================================================================
--- cedar-backup2/trunk/CREDITS 2014-10-01 01:42:33 UTC (rev 1046)
+++ cedar-backup2/trunk/CREDITS 2014-10-01 01:44:17 UTC (rev 1047)
@@ -22,10 +22,11 @@
Pronovici. Some portions have been based on other pieces of open-source
software, as indicated in the source code itself.
-Unless otherwise indicated, all Cedar Backup source code is Copyright (c)
-2004-2011,2013 Kenneth J. Pronovici and is released under the GNU General
-Public License, version 2. The contents of the GNU General Public License
-can be found in the LICENSE file, or can be downloaded from http://www.gnu.org/.
+Unless otherwise indicated, all Cedar Backup source code is Copyright
+(c) 2004-2011,2013,2014 Kenneth J. Pronovici and is released under the GNU
+General Public License, version 2. The contents of the GNU General Public
+License can be found in the LICENSE file, or can be downloaded from
+http://www.gnu.org/.
Various patches have been contributed to the Cedar Backup codebase by
Dmitry Rutsky. Major contributions include the initial implementation for
Modified: cedar-backup2/trunk/CedarBackup2/extend/__init__.py
===================================================================
--- cedar-backup2/trunk/CedarBackup2/extend/__init__.py 2014-10-01 01:42:33 UTC (rev 1046)
+++ cedar-backup2/trunk/CedarBackup2/extend/__init__.py 2014-10-01 01:44:17 UTC (rev 1047)
@@ -38,5 +38,5 @@
# Using 'from CedarBackup2.extend import *' will just import the modules listed
# in the __all__ variable.
-__all__ = [ 'encrypt', 'mbox', 'mysql', 'postgresql', 'split', 'subversion', 'sysinfo', ]
+__all__ = [ 'amazons3', 'encrypt', 'mbox', 'mysql', 'postgresql', 'split', 'subversion', 'sysinfo', ]
Added: cedar-backup2/trunk/CedarBackup2/extend/amazons3.py
===================================================================
--- cedar-backup2/trunk/CedarBackup2/extend/amazons3.py (rev 0)
+++ cedar-backup2/trunk/CedarBackup2/extend/amazons3.py 2014-10-01 01:44:17 UTC (rev 1047)
@@ -0,0 +1,467 @@
+# -*- coding: iso-8859-1 -*-
+# vim: set ft=python ts=3 sw=3 expandtab:
+# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+#
+# C E D A R
+# S O L U T I O N S "Software done right."
+# S O F T W A R E
+#
+# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+#
+# Copyright (c) 2014 Kenneth J. Pronovici.
+# All rights reserved.
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License,
+# Version 2, as published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+#
+# Copies of the GNU General Public License are available from
+# the Free Software Foundation website, http://www.gnu.org/.
+#
+# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+#
+# Author : Kenneth J. Pronovici <pro...@ie...>
+# Language : Python (>= 2.5)
+# Project : Official Cedar Backup Extensions
+# Revision : $Id$
+# Purpose : "Store" type extension that writes data to Amazon S3.
+#
+# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+
+########################################################################
+# Module documentation
+########################################################################
+
+"""
+Store-type extension that writes data to Amazon S3.
+
+This extension requires a new configuration section <amazons3> and is intended
+to be run immediately after the standard stage action, replacing the standard
+store action. Aside from its own configuration, it requires the options and
+staging configuration sections in the standard Cedar Backup configuration file.
+
+This extension relies on the U{Amazon S3 Tools<http://s3tools.org/>} package.
+It is a very thin wrapper around the C{s3cmd put} command. Before you use this
+extension, you need to set up your Amazon S3 account and configure C{s3cmd} as
+detailed in the U{HOWTO<http://s3tools.org/s3cmd-howto>}. The configured
+backup user will run the C{s3cmd} program, so make sure you configure S3 Tools
+as that user, and not as root.
+
+It's up to you how to configure the S3 Tools connection to Amazon, but I
+recommend that you configure GPG encryption using a strong passphrase. One way
+to generate a strong passphrase is to use your random number generator, i.e.
+C{dd if=/dev/urandom count=20 bs=1 | xxd -ps}. (See U{StackExchange
+<http://security.stackexchange.com/questions/14867/gpg-encryption-security>}
+for more details about that advice.) If you decide to use encryption, make sure
+you save off the passphrase in a safe place, so you can get at your backup data
+later if you need to.
+
+This extension was written for and tested on Linux. I do not expect it to
+work on non-UNIX platforms.
+
+@author: Kenneth J. Pronovici <pro...@ie...>
+"""
+
+########################################################################
+# Imported modules
+########################################################################
+
+# System modules
+import os
+import logging
+import tempfile
+
+# Cedar Backup modules
+from CedarBackup2.util import resolveCommand, executeCommand
+from CedarBackup2.xmlutil import createInputDom, addContainerNode, addStringNode
+from CedarBackup2.xmlutil import readFirstChild, readString
+from CedarBackup2.actions.util import findDailyDirs, writeIndicatorFile, getBackupFiles
+
+
+########################################################################
+# Module-wide constants and variables
+########################################################################
+
+logger = logging.getLogger("CedarBackup2.log.extend.amazons3")
+
+S3CMD_COMMAND = [ "s3cmd", ]
+STORE_INDICATOR = "cback.amazons3"
+
+
+########################################################################
+# AmazonS3Config class definition
+########################################################################
+
+class AmazonS3Config(object):
+
+ """
+ Class representing Amazon S3 configuration.
+
+ Amazon S3 configuration is used for storing staging directories
+ in Amazon's cloud storage using the C{s3cmd} tool.
+
+ The following restrictions exist on data in this class:
+
+ - The s3Bucket value must be a non-empty string
+
+ @sort: __init__, __repr__, __str__, __cmp__, s3Bucket
+ """
+
+ def __init__(self, s3Bucket=None):
+ """
+ Constructor for the C{AmazonS3Config} class.
+
+ @param s3Bucket: Name of the Amazon S3 bucket in which to store the data
+
+ @raise ValueError: If one of the values is invalid.
+ """
+ self._s3Bucket = None
+ self.s3Bucket = s3Bucket
+
+ def __repr__(self):
+ """
+ Official string representation for class instance.
+ """
+ return "AmazonS3Config(%s)" % (self.s3Bucket)
+
+ def __str__(self):
+ """
+ Informal string representation for class instance.
+ """
+ return self.__repr__()
+
+ def __cmp__(self, other):
+ """
+ Definition of equals operator for this class.
+ Lists within this class are "unordered" for equality comparisons.
+ @param other: Other object to compare to.
+ @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
+ """
+ if other is None:
+ return 1
+ if self.s3Bucket != other.s3Bucket:
+ if self.s3Bucket < other.s3Bucket:
+ return -1
+ else:
+ return 1
+ return 0
+
+ def _setS3Bucket(self, value):
+ """
+ Property target used to set the S3 bucket.
+ """
+ if value is not None:
+ if len(value) < 1:
+ raise ValueError("S3 bucket must be non-empty string.")
+ self._s3Bucket = value
+
+ def _getS3Bucket(self):
+ """
+ Property target used to get the S3 bucket.
+ """
+ return self._s3Bucket
+
+ s3Bucket = property(_getS3Bucket, _setS3Bucket, None, doc="Amazon S3 Bucket")
+
+
+########################################################################
+# LocalConfig class definition
+########################################################################
+
+class LocalConfig(object):
+
+ """
+ Class representing this extension's configuration document.
+
+ This is not a general-purpose configuration object like the main Cedar
+ Backup configuration object. Instead, it just knows how to parse and emit
+ amazons3-specific configuration values. Third parties who need to read and
+ write configuration related to this extension should access it through the
+ constructor, C{validate} and C{addConfig} methods.
+
+ @note: Lists within this class are "unordered" for equality comparisons.
+
+ @sort: __init__, __repr__, __str__, __cmp__, amazons3, validate, addConfig
+ """
+
+ def __init__(self, xmlData=None, xmlPath=None, validate=True):
+ """
+ Initializes a configuration object.
+
+ If you initialize the object without passing either C{xmlData} or
+ C{xmlPath} then configuration will be empty and will be invalid until it
+ is filled in properly.
+
+ No reference to the original XML data or original path is saved off by
+ this class. Once the data has been parsed (successfully or not) this
+ original information is discarded.
+
+ Unless the C{validate} argument is C{False}, the L{LocalConfig.validate}
+ method will be called (with its default arguments) against configuration
+ after successfully parsing any passed-in XML. Keep in mind that even if
+ C{validate} is C{False}, it might not be possible to parse the passed-in
+ XML document if lower-level validations fail.
+
+ @note: It is strongly suggested that the C{validate} option always be set
+ to C{True} (the default) unless there is a specific need to read in
+ invalid configuration from disk.
+
+ @param xmlData: XML data representing configuration.
+ @type xmlData: String data.
+
+ @param xmlPath: Path to an XML file on disk.
+ @type xmlPath: Absolute path to a file on disk.
+
+ @param validate: Validate the document after parsing it.
+ @type validate: Boolean true/false.
+
+ @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in.
+ @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed.
+ @raise ValueError: If the parsed configuration document is not valid.
+ """
+ self._amazons3 = None
+ self.amazons3 = None
+ if xmlData is not None and xmlPath is not None:
+ raise ValueError("Use either xmlData or xmlPath, but not both.")
+ if xmlData is not None:
+ self._parseXmlData(xmlData)
+ if validate:
+ self.validate()
+ elif xmlPath is not None:
+ xmlData = open(xmlPath).read()
+ self._parseXmlData(xmlData)
+ if validate:
+ self.validate()
+
+ def __repr__(self):
+ """
+ Official string representation for class instance.
+ """
+ return "LocalConfig(%s)" % (self.amazons3)
+
+ def __str__(self):
+ """
+ Informal string representation for class instance.
+ """
+ return self.__repr__()
+
+ def __cmp__(self, other):
+ """
+ Definition of equals operator for this class.
+ Lists within this class are "unordered" for equality comparisons.
+ @param other: Other object to compare to.
+ @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
+ """
+ if other is None:
+ return 1
+ if self.amazons3 != other.amazons3:
+ if self.amazons3 < other.amazons3:
+ return -1
+ else:
+ return 1
+ return 0
+
+ def _setAmazonS3(self, value):
+ """
+ Property target used to set the amazons3 configuration value.
+ If not C{None}, the value must be a C{AmazonS3Config} object.
+ @raise ValueError: If the value is not a C{AmazonS3Config}
+ """
+ if value is None:
+ self._amazons3 = None
+ else:
+ if not isinstance(value, AmazonS3Config):
+ raise ValueError("Value must be a C{AmazonS3Config} object.")
+ self._amazons3 = value
+
+ def _getAmazonS3(self):
+ """
+ Property target used to get the amazons3 configuration value.
+ """
+ return self._amazons3
+
+ amazons3 = property(_getAmazonS3, _setAmazonS3, None, "AmazonS3 configuration in terms of a C{AmazonS3Config} object.")
+
+ def validate(self):
+ """
+ Validates configuration represented by the object.
+
+      AmazonS3 configuration must be filled in. Within that, the s3Bucket target must be filled in.
+
+ @raise ValueError: If one of the validations fails.
+ """
+ if self.amazons3 is None:
+ raise ValueError("AmazonS3 section is required.")
+ if self.amazons3.s3Bucket is None:
+ raise ValueError("AmazonS3 s3Bucket must be set.")
+
+ def addConfig(self, xmlDom, parentNode):
+ """
+ Adds an <amazons3> configuration section as the next child of a parent.
+
+ Third parties should use this function to write configuration related to
+ this extension.
+
+ We add the following fields to the document::
+
+ s3Bucket //cb_config/amazons3/s3_bucket
+
+ @param xmlDom: DOM tree as from C{impl.createDocument()}.
+ @param parentNode: Parent that the section should be appended to.
+ """
+ if self.amazons3 is not None:
+ sectionNode = addContainerNode(xmlDom, parentNode, "amazons3")
+ addStringNode(xmlDom, sectionNode, "s3_bucket", self.amazons3.s3Bucket)
+
+ def _parseXmlData(self, xmlData):
+ """
+ Internal method to parse an XML string into the object.
+
+ This method parses the XML document into a DOM tree (C{xmlDom}) and then
+ calls a static method to parse the amazons3 configuration section.
+
+ @param xmlData: XML data to be parsed
+ @type xmlData: String data
+
+ @raise ValueError: If the XML cannot be successfully parsed.
+ """
+ (xmlDom, parentNode) = createInputDom(xmlData)
+ self._amazons3 = LocalConfig._parseAmazonS3(parentNode)
+
+ @staticmethod
+ def _parseAmazonS3(parent):
+ """
+ Parses an amazons3 configuration section.
+
+ We read the following individual fields::
+
+ s3Bucket //cb_config/amazons3/s3_bucket
+
+ @param parent: Parent node to search beneath.
+
+ @return: C{AmazonS3Config} object or C{None} if the section does not exist.
+ @raise ValueError: If some filled-in value is invalid.
+ """
+ amazons3 = None
+ section = readFirstChild(parent, "amazons3")
+ if section is not None:
+ amazons3 = AmazonS3Config()
+ amazons3.s3Bucket = readString(section, "s3_bucket")
+ return amazons3
+
+
+########################################################################
+# Public functions
+########################################################################
+
+###########################
+# executeAction() function
+###########################
+
+def executeAction(configPath, options, config):
+ """
+ Executes the amazons3 backup action.
+
+ @param configPath: Path to configuration file on disk.
+ @type configPath: String representing a path on disk.
+
+ @param options: Program command-line options.
+ @type options: Options object.
+
+ @param config: Program configuration.
+ @type config: Config object.
+
+ @raise ValueError: Under many generic error conditions
+ @raise IOError: If there are I/O problems reading or writing files
+ """
+ logger.debug("Executing amazons3 extended action.")
+ if config.options is None or config.stage is None:
+ raise ValueError("Cedar Backup configuration is not properly filled in.")
+ local = LocalConfig(xmlPath=configPath)
+ dailyDirs = findDailyDirs(config.stage.targetDir, STORE_INDICATOR)
+ for dailyDir in dailyDirs:
+      _storeDailyDir(config.stage.targetDir, dailyDir, local.amazons3.s3Bucket, config.options.backupUser, config.options.backupGroup)
+ writeIndicatorFile(dailyDir, STORE_INDICATOR, config.options.backupUser, config.options.backupGroup)
+ logger.info("Executed the amazons3 extended action successfully.")
+
+
+########################################################################
+# Utility functions
+########################################################################
+
+############################
+# _storeDailyDir() function
+############################
+
+def _storeDailyDir(stagingDir, dailyDir, s3Bucket, backupUser, backupGroup):
+ """
+ Store the contents of a daily staging directory to a bucket in the Amazon S3 cloud.
+ @param stagingDir: Configured staging directory (config.targetDir)
+ @param dailyDir: Daily directory to store in the cloud
+ @param s3Bucket: The Amazon S3 bucket to use as the target
+ @param backupUser: User that target files should be owned by
+ @param backupGroup: Group that target files should be owned by
+ """
+ s3BucketUrl = _deriveS3BucketUrl(stagingDir, dailyDir, s3Bucket)
+ _clearExistingBackup(s3BucketUrl)
+ _writeDailyDir(dailyDir, s3BucketUrl)
+
+
+################################
+# _deriveS3BucketUrl() function
+################################
+
+def _deriveS3BucketUrl(stagingDir, dailyDir, s3Bucket):
+ """
+ Derive the correct bucket URL for a daily directory.
+ @param stagingDir: Configured staging directory (config.targetDir)
+ @param dailyDir: Daily directory to store
+ @param s3Bucket: The Amazon S3 bucket to use as the target
+ @return: S3 bucket URL, with no trailing slash
+ """
+   subdir = dailyDir.replace(stagingDir, "", 1)
+   if subdir.startswith("/"):
+      subdir = subdir[1:]
+   return "s3://%s/staging/%s" % (s3Bucket, subdir)
+
+
+##################################
+# _clearExistingBackup() function
+##################################
+
+def _clearExistingBackup(s3BucketUrl):
+ """
+ Clear any existing backup files for a daily directory.
+ @param s3BucketUrl: S3 bucket URL derived for the daily directory
+ """
+ emptydir = tempfile.mkdtemp()
+ try:
+ command = resolveCommand(S3CMD_COMMAND)
+      args = [ "sync", "--no-encrypt", "--recursive", "--delete-removed", emptydir + "/", s3BucketUrl + "/", ]
+      result = executeCommand(command, args)[0]
+      if result != 0:
+         raise IOError("Error [%d] calling s3cmd to clear existing backup [%s]." % (result, s3BucketUrl))
+ finally:
+ os.rmdir(emptydir)
+
+
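The `_clearExistingBackup()` function above uses a neat trick worth calling out: syncing an *empty* local directory to the remote prefix with `--delete-removed` makes `s3cmd` delete everything under that prefix, effectively wiping the previous backup. A self-contained sketch of the pattern, with the command executor injected so it can be exercised without `s3cmd` installed (`clearRemotePrefix` and `executor` are hypothetical names for illustration):

```python
import os
import tempfile

def clearRemotePrefix(remoteUrl, executor):
   """
   Delete everything under a remote prefix by syncing an empty local
   directory against it with --delete-removed.  The executor callable
   stands in for executeCommand() and must return an exit status.
   """
   emptydir = tempfile.mkdtemp()   # guaranteed-empty scratch directory
   try:
      args = ["sync", "--no-encrypt", "--recursive", "--delete-removed",
              emptydir + "/", remoteUrl + "/"]
      if executor("s3cmd", args) != 0:
         raise IOError("Unable to clear remote prefix [%s]." % remoteUrl)
   finally:
      os.rmdir(emptydir)   # mkdtemp leaves the directory behind otherwise
```

The try/finally mirrors the extension's own cleanup: the scratch directory is removed even if the remote call fails.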
+############################
+# _writeDailyDir() function
+############################
+
+def _writeDailyDir(dailyDir, s3BucketUrl):
+ """
+ Write the daily directory out to the Amazon S3 cloud.
+ @param dailyDir: Daily directory to store
+ @param s3BucketUrl: S3 bucket URL derived for the daily directory
+ """
+ command = resolveCommand(S3CMD_COMMAND)
+ args = [ "put", "--recursive", dailyDir + "/", s3BucketUrl + "/", ]
+ result = executeCommand(command, args)[0]
+ if result != 0:
+      raise IOError("Error [%d] calling s3cmd to store daily directory [%s]." % (result, s3BucketUrl))
+
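The intended bucket-URL derivation is easy to illustrate on its own: strip the configured staging directory off the front of the daily directory and graft the remainder onto a `staging/` prefix in the bucket, with no trailing slash. A standalone sketch of that logic (`deriveS3BucketUrl` here is a hypothetical free function, not the module's private helper):

```python
def deriveS3BucketUrl(stagingDir, dailyDir, s3Bucket):
   """
   Derive the S3 URL for a daily staging directory.  For example, with
   stagingDir /opt/backup/staging and dailyDir /opt/backup/staging/2014/10/01,
   the result is s3://<bucket>/staging/2014/10/01.
   """
   subdir = dailyDir.replace(stagingDir, "", 1)   # drop the staging prefix
   if subdir.startswith("/"):
      subdir = subdir[1:]                         # normalize the leading slash
   return "s3://%s/staging/%s" % (s3Bucket, subdir)
```

Keeping the staging prefix configurable (rather than hardcoding a path) matters because `stage.targetDir` varies per installation.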
Property changes on: cedar-backup2/trunk/CedarBackup2/extend/amazons3.py
___________________________________________________________________
Added: svn:keywords
## -0,0 +1 ##
+Id
\ No newline at end of property
Modified: cedar-backup2/trunk/CedarBackup2/release.py
===================================================================
--- cedar-backup2/trunk/CedarBackup2/release.py 2014-10-01 01:42:33 UTC (rev 1046)
+++ cedar-backup2/trunk/CedarBackup2/release.py 2014-10-01 01:44:17 UTC (rev 1047)
@@ -33,8 +33,8 @@
AUTHOR = "Kenneth J. Pronovici"
EMAIL = "pro...@ie..."
-COPYRIGHT = "2004-2011,2013"
-VERSION = "2.22.0"
-DATE = "09 May 2013"
+COPYRIGHT = "2004-2011,2013,2014"
+VERSION = "2.23.0"
+DATE = "unreleased"
URL = "http://cedar-backup.sourceforge.net/"
Modified: cedar-backup2/trunk/Changelog
===================================================================
--- cedar-backup2/trunk/Changelog 2014-10-01 01:42:33 UTC (rev 1046)
+++ cedar-backup2/trunk/Changelog 2014-10-01 01:44:17 UTC (rev 1047)
@@ -1,14 +1,18 @@
+Version 2.23.0 unreleased
+
+ * Add new extension amazons3, as a new store-type action.
+
Version 2.22.0 09 May 2013
* Add eject-related kludges to work around observed behavior.
- * New config option eject_delay, to slow down open/close
- * Unlock tray with 'eject -i off' to handle potential problems
+ * New config option eject_delay, to slow down open/close
+ * Unlock tray with 'eject -i off' to handle potential problems
Version 2.21.1 21 Mar 2013
* Apply patches provided by Jan Medlock as Debian bugs.
- * Fix typo in manpage (showed -s instead of -D)
- * Support output from latest /usr/bin/split (' vs. `)
+ * Fix typo in manpage (showed -s instead of -D)
+ * Support output from latest /usr/bin/split (' vs. `)
Version 2.21.0 12 Oct 2011
Added: cedar-backup2/trunk/testcase/amazons3tests.py
===================================================================
--- cedar-backup2/trunk/testcase/amazons3tests.py (rev 0)
+++ cedar-backup2/trunk/testcase/amazons3tests.py 2014-10-01 01:44:17 UTC (rev 1047)
@@ -0,0 +1,584 @@
+#!/usr/bin/env python
+# -*- coding: iso-8859-1 -*-
+# vim: set ft=python ts=3 sw=3 expandtab:
+# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+#
+# C E D A R
+# S O L U T I O N S "Software done right."
+# S O F T W A R E
+#
+# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+#
+# Copyright (c) 2014 Kenneth J. Pronovici.
+# All rights reserved.
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License,
+# Version 2, as published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+#
+# Copies of the GNU General Public License are available from
+# the Free Software Foundation website, http://www.gnu.org/.
+#
+# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+#
+# Author : Kenneth J. Pronovici <pro...@ie...>
+# Language : Python (>= 2.5)
+# Project : Cedar Backup, release 2
+# Revision : $Id$
+# Purpose : Tests amazons3 extension functionality.
+#
+# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+
+########################################################################
+# Module documentation
+########################################################################
+
+"""
+Unit tests for CedarBackup2/extend/amazons3.py.
+
+Code Coverage
+=============
+
+  This module contains individual tests for the public classes implemented
+ in extend/amazons3.py. There are also tests for some of the private
+ functions.
+
+Naming Conventions
+==================
+
+ I prefer to avoid large unit tests which validate more than one piece of
+ functionality, and I prefer to avoid using overly descriptive (read: long)
+ test names, as well. Instead, I use lots of very small tests that each
+ validate one specific thing. These small tests are then named with an index
+ number, yielding something like C{testAddDir_001} or C{testValidate_010}.
+ Each method has a docstring describing what it's supposed to accomplish. I
+ feel that this makes it easier to judge how important a given failure is,
+ and also makes it somewhat easier to diagnose and fix individual problems.
+
+Testing XML Extraction
+======================
+
+  It's difficult to validate that generated XML is exactly "right",
+ especially when dealing with pretty-printed XML. We can't just provide a
+ constant string and say "the result must match this". Instead, what we do
+ is extract a node, build some XML from it, and then feed that XML back into
+ another object's constructor. If that parse process succeeds and the old
+ object is equal to the new object, we assume that the extract was
+ successful.
+
+ It would arguably be better if we could do a completely independent check -
+ but implementing that check would be equivalent to re-implementing all of
+ the existing functionality that we're validating here! After all, the most
+ important thing is that data can move seamlessly from object to XML document
+ and back to object.
+
+@author: Kenneth J. Pronovici <pro...@ie...>
+"""
+
+
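The round-trip strategy described in the docstring above (emit XML from an object, re-parse it, compare objects) can be illustrated with the standard library alone. This sketch uses plain `xml.dom.minidom` rather than Cedar Backup's `xmlutil` helpers, and the function names are hypothetical stand-ins for `addConfig()` and `_parseAmazonS3()`:

```python
import xml.dom.minidom

def emitAmazonS3Section(s3Bucket):
   """Build a tiny document shaped like //cb_config/amazons3/s3_bucket."""
   impl = xml.dom.minidom.getDOMImplementation()
   doc = impl.createDocument(None, "cb_config", None)
   section = doc.createElement("amazons3")
   doc.documentElement.appendChild(section)
   node = doc.createElement("s3_bucket")
   node.appendChild(doc.createTextNode(s3Bucket))
   section.appendChild(node)
   return doc.toxml()

def parseAmazonS3Section(xmlData):
   """Pull the bucket name back out of the serialized document."""
   doc = xml.dom.minidom.parseString(xmlData)
   nodes = doc.getElementsByTagName("s3_bucket")
   return nodes[0].firstChild.data if nodes else None
```

If the emitted XML parses back to the same value, the extract is assumed successful; that is exactly the equivalence check the test module applies to whole configuration objects.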
+########################################################################
+# Import modules and do runtime validations
+########################################################################
+
+# System modules
+import unittest
+import os
+import tempfile
+
+# Cedar Backup modules
+from CedarBackup2.filesystem import FilesystemList
+from CedarBackup2.testutil import findResources, buildPath, removedir, extractTar, failUnlessAssignRaises, platformSupportsLinks
+from CedarBackup2.xmlutil import createOutputDom, serializeDom
+from CedarBackup2.extend.amazons3 import LocalConfig, AmazonS3Config
+
+
+#######################################################################
+# Module-wide configuration and constants
+#######################################################################
+
+DATA_DIRS = [ "./data", "./testcase/data", ]
+RESOURCES = [ "amazons3.conf.1", "amazons3.conf.2", "tree1.tar.gz", "tree2.tar.gz",
+ "tree8.tar.gz", "tree15.tar.gz", "tree16.tar.gz", "tree17.tar.gz",
+ "tree18.tar.gz", "tree19.tar.gz", "tree20.tar.gz", ]
+
+
+#######################################################################
+# Test Case Classes
+#######################################################################
+
+##########################
+# TestAmazonS3Config class
+##########################
+
+class TestAmazonS3Config(unittest.TestCase):
+
+ """Tests for the AmazonS3Config class."""
+
+ ##################
+ # Utility methods
+ ##################
+
+ def failUnlessAssignRaises(self, exception, obj, prop, value):
+ """Equivalent of L{failUnlessRaises}, but used for property assignments instead."""
+ failUnlessAssignRaises(self, exception, obj, prop, value)
+
+
+ ############################
+ # Test __repr__ and __str__
+ ############################
+
+ def testStringFuncs_001(self):
+ """
+ Just make sure that the string functions don't have errors (i.e. bad variable names).
+ """
+ obj = AmazonS3Config()
+ obj.__repr__()
+ obj.__str__()
+
+
+ ##################################
+ # Test constructor and attributes
+ ##################################
+
+ def testConstructor_001(self):
+ """
+ Test constructor with no values filled in.
+ """
+ amazons3 = AmazonS3Config()
+ self.failUnlessEqual(None, amazons3.s3Bucket)
+
+ def testConstructor_002(self):
+ """
+ Test constructor with all values filled in, with valid values.
+ """
+ amazons3 = AmazonS3Config("bucket")
+ self.failUnlessEqual("bucket", amazons3.s3Bucket)
+
+ def testConstructor_003(self):
+ """
+ Test assignment of s3Bucket attribute, None value.
+ """
+ amazons3 = AmazonS3Config(s3Bucket="bucket")
+ self.failUnlessEqual("bucket", amazons3.s3Bucket)
+ amazons3.s3Bucket = None
+ self.failUnlessEqual(None, amazons3.s3Bucket)
+
+ def testConstructor_004(self):
+ """
+ Test assignment of s3Bucket attribute, valid value.
+ """
+ amazons3 = AmazonS3Config()
+ self.failUnlessEqual(None, amazons3.s3Bucket)
+ amazons3.s3Bucket = "bucket"
+ self.failUnlessEqual("bucket", amazons3.s3Bucket)
+
+ def testConstructor_005(self):
+ """
+ Test assignment of s3Bucket attribute, invalid value (empty).
+ """
+ amazons3 = AmazonS3Config()
+ self.failUnlessEqual(None, amazons3.s3Bucket)
+ self.failUnlessAssignRaises(ValueError, amazons3, "s3Bucket", "")
+ self.failUnlessEqual(None, amazons3.s3Bucket)
+
+
+ ############################
+ # Test comparison operators
+ ############################
+
+ def testComparison_001(self):
+ """
+ Test comparison of two identical objects, all attributes None.
+ """
+ amazons31 = AmazonS3Config()
+ amazons32 = AmazonS3Config()
+ self.failUnlessEqual(amazons31, amazons32)
+ self.failUnless(amazons31 == amazons32)
+ self.failUnless(not amazons31 < amazons32)
+ self.failUnless(amazons31 <= amazons32)
+ self.failUnless(not amazons31 > amazons32)
+ self.failUnless(amazons31 >= amazons32)
+ self.failUnless(not amazons31 != amazons32)
+
+ def testComparison_002(self):
+ """
+ Test comparison of two identical objects, all attributes non-None.
+ """
+ amazons31 = AmazonS3Config("bucket")
+ amazons32 = AmazonS3Config("bucket")
+ self.failUnlessEqual(amazons31, amazons32)
+ self.failUnless(amazons31 == amazons32)
+ self.failUnless(not amazons31 < amazons32)
+ self.failUnless(amazons31 <= amazons32)
+ self.failUnless(not amazons31 > ...
[truncated message content] |
|
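The truncated test message above exercises the full set of comparison operators (`==`, `<`, `<=`, `>`, `>=`, `!=`) on `AmazonS3Config`. A minimal sketch of how a config class can supply all six operators from a single comparison key is shown below; the class name and key layout are illustrative assumptions, not Cedar Backup's actual implementation:

```python
from functools import total_ordering

@total_ordering
class ConfigExample:
    """Illustrative config object ordered by an attribute tuple (hypothetical)."""

    def __init__(self, warnMidnite=False, s3Bucket=None):
        self.warnMidnite = warnMidnite
        self.s3Bucket = s3Bucket

    def _key(self):
        # None must sort before any string, so include a "has value" flag
        return (bool(self.warnMidnite), self.s3Bucket is not None, self.s3Bucket or "")

    def __eq__(self, other):
        return self._key() == other._key()

    def __lt__(self, other):
        return self._key() < other._key()

a = ConfigExample(s3Bucket="bucket1")
b = ConfigExample(s3Bucket="bucket2")
assert a == ConfigExample(s3Bucket="bucket1")
assert a < b and a <= b and b > a and a != b
```

With `functools.total_ordering`, defining only `__eq__` and `__lt__` is enough to make assertions like `failUnless(amazons31 >= amazons32)` meaningful, which matches the exhaustive operator checks in the tests.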
From: <pro...@us...> - 2014-10-01 20:16:07
|
Revision: 1051
http://sourceforge.net/p/cedar-backup/code/1051
Author: pronovic
Date: 2014-10-01 20:16:04 +0000 (Wed, 01 Oct 2014)
Log Message:
-----------
Add tests
Modified Paths:
--------------
cedar-backup2/trunk/CedarBackup2/extend/amazons3.py
cedar-backup2/trunk/testcase/amazons3tests.py
cedar-backup2/trunk/testcase/data/amazons3.conf.2
Modified: cedar-backup2/trunk/CedarBackup2/extend/amazons3.py
===================================================================
--- cedar-backup2/trunk/CedarBackup2/extend/amazons3.py 2014-10-01 19:39:24 UTC (rev 1050)
+++ cedar-backup2/trunk/CedarBackup2/extend/amazons3.py 2014-10-01 20:16:04 UTC (rev 1051)
@@ -82,8 +82,8 @@
# Cedar Backup modules
from CedarBackup2.util import resolveCommand, executeCommand, isRunningAsRoot
-from CedarBackup2.xmlutil import createInputDom, addContainerNode, addStringNode
-from CedarBackup2.xmlutil import readFirstChild, readString
+from CedarBackup2.xmlutil import createInputDom, addContainerNode, addBooleanNode, addStringNode
+from CedarBackup2.xmlutil import readFirstChild, readString, readBoolean
from CedarBackup2.actions.util import writeIndicatorFile
from CedarBackup2.actions.constants import DIR_TIME_FORMAT, STAGE_INDICATOR
@@ -340,6 +340,7 @@
We add the following fields to the document::
+ warnMidnite //cb_config/amazons3/warn_midnite
s3Bucket //cb_config/amazons3/s3_bucket
@param xmlDom: DOM tree as from C{impl.createDocument()}.
@@ -347,6 +348,7 @@
"""
if self.amazons3 is not None:
sectionNode = addContainerNode(xmlDom, parentNode, "amazons3")
+ addBooleanNode(xmlDom, sectionNode, "warn_midnite", self.amazons3.warnMidnite)
addStringNode(xmlDom, sectionNode, "s3_bucket", self.amazons3.s3Bucket)
def _parseXmlData(self, xmlData):
@@ -371,6 +373,7 @@
We read the following individual fields::
+ warnMidnite //cb_config/amazons3/warn_midnite
s3Bucket //cb_config/amazons3/s3_bucket
@param parent: Parent node to search beneath.
@@ -382,6 +385,7 @@
section = readFirstChild(parent, "amazons3")
if section is not None:
amazons3 = AmazonS3Config()
+ amazons3.warnMidnite = readBoolean(section, "warn_midnite")
amazons3.s3Bucket = readString(section, "s3_bucket")
return amazons3
Modified: cedar-backup2/trunk/testcase/amazons3tests.py
===================================================================
--- cedar-backup2/trunk/testcase/amazons3tests.py 2014-10-01 19:39:24 UTC (rev 1050)
+++ cedar-backup2/trunk/testcase/amazons3tests.py 2014-10-01 20:16:04 UTC (rev 1051)
@@ -149,20 +149,23 @@
Test constructor with no values filled in.
"""
amazons3 = AmazonS3Config()
+ self.failUnlessEqual(False, amazons3.warnMidnite)
self.failUnlessEqual(None, amazons3.s3Bucket)
def testConstructor_002(self):
"""
Test constructor with all values filled in, with valid values.
"""
- amazons3 = AmazonS3Config("bucket")
+ amazons3 = AmazonS3Config(True, "bucket")
+ self.failUnlessEqual(True, amazons3.warnMidnite)
self.failUnlessEqual("bucket", amazons3.s3Bucket)
def testConstructor_003(self):
"""
Test assignment of s3Bucket attribute, None value.
"""
- amazons3 = AmazonS3Config(s3Bucket="bucket")
+ amazons3 = AmazonS3Config(warnMidnite=True, s3Bucket="bucket")
+ self.failUnlessEqual(True, amazons3.warnMidnite)
self.failUnlessEqual("bucket", amazons3.s3Bucket)
amazons3.s3Bucket = None
self.failUnlessEqual(None, amazons3.s3Bucket)
@@ -185,7 +188,35 @@
self.failUnlessAssignRaises(ValueError, amazons3, "s3Bucket", "")
self.failUnlessEqual(None, amazons3.s3Bucket)
+ def testConstructor_006(self):
+ """
+ Test assignment of warnMidnite attribute, valid value (real boolean).
+ """
+ amazons3 = AmazonS3Config()
+ self.failUnlessEqual(False, amazons3.warnMidnite)
+ amazons3.warnMidnite = True
+ self.failUnlessEqual(True, amazons3.warnMidnite)
+ amazons3.warnMidnite = False
+ self.failUnlessEqual(False, amazons3.warnMidnite)
+ def testConstructor_007(self):
+ """
+ Test assignment of warnMidnite attribute, valid value (expression).
+ """
+ amazons3 = AmazonS3Config()
+ self.failUnlessEqual(False, amazons3.warnMidnite)
+ amazons3.warnMidnite = 0
+ self.failUnlessEqual(False, amazons3.warnMidnite)
+ amazons3.warnMidnite = []
+ self.failUnlessEqual(False, amazons3.warnMidnite)
+ amazons3.warnMidnite = None
+ self.failUnlessEqual(False, amazons3.warnMidnite)
+ amazons3.warnMidnite = ['a']
+ self.failUnlessEqual(True, amazons3.warnMidnite)
+ amazons3.warnMidnite = 3
+ self.failUnlessEqual(True, amazons3.warnMidnite)
+
+
############################
# Test comparison operators
############################
@@ -236,8 +267,8 @@
"""
Test comparison of two differing objects, s3Bucket differs.
"""
- amazons31 = AmazonS3Config("bucket1")
- amazons32 = AmazonS3Config("bucket2")
+ amazons31 = AmazonS3Config(True, "bucket1")
+ amazons32 = AmazonS3Config(True, "bucket2")
self.failIfEqual(amazons31, amazons32)
self.failUnless(not amazons31 == amazons32)
self.failUnless(amazons31 < amazons32)
@@ -246,7 +277,21 @@
self.failUnless(not amazons31 >= amazons32)
self.failUnless(amazons31 != amazons32)
+ def testComparison_005(self):
+ """
+ Test comparison of two differing objects, warnMidnite differs.
+ """
+ amazons31 = AmazonS3Config(warnMidnite=False)
+ amazons32 = AmazonS3Config(warnMidnite=True)
+ self.failIfEqual(amazons31, amazons32)
+ self.failUnless(not amazons31 == amazons32)
+ self.failUnless(amazons31 < amazons32)
+ self.failUnless(amazons31 <= amazons32)
+ self.failUnless(not amazons31 > amazons32)
+ self.failUnless(not amazons31 >= amazons32)
+ self.failUnless(amazons31 != amazons32)
+
########################
# TestLocalConfig class
########################
@@ -414,13 +459,13 @@
def testComparison_004(self):
"""
- Test comparison of two differing objects, s3Bucket differs.
+ Test comparison of two differing objects, s3Bucket differs.
"""
config1 = LocalConfig()
- config1.amazons3 = AmazonS3Config("bucket1")
+ config1.amazons3 = AmazonS3Config(True, "bucket1")
config2 = LocalConfig()
- config2.amazons3 = AmazonS3Config("bucket2")
+ config2.amazons3 = AmazonS3Config(True, "bucket2")
self.failIfEqual(config1, config2)
self.failUnless(not config1 == config2)
@@ -464,7 +509,7 @@
Test validate on a non-empty amazons3 section with valid values filled in.
"""
config = LocalConfig()
- config.amazons3 = AmazonS3Config("bucket")
+ config.amazons3 = AmazonS3Config(True, "bucket")
config.validate()
@@ -493,9 +538,11 @@
contents = open(path).read()
config = LocalConfig(xmlPath=path, validate=False)
self.failIfEqual(None, config.amazons3)
+ self.failUnlessEqual(True, config.amazons3.warnMidnite)
self.failUnlessEqual("mybucket", config.amazons3.s3Bucket)
config = LocalConfig(xmlData=contents, validate=False)
self.failIfEqual(None, config.amazons3)
+ self.failUnlessEqual(True, config.amazons3.warnMidnite)
self.failUnlessEqual("mybucket", config.amazons3.s3Bucket)
@@ -516,7 +563,7 @@
"""
Test with values set.
"""
- amazons3 = AmazonS3Config("bucket")
+ amazons3 = AmazonS3Config(True, "bucket")
config = LocalConfig()
config.amazons3 = amazons3
self.validateAddConfig(config)
Modified: cedar-backup2/trunk/testcase/data/amazons3.conf.2
===================================================================
--- cedar-backup2/trunk/testcase/data/amazons3.conf.2 2014-10-01 19:39:24 UTC (rev 1050)
+++ cedar-backup2/trunk/testcase/data/amazons3.conf.2 2014-10-01 20:16:04 UTC (rev 1051)
@@ -2,6 +2,7 @@
<!-- Valid document -->
<cb_config>
<amazons3>
+ <warn_midnite>Y</warn_midnite>
<s3_bucket>mybucket</s3_bucket>
</amazons3>
</cb_config>
This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site.
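The `testConstructor_007` case added above shows `warnMidnite` coercing arbitrary expressions (`0`, `[]`, `None`, `['a']`, `3`) to a real boolean on assignment. A hedged sketch of a property with that behavior follows; it is an assumption about how the config class behaves, not its actual source:

```python
class WarnMidniteExample:
    """Illustrative holder that coerces warnMidnite assignments to bool (hypothetical)."""

    def __init__(self):
        self._warnMidnite = False

    @property
    def warnMidnite(self):
        return self._warnMidnite

    @warnMidnite.setter
    def warnMidnite(self, value):
        # Coerce any expression to its truth value, so 0, [], and None
        # all become False, while non-empty values become True.
        self._warnMidnite = bool(value)

obj = WarnMidniteExample()
obj.warnMidnite = ['a']
assert obj.warnMidnite is True
obj.warnMidnite = None
assert obj.warnMidnite is False
```

This explains why the tests can assert `failUnlessEqual(True, amazons3.warnMidnite)` after assigning a non-empty list: the setter stores the truth value, never the original expression.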
|
|
From: <pro...@us...> - 2014-10-01 21:42:20
|
Revision: 1053
http://sourceforge.net/p/cedar-backup/code/1053
Author: pronovic
Date: 2014-10-01 21:42:12 +0000 (Wed, 01 Oct 2014)
Log Message:
-----------
Finish user documentation for amazons3 extension
Modified Paths:
--------------
cedar-backup2/trunk/CedarBackup2/extend/amazons3.py
cedar-backup2/trunk/manual/src/depends.xml
cedar-backup2/trunk/manual/src/extensions.xml
Modified: cedar-backup2/trunk/CedarBackup2/extend/amazons3.py
===================================================================
--- cedar-backup2/trunk/CedarBackup2/extend/amazons3.py 2014-10-01 20:30:20 UTC (rev 1052)
+++ cedar-backup2/trunk/CedarBackup2/extend/amazons3.py 2014-10-01 21:42:12 UTC (rev 1053)
@@ -46,10 +46,10 @@
Since it is intended to replace the store action, it does not rely on any store
configuration.
-The underlying functionality relies on the U{{Amazon S3Tools} <http://s3tools.org/>}
+The underlying functionality relies on the U{Amazon S3Tools <http://s3tools.org/>}
package. It is a very thin wrapper around the C{s3cmd put} command. Before
you use this extension, you need to set up your Amazon S3 account and configure
-C{s3cmd} as detailed in the U{{HOWTO} <http://s3tools.org/s3cmd-howto>}. The
+C{s3cmd} as detailed in the U{HOWTO <http://s3tools.org/s3cmd-howto>}. The
extension assumes that the backup is being executed as root, and switches over
to the configured backup user to run the C{s3cmd} program. So, make sure you
configure S3 Tools as the backup user and not root.
@@ -57,7 +57,7 @@
It's up to you how to configure the S3 Tools connection to Amazon, but I
recommend that you configure GPG encryption using a strong passphrase. One way
to generate a strong passphrase is using your system random number generator,
-i.e. C{dd if=/dev/urandom count=20 bs=1 | xxd -ps}. (See U{{StackExchange}
+i.e. C{dd if=/dev/urandom count=20 bs=1 | xxd -ps}. (See U{StackExchange
<http://security.stackexchange.com/questions/14867/gpg-encryption-security>}
for more details about that advice.) If you decide to use encryption, make sure
you save off the passphrase in a safe place, so you can get at your backup data
Modified: cedar-backup2/trunk/manual/src/depends.xml
===================================================================
--- cedar-backup2/trunk/manual/src/depends.xml 2014-10-01 20:30:20 UTC (rev 1052)
+++ cedar-backup2/trunk/manual/src/depends.xml 2014-10-01 21:42:12 UTC (rev 1053)
@@ -41,13 +41,13 @@
<variablelist>
<varlistentry>
- <term>Python 2.5</term>
+ <term>Python 2.5 (or later)</term>
<listitem>
<para>
Version 2.5 of the Python interpreter was released on 19 Sep
- 2006, so most current Linux and BSD distributions should
- include it.
+ 2006, so virtually any current Linux or BSD distribution should
+ include it or a later version.
</para>
<informaltable>
@@ -556,6 +556,47 @@
</varlistentry>
<varlistentry>
+ <term><command>s3cmd</command></term>
+ <listitem>
+
+ <para>
+ The <command>s3cmd</command> command is used by the Amazon S3
+ extension to communicate with Amazon AWS.
+ </para>
+
+ <informaltable>
+ <tgroup cols="2">
+ <colspec colnum="1" colwidth="1*"/>
+ <colspec colnum="2" colwidth="3.5*"/>
+ <thead>
+ <row>
+ <entry>Source</entry>
+ <entry>URL</entry>
+ </row>
+ </thead>
+ <tbody>
+ <row>
+ <entry>upstream</entry>
+ <entry><ulink url="http://s3tools.org/s3cmd"/></entry>
+ </row>
+ <row>
+ <entry>Debian</entry>
+ <entry><ulink url="https://packages.debian.org/stable/s3cmd"/></entry>
+ </row>
+ </tbody>
+ </tgroup>
+ </informaltable>
+
+ <para>
+ If you can't find a package for your system, install from the package
+ source, using the <quote>upstream</quote> link.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+
+ <varlistentry>
<term><command>split</command></term>
<listitem>
Modified: cedar-backup2/trunk/manual/src/extensions.xml
===================================================================
--- cedar-backup2/trunk/manual/src/extensions.xml 2014-10-01 20:30:20 UTC (rev 1052)
+++ cedar-backup2/trunk/manual/src/extensions.xml 2014-10-01 21:42:12 UTC (rev 1053)
@@ -93,6 +93,134 @@
<!-- ################################################################# -->
+ <sect1 id="cedar-extensions-amazons3">
+
+ <title>Amazon S3 Extension</title>
+
+ <para>
+ The Amazon S3 extension writes data to Amazon S3 cloud storage rather
+ than to physical media. It is intended to replace the store action,
+ but you can also use it alongside the store action if you'd prefer to
+ backup your data in more than one place. This extension must be run
+ after the stage action.
+ </para>
+
+ <para>
+ The underlying functionality relies on the
+ <ulink url="http://s3tools.org/">Amazon S3 Tools</ulink> package. It
+ is a very thin wrapper around the <literal>s3cmd put</literal>
+ command. Before you use this extension, you need to set up your
+ Amazon S3 account and configure <literal>s3cmd</literal> as detailed
+ in the <ulink url="http://s3tools.org/s3cmd-howto">HOWTO</ulink>.
+ The extension assumes that the backup is being executed as root, and
+ switches over to the configured backup user to run the
+ <literal>s3cmd</literal> program. So, make sure you configure the S3
+ tools as the backup user and not root.
+ </para>
+
+ <para>
+ When configuring the S3 tools connection to Amazon AWS, you probably want
+ to configure GPG encryption using a strong passphrase. One way
+ to generate a strong passphrase is using your system random number generator,
+ i.e. <literal>dd if=/dev/urandom count=20 bs=1 | xxd -ps</literal>. (See
+ <ulink url="http://security.stackexchange.com/questions/14867/gpg-encryption-security">StackExchange</ulink>
+ for more details about that advice.) If you decide to use encryption, make sure
+ you save off the passphrase in a safe place, so you can get at your backup data
+ later if you need to.
+ </para>
+
+ <para>
+ This extension was written for and tested on Linux. It will throw an exception
+ if run on Windows.
+ </para>
+
+ <para>
+ To enable this extension, add the following section to the Cedar Backup
+ configuration file:
+ </para>
+
+ <programlisting>
+<extensions>
+ <action>
+ <name>amazons3</name>
+ <module>CedarBackup2.extend.amazons3</module>
+ <function>executeAction</function>
+ <index>201</index> <!-- just after stage -->
+ </action>
+</extensions>
+ </programlisting>
+
+ <para>
+ This extension relies on the options and staging configuration sections
+ in the standard Cedar Backup configuration file, and then also
+ requires its own <literal>amazons3</literal> configuration section.
+ This is an example configuration section:
+ </para>
+
+ <programlisting>
+<amazons3>
+ <s3_bucket>example.com-backup/staging</s3_bucket>
+</amazons3>
+ </programlisting>
+
+ <para>
+ The following elements are part of the Amazon S3 configuration section:
+ </para>
+
+ <variablelist>
+
+ <varlistentry>
+ <term><literal>warn_midnite</literal></term>
+ <listitem>
+ <para>Whether to generate warnings for crossing midnite.</para>
+ <para>
+ This field indicates whether warnings should be generated
+ if the Amazon S3 operation has to cross a midnite boundary in
+ order to find data to write to the cloud. For instance, a
+ warning would be generated if valid store data was only
+ found in the day before or day after the current day.
+ </para>
+ <para>
+ Configuration for some users is such that the store
+ operation will always cross a midnite boundary, so they
+ will not care about this warning. Other users will expect
+ to never cross a boundary, and want to be notified that
+ something <quote>strange</quote> might have happened.
+ </para>
+ <para>
+ This field is optional. If it doesn't exist, then
+ <literal>N</literal> will be assumed.
+ </para>
+ <para>
+ <emphasis>Restrictions:</emphasis> Must be a boolean (<literal>Y</literal> or <literal>N</literal>).
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><literal>s3_bucket</literal></term>
+ <listitem>
+ <para>The name of the Amazon S3 bucket that data will be written to.</para>
+ <para>
+ This field configures the S3 bucket that your data will be
+ written to. In S3, buckets are named globally. For
+ uniqueness, you would typically use the name of your domain
+ followed by some suffix, such as <literal>example.com-backup</literal>.
+ If you want, you can specify a subdirectory within the bucket,
+ such as <literal>example.com-backup/staging</literal>.
+ </para>
+ <para>
+ <emphasis>Restrictions:</emphasis> Must be non-empty.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+
+ </sect1>
+
+ <!-- ################################################################# -->
+
<sect1 id="cedar-extensions-subversion">
<title>Subversion Extension</title>
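The manual text in revision 1053 recommends generating a strong passphrase from the system random number generator via `dd if=/dev/urandom count=20 bs=1 | xxd -ps`. The same output format (20 random bytes as lowercase hex) can be produced with Python's standard library; this is an equivalent illustration, not part of Cedar Backup itself:

```python
import binascii
import os

def random_passphrase(nbytes=20):
    """Return nbytes of OS randomness as a lowercase hex string,
    matching the format of `dd if=/dev/urandom count=20 bs=1 | xxd -ps`."""
    return binascii.hexlify(os.urandom(nbytes)).decode("ascii")

# 20 bytes -> 40 hex characters, e.g. for a GPG passphrase file
print(random_passphrase())
```

`os.urandom` draws from the same kernel entropy source as `/dev/urandom`, so the security properties of the resulting passphrase are comparable to the shell pipeline in the manual.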
|
|
From: <pro...@us...> - 2014-10-01 21:52:02
|
Revision: 1054
http://sourceforge.net/p/cedar-backup/code/1054
Author: pronovic
Date: 2014-10-01 21:51:59 +0000 (Wed, 01 Oct 2014)
Log Message:
-----------
Update copyright statements
Modified Paths:
--------------
cedar-backup2/trunk/manual/src/depends.xml
cedar-backup2/trunk/manual/src/extensions.xml
cedar-backup2/trunk/util/test.py
Modified: cedar-backup2/trunk/manual/src/depends.xml
===================================================================
--- cedar-backup2/trunk/manual/src/depends.xml 2014-10-01 21:42:12 UTC (rev 1053)
+++ cedar-backup2/trunk/manual/src/depends.xml 2014-10-01 21:51:59 UTC (rev 1054)
@@ -7,7 +7,7 @@
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
-# Copyright (c) 2005-2007,2010 Kenneth J. Pronovici.
+# Copyright (c) 2005-2007,2010,2014 Kenneth J. Pronovici.
# All rights reserved.
#
# This work is free; you can redistribute it and/or modify it
Modified: cedar-backup2/trunk/manual/src/extensions.xml
===================================================================
--- cedar-backup2/trunk/manual/src/extensions.xml 2014-10-01 21:42:12 UTC (rev 1053)
+++ cedar-backup2/trunk/manual/src/extensions.xml 2014-10-01 21:51:59 UTC (rev 1054)
@@ -7,7 +7,7 @@
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
-# Copyright (c) 2005-2008,2010 Kenneth J. Pronovici.
+# Copyright (c) 2005-2008,2010,2014 Kenneth J. Pronovici.
# All rights reserved.
#
# This work is free; you can redistribute it and/or modify it
Modified: cedar-backup2/trunk/util/test.py
===================================================================
--- cedar-backup2/trunk/util/test.py 2014-10-01 21:42:12 UTC (rev 1053)
+++ cedar-backup2/trunk/util/test.py 2014-10-01 21:51:59 UTC (rev 1054)
@@ -9,7 +9,7 @@
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
-# Copyright (c) 2004-2008,2010 Kenneth J. Pronovici.
+# Copyright (c) 2004-2008,2010,2014 Kenneth J. Pronovici.
# All rights reserved.
#
# This program is free software; you can redistribute it and/or
|
|
From: <pro...@us...> - 2014-10-01 21:52:38
|
Revision: 1055
http://sourceforge.net/p/cedar-backup/code/1055
Author: pronovic
Date: 2014-10-01 21:52:29 +0000 (Wed, 01 Oct 2014)
Log Message:
-----------
Document Amazon S3 in the description
Modified Paths:
--------------
cedar-backup2/trunk/README
cedar-backup2/trunk/setup.py
Modified: cedar-backup2/trunk/README
===================================================================
--- cedar-backup2/trunk/README 2014-10-01 21:51:59 UTC (rev 1054)
+++ cedar-backup2/trunk/README 2014-10-01 21:52:29 UTC (rev 1055)
@@ -24,8 +24,11 @@
with the expectation that the disc will be changed or overwritten at the
beginning of each week. If your hardware is new enough, Cedar Backup can
write multisession discs, allowing you to add incremental data to a disc on
-a daily basis.
+a daily basis.
+Alternately, Cedar Backup can write your backups to the Amazon S3 cloud
+rather than relying on physical media.
+
Besides offering command-line utilities to manage the backup process, Cedar
Backup provides a well-organized library of backup-related functionality,
written in the Python programming language.
Modified: cedar-backup2/trunk/setup.py
===================================================================
--- cedar-backup2/trunk/setup.py 2014-10-01 21:51:59 UTC (rev 1054)
+++ cedar-backup2/trunk/setup.py 2014-10-01 21:52:29 UTC (rev 1055)
@@ -44,6 +44,9 @@
multisession discs, allowing you to add incremental data to a disc on a daily
basis.
+Alternately, Cedar Backup can write your backups to the Amazon S3 cloud
+rather than relying on physical media.
+
Besides offering command-line utilities to manage the backup process, Cedar
Backup provides a well-organized library of backup-related functionality,
written in the Python programming language.
|
|
From: <pro...@us...> - 2014-10-01 21:54:32
|
Revision: 1056
http://sourceforge.net/p/cedar-backup/code/1056
Author: pronovic
Date: 2014-10-01 21:54:28 +0000 (Wed, 01 Oct 2014)
Log Message:
-----------
Release v2.23.0
Modified Paths:
--------------
cedar-backup2/trunk/CedarBackup2/release.py
cedar-backup2/trunk/Changelog
Modified: cedar-backup2/trunk/CedarBackup2/release.py
===================================================================
--- cedar-backup2/trunk/CedarBackup2/release.py 2014-10-01 21:52:29 UTC (rev 1055)
+++ cedar-backup2/trunk/CedarBackup2/release.py 2014-10-01 21:54:28 UTC (rev 1056)
@@ -35,6 +35,6 @@
EMAIL = "pro...@ie..."
COPYRIGHT = "2004-2011,2013,2014"
VERSION = "2.23.0"
-DATE = "unreleased"
+DATE = "01 Oct 2014"
URL = "http://cedar-backup.sourceforge.net/"
Modified: cedar-backup2/trunk/Changelog
===================================================================
--- cedar-backup2/trunk/Changelog 2014-10-01 21:52:29 UTC (rev 1055)
+++ cedar-backup2/trunk/Changelog 2014-10-01 21:54:28 UTC (rev 1056)
@@ -1,6 +1,8 @@
-Version 2.23.0 unreleased
+Version 2.23.0 01 Oct 2014
- * Add new extension amazons3, as a new store-type action.
+ * Add new extension amazons3 to replace the store action.
+ * Update user manual to clarify a few of the dependencies.
+ * Fix encryption unit test that started failing due to my new GPG key.
Version 2.22.0 09 May 2013
|
|
From: <pro...@us...> - 2014-10-01 22:00:40
|
Revision: 1058
http://sourceforge.net/p/cedar-backup/code/1058
Author: pronovic
Date: 2014-10-01 22:00:33 +0000 (Wed, 01 Oct 2014)
Log Message:
-----------
Change release to 2.23.1
Modified Paths:
--------------
cedar-backup2/trunk/CedarBackup2/release.py
cedar-backup2/trunk/Changelog
Modified: cedar-backup2/trunk/CedarBackup2/release.py
===================================================================
--- cedar-backup2/trunk/CedarBackup2/release.py 2014-10-01 21:55:00 UTC (rev 1057)
+++ cedar-backup2/trunk/CedarBackup2/release.py 2014-10-01 22:00:33 UTC (rev 1058)
@@ -34,7 +34,7 @@
AUTHOR = "Kenneth J. Pronovici"
EMAIL = "pro...@ie..."
COPYRIGHT = "2004-2011,2013,2014"
-VERSION = "2.23.0"
+VERSION = "2.23.1"
DATE = "01 Oct 2014"
URL = "http://cedar-backup.sourceforge.net/"
Modified: cedar-backup2/trunk/Changelog
===================================================================
--- cedar-backup2/trunk/Changelog 2014-10-01 21:55:00 UTC (rev 1057)
+++ cedar-backup2/trunk/Changelog 2014-10-01 22:00:33 UTC (rev 1058)
@@ -1,4 +1,4 @@
-Version 2.23.0 01 Oct 2014
+Version 2.23.1 01 Oct 2014
* Add new extension amazons3 to replace the store action.
* Update user manual to clarify a few of the dependencies.
|
|
From: <pro...@us...> - 2014-10-02 01:36:16
|
Revision: 1061
http://sourceforge.net/p/cedar-backup/code/1061
Author: pronovic
Date: 2014-10-02 01:36:07 +0000 (Thu, 02 Oct 2014)
Log Message:
-----------
Update documentation to discuss minimum version for s3cmd
Modified Paths:
--------------
cedar-backup2/trunk/CedarBackup2/extend/amazons3.py
cedar-backup2/trunk/Changelog
cedar-backup2/trunk/manual/src/depends.xml
cedar-backup2/trunk/manual/src/extensions.xml
Modified: cedar-backup2/trunk/CedarBackup2/extend/amazons3.py
===================================================================
--- cedar-backup2/trunk/CedarBackup2/extend/amazons3.py 2014-10-02 01:23:46 UTC (rev 1060)
+++ cedar-backup2/trunk/CedarBackup2/extend/amazons3.py 2014-10-02 01:36:07 UTC (rev 1061)
@@ -47,12 +47,13 @@
configuration.
The underlying functionality relies on the U{Amazon S3Tools <http://s3tools.org/>}
-package. It is a very thin wrapper around the C{s3cmd put} command. Before
-you use this extension, you need to set up your Amazon S3 account and configure
-C{s3cmd} as detailed in the U{HOWTO <http://s3tools.org/s3cmd-howto>}. The
-extension assumes that the backup is being executed as root, and switches over
-to the configured backup user to run the C{s3cmd} program. So, make sure you
-configure S3 Tools as the backup user and not root.
+package, version 1.5.0-rc1 or newer. It is a very thin wrapper around the
+C{s3cmd put} command. Before you use this extension, you need to set up your
+Amazon S3 account and configure C{s3cmd} as detailed in the U{HOWTO
+<http://s3tools.org/s3cmd-howto>}. The extension assumes that the backup is
+being executed as root, and switches over to the configured backup user to run
+the C{s3cmd} program. So, make sure you configure S3 Tools as the backup user
+and not root.
It's up to you how to configure the S3 Tools connection to Amazon, but I
recommend that you configure GPG encryption using a strong passphrase. One way
Modified: cedar-backup2/trunk/Changelog
===================================================================
--- cedar-backup2/trunk/Changelog 2014-10-02 01:23:46 UTC (rev 1060)
+++ cedar-backup2/trunk/Changelog 2014-10-02 01:36:07 UTC (rev 1061)
@@ -1,4 +1,4 @@
-Version 2.23.1 01 Oct 2014
+Version 2.23.2 01 Oct 2014
* Add new extension amazons3 to replace the store action.
* Update user manual to clarify a few of the dependencies.
Modified: cedar-backup2/trunk/manual/src/depends.xml
===================================================================
--- cedar-backup2/trunk/manual/src/depends.xml 2014-10-02 01:23:46 UTC (rev 1060)
+++ cedar-backup2/trunk/manual/src/depends.xml 2014-10-02 01:36:07 UTC (rev 1061)
@@ -561,8 +561,23 @@
<para>
The <command>s3cmd</command> command is used by the Amazon S3
- extension to communicate with Amazon AWS.
+ extension to communicate with Amazon AWS. Cedar Backup requires
+ version 1.5.0-rc1 or later. Earlier versions have problems
+ uploading large files in the background (non-TTY), and there
+ was also a syntax change that the extension relies on.
</para>
+
+ <para>
+ As of this writing, the version of s3cmd in Debian wheezy is
+ not new enough, and it is not possible to pin the correct
+ version from testing or unstable due to a generated
+ dependency on python:all (which does not exist in wheezy).
+ It is possible to force dpkg to install the package anyway:
+ download the appropriate <literal>.deb</literal> file, and
+ then install with <literal>dpkg --force-all</literal>.
+ Alternately, the Cedar Solutions APT source contains a
+ backported version of 1.5.0~rc1-2.
+ </para>
<informaltable>
<tgroup cols="2">
@@ -579,10 +594,6 @@
<entry>upstream</entry>
<entry><ulink url="http://s3tools.org/s3cmd"/></entry>
</row>
- <row>
- <entry>Debian</entry>
- <entry><ulink url="https://packages.debian.org/stable/s3cmd"/></entry>
- </row>
</tbody>
</tgroup>
</informaltable>
Modified: cedar-backup2/trunk/manual/src/extensions.xml
===================================================================
--- cedar-backup2/trunk/manual/src/extensions.xml 2014-10-02 01:23:46 UTC (rev 1060)
+++ cedar-backup2/trunk/manual/src/extensions.xml 2014-10-02 01:36:07 UTC (rev 1061)
@@ -107,11 +107,12 @@
<para>
The underlying functionality relies on the
- <ulink url="http://s3tools.org/">Amazon S3 Tools</ulink> package. It
- is a very thin wrapper around the <literal>s3cmd put</literal>
- command. Before you use this extension, you need to set up your
- Amazon S3 account and configure <literal>s3cmd</literal> as detailed
- in the <ulink url="http://s3tools.org/s3cmd-howto">HOWTO</ulink>.
+ <ulink url="http://s3tools.org/">Amazon S3 Tools</ulink> package, version
+ 1.5.0-rc1 or newer. It is a very thin wrapper around the
+ <literal>s3cmd put</literal> command. Before you use this extension,
+ you need to set up your Amazon S3 account and configure
+ <literal>s3cmd</literal> as detailed in the
+ <ulink url="http://s3tools.org/s3cmd-howto">HOWTO</ulink>.
The extension assumes that the backup is being executed as root, and
switches over to the configured backup user to run the
<literal>s3cmd</literal> program. So, make sure you configure the S3
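Revision 1061 documents that the extension requires s3cmd version 1.5.0-rc1 or later. A hedged sketch of parsing and comparing such a version string follows; the helper names and parsing rules are illustrative assumptions, and Cedar Backup's actual check (if any) may differ:

```python
import re

def parse_s3cmd_version(text):
    """Extract (major, minor, patch) from output like 's3cmd version 1.5.0-rc1'.

    Pre-release suffixes such as -rc1 are ignored, which matches the stated
    requirement that 1.5.0-rc1 itself is acceptable."""
    match = re.search(r"(\d+)\.(\d+)\.(\d+)", text)
    if match is None:
        raise ValueError("no version found in %r" % text)
    return tuple(int(part) for part in match.groups())

def meets_minimum(text, minimum=(1, 5, 0)):
    """True if the reported version is at least the required minimum."""
    return parse_s3cmd_version(text) >= minimum

assert meets_minimum("s3cmd version 1.5.0-rc1")
assert not meets_minimum("s3cmd version 1.1.0")
```

In practice the version text would come from running `s3cmd --version` as the backup user; tuple comparison then handles the ordering without any third-party version library.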
|
|
From: <pro...@us...> - 2014-10-03 00:04:32
|
Revision: 1064
http://sourceforge.net/p/cedar-backup/code/1064
Author: pronovic
Date: 2014-10-03 00:04:21 +0000 (Fri, 03 Oct 2014)
Log Message:
-----------
Start rewriting amazons3 using aws-cli
Modified Paths:
--------------
cedar-backup2/trunk/CedarBackup2/extend/amazons3.py
cedar-backup2/trunk/testcase/amazons3tests.py
cedar-backup2/trunk/testcase/data/amazons3.conf.2
Modified: cedar-backup2/trunk/CedarBackup2/extend/amazons3.py
===================================================================
--- cedar-backup2/trunk/CedarBackup2/extend/amazons3.py 2014-10-02 01:38:10 UTC (rev 1063)
+++ cedar-backup2/trunk/CedarBackup2/extend/amazons3.py 2014-10-03 00:04:21 UTC (rev 1064)
@@ -46,24 +46,32 @@
Since it is intended to replace the store action, it does not rely on any store
configuration.
-The underlying functionality relies on the U{Amazon S3Tools <http://s3tools.org/>}
-package, version 1.5.0-rc1 or newer. It is a very thin wrapper around the
-C{s3cmd put} command. Before you use this extension, you need to set up your
-Amazon S3 account and configure C{s3cmd} as detailed in the U{HOWTO
-<http://s3tools.org/s3cmd-howto>}. The extension assumes that the backup is
-being executed as root, and switches over to the configured backup user to run
-the C{s3cmd} program. So, make sure you configure S3 Tools as the backup user
+The underlying functionality relies on the U{AWS CLI interface
+<http://aws.amazon.com/documentation/cli/>}. Before you use this extension,
+you need to set up your Amazon S3 account and configure the AWS CLI connection
+per Amazon's documentation. The extension assumes that the backup is being
+executed as root, and switches over to the configured backup user to
+communicate with AWS. So, make sure you configure AWS CLI as the backup user
and not root.
-It's up to you how to configure the S3 Tools connection to Amazon, but I
-recommend that you configure GPG encryption using a strong passphrase. One way
-to generate a strong passphrase is using your system random number generator,
-i.e. C{dd if=/dev/urandom count=20 bs=1 | xxd -ps}. (See U{StackExchange
-<http://security.stackexchange.com/questions/14867/gpg-encryption-security>}
-for more details about that advice.) If you decide to use encryption, make sure
-you save off the passphrase in a safe place, so you can get at your backup data
-later if you need to.
+You can optionally configure Cedar Backup to encrypt data before sending it
+to S3. To do that, provide a complete command line using the ${input} and
+${output} variables to represent the original input file and the encrypted
+output file. This command will be executed as the backup user.
+For instance, you can use something like this with GPG::
+
+ /usr/bin/gpg -c --no-use-agent --batch --yes --passphrase-file /home/backup/.passphrase -o ${output} ${input}
+
+The GPG mechanism depends on a strong passphrase for security. One way to
+generate a strong passphrase is using your system random number generator, i.e.
+C{dd if=/dev/urandom count=20 bs=1 | xxd -ps}. (See U{StackExchange
+<http://security.stackexchange.com/questions/14867/gpg-encryption-security>} for
+more details about that advice.) If you decide to use encryption, make sure you
+save off the passphrase in a safe place, so you can get at your backup data
+later if you need to. And obviously, make sure to set permissions on the
+passphrase file so it can only be read by the backup user.
+
This extension was written for and tested on Linux. It will throw an exception
if run on Windows.
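The `${input}` and `${output}` placeholders described in the docstring above happen to match Python's `string.Template` syntax, so expanding the configured command line is straightforward. A minimal sketch of that expansion (a hypothetical helper for illustration, not the extension's actual code):

```python
from string import Template

def expand_encrypt_command(encrypt_command, input_path, output_path):
    # Fill the ${input}/${output} placeholders with concrete file paths.
    return Template(encrypt_command).substitute(input=input_path, output=output_path)

command = "/usr/bin/gpg -c --batch --yes -o ${output} ${input}"
print(expand_encrypt_command(command, "backup.tar.gz", "backup.tar.gz.gpg"))
```

The expanded string would then be handed to the shell as the backup user.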
@@ -96,7 +104,7 @@
logger = logging.getLogger("CedarBackup2.log.extend.amazons3")
SU_COMMAND = [ "su" ]
-S3CMD_COMMAND = [ "s3cmd", ]
+AWS_COMMAND = [ "aws" ]
STORE_INDICATOR = "cback.amazons3"
@@ -115,30 +123,34 @@
The following restrictions exist on data in this class:
- - The s3Bucket value must be a non-empty string
+ - The s3Bucket value, if set, must be a non-empty string
+ - The encryptCommand value, if set, must be a non-empty string
@sort: __init__, __repr__, __str__, __cmp__, warnMidnite, s3Bucket
"""
- def __init__(self, warnMidnite=None, s3Bucket=None):
+ def __init__(self, warnMidnite=None, s3Bucket=None, encryptCommand=None):
"""
Constructor for the C{AmazonS3Config} class.
+ @param warnMidnite: Whether to generate warnings for crossing midnite.
@param s3Bucket: Name of the Amazon S3 bucket in which to store the data
- @param warnMidnite: Whether to generate warnings for crossing midnite.
+ @param encryptCommand: Command used to encrypt backup data before upload to S3
@raise ValueError: If one of the values is invalid.
"""
self._warnMidnite = None
self._s3Bucket = None
+ self._encryptCommand = None
self.warnMidnite = warnMidnite
self.s3Bucket = s3Bucket
+ self.encryptCommand = encryptCommand
def __repr__(self):
"""
Official string representation for class instance.
"""
- return "AmazonS3Config(%s, %s)" % (self.warnMidnite, self.s3Bucket)
+ return "AmazonS3Config(%s, %s, %s)" % (self.warnMidnite, self.s3Bucket, self.encryptCommand)
def __str__(self):
"""
@@ -164,6 +176,11 @@
return -1
else:
return 1
+ if self.encryptCommand != other.encryptCommand:
+ if self.encryptCommand < other.encryptCommand:
+ return -1
+ else:
+ return 1
return 0
def _setWarnMidnite(self, value):
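The comparison block added above repeats the same per-attribute cascade used for `warnMidnite` and `s3Bucket`. As a standalone sketch of that pattern (a hypothetical helper, not part of the class, written so it also behaves under Python 3 where `None` is not orderable against strings):

```python
def compare_fields(a, b, field_names):
    # Three-way comparison over named attributes, mirroring the cascade of
    # "if x != y: return -1 or 1" blocks in __cmp__; None sorts before any value.
    for name in field_names:
        x, y = getattr(a, name), getattr(b, name)
        if x != y:
            if x is None or (y is not None and x < y):
                return -1
            return 1
    return 0

class Cfg(object):
    def __init__(self, s3Bucket=None, encryptCommand=None):
        self.s3Bucket = s3Bucket
        self.encryptCommand = encryptCommand

print(compare_fields(Cfg("bucket", "encrypt1"), Cfg("bucket", "encrypt2"),
                     ["s3Bucket", "encryptCommand"]))
```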
@@ -197,8 +214,24 @@
"""
return self._s3Bucket
+ def _setEncryptCommand(self, value):
+ """
+ Property target used to set the encrypt command.
+ """
+ if value is not None:
+ if len(value) < 1:
+ raise ValueError("Encrypt command must be non-empty string.")
+ self._encryptCommand = value
+
+ def _getEncryptCommand(self):
+ """
+ Property target used to get the encrypt command.
+ """
+ return self._encryptCommand
+
warnMidnite = property(_getWarnMidnite, _setWarnMidnite, None, "Whether to generate warnings for crossing midnite.")
s3Bucket = property(_getS3Bucket, _setS3Bucket, None, doc="Amazon S3 Bucket in which to store data")
+ encryptCommand = property(_getEncryptCommand, _setEncryptCommand, None, doc="Command used to encrypt backup data before upload to S3")
########################################################################
@@ -341,8 +374,9 @@
We add the following fields to the document::
- warnMidnite //cb_config/amazons3/warn_midnite
- s3Bucket //cb_config/amazons3/s3_bucket
+ warnMidnite //cb_config/amazons3/warn_midnite
+ s3Bucket //cb_config/amazons3/s3_bucket
+ encryptCommand //cb_config/amazons3/encrypt
@param xmlDom: DOM tree as from C{impl.createDocument()}.
@param parentNode: Parent that the section should be appended to.
@@ -351,6 +385,7 @@
sectionNode = addContainerNode(xmlDom, parentNode, "amazons3")
addBooleanNode(xmlDom, sectionNode, "warn_midnite", self.amazons3.warnMidnite)
addStringNode(xmlDom, sectionNode, "s3_bucket", self.amazons3.s3Bucket)
+ addStringNode(xmlDom, sectionNode, "encrypt", self.amazons3.encryptCommand)
def _parseXmlData(self, xmlData):
"""
@@ -374,8 +409,9 @@
We read the following individual fields::
- warnMidnite //cb_config/amazons3/warn_midnite
- s3Bucket //cb_config/amazons3/s3_bucket
+ warnMidnite //cb_config/amazons3/warn_midnite
+ s3Bucket //cb_config/amazons3/s3_bucket
+ encryptCommand //cb_config/amazons3/encrypt
@param parent: Parent node to search beneath.
@@ -388,6 +424,7 @@
amazons3 = AmazonS3Config()
amazons3.warnMidnite = readBoolean(section, "warn_midnite")
amazons3.s3Bucket = readString(section, "s3_bucket")
+ amazons3.encryptCommand = readString(section, "encrypt")
return amazons3
@@ -503,6 +540,7 @@
the configured Amazon S3 bucket from local configuration. The directories
will be placed into the image at the root by date, so staging directory
C{/opt/stage/2005/02/10} will be placed into the S3 bucket at C{/2005/02/10}.
+ If an encrypt command is provided, the files will be encrypted first.
@param config: Config object.
@param local: Local config object.
@@ -518,6 +556,7 @@
logger.debug("S3 bucket URL is [%s]" % s3BucketUrl)
_clearExistingBackup(config, s3BucketUrl)
_writeStagingDir(config, stagingDir, s3BucketUrl)
+ _verifyStagingDir(config, stagingDir, s3BucketUrl)
##################################
@@ -546,22 +585,17 @@
@param config: Config object.
@param s3BucketUrl: S3 bucket URL derived for the staging directory
"""
- emptyDir = tempfile.mkdtemp()
- try:
- suCommand = resolveCommand(SU_COMMAND)
- s3CmdCommand = resolveCommand(S3CMD_COMMAND)
- actualCommand = "%s sync --no-encrypt --recursive --delete-removed --force %s/ %s/" % (s3CmdCommand[0], emptyDir, s3BucketUrl)
- result = executeCommand(suCommand, [config.options.backupUser, "-c", actualCommand])[0]
- if result != 0:
- raise IOError("Error [%d] calling s3Cmd to clear existing backup [%s]." % (result, s3BucketUrl))
- finally:
- if os.path.exists(emptyDir):
- os.rmdir(emptyDir)
+ suCommand = resolveCommand(SU_COMMAND)
+ awsCommand = resolveCommand(AWS_COMMAND)
+ actualCommand = "%s s3 rm --recursive %s/" % (awsCommand[0], s3BucketUrl)
+ result = executeCommand(suCommand, [config.options.backupUser, "-c", actualCommand])[0]
+ if result != 0:
+ raise IOError("Error [%d] calling AWS CLI to clear existing backup [%s]." % (result, s3BucketUrl))
-###########################
-# _writeStaging() function
-###########################
+##############################
+# _writeStagingDir() function
+##############################
def _writeStagingDir(config, stagingDir, s3BucketUrl):
"""
@@ -570,11 +604,19 @@
@param stagingDir: Staging directory to write
@param s3BucketUrl: S3 bucket URL derived for the staging directory
"""
- suCommand = resolveCommand(SU_COMMAND)
- s3CmdCommand = resolveCommand(S3CMD_COMMAND)
- actualCommand = "%s put --recursive %s/ %s/" % (s3CmdCommand[0], stagingDir, s3BucketUrl)
- result = executeCommand(suCommand, [config.options.backupUser, "-c", actualCommand])[0]
- if result != 0:
- raise IOError("Error [%d] calling s3Cmd to store staging directory [%s]." % (result, s3BucketUrl))
+ pass
+###############################
+# _verifyStagingDir() function
+###############################
+
+def _verifyStagingDir(config, stagingDir, s3BucketUrl):
+ """
+ Verify that a staging directory was properly written to the Amazon S3 cloud.
+ @param config: Config object.
+ @param stagingDir: Staging directory to write
+ @param s3BucketUrl: S3 bucket URL derived for the staging directory
+ """
+ pass
+
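Both functions above are left as `pass` stubs in this revision. Judging from the rewritten `_clearExistingBackup`, the upload side would presumably build a similar `su`-wrapped AWS CLI invocation. A self-contained sketch of just the command construction (hypothetical helper and argument layout, not the eventual implementation):

```python
SU_COMMAND = ["su"]
AWS_COMMAND = ["aws"]

def build_write_staging_command(backup_user, staging_dir, s3_bucket_url):
    # Mirror _clearExistingBackup: wrap an "aws s3 cp --recursive" call in su
    # so it runs as the configured backup user rather than root.
    actual = "%s s3 cp --recursive %s/ %s/" % (AWS_COMMAND[0], staging_dir, s3_bucket_url)
    return SU_COMMAND + [backup_user, "-c", actual]

print(build_write_staging_command("backup", "/opt/stage/2005/02/10",
                                  "s3://mybucket/2005/02/10"))
```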
Modified: cedar-backup2/trunk/testcase/amazons3tests.py
===================================================================
--- cedar-backup2/trunk/testcase/amazons3tests.py 2014-10-02 01:38:10 UTC (rev 1063)
+++ cedar-backup2/trunk/testcase/amazons3tests.py 2014-10-03 00:04:21 UTC (rev 1064)
@@ -151,45 +151,32 @@
amazons3 = AmazonS3Config()
self.failUnlessEqual(False, amazons3.warnMidnite)
self.failUnlessEqual(None, amazons3.s3Bucket)
+ self.failUnlessEqual(None, amazons3.encryptCommand)
def testConstructor_002(self):
"""
Test constructor with all values filled in, with valid values.
"""
- amazons3 = AmazonS3Config(True, "bucket")
+ amazons3 = AmazonS3Config(True, "bucket", "encrypt")
self.failUnlessEqual(True, amazons3.warnMidnite)
self.failUnlessEqual("bucket", amazons3.s3Bucket)
+ self.failUnlessEqual("encrypt", amazons3.encryptCommand)
def testConstructor_003(self):
"""
Test assignment of s3Bucket attribute, None value.
"""
- amazons3 = AmazonS3Config(warnMidnite=True, s3Bucket="bucket")
+ amazons3 = AmazonS3Config(warnMidnite=True, s3Bucket="bucket", encryptCommand="encrypt")
self.failUnlessEqual(True, amazons3.warnMidnite)
self.failUnlessEqual("bucket", amazons3.s3Bucket)
+ self.failUnlessEqual("encrypt", amazons3.encryptCommand)
amazons3.s3Bucket = None
+ self.failUnlessEqual(True, amazons3.warnMidnite)
self.failUnlessEqual(None, amazons3.s3Bucket)
+ self.failUnlessEqual("encrypt", amazons3.encryptCommand)
def testConstructor_004(self):
"""
- Test assignment of s3Bucket attribute, valid value.
- """
- amazons3 = AmazonS3Config()
- self.failUnlessEqual(None, amazons3.s3Bucket)
- amazons3.s3Bucket = "bucket"
- self.failUnlessEqual("bucket", amazons3.s3Bucket)
-
- def testConstructor_005(self):
- """
- Test assignment of s3Bucket attribute, invalid value (empty).
- """
- amazons3 = AmazonS3Config()
- self.failUnlessEqual(None, amazons3.s3Bucket)
- self.failUnlessAssignRaises(ValueError, amazons3, "s3Bucket", "")
- self.failUnlessEqual(None, amazons3.s3Bucket)
-
- def testConstructor_006(self):
- """
Test assignment of warnMidnite attribute, valid value (real boolean).
"""
amazons3 = AmazonS3Config()
@@ -199,7 +186,7 @@
amazons3.warnMidnite = False
self.failUnlessEqual(False, amazons3.warnMidnite)
- def testConstructor_007(self):
+ def testConstructor_005(self):
"""
Test assignment of warnMidnite attribute, valid value (expression).
"""
@@ -216,7 +203,43 @@
amazons3.warnMidnite = 3
self.failUnlessEqual(True, amazons3.warnMidnite)
+ def testConstructor_006(self):
+ """
+ Test assignment of s3Bucket attribute, valid value.
+ """
+ amazons3 = AmazonS3Config()
+ self.failUnlessEqual(None, amazons3.s3Bucket)
+ amazons3.s3Bucket = "bucket"
+ self.failUnlessEqual("bucket", amazons3.s3Bucket)
+ def testConstructor_007(self):
+ """
+ Test assignment of s3Bucket attribute, invalid value (empty).
+ """
+ amazons3 = AmazonS3Config()
+ self.failUnlessEqual(None, amazons3.s3Bucket)
+ self.failUnlessAssignRaises(ValueError, amazons3, "s3Bucket", "")
+ self.failUnlessEqual(None, amazons3.s3Bucket)
+
+ def testConstructor_008(self):
+ """
+ Test assignment of encryptCommand attribute, valid value.
+ """
+ amazons3 = AmazonS3Config()
+ self.failUnlessEqual(None, amazons3.encryptCommand)
+ amazons3.encryptCommand = "encrypt"
+ self.failUnlessEqual("encrypt", amazons3.encryptCommand)
+
+ def testConstructor_009(self):
+ """
+ Test assignment of encryptCommand attribute, invalid value (empty).
+ """
+ amazons3 = AmazonS3Config()
+ self.failUnlessEqual(None, amazons3.encryptCommand)
+ self.failUnlessAssignRaises(ValueError, amazons3, "encryptCommand", "")
+ self.failUnlessEqual(None, amazons3.encryptCommand)
+
+
############################
# Test comparison operators
############################
@@ -239,8 +262,8 @@
"""
Test comparison of two identical objects, all attributes non-None.
"""
- amazons31 = AmazonS3Config("bucket")
- amazons32 = AmazonS3Config("bucket")
+ amazons31 = AmazonS3Config(True, "bucket", "encrypt")
+ amazons32 = AmazonS3Config(True, "bucket", "encrypt")
self.failUnlessEqual(amazons31, amazons32)
self.failUnless(amazons31 == amazons32)
self.failUnless(not amazons31 < amazons32)
@@ -251,6 +274,20 @@
def testComparison_003(self):
"""
+ Test comparison of two differing objects, warnMidnite differs.
+ """
+ amazons31 = AmazonS3Config(warnMidnite=False)
+ amazons32 = AmazonS3Config(warnMidnite=True)
+ self.failIfEqual(amazons31, amazons32)
+ self.failUnless(not amazons31 == amazons32)
+ self.failUnless(amazons31 < amazons32)
+ self.failUnless(amazons31 <= amazons32)
+ self.failUnless(not amazons31 > amazons32)
+ self.failUnless(not amazons31 >= amazons32)
+ self.failUnless(amazons31 != amazons32)
+
+ def testComparison_004(self):
+ """
Test comparison of two differing objects, s3Bucket differs (one None).
"""
amazons31 = AmazonS3Config()
@@ -263,12 +300,12 @@
self.failUnless(not amazons31 >= amazons32)
self.failUnless(amazons31 != amazons32)
- def testComparison_004(self):
+ def testComparison_005(self):
"""
Test comparison of two differing objects, s3Bucket differs.
"""
- amazons31 = AmazonS3Config(True, "bucket1")
- amazons32 = AmazonS3Config(True, "bucket2")
+ amazons31 = AmazonS3Config(True, "bucket1", "encrypt")
+ amazons32 = AmazonS3Config(True, "bucket2", "encrypt")
self.failIfEqual(amazons31, amazons32)
self.failUnless(not amazons31 == amazons32)
self.failUnless(amazons31 < amazons32)
@@ -277,12 +314,12 @@
self.failUnless(not amazons31 >= amazons32)
self.failUnless(amazons31 != amazons32)
- def testComparison_005(self):
+ def testComparison_006(self):
"""
- Test comparison of two differing objects, warnMidnite differs.
+ Test comparison of two differing objects, encryptCommand differs (one None).
"""
- amazons31 = AmazonS3Config(warnMidnite=False)
- amazons32 = AmazonS3Config(warnMidnite=True)
+ amazons31 = AmazonS3Config()
+ amazons32 = AmazonS3Config(encryptCommand="encrypt")
self.failIfEqual(amazons31, amazons32)
self.failUnless(not amazons31 == amazons32)
self.failUnless(amazons31 < amazons32)
@@ -291,7 +328,21 @@
self.failUnless(not amazons31 >= amazons32)
self.failUnless(amazons31 != amazons32)
+ def testComparison_007(self):
+ """
+ Test comparison of two differing objects, encryptCommand differs.
+ """
+ amazons31 = AmazonS3Config(True, "bucket", "encrypt1")
+ amazons32 = AmazonS3Config(True, "bucket", "encrypt2")
+ self.failIfEqual(amazons31, amazons32)
+ self.failUnless(not amazons31 == amazons32)
+ self.failUnless(amazons31 < amazons32)
+ self.failUnless(amazons31 <= amazons32)
+ self.failUnless(not amazons31 > amazons32)
+ self.failUnless(not amazons31 >= amazons32)
+ self.failUnless(amazons31 != amazons32)
+
########################
# TestLocalConfig class
########################
@@ -462,10 +513,10 @@
Test comparison of two differing objects, s3Bucket differs.
"""
config1 = LocalConfig()
- config1.amazons3 = AmazonS3Config(True, "bucket1")
+ config1.amazons3 = AmazonS3Config(True, "bucket1", "encrypt")
config2 = LocalConfig()
- config2.amazons3 = AmazonS3Config(True, "bucket2")
+ config2.amazons3 = AmazonS3Config(True, "bucket2", "encrypt")
self.failIfEqual(config1, config2)
self.failUnless(not config1 == config2)
@@ -540,10 +591,12 @@
self.failIfEqual(None, config.amazons3)
self.failUnlessEqual(True, config.amazons3.warnMidnite)
self.failUnlessEqual("mybucket", config.amazons3.s3Bucket)
+ self.failUnlessEqual("encrypt", config.amazons3.encryptCommand)
config = LocalConfig(xmlData=contents, validate=False)
self.failIfEqual(None, config.amazons3)
self.failUnlessEqual(True, config.amazons3.warnMidnite)
self.failUnlessEqual("mybucket", config.amazons3.s3Bucket)
+ self.failUnlessEqual("encrypt", config.amazons3.encryptCommand)
###################
@@ -563,51 +616,12 @@
"""
Test with values set.
"""
- amazons3 = AmazonS3Config(True, "bucket")
+ amazons3 = AmazonS3Config(True, "bucket", "encrypt")
config = LocalConfig()
config.amazons3 = amazons3
self.validateAddConfig(config)
-######################
-# TestFunctions class
-######################
-
-class TestFunctions(unittest.TestCase):
-
- """Tests for the functions in amazons3.py."""
-
- ################
- # Setup methods
- ################
-
- def setUp(self):
- try:
- self.tmpdir = tempfile.mkdtemp()
- self.resources = findResources(RESOURCES, DATA_DIRS)
- except Exception, e:
- self.fail(e)
-
- def tearDown(self):
- try:
- removedir(self.tmpdir)
- except: pass
-
-
- ##################
- # Utility methods
- ##################
-
- def extractTar(self, tarname):
- """Extracts a tarfile with a particular name."""
- extractTar(self.tmpdir, self.resources['%s.tar.gz' % tarname])
-
- def buildPath(self, components):
- """Builds a complete search path from a list of components."""
- components.insert(0, self.tmpdir)
- return buildPath(components)
-
-
#######################################################################
# Suite definition
#######################################################################
@@ -617,7 +631,6 @@
return unittest.TestSuite((
unittest.makeSuite(TestAmazonS3Config, 'test'),
unittest.makeSuite(TestLocalConfig, 'test'),
- unittest.makeSuite(TestFunctions, 'test'),
))
Modified: cedar-backup2/trunk/testcase/data/amazons3.conf.2
===================================================================
--- cedar-backup2/trunk/testcase/data/amazons3.conf.2 2014-10-02 01:38:10 UTC (rev 1063)
+++ cedar-backup2/trunk/testcase/data/amazons3.conf.2 2014-10-03 00:04:21 UTC (rev 1064)
@@ -4,5 +4,6 @@
<amazons3>
<warn_midnite>Y</warn_midnite>
<s3_bucket>mybucket</s3_bucket>
+ <encrypt>encrypt</encrypt>
</amazons3>
</cb_config>
From: <pro...@us...> - 2014-10-03 14:33:55
Revision: 1069
http://sourceforge.net/p/cedar-backup/code/1069
Author: pronovic
Date: 2014-10-03 14:33:52 +0000 (Fri, 03 Oct 2014)
Log Message:
-----------
Minor documentation updates
Modified Paths:
--------------
cedar-backup2/trunk/Changelog
cedar-backup2/trunk/INSTALL
Modified: cedar-backup2/trunk/Changelog
===================================================================
--- cedar-backup2/trunk/Changelog 2014-10-03 01:57:10 UTC (rev 1068)
+++ cedar-backup2/trunk/Changelog 2014-10-03 14:33:52 UTC (rev 1069)
@@ -1,7 +1,7 @@
Version 2.23.3 unreleased
- * Add new extension amazons3 to replace the store action.
- * Update user manual to clarify a few of the dependencies.
+ * Add new extension amazons3 as an optional replacement for the store action.
+ * Update user manual and INSTALL to clarify a few of the dependencies.
* Fix encryption unit test that started failing due to my new GPG key.
Version 2.22.0 09 May 2013
Modified: cedar-backup2/trunk/INSTALL
===================================================================
--- cedar-backup2/trunk/INSTALL 2014-10-03 01:57:10 UTC (rev 1068)
+++ cedar-backup2/trunk/INSTALL 2014-10-03 14:33:52 UTC (rev 1069)
@@ -19,7 +19,9 @@
python setup.py --help
For more information on how to install it. You must have a Python
-interpreter version 2.5 or better to use these modules.
+interpreter version 2.5 or better to use these modules. Some external
+tools are also required for certain features to work. See the user
+manual for more details.
In the simplest case, you will probably just use:
From: <pro...@us...> - 2014-10-03 16:43:25
Revision: 1073
http://sourceforge.net/p/cedar-backup/code/1073
Author: pronovic
Date: 2014-10-03 16:43:17 +0000 (Fri, 03 Oct 2014)
Log Message:
-----------
Finish documentation
Modified Paths:
--------------
cedar-backup2/trunk/CedarBackup2/extend/amazons3.py
cedar-backup2/trunk/manual/src/depends.xml
cedar-backup2/trunk/manual/src/extensions.xml
Modified: cedar-backup2/trunk/CedarBackup2/extend/amazons3.py
===================================================================
--- cedar-backup2/trunk/CedarBackup2/extend/amazons3.py 2014-10-03 16:26:45 UTC (rev 1072)
+++ cedar-backup2/trunk/CedarBackup2/extend/amazons3.py 2014-10-03 16:43:17 UTC (rev 1073)
@@ -55,8 +55,8 @@
and not root.
You can optionally configure Cedar Backup to encrypt data before sending it
-to S3. To do that, provide a complete command line using the ${input} and
-${output} variables to represent the original input file and the encrypted
+to S3. To do that, provide a complete command line using the C{${input}} and
+C{${output}} variables to represent the original input file and the encrypted
output file. This command will be executed as the backup user.
For instance, you can use something like this with GPG::
@@ -64,11 +64,13 @@
/usr/bin/gpg -c --no-use-agent --batch --yes --passphrase-file /home/backup/.passphrase -o ${output} ${input}
The GPG mechanism depends on a strong passphrase for security. One way to
-generate a strong passphrase is using your system random number generator, i.e.
-C{dd if=/dev/urandom count=20 bs=1 | xxd -ps}. (See U{StackExchange
-http://security.stackexchange.com/questions/14867/gpg-encryption-security>} for
-more details about that advice.) If you decide to use encryption, make sure you
-save off the passphrase in a safe place, so you can get at your backup data
+generate a strong passphrase is using your system random number generator, i.e.::
+
+ dd if=/dev/urandom count=20 bs=1 | xxd -ps
+
+(See U{StackExchange <http://security.stackexchange.com/questions/14867/gpg-encryption-security>}
+for more details about that advice.) If you decide to use encryption, make sure
+you save off the passphrase in a safe place, so you can get at your backup data
later if you need to. And obviously, make sure to set permissions on the
passphrase file so it can only be read by the backup user.
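The `dd`/`xxd` pipeline recommended above has a direct Python equivalent, which may be handy on systems without `xxd` (a sketch for illustration, not part of Cedar Backup):

```python
import binascii
import os

def generate_passphrase(num_bytes=20):
    # Equivalent of: dd if=/dev/urandom count=20 bs=1 | xxd -ps
    # os.urandom() reads the system entropy source; hexlify matches xxd -ps output.
    return binascii.hexlify(os.urandom(num_bytes)).decode("ascii")

print(generate_passphrase())  # 40 hex characters for 20 random bytes
```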
Modified: cedar-backup2/trunk/manual/src/depends.xml
===================================================================
--- cedar-backup2/trunk/manual/src/depends.xml 2014-10-03 16:26:45 UTC (rev 1072)
+++ cedar-backup2/trunk/manual/src/depends.xml 2014-10-03 16:43:17 UTC (rev 1073)
@@ -556,27 +556,45 @@
</varlistentry>
<varlistentry>
- <term><command>s3cmd</command></term>
+ <term><command>split</command></term>
<listitem>
<para>
- The <command>s3cmd</command> command is used by the Amazon S3
- extension to communicate with Amazon AWS. Cedar Backup requires
- version 1.5.0-rc1 or later. Earlier versions have problems
- uploading large files in the background (non-TTY), and there
- was also a syntax change that the extension relies on.
+ The <command>split</command> command is used by the split
+ extension to split up large files.
</para>
+
+ <para>
+ This command is typically part of the core operating system
+ install and is not distributed in a separate package.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><command>AWS CLI</command></term>
+ <listitem>
+
+ <para>
+ AWS CLI is Amazon's official command-line tool for interacting
+ with the Amazon Web Services infrastructure. Cedar Backup uses
+ AWS CLI to copy backup data up to Amazon S3 cloud storage.
+ </para>
+
+ <para>
+ The initial implementation of the amazons3 extension was written
+ using AWS CLI 1.4. As of this writing, not all Linux distributions
+ include a package for this version. On these platforms, the
+ easiest way to install it is via pip: <code>apt-get install python-pip</code>,
+ and then <code>pip install awscli</code>. The Debian package includes
+ an appropriate dependency starting with the jessie release.
+ </para>
<para>
- As of this writing, the version of s3cmd in Debian wheezy is
- not new enough, and it is not possible to pin the correct
- version from testing or unstable due to a generated
- dependency on python:all (which does not exist in wheezy).
- It is possible to force dpkg to install the package anyway:
- download the appropriate <literal>.deb</literal> file, and
- then install with <literal>dpkg --force-all</literal>.
- Alternately, the Cedar Solutions APT source contains a
- backported version of 1.5.0~rc1-2.
+ After you install AWS CLI, you need to configure your connection
+ to AWS with an appropriate access id and access key. Amazon provides a good
+ <ulink url="http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-set-up.html">setup guide</ulink>.
</para>
<informaltable>
@@ -592,38 +610,15 @@
<tbody>
<row>
<entry>upstream</entry>
- <entry><ulink url="http://s3tools.org/s3cmd"/></entry>
+ <entry><ulink url="http://aws.amazon.com/documentation/cli/"/></entry>
</row>
</tbody>
</tgroup>
</informaltable>
- <para>
- If you can't find a package for your system, install from the package
- source, using the <quote>upstream</quote> link.
- </para>
-
</listitem>
</varlistentry>
-
- <varlistentry>
- <term><command>split</command></term>
- <listitem>
-
- <para>
- The <command>split</command> command is used by the split
- extension to split up large files.
- </para>
-
- <para>
- This command is typically part of the core operating system
- install and is not distributed in a separate package.
- </para>
-
- </listitem>
- </varlistentry>
-
</variablelist>
</simplesect>
Modified: cedar-backup2/trunk/manual/src/extensions.xml
===================================================================
--- cedar-backup2/trunk/manual/src/extensions.xml 2014-10-03 16:26:45 UTC (rev 1072)
+++ cedar-backup2/trunk/manual/src/extensions.xml 2014-10-03 16:43:17 UTC (rev 1073)
@@ -107,35 +107,50 @@
<para>
The underlying functionality relies on the
- <ulink url="http://s3tools.org/">Amazon S3 Tools</ulink> package, version
- 1.5.0-rc1 or newer. It is a very thin wrapper around the
- <literal>s3cmd put</literal> command. Before you use this extension,
- you need to set up your Amazon S3 account and configure
- <literal>s3cmd</literal> as detailed in the
- <ulink url="http://s3tools.org/s3cmd-howto">HOWTO</ulink>.
+ <ulink url="http://aws.amazon.com/documentation/cli/">AWS CLI</ulink> toolset.
+ Before you use this extension, you need to set up your Amazon S3
+ account and configure AWS CLI as detailed in Amazon's
+ <ulink url="http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-set-up.html">setup guide</ulink>.
The extension assumes that the backup is being executed as root, and
switches over to the configured backup user to run the
- <literal>s3cmd</literal> program. So, make sure you configure the S3
- tools as the backup user and not root.
+ <literal>aws</literal> program. So, make sure you configure the AWS
+ CLI tools as the backup user and not root.
</para>
<para>
- When configuring the S3 tools connection to Amazon AWS, you probably want
- to configure GPG encryption using a strong passphrase. One way
- to generate a strong passphrase is using your system random number generator,
- i.e. <literal>dd if=/dev/urandom count=20 bs=1 | xxd -ps</literal>. (See
- <ulink url="http://security.stackexchange.com/questions/14867/gpg-encryption-security">StackExchange</ulink>
- for more details about that advice.) If you decide to use encryption, make sure
- you save off the passphrase in a safe place, so you can get at your backup data
- later if you need to.
+ You can optionally configure Cedar Backup to encrypt data before
+ sending it to S3. To do that, provide a complete command line using
+ the <literal>${input}</literal> and <literal>${output}</literal>
+ variables to represent the original input file and the encrypted
+ output file. This command will be executed as the backup user.
</para>
<para>
- This extension was written for and tested on Linux. It will throw an exception
- if run on Windows.
+ For instance, you can use something like this with GPG:
</para>
+ <programlisting>
+/usr/bin/gpg -c --no-use-agent --batch --yes --passphrase-file /home/backup/.passphrase -o ${output} ${input}
+ </programlisting>
+
<para>
+ The GPG mechanism depends on a strong passphrase for security. One way to
+ generate a strong passphrase is using your system random number generator, i.e.:
+ </para>
+
+ <programlisting>
+dd if=/dev/urandom count=20 bs=1 | xxd -ps
+ </programlisting>
+
+ <para>
+ (See <ulink url="http://security.stackexchange.com/questions/14867/gpg-encryption-security">StackExchange</ulink>
+ for more details about that advice.) If you decide to use encryption, make sure you
+ save off the passphrase in a safe place, so you can get at your backup data
+ later if you need to. And obviously, make sure to set permissions on the
+ passphrase file so it can only be read by the backup user.
+ </para>
+
+ <para>
To enable this extension, add the following section to the Cedar Backup
configuration file:
</para>
@@ -155,7 +170,7 @@
This extension relies on the options and staging configuration sections
in the standard Cedar Backup configuration file, and then also
requires its own <literal>amazons3</literal> configuration section.
- This is an example configuration section:
+ This is an example configuration section with encryption disabled:
</para>
<programlisting>
@@ -178,11 +193,11 @@
This field indicates whether warnings should be generated
if the Amazon S3 operation has to cross a midnite boundary in
order to find data to write to the cloud. For instance, a
- warning would be generated if valid store data was only
+ warning would be generated if valid data was only
found in the day before or day after the current day.
</para>
<para>
- Configuration for some users is such that the store
+ Configuration for some users is such that the amazons3
operation will always cross a midnite boundary, so they
will not care about this warning. Other users will expect
to never cross a boundary, and want to be notified that
@@ -216,6 +231,25 @@
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><literal>encrypt</literal></term>
+ <listitem>
+ <para>Command used to encrypt backup data before upload to S3</para>
+ <para>
+ If this field is provided, then data will be encrypted before
+ it is uploaded to Amazon S3. You must provide the entire
+ command used to encrypt a file, including the
+ <literal>${input}</literal> and <literal>${output}</literal>
+ variables. An example GPG command is shown above, but you
+ can use any mechanism you choose. The command will be run as
+ the configured backup user.
+ </para>
+ <para>
+ <emphasis>Restrictions:</emphasis> If provided, must be non-empty.
+ </para>
+ </listitem>
+ </varlistentry>
+
</variablelist>
</sect1>
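Putting the pieces together, an amazons3 section with encryption enabled might look like this (a sketch assembled from the element names in the parsing code and the sample GPG command above, not taken verbatim from the manual):

```xml
<amazons3>
   <warn_midnite>Y</warn_midnite>
   <s3_bucket>mybucket</s3_bucket>
   <encrypt>/usr/bin/gpg -c --no-use-agent --batch --yes --passphrase-file /home/backup/.passphrase -o ${output} ${input}</encrypt>
</amazons3>
```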
From: <pro...@us...> - 2014-10-03 17:55:37
Revision: 1074
http://sourceforge.net/p/cedar-backup/code/1074
Author: pronovic
Date: 2014-10-03 17:55:34 +0000 (Fri, 03 Oct 2014)
Log Message:
-----------
Release 2.23.3
Modified Paths:
--------------
cedar-backup2/trunk/CedarBackup2/release.py
cedar-backup2/trunk/Changelog
Modified: cedar-backup2/trunk/CedarBackup2/release.py
===================================================================
--- cedar-backup2/trunk/CedarBackup2/release.py 2014-10-03 16:43:17 UTC (rev 1073)
+++ cedar-backup2/trunk/CedarBackup2/release.py 2014-10-03 17:55:34 UTC (rev 1074)
@@ -35,6 +35,6 @@
EMAIL = "pro...@ie..."
COPYRIGHT = "2004-2011,2013,2014"
VERSION = "2.23.3"
-DATE = "unreleased"
+DATE = "03 Oct 2014"
URL = "http://cedar-backup.sourceforge.net/"
Modified: cedar-backup2/trunk/Changelog
===================================================================
--- cedar-backup2/trunk/Changelog 2014-10-03 16:43:17 UTC (rev 1073)
+++ cedar-backup2/trunk/Changelog 2014-10-03 17:55:34 UTC (rev 1074)
@@ -1,4 +1,4 @@
-Version 2.23.3 unreleased
+Version 2.23.3 03 Oct 2014
* Add new extension amazons3 as an optional replacement for the store action.
* Update user manual and INSTALL to clarify a few of the dependencies.
From: <pro...@us...> - 2014-10-07 18:20:28
Revision: 1080
http://sourceforge.net/p/cedar-backup/code/1080
Author: pronovic
Date: 2014-10-07 18:20:20 +0000 (Tue, 07 Oct 2014)
Log Message:
-----------
Fix or ignore pylint warnings
Modified Paths:
--------------
cedar-backup2/trunk/CedarBackup2/filesystem.py
cedar-backup2/trunk/CedarBackup2/xmlutil.py
cedar-backup2/trunk/doc/procedure.txt
cedar-backup2/trunk/pylint-code.rc
cedar-backup2/trunk/pylint-test.rc
cedar-backup2/trunk/testcase/amazons3tests.py
Modified: cedar-backup2/trunk/CedarBackup2/filesystem.py
===================================================================
--- cedar-backup2/trunk/CedarBackup2/filesystem.py 2014-10-03 19:22:50 UTC (rev 1079)
+++ cedar-backup2/trunk/CedarBackup2/filesystem.py 2014-10-07 18:20:20 UTC (rev 1080)
@@ -911,7 +911,7 @@
@return: ASCII-safe SHA digest for the file.
@raise OSError: If the file cannot be opened.
"""
- # pylint: disable=C0103
+ # pylint: disable=C0103,E1101
try:
import hashlib
s = hashlib.sha1()
Modified: cedar-backup2/trunk/CedarBackup2/xmlutil.py
===================================================================
--- cedar-backup2/trunk/CedarBackup2/xmlutil.py 2014-10-03 19:22:50 UTC (rev 1079)
+++ cedar-backup2/trunk/CedarBackup2/xmlutil.py 2014-10-07 18:20:20 UTC (rev 1080)
@@ -58,7 +58,7 @@
@author: Kenneth J. Pronovici <pro...@ie...>
"""
-# pylint: disable=C0111,C0103,W0511,W0104
+# pylint: disable=C0111,C0103,W0511,W0104,W0106
########################################################################
# Imported modules
Modified: cedar-backup2/trunk/doc/procedure.txt
===================================================================
--- cedar-backup2/trunk/doc/procedure.txt 2014-10-03 19:22:50 UTC (rev 1079)
+++ cedar-backup2/trunk/doc/procedure.txt 2014-10-07 18:20:20 UTC (rev 1080)
@@ -4,7 +4,7 @@
- Make final update to Changelog
- Update CedarBackup2/release.py
- Run unit tests one last time (make test)
-- Run pychecker tests one last time (make check)
+- Run pychecker tests one last time (make check or make allcheck)
- Build the source distributions (make distrib)
- Run the util/release script for the right version
Modified: cedar-backup2/trunk/pylint-code.rc
===================================================================
--- cedar-backup2/trunk/pylint-code.rc 2014-10-03 19:22:50 UTC (rev 1079)
+++ cedar-backup2/trunk/pylint-code.rc 2014-10-07 18:20:20 UTC (rev 1080)
@@ -70,7 +70,7 @@
#enable-msg=
# Disable the message(s) with the given id(s).
-disable=I0011,W0702,W0703,W0704,C0302,C0321,R0902,R0911,R0912,R0913,R0914,R0915
+disable=I0011,W0702,W0703,W0704,C0302,C0321,R0902,R0911,R0912,R0913,R0914,R0915,R0801
[REPORTS]
Modified: cedar-backup2/trunk/pylint-test.rc
===================================================================
--- cedar-backup2/trunk/pylint-test.rc 2014-10-03 19:22:50 UTC (rev 1079)
+++ cedar-backup2/trunk/pylint-test.rc 2014-10-07 18:20:20 UTC (rev 1080)
@@ -73,7 +73,7 @@
#enable-msg=
# Disable the message(s) with the given id(s).
-disable=I0011,W0212,W0702,W0703,W0704,C0302,C0301,C0321,C0111,R0201,R0902,R0904,R0911,R0912,R0913,R0914,R0915
+disable=I0011,W0212,W0702,W0703,W0704,C0302,C0301,C0321,C0111,R0201,R0902,R0904,R0911,R0912,R0913,R0914,R0915,R0801
[REPORTS]
Modified: cedar-backup2/trunk/testcase/amazons3tests.py
===================================================================
--- cedar-backup2/trunk/testcase/amazons3tests.py 2014-10-03 19:22:50 UTC (rev 1079)
+++ cedar-backup2/trunk/testcase/amazons3tests.py 2014-10-07 18:20:20 UTC (rev 1080)
@@ -86,12 +86,9 @@
# System modules
import unittest
-import os
-import tempfile
# Cedar Backup modules
-from CedarBackup2.filesystem import FilesystemList
-from CedarBackup2.testutil import findResources, buildPath, removedir, extractTar, failUnlessAssignRaises, platformSupportsLinks
+from CedarBackup2.testutil import findResources, failUnlessAssignRaises
from CedarBackup2.xmlutil import createOutputDom, serializeDom
from CedarBackup2.extend.amazons3 import LocalConfig, AmazonS3Config
@@ -230,7 +227,7 @@
amazons3.encryptCommand = "encrypt"
self.failUnlessEqual("encrypt", amazons3.encryptCommand)
- def testConstructor_008(self):
+ def testConstructor_009(self):
"""
Test assignment of encryptCommand attribute, invalid value (empty).
"""
From: <pro...@us...> - 2014-10-07 19:15:17
Revision: 1082
http://sourceforge.net/p/cedar-backup/code/1082
Author: pronovic
Date: 2014-10-07 19:15:10 +0000 (Tue, 07 Oct 2014)
Log Message:
-----------
Add support for missing --diagnostics flag in cback-span script
Modified Paths:
--------------
cedar-backup2/trunk/CedarBackup2/tools/span.py
cedar-backup2/trunk/Changelog
cedar-backup2/trunk/doc/cback-span.1
Modified: cedar-backup2/trunk/CedarBackup2/tools/span.py
===================================================================
--- cedar-backup2/trunk/CedarBackup2/tools/span.py 2014-10-07 19:13:18 UTC (rev 1081)
+++ cedar-backup2/trunk/CedarBackup2/tools/span.py 2014-10-07 19:15:10 UTC (rev 1082)
@@ -73,6 +73,7 @@
from CedarBackup2.actions.util import createWriter
from CedarBackup2.actions.store import writeIndicatorFile
from CedarBackup2.actions.util import findDailyDirs
+from CedarBackup2.util import Diagnostics
########################################################################
@@ -165,6 +166,9 @@
if options.version:
_version()
return 0
+ if options.diagnostics:
+ _diagnostics()
+ return 0
try:
logfile = setupLogging(options)
@@ -271,6 +275,23 @@
fd.write("\n")
+##########################
+# _diagnostics() function
+##########################
+
+def _diagnostics(fd=sys.stdout):
+ """
+ Prints runtime diagnostics information.
+ @param fd: File descriptor used to print information.
+ @note: The C{fd} is used rather than C{print} to facilitate unit testing.
+ """
+ fd.write("\n")
+ fd.write("Diagnostics:\n")
+ fd.write("\n")
+ Diagnostics().printDiagnostics(fd=fd, prefix=" ")
+ fd.write("\n")
+
+
############################
# _executeAction() function
############################
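The @note in _diagnostics() explains that a file descriptor parameter is used instead of print to facilitate unit testing. A sketch of that pattern (write_diagnostics is a simplified stand-in; the real function also delegates to Diagnostics().printDiagnostics()):

```python
import sys
from io import StringIO

def write_diagnostics(fd=sys.stdout):
    """Simplified stand-in for _diagnostics(): writes to any file-like object."""
    fd.write("\n")
    fd.write("Diagnostics:\n")
    fd.write("\n")

# Unit test pattern: capture output in a StringIO instead of stdout.
buf = StringIO()
write_diagnostics(fd=buf)
assert "Diagnostics:" in buf.getvalue()
```

Because the function takes any file-like object, a test can inspect the captured text directly rather than redirecting the process's stdout.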
Modified: cedar-backup2/trunk/Changelog
===================================================================
--- cedar-backup2/trunk/Changelog 2014-10-07 19:13:18 UTC (rev 1081)
+++ cedar-backup2/trunk/Changelog 2014-10-07 19:15:10 UTC (rev 1082)
@@ -1,3 +1,7 @@
+Version 2.23.4 unreleased
+
+ * Add support for missing --diagnostics flag in cback-span script.
+
Version 2.23.3 03 Oct 2014
* Add new extension amazons3 as an optional replacement for the store action.
Modified: cedar-backup2/trunk/doc/cback-span.1
===================================================================
--- cedar-backup2/trunk/doc/cback-span.1 2014-10-07 19:13:18 UTC (rev 1081)
+++ cedar-backup2/trunk/doc/cback-span.1 2014-10-07 19:15:10 UTC (rev 1082)
@@ -15,7 +15,7 @@
.\" #
.\" # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
.\"
-.TH cback\-span "1" "July 2010" "Cedar Backup" "Kenneth J. Pronovici"
+.TH cback\-span "1" "Oct 2014" "Cedar Backup" "Kenneth J. Pronovici"
.SH NAME
cback\-span \- Span staged data among multiple discs
.SH SYNOPSIS
@@ -87,6 +87,10 @@
than just propagating the last message it received back up to the user interface.
Under some circumstances, this is useful information to include along with a
bug report.
+.TP
+\fB\-D\fR, \fB\-\-diagnostics\fR
+Display runtime diagnostic information and then exit. This diagnostic
+information is often useful when filing a bug report.
.SH RETURN VALUES
.PP
This command returns 0 (zero) upon normal completion, and six other error
From: <pro...@us...> - 2014-10-07 19:17:27
Revision: 1083
http://sourceforge.net/p/cedar-backup/code/1083
Author: pronovic
Date: 2014-10-07 19:17:12 +0000 (Tue, 07 Oct 2014)
Log Message:
-----------
Start implementing new tool cback-amazons3-sync
Modified Paths:
--------------
cedar-backup2/trunk/CedarBackup2/tools/__init__.py
cedar-backup2/trunk/Changelog
cedar-backup2/trunk/MANIFEST.in
cedar-backup2/trunk/util/test.py
Added Paths:
-----------
cedar-backup2/trunk/CedarBackup2/tools/amazons3.py
cedar-backup2/trunk/doc/cback-amazons3-sync.1
cedar-backup2/trunk/testcase/synctests.py
cedar-backup2/trunk/util/cback-amazons3-sync
Modified: cedar-backup2/trunk/CedarBackup2/tools/__init__.py
===================================================================
--- cedar-backup2/trunk/CedarBackup2/tools/__init__.py 2014-10-07 19:15:10 UTC (rev 1082)
+++ cedar-backup2/trunk/CedarBackup2/tools/__init__.py 2014-10-07 19:17:12 UTC (rev 1083)
@@ -45,5 +45,5 @@
# Using 'from CedarBackup2.tools import *' will just import the modules listed
# in the __all__ variable.
-__all__ = [ 'span', ]
+__all__ = [ 'span', 'amazons3', ]
Added: cedar-backup2/trunk/CedarBackup2/tools/amazons3.py
===================================================================
--- cedar-backup2/trunk/CedarBackup2/tools/amazons3.py (rev 0)
+++ cedar-backup2/trunk/CedarBackup2/tools/amazons3.py 2014-10-07 19:17:12 UTC (rev 1083)
@@ -0,0 +1,978 @@
+# -*- coding: iso-8859-1 -*-
+# vim: set ft=python ts=3 sw=3 expandtab:
+# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+#
+# C E D A R
+# S O L U T I O N S "Software done right."
+# S O F T W A R E
+#
+# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+#
+# Copyright (c) 2014 Kenneth J. Pronovici.
+# All rights reserved.
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License,
+# Version 2, as published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+#
+# Copies of the GNU General Public License are available from
+# the Free Software Foundation website, http://www.gnu.org/.
+#
+# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+#
+# Author : Kenneth J. Pronovici <pro...@ie...>
+# Language : Python (>= 2.5)
+# Project : Cedar Backup, release 2
+# Revision : $Id$
+# Purpose : Cedar Backup tool to synchronize an Amazon S3 bucket.
+#
+# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+
+########################################################################
+# Notes
+########################################################################
+
+"""
+Synchronizes a local directory with an Amazon S3 bucket.
+
+No configuration is required; all necessary information is taken from the
+command-line. The only thing configuration would help with is the path
+resolver interface, and it doesn't seem worth it to require configuration just
+to get that.
+
+@author: Kenneth J. Pronovici <pro...@ie...>
+"""
+
+########################################################################
+# Imported modules and constants
+########################################################################
+
+# System modules
+import sys
+import os
+import logging
+import getopt
+
+# Cedar Backup modules
+from CedarBackup2.release import AUTHOR, EMAIL, VERSION, DATE, COPYRIGHT
+from CedarBackup2.cli import setupLogging, DEFAULT_LOGFILE, DEFAULT_OWNERSHIP, DEFAULT_MODE
+from CedarBackup2.util import Diagnostics, splitCommandLine, encodePath
+
+
+########################################################################
+# Module-wide constants and variables
+########################################################################
+
+logger = logging.getLogger("CedarBackup2.log.tools.amazons3")
+
+SHORT_SWITCHES = "hVbql:o:m:OdsDv"
+LONG_SWITCHES = [ 'help', 'version', 'verbose', 'quiet',
+ 'logfile=', 'owner=', 'mode=',
+ 'output', 'debug', 'stack', 'diagnostics', "verifyOnly", ]
+
+
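The switch definitions above follow the standard getopt conventions: a trailing colon in SHORT_SWITCHES, or a trailing "=" in LONG_SWITCHES, marks an option that takes a value. A small sketch of how an argument list like C{sys.argv[1:]} is parsed with them (the example arguments are illustrative):

```python
import getopt

SHORT_SWITCHES = "hVbql:o:m:OdsDv"
LONG_SWITCHES = [ 'help', 'version', 'verbose', 'quiet',
                  'logfile=', 'owner=', 'mode=',
                  'output', 'debug', 'stack', 'diagnostics', "verifyOnly", ]

args = ["--verbose", "--logfile", "/tmp/cback.log", "/src", "s3://bucket"]
switches, remaining = getopt.getopt(args, SHORT_SWITCHES, LONG_SWITCHES)
print(switches)   # [('--verbose', ''), ('--logfile', '/tmp/cback.log')]
print(remaining)  # ['/src', 's3://bucket']
```

Flags parse with an empty value, valued options carry their argument, and the trailing positional arguments (source directory and bucket URL here) are left in the remainder list.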
+#######################################################################
+# Options class
+#######################################################################
+
+class Options(object):
+
+ ######################
+ # Class documentation
+ ######################
+
+ """
+ Class representing command-line options for the cback-amazons3-sync script.
+
+ The C{Options} class is a Python object representation of the command-line
+ options of the cback-amazons3-sync script.
+
+ The object representation is two-way: a command line string or a list of
+ command line arguments can be used to create an C{Options} object, and then
+ changes to the object can be propagated back to a list of command-line
+ arguments or to a command-line string. An C{Options} object can even be
+ created from scratch programmatically (if you have a need for that).
+
+ There are two main levels of validation in the C{Options} class. The first
+ is field-level validation. Field-level validation comes into play when a
+ given field in an object is assigned to or updated. We use Python's
+ C{property} functionality to enforce specific validations on field values,
+ and in some places we even use customized list classes to enforce
+ validations on list members. You should expect to catch a C{ValueError}
+ exception when making assignments to fields if you are programmatically
+ filling an object.
+
+ The second level of validation is post-completion validation. Certain
+ validations don't make sense until an object representation of options is
+ fully "complete". We don't want these validations to apply all of the time,
+ because it would make building up a valid object from scratch a real pain.
+ For instance, we might have to do things in the right order to keep from
+ throwing exceptions, etc.
+
+ All of these post-completion validations are encapsulated in the
+ L{Options.validate} method. This method can be called at any time by a
+ client, and will always be called immediately after creating a C{Options}
+ object from a command line and before exporting a C{Options} object back to
+ a command line. This way, we get acceptable ease-of-use but we also don't
+ accept or emit invalid command lines.
+
+ @note: Lists within this class are "unordered" for equality comparisons.
+
+ @sort: __init__, __repr__, __str__, __cmp__
+ """
+
+ ##############
+ # Constructor
+ ##############
+
+ def __init__(self, argumentList=None, argumentString=None, validate=True):
+ """
+ Initializes an options object.
+
+ If you initialize the object without passing either C{argumentList} or
+ C{argumentString}, the object will be empty and will be invalid until it
+ is filled in properly.
+
+ No reference to the original arguments is saved off by this class. Once
+ the data has been parsed (successfully or not) this original information
+ is discarded.
+
+ The argument list is assumed to be a list of arguments, not including the
+ name of the command, something like C{sys.argv[1:]}. If you pass
+ C{sys.argv} instead, things are not going to work.
+
+ The argument string will be parsed into an argument list by the
+ L{util.splitCommandLine} function (see the documentation for that
+ function for some important notes about its limitations). There is an
+ assumption that the resulting list will be equivalent to C{sys.argv[1:]},
+ just like C{argumentList}.
+
+ Unless the C{validate} argument is C{False}, the L{Options.validate}
+ method will be called (with its default arguments) after successfully
+ parsing any passed-in command line. This validation ensures that
+ appropriate actions, etc. have been specified. Keep in mind that even if
+ C{validate} is C{False}, it might not be possible to parse the passed-in
+ command line, so an exception might still be raised.
+
+ @note: The command line format is specified by the L{_usage} function.
+ Call L{_usage} to see a usage statement for the cback-amazons3-sync script.
+
+ @note: It is strongly suggested that the C{validate} option always be set
+ to C{True} (the default) unless there is a specific need to read in
+ invalid command line arguments.
+
+ @param argumentList: Command line for a program.
+ @type argumentList: List of arguments, i.e. C{sys.argv[1:]}
+
+ @param argumentString: Command line for a program.
+ @type argumentString: String, i.e. "cback --verbose stage store"
+
+ @param validate: Validate the command line after parsing it.
+ @type validate: Boolean true/false.
+
+ @raise getopt.GetoptError: If the command-line arguments could not be parsed.
+ @raise ValueError: If the command-line arguments are invalid.
+ """
+ self._help = False
+ self._version = False
+ self._verbose = False
+ self._quiet = False
+ self._logfile = None
+ self._owner = None
+ self._mode = None
+ self._output = False
+ self._debug = False
+ self._stacktrace = False
+ self._diagnostics = False
+ self._verifyOnly = False
+ self._sourceDir = None
+ self._s3BucketUrl = None
+ if argumentList is not None and argumentString is not None:
+ raise ValueError("Use either argumentList or argumentString, but not both.")
+ if argumentString is not None:
+ argumentList = splitCommandLine(argumentString)
+ if argumentList is not None:
+ self._parseArgumentList(argumentList)
+ if validate:
+ self.validate()
+
+
+ #########################
+ # String representations
+ #########################
+
+ def __repr__(self):
+ """
+ Official string representation for class instance.
+ """
+ return self.buildArgumentString(validate=False)
+
+ def __str__(self):
+ """
+ Informal string representation for class instance.
+ """
+ return self.__repr__()
+
+
+ #############################
+ # Standard comparison method
+ #############################
+
+ def __cmp__(self, other):
+ """
+ Definition of equals operator for this class.
+ Lists within this class are "unordered" for equality comparisons.
+ @param other: Other object to compare to.
+ @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
+ """
+ if other is None:
+ return 1
+ if self.help != other.help:
+ if self.help < other.help:
+ return -1
+ else:
+ return 1
+ if self.version != other.version:
+ if self.version < other.version:
+ return -1
+ else:
+ return 1
+ if self.verbose != other.verbose:
+ if self.verbose < other.verbose:
+ return -1
+ else:
+ return 1
+ if self.quiet != other.quiet:
+ if self.quiet < other.quiet:
+ return -1
+ else:
+ return 1
+ if self.logfile != other.logfile:
+ if self.logfile < other.logfile:
+ return -1
+ else:
+ return 1
+ if self.owner != other.owner:
+ if self.owner < other.owner:
+ return -1
+ else:
+ return 1
+ if self.mode != other.mode:
+ if self.mode < other.mode:
+ return -1
+ else:
+ return 1
+ if self.output != other.output:
+ if self.output < other.output:
+ return -1
+ else:
+ return 1
+ if self.debug != other.debug:
+ if self.debug < other.debug:
+ return -1
+ else:
+ return 1
+ if self.stacktrace != other.stacktrace:
+ if self.stacktrace < other.stacktrace:
+ return -1
+ else:
+ return 1
+ if self.diagnostics != other.diagnostics:
+ if self.diagnostics < other.diagnostics:
+ return -1
+ else:
+ return 1
+ if self.verifyOnly != other.verifyOnly:
+ if self.verifyOnly < other.verifyOnly:
+ return -1
+ else:
+ return 1
+ if self.sourceDir != other.sourceDir:
+ if self.sourceDir < other.sourceDir:
+ return -1
+ else:
+ return 1
+ if self.s3BucketUrl != other.s3BucketUrl:
+ if self.s3BucketUrl < other.s3BucketUrl:
+ return -1
+ else:
+ return 1
+ return 0
+
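The field-by-field chain above implements Python 2's __cmp__ protocol by hand. For readers unfamiliar with that protocol, the same -1/0/1 ordering can be sketched compactly with key tuples (a simplified illustration over the boolean flags only, not the project's code):

```python
def compare(a, b):
    """Return -1/0/1 ordering two option-like objects, mimicking __cmp__."""
    key = lambda o: (o.help, o.version, o.verbose, o.quiet)
    ka, kb = key(a), key(b)
    return (ka > kb) - (ka < kb)  # same result as Python 2's cmp(ka, kb)
```

Tuple comparison falls back field by field exactly like the explicit chain, though fields that may be None (logfile, owner) would need extra care, which may be one reason the explicit form is used here.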
+
+ #############
+ # Properties
+ #############
+
+ def _setHelp(self, value):
+ """
+ Property target used to set the help flag.
+ No validations, but we normalize the value to C{True} or C{False}.
+ """
+ if value:
+ self._help = True
+ else:
+ self._help = False
+
+ def _getHelp(self):
+ """
+ Property target used to get the help flag.
+ """
+ return self._help
+
+ def _setVersion(self, value):
+ """
+ Property target used to set the version flag.
+ No validations, but we normalize the value to C{True} or C{False}.
+ """
+ if value:
+ self._version = True
+ else:
+ self._version = False
+
+ def _getVersion(self):
+ """
+ Property target used to get the version flag.
+ """
+ return self._version
+
+ def _setVerbose(self, value):
+ """
+ Property target used to set the verbose flag.
+ No validations, but we normalize the value to C{True} or C{False}.
+ """
+ if value:
+ self._verbose = True
+ else:
+ self._verbose = False
+
+ def _getVerbose(self):
+ """
+ Property target used to get the verbose flag.
+ """
+ return self._verbose
+
+ def _setQuiet(self, value):
+ """
+ Property target used to set the quiet flag.
+ No validations, but we normalize the value to C{True} or C{False}.
+ """
+ if value:
+ self._quiet = True
+ else:
+ self._quiet = False
+
+ def _getQuiet(self):
+ """
+ Property target used to get the quiet flag.
+ """
+ return self._quiet
+
+ def _setLogfile(self, value):
+ """
+ Property target used to set the logfile parameter.
+ @raise ValueError: If the value cannot be encoded properly.
+ """
+ if value is not None:
+ if len(value) < 1:
+ raise ValueError("The logfile parameter must be a non-empty string.")
+ self._logfile = encodePath(value)
+
+ def _getLogfile(self):
+ """
+ Property target used to get the logfile parameter.
+ """
+ return self._logfile
+
+ def _setOwner(self, value):
+ """
+ Property target used to set the owner parameter.
+ If not C{None}, the owner must be a C{(user,group)} tuple or list.
+ Strings (and inherited children of strings) are explicitly disallowed.
+ The value will be normalized to a tuple.
+ @raise ValueError: If the value is not valid.
+ """
+ if value is None:
+ self._owner = None
+ else:
+ if isinstance(value, str):
+ raise ValueError("Must specify user and group tuple for owner parameter.")
+ if len(value) != 2:
+ raise ValueError("Must specify user and group tuple for owner parameter.")
+ if len(value[0]) < 1 or len(value[1]) < 1:
+ raise ValueError("User and group tuple values must be non-empty strings.")
+ self._owner = (value[0], value[1])
+
+ def _getOwner(self):
+ """
+ Property target used to get the owner parameter.
+ The parameter is a tuple of C{(user, group)}.
+ """
+ return self._owner
+
+ def _setMode(self, value):
+ """
+ Property target used to set the mode parameter.
+ """
+ if value is None:
+ self._mode = None
+ else:
+ try:
+ if isinstance(value, str):
+ value = int(value, 8)
+ else:
+ value = int(value)
+ except TypeError:
+ raise ValueError("Mode must be an octal integer >= 0, e.g. 644.")
+ if value < 0:
+ raise ValueError("Mode must be an octal integer >= 0, e.g. 644.")
+ self._mode = value
+
+ def _getMode(self):
+ """
+ Property target used to get the mode parameter.
+ """
+ return self._mode
+
+ def _setOutput(self, value):
+ """
+ Property target used to set the output flag.
+ No validations, but we normalize the value to C{True} or C{False}.
+ """
+ if value:
+ self._output = True
+ else:
+ self._output = False
+
+ def _getOutput(self):
+ """
+ Property target used to get the output flag.
+ """
+ return self._output
+
+ def _setDebug(self, value):
+ """
+ Property target used to set the debug flag.
+ No validations, but we normalize the value to C{True} or C{False}.
+ """
+ if value:
+ self._debug = True
+ else:
+ self._debug = False
+
+ def _getDebug(self):
+ """
+ Property target used to get the debug flag.
+ """
+ return self._debug
+
+ def _setStacktrace(self, value):
+ """
+ Property target used to set the stacktrace flag.
+ No validations, but we normalize the value to C{True} or C{False}.
+ """
+ if value:
+ self._stacktrace = True
+ else:
+ self._stacktrace = False
+
+ def _getStacktrace(self):
+ """
+ Property target used to get the stacktrace flag.
+ """
+ return self._stacktrace
+
+ def _setDiagnostics(self, value):
+ """
+ Property target used to set the diagnostics flag.
+ No validations, but we normalize the value to C{True} or C{False}.
+ """
+ if value:
+ self._diagnostics = True
+ else:
+ self._diagnostics = False
+
+ def _getDiagnostics(self):
+ """
+ Property target used to get the diagnostics flag.
+ """
+ return self._diagnostics
+
+ def _setVerifyOnly(self, value):
+ """
+ Property target used to set the verifyOnly flag.
+ No validations, but we normalize the value to C{True} or C{False}.
+ """
+ if value:
+ self._verifyOnly = True
+ else:
+ self._verifyOnly = False
+
+ def _getVerifyOnly(self):
+ """
+ Property target used to get the verifyOnly flag.
+ """
+ return self._verifyOnly
+
+ def _setSourceDir(self, value):
+ """
+ Property target used to set the sourceDir parameter.
+ """
+ if value is not None:
+ if len(value) < 1:
+ raise ValueError("The sourceDir parameter must be a non-empty string.")
+ self._sourceDir = value
+
+ def _getSourceDir(self):
+ """
+ Property target used to get the sourceDir parameter.
+ """
+ return self._sourceDir
+
+ def _setS3BucketUrl(self, value):
+ """
+ Property target used to set the s3BucketUrl parameter.
+ """
+ if value is not None:
+ if len(value) < 1:
+ raise ValueError("The s3BucketUrl parameter must be a non-empty string.")
+ self._s3BucketUrl = value
+
+ def _getS3BucketUrl(self):
+ """
+ Property target used to get the s3BucketUrl parameter.
+ """
+ return self._s3BucketUrl
+
+ help = property(_getHelp, _setHelp, None, "Command-line help (C{-h,--help}) flag.")
+ version = property(_getVersion, _setVersion, None, "Command-line version (C{-V,--version}) flag.")
+ verbose = property(_getVerbose, _setVerbose, None, "Command-line verbose (C{-b,--verbose}) flag.")
+ quiet = property(_getQuiet, _setQuiet, None, "Command-line quiet (C{-q,--quiet}) flag.")
+ logfile = property(_getLogfile, _setLogfile, None, "Command-line logfile (C{-l,--logfile}) parameter.")
+ owner = property(_getOwner, _setOwner, None, "Command-line owner (C{-o,--owner}) parameter, as tuple C{(user,group)}.")
+ mode = property(_getMode, _setMode, None, "Command-line mode (C{-m,--mode}) parameter.")
+ output = property(_getOutput, _setOutput, None, "Command-line output (C{-O,--output}) flag.")
+ debug = property(_getDebug, _setDebug, None, "Command-line debug (C{-d,--debug}) flag.")
+ stacktrace = property(_getStacktrace, _setStacktrace, None, "Command-line stacktrace (C{-s,--stack}) flag.")
+ diagnostics = property(_getDiagnostics, _setDiagnostics, None, "Command-line diagnostics (C{-D,--diagnostics}) flag.")
+ verifyOnly = property(_getVerifyOnly, _setVerifyOnly, None, "Command-line verifyOnly (C{-v,--verifyOnly}) flag.")
+ sourceDir = property(_getSourceDir, _setSourceDir, None, "Command-line sourceDir, source of sync.")
+ s3BucketUrl = property(_getS3BucketUrl, _setS3BucketUrl, None, "Command-line s3BucketUrl, target of sync.")
+
+
+ ##################
+ # Utility methods
+ ##################
+
+ def validate(self):
+ """
+ Validates command-line options represented by the object.
+
+ Unless C{--help} or C{--version} are supplied, at least one action must
+ be specified. Other validations (as for allowed values for particular
+ options) will be taken care of at assignment time by the properties
+ functionality.
+
+ @note: The command line format is specified by the L{_usage} function.
+ Call L{_usage} to see a usage statement for the cback-amazons3-sync script.
+
+ @raise ValueError: If one of the validations fails.
+ """
+ if not self.help and not self.version and not self.diagnostics:
+ if self.sourceDir is None or self.s3BucketUrl is None:
+ raise ValueError("Source directory and S3 bucket URL are both required.")
+
+ def buildArgumentList(self, validate=True):
+ """
+ Extracts options into a list of command line arguments.
+
+ The original order of the various arguments (if, indeed, the object was
+ initialized with a command-line) is not preserved in this generated
+ argument list. Besides that, the argument list is normalized to use the
+ long option names (i.e. --version rather than -V). The resulting list
+ will be suitable for passing back to the constructor in the
+ C{argumentList} parameter. Unlike L{buildArgumentString}, string
+ arguments are not quoted here, because there is no need for it.
+
+ Unless the C{validate} parameter is C{False}, the L{Options.validate}
+ method will be called (with its default arguments) against the
+ options before extracting the command line. If the options are not valid,
+ then an argument list will not be extracted.
+
+ @note: It is strongly suggested that the C{validate} option always be set
+ to C{True} (the default) unless there is a specific need to extract an
+ invalid command line.
+
+ @param validate: Validate the options before extracting the command line.
+ @type validate: Boolean true/false.
+
+ @return: List representation of command-line arguments.
+ @raise ValueError: If options within the object are invalid.
+ """
+ if validate:
+ self.validate()
+ argumentList = []
+ if self._help:
+ argumentList.append("--help")
+ if self.version:
+ argumentList.append("--version")
+ if self.verbose:
+ argumentList.append("--verbose")
+ if self.quiet:
+ argumentList.append("--quiet")
+ if self.logfile is not None:
+ argumentList.append("--logfile")
+ argumentList.append(self.logfile)
+ if self.owner is not None:
+ argumentList.append("--owner")
+ argumentList.append("%s:%s" % (self.owner[0], self.owner[1]))
+ if self.mode is not None:
+ argumentList.append("--mode")
+ argumentList.append("%o" % self.mode)
+ if self.output:
+ argumentList.append("--output")
+ if self.debug:
+ argumentList.append("--debug")
+ if self.stacktrace:
+ argumentList.append("--stack")
+ if self.diagnostics:
+ argumentList.append("--diagnostics")
+ if self.verifyOnly:
+ argumentList.append("--verifyOnly")
+ if self.sourceDir is not None:
+ argumentList.append(self.sourceDir)
+ if self.s3BucketUrl is not None:
+ argumentList.append(self.s3BucketUrl)
+ return argumentList
+
+ def buildArgumentString(self, validate=True):
+ """
+ Extracts options into a string of command-line arguments.
+
+ The original order of the various arguments (if, indeed, the object was
+ initialized with a command-line) is not preserved in this generated
+ argument string. Besides that, the argument string is normalized to use
+ the long option names (i.e. --version rather than -V) and to quote all
+ string arguments with double quotes (C{"}). The resulting string will be
+ suitable for passing back to the constructor in the C{argumentString}
+ parameter.
+
+ Unless the C{validate} parameter is C{False}, the L{Options.validate}
+ method will be called (with its default arguments) against the options
+ before extracting the command line. If the options are not valid, then
+ an argument string will not be extracted.
+
+ @note: It is strongly suggested that the C{validate} option always be set
+ to C{True} (the default) unless there is a specific need to extract an
+ invalid command line.
+
+ @param validate: Validate the options before extracting the command line.
+ @type validate: Boolean true/false.
+
+ @return: String representation of command-line arguments.
+ @raise ValueError: If options within the object are invalid.
+ """
+ if validate:
+ self.validate()
+ argumentString = ""
+ if self._help:
+ argumentString += "--help "
+ if self.version:
+ argumentString += "--version "
+ if self.verbose:
+ argumentString += "--verbose "
+ if self.quiet:
+ argumentString += "--quiet "
+ if self.logfile is not None:
+ argumentString += "--logfile \"%s\" " % self.logfile
+ if self.owner is not None:
+ argumentString += "--owner \"%s:%s\" " % (self.owner[0], self.owner[1])
+ if self.mode is not None:
+ argumentString += "--mode %o " % self.mode
+ if self.output:
+ argumentString += "--output "
+ if self.debug:
+ argumentString += "--debug "
+ if self.stacktrace:
+ argumentString += "--stack "
+ if self.diagnostics:
+ argumentString += "--diagnostics "
+ if self.verifyOnly:
+ argumentString += "--verifyOnly "
+ if self.sourceDir is not None:
+ argumentString += "\"%s\" " % self.sourceDir
+ if self.s3BucketUrl is not None:
+ argumentString += "\"%s\" " % self.s3BucketUrl
+ return argumentString
+
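The normalization rule described in the docstring (long option names only, string values double-quoted) can be sketched as a small stand-alone helper. `build_argument_string` is a hypothetical name, not part of Cedar Backup; it only illustrates the quoting and naming convention, not the full option set:

```python
# Hypothetical sketch of the normalization done by buildArgumentString():
# boolean flags become long switches, valued options are double-quoted.
def build_argument_string(flags, values):
    """Build a normalized argument string from parsed options.

    flags:  list of enabled long option names, e.g. ["verbose"]
    values: ordered list of (long option name, string value) pairs
    """
    parts = []
    for name in flags:
        parts.append("--%s" % name)
    for name, value in values:
        parts.append('--%s "%s"' % (name, value))
    return " ".join(parts) + " "
```

A string built this way can be split and fed back through a parser, which is the round-trip property the docstring promises.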
+ def _parseArgumentList(self, argumentList):
+ """
+ Internal method to parse a list of command-line arguments.
+
+ Most of the validation we do here has to do with whether the arguments
+ can be parsed and whether any values which exist are valid. We don't do
+ any validation as to whether required elements exist or whether elements
+ exist in the proper combination (instead, that's the job of the
+ L{validate} method).
+
+ For any of the options which supply parameters, if the option is
+ duplicated with long and short switches (i.e. both C{-l} and C{--logfile})
+ then the long switch is used. If the same option is duplicated with the
+ same switch (long or short), then the last entry on the command line is
+ used.
+
+ @param argumentList: List of arguments to a command.
+ @type argumentList: List of arguments to a command, i.e. C{sys.argv[1:]}
+
+ @raise ValueError: If the argument list cannot be successfully parsed.
+ """
+ switches = { }
+ opts, remaining = getopt.getopt(argumentList, SHORT_SWITCHES, LONG_SWITCHES)
+ for o, a in opts: # push the switches into a hash
+ switches[o] = a
+ if switches.has_key("-h") or switches.has_key("--help"):
+ self.help = True
+ if switches.has_key("-V") or switches.has_key("--version"):
+ self.version = True
+ if switches.has_key("-b") or switches.has_key("--verbose"):
+ self.verbose = True
+ if switches.has_key("-q") or switches.has_key("--quiet"):
+ self.quiet = True
+ if switches.has_key("-l"):
+ self.logfile = switches["-l"]
+ if switches.has_key("--logfile"):
+ self.logfile = switches["--logfile"]
+ if switches.has_key("-o"):
+ self.owner = switches["-o"].split(":", 1)
+ if switches.has_key("--owner"):
+ self.owner = switches["--owner"].split(":", 1)
+ if switches.has_key("-m"):
+ self.mode = switches["-m"]
+ if switches.has_key("--mode"):
+ self.mode = switches["--mode"]
+ if switches.has_key("-O") or switches.has_key("--output"):
+ self.output = True
+ if switches.has_key("-d") or switches.has_key("--debug"):
+ self.debug = True
+ if switches.has_key("-s") or switches.has_key("--stack"):
+ self.stacktrace = True
+ if switches.has_key("-D") or switches.has_key("--diagnostics"):
+ self.diagnostics = True
+ if switches.has_key("-v") or switches.has_key("--verifyOnly"):
+ self.verifyOnly = True
+ try:
+ (self.sourceDir, self.s3BucketUrl) = remaining
+ except ValueError:
+ pass
+
+
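The two precedence rules in the `_parseArgumentList` docstring fall out of the dict-based approach: pushing `(switch, value)` pairs into a dict means the last occurrence of the same switch wins, and checking the long form after the short form means the long switch wins when both appear. A minimal sketch using only stdlib `getopt` (the helper name is hypothetical):

```python
import getopt

# Sketch of the precedence rules in _parseArgumentList(), reduced to a
# single option: later dict assignments overwrite earlier ones, and the
# long-form lookup runs after the short-form lookup.
def parse_logfile(argv):
    opts, _remaining = getopt.getopt(argv, "l:", ["logfile="])
    switches = {}
    for switch, value in opts:     # last entry for a switch wins
        switches[switch] = value
    logfile = None
    if "-l" in switches:
        logfile = switches["-l"]
    if "--logfile" in switches:    # long switch takes precedence
        logfile = switches["--logfile"]
    return logfile
```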
+#######################################################################
+# Public functions
+#######################################################################
+
+#################
+# cli() function
+#################
+
+def cli():
+ """
+ Implements the command-line interface for the C{cback-amazons3-sync} script.
+
+ Essentially, this is the "main routine" for the cback-am...
[truncated message content] |
From: <pro...@us...> - 2014-10-07 21:56:51
Revision: 1085
http://sourceforge.net/p/cedar-backup/code/1085
Author: pronovic
Date: 2014-10-07 21:56:46 +0000 (Tue, 07 Oct 2014)
Log Message:
-----------
Continued development on s3 sync tool
Modified Paths:
--------------
cedar-backup2/trunk/CedarBackup2/tools/amazons3.py
cedar-backup2/trunk/doc/cback-amazons3-sync.1
cedar-backup2/trunk/testcase/synctests.py
Modified: cedar-backup2/trunk/CedarBackup2/tools/amazons3.py
===================================================================
--- cedar-backup2/trunk/CedarBackup2/tools/amazons3.py 2014-10-07 21:52:19 UTC (rev 1084)
+++ cedar-backup2/trunk/CedarBackup2/tools/amazons3.py 2014-10-07 21:56:46 UTC (rev 1085)
@@ -56,11 +56,16 @@
import os
import logging
import getopt
+import json
+import chardet
+import warnings
# Cedar Backup modules
from CedarBackup2.release import AUTHOR, EMAIL, VERSION, DATE, COPYRIGHT
+from CedarBackup2.filesystem import FilesystemList
from CedarBackup2.cli import setupLogging, DEFAULT_LOGFILE, DEFAULT_OWNERSHIP, DEFAULT_MODE
from CedarBackup2.util import Diagnostics, splitCommandLine, encodePath
+from CedarBackup2.util import executeCommand
########################################################################
@@ -69,10 +74,13 @@
logger = logging.getLogger("CedarBackup2.log.tools.amazons3")
-SHORT_SWITCHES = "hVbql:o:m:OdsDv"
+AWS_COMMAND = [ "aws" ]
+
+SHORT_SWITCHES = "hVbql:o:m:OdsDvw"
LONG_SWITCHES = [ 'help', 'version', 'verbose', 'quiet',
'logfile=', 'owner=', 'mode=',
- 'output', 'debug', 'stack', 'diagnostics', "verifyOnly", ]
+ 'output', 'debug', 'stack', 'diagnostics',
+ 'verifyOnly', 'ignoreWarnings', ]
#######################################################################
@@ -189,6 +197,7 @@
self._stacktrace = False
self._diagnostics = False
self._verifyOnly = False
+ self._ignoreWarnings = False
self._sourceDir = None
self._s3BucketUrl = None
if argumentList is not None and argumentString is not None:
@@ -291,6 +300,11 @@
return -1
else:
return 1
+ if self.ignoreWarnings != other.ignoreWarnings:
+ if self.ignoreWarnings < other.ignoreWarnings:
+ return -1
+ else:
+ return 1
if self.sourceDir != other.sourceDir:
if self.sourceDir < other.sourceDir:
return -1
@@ -518,6 +532,22 @@
"""
return self._verifyOnly
+ def _setIgnoreWarnings(self, value):
+ """
+ Property target used to set the ignoreWarnings flag.
+ No validations, but we normalize the value to C{True} or C{False}.
+ """
+ if value:
+ self._ignoreWarnings = True
+ else:
+ self._ignoreWarnings = False
+
+ def _getIgnoreWarnings(self):
+ """
+ Property target used to get the ignoreWarnings flag.
+ """
+ return self._ignoreWarnings
+
def _setSourceDir(self, value):
"""
Property target used to set the sourceDir parameter.
@@ -560,6 +590,7 @@
stacktrace = property(_getStacktrace, _setStacktrace, None, "Command-line stacktrace (C{-s,--stack}) flag.")
diagnostics = property(_getDiagnostics, _setDiagnostics, None, "Command-line diagnostics (C{-D,--diagnostics}) flag.")
verifyOnly = property(_getVerifyOnly, _setVerifyOnly, None, "Command-line verifyOnly (C{-v,--verifyOnly}) flag.")
+ ignoreWarnings = property(_getIgnoreWarnings, _setIgnoreWarnings, None, "Command-line ignoreWarnings (C{-w,--ignoreWarnings}) flag.")
sourceDir = property(_getSourceDir, _setSourceDir, None, "Command-line sourceDir, source of sync.")
s3BucketUrl = property(_getS3BucketUrl, _setS3BucketUrl, None, "Command-line s3BucketUrl, target of sync.")
@@ -643,6 +674,8 @@
argumentList.append("--diagnostics")
if self.verifyOnly:
argumentList.append("--verifyOnly")
+ if self.ignoreWarnings:
+ argumentList.append("--ignoreWarnings")
if self.sourceDir is not None:
argumentList.append(self.sourceDir)
if self.s3BucketUrl is not None:
@@ -703,6 +736,8 @@
argumentString += "--diagnostics "
if self.verifyOnly:
argumentString += "--verifyOnly "
+ if self.ignoreWarnings:
+ argumentString += "--ignoreWarnings "
if self.sourceDir is not None:
argumentString += "\"%s\" " % self.sourceDir
if self.s3BucketUrl is not None:
@@ -764,6 +799,8 @@
self.diagnostics = True
if switches.has_key("-v") or switches.has_key("--verifyOnly"):
self.verifyOnly = True
+ if switches.has_key("-w") or switches.has_key("--ignoreWarnings"):
+ self.ignoreWarnings = True
try:
(self.sourceDir, self.s3BucketUrl) = remaining
except ValueError:
@@ -839,6 +876,7 @@
logger.info("Cedar Backup Amazon S3 sync run started.")
logger.info("Options were [%s]" % options)
logger.info("Logfile is [%s]" % logfile)
+ Diagnostics().logDiagnostics(method=logger.info)
if options.stacktrace:
_executeAction(options)
@@ -964,15 +1002,161 @@
@raise Exception: Under many generic error conditions
"""
- if not os.path.isdir(options.sourceDir):
- raise Exception("Source directory does not exist on disk.")
+ sourceFiles = _buildSourceFiles(options.sourceDir)
+ if not options.ignoreWarnings:
+ _checkSourceFiles(options.sourceDir, sourceFiles)
+ if not options.verifyOnly:
+ _synchronizeBucket(options.sourceDir, options.s3BucketUrl)
+ _verifyBucketContents(options.sourceDir, sourceFiles, options.s3BucketUrl)
+################################
+# _buildSourceFiles() function
+################################
+
+def _buildSourceFiles(sourceDir):
+ """
+ Build a list of files in a source directory.
+ @param sourceDir: Local source directory
+ @return: FilesystemList with contents of source directory
+ """
+ if not os.path.isdir(sourceDir):
+ raise ValueError("Source directory does not exist on disk.")
+ sourceFiles = FilesystemList()
+ sourceFiles.addDirContents(sourceDir)
+ return sourceFiles
+
+
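For readers without Cedar Backup installed, the effect of `FilesystemList.addDirContents()` here is roughly "recursively enumerate everything under the directory". A self-contained stand-in using only `os.walk` (the function name is hypothetical and this omits the exclusion logic FilesystemList supports):

```python
import os

# Hypothetical stand-in for _buildSourceFiles()/FilesystemList: return a
# flat, sorted list of every file beneath a source directory.
def build_source_files(source_dir):
    if not os.path.isdir(source_dir):
        raise ValueError("Source directory does not exist on disk.")
    entries = []
    for root, _dirs, files in os.walk(source_dir):
        for name in files:
            entries.append(os.path.join(root, name))
    return sorted(entries)
```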
+###############################
+# _checkSourceFiles() function
+###############################
+
+def _checkSourceFiles(sourceDir, sourceFiles):
+ """
+ Check source files, trying to guess which ones will have encoding problems.
+ @param sourceDir: Local source directory
+ @param sourceFiles: FilesystemList with contents of source directory
+ @raises ValueError: If a problem file is found
+ @see U{http://opensourcehacker.com/2011/09/16/fix-linux-filename-encodings-with-python/}
+ @see U{http://serverfault.com/questions/82821/how-to-tell-the-language-encoding-of-a-filename-on-linux}
+ @see U{http://randysofia.com/2014/06/06/aws-cli-and-your-locale/}
+ """
+ with warnings.catch_warnings():
+ warnings.simplefilter("ignore") # So we don't print unicode warnings from comparisons
+
+ encoding = Diagnostics().encoding
+
+ failed = False
+ for entry in sourceFiles:
+ result = chardet.detect(entry)
+ source = entry.decode(result["encoding"])
+ try:
+ target = source.encode(encoding)
+ if source != target:
+ logger.error("Inconsistent encoding for [%s]: got %s, but need %s" % (entry, result["encoding"], encoding))
+ failed = True
+ except UnicodeEncodeError:
+ logger.error("Inconsistent encoding for [%s]: got %s, but need %s" % (entry, result["encoding"], encoding))
+ failed = True
+
+ if not failed:
+ logger.info("Completed checking source filename encoding (no problems found).")
+ else:
+ logger.error("Some filenames have inconsistent encodings and will likely cause sync problems.")
+ logger.error("You may be able to fix this by setting a more sensible locale in your environment.")
+ logger.error("Alternately, you can rename the problem files to be valid in the indicated locale.")
+ logger.error("To ignore this warning and proceed anyway, use --ignoreWarnings")
+ raise ValueError("Some filenames have inconsistent encodings and will likely cause sync problems.")
+
+
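The core of the check above is "does each raw filename survive a round trip through the locale encoding?". A simplified sketch that skips the `chardet` guessing step and just tests whether each raw name decodes cleanly in the target encoding (the helper name is hypothetical):

```python
# Simplified sketch of _checkSourceFiles(): rather than guessing the
# source encoding with chardet, just test whether each raw filename
# decodes cleanly in the target (locale) encoding.
def find_bad_filenames(raw_names, encoding="utf-8"):
    bad = []
    for raw in raw_names:
        try:
            raw.decode(encoding)
        except UnicodeDecodeError:
            bad.append(raw)
    return bad
```

A name like `b"caf\xe9.txt"` (Latin-1 on disk) fails to decode as UTF-8, which is exactly the class of file that breaks the AWS CLI sync.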
+################################
+# _synchronizeBucket() function
+################################
+
+def _synchronizeBucket(sourceDir, s3BucketUrl):
+ """
+ Synchronize a local directory to an Amazon S3 bucket.
+ @param sourceDir: Local source directory
+ @param s3BucketUrl: Target S3 bucket URL
+ """
+ logger.info("Synchronizing local source directory up to Amazon S3.")
+ args = [ "s3", "sync", sourceDir, s3BucketUrl, "--delete", "--recursive", ]
+ result = executeCommand(AWS_COMMAND, args, returnOutput=False)[0]
+ if result != 0:
+ raise IOError("Error [%d] calling AWS CLI synchronize bucket." % result)
+
+
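Before `executeCommand` is invoked, the call reduces to assembling an argument vector. A sketch of just that assembly step, mirroring the flags used in the diff above (the helper name is hypothetical, and nothing is actually executed here):

```python
# Sketch of the argument vector handed to executeCommand() by
# _synchronizeBucket(); builds the command only, does not run aws.
def build_sync_command(source_dir, s3_bucket_url):
    return ["aws", "s3", "sync", source_dir, s3_bucket_url,
            "--delete", "--recursive"]
```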
+###################################
+# _verifyBucketContents() function
+###################################
+
+def _verifyBucketContents(sourceDir, sourceFiles, s3BucketUrl):
+ """
+ Verify that a source directory is equivalent to an Amazon S3 bucket.
+ @param sourceDir: Local source directory
+ @param sourceFiles: Filesystem list containing contents of source directory
+ @param s3BucketUrl: Target S3 bucket URL
+ """
+ # As of this writing, the documentation for the S3 API that we're using
+ # below says that up to 1000 elements at a time are returned, and that we
+ # have to manually handle pagination by looking for the IsTruncated element.
+ # However, in practice, this is not true. I have been testing with
+ # "aws-cli/1.4.4 Python/2.7.3 Linux/3.2.0-4-686-pae", installed through PIP.
+ # No matter how many items exist in my bucket and prefix, I get back a
+ # single JSON result. I've tested with buckets containing nearly 6000
+ # elements.
+ #
+ # If I turn on debugging, it's clear that underneath, something in the API
+ # is executing multiple list-object requests against AWS, and stitching
+ # results together to give me back the final JSON result. The debug output
+ # clearly includes multiple requests, and each XML response (except for the
+ # final one) contains <IsTruncated>true</IsTruncated>.
+ #
+ # This feature is not mentioned in the official changelog for any of the
+ # releases going back to 1.0.0. It appears to happen in the botocore
+ # library, but I'll admit I can't actually find the code that implements it.
+ # For now, all I can do is rely on this behavior and hope that the
+ # documentation is out-of-date. I'm not going to write code that tries to
+ # parse out IsTruncated if I can't actually test that code.
+
+ (bucket, prefix) = s3BucketUrl.replace("s3://", "").split("/", 1)
+
+ query = "Contents[].{Key: Key, Size: Size}"
+ args = [ "s3api", "list-objects", "--bucket", bucket, "--prefix", prefix, "--query", query, ]
+ (result, data) = executeCommand(AWS_COMMAND, args, returnOutput=True)
+ if result != 0:
+ raise IOError("Error [%d] calling AWS CLI verify bucket contents." % result)
+
+ contents = { }
+ for entry in json.loads("".join(data)):
+ key = entry["Key"].replace(prefix, "")
+ size = long(entry["Size"])
+ contents[key] = size
+
+ failed = False
+ for entry in sourceFiles:
+ if os.path.isfile(entry):
+ key = entry.replace(sourceDir, "")
+ size = long(os.stat(entry).st_size)
+ if key not in contents:
+ logger.error("File was apparently not uploaded: [%s]" % entry)
+ failed = True
+ else:
+ if size != contents[key]:
+ logger.error("File size differs [%s]: expected %s bytes but got %s bytes" % (entry, size, contents[key]))
+ failed = True
+
+ if not failed:
+ logger.info("Completed verifying Amazon S3 bucket contents (no problems found).")
+ else:
+ logger.error("There were differences between source directory and target S3 bucket.")
+ raise ValueError("There were differences between source directory and target S3 bucket.")
+
+
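The verification step above boils down to turning the `s3api list-objects` JSON into a `{key: size}` map with the bucket prefix stripped, then comparing against local files. That JSON-to-map step can be sketched in isolation (the function name is hypothetical; sample data below is made up for illustration):

```python
import json

# Sketch of the mapping step in _verifyBucketContents(): parse the JSON
# produced by the "Contents[].{Key: Key, Size: Size}" query and map keys
# (with the bucket prefix stripped) to their sizes.
def bucket_contents(json_text, prefix):
    contents = {}
    for entry in json.loads(json_text):
        key = entry["Key"].replace(prefix, "")
        contents[key] = int(entry["Size"])
    return contents
```

Comparing this map to the local file list catches both missing uploads (key absent) and partial uploads (size mismatch).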
#########################################################################
# Main routine
########################################################################
if __name__ == "__main__":
- result = cli()
- sys.exit(result)
+ sys.exit(cli())
Modified: cedar-backup2/trunk/doc/cback-amazons3-sync.1
===================================================================
--- cedar-backup2/trunk/doc/cback-amazons3-sync.1 2014-10-07 21:52:19 UTC (rev 1084)
+++ cedar-backup2/trunk/doc/cback-amazons3-sync.1 2014-10-07 21:56:46 UTC (rev 1085)
@@ -10,8 +10,8 @@
.\" # Author : Kenneth J. Pronovici <pro...@ie...>
.\" # Language : nroff
.\" # Project : Cedar Backup, release 2
-.\" # Revision : $Id: cback-span.1 1011 2010-07-10 23:58:29Z pronovic $
-.\" # Purpose : Manpage for cback-span script
+.\" # Revision : $Id$
+.\" # Purpose : Manpage for cback-amazons3-sync script
.\" #
.\" # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
.\"
@@ -114,16 +114,17 @@
.SH NOTES
.PP
This tool is a wrapper over the Amazon AWS CLI interface found in the aws(1)
-command.
+command. Specifically, cback\-amazons3\-sync invokes "aws s3 sync" followed by
+"aws s3api list-objects".
.PP
Cedar Backup itself is designed to run as root. However, this command can be
run safely as any user that is configured to use the Amazon AWS CLI interface.
-The aws(1) command will be executed by the same user which is executing the
-cback-amazons3-sync.
+The aws(1) command will be executed by the same user which is executing
+cback\-amazons3\-sync.
.PP
You must configure the AWS CLI interface to have a valid connection to Amazon
-S3 infrastructure before using this command. For more information about how to
-accomplish this, see the Cedar Backup user guide.
+S3 infrastructure before using cback\-amazons3\-sync. For more information
+about how to accomplish this, see the Cedar Backup user guide.
.SH SEE ALSO
cback(1)
.SH FILES
Modified: cedar-backup2/trunk/testcase/synctests.py
===================================================================
--- cedar-backup2/trunk/testcase/synctests.py 2014-10-07 21:52:19 UTC (rev 1084)
+++ cedar-backup2/trunk/testcase/synctests.py 2014-10-07 21:56:46 UTC (rev 1085)
@@ -188,6 +188,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -208,6 +209,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -228,6 +230,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -248,6 +251,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -268,6 +272,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -288,6 +293,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -308,6 +314,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -328,6 +335,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -348,6 +356,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -368,6 +377,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -388,6 +398,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -408,6 +419,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -428,6 +440,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -448,6 +461,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -468,6 +482,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -488,6 +503,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -508,6 +524,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -528,6 +545,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -548,6 +566,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -568,6 +587,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -612,6 +632,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -632,6 +653,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -652,6 +674,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -672,6 +695,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -740,6 +764,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -760,6 +785,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -780,6 +806,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -800,6 +827,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -868,6 +896,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -888,6 +917,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -908,6 +938,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -928,6 +959,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -948,6 +980,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -968,6 +1001,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -988,6 +1022,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -1008,6 +1043,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -1028,6 +1064,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -1048,6 +1085,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -1068,6 +1106,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -1088,6 +1127,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -1108,6 +1148,7 @@
self.failUnlessEqual(True, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -1128,6 +1169,7 @@
self.failUnlessEqual(True, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -1148,6 +1190,7 @@
self.failUnlessEqual(True, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -1168,6 +1211,7 @@
self.failUnlessEqual(True, options.stacktrace)
self.failUnlessEqual(False, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnlessEqual(None, options.sourceDir)
self.failUnlessEqual(None, options.s3BucketUrl)
@@ -1188,6 +1232,7 @@
self.failUnlessEqual(False, options.stacktrace)
self.failUnlessEqual(True, options.diagnostics)
self.failUnlessEqual(False, options.verifyOnly)
+ self.failUnlessEqual(False, options.ignoreWarnings)
self.failUnl...
[truncated message content] |
From: <pro...@us...> - 2014-10-07 22:29:17
Revision: 1086
http://sourceforge.net/p/cedar-backup/code/1086
Author: pronovic
Date: 2014-10-07 22:29:07 +0000 (Tue, 07 Oct 2014)
Log Message:
-----------
Start writing user manual for cback-amazons3-sync
Modified Paths:
--------------
cedar-backup2/trunk/CedarBackup2/tools/amazons3.py
cedar-backup2/trunk/manual/src/commandline.xml
cedar-backup2/trunk/manual/src/extensions.xml
Modified: cedar-backup2/trunk/CedarBackup2/tools/amazons3.py
===================================================================
--- cedar-backup2/trunk/CedarBackup2/tools/amazons3.py 2014-10-07 21:56:46 UTC (rev 1085)
+++ cedar-backup2/trunk/CedarBackup2/tools/amazons3.py 2014-10-07 22:29:07 UTC (rev 1086)
@@ -916,36 +916,37 @@
fd.write(" Cedar Backup Amazon S3 sync tool.\n")
fd.write("\n")
fd.write(" This Cedar Backup utility synchronizes a local directory to an Amazon S3\n")
- fd.write(" bucket. After the sync is complete, a validation step is takane. An\n")
- fd.write(" error is reported if the contents of the bucket do not match \n")
- fd.write(" the source directory, or if the indicated size for any file differs.\n")
+ fd.write(" bucket. After the sync is complete, a validation step is taken. An\n")
+ fd.write(" error is reported if the contents of the bucket do not match the\n")
+ fd.write(" source directory, or if the indicated size for any file differs.\n")
fd.write(" This tool is a wrapper over the AWS CLI command-line tool.\n")
fd.write("\n")
fd.write(" The following arguments are required:\n")
fd.write("\n")
- fd.write(" sourceDir The local source directory on disk (must exist)\n")
- fd.write(" s3BucketUrl The URL to the target Amazon S3 bucket\n")
+ fd.write(" sourceDir The local source directory on disk (must exist)\n")
+ fd.write(" s3BucketUrl The URL to the target Amazon S3 bucket\n")
fd.write("\n")
fd.write(" The following switches are accepted:\n")
fd.write("\n")
- fd.write(" -h, --help Display this usage/help listing\n")
- fd.write(" -V, --version Display version information\n")
- fd.write(" -b, --verbose Print verbose output as well as logging to disk\n")
- fd.write(" -q, --quiet Run quietly (display no output to the screen)\n")
- fd.write(" -l, --logfile Path to logfile (default: %s)\n" % DEFAULT_LOGFILE)
- fd.write(" -o, --owner Logfile ownership, user:group (default: %s:%s)\n" % (DEFAULT_OWNERSHIP[0], DEFAULT_OWNERSHIP[1]))
- fd.write(" -m, --mode Octal logfile permissions mode (default: %o)\n" % DEFAULT_MODE)
- fd.write(" -O, --output Record some sub-command (i.e. aws) output to the log\n")
- fd.write(" -d, --debug Write debugging information to the log (implies --output)\n")
- fd.write(" -s, --stack Dump a Python stack trace instead of swallowing exceptions\n") # exactly 80 characters in width!
- fd.write(" -D, --diagnostics Print runtime diagnostics to the screen and exit\n")
- fd.write(" -v, --verifyOnly Only verify the S3 bucket contents, do not make changes\n")
+ fd.write(" -h, --help Display this usage/help listing\n")
+ fd.write(" -V, --version Display version information\n")
+ fd.write(" -b, --verbose Print verbose output as well as logging to disk\n")
+ fd.write(" -q, --quiet Run quietly (display no output to the screen)\n")
+ fd.write(" -l, --logfile Path to logfile (default: %s)\n" % DEFAULT_LOGFILE)
+ fd.write(" -o, --owner Logfile ownership, user:group (default: %s:%s)\n" % (DEFAULT_OWNERSHIP[0], DEFAULT_OWNERSHIP[1]))
+ fd.write(" -m, --mode Octal logfile permissions mode (default: %o)\n" % DEFAULT_MODE)
+ fd.write(" -O, --output Record some sub-command (i.e. aws) output to the log\n")
+ fd.write(" -d, --debug Write debugging information to the log (implies --output)\n")
+ fd.write(" -s, --stack Dump Python stack trace instead of swallowing exceptions\n") # exactly 80 characters in width!
+ fd.write(" -D, --diagnostics Print runtime diagnostics to the screen and exit\n")
+ fd.write(" -v, --verifyOnly Only verify the S3 bucket contents, do not make changes\n")
+ fd.write(" -w, --ignoreWarnings Ignore warnings about problematic filename encodings\n")
fd.write("\n")
fd.write(" Typical usage would be something like:\n")
fd.write("\n")
- fd.write(" cback-amazons3-sync /home/myuser/subdir s3://example.com-backup/myuser\n")
+ fd.write(" cback-amazons3-sync /home/myuser s3://example.com-backup/myuser\n")
fd.write("\n")
- fd.write(" This will sync the contents of /home/myuser/subdir into the indiated bucket.\n")
+ fd.write(" This will sync the contents of /home/myuser into the indicated bucket.\n")
fd.write("\n")
Modified: cedar-backup2/trunk/manual/src/commandline.xml
===================================================================
--- cedar-backup2/trunk/manual/src/commandline.xml 2014-10-07 21:56:46 UTC (rev 1085)
+++ cedar-backup2/trunk/manual/src/commandline.xml 2014-10-07 22:29:07 UTC (rev 1086)
@@ -43,15 +43,25 @@
<title>Overview</title>
<para>
- Cedar Backup comes with two command-line programs, the
- <command>cback</command> and <command>cback-span</command> commands.
+ Cedar Backup comes with three command-line programs:
+ <command>cback</command>, <command>cback-amazons3-sync</command>, and
+ <command>cback-span</command>.
+ </para>
+
+ <para>
The <command>cback</command> command is the primary command line
interface and the only Cedar Backup program that most users will ever
need.
</para>
<para>
- Users that have a <emphasis>lot</emphasis> of data to back up —
+ The <command>cback-amazons3-sync</command> tool is used for
+ synchronizing entire directories of files up to an Amazon S3 cloud
+ storage bucket, outside of the normal Cedar Backup process.
+ </para>
+
+ <para>
+ Users who have a <emphasis>lot</emphasis> of data to back up —
more than will fit on a single CD or DVD — can use the
interactive <command>cback-span</command> tool to split their data
between multiple discs.
@@ -364,6 +374,289 @@
<!-- ################################################################# -->
+ <sect1 id="cedar-commandline-sync">
+
+ <title>The <command>cback-amazons3-sync</command> command</title>
+
+ <!-- ################################################################# -->
+
+ <sect2 id="cedar-commandline-sync-intro">
+
+ <title>Introduction</title>
+
+ <para>
+ The <command>cback-amazons3-sync</command> tool is used for
+ synchronizing entire directories of files up to an Amazon S3 cloud
+ storage bucket, outside of the normal Cedar Backup process.
+ </para>
+
+ <para>
+ This might be a good option for some types of data, as long as you
+ understand the limitations around retrieving previous versions of
+ objects that get modified or deleted as part of a sync. S3 does
+ support versioning, but it won't be quite as easy to get at those
+ previous versions as with an explicit incremental backup like
+ <command>cback</command> provides. Cedar Backup does not provide
+ any tooling that would help you retrieve previous versions.
+ </para>
+
+ <para>
+ The underlying functionality relies on the
+ <ulink url="http://aws.amazon.com/documentation/cli/">AWS CLI</ulink> toolset.
+ Before you use this extension, you need to set up your Amazon S3
+ account and configure AWS CLI as detailed in Amazon's
+ <ulink url="http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-set-up.html">setup guide</ulink>.
+ The <command>aws</command> command will be executed as the same user that
+ is executing the <command>cback-amazons3-sync</command> command, so
+ make sure you configure it as the proper user. (This is different
+ than the amazons3 extension, which is designed to execute as root
+ and switches over to the configured backup user to execute AWS CLI
+ commands.)
+ </para>
+
+ </sect2>
+
+ <!-- ################################################################# -->
+
+ <sect2 id="cedar-commandline-sync-syntax">
+
+ <title>Syntax</title>
+
+ <para>
+ The <command>cback-amazons3-sync</command> command has the following syntax:
+ </para>
+
+ <screen>
+ Usage: cback-amazons3-sync [switches] sourceDir s3bucketUrl
+
+ Cedar Backup Amazon S3 sync tool.
+
+ This Cedar Backup utility synchronizes a local directory to an Amazon S3
+ bucket. After the sync is complete, a validation step is taken. An
+ error is reported if the contents of the bucket do not match the
+ source directory, or if the indicated size for any file differs.
+ This tool is a wrapper over the AWS CLI command-line tool.
+
+ The following arguments are required:
+
+ sourceDir The local source directory on disk (must exist)
+ s3BucketUrl The URL to the target Amazon S3 bucket
+
+ The following switches are accepted:
+
+ -h, --help Display this usage/help listing
+ -V, --version Display version information
+ -b, --verbose Print verbose output as well as logging to disk
+ -q, --quiet Run quietly (display no output to the screen)
+ -l, --logfile Path to logfile (default: /var/log/cback.log)
+ -o, --owner Logfile ownership, user:group (default: root:adm)
+ -m, --mode Octal logfile permissions mode (default: 640)
+ -O, --output Record some sub-command (i.e. aws) output to the log
+ -d, --debug Write debugging information to the log (implies --output)
+ -s, --stack Dump Python stack trace instead of swallowing exceptions
+ -D, --diagnostics Print runtime diagnostics to the screen and exit
+ -v, --verifyOnly Only verify the S3 bucket contents, do not make changes
+ -w, --ignoreWarnings Ignore warnings about problematic filename encodings
+
+ Typical usage would be something like:
+
+ cback-amazons3-sync /home/myuser s3://example.com-backup/myuser
+
+ This will sync the contents of /home/myuser into the indicated bucket.
+ </screen>
+
+ </sect2>
+
+
+ <!-- ################################################################# -->
+
+ <sect2 id="cedar-commandline-sync-options">
+
+ <title>Switches</title>
+
+ <variablelist>
+
+ <varlistentry>
+ <term><option>-h</option>, <option>--help</option></term>
+ <listitem>
+ <para>Display usage/help listing.</para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-V</option>, <option>--version</option></term>
+ <listitem>
+ <para>Display version information.</para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-b</option>, <option>--verbose</option></term>
+ <listitem>
+ <para>
+ Print verbose output to the screen as well as writing to the
+ logfile. When this option is enabled, most information
+ that would normally be written to the logfile will also be
+ written to the screen.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-q</option>, <option>--quiet</option></term>
+ <listitem>
+ <para>Run quietly (display no output to the screen).</para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-l</option>, <option>--logfile</option></term>
+ <listitem>
+ <para>
+ Specify the path to an alternate logfile. The default
+ logfile is <filename>/var/log/cback.log</filename>.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-o</option>, <option>--owner</option></term>
+ <listitem>
+ <para>
+ Specify the ownership of the logfile, in the form
+ <literal>user:group</literal>. The default ownership is
+ <literal>root:adm</literal>, to match the Debian standard
+ for most logfiles. This value will only be used when
+ creating a new logfile. If the logfile already exists when
+ the <command>cback-amazons3-sync</command> command is
+ executed, it will retain its existing ownership and mode.
+ Only user and group names may be used, not numeric uid and
+ gid values.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-m</option>, <option>--mode</option></term>
+ <listitem>
+ <para>
+ Specify the permissions for the logfile, using the
+ numeric mode as in chmod(1). The default mode is
+ <literal>0640</literal> (<literal>-rw-r-----</literal>).
+ This value will only be used when creating a new logfile.
+ If the logfile already exists when the
+ <command>cback-amazons3-sync</command> command is executed,
+ it will retain its existing ownership and mode.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-O</option>, <option>--output</option></term>
+ <listitem>
+ <para>
+ Record some sub-command output to the logfile. When this
+ option is enabled, all output from system commands will be
+ logged. This might be useful for debugging or just for
+ reference.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-d</option>, <option>--debug</option></term>
+ <listitem>
+ <para>
+ Write debugging information to the logfile. This option
+ produces a high volume of output, and would generally only
+ be needed when debugging a problem. This option implies
+ the <option>--output</option> option, as well.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-s</option>, <option>--stack</option></term>
+ <listitem>
+ <para>
+ Dump a Python stack trace instead of swallowing
+ exceptions. This forces Cedar Backup to dump the entire
+ Python stack trace associated with an error, rather than
+ just propagating the last message it received back up to the
+ user interface. Under some circumstances, this is useful
+ information to include along with a bug report.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-D</option>, <option>--diagnostics</option></term>
+ <listitem>
+ <para>
+ Display runtime diagnostic information and then exit.
+ This diagnostic information is often useful when filing a
+ bug report.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-v</option>, <option>--verifyOnly</option></term>
+ <listitem>
+ <para>
+ Only verify the S3 bucket contents against the directory
+ on disk. Do not make any changes to the S3 bucket or
+ transfer any files. This is intended as a quick check
+ to see whether the sync is up-to-date.
+ </para>
+
+ <para>
+ Although no files are transferred, the tool will still
+ execute the source filename encoding check, discussed
+ below along with <option>--ignoreWarnings</option>.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><option>-w</option>, <option>--ignoreWarnings</option></term>
+ <listitem>
+ <para>
+ The AWS CLI S3 sync process is very picky about filename
+ encoding. Files that the Linux filesystem handles with no
+ problems can cause problems in S3 if the filename cannot be
+ encoded properly in your configured locale. As of this
+ writing, filenames like this will cause the sync process
+ to abort without transferring all files as expected.
+ </para>
+
+ <para>
+ To avoid confusion, the <command>cback-amazons3-sync</command>
+ tool tries to guess which files in the source directory will
+ cause problems, and refuses to execute AWS CLI S3 sync if
+ any problematic files exist. If you'd rather proceed
+ anyway, use <option>--ignoreWarnings</option>.
+ </para>
+
+ <para>
+ If problematic files are found, then you have basically
+ two options: either correct your locale (i.e. if you have
+ set <literal>LANG=C</literal>) or rename the file so it
+ can be encoded properly in your locale. The error messages
+ will tell you the expected encoding (from your locale) and
+ the actual detected encoding for the filename.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ </variablelist>
+
+ </sect2>
+
+ </sect1>
+
+ <!-- ################################################################# -->
+
<sect1 id="cedar-commandline-cbackspan">
<title>The <command>cback-span</command> command</title>
Modified: cedar-backup2/trunk/manual/src/extensions.xml
===================================================================
--- cedar-backup2/trunk/manual/src/extensions.xml 2014-10-07 21:56:46 UTC (rev 1085)
+++ cedar-backup2/trunk/manual/src/extensions.xml 2014-10-07 22:29:07 UTC (rev 1086)
@@ -113,8 +113,10 @@
<ulink url="http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-set-up.html">setup guide</ulink>.
The extension assumes that the backup is being executed as root, and
switches over to the configured backup user to run the
- <literal>aws</literal> program. So, make sure you configure the AWS
- CLI tools as the backup user and not root.
+ <command>aws</command> program. So, make sure you configure the AWS
+ CLI tools as the backup user and not root. (This is different than
+ the amazons3 sync tool extension, which exceutes AWS CLI command as
+ the same user that is running the tool.)
</para>
<para>
This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site.
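The manual text above describes <command>cback-amazons3-sync</command> as a thin wrapper over the AWS CLI. A hedged sketch of how such a wrapper might assemble its invocation (the function name is hypothetical; the real tool in CedarBackup2/tools/amazons3.py resolves the executable, logs output, and runs the encoding check first, and `--dryrun` is the standard AWS CLI flag for a no-op pass):

```python
def build_sync_command(source_dir, s3_bucket_url, verify_only=False):
    """Build an `aws s3 sync` argument list.

    Hypothetical sketch only: the real wrapper adds logging, ownership
    handling, and a post-sync validation step comparing bucket contents
    against the source directory.
    """
    args = ["aws", "s3", "sync", source_dir, s3_bucket_url]
    if verify_only:
        args.append("--dryrun")  # report differences without transferring
    return args
```

For the typical usage shown in the help text, this would produce `aws s3 sync /home/myuser s3://example.com-backup/myuser`.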
|
|
From: <pro...@us...> - 2014-10-08 00:20:02
|
Revision: 1088
http://sourceforge.net/p/cedar-backup/code/1088
Author: pronovic
Date: 2014-10-08 00:19:58 +0000 (Wed, 08 Oct 2014)
Log Message:
-----------
Release 2.24.0
Modified Paths:
--------------
cedar-backup2/trunk/CedarBackup2/release.py
cedar-backup2/trunk/Changelog
Modified: cedar-backup2/trunk/CedarBackup2/release.py
===================================================================
--- cedar-backup2/trunk/CedarBackup2/release.py 2014-10-08 00:18:15 UTC (rev 1087)
+++ cedar-backup2/trunk/CedarBackup2/release.py 2014-10-08 00:19:58 UTC (rev 1088)
@@ -34,7 +34,7 @@
AUTHOR = "Kenneth J. Pronovici"
EMAIL = "pro...@ie..."
COPYRIGHT = "2004-2011,2013,2014"
-VERSION = "2.23.3"
-DATE = "03 Oct 2014"
+VERSION = "2.24.0"
+DATE = "07 Oct 2014"
URL = "http://cedar-backup.sourceforge.net/"
Modified: cedar-backup2/trunk/Changelog
===================================================================
--- cedar-backup2/trunk/Changelog 2014-10-08 00:18:15 UTC (rev 1087)
+++ cedar-backup2/trunk/Changelog 2014-10-08 00:19:58 UTC (rev 1088)
@@ -1,4 +1,4 @@
-Version 2.23.4 unreleased
+Version 2.24.0 07 Oct 2014
* Implement a new tool called cback-amazons3-sync.
* Add support for missing --diagnostics flag in cback-span script.
|
|
From: <pro...@us...> - 2015-01-05 20:24:34
|
Revision: 1096
http://sourceforge.net/p/cedar-backup/code/1096
Author: pronovic
Date: 2015-01-05 20:24:25 +0000 (Mon, 05 Jan 2015)
Log Message:
-----------
Add optional size-limit configuration for amazons3 extension.
Modified Paths:
--------------
cedar-backup2/trunk/CREDITS
cedar-backup2/trunk/CedarBackup2/extend/amazons3.py
cedar-backup2/trunk/CedarBackup2/xmlutil.py
cedar-backup2/trunk/Changelog
cedar-backup2/trunk/manual/src/extensions.xml
cedar-backup2/trunk/testcase/amazons3tests.py
cedar-backup2/trunk/testcase/data/amazons3.conf.2
Modified: cedar-backup2/trunk/CREDITS
===================================================================
--- cedar-backup2/trunk/CREDITS 2014-11-17 21:51:30 UTC (rev 1095)
+++ cedar-backup2/trunk/CREDITS 2015-01-05 20:24:25 UTC (rev 1096)
@@ -23,7 +23,7 @@
software, as indicated in the source code itself.
Unless otherwise indicated, all Cedar Backup source code is Copyright
-(c) 2004-2011,2013,2014 Kenneth J. Pronovici and is released under the GNU
+(c) 2004-2011,2013-2015 Kenneth J. Pronovici and is released under the GNU
General Public License, version 2. The contents of the GNU General Public
License can be found in the LICENSE file, or can be downloaded from
http://www.gnu.org/.
Modified: cedar-backup2/trunk/CedarBackup2/extend/amazons3.py
===================================================================
--- cedar-backup2/trunk/CedarBackup2/extend/amazons3.py 2014-11-17 21:51:30 UTC (rev 1095)
+++ cedar-backup2/trunk/CedarBackup2/extend/amazons3.py 2015-01-05 20:24:25 UTC (rev 1096)
@@ -8,7 +8,7 @@
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
-# Copyright (c) 2014 Kenneth J. Pronovici.
+# Copyright (c) 2014-2015 Kenneth J. Pronovici.
# All rights reserved.
#
# This program is free software; you can redistribute it and/or
@@ -94,10 +94,10 @@
import shutil
# Cedar Backup modules
-from CedarBackup2.filesystem import FilesystemList
-from CedarBackup2.util import resolveCommand, executeCommand, isRunningAsRoot, changeOwnership
-from CedarBackup2.xmlutil import createInputDom, addContainerNode, addBooleanNode, addStringNode
-from CedarBackup2.xmlutil import readFirstChild, readString, readBoolean
+from CedarBackup2.filesystem import FilesystemList, BackupFileList
+from CedarBackup2.util import resolveCommand, executeCommand, isRunningAsRoot, changeOwnership, isStartOfWeek
+from CedarBackup2.xmlutil import createInputDom, addContainerNode, addBooleanNode, addStringNode, addLongNode
+from CedarBackup2.xmlutil import readFirstChild, readString, readBoolean, readLong
from CedarBackup2.actions.util import writeIndicatorFile
from CedarBackup2.actions.constants import DIR_TIME_FORMAT, STAGE_INDICATOR
@@ -130,32 +130,42 @@
- The s3Bucket value must be a non-empty string
- The encryptCommand value, if set, must be a non-empty string
+ - The full backup size limit, if set, must be a number of bytes >= 0
+ - The incremental backup size limit, if set, must be a number of bytes >= 0
@sort: __init__, __repr__, __str__, __cmp__, warnMidnite, s3Bucket
"""
- def __init__(self, warnMidnite=None, s3Bucket=None, encryptCommand=None):
+ def __init__(self, warnMidnite=None, s3Bucket=None, encryptCommand=None,
+ fullBackupSizeLimit=None, incrementalBackupSizeLimit=None):
"""
Constructor for the C{AmazonS3Config} class.
@param warnMidnite: Whether to generate warnings for crossing midnite.
@param s3Bucket: Name of the Amazon S3 bucket in which to store the data
@param encryptCommand: Command used to encrypt backup data before upload to S3
+ @param fullBackupSizeLimit: Maximum size of a full backup, in bytes
+ @param incrementalBackupSizeLimit: Maximum size of an incremental backup, in bytes
@raise ValueError: If one of the values is invalid.
"""
self._warnMidnite = None
self._s3Bucket = None
self._encryptCommand = None
+ self._fullBackupSizeLimit = None
+ self._incrementalBackupSizeLimit = None
self.warnMidnite = warnMidnite
self.s3Bucket = s3Bucket
self.encryptCommand = encryptCommand
+ self.fullBackupSizeLimit = fullBackupSizeLimit
+ self.incrementalBackupSizeLimit = incrementalBackupSizeLimit
def __repr__(self):
"""
Official string representation for class instance.
"""
- return "AmazonS3Config(%s, %s, %s)" % (self.warnMidnite, self.s3Bucket, self.encryptCommand)
+ return "AmazonS3Config(%s, %s, %s, %s, %s)" % (self.warnMidnite, self.s3Bucket, self.encryptCommand,
+ self.fullBackupSizeLimit, self.incrementalBackupSizeLimit)
def __str__(self):
"""
@@ -186,6 +196,16 @@
return -1
else:
return 1
+ if self.fullBackupSizeLimit != other.fullBackupSizeLimit:
+ if self.fullBackupSizeLimit < other.fullBackupSizeLimit:
+ return -1
+ else:
+ return 1
+ if self.incrementalBackupSizeLimit != other.incrementalBackupSizeLimit:
+ if self.incrementalBackupSizeLimit < other.incrementalBackupSizeLimit:
+ return -1
+ else:
+ return 1
return 0
def _setWarnMidnite(self, value):
@@ -234,9 +254,59 @@
"""
return self._encryptCommand
+ def _setFullBackupSizeLimit(self, value):
+ """
+ Property target used to set the full backup size limit.
+ The value must be an integer >= 0.
+ @raise ValueError: If the value is not valid.
+ """
+ if value is None:
+ self._fullBackupSizeLimit = None
+ else:
+ try:
+ value = int(value)
+ except TypeError:
+ raise ValueError("Full backup size limit must be an integer >= 0.")
+ if value < 0:
+ raise ValueError("Full backup size limit must be an integer >= 0.")
+ self._fullBackupSizeLimit = value
+
+ def _getFullBackupSizeLimit(self):
+ """
+ Property target used to get the full backup size limit.
+ """
+ return self._fullBackupSizeLimit
+
+ def _setIncrementalBackupSizeLimit(self, value):
+ """
+ Property target used to set the incremental backup size limit.
+ The value must be an integer >= 0.
+ @raise ValueError: If the value is not valid.
+ """
+ if value is None:
+ self._incrementalBackupSizeLimit = None
+ else:
+ try:
+ value = int(value)
+ except TypeError:
+ raise ValueError("Incremental backup size limit must be an integer >= 0.")
+ if value < 0:
+ raise ValueError("Incremental backup size limit must be an integer >= 0.")
+ self._incrementalBackupSizeLimit = value
+
+ def _getIncrementalBackupSizeLimit(self):
+ """
+ Property target used to get the incremental backup size limit.
+ """
+ return self._incrementalBackupSizeLimit
+
warnMidnite = property(_getWarnMidnite, _setWarnMidnite, None, "Whether to generate warnings for crossing midnite.")
s3Bucket = property(_getS3Bucket, _setS3Bucket, None, doc="Amazon S3 Bucket in which to store data")
encryptCommand = property(_getEncryptCommand, _setEncryptCommand, None, doc="Command used to encrypt data before upload to S3")
+ fullBackupSizeLimit = property(_getFullBackupSizeLimit, _setFullBackupSizeLimit, None,
+ doc="Maximum size of a full backup, in bytes")
+ incrementalBackupSizeLimit = property(_getIncrementalBackupSizeLimit, _setIncrementalBackupSizeLimit, None,
+ doc="Maximum size of an incremental backup, in bytes")
########################################################################
@@ -379,9 +449,11 @@
We add the following fields to the document::
- warnMidnite //cb_config/amazons3/warn_midnite
- s3Bucket //cb_config/amazons3/s3_bucket
- encryptCommand //cb_config/amazons3/encrypt
+ warnMidnite //cb_config/amazons3/warn_midnite
+ s3Bucket //cb_config/amazons3/s3_bucket
+ encryptCommand //cb_config/amazons3/encrypt
+ fullBackupSizeLimit //cb_config/amazons3/full_size_limit
+ incrementalBackupSizeLimit //cb_config/amazons3/incr_size_limit
@param xmlDom: DOM tree as from C{impl.createDocument()}.
@param parentNode: Parent that the section should be appended to.
@@ -391,6 +463,8 @@
addBooleanNode(xmlDom, sectionNode, "warn_midnite", self.amazons3.warnMidnite)
addStringNode(xmlDom, sectionNode, "s3_bucket", self.amazons3.s3Bucket)
addStringNode(xmlDom, sectionNode, "encrypt", self.amazons3.encryptCommand)
+ addLongNode(xmlDom, sectionNode, "full_size_limit", self.amazons3.fullBackupSizeLimit)
+ addLongNode(xmlDom, sectionNode, "incr_size_limit", self.amazons3.incrementalBackupSizeLimit)
def _parseXmlData(self, xmlData):
"""
@@ -414,9 +488,11 @@
We read the following individual fields::
- warnMidnite //cb_config/amazons3/warn_midnite
- s3Bucket //cb_config/amazons3/s3_bucket
- encryptCommand //cb_config/amazons3/encrypt
+ warnMidnite //cb_config/amazons3/warn_midnite
+ s3Bucket //cb_config/amazons3/s3_bucket
+ encryptCommand //cb_config/amazons3/encrypt
+ fullBackupSizeLimit //cb_config/amazons3/full_size_limit
+ incrementalBackupSizeLimit //cb_config/amazons3/incr_size_limit
@param parent: Parent node to search beneath.
@@ -430,6 +506,8 @@
amazons3.warnMidnite = readBoolean(section, "warn_midnite")
amazons3.s3Bucket = readString(section, "s3_bucket")
amazons3.encryptCommand = readString(section, "encrypt")
+ amazons3.fullBackupSizeLimit = readLong(section, "full_size_limit")
+ amazons3.incrementalBackupSizeLimit = readLong(section, "incr_size_limit")
return amazons3
@@ -468,6 +546,7 @@
raise ValueError("Cedar Backup configuration is not properly filled in.")
local = LocalConfig(xmlPath=configPath)
stagingDirs = _findCorrectDailyDir(options, config, local)
+ _applySizeLimits(options, config, local, stagingDirs)
_writeToAmazonS3(config, local, stagingDirs)
_writeStoreIndicator(config, stagingDirs)
logger.info("Executed the amazons3 extended action successfully.")
@@ -534,6 +613,47 @@
##############################
+# _applySizeLimits() function
+##############################
+
+def _applySizeLimits(options, config, local, stagingDirs):
+ """
+ Apply size limits, throwing an exception if any limits are exceeded.
+
+ Size limits are optional. If a limit is set to None, it does not apply.
+ The full size limit applies if the full option is set or if today is the
+ start of the week. The incremental size limit applies otherwise. Limits
+ are applied to the total size of all the relevant staging directories.
+
+ @param options: Options object.
+ @param config: Config object.
+ @param local: Local config object.
+ @param stagingDirs: Dictionary mapping directory path to date suffix.
+
+ @raise ValueError: Under many generic error conditions
+ @raise ValueError: If a size limit has been exceeded
+ """
+ if options.full or isStartOfWeek(config.options.startingDay):
+ logger.debug("Using Amazon S3 size limit for full backups.")
+ limit = local.amazons3.fullBackupSizeLimit
+ else:
+ logger.debug("Using Amazon S3 size limit for incremental backups.")
+ limit = local.amazons3.incrementalBackupSizeLimit
+ if limit is None:
+ logger.debug("No Amazon S3 size limit will be applied.")
+ else:
+ logger.debug("Amazon S3 size limit is: %d bytes" % limit)
+ contents = BackupFileList()
+ for stagingDir in stagingDirs:
+ contents.addDir(stagingDir)
+ total = contents.totalSize()
+ logger.debug("Amazon S3 backup size is: %d bytes" % total)
+ if total > limit:
+ logger.debug("Amazon S3 size limit exceeded: %.0f bytes > %d bytes" % (total, limit))
+ raise ValueError("Amazon S3 size limit exceeded: %.0f bytes > %d bytes" % (total, limit))
+
+
+##############################
# _writeToAmazonS3() function
##############################
Modified: cedar-backup2/trunk/CedarBackup2/xmlutil.py
===================================================================
--- cedar-backup2/trunk/CedarBackup2/xmlutil.py 2014-11-17 21:51:30 UTC (rev 1095)
+++ cedar-backup2/trunk/CedarBackup2/xmlutil.py 2015-01-05 20:24:25 UTC (rev 1096)
@@ -246,6 +246,26 @@
else:
return int(result)
+def readLong(parent, name):
+ """
+ Returns long integer contents of the first child with a given name immediately
+ beneath the parent.
+
+ By "immediately beneath" the parent, we mean from among nodes that are
+ direct children of the passed-in parent node.
+
+ @param parent: Parent node to search beneath.
+ @param name: Name of node to search for.
+
+ @return: Long integer contents of node or C{None} if no matching nodes are found.
+ @raise ValueError: If the string at the location can't be converted to an integer.
+ """
+ result = readString(parent, name)
+ if result is None:
+ return None
+ else:
+ return long(result)
+
def readFloat(parent, name):
"""
Returns float contents of the first child with a given name immediately
@@ -353,8 +373,30 @@
if nodeValue is None:
return addStringNode(xmlDom, parentNode, nodeName, None)
else:
- return addStringNode(xmlDom, parentNode, nodeName, "%d" % nodeValue)
+ return addStringNode(xmlDom, parentNode, nodeName, "%d" % nodeValue) # %d works for both int and long
+def addLongNode(xmlDom, parentNode, nodeName, nodeValue):
+ """
+ Adds a text node as the next child of a parent, to contain a long integer.
+
+ If the C{nodeValue} is None, then the node will be created, but will be
+ empty (i.e. will contain no text node child).
+
+ The integer will be converted to a string using "%d". The result will be
+ added to the document via L{addStringNode}.
+
+ @param xmlDom: DOM tree as from C{impl.createDocument()}.
+ @param parentNode: Parent node to create child for.
+ @param nodeName: Name of the new container node.
+ @param nodeValue: The value to put into the node.
+
+ @return: Reference to the newly-created node.
+ """
+ if nodeValue is None:
+ return addStringNode(xmlDom, parentNode, nodeName, None)
+ else:
+ return addStringNode(xmlDom, parentNode, nodeName, "%d" % nodeValue) # %d works for both int and long
+
def addBooleanNode(xmlDom, parentNode, nodeName, nodeValue):
"""
Adds a text node as the next child of a parent, to contain a boolean.
Modified: cedar-backup2/trunk/Changelog
===================================================================
--- cedar-backup2/trunk/Changelog 2014-11-17 21:51:30 UTC (rev 1095)
+++ cedar-backup2/trunk/Changelog 2015-01-05 20:24:25 UTC (rev 1096)
@@ -1,3 +1,7 @@
+Version 2.24.2 unreleased
+
+ * Add optional size-limit configuration for amazons3 extension.
+
Version 2.24.1 07 Oct 2014
* Implement a new tool called cback-amazons3-sync.
Modified: cedar-backup2/trunk/manual/src/extensions.xml
===================================================================
--- cedar-backup2/trunk/manual/src/extensions.xml 2014-11-17 21:51:30 UTC (rev 1095)
+++ cedar-backup2/trunk/manual/src/extensions.xml 2015-01-05 20:24:25 UTC (rev 1096)
@@ -115,11 +115,28 @@
switches over to the configured backup user to run the
<command>aws</command> program. So, make sure you configure the AWS
CLI tools as the backup user and not root. (This is different than
- the amazons3 sync tool extension, which exceutes AWS CLI command as
+ the amazons3 sync tool extension, which executes AWS CLI commands as
the same user that is running the tool.)
</para>
<para>
+ When using physical media via the standard store action, there is an
+ implicit limit to the size of a backup, since a backup must fit on a
+ single disc. Since there is no physical media, no such limit exists
+ for Amazon S3 backups. This leaves open the possibility that Cedar
+ Backup might construct an unexpectedly large backup that the
+ administrator is not aware of. Over time, this might become
+ expensive, either in terms of network bandwidth or in terms of Amazon
+ S3 storage and I/O charges. To mitigate this risk, set a reasonable
+ maximum size using the configuration elements shown below. If the
+ backup fails, you have a chance to review what made the backup larger
+ than you expected, and you can either correct the problem (e.g., remove
+ a large temporary directory that got inadvertently included in the
+ backup) or change configuration to take into account the new "normal"
+ maximum size.
+ </para>
+
+ <para>
You can optionally configure Cedar Backup to encrypt data before
sending it to S3. To do that, provide a complete command line using
the <literal>${input}</literal> and <literal>${output}</literal>
@@ -251,7 +268,39 @@
</para>
</listitem>
</varlistentry>
+
+ <varlistentry>
+ <term><literal>full_size_limit</literal></term>
+ <listitem>
+ <para>Maximum size of a full backup, in bytes</para>
+ <para>
+ If this field is provided, then a size limit will be applied
+ to full backups. If the total size of the selected staging
+ directory is greater than the limit, then the backup will
+ fail.
+ </para>
+ <para>
+ <emphasis>Restrictions:</emphasis> If provided, must be an integer greater than zero.
+ </para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term><literal>incr_size_limit</literal></term>
+ <listitem>
+ <para>Maximum size of an incremental backup, in bytes</para>
+ <para>
+ If this field is provided, then a size limit will be applied
+ to incremental backups. If the total size of the selected
+ staging directory is greater than the limit, then the backup
+ will fail.
+ </para>
+ <para>
+ <emphasis>Restrictions:</emphasis> If provided, must be an integer greater than zero.
+ </para>
+ </listitem>
+ </varlistentry>
+
</variablelist>
</sect1>
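Taken together, the two new fields slot into the amazons3 extension configuration like this. The <literal>full_size_limit</literal> and <literal>incr_size_limit</literal> element names come from the varlistentry terms above; the surrounding element names and values are illustrative assumptions, not copied from the manual.

```xml
<amazons3>
   <warn_midnite>Y</warn_midnite>
   <s3_bucket>example-bucket</s3_bucket>
   <full_size_limit>7516192768</full_size_limit>   <!-- 7 GB cap on full backups -->
   <incr_size_limit>1073741824</incr_size_limit>   <!-- 1 GB cap on incrementals -->
</amazons3>
```

With this configuration, a full backup whose staging directory exceeds 7 GB fails rather than silently uploading, giving the administrator a chance to investigate.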
Modified: cedar-backup2/trunk/testcase/amazons3tests.py
===================================================================
--- cedar-backup2/trunk/testcase/amazons3tests.py 2014-11-17 21:51:30 UTC (rev 1095)
+++ cedar-backup2/trunk/testcase/amazons3tests.py 2015-01-05 20:24:25 UTC (rev 1096)
@@ -9,7 +9,7 @@
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
-# Copyright (c) 2014 Kenneth J. Pronovici.
+# Copyright (c) 2014-2015 Kenneth J. Pronovici.
# All rights reserved.
#
# This program is free software; you can redistribute it and/or
@@ -145,49 +145,40 @@
"""
Test constructor with no values filled in.
"""
- amazons3 = AmazonS3Config()
+ amazons3 = AmazonS3Config()
self.failUnlessEqual(False, amazons3.warnMidnite)
self.failUnlessEqual(None, amazons3.s3Bucket)
self.failUnlessEqual(None, amazons3.encryptCommand)
+ self.failUnlessEqual(None, amazons3.fullBackupSizeLimit)
+ self.failUnlessEqual(None, amazons3.incrementalBackupSizeLimit)
def testConstructor_002(self):
"""
Test constructor with all values filled in, with valid values.
"""
- amazons3 = AmazonS3Config(True, "bucket", "encrypt")
+ amazons3 = AmazonS3Config(True, "bucket", "encrypt", 1, 2)
self.failUnlessEqual(True, amazons3.warnMidnite)
self.failUnlessEqual("bucket", amazons3.s3Bucket)
self.failUnlessEqual("encrypt", amazons3.encryptCommand)
+ self.failUnlessEqual(1L, amazons3.fullBackupSizeLimit)
+ self.failUnlessEqual(2L, amazons3.incrementalBackupSizeLimit)
def testConstructor_003(self):
"""
- Test assignment of s3Bucket attribute, None value.
- """
- amazons3 = AmazonS3Config(warnMidnite=True, s3Bucket="bucket", encryptCommand="encrypt")
- self.failUnlessEqual(True, amazons3.warnMidnite)
- self.failUnlessEqual("bucket", amazons3.s3Bucket)
- self.failUnlessEqual("encrypt", amazons3.encryptCommand)
- amazons3.s3Bucket = None
- self.failUnlessEqual(True, amazons3.warnMidnite)
- self.failUnlessEqual(None, amazons3.s3Bucket)
- self.failUnlessEqual("encrypt", amazons3.encryptCommand)
-
- def testConstructor_004(self):
- """
Test assignment of warnMidnite attribute, valid value (real boolean).
"""
- amazons3 = AmazonS3Config()
+ amazons3 = AmazonS3Config()
self.failUnlessEqual(False, amazons3.warnMidnite)
amazons3.warnMidnite = True
self.failUnlessEqual(True, amazons3.warnMidnite)
amazons3.warnMidnite = False
self.failUnlessEqual(False, amazons3.warnMidnite)
- def testConstructor_005(self):
+ def testConstructor_004(self):
"""
Test assignment of warnMidnite attribute, valid value (expression).
"""
- amazons3 = AmazonS3Config()
+ amazons3 = AmazonS3Config()
self.failUnlessEqual(False, amazons3.warnMidnite)
amazons3.warnMidnite = 0
self.failUnlessEqual(False, amazons3.warnMidnite)
@@ -200,11 +191,20 @@
amazons3.warnMidnite = 3
self.failUnlessEqual(True, amazons3.warnMidnite)
+ def testConstructor_005(self):
+ """
+ Test assignment of s3Bucket attribute, None value.
+ """
+ amazons3 = AmazonS3Config(s3Bucket="bucket")
+ self.failUnlessEqual("bucket", amazons3.s3Bucket)
+ amazons3.s3Bucket = None
+ self.failUnlessEqual(None, amazons3.s3Bucket)
+
def testConstructor_006(self):
"""
Test assignment of s3Bucket attribute, valid value.
"""
- amazons3 = AmazonS3Config()
+ amazons3 = AmazonS3Config()
self.failUnlessEqual(None, amazons3.s3Bucket)
amazons3.s3Bucket = "bucket"
self.failUnlessEqual("bucket", amazons3.s3Bucket)
@@ -213,30 +213,111 @@
"""
Test assignment of s3Bucket attribute, invalid value (empty).
"""
- amazons3 = AmazonS3Config()
+ amazons3 = AmazonS3Config()
self.failUnlessEqual(None, amazons3.s3Bucket)
self.failUnlessAssignRaises(ValueError, amazons3, "s3Bucket", "")
self.failUnlessEqual(None, amazons3.s3Bucket)
def testConstructor_008(self):
"""
+ Test assignment of encryptCommand attribute, None value.
+ """
+ amazons3 = AmazonS3Config(encryptCommand="encrypt")
+ self.failUnlessEqual("encrypt", amazons3.encryptCommand)
+ amazons3.encryptCommand = None
+ self.failUnlessEqual(None, amazons3.encryptCommand)
+
+ def testConstructor_009(self):
+ """
Test assignment of encryptCommand attribute, valid value.
"""
- amazons3 = AmazonS3Config()
+ amazons3 = AmazonS3Config()
self.failUnlessEqual(None, amazons3.encryptCommand)
amazons3.encryptCommand = "encrypt"
self.failUnlessEqual("encrypt", amazons3.encryptCommand)
- def testConstructor_009(self):
+ def testConstructor_010(self):
"""
Test assignment of encryptCommand attribute, invalid value (empty).
"""
- amazons3 = AmazonS3Config()
+ amazons3 = AmazonS3Config()
self.failUnlessEqual(None, amazons3.encryptCommand)
self.failUnlessAssignRaises(ValueError, amazons3, "encryptCommand", "")
self.failUnlessEqual(None, amazons3.encryptCommand)
+ def testConstructor_011(self):
+ """
+ Test assignment of fullBackupSizeLimit attribute, None value.
+ """
+ amazons3 = AmazonS3Config(fullBackupSizeLimit=100)
+ self.failUnlessEqual(100L, amazons3.fullBackupSizeLimit)
+ amazons3.fullBackupSizeLimit = None
+ self.failUnlessEqual(None, amazons3.fullBackupSizeLimit)
+ def testConstructor_012(self):
+ """
+ Test assignment of fullBackupSizeLimit attribute, valid long value.
+ """
+ amazons3 = AmazonS3Config()
+ self.failUnlessEqual(None, amazons3.fullBackupSizeLimit)
+ amazons3.fullBackupSizeLimit = 7516192768L
+ self.failUnlessEqual(7516192768L, amazons3.fullBackupSizeLimit)
+
+ def testConstructor_013(self):
+ """
+ Test assignment of fullBackupSizeLimit attribute, valid string value.
+ """
+ amazons3 = AmazonS3Config()
+ self.failUnlessEqual(None, amazons3.fullBackupSizeLimit)
+ amazons3.fullBackupSizeLimit = "7516192768"
+ self.failUnlessEqual(7516192768L, amazons3.fullBackupSizeLimit)
+
+ def testConstructor_014(self):
+ """
+ Test assignment of fullBackupSizeLimit attribute, invalid value.
+ """
+ amazons3 = AmazonS3Config()
+ self.failUnlessEqual(None, amazons3.fullBackupSizeLimit)
+ self.failUnlessAssignRaises(ValueError, amazons3, "fullBackupSizeLimit", "xxx")
+ self.failUnlessEqual(None, amazons3.fullBackupSizeLimit)
+
+ def testConstructor_015(self):
+ """
+ Test assignment of incrementalBackupSizeLimit attribute, None value.
+ """
+ amazons3 = AmazonS3Config(incrementalBackupSizeLimit=100)
+ self.failUnlessEqual(100L, amazons3.incrementalBackupSizeLimit)
+ amazons3.incrementalBackupSizeLimit = None
+ self.failUnlessEqual(None, amazons3.incrementalBackupSizeLimit)
+
+ def testConstructor_016(self):
+ """
+ Test assignment of incrementalBackupSizeLimit attribute, valid long value.
+ """
+ amazons3 = AmazonS3Config()
+ self.failUnlessEqual(None, amazons3.incrementalBackupSizeLimit)
+ amazons3.incrementalBackupSizeLimit = 7516192768L
+ self.failUnlessEqual(7516192768L, amazons3.incrementalBackupSizeLimit)
+
+ def testConstructor_017(self):
+ """
+ Test assignment of incrementalBackupSizeLimit attribute, valid string value.
+ """
+ amazons3 = AmazonS3Config()
+ self.failUnlessEqual(None, amazons3.incrementalBackupSizeLimit)
+ amazons3.incrementalBackupSizeLimit = "7516192768"
+ self.failUnlessEqual(7516192768L, amazons3.incrementalBackupSizeLimit)
+
+ def testConstructor_018(self):
+ """
+ Test assignment of incrementalBackupSizeLimit attribute, invalid value.
+ """
+ amazons3 = AmazonS3Config()
+ self.failUnlessEqual(None, amazons3.incrementalBackupSizeLimit)
+ self.failUnlessAssignRaises(ValueError, amazons3, "incrementalBackupSizeLimit", "xxx")
+ self.failUnlessEqual(None, amazons3.incrementalBackupSizeLimit)
+
+
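The new size-limit tests imply a setter that accepts None, positive integers, and numeric strings, and raises ValueError for anything else. A minimal sketch of that behavior, written in Python 3 style as a hypothetical stand-in (the real AmazonS3Config implementation may differ):

```python
class AmazonS3SizeLimits:
    """Hypothetical stand-in illustrating the validation the tests exercise."""

    def __init__(self, fullBackupSizeLimit=None):
        self.fullBackupSizeLimit = fullBackupSizeLimit

    @property
    def fullBackupSizeLimit(self):
        return self._fullBackupSizeLimit

    @fullBackupSizeLimit.setter
    def fullBackupSizeLimit(self, value):
        if value is None:
            self._fullBackupSizeLimit = None  # None disables the limit
            return
        try:
            value = int(value)  # accepts ints and numeric strings alike
        except (TypeError, ValueError):
            raise ValueError("Size limit must be an integer.")
        if value <= 0:
            raise ValueError("Size limit must be greater than zero.")
        self._fullBackupSizeLimit = value
```

Because a failed assignment raises before the backing field is touched, the attribute keeps its previous value, which is exactly what the `failUnlessAssignRaises` tests above verify.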
############################
# Test comparison operators
############################
@@ -259,8 +340,8 @@
"""
Test comparison of two identical objects, all attributes non-None.
"""
- amazons31 = AmazonS3Config(True, "bucket", "encrypt")
- amazons32 = AmazonS3Config(True, "bucket", "encrypt")
+ amazons31 = AmazonS3Config(True, "bucket", "encrypt", 1, 2)
+ amazons32 = AmazonS3Config(True, "bucket", "encrypt", 1, 2)
self.failUnlessEqual(amazons31, amazons32)
self.failUnless(amazons31 == amazons32)
self.failUnless(not amazons31 < amazons32)
@@ -301,8 +382,8 @@
"""
Test comparison of two differing objects, s3Bucket differs.
"""
- amazons31 = AmazonS3Config(True, "bucket1", "encrypt")
- amazons32 = AmazonS3Config(True, "bucket2", "encrypt")
+ amazons31 = AmazonS3Config(s3Bucket="bucket1")
+ amazons32 = AmazonS3Config(s3Bucket="bucket2")
self.failIfEqual(amazons31, amazons32)
self.failUnless(not amazons31 == amazons32)
self.failUnless(amazons31 < amazons32)
@@ -329,8 +410,8 @@
"""
Test comparison of two differing objects, encryptCommand differs.
"""
- amazons31 = AmazonS3Config(True, "bucket", "encrypt1")
- amazons32 = AmazonS3Config(True, "bucket", "encrypt2")
+ amazons31 = AmazonS3Config(encryptCommand="encrypt1")
+ amazons32 = AmazonS3Config(encryptCommand="encrypt2")
self.failIfEqual(amazons31, amazons32)
self.failUnless(not amazons31 == amazons32)
self.failUnless(amazons31 < amazons32)
@@ -339,7 +420,63 @@
self.failUnless(not amazons31 >= amazons32)
self.failUnless(amazons31 != amazons32)
+ def testComparison_008(self):
+ """
+ Test comparison of two differing objects, fullBackupSizeLimit differs (one None).
+ """
+ amazons31 = AmazonS3Config()
+ amazons32 = AmazonS3Config(fullBackupSizeLimit=1L)
+ self.failIfEqual(amazons31, amazons32)
+ self.failUnless(not amazons31 == amazons32)
+ self.failUnless(amazons31 < amazons32)
+ self.failUnless(amazons31 <= amazons32)
+ self.failUnless(not amazons31 > amazons32)
+ self.failUnless(not amazons31 >= amazons32)
+ self.failUnless(amazons31 != amazons32)
+ def testComparison_009(self):
+ """
+ Test comparison of two differing objects, fullBackupSizeLimit differs.
+ """
+ amazons31 = AmazonS3Config(fullBackupSizeLimit=1L)
+ amazons32 = AmazonS3Config(fullBackupSizeLimit=2L)
+ self.failIfEqual(amazons31, amazons32)
+ self.failUnless(not amazons31 == amazons32)
+ self.failUnless(amazons31 < amazons32)
+ self.failUnless(amazons31 <= amazons32)
+ self.failUnless(not amazons31 > amazons32)
+ self.failUnless(not amazons31 >= amazons32)
+ self.failUnless(amazons31 != amazons32)
+
+ def...
[truncated message content] |