[cedar-backup-svn] SF.net SVN: cedar-backup:[1073] cedar-backup2/trunk
From: <pro...@us...> - 2014-10-03 16:43:25
Revision: 1073
http://sourceforge.net/p/cedar-backup/code/1073
Author: pronovic
Date: 2014-10-03 16:43:17 +0000 (Fri, 03 Oct 2014)
Log Message:
-----------
Finish documentation
Modified Paths:
--------------
cedar-backup2/trunk/CedarBackup2/extend/amazons3.py
cedar-backup2/trunk/manual/src/depends.xml
cedar-backup2/trunk/manual/src/extensions.xml
Modified: cedar-backup2/trunk/CedarBackup2/extend/amazons3.py
===================================================================
--- cedar-backup2/trunk/CedarBackup2/extend/amazons3.py 2014-10-03 16:26:45 UTC (rev 1072)
+++ cedar-backup2/trunk/CedarBackup2/extend/amazons3.py 2014-10-03 16:43:17 UTC (rev 1073)
@@ -55,8 +55,8 @@
and not root.
You can optionally configure Cedar Backup to encrypt data before sending it
-to S3. To do that, provide a complete command line using the ${input} and
-${output} variables to represent the original input file and the encrypted
+to S3. To do that, provide a complete command line using the C{${input}} and
+C{${output}} variables to represent the original input file and the encrypted
output file. This command will be executed as the backup user.
For instance, you can use something like this with GPG::
@@ -64,11 +64,13 @@
/usr/bin/gpg -c --no-use-agent --batch --yes --passphrase-file /home/backup/.passphrase -o ${output} ${input}
The GPG mechanism depends on a strong passphrase for security. One way to
-generate a strong passphrase is using your system random number generator, i.e.
-C{dd if=/dev/urandom count=20 bs=1 | xxd -ps}. (See U{StackExchange
-http://security.stackexchange.com/questions/14867/gpg-encryption-security>} for
-more details about that advice.) If you decide to use encryption, make sure you
-save off the passphrase in a safe place, so you can get at your backup data
+generate a strong passphrase is using your system random number generator, i.e.::
+
+ dd if=/dev/urandom count=20 bs=1 | xxd -ps
+
+(See U{StackExchange <http://security.stackexchange.com/questions/14867/gpg-encryption-security>}
+for more details about that advice.) If you decide to use encryption, make sure
+you save off the passphrase in a safe place, so you can get at your backup data
later if you need to. And obviously, make sure to set permissions on the
passphrase file so it can only be read by the backup user.
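The two pieces of advice above (generate a strong passphrase, then lock down its permissions) can be sketched as a short shell session. The file name below is an assumption; use whatever path you reference in your encrypt command, owned by the backup user:

```shell
# Generate a strong 20-byte passphrase, rendered as 40 hex characters,
# then restrict permissions so only the owner (the backup user) can read it.
dd if=/dev/urandom count=20 bs=1 2>/dev/null | xxd -ps > .passphrase
chmod 600 .passphrase
```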
Modified: cedar-backup2/trunk/manual/src/depends.xml
===================================================================
--- cedar-backup2/trunk/manual/src/depends.xml 2014-10-03 16:26:45 UTC (rev 1072)
+++ cedar-backup2/trunk/manual/src/depends.xml 2014-10-03 16:43:17 UTC (rev 1073)
@@ -556,27 +556,45 @@
</varlistentry>
<varlistentry>
- <term><command>s3cmd</command></term>
+ <term><command>split</command></term>
<listitem>
<para>
- The <command>s3cmd</command> command is used by the Amazon S3
- extension to communicate with Amazon AWS. Cedar Backup requires
- version 1.5.0-rc1 or later. Earlier versions have problems
- uploading large files in the background (non-TTY), and there
- was also a syntax change that the extension relies on.
+ The <command>split</command> command is used by the split
+ extension to split up large files.
</para>
+
+ <para>
+ This command is typically part of the core operating system
+ install and is not distributed in a separate package.
+ </para>
+
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><command>AWS CLI</command></term>
+ <listitem>
+
+ <para>
+ AWS CLI is Amazon's official command-line tool for interacting
+ with the Amazon Web Services infrastructure. Cedar Backup uses
+ AWS CLI to copy backup data up to Amazon S3 cloud storage.
+ </para>
+
+ <para>
+ The initial implementation of the amazons3 extension was written
+ using AWS CLI 1.4. As of this writing, not all Linux distributions
+ include a package for this version. On these platforms, the
+ easiest way to install it is via pip: <code>apt-get install python-pip</code>,
+ and then <code>pip install awscli</code>. The Debian package includes
+ an appropriate dependency starting with the jessie release.
+ </para>
<para>
- As of this writing, the version of s3cmd in Debian wheezy is
- not new enough, and it is not possible to pin the correct
- version from testing or unstable due to a generated
- dependency on python:all (which does not exist in wheezy).
- It is possible to force dpkg to install the package anyway:
- download the appropriate <literal>.deb</literal> file, and
- then install with <literal>dpkg --force-all</literal>.
- Alternately, the Cedar Solutions APT source contains a
- backported version of 1.5.0~rc1-2.
+ After you install AWS CLI, you need to configure your connection
+ to AWS with an appropriate access key id and secret access key. Amazon provides a good
+ <ulink url="http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-set-up.html">setup guide</ulink>.
</para>
<informaltable>
@@ -592,38 +610,15 @@
<tbody>
<row>
<entry>upstream</entry>
- <entry><ulink url="http://s3tools.org/s3cmd"/></entry>
+ <entry><ulink url="http://aws.amazon.com/documentation/cli/"/></entry>
</row>
</tbody>
</tgroup>
</informaltable>
- <para>
- If you can't find a package for your system, install from the package
- source, using the <quote>upstream</quote> link.
- </para>
-
</listitem>
</varlistentry>
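As a non-interactive alternative to walking through Amazon's setup guide, the credentials file can be written directly. A minimal sketch, assuming the standard `~/.aws` location; the key values are placeholders, and this should run as the backup user rather than root:

```shell
# Create the AWS CLI credentials file for the backup user.
# Substitute the access key pair generated in the AWS console.
mkdir -p "$HOME/.aws"
cat > "$HOME/.aws/credentials" <<'EOF'
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
EOF
chmod 600 "$HOME/.aws/credentials"
```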
-
- <varlistentry>
- <term><command>split</command></term>
- <listitem>
-
- <para>
- The <command>split</command> command is used by the split
- extension to split up large files.
- </para>
-
- <para>
- This command is typically part of the core operating system
- install and is not distributed in a separate package.
- </para>
-
- </listitem>
- </varlistentry>
-
</variablelist>
</simplesect>
Modified: cedar-backup2/trunk/manual/src/extensions.xml
===================================================================
--- cedar-backup2/trunk/manual/src/extensions.xml 2014-10-03 16:26:45 UTC (rev 1072)
+++ cedar-backup2/trunk/manual/src/extensions.xml 2014-10-03 16:43:17 UTC (rev 1073)
@@ -107,35 +107,50 @@
<para>
The underlying functionality relies on the
- <ulink url="http://s3tools.org/">Amazon S3 Tools</ulink> package, version
- 1.5.0-rc1 or newer. It is a very thin wrapper around the
- <literal>s3cmd put</literal> command. Before you use this extension,
- you need to set up your Amazon S3 account and configure
- <literal>s3cmd</literal> as detailed in the
- <ulink url="http://s3tools.org/s3cmd-howto">HOWTO</ulink>.
+ <ulink url="http://aws.amazon.com/documentation/cli/">AWS CLI</ulink> toolset.
+ Before you use this extension, you need to set up your Amazon S3
+ account and configure AWS CLI as detailed in Amazon's
+ <ulink url="http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-set-up.html">setup guide</ulink>.
The extension assumes that the backup is being executed as root, and
switches over to the configured backup user to run the
- <literal>s3cmd</literal> program. So, make sure you configure the S3
- tools as the backup user and not root.
+ <literal>aws</literal> program. So, make sure you configure the AWS
+ CLI tools as the backup user and not root.
</para>
<para>
- When configuring the S3 tools connection to Amazon AWS, you probably want
- to configure GPG encryption using a strong passphrase. One way
- to generate a strong passphrase is using your system random number generator,
- i.e. <literal>dd if=/dev/urandom count=20 bs=1 | xxd -ps</literal>. (See
- <ulink url="http://security.stackexchange.com/questions/14867/gpg-encryption-security">StackExchange</ulink>
- for more details about that advice.) If you decide to use encryption, make sure
- you save off the passphrase in a safe place, so you can get at your backup data
- later if you need to.
+ You can optionally configure Cedar Backup to encrypt data before
+ sending it to S3. To do that, provide a complete command line using
+ the <literal>${input}</literal> and <literal>${output}</literal>
+ variables to represent the original input file and the encrypted
+ output file. This command will be executed as the backup user.
</para>
<para>
- This extension was written for and tested on Linux. It will throw an exception
- if run on Windows.
+ For instance, you can use something like this with GPG:
</para>
+ <programlisting>
+/usr/bin/gpg -c --no-use-agent --batch --yes --passphrase-file /home/backup/.passphrase -o ${output} ${input}
+ </programlisting>
+
<para>
+ The GPG mechanism depends on a strong passphrase for security. One way to
+ generate a strong passphrase is using your system random number generator, i.e.:
+ </para>
+
+ <programlisting>
+dd if=/dev/urandom count=20 bs=1 | xxd -ps
+ </programlisting>
+
+ <para>
+ (See <ulink url="http://security.stackexchange.com/questions/14867/gpg-encryption-security">StackExchange</ulink>
+ for more details about that advice.) If you decide to use encryption, make sure you
+ save off the passphrase in a safe place, so you can get at your backup data
+ later if you need to. And obviously, make sure to set permissions on the
+ passphrase file so it can only be read by the backup user.
+ </para>
+
+ <para>
To enable this extension, add the following section to the Cedar Backup
configuration file:
</para>
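The configuration section itself is context outside this diff. As a rough sketch of how the pieces described below fit together (the `s3_bucket` field name and bucket path are assumptions; consult the example in the generated manual for the authoritative layout):

```xml
<amazons3>
   <warn_midnite>Y</warn_midnite>
   <s3_bucket>example.com-backup/staging</s3_bucket>
   <encrypt>/usr/bin/gpg -c --no-use-agent --batch --yes --passphrase-file /home/backup/.passphrase -o ${output} ${input}</encrypt>
</amazons3>
```

Omit the `<encrypt>` element entirely to disable encryption.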
@@ -155,7 +170,7 @@
This extension relies on the options and staging configuration sections
in the standard Cedar Backup configuration file, and then also
requires its own <literal>amazons3</literal> configuration section.
- This is an example configuration section:
+ This is an example configuration section with encryption disabled:
</para>
<programlisting>
@@ -178,11 +193,11 @@
This field indicates whether warnings should be generated
if the Amazon S3 operation has to cross a midnite boundary in
order to find data to write to the cloud. For instance, a
- warning would be generated if valid store data was only
+ warning would be generated if valid data was only
found in the day before or day after the current day.
</para>
<para>
- Configuration for some users is such that the store
+ Configuration for some users is such that the amazons3
operation will always cross a midnite boundary, so they
will not care about this warning. Other users will expect
to never cross a boundary, and want to be notified that
@@ -216,6 +231,25 @@
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><literal>encrypt</literal></term>
+ <listitem>
+ <para>Command used to encrypt backup data before upload to S3</para>
+ <para>
+ If this field is provided, then data will be encrypted before
+ it is uploaded to Amazon S3. You must provide the entire
+ command used to encrypt a file, including the
+ <literal>${input}</literal> and <literal>${output}</literal>
+ variables. An example GPG command is shown above, but you
+ can use any mechanism you choose. The command will be run as
+ the configured backup user.
+ </para>
+ <para>
+ <emphasis>Restrictions:</emphasis> If provided, must be non-empty.
+ </para>
+ </listitem>
+ </varlistentry>
+
</variablelist>
</sect1>