--- a/s3cmd.1
+++ b/s3cmd.1
@@ -1,3 +1,8 @@
+
+.\" !!! IMPORTANT: This file is generated from s3cmd --help output using format-manpage.pl
+.\" !!!            Do your changes either in s3cmd file or in 'format-manpage.pl' otherwise
+.\" !!!            they will be overwritten!
+
 .TH s3cmd 1
 .SH NAME
 s3cmd \- tool for managing Amazon S3 storage space and Amazon CloudFront content delivery network
@@ -62,10 +67,19 @@
 s3cmd \fBdelpolicy\fR \fIs3://BUCKET\fR
 Delete Bucket Policy
 .TP
+s3cmd \fBmultipart\fR \fIs3://BUCKET [Id]\fR
+show multipart uploads
+.TP
+s3cmd \fBabortmp\fR \fIs3://BUCKET/OBJECT Id\fR
+abort a multipart upload
+.TP
+s3cmd \fBlistmp\fR \fIs3://BUCKET/OBJECT Id\fR
+list parts of a multipart upload
+.TP
 s3cmd \fBaccesslog\fR \fIs3://BUCKET\fR
 Enable/disable bucket access logging
 .TP
-s3cmd \fBsign\fR \fISTRING\-TO\-SIGN\fR
+s3cmd \fBsign\fR \fISTRING-TO-SIGN\fR
 Sign arbitrary string using the secret key
 .TP
 s3cmd \fBsignurl\fR \fIs3://BUCKET/OBJECT expiry_epoch\fR
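The new multipart commands and signurl fit together as a short cleanup workflow. A hedged sketch, assuming a configured s3cmd; the bucket, object, and UploadId below are placeholders, not real values:

```shell
# Hypothetical cleanup of a stale multipart upload (s3cmd calls are
# commented out because they need a configured client and a real bucket):
# s3cmd multipart s3://example-bucket                        # list open uploads + Ids
# s3cmd listmp s3://example-bucket/big.iso EXAMPLEUPLOADID   # inspect its parts
# s3cmd abortmp s3://example-bucket/big.iso EXAMPLEUPLOADID  # discard them

# signurl takes an absolute expiry as a Unix epoch; computing one is plain shell:
expiry=$(( $(date +%s) + 3600 ))    # valid for one hour from now
echo "$expiry"
# s3cmd signurl s3://example-bucket/big.iso "$expiry"
```

The epoch arithmetic works in any POSIX shell; only the commented-out lines depend on s3cmd itself.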
@@ -78,13 +92,13 @@
 .PP
 Commands for static WebSites configuration
 .TP
-s3cmd \fBws\-create\fR \fIs3://BUCKET\fR
+s3cmd \fBws-create\fR \fIs3://BUCKET\fR
 Create Website from bucket
 .TP
-s3cmd \fBws\-delete\fR \fIs3://BUCKET\fR
+s3cmd \fBws-delete\fR \fIs3://BUCKET\fR
 Delete Website
 .TP
-s3cmd \fBws\-info\fR \fIs3://BUCKET\fR
+s3cmd \fBws-info\fR \fIs3://BUCKET\fR
 Info about Website
 
 
@@ -123,13 +137,17 @@
 show this help message and exit
 .TP
 \fB\-\-configure\fR
-Invoke interactive (re)configuration tool. Optionally use as '\-\-configure s3://come\-bucket' to test access to a specific bucket instead of attempting to list them all.
+Invoke interactive (re)configuration tool. Optionally
+use as '\fB--configure\fR s3://some-bucket' to test access
+to a specific bucket instead of attempting to list
+them all.
 .TP
 \fB\-c\fR FILE, \fB\-\-config\fR=FILE
 Config file name. Defaults to /home/mludvig/.s3cfg
 .TP
 \fB\-\-dump\-config\fR
-Dump current configuration after parsing config files and command line options and exit.
+Dump current configuration after parsing config files
+and command line options and exit.
 .TP
 \fB\-\-access_key\fR=ACCESS_KEY
 AWS Access Key
@@ -138,7 +156,10 @@
 AWS Secret Key
 .TP
 \fB\-n\fR, \fB\-\-dry\-run\fR
-Only show what should be uploaded or downloaded but don't actually do it. May still perform S3 requests to get bucket listings and other information though (only for file transfer commands)
+Only show what should be uploaded or downloaded but
+don't actually do it. May still perform S3 requests to
+get bucket listings and other information though (only
+for file transfer commands)
 .TP
 \fB\-e\fR, \fB\-\-encrypt\fR
 Encrypt files before uploading to S3.
@@ -150,34 +171,60 @@
 Force overwrite and other dangerous operations.
 .TP
 \fB\-\-continue\fR
-Continue getting a partially downloaded file (only for [get] command).
+Continue getting a partially downloaded file (only for
+[get] command).
+.TP
+\fB\-\-continue\-put\fR
+Continue uploading partially uploaded files or
+multipart upload parts.  Restarts parts/files that
+don't have matching size and md5.  Skips files/parts
+that do.  Note: md5sum checks are not always
+sufficient to check (part) file equality.  Enable this
+at your own risk.
+.TP
+\fB\-\-upload\-id\fR=UPLOAD_ID
+UploadId for Multipart Upload, in case you want to
+continue an existing upload (equivalent to
+\fB\-\-continue\-put\fR) and there are multiple partial
+uploads.  Use s3cmd multipart [URI] to see what
+UploadIds are associated with the given URI.
 .TP
 \fB\-\-skip\-existing\fR
-Skip over files that exist at the destination (only for [get] and [sync] commands).
+Skip over files that exist at the destination (only
+for [get] and [sync] commands).
 .TP
 \fB\-r\fR, \fB\-\-recursive\fR
 Recursive upload, download or removal.
 .TP
 \fB\-\-check\-md5\fR
-Check MD5 sums when comparing files for [sync]. (default)
+Check MD5 sums when comparing files for [sync].
+(default)
 .TP
 \fB\-\-no\-check\-md5\fR
-Do not check MD5 sums when comparing files for [sync]. Only size will be compared. May significantly speed up transfer but may also miss some changed files.
+Do not check MD5 sums when comparing files for [sync].
+Only size will be compared. May significantly speed up
+transfer but may also miss some changed files.
 .TP
 \fB\-P\fR, \fB\-\-acl\-public\fR
 Store objects with ACL allowing read for anyone.
 .TP
 \fB\-\-acl\-private\fR
-Store objects with default ACL allowing access for you only.
+Store objects with default ACL allowing access for you
+only.
 .TP
 \fB\-\-acl\-grant\fR=PERMISSION:EMAIL or USER_CANONICAL_ID
-Grant stated permission to a given amazon user. Permission is one of: read, write, read_acp, write_acp, full_control, all
+Grant stated permission to a given amazon user.
+Permission is one of: read, write, read_acp,
+write_acp, full_control, all
 .TP
 \fB\-\-acl\-revoke\fR=PERMISSION:USER_CANONICAL_ID
-Revoke stated permission for a given amazon user. Permission is one of: read, write, read_acp, wr     ite_acp, full_control, all
+Revoke stated permission for a given amazon user.
+Permission is one of: read, write, read_acp,
+write_acp, full_control, all
 .TP
 \fB\-\-delete\-removed\fR
-Delete remote objects with no corresponding local file [sync]
+Delete remote objects with no corresponding local file
+[sync]
 .TP
 \fB\-\-no\-delete\-removed\fR
 Don't delete remote objects.
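The \-\-continue\-put / \-\-upload\-id options above amount to a resume-by-checksum loop. A hedged sketch (bucket, object, and UploadId are placeholders; the md5 comparison mirrors what the option does per part, using a pretend remote value):

```shell
# Hypothetical resume of an interrupted upload:
# s3cmd multipart s3://example-bucket            # find the UploadId first
# s3cmd put --continue-put --upload-id=EXAMPLEID big.iso s3://example-bucket/big.iso

# --continue-put decides per file/part by size and md5; the same check in shell:
printf 'part-data' > part.bin
local_md5=$(md5sum part.bin | cut -d' ' -f1)
remote_md5=$local_md5                            # pretend S3 reported this checksum
if [ "$local_md5" = "$remote_md5" ]; then
    echo "skip part"                             # matching part: not re-sent
else
    echo "re-upload part"
fi
```

As the option's description warns, an md5 match is not an absolute guarantee of equality, which is why resuming is opt-in.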
@@ -188,109 +235,148 @@
 \fB\-\-delay\-updates\fR
 Put all updated files into place at end [sync]
 .TP
+\fB\-\-max\-delete\fR=NUM
+Do not delete more than NUM files. [del] and [sync]
+.TP
 \fB\-\-add\-destination\fR=ADDITIONAL_DESTINATIONS
-Additional destination for parallel uploads, in addition to last arg.  May be repeated.
+Additional destination for parallel uploads, in
+addition to last arg.  May be repeated.
 .TP
 \fB\-\-delete\-after\-fetch\fR
-Delete remote objects after fetching to local file (only for [get] and [sync] commands).
-.TP
-\fB\-\-max\-delete\fR=NUM 
-Do not delete more than NUM files.  If that limit would be exceeded, a warning is output and none are deleted. [del] and [sync]
+Delete remote objects after fetching to local file
+(only for [get] and [sync] commands).
 .TP
 \fB\-p\fR, \fB\-\-preserve\fR
-Preserve filesystem attributes (mode, ownership, timestamps). Default for [sync] command.
+Preserve filesystem attributes (mode, ownership,
+timestamps). Default for [sync] command.
 .TP
 \fB\-\-no\-preserve\fR
 Don't store FS attributes
 .TP
 \fB\-\-exclude\fR=GLOB
-Filenames and paths matching GLOB will be excluded from sync
+Filenames and paths matching GLOB will be excluded
+from sync
 .TP
 \fB\-\-exclude\-from\fR=FILE
-Read \-\-exclude GLOBs from FILE
+Read --exclude GLOBs from FILE
 .TP
 \fB\-\-rexclude\fR=REGEXP
-Filenames and paths matching REGEXP (regular expression) will be excluded from sync
+Filenames and paths matching REGEXP (regular
+expression) will be excluded from sync
 .TP
 \fB\-\-rexclude\-from\fR=FILE
-Read \-\-rexclude REGEXPs from FILE
+Read --rexclude REGEXPs from FILE
 .TP
 \fB\-\-include\fR=GLOB
-Filenames and paths matching GLOB will be included even if previously excluded by one of \-\-(r)exclude(\-from) patterns
+Filenames and paths matching GLOB will be included
+even if previously excluded by one of
+\fB--(r)exclude(-from)\fR patterns
 .TP
 \fB\-\-include\-from\fR=FILE
-Read \-\-include GLOBs from FILE
+Read --include GLOBs from FILE
 .TP
 \fB\-\-rinclude\fR=REGEXP
-Same as \-\-include but uses REGEXP (regular expression) instead of GLOB
+Same as --include but uses REGEXP (regular expression)
+instead of GLOB
 .TP
 \fB\-\-rinclude\-from\fR=FILE
-Read \-\-rinclude REGEXPs from FILE
+Read --rinclude REGEXPs from FILE
+.TP
+\fB\-\-ignore\-failed\-copy\fR
+Don't exit unsuccessfully because of missing keys
 .TP
 \fB\-\-files\-from\fR=FILE
-Read list of source-file names from FILE. Use \- to read from stdin.
-May be repeated.
+Read list of source-file names from FILE. Use - to
+read from stdin.
 .TP
 \fB\-\-bucket\-location\fR=BUCKET_LOCATION
-Datacentre to create bucket in. As of now the datacenters are: US (default), EU, ap\-northeast\-1, ap\-southeast\-1, sa\-east\-1, us\-west\-1 and us\-west\-2
+Datacentre to create bucket in. As of now the
+datacenters are: US (default), EU, ap-northeast-1,
+ap-southeast-1, sa-east-1, us-west-1 and us-west-2
 .TP
 \fB\-\-reduced\-redundancy\fR, \fB\-\-rr\fR
-Store object with 'Reduced redundancy'. Lower per-GB price. [put, cp, mv]
+Store object with 'Reduced redundancy'. Lower per-GB
+price. [put, cp, mv]
 .TP
 \fB\-\-access\-logging\-target\-prefix\fR=LOG_TARGET_PREFIX
-Target prefix for access logs (S3 URI) (for [cfmodify] and [accesslog] commands)
+Target prefix for access logs (S3 URI) (for [cfmodify]
+and [accesslog] commands)
 .TP
 \fB\-\-no\-access\-logging\fR
-Disable access logging (for [cfmodify] and [accesslog] commands)
-.TP
-\fB\-\-default\-mime\-type\fR
-Default MIME-type for stored objects. Application default is binary/octet\-stream.
+Disable access logging (for [cfmodify] and [accesslog]
+commands)
+.TP
+\fB\-\-default\-mime\-type\fR=DEFAULT_MIME_TYPE
+Default MIME-type for stored objects. Application
+default is binary/octet-stream.
 .TP
 \fB\-M\fR, \fB\-\-guess\-mime\-type\fR
-Guess MIME-type of files by their extension or mime magic. Fall back to default MIME-type as specified by \fB\-\-default\-mime\-type\fR option
+Guess MIME-type of files by their extension or mime
+magic. Fall back to default MIME-type as specified by
+\fB--default-mime-type\fR option
 .TP
 \fB\-\-no\-guess\-mime\-type\fR
-Don't guess MIME-type and use the default type instead.
+Don't guess MIME-type and use the default type
+instead.
 .TP
 \fB\-\-no\-mime\-magic\fR
 Don't use mime magic when guessing MIME-type.
 .TP
 \fB\-m\fR MIME/TYPE, \fB\-\-mime\-type\fR=MIME/TYPE
-Force MIME-type. Override both \fB\-\-default\-mime\-type\fR and \fB\-\-guess\-mime\-type\fR.
+Force MIME-type. Override both \fB--default-mime-type\fR and
+\fB--guess-mime-type\fR.
 .TP
 \fB\-\-add\-header\fR=NAME:VALUE
-Add a given HTTP header to the upload request. Can be used multiple times. For instance set 'Expires' or 'Cache\-Control' headers (or both) using this options if you like.
+Add a given HTTP header to the upload request. Can be
+used multiple times. For instance set 'Expires' or
+'Cache-Control' headers (or both) using this option
+if you like.
+.TP
+\fB\-\-server\-side\-encryption\fR
+Specifies that server-side encryption will be used
+when putting objects.
 .TP
 \fB\-\-encoding\fR=ENCODING
-Override autodetected terminal and filesystem encoding (character set). Autodetected: UTF\-8
+Override autodetected terminal and filesystem encoding
+(character set). Autodetected: UTF-8
 .TP
 \fB\-\-disable\-content\-encoding\fR
-Don't include a Content-encoding header to the the uploaded objects. Default: Off
+Don't include a Content-encoding header to the
+uploaded objects
 .TP
 \fB\-\-add\-encoding\-exts\fR=EXTENSIONs
-Add encoding to these comma delimited extensions i.e. (css,js,html) when uploading to S3 )
+Add encoding to these comma delimited extensions i.e.
+(css,js,html) when uploading to S3
 .TP
 \fB\-\-verbatim\fR
-Use the S3 name as given on the command line. No pre-processing, encoding, etc. Use with caution!
+Use the S3 name as given on the command line. No
+pre-processing, encoding, etc. Use with caution!
 .TP
 \fB\-\-disable\-multipart\fR
-Disable multipart upload on files bigger than \-\-multipart\-chunk\-size\-mb
+Disable multipart upload on files bigger than
+\fB--multipart-chunk-size-mb\fR
 .TP
 \fB\-\-multipart\-chunk\-size\-mb\fR=SIZE
-Size of each chunk of a multipart upload. Files bigger than SIZE are automatically uploaded as multithreaded-multipart, smaller files are uploaded using the traditional method. SIZE is in Mega-Bytes,
-default chunk size is 15MB, minimum allowed chunk size is 5MB, maximum is 5GB.
+Size of each chunk of a multipart upload. Files
+bigger than SIZE are automatically uploaded as
+multithreaded-multipart, smaller files are uploaded
+using the traditional method. SIZE is in Mega-Bytes,
+default chunk size is 15MB, minimum allowed chunk
+size is 5MB, maximum is 5GB.
 .TP
 \fB\-\-list\-md5\fR
-Include MD5 sums in bucket listings (only for 'ls' command).
+Include MD5 sums in bucket listings (only for 'ls'
+command).
 .TP
 \fB\-H\fR, \fB\-\-human\-readable\-sizes\fR
-Print sizes in human readable form (eg 1kB instead of 1234).
+Print sizes in human readable form (eg 1kB instead of
+1234).
 .TP
 \fB\-\-ws\-index\fR=WEBSITE_INDEX
-Name of index-document (only for [ws\-create] command)
+Name of index-document (only for [ws-create] command)
 .TP
 \fB\-\-ws\-error\fR=WEBSITE_ERROR
-Name of error-document (only for [ws\-create] command)
+Name of error-document (only for [ws-create] command)
 .TP
 \fB\-\-progress\fR
 Display progress meter (default on TTY).
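The \-\-multipart\-chunk\-size\-mb rule above is a ceiling division. A sketch with a hypothetical 100 MB file and the default 15 MB chunks:

```shell
# ceil(size/chunk): a 100 MB file with 15 MB chunks uploads as 7 parts.
size_mb=100
chunk_mb=15
parts=$(( (size_mb + chunk_mb - 1) / chunk_mb ))
echo "$parts parts"    # 7 parts
# s3cmd put --multipart-chunk-size-mb=$chunk_mb big.iso s3://example-bucket/
```

Files at or below the chunk size skip multipart entirely and go up in one traditional PUT.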
@@ -299,32 +385,43 @@
 Don't display progress meter (default on non-TTY).
 .TP
 \fB\-\-enable\fR
-Enable given CloudFront distribution (only for [cfmodify] command)
+Enable given CloudFront distribution (only for
+[cfmodify] command)
 .TP
 \fB\-\-disable\fR
-Enable given CloudFront distribution (only for [cfmodify] command)
+Disable given CloudFront distribution (only for
+[cfmodify] command)
 .TP
 \fB\-\-cf\-invalidate\fR
-Invalidate the uploaded filed in CloudFront. Also see [cfinval] command.
+Invalidate the uploaded files in CloudFront. Also see
+[cfinval] command.
 .TP
 \fB\-\-cf\-invalidate\-default\-index\fR
-When using Custom Origin and S3 static website, invalidate the default index file.
+When using Custom Origin and S3 static website,
+invalidate the default index file.
 .TP
 \fB\-\-cf\-no\-invalidate\-default\-index\-root\fR
-When using Custom Origin and S3 static website, don't invalidate the path to the default index file.
+When using Custom Origin and S3 static website, don't
+invalidate the path to the default index file.
 .TP
 \fB\-\-cf\-add\-cname\fR=CNAME
-Add given CNAME to a CloudFront distribution (only for [cfcreate] and [cfmodify] commands)
+Add given CNAME to a CloudFront distribution (only for
+[cfcreate] and [cfmodify] commands)
 .TP
 \fB\-\-cf\-remove\-cname\fR=CNAME
-Remove given CNAME from a CloudFront distribution (only for [cfmodify] command)
+Remove given CNAME from a CloudFront distribution
+(only for [cfmodify] command)
 .TP
 \fB\-\-cf\-comment\fR=COMMENT
-Set COMMENT for a given CloudFront distribution (only for [cfcreate] and [cfmodify] commands)
+Set COMMENT for a given CloudFront distribution (only
+for [cfcreate] and [cfmodify] commands)
 .TP
 \fB\-\-cf\-default\-root\-object\fR=DEFAULT_ROOT_OBJECT
-Set the default root object to return when no object is specified in the URL. Use a relative path, i.e. default/index.html instead of /default/index.html or s3://bucket/default/index.html (only for
-[cfcreate] and [cfmodify] commands)
+Set the default root object to return when no object
+is specified in the URL. Use a relative path, i.e.
+default/index.html instead of /default/index.html or
+s3://bucket/default/index.html (only for [cfcreate]
+and [cfmodify] commands)
 .TP
 \fB\-v\fR, \fB\-\-verbose\fR
 Enable verbose output.
@@ -333,7 +430,7 @@
 Enable debug output.
 .TP
 \fB\-\-version\fR
-Show s3cmd version (1.5.0-alpha3) and exit.
+Show s3cmd version (1.5.0-beta1) and exit.
 .TP
 \fB\-F\fR, \fB\-\-follow\-symlinks\fR
 Follow symbolic links as if they are regular files
@@ -352,11 +449,11 @@
 .PP
 Basic usage common in backup scenarios is as simple as:
 .nf
-	s3cmd sync /local/path/ s3://test\-bucket/backup/
+	s3cmd sync /local/path/ s3://test-bucket/backup/
 .fi
 .PP
 This command will find all files under /local/path directory and copy them 
-to corresponding paths under s3://test\-bucket/backup on the remote side.
+to corresponding paths under s3://test-bucket/backup on the remote side.
 For example:
 .nf
 	/local/path/\fBfile1.ext\fR         \->  s3://bucket/backup/\fBfile1.ext\fR
@@ -366,7 +463,7 @@
 However if the local path doesn't end with a slash the last directory's name
 is used on the remote side as well. Compare these with the previous example:
 .nf
-	s3cmd sync /local/path s3://test\-bucket/backup/
+	s3cmd sync /local/path s3://test-bucket/backup/
 .fi
 will sync:
 .nf
@@ -376,7 +473,7 @@
 .PP
 To retrieve the files back from S3 use inverted syntax:
 .nf
-	s3cmd sync s3://test\-bucket/backup/ /tmp/restore/
+	s3cmd sync s3://test-bucket/backup/ /tmp/restore/
 .fi
 that will download files:
 .nf
@@ -387,7 +484,7 @@
 Without the trailing slash on source the behaviour is similar to 
 what has been demonstrated with upload:
 .nf
-	s3cmd sync s3://test\-bucket/backup /tmp/restore/
+	s3cmd sync s3://test-bucket/backup /tmp/restore/
 .fi
 will download the files as:
 .nf
@@ -419,14 +516,9 @@
 .PP
 For example to exclude all files with ".jpg" extension except those beginning with a number use:
 .PP
-	\-\-exclude '*.jpg' \-\-rinclude '[0\-9].*\.jpg'
-
-.SH ENVIRONMENT
-.TP
-.B TMP
-Directory used to write temp files (/tmp by default)
+	\-\-exclude '*.jpg' \-\-rinclude '[0-9].*\.jpg'
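The filter combination above can be traced without s3cmd at all; a rough shell approximation of "exclude all *.jpg except those beginning with a number" (the case patterns only approximate s3cmd's GLOB/REGEXP matching):

```shell
# Which of these hypothetical files would sync?
for f in photo.jpg 1-cover.jpg notes.txt; do
  case "$f" in
    [0-9]*.jpg) echo "sync $f" ;;   # re-included, as by --rinclude '[0-9].*\.jpg'
    *.jpg)      echo "skip $f" ;;   # excluded, as by --exclude '*.jpg'
    *)          echo "sync $f" ;;   # never matched an exclude
  esac
done
```

Include patterns win over earlier excludes, which is why 1-cover.jpg survives while photo.jpg does not.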
 .SH SEE ALSO
 For the most up to date list of options run
 .B s3cmd \-\-help
 .br
 For more info about usage, examples and other related info visit project homepage at