From: SourceForge.net <no...@so...> - 2012-12-30 13:28:01
Bugs item #3410593, was opened at 2011-09-16 09:07
Message generated for change (Comment added) made by gabriel_preda
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=887015&aid=3410593&group_id=178907

Please note that this message will contain a full copy of the comment
thread, including the initial issue submission, for this request,
not just the latest update.

Category: s3cmd
Group: Malfunction
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: https://www.google.com/accounts ()
Assigned to: Nobody/Anonymous (nobody)
Summary: S3cmd fails to upload 5G+ files

Initial Comment:
OS: CentOS 5.5
Kernel: 2.6.18-194.32.1.el5
s3cmd version: 1.0.0

s3cmd fails to upload files larger than 5 GB. It returns the message:

WARNING: Upload failed: /file.tar.gz ((32, 'Broken pipe'))

and then retries many times at a reduced speed. Could this be related to
the previous per-object limit of 5 GB? The current S3 limit per object
is 5 TB.

Thanks,
Maxim

----------------------------------------------------------------------

Comment By: Gabriel PREDA (gabriel_preda)
Date: 2012-12-30 05:28

Message:
Amazon says: "The total volume of data and number of objects you can
store are unlimited. Individual Amazon S3 objects can range in size from
1 byte to 5 TB. The largest object that can be uploaded in a single PUT
is 5 GB. For objects larger than 100 MB, customers should consider using
the Multipart Upload capability."

So the per-object limit is indeed 5 TB, but a single PUT operation can
upload at most 5 GB.

- Can we have s3cmd, for the time being, at least refuse to upload files
  bigger than 5 GB?
- Can we have it use the Multipart Upload capability for big files?

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=887015&aid=3410593&group_id=178907
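
To make the two requests in the comment concrete, here is a minimal
sketch of the suggested behavior: refuse an oversized single PUT, and
fall back to Multipart Upload for big files. This is not s3cmd's actual
code; it uses the boto3 SDK instead, and the upload() function name,
thresholds, and part size are illustrative assumptions.

    import os
    import boto3

    PUT_LIMIT = 5 * 1024**3              # 5 GB: largest single PUT S3 accepts
    MULTIPART_THRESHOLD = 100 * 1024**2  # 100 MB: Amazon's suggested cutoff
    PART_SIZE = 64 * 1024**2             # per-part size (S3 minimum is 5 MB;
                                         # S3 also caps an upload at 10,000
                                         # parts, so very large files need
                                         # proportionally larger parts)

    def upload(path, bucket, key, multipart=True):
        """Sketch of both requests: refuse oversized single PUTs,
        or use Multipart Upload for big files."""
        s3 = boto3.client("s3")
        size = os.path.getsize(path)
        if not multipart and size > PUT_LIMIT:
            # Request 1: fail fast instead of dying with a
            # 'Broken pipe' partway through the transfer.
            raise ValueError("%s exceeds the 5 GB single-PUT limit" % path)
        if not multipart or size <= MULTIPART_THRESHOLD:
            with open(path, "rb") as f:
                s3.put_object(Bucket=bucket, Key=key, Body=f)  # one plain PUT
            return
        # Request 2: Multipart Upload -- initiate, send numbered
        # parts, then complete with the collected ETags.
        mpu = s3.create_multipart_upload(Bucket=bucket, Key=key)
        parts = []
        with open(path, "rb") as f:
            part_number = 1
            while True:
                chunk = f.read(PART_SIZE)
                if not chunk:
                    break
                resp = s3.upload_part(Bucket=bucket, Key=key,
                                      UploadId=mpu["UploadId"],
                                      PartNumber=part_number, Body=chunk)
                parts.append({"PartNumber": part_number,
                              "ETag": resp["ETag"]})
                part_number += 1
        s3.complete_multipart_upload(Bucket=bucket, Key=key,
                                     UploadId=mpu["UploadId"],
                                     MultipartUpload={"Parts": parts})

(Newer s3cmd releases did add native multipart support with a
configurable chunk size; the sketch above only illustrates the
mechanism being requested here.)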