I (now) know Amazon S3 has a 5GB limit on individual files. I was wondering whether s3cmd can enforce this limit before the upload starts?
I just uploaded a 20GB file and only at the end (100% uploaded) did I get an error. It would be nice to know at the start.
I'm using s3cmd-0.9.9.91 on a file "foo":
> sudo ~/s3cmd-0.9.9.91/s3cmd put foo s3://…
19657654272 of 19657654272 100% in … done
ERROR: S3 error: 400 (EntityTooLarge): Your proposed upload exceeds the maximum allowed object size
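Until s3cmd checks this itself, one workaround is a small wrapper that compares the file size against the 5GB limit before invoking `s3cmd put`. This is just a sketch (the limit value is from this thread; the function name and usage are mine):

```shell
#!/bin/sh
# Refuse to start an upload that S3 would reject anyway.
# 5 GiB single-object limit, expressed in bytes.
MAX_BYTES=5368709120

check_size() {
    # `wc -c` is portable; `stat` flags differ between Linux and BSD.
    size=$(wc -c < "$1")
    if [ "$size" -gt "$MAX_BYTES" ]; then
        echo "ERROR: $1 is $size bytes, over the S3 object size limit" >&2
        return 1
    fi
    return 0
}

# usage (hypothetical bucket URL): check_size foo && s3cmd put foo s3://bucket/
```

This only guards the obvious case; it doesn't help if the file grows between the check and the upload.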
Yes, s3cmd should warn about or refuse to upload files that are too big. I'll add it to my todo list.