I've got a very large bucket, holding something like 10-20M files.
    s3cmd ls s3://bucket

is not only taking forever (which is sort of to be expected: with at
most 1000 keys per LIST response, that works out to 10-20K separate
requests), it's also consuming lots of memory.
Glancing at the code in S3.bucket_list(), it looks like each LIST
request's (Python) list is appended to the ultimately returned list,
which is then dumped to STDOUT in one fell swoop. Are there options
I'm not aware of that would make s3cmd write directly to STDOUT
instead of buffering all the results? If not, then I guess this is a
feature request.
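
For illustration, here's a minimal sketch of the generator-based
approach I have in mind. The names bucket_list_streaming and
list_request are hypothetical stand-ins, not s3cmd's actual
internals:

    def bucket_list_streaming(bucket, list_request):
        # Hypothetical streaming variant of S3.bucket_list():
        # yield keys as each LIST page arrives instead of appending
        # every page to one big in-memory list.
        #
        # `list_request(bucket, marker)` stands in for whatever issues
        # a single LIST call; assume it returns
        # (keys, is_truncated, next_marker).
        marker = ""
        truncated = True
        while truncated:
            keys, truncated, marker = list_request(bucket, marker)
            for key in keys:
                yield key

    # The caller can then print keys as they arrive, so memory use is
    # bounded by one page (at most 1000 keys) rather than the whole
    # bucket:
    #
    #   for key in bucket_list_streaming("my-bucket", my_list_request):
    #       print(key)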
Thanks for a great tool!