Hi Michal,

Thank you for your advice.
I wrote several scripts that helped me move the files quickly.
To help other people do the same, here are the details:

1. Get a list of files in the source bucket:
s3cmd ls s3://bucket/ > bucket_list
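
Each line of that listing looks roughly like this - date, time, size,
then the URL (the file name here is just a placeholder) - which is why
the parsing step below is needed:

2009-10-15 22:47   1234   s3://bucket/somefile.txt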

2. Write a Perl script parsing.pl to strip the date and size columns and keep only the s3:// URL of each file:
#!/usr/bin/perl -w
# Print only the s3:// URL from each line of the listing
while (defined(my $str = <STDIN>))
{ print $1."\n" if ($str =~ /(s3:\/\/[\w\/\.\-]+)/i); }
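
(If you'd rather avoid Perl and your grep supports -o, something like
grep -o 's3://[^ ]*' bucket_list > bucket_list_parsed
should produce the same result.)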

3. Make it executable and run:
./parsing.pl <bucket_list >bucket_list_parsed
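
(To make it executable in the first place, chmod +x parsing.pl should do it; the same goes for the shell script in step 5.)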

4. Split the file into chunks of 10,000 lines each. After that we will have several files named xaa, xab, etc.:
split -l 10000 bucket_list_parsed
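
(If your split supports -d, numeric suffixes may be easier to keep track of: split -l 10000 -d bucket_list_parsed gives x00, x01, etc.)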

5. Write a shell script copy_buckets.sh and make it executable too:
#!/bin/sh
# Copy each S3 URL listed in the input file into the new bucket
while read -r url
do
    s3cmd --skip-existing -r -v --acl-public cp "$url" s3://NEWbucket/
done < "$1"

6. Run several copies of this script, one per split file, passing the file name as the argument:
./copy_buckets.sh xaa
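
For example, to run one copy per chunk in the background (assuming split produced xaa, xab and xac):

./copy_buckets.sh xaa &
./copy_buckets.sh xab &
./copy_buckets.sh xac &
wait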

7. Enjoy!

Best regards,
Maxim

Hi Maxim,

How about 's3cmd cp' for copy or 's3cmd mv' for move? Would that be what
you're after? Neither of these two fetches the files back to your
server; both copy them remotely - within a bucket, between buckets and
even between US and EU datacentres.
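
For example (bucket and file names made up):

s3cmd cp s3://oldbucket/file.txt s3://newbucket/file.txt
s3cmd mv s3://oldbucket/file.txt s3://newbucket/file.txt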

You'll need s3cmd 0.9.9+ to have cp/mv available.

Michal

On 10/15/2009 10:47 PM, Maxim wrote:
> Hello dear community.
>
> Could you please give me a piece of advice on how to directly copy a large
> number of files from one bucket to another inside one AWS account?
> There are no subfolders - just a long list of files. I don't want to
> download the files onto my server and then upload them back into S3. I would
> like to copy them directly between the 2 buckets.
>
> Thank you in advance for your answers.
>
> Regards,
> Maxim