From: Lancashire, Pete <plancashire@ci...> - 2007-01-30 17:03:21
> Try the use_lazy_deletes option. That will mv the
> directory in question aside, then do the cp -al and
> rsync, then delete the lock file, and finally rm.
Curious, what name is given in the mv? If it's unique
every time, then to reduce impact I can see making my
rm a script that runs /bin/rm with its priority
niced down a few notches.
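A minimal sketch of such a wrapper, assuming an illustrative install path (/tmp/nice-rm) and an arbitrary nice level of 10:

```shell
# Create a small wrapper that runs /bin/rm at reduced CPU priority,
# so a large delete does not starve other jobs. The path /tmp/nice-rm
# and the nice level 10 are illustrative choices.
cat > /tmp/nice-rm <<'EOF'
#!/bin/sh
exec nice -n 10 /bin/rm "$@"
EOF
chmod +x /tmp/nice-rm
```

rsnapshot's cmd_rm option can then point at the wrapper instead of /bin/rm.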
After digging into how file information/inodes
are cached in Linux, I found the limit on the number
of file IDs/inodes that can be cached by the
system. The kernel variable is
/proc/sys/fs/file-max (old kernel versions also
had inode-max as well as file-max).
On this particular system it was around 130K; I
changed it to 800K. I took the risk since it looks
like I have about 1 GB of RAM not being used.
I do not know the ramifications of making it this large.
After doing that, the rm/cp speed improved. My guess
is that after an rsnapshot run (hourly, daily, etc.) all
or most of the file/inode information got cached.
Then on the next run, rm was able to pull the information
from cache rather than from disk.
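For anyone wanting to try the same, a sketch of checking and raising the limit (the 800000 figure mirrors the value above and is only an example; size it to your spare RAM):

```shell
# Read the current limit on file handles the kernel will allow
cat /proc/sys/fs/file-max

# Raising it for the running kernel requires root:
#   echo 800000 > /proc/sys/fs/file-max
# or equivalently:
#   sysctl -w fs.file-max=800000
# To persist across reboots, add this line to /etc/sysctl.conf:
#   fs.file-max = 800000
```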
I now get all 'servers' to complete in 3-4 minutes,
including any files sent across the wire.
Old run times sometimes exceeded 10 minutes.
-pete .. time to surf the source .. or .. curiosity
killed the cat
> -----Original Message-----
> From: David Cantrell [mailto:david@...]
> Sent: Tuesday, January 30, 2007 6:58 AM
> To: rsnapshot-discuss@...
> Subject: Re: [rsnapshot-discuss] time to delete (rm) all the
> On Sun, Jan 28, 2007 at 04:50:08PM -0800, Lancashire, Pete wrote:
> > I for now am stuck with a pretty old RAID5
> > array for storing my backups. the o/s is
> > linux (Centos 4.4) and when I start up my
> > hourlys rsnapshot will need to delete or reduce
> > the hard link count on ~750,000 inodes. This
> > will increase to ~1.5 Million in about 2 weeks.
> > This operation seems to be the longest of the
> > three biggies (rm 5, cp -al 0 to 1, rsyncs).
> > Any suggestions ?
> > [script]
> > this will let the delete take place in the background
> Try the use_lazy_deletes option. That will mv the directory
> in question
> aside, then do the cp -al and rsync, then delete the lock file, and
> finally rm. Because at this point there's no lock file, other
> rsnapshots can then be started and run in parallel with the rm.
> As an aside, I wouldn't expect rm to take significantly longer than
> cp -al. The difference you're seeing may be down to caching - by the
> time cp -al runs, big chunks of the filesystem are in memory from when
> the rm was run.
> David Cantrell | A machine for turning tea into grumpiness
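For reference, a sketch of the one-line change in rsnapshot.conf that enables this (rsnapshot requires a literal tab, not spaces, between option and value):

```
use_lazy_deletes	1
```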
From: David Cantrell <david@ca...> - 2007-01-30 17:27:17
On Tue, Jan 30, 2007 at 09:01:04AM -0800, Lancashire, Pete wrote:
> Curious, what name is given in the mv ? If unique
> every time
The version in CVS creates a unique name - _delete.$$, where $$ is that
rsnapshot invocation's PID. AFAIR, the most recent released version
doesn't use unique names.
> to reduce impact I can see making my
> rm a script that runs /bin/rm with its priority
> niced down a few notches.
It's worth trying. It might also be worth fiddling with ionice.
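A sketch of combining the two, assuming Linux with util-linux's ionice; the snapshot path (using the _delete.$$ naming mentioned above) is purely illustrative:

```shell
# Idle I/O class (-c 3) lets rm touch the disk only when nothing else
# wants it, and nice -n 19 keeps its CPU priority at the bottom.
ionice -c 3 nice -n 19 /bin/rm -rf /backups/_delete.12345
```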
David Cantrell | Enforcer, South London Linguistic Massive
Fashion label: n: a liferaft for personalities
which lack intrinsic buoyancy