Tino Schwarze <backuppc.lists@...> wrote on 11/26/2008 12:32:19 PM:
> > > Is there some way to determine approximately how many clients (or
> > > much data) can be backed up with backuppc?
> > Via EPIA EN 1.2GHz processor, 512MB RAM, single 750GB PATA hard drive
> > backs up 4 different servers, with a total data storage of ~600GB: 2
> > ~250GB servers, the remaining servers with less than 50GB between them.
> > Full backups weekly, incrementals daily. Oh, and this is over
> > Everybody freaks out about the amount of RAM: they tell me I need more
> > RAM for better speed. However, I've watched vmstat throughout the process and
> > swap si/so (the swap-in/swap-out columns) remains at 0.
> Have you looked at the wait column (last one) in vmstat?
As in iowait? Usually the CPU is "100%" used, but much of it is iowait
(~40-60%, IIRC). Of course, you *expect* that: I'm I/O bound, after all,
with only a single IDE hard drive... Also, I'm not using compression,
which is also why I'm not worried about CPU.
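For the record, here's roughly how I read those numbers -- a quick sketch
against a captured vmstat data line. The field positions assume the standard
procps vmstat layout, and the sample line itself is made up for illustration:

```python
# Pull the interesting columns out of a captured `vmstat` data line:
# si/so (swap-in/swap-out, KB/s) and wa (% iowait).
# Field positions assume the standard procps layout; sample is made up.
sample = " 1  1    120  10240   2048 307200    0    0  4120  3800  900 1500  5  8 40 47  0"
f = sample.split()
si, so, wa = f[6], f[7], f[15]
print(f"si={si} so={so} wa={wa}%")  # si=0 so=0 wa=47%: I/O bound, not swapping
```

High `wa` with zero `si`/`so` is exactly the pattern I'm describing: the disk
is the bottleneck, not memory.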
> How many files
> per server do you have?
~500,000 each. The pool has ~800,000 files.
> I've got over 2 million files in the pool. If
> your server is swapping, you _need_ more RAM.
It is not. The amount of swap in use, IIRC, is measured in *kilobytes*.
But who cares how much data is sitting in the swap file? Even if I had
*gigabytes* of data in swap, as long as swap si/so stays at 0, it can sit
there quite comfortably. Remember, swap is a *verb*: if you are not
*actively* swapping, it does not matter.
But it's a moot point: the swap space in use is effectively 0. Obviously,
there isn't even enough RAM pressure to force the system to push anything
to disk. I'm running a fairly stripped-down version of CentOS 5 (no X, no
CUPS, etc.), but not *that* stripped-down...
> If it is not, more RAM
> will still serve as a disk cache, e.g. for filesystem structure and
> metadata and helps a lot to reduce I/O wait times.
And ~300MB of cache isn't enough? That's what vmstat says it's using. I
have a feeling that is enough: I'm sure the relevant metadata fits in
300MB... :) Either a piece of metadata is needed frequently (and
therefore earns a secure place in that 300MB of cache), or it *isn't*
used frequently, in which case caching it won't help performance much.
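A back-of-envelope sanity check on that claim. The ~300 bytes per file is my
guess at cached inode + dentry overhead, not a measured number:

```python
# Rough estimate: does the pool's hot metadata plausibly fit in ~300MB
# of page cache? Assume ~300 bytes of cached inode + dentry data per
# file -- an assumed figure, not a measurement.
pool_files = 800_000            # files in the pool, per the numbers above
bytes_per_file = 300            # assumed cached metadata per file
total_mb = pool_files * bytes_per_file / 2**20
print(f"~{total_mb:.0f} MB of metadata")  # ~229 MB, comfortably under 300MB
```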
Obviously, *anything* that minimizes I/O load will help, because I'm I/O
bound. But as I'm sure you know, increasing cache gives you diminishing
returns as it increases. My guess is that I've reached that point at
300MB. For the heck of it, I'll see if I can find a 1GB stick to throw
in the server and see if it makes a difference. I'll post the results
when I've got them.
Yes, more RAM might cost me a whopping $30. But I can find no evidence
that this is going to help a *bit*. Even with only 512MB RAM, this system
is *not* being constrained by RAM.
It *is* I/O constrained. But the only way to improve this is with a RAID
array, and a RAID array with higher write performance than a single drive
is not automatic. The dramatic RAID 5 small-write penalty makes it
unattractive; RAID 1 isn't going to hurt write performance, but it isn't
going to help, either. That means going all the way to RAID 10: 4
drives, and only 50% usable space. It's cheaper to build a second
server. (That, and I use cute little boxes that won't accept 4
drives--hence the EPIA motherboard.)
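To put rough numbers on that trade-off: the 75 IOPS per drive is an assumed
figure for an old 7200 RPM PATA disk, and the penalties are the textbook
small-write costs (RAID 5: 4 back-end I/Os per write; RAID 1/10: 2), not
benchmarks of my hardware:

```python
# Rough effective random-write IOPS for the layouts discussed.
# drive_iops is an assumed figure for a 7200 RPM PATA drive; the
# write penalties are the textbook small-write costs per layout.
drive_iops = 75
layouts = {                     # name: (drives, back-end I/Os per write)
    "single drive": (1, 1),
    "RAID 1":       (2, 2),
    "RAID 5":       (4, 4),
    "RAID 10":      (4, 2),
}
for name, (drives, penalty) in layouts.items():
    print(f"{name:12s} ~{drives * drive_iops // penalty} write IOPS")
```

By this rough math, RAID 5 with four drives writes no faster than a single
disk, RAID 1 is a wash, and only RAID 10 actually buys write throughput --
which is why it's the only option worth considering, and why a second server
is cheaper.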
And no, RAID 0 is not an option. I don't mind the failure rate of a
single drive, but the failure rate of *any* one of multiple drives wiping
out my data is too much.