From: Graham K. <gr...@eq...> - 2011-01-27 15:48:58
On Thu, Jan 27, 2011 at 09:45:29AM -0500, Phil Stracchino wrote:
> On 01/27/11 09:28, Graham Keeling wrote:
> > I shall summarise what I thought I had already said, because it is quite
> > clear to me.
> >
> > You have a terabyte disk that you want to use for backups.
> >
> > You split it into 100 Volumes, set 10GB max volume size each, and 1 job
> > per volume.
> >
> > All your backup jobs are 5KB.
> > You can then only use 500KB of disk space before you run out of volumes.
>
> Assuming that you've also set max volumes = 100.
>
> However, I believe in this case the problem is summarized by PEBCAK. If
> all of your backup jobs are 5KB, this would be a completely
> brain-damaged way to set up your storage.

Please don't be deliberately obtuse. The point of the example is to use an
extreme to easily demonstrate real problems. I used the specific figure of
5KB to provide continuity with your own example, in which you used the same
figure.

The same problems exist in more realistic situations. Assuming that I
somehow know that all my backups will range from 100MB to 10GB, then what
should I set?

a) 10000 volumes, 100MB max size?
b) 100 volumes, 10GB max size?

a) gives me wasted space when a backup is not a multiple of 100MB in size,
and possible overhead problems due to the number of volumes.
b) gives me wasted space when a backup is not a multiple of 10GB in size.
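To make the arithmetic behind the extreme example concrete, here is a rough sketch (not Bacula code, just the capacity calculation) assuming one job per volume and that a job larger than one volume spills into additional volumes:

```python
def usable_space(num_volumes, max_volume_size, job_size):
    """Worst-case usable space when each volume is retired after
    holding a single job, regardless of remaining free space."""
    # A job larger than one volume spans multiple volumes.
    volumes_per_job = -(-job_size // max_volume_size)  # ceiling division
    jobs_that_fit = num_volumes // volumes_per_job
    return jobs_that_fit * job_size

KB, MB, GB = 1024, 1024**2, 1024**3

# The extreme example: 100 volumes of 10GB each, but every job is only 5KB.
# Only 100 jobs fit before the volumes run out: 100 * 5KB = 500KB used
# out of a terabyte of disk.
print(usable_space(100, 10 * GB, 5 * KB) // KB, "KB")

# Option b) from the realistic case: 100 volumes, 10GB max size, 100MB jobs.
# Again only 100 jobs fit: 100 * 100MB = 10GB used out of a terabyte.
print(usable_space(100, 10 * GB, 100 * MB) // GB, "GB")
```

The sketch only models the volume-count limit; it ignores label overhead and any space Bacula itself reserves, which only make the picture worse.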