From: Paul G. <pau...@xs...> - 2006-10-28 10:19:58
Dear all,

I run BackupPC on system 10.0.0.170 and would like to back up to system 10.0.0.254. System 10.0.0.254 can't run BackupPC, but it can be reached through FTP and SMB. Is that possible? And how?

I'm new to BackupPC, so please forgive me if the question has been answered earlier. I couldn't find an answer in the archive, though.

Regards,
Paul Guijt
From: Rodrigo R. <rr...@uc...> - 2006-10-28 13:25:11
"Paul Guijt" <pau...@xs...> writes:

Hi Paul,

> Dear all,
>
> I run BackupPC on system 10.0.0.170, and would like to backup to system
> 10.0.0.254. System 10.0.0.254 can't run BackupPC, but can be reached through
> ftp and SMB.

You want some sort of replication?

I think the main problem with replicating the BackupPC pool is that you must keep the hard links intact in the copy/synchronization process. I don't think that is (at least directly) possible with SMB or FTP; maybe you could do some magic using tar, but I know little about tar and hard links.

Can't you install rsync on 10.0.0.254? That would be the easiest way: rsync can preserve hard links with the -H option, and it would transfer only the differences between the two hosts.

Best wishes,
Rodrigo

> Is that possible? And how? I'm new to BackupPC, so please forgive me if the
> question has been answered earlier. I couldn't find an answer in the
> archive, though.
>
> Regards,
> Paul Guijt
>
> -------------------------------------------------------------------------
> Using Tomcat but need to do more? Need to support web services, security?
> Get stuff done quickly with pre-integrated technology to make your job easier
> Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
> http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642
> _______________________________________________
> BackupPC-users mailing list
> Bac...@li...
> https://lists.sourceforge.net/lists/listinfo/backuppc-users
> http://backuppc.sourceforge.net/
From: Tomasz C. <ma...@wp...> - 2006-10-28 16:29:45
Rodrigo Real wrote:
> "Paul Guijt" <pau...@xs...> writes:
>
> Hi Paul
>
>> Dear all,
>>
>> I run BackupPC on system 10.0.0.170, and would like to backup to system
>> 10.0.0.254. System 10.0.0.254 can't run BackupPC, but can be reached through
>> ftp and SMB.
>
> You want some sort of replication?
>
> I think that the main problem of replicating the BackupPC base is that
> you must keep the hard links in the copy/synchronization process. I
> think that is not (at least, directly) possible with SMB or ftp, maybe
> if you do some magic using tar, but still, I know nothing about tar and
> hard links.
>
> Can't you install rsync on 10.0.0.254? That should be the easiest way,
> rsync can preserve hard links with the -H option, additionally it
> would transfer only the differences between the two hosts.

I think he means something else: have the BackupPC process running on 10.0.0.170, and have the backups physically on 10.0.0.254.

What I do is run iscsi-target [1] on a storage server (a server with lots of space), and iscsi-initiator [2] on the server running BackupPC.

If that sounds complicated, perhaps the best way is to use NFS; CIFS (Samba) would be the next choice.

[1] http://iscsi-target.sf.net
[2] http://open-iscsi.org, use the latest svn

--
Tomasz Chmielewski
http://wpkg.org
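The NFS variant Tomasz mentions could look roughly like the untested fragment below. The export path and mount point are hypothetical examples, and both commands need root; treat this as a configuration sketch, not a recipe from the thread:

```shell
# On the storage server 10.0.0.254 -- /etc/exports (example path):
#   /export/backuppc  10.0.0.170(rw,sync,no_root_squash)
# then reload the export table:
#   exportfs -ra

# On the BackupPC host 10.0.0.170, mount the export over the pool
# directory (example path) before starting BackupPC:
mount -t nfs 10.0.0.254:/export/backuppc /var/lib/backuppc
```

With this layout, hard links are created on the remote filesystem by the NFS server itself, which sidesteps the replication problem entirely: there is only one copy of the pool.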
From: Carl W. S. <ch...@re...> - 2006-10-28 21:32:03
On 10/28 10:23 , Rodrigo Real wrote:
> Can't you install rsync on 10.0.0.254? That should be the easiest way,
> rsync can preserve hard links with the -H option, additionally it
> would transfer only the differences between the two hosts.

On the topic of server replication: the problem with using rsync is that it uses a tremendous amount of memory when dealing with millions of files. I tried it once, rsync'ing a 100 GB pool (with perhaps 5 or 6 million files) from one disk to another on a box with 512 MB RAM and perhaps 1 GB of swap.

The box eventually ran out of memory to the point that I had to power-cycle it to regain control. It's the only time in recent memory that I can think of a Linux box needing to be rebooted for a software problem.

I know of people who do it, but they're doing it on machines with more memory and fewer files.

--
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com
From: Rodrigo R. <rr...@uc...> - 2006-10-29 01:02:29
Carl Wilhelm Soderstrom <ch...@re...> writes:

> On 10/28 10:23 , Rodrigo Real wrote:
>> Can't you install rsync on 10.0.0.254? That should be the easiest way,
>> rsync can preserve hard links with the -H option, additionally it
>> would transfer only the differences between the two hosts.
>
> on the topic of server replication; the problem with using rsync is that it
> uses a tremendous amount of memory when dealing with millions of files. I
> tried it once; rsync'ing a 100GB pool (with perhaps 5 or 6 million files)
> from one disk to another on a box with 512MB RAM and perhaps 1GB swap.

I have never tried that, but I can't think of another way of doing it. I'm sure it is hard to rsync two huge masses of files. Maybe it is possible to split the rsync process into parts, but it would still be hard.

Best wishes,
Rodrigo

> the box eventually ran out of memory to the point that I had to power-cycle
> it to regain control. it's the only time in recent memory that I can think
> of a linux box needing to be rebooted for a software problem.
>
> I know of people who do it; but they're doing it on machines with more
> memory and fewer files.
>
> --
> Carl Soderstrom
> Systems Administrator
> Real-Time Enterprises
> www.real-time.com
From: Tomasz C. <ma...@wp...> - 2006-10-29 09:42:20
Rodrigo Real wrote:
> Carl Wilhelm Soderstrom <ch...@re...> writes:
>
>> On 10/28 10:23 , Rodrigo Real wrote:
>>> Can't you install rsync on 10.0.0.254? That should be the easiest way,
>>> rsync can preserve hard links with the -H option, additionally it
>>> would transfer only the differences between the two hosts.
>>
>> on the topic of server replication; the problem with using rsync is that it
>> uses a tremendous amount of memory when dealing with millions of files. I
>> tried it once; rsync'ing a 100GB pool (with perhaps 5 or 6 million files)
>> from one disk to another on a box with 512MB RAM and perhaps 1GB swap.
>
> I had never tried that, but I can't think of another way of doing
> it. I am sure it is hard to rsync two huge masses of files. Maybe it is
> possible to split the rsync process in some parts, but it will still
> be hard.
>
>> the box eventually ran out of memory to the point that I had to power-cycle
>> it to regain control. it's the only time in recent memory that I can think
>> of a linux box needing to be rebooted for a software problem.
>>
>> I know of people who do it; but they're doing it on machines with more
>> memory and fewer files.

Just add lots of swap.

I was able to rsync an archive with several million files on a machine with just 256 MB RAM; it had several gigabytes of swap.

--
Tomasz Chmielewski
http://wpkg.org
From: Les S. <le...@cy...> - 2006-10-29 09:55:27
>>> I know of people who do it; but they're doing it on machines with more
>>> memory and fewer files.
>
> Just add lots of swap.
>
> I was able to rsync an archive with several million files on a machine
> with just 256 MB RAM; it had several gigabytes of swap.

Out of curiosity, how long did it take, and how big was the entire data set on the first run? Did it eat up all the CPU time while running?

Les
From: James W. <ja...@se...> - 2006-10-29 18:19:17
In an effort to reduce the time spent doing BackupPC_nightly, I upped the number run in parallel from 2 to 8. Now the system is 99% wait. What is the best mix: half wait and half CPU bound?

I'm concerned about performance because I am backing up over 200 machines, and so far I'm only getting around to each machine about once a week. Any performance tuning ideas you can give me would be appreciated. As I said before, I also eliminated the blackout window.
From: Les M. <le...@fu...> - 2006-10-29 18:58:33
On Sun, 2006-10-29 at 12:19, James Ward wrote:
> In an effort to reduce the time spent doing the BackupPC_nightly, I
> upped the number done in parallel from 2 to 8. Now the system is 99%
> wait. What is the best mix, half wait and half CPU bound?

Disk head motion is bound to be the bottleneck here, and that's pretty much single-threaded. The only large improvements could come from sorting the operations in inode order (didn't someone do that some time ago?) or spreading the disk operations over more heads with RAID 0 or LVM. If you are using RAID 5, that's probably the main problem.

> I'm concerned about performance because I am backing up over 200
> machines and so far, I'm only getting around to each machine about
> once a week. Any performance tuning ideas you can give me would be
> appreciated. As I said before, I also eliminated the blackout window.

The other thing that might be improved is the backup speed. Rsync can be slow even for incrementals if the runs span huge numbers of files. Splitting them into separate filesystem or directory runs might help. If that's not possible, adding RAM to the server might help. If you have the bandwidth, tar might be faster on the incrementals, and you can stagger the fulls or try to get them on weekends.

--
Les Mikesell
le...@fu...
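For reference, the nightly parallelism James changed lives in config.pl. An illustrative fragment, assuming a BackupPC version that supports these options; the values are examples, not recommendations from the thread:

```perl
# Illustrative config.pl fragment -- example values only.
# Fewer parallel nightly jobs means less disk-head contention;
# spreading the pool traversal over several nights shortens each run.
$Conf{MaxBackupPCNightlyJobs} = 2;   # parallel BackupPC_nightly processes
$Conf{BackupPCNightlyPeriod}  = 4;   # traverse the whole pool over 4 nights
```

On a seek-bound pool disk, raising MaxBackupPCNightlyJobs past the number of independent spindles mostly adds contention, which matches the 99%-wait symptom described above.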
From: Mark W. <ma...@ma...> - 2006-10-30 00:39:35
What about using two BackupPC servers? Split all the backups over two physical servers: one server does 100 PCs, the other server does the other 100.

James Ward wrote:
> In an effort to reduce the time spent doing the BackupPC_nightly, I
> upped the number done in parallel from 2 to 8. Now the system is 99%
> wait. What is the best mix, half wait and half CPU bound?
>
> I'm concerned about performance because I am backing up over 200
> machines and so far, I'm only getting around to each machine about
> once a week. Any performance tuning ideas you can give me would be
> appreciated. As I said before, I also eliminated the blackout window.
From: Tomasz C. <ma...@wp...> - 2006-10-29 10:13:25
Les Stott wrote:
>>>> I know of people who do it; but they're doing it on machines with more
>>>> memory and fewer files.
>>
>> Just add lots of swap.
>>
>> I was able to rsync an archive with several million files on a machine
>> with just 256 MB RAM; it had several gigabytes of swap.
>
> Out of curiosity how long did it take and how big was the entire data
> size on the first run? did it eat up all the cpu time when running?

As I remember, it took a couple of hours, but I was copying from one HDD to another. It was about 80 GB or so, full of hard links.

The machine was pretty responsive, but I had to increase the swap size several times and start from scratch again.

--
Tomasz Chmielewski
http://wpkg.org
From: Les M. <le...@fu...> - 2006-10-29 15:36:22
On Sun, 2006-10-29 at 04:13, Tomasz Chmielewski wrote:
>> Out of curiosity how long did it take and how big was the entire data
>> size on the first run? did it eat up all the cpu time when running?
>
> As I remember, it took a couple of hours; but I was copying from one HDD
> to another.
> It was about 80 GB or so, full of hardlinks.
>
> The machine was pretty responsive, but I had to increase the swap size
> several times and start from scratch again.

Did you use the -H option with rsync to maintain the hard links?

--
Les Mikesell
le...@fu...
From: Tomasz C. <ma...@wp...> - 2006-10-29 15:39:09
Les Mikesell wrote:
> On Sun, 2006-10-29 at 04:13, Tomasz Chmielewski wrote:
>
>>> Out of curiosity how long did it take and how big was the entire data
>>> size on the first run? did it eat up all the cpu time when running?
>>
>> As I remember, it took a couple of hours; but I was copying from one HDD
>> to another.
>> It was about 80 GB or so, full of hardlinks.
>>
>> The machine was pretty responsive, but I had to increase the swap size
>> several times and start from scratch again.
>
> Did you use the -H option with rsync to maintain the hardlinks?

Yes.

--
Tomasz Chmielewski
http://wpkg.org
From: Les M. <le...@fu...> - 2006-10-29 16:27:45
On Sun, 2006-10-29 at 09:38, Tomasz Chmielewski wrote:
>>>> Out of curiosity how long did it take and how big was the entire data
>>>> size on the first run? did it eat up all the cpu time when running?
>>>
>>> As I remember, it took a couple of hours; but I was copying from one HDD
>>> to another.
>>> It was about 80 GB or so, full of hardlinks.
>>>
>>> The machine was pretty responsive, but I had to increase the swap size
>>> several times and start from scratch again.
>>
>> Did you use the -H option with rsync to maintain the hardlinks?
>
> Yes.

The time is probably more related to the number of files with links than to the size of the archive. I tried this with an archive of about 100 GB and gave up after it ran for three days without completing.

--
Les Mikesell
le...@fu...