From: Robin Lee Powell <rlpowell@di...> - 2008-02-05 21:38:40

I've been using my own scripts (http://digitalkingdom.org/~rlpowell/hobbies/backups.html) to remotely mirror backuppc's data in an encrypted fashion. The problem is that the time rsync takes seems to keep growing. I expect this to continue more or less without bound, and it's already pretty onerous. So I need to find a way to rsync on a per-file basis, but still be encrypted. Hence, questions:

1. Does anyone know a way to mount a filesystem so that only one process can see it? If I could do that, I could have an encrypted loop filesystem mounted remotely for rsync's sole use, and just rsync into that. I'm not willing to have the encrypted filesystem mounted globally on the remote machine, as I am not the sole user, nor do I own it.

2. Does backuppc ever change pool files (rather than simply replace them)? If the answer is no, I don't need to worry about the rsync-friendliness of any per-file encryption method I might use.

3. Ignoring logs and such, is anything outside of the pool or cpool dir ever *not* a hard link into the pool/cpool dir? If the answer is no, then per-file encryption is relatively easy. One way is to rsync -H a copy of all backuppc data, encrypt each pool file (in a way that doesn't break hard links), and then rsync that encrypted copy out remotely. Another way is to roll encryption into the backuppc compression program.

Can anyone think of other ways to solve this problem? Thanks.

-Robin

--
Lojban Reason #17: http://en.wikipedia.org/wiki/Buffalo_buffalo
Proud Supporter of the Singularity Institute - http://singinst.org/
http://www.digitalkingdom.org/~rlpowell/ *** http://www.lojban.org/
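Question 1 (a filesystem only one process can see) can be approximated on Linux with a private mount namespace via unshare(1) from current util-linux; a hedged sketch, where the image path and mount point are illustrative names and the loop mount stands in for whatever encrypted container is actually in use:

```shell
# Sketch (requires root): run rsync inside a private mount namespace so
# the decrypted filesystem is invisible to the rest of the machine.
# /srv/backup.img and /mnt/backup-priv are illustrative names.
unshare --mount sh -c '
  mount --make-rprivate /                  # keep our mounts out of the host view
  mount -o loop /srv/backup.img /mnt/backup-priv
  rsync -aH /var/lib/backuppc/ /mnt/backup-priv/
  umount /mnt/backup-priv
'
```

Once the shell exits, the mount disappears with the namespace; other users on the remote machine never see the decrypted tree.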
From: Gilles Guiot <g_guiot@ho...> - 2008-02-05 20:16:42

Hello, thanks for the reply. Just solved the problem: I completely uninstalled backuppc and apache (remove --purge + rm backuppc), then installed apache2 (I just noted it was the installed package when I ran apt-get install apache), then selected apache2 at the backuppc installation prompt... et voilà. Silly mistake on my part: apache and apache2 have indeed different conf files. This one is solved :)

> From: backuppc@...
> To: g_guiot@...; backuppc-users@...
> Subject: RE: [BackupPC-users] (no subject)
> Date: Tue, 5 Feb 2008 18:53:03 +0000
>
> Being a novice I hesitate to give advice but I had the same problem which
> I solved by adding a link apache.conf to backuppc.conf as I found on this
> site: http://wiki.debian.org/DebianEdu/HowTo/BackupPC
> Then restart apache. Hope that helps
>
> GerryMc
>
> -----Original Message-----
> From: backuppc-users-bounces@... [mailto:backuppc-users-bounces@...] On Behalf Of Gilles Guiot
> Sent: 05 February 2008 16:29
> To: backuppc-users@...
> Subject: [BackupPC-users] (no subject)
>
> Hello all, I'm on a server installed with debian testing (iso in date of
> 2008/02/04). I installed the latest version of backup PC by apt-get.
> Everything went fine, like with other installs of backuppc. Only glitch
> is that i can't access the interface http://localhost/backuppc I keep
> getting a message : "the requested url /backuppc wasn't found on this
> server. Apache says it's working fine, and the LOG in
> var/lib/backuppc/log only tells me =
From: Rich Rauenzahn <rich@sh...> - 2008-02-05 19:37:30

> There are several secure ways to set up a read-only backup system, but
> that loses the convenience of browsing and restoring files via the web
> interface. But, users can still directly download files or tar archives,
> so it is a reasonable approach, and probably the right thing to do for now.

And, if I do need to restore a system, I can temporarily change the read-only attribute in rsyncd.conf -- or do it by hand. I like that manual step of root intervention.

Rich
From: Joe Krahn <krahn@ni...> - 2008-02-05 19:32:26

Rich Rauenzahn wrote:
> Joe Krahn wrote:
>> (Maybe this should be posted to -devel?)
>> Unrestricted remote root access by a non-root user is generally not a
>> secure design. There are many ways to restrict the access to backup
>
> This seems like a good chance to explain how I handle the rsync security
> -- I prefer it over the sudo method and did not like the idea of a
> remote ssh root login.
>
> For remote backups, I setup a nonpriv account that I configure for
> password-less login from the backup server. I then setup rsyncd to
> listen only on localhost on the remote host. I also set an
> rsyncd.secrets file and configure the rsyncd.conf shares to be read-only.
> To backup, I create a tunnel using the password-less login and then
> backup over the tunnel. For local backups, you obviously don't need the
> tunnel -- just connect to localhost.
>
> Rich

There are several secure ways to set up a read-only backup system, but that loses the convenience of browsing and restoring files via the web interface. But users can still directly download files or tar archives, so it is a reasonable approach, and probably the right thing to do for now.

Joe
From: GerryMc <backuppc@cl...> - 2008-02-05 18:53:06

Being a novice I hesitate to give advice but I had the same problem which I solved by adding a link apache.conf to backuppc.conf as I found on this site: http://wiki.debian.org/DebianEdu/HowTo/BackupPC Then restart apache. Hope that helps

GerryMc

-----Original Message-----
From: backuppc-users-bounces@... [mailto:backuppc-users-bounces@...] On Behalf Of Gilles Guiot
Sent: 05 February 2008 16:29
To: backuppc-users@...
Subject: [BackupPC-users] (no subject)

Hello all, I'm on a server installed with debian testing (iso in date of 2008/02/04). I installed the latest version of backup PC by apt-get. Everything went fine, like with other installs of backuppc. Only glitch is that i can't access the interface http://localhost/backuppc I keep getting a message : "the requested url /backuppc wasn't found on this server. Apache says it's working fine, and the LOG in var/lib/backuppc/log only tells me =
From: Rich Rauenzahn <rich@sh...> - 2008-02-05 17:47:12

Joe Krahn wrote:
> (Maybe this should be posted to -devel?)
> Unrestricted remote root access by a non-root user is generally not a
> secure design. There are many ways to restrict the access to backup

This seems like a good chance to explain how I handle the rsync security -- I prefer it over the sudo method and did not like the idea of a remote ssh root login.

For remote backups, I setup a nonpriv account that I configure for password-less login from the backup server. I then setup rsyncd to listen only on localhost on the remote host. I also set an rsyncd.secrets file and configure the rsyncd.conf shares to be read-only. To backup, I create a tunnel using the password-less login and then backup over the tunnel. For local backups, you obviously don't need the tunnel -- just connect to localhost.

Rich
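Rich's setup can be sketched roughly as follows; the module name, file paths, and forwarded port are illustrative assumptions, not details from the thread:

```shell
# --- On the remote host: rsyncd bound to loopback only, read-only ---
# /etc/rsyncd.conf (illustrative):
#   address = 127.0.0.1
#   [root]
#       path = /
#       read only = yes
#       auth users = backuppc
#       secrets file = /etc/rsyncd.secrets
#
# --- On the backup server: forward a local port to the remote loopback
# --- rsyncd over the password-less, non-privileged ssh login ---
ssh -f -N -L 8873:127.0.0.1:873 backup@client.example.com

# BackupPC (or a manual check) then talks rsyncd protocol to the tunnel:
rsync --port=8873 backuppc@127.0.0.1::root/etc/hostname /tmp/
```

Because the daemon never listens on a public interface and the shares are read-only, a compromised backup server can read the client but cannot push changes to it.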
From: Joe Krahn <krahn@ni...> - 2008-02-05 17:21:16

(Maybe this should be posted to -devel?)

Unrestricted remote root access by a non-root user is generally not a secure design. There are many ways to restrict the access to backup activities, but they can't be enforced if the access includes unrestricted write access. I think that the secure approach is to require that restores be run by root from the local machine, rather than allowing a remote push. (Isn't that true for other backup systems?)

I think the best approach is for remote restores to be allowed for non-privileged files, but run under user account access from the user requesting the restore. Remote restoration of privileged files should require some sort of authentication from the local root account. This should not be too hard to set up using ssh restrictions, if BackupPC includes the user name as one of the arguments substituted in the backup command, plus some user ssh key management.

You can restrict remote-root access to read-only using the command= setting in the ssh authorized_keys file. It runs a pre-defined command in place of the requested ssh command. The proxy command could handle authentication for write access, or you could just require that restores are handled by downloading a tar/zip archive, or restored to a chrooted temporary directory.

Does this sound like a good plan to other BackupPC users? Most of this can be done just by getting a $User variable into the rsync command substitutions. To do it well, BackupPC needs user-specific configurations to handle the ssh keys for each user. That will also allow for user-specific e-mail settings. It is also good to allow different user names for the same person; we have several people with Linux user names that are different from their Windows domain user names.

I think that these would be fairly easy to implement for someone familiar with the BackupPC source code.

Joe Krahn
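The command= restriction described above is a forced command in the client's ~root/.ssh/authorized_keys, e.g. `command="/usr/local/bin/validate-rsync",no-port-forwarding,no-pty ssh-rsa AAAA... backuppc@server`. A minimal sketch of such a wrapper, written here as a shell function (the script name above and the exact rsync argument pattern are illustrative assumptions):

```shell
# sshd runs the forced command instead of whatever the client requested,
# exporting the requested command line in SSH_ORIGINAL_COMMAND. This
# sketch only lets a read-only rsync *sender* invocation through.
validate_rsync() {
  case "$SSH_ORIGINAL_COMMAND" in
    rsync\ --server\ --sender\ *)   # read-only: remote side only sends files
      exec $SSH_ORIGINAL_COMMAND
      ;;
    *)
      echo "Access denied" >&2
      return 1
      ;;
  esac
}
```

Installed for real, the function body would live in a small script referenced from authorized_keys; write access (restores) would then need a separate, authenticated path, as suggested above.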
From: Gilles Guiot <g_guiot@ho...> - 2008-02-05 16:29:26

Hello all, I'm on a server installed with debian testing (iso in date of 2008/02/04). I installed the latest version of BackupPC by apt-get. Everything went fine, like with other installs of backuppc. The only glitch is that I can't access the interface at http://localhost/backuppc; I keep getting the message: "the requested url /backuppc wasn't found on this server". Apache says it's working fine, and the LOG in var/lib/backuppc/log only tells me:

2008-02-05 16:37:09 Reading hosts file
2008-02-05 16:37:09 Added host localhost to backup list
2008-02-05 16:37:09 BackupPC started, pid 4834
2008-02-05 16:37:09 Running BackupPC_trashClean (pid=4835)
2008-02-05 16:37:09 Next wakeup is 2008-02-05 17:00:00
2008-02-05 16:37:58 Got signal TERM... cleaning up
2008-02-05 16:39:38 Reading hosts file
2008-02-05 16:39:38 BackupPC started, pid 2715
2008-02-05 16:39:39 Running BackupPC_trashClean (pid=2721)
2008-02-05 16:39:39 Next wakeup is 2008-02-05 17:00:00
2008-02-05 17:00:00 Next wakeup is 2008-02-05 18:00:00
2008-02-05 17:00:00 Started full backup on localhost (pid=4017, share=/etc)
2008-02-05 17:00:01 Finished full backup on localhost
2008-02-05 17:00:01 Running BackupPC_link localhost (pid=4028)
2008-02-05 17:00:02 Finished localhost (BackupPC_link localhost)

I guess there is just a clumsy error on my part, but I can't see it. I would be grateful for any kind mind lending me a hand here...

Gilles
From: Jonathan Dumaresq <jdumaresq@ci...> - 2008-02-05 14:36:34

Wow, good explanation here. I will try to answer some of the questions:

1. The OS I plan to use is Ubuntu (gutsy) server edition.
2. The expansion card I have is a PERC/II, which I think is a RAID controller. It has 4 channels, of which I use 3.

If I understand my RAID card correctly, I don't think I will be able to do a 16-HDD array, since the drives are not all on the same channel. But I could be wrong on that; this is nearly the first time I've played with hardware RAID (I use software RAID on Linux for mirroring). I have 1 GB of RAM on the server.

The array setup is as follows:
- 1 array of 2 HDDs
- 1 array of 6 HDDs
- 1 array of 10 HDDs

Jonathan

_____

De : backuppc-users-bounces@... [mailto:backuppc-users-bounces@...] De la part de dan
Envoyé : 4 février 2008 22:15
À : Justin Best
Cc : backuppc-users
Objet : Re: [BackupPC-users] Information needed

My suggestion is: for the OS, 2 disks in RAID1, which is a mirror. You will likely have no use for high performance here; all the work will be done on the other array(s) for samba.

Have you considered putting all 16 drives into 1 array, a RAID6 + hot spare? RAID6 being a RAID with 2 redundant disks. I say this because you will likely be doing backups at night and samba sharing during the day, so you will get better performance and more flexible file storage. On top of the larger 16-drive RAID6+HS you would then run LVM, and you could separate out the volumes to your liking. This would give you roughly 500GB unformatted with redundancy.

As for the filesystem, I would recommend EITHER ext3, as it is so standard and very reliable, OR XFS. XFS is a very good filesystem that is very LVM friendly and can be grown without unmounting. It is also very good and very fast at making and deleting files and hardlinks, which is what backuppc does. XFS *CAN* be significantly faster than ext3 for backuppc. I personally have XFS on 1 backuppc server and ext3 on another; they are both exceptional.
You did not mention the OS you will run here; I assumed Linux. You do have some very good choices in Linux, FreeBSD, or Nexenta/OpenSolaris. I run Linux servers for production but am testing a FreeBSD and a Nexenta server, specifically for network performance and the ZFS filesystem. FreeBSD and Nexenta both have a faster network stack than Linux, which I have noticed, but only slightly (2-5% or so?). ZFS is pretty awesome though; it's like software RAID + LVM + a 6-pack of Red Bull. They both also have the classic Unix filesystem UFS, which is pretty good; you could compare its performance and reliability to ext3, though they are not very similar under the hood.

Also, you didn't mention if you were using a hardware RAID card or just a SCSI card. If you need to run softraid, then any OS is suitable. If you choose Linux, you can use the md system and have a pretty fast softraid, though it will likely consume one of those processors when you hit the disk hard. If you go with Nexenta or FreeBSD, you will use ZFS **BUT** you will need 1+ GB of RAM. ZFS uses a lot of RAM, especially if you use filesystem-level compression (a nice benefit: you can turn off compression in backuppc and let the filesystem do the work; compressed ZFS is faster than backuppc's compression, although it will tax a CPU pretty hard). ZFS is multithreaded while backuppc's compression is not, so you would likely see faster compression with ZFS with 4 CPUs doing the work, but you probably only have 1 PCI bus, which means you will have a hard limit of about 66MB/s due to the 132MB/s PCI bus.

Hope I could help.

On Feb 4, 2008 3:40 PM, Justin Best <justinb@...> wrote:
> > BackupPC uses *hardlinks* for pooling.
>
> DOH! You are so right. Being mainly a Windows admin, I don't think I ever
> was completely clear on the difference between hardlinks and symlinks
> until a few minutes ago, when I looked it up.
> For anyone else who is confused on hardlinks vs softlinks, I would
> recommend the following page: http://linuxgazette.net/105/pitcher.html

-------------------------------------------------------------------------
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2008.
http://clk.atdmt.com/MRT/go/vse0120000070mrt/direct/01/
_______________________________________________
BackupPC-users mailing list
BackupPC-users@...
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/
From: Neil Wilson <neilw@dc...> - 2008-02-05 11:39:56

Lammersdorf, Lorenz wrote:
> good morning,
>
> i'm not very familiar with suse, i'm more into debian. usually apache2
> creates two directories called "mods-available" and "mods-enabled".
> depending on your installation, you'll find there perl.load containing
> just one line of code:
>
> LoadModule perl_module /usr/lib/apache2/modules/mod_perl.so
>
> if mod_perl.so is not installed, you're not able to run perl scripts via
> apache. take a look at this:
> http://www.novell.com/products/linuxpackages/suselinux/apache2-mod_perl.html
> (it's an older version of opensuse, but it works for your installation too).
>
> regards
> lorenz

Thanks again for your help. I've looked through my apache2 under Suse, and I've found the following.

/etc/apache2/sysconfig.d/loadmodule.conf contains this line, along with a whole bunch of other lines:

LoadModule perl_module /usr/lib64/apache2/mod_perl.so

and the module exists under the /usr/lib64/apache2/ directory, so this should already be working. Could there be something wrong with my /etc/apache2/conf.d/mod_perl.conf? Below is the output of this file.

####################################################
<Directory "/srv/www/perl-lib">
    AllowOverride None
    Options None
    Order allow,deny
    Deny from all
</Directory>

<IfModule mod_perl.c>
    PerlRequire "/etc/apache2/mod_perl-startup.pl"

    ScriptAlias /perl/ "/srv/www/cgi-bin/"
    <Location /perl/>
        # mod_perl mode
        SetHandler perl-script
        PerlResponseHandler ModPerl::Registry
        PerlOptions +ParseHeaders
        Options +ExecCGI
    </Location>

    ScriptAlias /cgi-perl/ "/srv/www/cgi-bin/"
    <Location /cgi-perl>
        # perl cgi mode
        SetHandler perl-script
        PerlResponseHandler ModPerl::PerlRun
        PerlOptions +ParseHeaders
        Options +ExecCGI
    </Location>

    # The /cgi-bin/ ScriptAlias is already set up in httpd.conf
</IfModule>
# vim: ft=apache
####################################################

httpd.conf references default-server.conf for more options, and in my default apache2/default-server.conf I have the following for ScriptAlias.
<Directory "/srv/www/cgi-bin">
    AllowOverride None
    Options +ExecCGI -Includes
    Order allow,deny
    Allow from all
</Directory>

Thanks again for all your help.

Regards,
Neil.

--
This message has been scanned for viruses and dangerous content by
MailScanner, and is believed to be clean.
From: Neil Wilson <neilw@dc...> - 2008-02-05 07:57:34

Lammersdorf, Lorenz wrote:
> neil,
>
> have you installed apache with perl support? on apache2 you need to
> activate ./mods-available/perl.load (just place a symlink in
> ./mods-enabled) and you need to include "SetHandler perl-script" into
> your conf-file.
>
> regards
> lorenz

Thanks for your assistance Lorenz. I've looked around at this for quite a while, but I can't find anything relating to perl.load, or a place to put a symlink in mods-enabled. I already have "SetHandler perl-script" set in my vhost config.

In my /etc/sysconfig/apache2 I have the following, which to me sounds like perl is loaded (there are a lot more modules; I've just excluded them here):

APACHE_MODULES=".....php5 perl python....."

I'm using SLES 10, but I have a similar problem under Suse 10.2, so if anyone has anything that can guide me in the right direction under Suse in general then please assist; I'll be most grateful.

Thanks.

Regards,
Neil.
From: dan <dandenson@gm...> - 2008-02-05 03:17:10

My suggestion is: for the OS, 2 disks in RAID1, which is a mirror. You will likely have no use for high performance here; all the work will be done on the other array(s) for samba.

Have you considered putting all 16 drives into 1 array, a RAID6 + hot spare? RAID6 being a RAID with 2 redundant disks. I say this because you will likely be doing backups at night and samba sharing during the day, so you will get better performance and more flexible file storage. On top of the larger 16-drive RAID6+HS you would then run LVM, and you could separate out the volumes to your liking. This would give you roughly 500GB unformatted with redundancy.

As for the filesystem, I would recommend EITHER ext3, as it is so standard and very reliable, OR XFS. XFS is a very good filesystem that is very LVM friendly and can be grown without unmounting. It is also very good and very fast at making and deleting files and hardlinks, which is what backuppc does. XFS *CAN* be significantly faster than ext3 for backuppc. I personally have XFS on 1 backuppc server and ext3 on another; they are both exceptional.

You did not mention the OS you will run here; I assumed Linux. You do have some very good choices in Linux, FreeBSD, or Nexenta/OpenSolaris. I run Linux servers for production but am testing a FreeBSD and a Nexenta server, specifically for network performance and the ZFS filesystem. FreeBSD and Nexenta both have a faster network stack than Linux, which I have noticed, but only slightly (2-5% or so?). ZFS is pretty awesome though; it's like software RAID + LVM + a 6-pack of Red Bull. They both also have the classic Unix filesystem UFS, which is pretty good; you could compare its performance and reliability to ext3, though they are not very similar under the hood.

Also, you didn't mention if you were using a hardware RAID card or just a SCSI card. If you need to run softraid, then any OS is suitable.
If you choose Linux, you can use the md system and have a pretty fast softraid, though it will likely consume one of those processors when you hit the disk hard. If you go with Nexenta or FreeBSD, you will use ZFS **BUT** you will need 1+ GB of RAM. ZFS uses a lot of RAM, especially if you use filesystem-level compression (a nice benefit: you can turn off compression in backuppc and let the filesystem do the work; compressed ZFS is faster than backuppc's compression, although it will tax a CPU pretty hard). ZFS is multithreaded while backuppc's compression is not, so you would likely see faster compression with ZFS with 4 CPUs doing the work, but you probably only have 1 PCI bus, which means you will have a hard limit of about 66MB/s due to the 132MB/s PCI bus.

Hope I could help.

On Feb 4, 2008 3:40 PM, Justin Best <justinb@...> wrote:
> > BackupPC uses *hardlinks* for pooling.
>
> DOH! You are so right.
>
> Being mainly a Windows admin, I don't think I ever was completely
> clear on the difference between hardlinks and symlinks until a few
> minutes ago, when I looked it up. For anyone else who is confused on
> hardlinks vs softlinks, I would recommend the following page:
>
> http://linuxgazette.net/105/pitcher.html
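The RAID6 + hot spare + LVM + XFS layout suggested above might be built roughly like this on Linux; the device names, volume group name, and sizes are illustrative assumptions:

```shell
# Sketch (requires root): 16 disks as RAID6 with one hot spare, LVM on
# top, one XFS filesystem per logical volume. /dev/sd[b-q] and the
# sizes below are illustrative.
mdadm --create /dev/md0 --level=6 --raid-devices=15 --spare-devices=1 /dev/sd[b-q]
pvcreate /dev/md0
vgcreate backupvg /dev/md0
lvcreate -n backuppc -L 300G backupvg
lvcreate -n samba -L 150G backupvg
mkfs.xfs /dev/backupvg/backuppc
mkfs.xfs /dev/backupvg/samba
# XFS can be grown online after extending the LV:
#   lvextend -L +50G /dev/backupvg/backuppc
#   xfs_growfs /var/lib/backuppc
```

The online-growth step at the end is the flexibility dan is pointing at: volumes can be resized later without unmounting the backup store.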
From: dan <dandenson@gm...> - 2008-02-05 02:53:09

Both options above work well, but I prefer the mount on /var/lib/backuppc method.

On Feb 4, 2008 1:03 PM, Nils Breunese (Lemonbit) <nils@...> wrote:
> Jocke wrote:
>
> > I also am thinking of moving the backuppc storage and have read
> > about the new mount. I have backuppc installed, so what should be
> > the work process here?
> >
> > 1. Backup /var/lib/backuppc with tar
> > 2. Delete /var/lib/backuppc
> > 3. Mount the new disk at /var/lib/backuppc
> > 4. Restore the tar'ed backup at /var/lib/backuppc
> >
> > Would that be correct?
>
> That ought to work. You could also move the data in one go:
>
> (0. Stop BackupPC)
> 1. Mount the new drive somewhere (e.g. /mnt/temp)
> 2. Move all data from the old to the new location
> 3. Unmount the old and new drive
> 4. Mount the new drive on /var/lib/backuppc
> (5. Start BackupPC)
>
> Nils Breunese.
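The in-one-go steps quoted above could look like this on a typical install; the device name and temporary mount point are illustrative assumptions, and note that whatever copies the pool must preserve hard links:

```shell
# Sketch (requires root); /dev/sdb1 and /mnt/temp are illustrative names.
/etc/init.d/backuppc stop            # 0. stop BackupPC first
mount /dev/sdb1 /mnt/temp            # 1. mount the new drive somewhere
cp -a /var/lib/backuppc/. /mnt/temp/ # 2. copy the data; -a preserves hard
                                     #    links, ownership, and permissions
umount /mnt/temp                     # 3. unmount the new drive
mount /dev/sdb1 /var/lib/backuppc    # 4. remount it at the BackupPC path
/etc/init.d/backuppc start           # then start BackupPC again
```

Preserving hard links is the critical part: copying the pool without them (e.g. plain `cp -r`) would multiply the disk usage by the number of backups.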