From: Carl W. S. <ch...@re...> - 2004-07-22 16:38:17
On 07/22 12:22, Tony Nelson wrote:
> How would you calculate a summary time for that? Just add it all up?

actually, on second thought, what I'm looking for is the total amount of
time that the server spent *idle*. so if it takes 20 hours of working time
every day (even if broken up into several sections) to back up all the
machines which are supposed to be backed up (letting the queue take care
of keeping the machine as busy as possible), I know I have some headroom
to add more machines (albeit at the cost of some backups possibly running
over into hours when the machines might be in use, and/or missing their
allowed backup time entirely).

if the idle timer shows 0.0 minutes free throughout the day, though, I can
go to the management and say "this backup server does not have the
necessary processing power, memory, and disk speed to keep up with our
backup needs. it takes 25 hours to do a day's worth of backups, and some
people's data is being missed."

I'm trying to back up 400GB of data on 42 hosts, with a 750MHz Duron and a
bunch of IDE disks. I have a $0 budget for this thing, but it's just too
damn useful, so I put it together from off-the-shelf parts. the backup
speed limitation is definitely on the server side, since with 2 backups
running in parallel, it shows a system load of 2 and 0% free CPU.

thanks for asking tho. :)

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com
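A rough sketch of how that idle figure could be computed from BackupPC's
own records follows. It assumes the conventional layout in which each
pc/<host>/backups file is tab-separated with the start and end epoch times
in the third and fourth fields; that field order (and the $TopDir path)
should be verified against your installation before trusting the numbers.

    #!/usr/bin/perl
    # Sketch: estimate how much of the last 24 hours the server was idle
    # by merging the busy intervals recorded in each host's backups file.
    # ASSUMPTION: fields 2 and 3 of each tab-separated line are the start
    # and end times in epoch seconds -- check against your version.
    use strict;
    use warnings;

    my $TopDir = "/data/BackupPC";      # adjust to your TopDir
    my $now    = time();
    my $since  = $now - 24 * 3600;
    my @busy;

    foreach my $f (glob("$TopDir/pc/*/backups")) {
        open(my $fh, "<", $f) or next;
        while (<$fh>) {
            my ($start, $end) = (split /\t/)[2, 3];
            next unless $start && $end && $end > $since;
            $start = $since if $start < $since;   # clamp to the window
            push @busy, [$start, $end];
        }
        close($fh);
    }

    # Merge overlapping intervals so parallel backups count only once.
    @busy = sort { $a->[0] <=> $b->[0] } @busy;
    my ($busySecs, $curS, $curE) = (0);
    foreach my $iv (@busy) {
        if (defined $curE && $iv->[0] <= $curE) {
            $curE = $iv->[1] if $iv->[1] > $curE;
        } else {
            $busySecs += $curE - $curS if defined $curS;
            ($curS, $curE) = @$iv;
        }
    }
    $busySecs += $curE - $curS if defined $curS;

    printf "busy %.1f h, idle %.1f h over the last 24 hours\n",
           $busySecs / 3600, (24 * 3600 - $busySecs) / 3600;

If the idle figure hovers near zero, the queue is saturated and the
server, not the schedule, is the bottleneck -- exactly the management
argument described above.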
From: Tony N. <tn...@st...> - 2004-07-22 16:22:20
Quoting Carl Wilhelm Soderstrom <ch...@re...>:
[snip]
> -- perhaps a 'summary time' as well, so I know how close to 24 hours it's
> taking me to back up all my machines. :)

Hey Carl, long time..

Not sure what you mean by 'summary time'.. and how on earth you would
calculate it for me.. I let BackupPC schedule most of my backups, so it
starts chugging around 6PM.. and around midnight it's done with those..
Then at 2am I manually fire one backup up which ran for 33 minutes last
night.. and at 3am I back up our database servers (they dump at 2).. these
finished at 3:46..

How would you calculate a summary time for that? Just add it all up? Does
that really have any meaningful value? Or am I paddling my boat on
concrete again?

Tony
From: Carl W. S. <ch...@re...> - 2004-07-22 16:11:16
yeah, like you need more of these, I'm sure...

in the Host Summary:

- it would be really nice to be able to sort the hosts by a particular
  column. for example, click on a column heading, and the table gets
  sorted and displayed in ascending order based on that item. click it
  again, and it's sorted in descending order. this would be really nice
  for seeing which hosts haven't been backed up in a while.

- include a column for backup time, so I can look at the host summary and
  get some idea how long a backup will take, without having to look into
  the individual host's summary. -- perhaps a 'summary time' as well, so I
  know how close to 24 hours it's taking me to back up all my machines. :)

- total disk space used. I know this is in the 'Status' screen, but that's
  the only thing I really look at on that screen. I spend most of my time
  looking at the Host Summary, so I'd like to have that data there as
  well. :)

just a few small UI suggestions. tell me what you think. :)

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com
From: Tony N. <tn...@st...> - 2004-07-22 15:15:38
Quoting Craig Barratt <cba...@us...>:

> Tony Nelson writes:
>
> > 1) Multiple Data Segments, IE: /dev/hdX, mounted at
> > /usr/local/backuppc/data1, /dev/hdY, mounted as
> > /usr/local/backuppc/data2, and the ability to specify which data
> > storage area a machine gets written to.
>
> I thought about this as a feature, but it would require the pool to be
> duplicated too. Surely an LVM would be the best choice?

I honestly haven't investigated the LVM features available in Linux at
this point. I guess I'll have to go RTFM. I don't actually need this
feature as my pool is 350G/1.2T.. but who knows what future needs may be.

> > 2) (Non-Trivial) With all my remote offices, I'd like to be able to
> > config my primary server in NY to send requests (BackupPC to BackupPC
> > probably) to enable the primary web server to display status/logs/etc
> > from the remotes.
>
> At a minimum, can't you just point your NY browser at the remote
> apache, assuming you have a secure connection?

That is how it works now. Ideally, to train my staff, I would prefer they
all just go to http://ny-backuppc and, in the menu, be able to select
remote servers to view the status of.. maybe they just redirect under the
covers and I can set up auto-magic authentication.. something to keep my
folks from having to log into 9 different instances every day to verify
backups ran successfully.. that's really all I'm looking for. Also, I
guess some type of consolidated report wouldn't be bad either.. but I
haven't thought it all the way through.. of course, BackupPC doesn't
*need* these features.. which is why I put them in a "wishlist".

Tony
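One low-tech way to get a single entry point without new BackupPC features
is a reverse proxy on the central web server. A minimal Apache sketch --
office1.example.com and the /backuppc-office1/ prefix are placeholders,
and it assumes the stock BackupPC_Admin CGI location on the remote:

    # Hypothetical httpd.conf fragment on the NY server; needs mod_proxy
    # (and mod_proxy_http on Apache 2).  Repeat per remote office.
    ProxyRequests Off
    ProxyPass        /backuppc-office1/ http://office1.example.com/cgi-bin/BackupPC_Admin/
    ProxyPassReverse /backuppc-office1/ http://office1.example.com/cgi-bin/BackupPC_Admin/

Pairing each prefix with the usual Apache auth directives keeps staff to
one login instead of nine, though it is still nine status pages rather
than a consolidated report.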
From: Carl W. S. <ch...@re...> - 2004-07-22 15:09:33
On 07/22 12:40, Craig Barratt wrote:
> The only choices with multiple physical drives are using a
> raid or LVM setup, or running two instances of BackupPC.

I'm using LVM quite happily on my backup server, combining a mirrored pair
of 120GB drives and another pair of 160GB drives. (this just happened to
be the best arrangement I could build out of parts on hand; it's not what
I'd choose deliberately, but it isn't bad.)

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com
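For reference, an arrangement like Carl's can be put together roughly as
follows. Device names are examples only (a mistake here destroys data),
and this assumes Linux software RAID via mdadm plus LVM2:

    # Two RAID-1 pairs (substitute your real devices):
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdc1
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hde1 /dev/hdg1

    # Pool both mirrors into one volume group and carve a single LV:
    pvcreate /dev/md0 /dev/md1
    vgcreate backupvg /dev/md0 /dev/md1
    lvcreate -l 100%FREE -n pool backupvg

    # One filesystem, so BackupPC's hardlink pooling keeps working:
    mke2fs -j /dev/backupvg/pool
    mount /dev/backupvg/pool /data/BackupPC

The reason to end up with a single filesystem is the hardlink constraint
Craig describes further down this page: the pool cannot span filesystems.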
From: Carl W. S. <ch...@re...> - 2004-07-22 14:24:39
On 07/22 12:28, Craig Barratt wrote:
> Any failed full backup (not incremental) that has already saved some
> files should be kept as a partial. A failed incremental does not get
> saved.

ah, that would be it. it was almost certainly an incremental backup. there
were definitely some files transferred tho; 2.8GB of them under the new/
directory for that machine.

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com
From: Oystein V. <oys...@ti...> - 2004-07-22 13:49:49
* [Les Mikesell]

> On Thu, 2004-07-22 at 07:00, Sam Przyswa wrote:
>>
>> The question is where/how to get the pass phrase to backup/restore files
>
> The point is that it is impossible to do this in a way that
> a person with root privileges can't intercept by replacing
> the executable with a trojan or reading the raw device
> where the pass phrase is submitted.

With the theoretical modified crypto-rsyncd running on the clients, root
on the backuppc server is none the wiser, as he can only see the encrypted
files. The only bad thing he could do would be to restore older versions
of a user's files (which is bad enough), but he couldn't ever gain access
to the actual data in the files. The crypto keys are stored on the backup
clients, so root at the backuppc server can't get at them, either.

Just remember to back up the crypto keys in their unencrypted form to
separate media and store them in a safe place (say, CD-Rs in a secure
storage locker rented at the local bank). If you don't do this, you'll be
hard put to ever restore anything to a computer where the hard drive
dies :)

Anyway, there is no such modified rsyncd, so until someone decides to sit
down and make one, this is a pretty useless thought experiment. Meanwhile
I'm happy with the system of having the backuppc server placed in a secure
server room with only trusted people being given root access.. :)

Øystein
-- 
ssh -c rot13 otherhost
From: Les M. <le...@fu...> - 2004-07-22 12:36:14
On Thu, 2004-07-22 at 07:00, Sam Przyswa wrote:
> > This statement is true, out of context, for any Unix setup, ever. If
> > root wants to know something, root WILL know it. If you don't trust
> > root, you're a moron, especially if you ARE root.
>
> Sure but if the files are encrypted and you don't have the pass phrase
> even root can't read it.
>
> The question is where/how to get the pass phrase to backup/restore files

The point is that it is impossible to do this in a way that a person with
root privileges can't intercept, by replacing the executable with a trojan
or reading the raw device where the pass phrase is submitted. He can also
replace all of the executables that you might use to verify that these
changes had not been done with ones that will show whatever he wants.

---
Les Mikesell
le...@fu...
From: Sam P. <sa...@ar...> - 2004-07-22 12:11:09
On Wed 21/07/2004 at 14:04, Oystein Viggen wrote:
> * [Sam Przyswa]
>
> > Yes it is a solution but I have to mount and unmount the entire
> > encrypted filesystem or have an encrypted partition for each
> > machine/customer and this partition must be mounted all day, so a
> > user with root privileges can read the files.
>
> If you intend to give your customers root access to the backuppc server,
> you need to have one server for each customer. Server side encryption
> simply won't help you.
>
> Client side encryption is difficult, though theoretically possible.
> If you use rsyncssh and/or rsyncd backups, you could deploy a specially
> modified rsync on all your backup clients. This variant of rsync would
> encrypt every file on the fly, and the actual rsyncing would be of
> encrypted files. In the case of rsyncssh, you would need to set up ssh
> to only allow access to that particular modified rsync program when
> logging in with the ssh-key stored on the backuppc server. This version
> of rsync would of course also only accept properly encrypted data as
> input for restores.

In this option rsync will write/read the files encrypted on the server
machine. It's a good idea; rewriting rsync is another question :-) but I
think it would be the right solution.

Thanks.

Sam.
From: Sam P. <sa...@ar...> - 2004-07-22 12:00:32
On Thu 22/07/2004 at 05:28, David Masover wrote:
> Sam Przyswa wrote:
>
> | user with root privileges can read the files.
>
> This statement is true, out of context, for any Unix setup, ever. If
> root wants to know something, root WILL know it. If you don't trust
> root, you're a moron, especially if you ARE root.

Sure, but if the files are encrypted and you don't have the pass phrase,
even root can't read them.

The question is where/how to get the pass phrase to back up/restore files.

Sam.
From: Craig B. <cba...@us...> - 2004-07-22 07:41:08
Justin Guenther writes:

> > 1) Multiple Data Segments, IE: /dev/hdX, mounted at
> > /usr/local/backuppc/data1, /dev/hdY, mounted as
> > /usr/local/backuppc/data2, and the ability to specify which data
> > storage area a machine gets written to.
>
> Would there be any reason not to create /usr/local/backuppc/pc1, pc2
> etc., mount your drives to each, and in each of those move the client
> dirs under /usr/local/backuppc/pc, then create symbolic links to the
> moved dirs in the original /usr/local/backuppc/pc dir?
>
> so /usr/local/backuppc/pc/client1 -> /usr/local/backuppc/pc1/client1
> (and /usr/local/backuppc/pc1 is the mountpoint for /dev/hdX)

Unfortunately the hardlinks used in the pool don't work across file
systems. A symbolic link won't help. The only choices with multiple
physical drives are using a raid or LVM setup, or running two instances of
BackupPC.

Craig
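The constraint is easy to verify by hand: the link(2) call behind ln fails
with EXDEV when source and target are on different filesystems. In the
sketch below, /data1 and /data2 stand in for two separate mount points,
and the exact error wording varies with the ln version:

    $ touch /data1/afile
    $ ln /data1/afile /data2/afile
    ln: creating hard link `/data2/afile' to `/data1/afile': Invalid cross-device link

A symlink "succeeds" at the same point, but it is a new inode pointing at
a name, not a second link to the same data, which is why it cannot stand
in for pooling.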
From: Craig B. <cba...@us...> - 2004-07-22 07:39:07
Tony Nelson writes:

> I know you've seen this a few times before.. but..
>
> My hat's off to you...
>
> Thank you for a great product.. you have replaced an aging system for us
> that only "marginally" worked with a piece of software that is very
> reliable.. has very few problems.. and is HIGHLY configurable..

Thanks for the feedback!

> 1) Multiple Data Segments, IE: /dev/hdX, mounted at
> /usr/local/backuppc/data1, /dev/hdY, mounted as
> /usr/local/backuppc/data2, and the ability to specify which data storage
> area a machine gets written to.

I thought about this as a feature, but it would require the pool to be
duplicated too. Surely an LVM would be the best choice?

> 2) (Non-Trivial) With all my remote offices, I'd like to be able to
> config my primary server in NY to send requests (BackupPC to BackupPC
> probably) to enable the primary web server to display status/logs/etc
> from the remotes.

At a minimum, can't you just point your NY browser at the remote apache,
assuming you have a secure connection?

Craig
From: Craig B. <cba...@us...> - 2004-07-22 07:29:34
Carl Wilhelm Soderstrom writes:

> Now that there's a debian package for backuppc 2.1.0 (thanks!), I've
> re-installed our backup server to debian and backuppc-2.1.0
>
> the save-partial-backups feature was one of the improvements I was
> looking forward to; but from the one test I've done, I see that it looks
> like deliberately canceling a backup causes the files downloaded so far
> to be deleted, rather than saved as I expected.
>
> is this deliberate for some reason, or an oversight, or something that
> just hasn't been dealt with yet?

Any failed full backup (not incremental) that has already saved some files
should be kept as a partial. A failed incremental does not get saved.

Are you canceling an incremental or a full? Are you sure some files are
already present (check the new directory)? With rsync, some time might
elapse before the first file is transferred.

Craig
From: Craig B. <cba...@us...> - 2004-07-22 07:27:02
Tony Nelson writes:

> It appears that my little patch had no effect on the error messages.
> If I find some time this afternoon, I will take another look and see
> if I can find the trouble spot..
>
> I'm pretty sure this error is specific to Perl 5.8.4.

This is strange. I would have guessed your fix would have solved the
problem. It would be great if you could understand what is going wrong so
we can work out a fix.

To test, you could just run:

    su backuppc
    bin/BackupPC_nightly 0 15

while BackupPC is not running.

File::Find certainly changed between 5.8.2 and 5.8.5. I upgraded perl to
5.8.5 from 5.8.3 and I still don't see the error messages. I didn't try
5.8.4.

Craig
From: Justin G. <jgu...@gm...> - 2004-07-22 06:41:48
> 1) Multiple Data Segments, IE: /dev/hdX, mounted at
> /usr/local/backuppc/data1, /dev/hdY, mounted as
> /usr/local/backuppc/data2, and the ability to specify which data storage
> area a machine gets written to.

Would there be any reason not to create /usr/local/backuppc/pc1, pc2 etc.,
mount your drives to each, and in each of those move the client dirs under
/usr/local/backuppc/pc, then create symbolic links to the moved dirs in
the original /usr/local/backuppc/pc dir?

so /usr/local/backuppc/pc/client1 -> /usr/local/backuppc/pc1/client1
(and /usr/local/backuppc/pc1 is the mountpoint for /dev/hdX)

> 2) (Non-Trivial) With all my remote offices, I'd like to be able to
> config my primary server in NY to send requests (BackupPC to BackupPC
> probably) to enable the primary web server to display status/logs/etc
> from the remotes.

This is covered in the manual (v2.1.0): you need to edit the config.pl
file and change the $Conf{ServerPort} variable to a port number > 1024
(this port is used for communication with the BackupPC_Admin script on the
Apache server), then change the $Conf{ServerMesgSecret} variable to a
string that's not guessable. (details in the manual)

-- 
Justin Guenther
IT Analyst
CrownAg International Inc.
250 Henderson Drive
Regina, SK, Canada S4N 5P7
Tel: (306) 522-8111
Email: jus...@cr...
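For concreteness, the relevant config.pl fragment looks roughly like this;
the port number and secret below are placeholders, not recommended values:

    # Set on each remote server; see the BackupPC manual for details.
    $Conf{ServerPort}       = 10987;    # any unused port above 1024
    $Conf{ServerMesgSecret} = 'replace-with-a-long-random-string';

Both variables are documented in the manual; the CGI script uses the port
and shared secret to talk to each server's daemon.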
From: Tony N. <tn...@st...> - 2004-07-22 04:32:42
Craig,

I know you've seen this a few times before.. but..

My hat's off to you...

Thank you for a great product.. you have replaced an aging system for us
that only "marginally" worked with a piece of software that is very
reliable.. has very few problems.. and is HIGHLY configurable..

For others out there, our total investment in hardware was $6,600 for a
Relion 430 from Penguin Computing... Dual 2.66GHz, 2GB RAM, 40GB system
drive, 1.2TB IDE-RAID for BackupPC data. We've taken some old (and I mean
OLD) P200 machines w/ 128M RAM and 250G drives and thrown them in our
remote offices.

This setup has replaced our entire backup scheme. I'm still working on
dumping to tape, but that's not really BackupPC's problem.

Thank You, Thank You, Thank You.

Tony

PS. WishList

1) Multiple Data Segments, IE: /dev/hdX, mounted at
/usr/local/backuppc/data1, /dev/hdY, mounted as /usr/local/backuppc/data2,
and the ability to specify which data storage area a machine gets written
to.

2) (Non-Trivial) With all my remote offices, I'd like to be able to config
my primary server in NY to send requests (BackupPC to BackupPC probably)
to enable the primary web server to display status/logs/etc from the
remotes.

-- 
Tony Nelson
Director of IT Operations
Starpoint Solutions LLC
115 Broadway, 2nd Fl
New York, NY 10006
From: David M. <ni...@sl...> - 2004-07-22 03:29:01
Sam Przyswa wrote:

| user with root privileges can read the files.

This statement is true, out of context, for any Unix setup, ever. If root
wants to know something, root WILL know it. If you don't trust root,
you're a moron, especially if you ARE root.
From: Bryant, P. -A. <Phi...@it...> - 2004-07-21 20:05:06
Running version 2.1.0

I've been having problems fully understanding the blackout settings, and
consequently continue to have machines that should only be backed up at
night backed up during the day after the weekends, hence taking until
mid-week to reset the blackout period once again for these machines.

My current settings are:

    $Conf{BlackoutPeriods} = [
        {
            hourBegin => 7.0,
            hourEnd   => 19.5,
            weekDays  => [1, 2, 3, 4, 5],
        },
    ];

and

    $Conf{BlackoutBadPingLimit} = 8;
    $Conf{BlackoutGoodCnt}      = 3;

If I set the good count too low, say at 1, the machines that were erratic
in being on the network weren't getting backed up consistently.

I've got laptops that need to be backed up during the day and PCs that
need to be backed up at night. I've got individual config files for the
laptops; I just want to keep the PCs only going at night after the weekend
ends.

Phillip M. Bryant
ITT Industries, Advanced Engineering and Sciences
Network Administrator
Albuquerque, NM 87120
Ph 505-889-7016
Cell 505-385-8668
MCSE 2000, NT 4.0 MCP+I
From: Carl W. S. <ch...@re...> - 2004-07-21 18:56:31
Now that there's a debian package for backuppc 2.1.0 (thanks!), I've
re-installed our backup server with debian and backuppc-2.1.0.

the save-partial-backups feature was one of the improvements I was looking
forward to; but from the one test I've done, it looks like deliberately
canceling a backup causes the files downloaded so far to be deleted,
rather than saved as I expected.

is this deliberate for some reason, or an oversight, or something that
just hasn't been dealt with yet?

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com
From: Carl W. S. <ch...@re...> - 2004-07-21 16:48:09
On 07/21 10:36, Todd Curry wrote:
> Virtually all of the WinXP boxes I'm backing up will fail (for various
> reasons) after 2.5 hours. Local boxes (on my net) will go longer, to
> roughly 3.5 hours, but the boxes I'm backing up over the net tend to
> stop at between 2 hours and 2.5 hours.

what is your timeout set to? try setting this:

    # timeout in seconds.
    # this sometimes incorrectly means the duration of the backup.
    # set to 12 hours, and see if backups no longer fail with 'ALRM'
    $Conf{ClientTimeout} = 43200;

in the per-machine configuration file, and see if it makes a difference.

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com
From: Todd C. <to...@cu...> - 2004-07-21 15:36:38
Gang,

Virtually all of the WinXP boxes I'm backing up will fail (for various
reasons) after 2.5 hours. Local boxes (on my net) will go longer, to
roughly 3.5 hours, but the boxes I'm backing up over the net tend to stop
at between 2 hours and 2.5 hours.

Here's the log file from one of the more successful backups (over the
net). Two backup runs went much longer, proving that this is not a
"barrier" per se, but more of a tendency. The average backup run from the
file below is 163 minutes, but as you can see, there are lots of
150-minute runs.

Any suggestions of things to tweak to try to get these full backup runs
(initial backup runs) to go longer? What is the deal with these
signal=ALRM errors?

Thanks,
Todd

2004-07-16 14:00:03 full backup started for directory docs
2004-07-16 14:00:06 Got fatal error during xfer (auth failed on module docs)
2004-07-16 14:00:11 Backup aborted (auth failed on module docs)
2004-07-16 14:02:45 full backup started for directory docs
2004-07-16 15:31:58 Got fatal error during xfer (Child exited prematurely)
2004-07-16 15:32:03 Backup aborted (Child exited prematurely)
2004-07-16 15:32:03 Saved partial dump 0   [92 minutes]
2004-07-16 16:00:02 full backup started for directory docs
2004-07-16 18:13:20 Aborting backup up after signal ALRM
2004-07-16 18:13:21 Got fatal error during xfer (aborted by signal=ALRM)
2004-07-16 18:13:21 Saved partial dump 0   [133 minutes]
2004-07-16 19:00:01 full backup started for directory docs
2004-07-16 21:18:37 Aborting backup up after signal ALRM
2004-07-16 21:18:38 Got fatal error during xfer (aborted by signal=ALRM)
2004-07-16 21:18:38 Saved partial dump 0   [139 minutes]
2004-07-16 22:00:02 full backup started for directory docs
2004-07-17 00:59:46 Aborting backup up after signal ALRM
2004-07-17 00:59:48 Got fatal error during xfer (aborted by signal=ALRM)
2004-07-17 00:59:48 Saved partial dump 0   [180 minutes]
2004-07-17 01:02:04 full backup started for directory docs
2004-07-17 04:53:32 Aborting backup up after signal ALRM
2004-07-17 04:53:34 Got fatal error during xfer (aborted by signal=ALRM)
2004-07-17 04:53:35 Saved partial dump 0   [231 minutes -- great run!]
2004-07-17 05:00:03 full backup started for directory docs
2004-07-17 10:33:00 Aborting backup up after signal ALRM
2004-07-17 10:33:01 Got fatal error during xfer (aborted by signal=ALRM)
2004-07-17 10:33:02 Saved partial dump 0   [333 minutes -- if only they could all be like this! read on...]
2004-07-17 11:00:03 full backup started for directory docs
2004-07-17 13:29:43 Aborting backup up after signal ALRM
2004-07-17 13:29:44 Got fatal error during xfer (aborted by signal=ALRM)
2004-07-17 13:29:44 Saved partial dump 0   [150 minutes]
2004-07-17 14:00:02 full backup started for directory docs
2004-07-17 16:28:45 Aborting backup up after signal ALRM
2004-07-17 16:28:47 Got fatal error during xfer (aborted by signal=ALRM)
2004-07-17 16:28:47 Saved partial dump 0   [148 minutes]
2004-07-17 17:00:02 full backup started for directory docs
2004-07-17 19:30:17 Aborting backup up after signal ALRM
2004-07-17 19:30:19 Got fatal error during xfer (aborted by signal=ALRM)
2004-07-17 19:30:19 Saved partial dump 0   [150 minutes]
2004-07-17 20:00:03 full backup started for directory docs
2004-07-17 22:28:39 Aborting backup up after signal ALRM
2004-07-17 22:28:46 Got fatal error during xfer (aborted by signal=ALRM)
2004-07-17 22:28:47 Saved partial dump 0   [148 minutes]
2004-07-17 23:00:15 full backup started for directory docs
2004-07-18 01:28:44 Aborting backup up after signal ALRM
2004-07-18 01:28:46 Got fatal error during xfer (aborted by signal=ALRM)
2004-07-18 01:28:46 Saved partial dump 0   [148 minutes]
2004-07-18 05:00:03 full backup started for directory docs
2004-07-18 07:29:06 Aborting backup up after signal ALRM
2004-07-18 07:29:10 Got fatal error during xfer (aborted by signal=ALRM)
2004-07-18 07:29:10 Saved partial dump 0   [149 minutes]
2004-07-18 08:00:02 full backup started for directory docs
2004-07-18 10:32:43 Aborting backup up after signal ALRM
2004-07-18 10:32:45 Got fatal error during xfer (aborted by signal=ALRM)
2004-07-18 10:32:46 Saved partial dump 0   [152 minutes]
2004-07-18 11:00:02 full backup started for directory docs
2004-07-18 13:32:12 Aborting backup up after signal ALRM
2004-07-18 13:32:13 Got fatal error during xfer (aborted by signal=ALRM)
2004-07-18 13:32:13 Saved partial dump 0   [152 minutes]
2004-07-18 14:00:01 full backup started for directory docs
2004-07-18 15:28:08 full backup 0 complete, 45503 files, 3032020927 bytes, 101 xferErrs (0 bad files, 0 bad shares, 101 other)
2004-07-19 14:00:01 incr backup started back to 2004-07-18 13:00:01 for directory docs
2004-07-19 16:14:50 Aborting backup up after signal ALRM
2004-07-19 16:14:54 Got fatal error during xfer (aborted by signal=ALRM)
2004-07-19 17:00:02 incr backup started back to 2004-07-18 13:00:01 for directory docs
2004-07-19 17:23:14 incr backup 1 complete, 873 files, 249110058 bytes, 12 xferErrs (0 bad files, 0 bad shares, 12 other)
2004-07-20 17:00:02 incr backup started back to 2004-07-18 13:00:01 for directory docs
2004-07-20 17:27:34 incr backup 2 complete, 1234 files, 256266190 bytes, 12 xferErrs (0 bad files, 0 bad shares, 12 other)
From: Tony N. <tn...@st...> - 2004-07-21 14:06:43
It appears that my little patch had no effect on the error messages. If I
find some time this afternoon, I will take another look and see if I can
find the trouble spot..

I'm pretty sure this error is specific to Perl 5.8.4.

-- 
Tony Nelson
Director of IT Operations
Starpoint Solutions LLC
115 Broadway, 2nd Fl
New York, NY 10006

Quoting Tony Nelson <tn...@st...>:

> I believe the error above from BackupPC_nightly is caused by a change to
> the API in File::Find. Since all of my backups are in cpool, my pool
> directory is completely empty, and trying to do a File::Find(undef) is
> what is causing the problem. I've patched my BackupPC_nightly to skip
> directories that don't exist and will report success/failure tomorrow.
>
> Simple diff:
>
> *** BackupPC_nightly.orig 2004-07-20 17:27:40.000000000 -0400
> --- BackupPC_nightly      2004-07-20 17:29:39.000000000 -0400
> ***************
> *** 148,153 ****
> --- 148,154 ----
>       $fileLinkMax = 0;
>       $fileCntRename = 0;
>       %FixList = ();
> +     next if ( ! -d "$TopDir/$pool/$dir" );
>       find({wanted => \&GetPoolStats}, "$TopDir/$pool/$dir");
>       my $kb = $blkCnt / 2;
>       my $kbRm = $blkCntRm / 2;
From: Oystein V. <oys...@ti...> - 2004-07-21 12:04:49
* [Sam Przyswa]

> Yes it is a solution but I have to mount and unmount the entire
> encrypted filesystem or have an encrypted partition for each
> machine/customer and this partition must be mounted all day, so a user
> with root privileges can read the files.

If you intend to give your customers root access to the backuppc server,
you need to have one server for each customer. Server side encryption
simply won't help you.

Client side encryption is difficult, though theoretically possible. If you
use rsyncssh and/or rsyncd backups, you could deploy a specially modified
rsync on all your backup clients. This variant of rsync would encrypt
every file on the fly, and the actual rsyncing would be of encrypted
files. In the case of rsyncssh, you would need to set up ssh to only allow
access to that particular modified rsync program when logging in with the
ssh-key stored on the backuppc server. This version of rsync would of
course also only accept properly encrypted data as input for restores.

Alas, there is no such version of rsync, so you'd need to create it
yourself.

Øystein
-- 
If it ain't broke, don't break it.
From: Sam P. <sa...@ar...> - 2004-07-21 11:21:41
On Fri 16/07/2004 at 09:59, Oystein Viggen wrote:
> * [Sam Przyswa]
>
> > But some customers ask for an external backup service and they want
> > encrypted backup for their external storage, I try to find a solution
> > with BackupPC.
>
> You ssh-ing in to enter the password to mount the encrypted filesystem
> after every boot might satisfy your customers. Especially if you
> describe to them the elaborate tricks you will be using to make sure the
> computer's security has not been compromised before you enter the
> password.

Yes, it is a solution, but I would have to mount and unmount the entire
encrypted filesystem, or have an encrypted partition for each
machine/customer, and this partition must be mounted all day, so a user
with root privileges can read the files.

Sam.
From: Sam P. <sa...@ar...> - 2004-07-20 23:00:10
On Tue 20/07/2004 at 20:35, Les Mikesell wrote:
> On Tue, 2004-07-20 at 11:27, Sam Przyswa wrote:
>
> > 1) In rsyncd mode, is the file transfer compressed or not?
> >
> > 2) Does the $Conf{RsyncArgs} var apply only to rsync, or to rsyncd
> > too?
> >
> > If the answer to both questions is NO:
> >
> > 3) How do I manage the rsyncd transfer parameters?
>
> So far, backuppc's perl version of the rsync protocol does not
> include compression. If the client is unix/linux you can
> run rsync over ssh and include -C in the ssh part of the
> command:
>
>     $Conf{RsyncClientCmd} = '$sshPath -C -l root $host $rsyncPath $argList';
>
> I've never gotten rsync over ssh to work reliably with a
> Windows target, though. In the places I need it, I park a
> cheap Linux box at the same site, map its drive from the Windows
> servers and schedule a local rsync run from the windows drives
> to the Linux share, then let backuppc hit the Linux copy via
> ssh.

No, it's only in rsyncd mode, with a Linux box to back up. I tried adding
"--compress" in $Conf{RsyncRestoreArgs}, but it doesn't seem to work. At
this time we don't want to use rsync over ssh as root to back up
Unix/Linux machines.

Sam.