From: Stelian P. <st...@po...> - 2008-01-04 13:15:20

Hi,

Sorry for the delays in answering...

On Wed, Dec 19, 2007 at 02:19:32PM -0800, Kenneth Porter wrote:
> At the end of my verify (restore -C) I see this:
>
> /sbin/restore: fopen: File too large
> cannot open mode file /tmp//rstmode1197954005-ohITrC
> directory mode, owner, and times not set
>
> Is this something to worry about?

Yes. This error is probably caused by the fact that your dump is huge and the mode file has exceeded 2 GB in size. As the error says, since restore cannot open the mode file, it will not be able to set the mode, owner and time information on any of the restored files.

If you modify restore/dirs.c and replace the two occurrences of:

  mf = fopen(modefile, "r");

with:

  mf = FOPEN(modefile, "r");

the problem should hopefully be fixed.

Stelian.
--
Stelian Pop <st...@po...>
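
One way to apply that substitution without editing by hand (a sketch, assuming you are sitting in the top of the dump source tree; sed -i.bak keeps a backup of the original file):

  # swap fopen for the large-file-aware FOPEN in restore/dirs.c
  sed -i.bak 's/mf = fopen(modefile, "r")/mf = FOPEN(modefile, "r")/g' restore/dirs.c
  grep -n 'FOPEN(modefile' restore/dirs.c   # verify both occurrences changed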
From: Ernest B. <Ern...@ax...> - 2008-01-04 10:50:51

Hi,

I would like to index all my dumps and would therefore like to read the contents of a dump file somehow. I can extract it, I know, but my dumps are quite big (500 GB), so writing them somewhere might be a problem. I have found that "restore tvf dumpfile.dmp" gives me a file listing, but it is not sufficient, as there are NO file sizes and no dates either. How could I get this information from the dump?

thanks
--
Ing <http://www.beinrohr.sk/ing.php>, RHCE <http://www.beinrohr.sk/rhce.php>, LPIC <http://www.beinrohr.sk/lpic.php>
+421-2--6241-0360, +421-903--482-603
icq:28153343, skype:oernii-work, jabber:oe...@ja...

Do not meddle in the affairs of Wizards, for they are subtle and quick to anger. JRRT
From: Kenneth P. <sh...@se...> - 2008-01-03 13:54:12

--On Wednesday, January 02, 2008 3:43 PM -0800 Todd and Margo Chester <Tod...@ve...> wrote:

> Yippee! I am not crazy. Found this bug and a sister bug over in
> kernel.org.
>
> Dump of ext3 runs very slowly:
> http://bugzilla.kernel.org/show_bug.cgi?id=8636
>
> Unusable system (ie slow) when copying large files:
> http://bugzilla.kernel.org/show_bug.cgi?id=7372

Nice work. So it looks like one could work around the slowness by temporarily changing the I/O scheduler for the duration of the dump with a /proc setting?
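
Roughly what that workaround could look like (a sketch, not taken from the thread: the device name sda and the backup path are assumptions, and on kernels of this vintage the knob lives under /sys rather than /proc; available scheduler names vary by kernel build):

  # show the available schedulers; the active one is in brackets
  cat /sys/block/sda/queue/scheduler
  # switch to the deadline scheduler for the duration of the dump
  echo deadline > /sys/block/sda/queue/scheduler
  dump -0u -f /mnt/backup/root.dump /
  # switch back to the previous default afterwards
  echo cfq > /sys/block/sda/queue/scheduler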
From: Todd a. M. C. <Tod...@ve...> - 2008-01-02 23:43:44

Todd and Margo Chester wrote:
> 2.6.18-8.1.8.el5
>
> Hi All,
>
> After I wiped my hard drives clean of CentOS 4.4
> (RHEL 4.4 clone) and installed CentOS 5 (RHEL 5),
> I noticed that everything was slower. Especially
> dump, which was 3.5 times slower.
>
> So, I did some tests from my install CD/DVDs in "rescue mode"
> and the System Rescue CD from http://www.sysresccd.org.
> I used the same backup script (dump) that I use in normal
> boot mode. I stopped the backup after dump gave the first
> estimate of time:
>
> "dump" CentOS 4.4 (kernel 2.6.9-42.EL) install CD in rescue mode:
> transfer rate 8468 KB/s, estimated finish 2:09
>
> "dump" CentOS 5 (kernel 2.6.18) install DVD in rescue mode:
> transfer rate 4422 KB/s, estimated finish 4:10
>
> "dump" System Rescue CD (kernel 2.6.22.9):
> transfer rate 4370 KB/s, estimated finish 4:15
>
> Has anyone else seen this? If so, were you able to fix it?
>
> -T

Yippee! I am not crazy. Found this bug and a sister bug over in kernel.org.

Dump of ext3 runs very slowly:
http://bugzilla.kernel.org/show_bug.cgi?id=8636

Unusable system (ie slow) when copying large files:
http://bugzilla.kernel.org/show_bug.cgi?id=7372

-T
From: Kenneth P. <sh...@se...> - 2008-01-02 14:08:26

--On Wednesday, January 02, 2008 7:54 AM -0500 Scott Ehrlich <sc...@MI...> wrote:

> So my next question would be if I set up my filesystem withOUT LVM, and,
> considering it is an active machine, is there any easy way to incorporate
> LVM without having to redo everything?

I don't think so, but you could prepare a mirrored LVM filesystem (mirrored by rsync, for example), and when you next do serious maintenance that requires downtime, you can swap the mirror for the old filesystem.
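
That mirroring step might look something like this (a sketch with hypothetical paths; -a preserves permissions and ownership, -H preserves hard links, -x keeps rsync on one filesystem):

  # keep the LVM copy in sync with the live filesystem until the cut-over window
  rsync -aHx --delete /home/ /mnt/lvm-mirror/home/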
From: Scott E. <sc...@MI...> - 2008-01-02 13:15:58

On Wed, 2 Jan 2008, Kenneth Porter wrote:

> Just saw your post on the CentOS list about this:
>
> <http://lists.centos.org/pipermail/centos/2007-December/091795.html>
>
> Another dump user pointed out to me the value of using LVM snapshots to dump
> a "live" filesystem. You need a little extra space on the disk to hold the
> filesystem changes while the snapshot is in effect (ie. for the duration of
> the dump, and perhaps the verify if you do that).
>
> ------------ Forwarded Message ------------
> Date: Thursday, December 20, 2007 12:31 AM +0000
> From: "Keith G. Robertson-Turner" <dum...@ge...>
> To: Kenneth Porter <sh...@se...>
> Subject: Re: [Dump-users] Missing files on verify, hard link issue?
>
> Verily I say unto thee, that Kenneth Porter spake thusly:
>> --On Friday, December 14, 2007 2:16 PM -0800 Kenneth Porter
>> <sh...@se...> wrote:
>>> On my first attempt I still ended up with a couple of missing files
>>> in the pool.
>
> I strongly suggest you investigate LVM, it'll save you a lot of hassle
> trying to dump a live filesystem.
>
> Here's a condensed version of my server's backup script, as an example:
>
>   level=$(expr $(date +%u) - 1)
>   lvcreate -l100%FREE -s -n var-snapshot /dev/cumulous/var
>   mount -t ext3 /dev/cumulous/var-snapshot /mnt/snapshots/var
>   dump -${level}u -z -E /mnt/WD_Passport/sky.backup/var.exceptions \
>     -f /mnt/WD_Passport/sky.backup/${level}/var.dump /mnt/snapshots/var
>   umount /mnt/snapshots/var
>   lvremove -f /dev/cumulous/var-snapshot
>
> It really is that simple. All you have to do is format your drive(s)
> with LVM, and remember to leave some unused slack at the end (I leave
> about 1GB, which has always been enough for that very busy server).
>
> --
> Regards,
> Keith G. Robertson-Turner
>
> ---------- End Forwarded Message ----------
>
> From your CentOS post:
>
>> Trying to adapt the knowledge to a tape library...
>>
>> /sbin/dump -0 -v -z2 -f /dev/nst0 /var/log
>> /sbin/dump -0u -v -z2 -f /dev/nst0 /home
>>
>> I have a cron job that dumps the results to /var/log/dump.log, and a
>> review of the log file claims all went well. Now for the restore...
>>
>> I just tried playing with different options of restore, but could not
>> successfully restore anything. I ensured I was in a scratch area so as
>> to hopefully not overwrite current files.
>
> What options did you try with restore? What tape positioning and library
> commands do you use? (I believe mt is the command used to move the tape to
> the desired dump file before issuing restore.)
>
> For testing, try the -C option (compare). I use that following every dump to
> make sure the data got to the tape. (I'm using an external USB drive now, but
> I used to use DAT tape on SCSI quite successfully.)

So my next question would be if I set up my filesystem withOUT LVM, and, considering it is an active machine, is there any easy way to incorporate LVM without having to redo everything?

Thanks.

Scott
From: Kenneth P. <sh...@se...> - 2008-01-02 12:17:22

Just saw your post on the CentOS list about this:

<http://lists.centos.org/pipermail/centos/2007-December/091795.html>

Another dump user pointed out to me the value of using LVM snapshots to dump a "live" filesystem. You need a little extra space on the disk to hold the filesystem changes while the snapshot is in effect (ie. for the duration of the dump, and perhaps the verify if you do that).

------------ Forwarded Message ------------
Date: Thursday, December 20, 2007 12:31 AM +0000
From: "Keith G. Robertson-Turner" <dum...@ge...>
To: Kenneth Porter <sh...@se...>
Subject: Re: [Dump-users] Missing files on verify, hard link issue?

Verily I say unto thee, that Kenneth Porter spake thusly:
> --On Friday, December 14, 2007 2:16 PM -0800 Kenneth Porter
> <sh...@se...> wrote:
>> On my first attempt I still ended up with a couple of missing files
>> in the pool.

I strongly suggest you investigate LVM, it'll save you a lot of hassle trying to dump a live filesystem.

Here's a condensed version of my server's backup script, as an example:

  level=$(expr $(date +%u) - 1)
  lvcreate -l100%FREE -s -n var-snapshot /dev/cumulous/var
  mount -t ext3 /dev/cumulous/var-snapshot /mnt/snapshots/var
  dump -${level}u -z -E /mnt/WD_Passport/sky.backup/var.exceptions \
    -f /mnt/WD_Passport/sky.backup/${level}/var.dump /mnt/snapshots/var
  umount /mnt/snapshots/var
  lvremove -f /dev/cumulous/var-snapshot

It really is that simple. All you have to do is format your drive(s) with LVM, and remember to leave some unused slack at the end (I leave about 1GB, which has always been enough for that very busy server).

--
Regards,
Keith G. Robertson-Turner

---------- End Forwarded Message ----------

From your CentOS post:

> Trying to adapt the knowledge to a tape library...
>
> /sbin/dump -0 -v -z2 -f /dev/nst0 /var/log
> /sbin/dump -0u -v -z2 -f /dev/nst0 /home
>
> I have a cron job that dumps the results to /var/log/dump.log, and a
> review of the log file claims all went well. Now for the restore...
>
> I just tried playing with different options of restore, but could not
> successfully restore anything. I ensured I was in a scratch area so as
> to hopefully not overwrite current files.

What options did you try with restore? What tape positioning and library commands do you use? (I believe mt is the command used to move the tape to the desired dump file before issuing restore.)

For testing, try the -C option (compare). I use that following every dump to make sure the data got to the tape. (I'm using an external USB drive now, but I used to use DAT tape on SCSI quite successfully.)
From: Scott E. <sc...@MI...> - 2007-12-30 23:06:41

I should have added this is on an out-of-box RHEL5 64-bit Server machine on an isolated LAN with no updates.

Thanks again.

Scott
From: Scott E. <sc...@MI...> - 2007-12-30 23:05:12

I have an Overland Arcvault 12 library with a full LTO3 magazine of 400/800 GB tapes. It is connected directly to the fileserver via a SCSI card/cable.

The two main directories I want to back up are /var/log, which is on one filesystem, and /home, which is on another. There are _currently_ no databases to worry about, but there may be active users logged in and active jobs running. We'll just take our chances with what can be backed up.

I've successfully run dump/restore to an external USB 1 TB hard drive as our backup device until the tape library arrived. Now, finding the appropriate dump and restore commands to throw into a script is the fun part. The first time will obviously produce a full backup of said directories. Subsequent ones will be incremental.

What should my dump lines look like for both full and subsequent incrementals? And, to test the backup, and in case I need to retrieve a file, what should a respective restore line look like?

Thanks much.

Scott
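
For what it's worth, a minimal sketch of such a script (the weekly level-0 / daily level-1 rotation and all paths are assumptions, not settings from this thread; -u records each run in /etc/dumpdates so incrementals know their baseline):

  # full (level 0) dumps of both filesystems, back to back on one tape
  mt -f /dev/nst0 rewind
  dump -0u -f /dev/nst0 /var/log
  dump -0u -f /dev/nst0 /home        # lands as the second file on the tape

  # later incrementals (level 1): load/select a different tape first,
  # then dump only what changed since the level 0
  dump -1u -f /dev/nst0 /var/log
  dump -1u -f /dev/nst0 /home

  # to test, or to retrieve a file from the /home dump (second file on tape):
  mt -f /dev/nst0 rewind
  mt -f /dev/nst0 fsf 1              # position past the first dump file
  restore -i -f /dev/nst0            # interactive browse/extract; -C compares instead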
From: Tony N. <ton...@ge...> - 2007-12-22 15:41:55

At 4:38 PM +0100 12/22/07, Peter Münster wrote:
> Hello,
>
> During the backup with dump (some minutes), the responsiveness of my
> desktop degrades heavily, because of the high rate dump is reading on the
> hard-disk.
> Is it possible to limit the io-rate of dump?

Perhaps ionice, using idle priority?

  # ionice -c3 dump -dumpoptions

--
TonyN.:' <mailto:ton...@ge...>
      ' <http://www.georgeanelson.com/>
From: Peter <pm...@fr...> - 2007-12-22 15:38:28

Hello,

During the backup with dump (some minutes), the responsiveness of my desktop degrades heavily, because of the high rate at which dump is reading from the hard disk. Is it possible to limit the I/O rate of dump?

Cheers,
Peter
--
http://pmrb.free.fr/contact/
From: Kenneth P. <sh...@se...> - 2007-12-20 22:09:25

Searching my folder for centos-users, I found that LogVol01 is the swap. So I'll still need to free up some space.

> For those unfamiliar with lvm commands, I recommend the Red Hat GUI
> "system-config-lvm" which is included with CentOS and Fedora as well.

Any idea which RPM that's in? Will it run from the console, and can it be used on a live system, or must it be used from the rescue CD? (My system is headless, and I do all admin remotely.)
From: Kenneth P. <sh...@se...> - 2007-12-20 22:04:52

--On Thursday, December 20, 2007 12:05 PM +0000 "Keith G. Robertson-Turner" <dum...@ge...> wrote:

> You can confirm with the "lvdisplay" command, which will
> display the properties of any/all logical volumes.

CentOS 5 left me with this setup:

  [root@segw2 sbin]# lvdisplay
  /dev/hda: open failed: No medium found
  --- Logical volume ---
  LV Name                /dev/VolGroup00/LogVol00
  VG Name                VolGroup00
  LV UUID                mw8rXz-9D8y-VT1e-0I12-7YVp-dxCi-ulKfap
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                460.28 GB
  Current LE             14729
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:0

  --- Logical volume ---
  LV Name                /dev/VolGroup00/LogVol01
  VG Name                VolGroup00
  LV UUID                o11DaQ-MFJD-7QbW-WQ7k-qV4x-GXna-jvcIkZ
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1.94 GB
  Current LE             62
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:1

I don't know what LogVol01 is, as it's not mounted. I'll check on the CentOS list to see what the thinking was.
From: Keith G. Robertson-T. <dum...@ge...> - 2007-12-20 13:27:17

Verily I say unto thee, that Kenneth Porter spake thusly:

> --On Thursday, December 20, 2007 12:31 AM +0000 "Keith G.
> Robertson-Turner" <dum...@ge...> wrote:
>> It really is that simple. All you have to do is format your
>> drive(s) with LVM, and remember to leave some unused slack at the
>> end (I leave about 1GB, which has always been enough for that very
>> busy server).
>
> My verify (restore) reports this:
>
> Level 0 dump of / on segw2.mpa.lan:/dev/mapper/VolGroup00-LogVol00
>
> Does that imply I'm already using LVM? (It's using the default
> partitioning with CentOS 5.)

Yes it does. You can confirm with the "lvdisplay" command, which will display the properties of any/all logical volumes. Then "mount" (no args) to show what filesystems are mounted. You should see what is mounted on /dev/mapper/{xxx}.

As you are already using LVM, you really should be using snapshots when backing up - it's so much less hassle than trying to back up a live system, and it pretty much guarantees consistent results.

For those unfamiliar with lvm commands, I recommend the Red Hat GUI "system-config-lvm", which is included with CentOS and Fedora as well. If it complains about unmounting the volumes first, then you can simply boot a CentOS live disc and do it from there instead (I know for a fact that system-config-lvm *is* included on their live discs, because I've used them in the past as my rescue discs). You will only need to do this *once*, to create some slack space.

Under "logical view" choose one of the volumes (you probably only have one) that's the biggest with the most available space, click on the "edit properties" button, change the drop-down selector that currently reads "extents" to something more readable like "gigabytes", move the slide bar down to leave about 1GB "remaining space", then click on OK.

Once you have some slack space at the end of the hard drive, you can use it to create a snapshot filesystem with:

  mkdir -p /mnt/snapshots
  lvcreate -l100%FREE -s -n LogVol00-snapshot /dev/VolGroup00/LogVol00
  mount -t ext3 /dev/VolGroup00/LogVol00-snapshot /mnt/snapshots

That then "freezes" the state of the LogVol00 filesystem as a "snapshot" on LogVol00-snapshot, and you can dump LogVol00-snapshot without fear of its contents changing. When you're finished you just unmount it, then:

  lvremove -f /dev/VolGroup00/LogVol00-snapshot

That should resolve your "disappearing files" problem.

One word of caution: I recommend that you don't use the machine heavily during the period that the snapshot is in use, since it will rapidly fill up and become invalidated (i.e. stop being a snapshot). If the normal background processes on your machine make more than 1GB of changes during the time it takes to dump the filesystem, then you'll need a bigger slack space at the end.

Like I said, on my very busy server, with things like squid and MySQL running, I never use more than about 250MB of the snapshot's available space during the dump. You'll need to experiment with the slack space a little, to see what suits you best. It can be any size you like. The bigger it is, the less likely you are to run out of space before the backup is complete, but obviously you lose usable drive space as well.

--
Regards,
Keith G. Robertson-Turner
From: Kenneth P. <sh...@se...> - 2007-12-19 22:19:26

At the end of my verify (restore -C) I see this:

  /sbin/restore: fopen: File too large
  cannot open mode file /tmp//rstmode1197954005-ohITrC
  directory mode, owner, and times not set

Is this something to worry about?

I'm running dump-0.4b41-2.fc6 on CentOS 5. I'm using -M with dump to dump to multiple 1 GB files on a Samba-mounted USB drive.

  $DUMP 0u -b 64 -Mf $DUMPSUBDIR/root/dump -Q $DUMPSUBDIR/root/qfa \
    -B 1000000 / -E ${EXCLUDE_FILE}
  /bin/mount -o remount,noatime /
  $RESTORE -C -l -L 10000 -b 64 -Mf $DUMPSUBDIR/root/dump
  /bin/mount -o remount,atime /
From: Kenneth P. <sh...@se...> - 2007-12-19 22:12:55

--On Friday, December 14, 2007 2:16 PM -0800 Kenneth Porter <sh...@se...> wrote:

> On my first attempt I still ended up with a couple of missing files in
> the pool. I'm suspecting that the service shutdown may have left an
> earlier run of the sweeper still going, but before I investigate
> further, I wanted to check here to see if there might be other reasons
> the files might go missing. That there were only 2 files (not zero or a
> bunch) is suspicious.

On a subsequent run I didn't see any missing files. I'll report back if I see any in the future.
From: Kenneth P. <sh...@se...> - 2007-12-14 22:16:52

I'm using BackupPC to back my Windows servers up to a directory on my Linux server, then dumping to an external USB HD. BPC stores multiple copies of the same file (such as the same DLL used on more than one Windows client) once in a "pool" directory and hard links to these pooled files from per-client directories. It runs a periodic task that sweeps the pool and removes files with only one link (ie. not present on any client).

If I let the sweep task run during a dump/verify (restore -C) sequence, I get a lot of missing files, of course. So I stop the BPC service before the dump/verify and resume it afterwards.

On my first attempt I still ended up with a couple of missing files in the pool. I'm suspecting that the service shutdown may have left an earlier run of the sweeper still going, but before I investigate further, I wanted to check here to see if there might be other reasons the files might go missing. That there were only 2 files (not zero or a bunch) is suspicious.
From: Stelian P. <st...@po...> - 2007-12-14 22:05:37

On Thursday, December 13, 2007 at 04:16 +0000, Keith G. Robertson-Turner wrote:

> Hello Stelian and anyone else reading.
>
> I hope the dictum "there's no such thing as a stupid question" is well
> accepted on this list, because this may be the mother of all stupid
> questions. Err..
>
> It's not absolutely clear from the manual, but does restore purge files
> that were deleted between incremental dumps?
>
> E.g. If file foo.bar exists, then I do a level 0 dump, then I delete
> foo.bar, then I do a level 1 dump ... if I then wipe the filesystem and
> restore the level 0 dump then the level 1 dump in sequence, will foo.bar
> be removed (like "rsync --delete")?
>
> The only info I could find was here:
>
> http://www.searchstorage.com.au/topics/article.asp?DocID=1267195

Nice article.

> The info there suggests that dump stores a map of inodes deleted since
> the last dump in an array called "usedinomap". Does restore actually use
> that info when restoring incremental dumps,

Yes it does.

> and if so - does it only do
> that if the "r" switch is used, as opposed to "x"?

It should work with either switch. The only difference between -r and -x is that -r is optimized for speed, since it knows that the entire dump will be restored. -x needs to check the pathnames for filtering, so it uses some slower algorithms.

> TIA.
>
> PS: dump is still the best backup program, even after all these years.
> Amazing, isn't it? [waves to Linus].

This must be because real men do real backups after all :)

Thanks,
Stelian.
--
Stelian Pop <st...@po...>
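
A minimal sketch of that full-plus-incremental restore sequence (device and dump file names are hypothetical; restore -r replays each dump in level order, applies the recorded deletions, and uses a restoresymtable file to carry state between passes):

  # rebuild the filesystem, then replay the dumps in level order
  mke2fs -j /dev/sdb1
  mount /dev/sdb1 /mnt/restore
  cd /mnt/restore
  restore -r -f /backup/level0.dump   # full dump first
  restore -r -f /backup/level1.dump   # incremental: adds, updates, and purges foo.bar
  rm restoresymtable                  # bookkeeping file, safe to remove when done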
From: Keith G. Robertson-T. <dum...@ge...> - 2007-12-13 04:28:39

Hello Stelian and anyone else reading.

I hope the dictum "there's no such thing as a stupid question" is well accepted on this list, because this may be the mother of all stupid questions. Err..

It's not absolutely clear from the manual, but does restore purge files that were deleted between incremental dumps?

E.g. If file foo.bar exists, then I do a level 0 dump, then I delete foo.bar, then I do a level 1 dump ... if I then wipe the filesystem and restore the level 0 dump then the level 1 dump in sequence, will foo.bar be removed (like "rsync --delete")?

The only info I could find was here:

http://www.searchstorage.com.au/topics/article.asp?DocID=1267195

The info there suggests that dump stores a map of inodes deleted since the last dump in an array called "usedinomap". Does restore actually use that info when restoring incremental dumps, and if so - does it only do that if the "r" switch is used, as opposed to "x"?

TIA.

PS: dump is still the best backup program, even after all these years. Amazing, isn't it? [waves to Linus]. :)

--
Regards,
Keith G. Robertson-Turner
From: Stelian P. <st...@po...> - 2007-12-05 22:12:15

On Tuesday, December 4, 2007 at 12:54 +0100, Helmut Jarausch wrote:

> Hi,
>
> does anybody know (Stelian, you are supposed to) how the
> Quick File Access support works?
> I thought it would notice the position (got by ftell or counting
> the output bytes) where a given file is stored on the backup medium.

It works by using ioctl(MTIOCPOS) if the output is a tape, or lseek if the output is a file.

> Therefore I tried something like
>
>   dump -y -b256 -0 -D /usr/src/dumpdates \
>     -Q /usr/src/$User-$Date.Q \
>     -L $Date-$User -f - /home | \
>     /usr/local/bin/ttcp -t -l 262144 -p 5618 $Server
>
> Here ttcp is something similar to netcat.
>
> Unfortunately dump fails
>
>   DUMP: [26581] error: 29 (getting tapepos: -1)

29 is ESPIPE: a pipe is not seekable. This is why it fails.

--
Stelian Pop <st...@po...>
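
A sketch of QFA with a seekable destination, which avoids the ESPIPE failure above (all paths are hypothetical; restore accepts the same -Q file and seeks straight to the wanted position instead of crawling through the whole archive):

  # dump straight to a regular file so -Q can record seekable positions
  dump -0u -b256 -Q /var/lib/dump/home.qfa -f /mnt/usb/home.dump /home

  # later, pull a single file quickly using the QFA index
  # (paths given to -x are relative to the root of the dumped filesystem)
  restore -x -b256 -Q /var/lib/dump/home.qfa -f /mnt/usb/home.dump ./user/report.txt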
From: Helmut J. <jar...@ig...> - 2007-12-04 11:54:12

Hi,

does anybody know (Stelian, you are supposed to) how the Quick File Access support works? I thought it would notice the position (got by ftell or counting the output bytes) where a given file is stored on the backup medium.

Therefore I tried something like:

  dump -y -b256 -0 -D /usr/src/dumpdates \
    -Q /usr/src/$User-$Date.Q \
    -L $Date-$User -f - /home | \
    /usr/local/bin/ttcp -t -l 262144 -p 5618 $Server

Here ttcp is something similar to netcat.

Unfortunately dump fails:

  DUMP: [26581] error: 29 (getting tapepos: -1)

I thought, with the information in the file generated by the -Q flag, restoring a single file would save restore from crawling through a multi-gigabyte file on a USB drive, by just calling 'fseek'. Can this be done, or is it a limitation of 'dump'?

Many thanks for an info,
Helmut.
--
Helmut Jarausch
Lehrstuhl fuer Numerische Mathematik
RWTH - Aachen University
D 52056 Aachen, Germany
From: Stelian P. <st...@po...> - 2007-11-30 21:05:26

On Friday, November 30, 2007 at 10:07 -0800, Kenneth Porter wrote:

> --On Thursday, November 29, 2007 9:29 PM +0100 Stelian Pop
> <st...@po...> wrote:
>
>> If the question is whether it is possible to exclude a file in order to
>> not be picked up by a later dump, the answer is yes, either by using the
>> -e command line flag or by setting the nodump flag on the file.
>
> I figured a clever pipeline could write a new dump file with the file from
> the old one excluded. Can one feed the output of restore into dump to
> create a new dump file?

No, this is not possible. Dump expects an ext3 filesystem, not a pipe, as its input.

> On a related note, it would be cool if one could mount a dump file
> read-only, much in the way one can mount an ISO9660 image.

I suppose this could be done. Not easy though.

> If the result could be operated on using the same ext3 library used by dump,

This looks even more complicated than the previous item.

> that could be a way to accomplish this.

Yeah. That's the cool thing with computers: there is always a way to accomplish almost everything :)

--
Stelian Pop <st...@po...>
From: Kenneth P. <sh...@se...> - 2007-11-30 18:07:27

--On Thursday, November 29, 2007 9:29 PM +0100 Stelian Pop <st...@po...> wrote:

> If the question is whether it is possible to exclude a file in order to
> not be picked up by a later dump, the answer is yes, either by using the
> -e command line flag or by setting the nodump flag on the file.

I figured a clever pipeline could write a new dump file with the file from the old one excluded. Can one feed the output of restore into dump to create a new dump file?

On a related note, it would be cool if one could mount a dump file read-only, much in the way one can mount an ISO9660 image. If the result could be operated on using the same ext3 library used by dump, that could be a way to accomplish this.
From: Stelian P. <st...@po...> - 2007-11-29 20:29:43

On Thursday, November 29, 2007 at 12:00 -0500, Scott Ehrlich wrote:

> I asked this on a different list and was encouraged to bring it here, too:
>
> Is it possible to delete a file from dump? I saved it in one dump archive,
> and want to save space by deleting it from the others.

If the question is whether it is possible to delete a file from an already-made dump, the answer is no.

If the question is whether it is possible to exclude a file in order to not be picked up by a later dump, the answer is yes, either by using the -e command line flag or by setting the nodump flag on the file.

Stelian.
--
Stelian Pop <st...@po...>
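
Both routes in a short sketch (paths and the inode number are hypothetical; note that -e takes inode numbers rather than paths, and that by default the nodump attribute is only honored at levels 1 and above, so -h 0 is needed for it to apply to a full dump):

  # route 1: set the nodump attribute on the file itself
  chattr +d /home/user/scratch.bin
  dump -0u -h 0 -f /backup/home.dump /home    # -h 0 honors nodump even at level 0

  # route 2: exclude by inode number with -e
  stat -c %i /home/user/scratch.bin           # look up the inode, e.g. 123456
  dump -0u -e 123456 -f /backup/home.dump /home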
From: Scott E. <sc...@MI...> - 2007-11-29 17:00:31

I asked this on a different list and was encouraged to bring it here, too:

Is it possible to delete a file from dump? I saved it in one dump archive, and want to save space by deleting it from the others. I am currently dumping to a file, not tape.

Thanks.

Scott