From: Brent B. <br...@ca...> - 2004-01-31 19:58:39

Hello, there are a few things about dump that I have always suspected but
never have had the opportunity to test to see if they are true:

1) Will dump track file deletions between incremental backups? Here is an
example of what I mean by that: Say I take a full backup of a directory
full of files. Sometime later, I delete some of the files -- deliberately
(I don't want them back). Then I do an incremental based on my last full
backup (a level 1, for example). If I ever needed to recover from this
backup set, due to something like a hard drive failure, would restore be
smart enough to get rid of the files which disappeared between level 0 and
level 1? Or would it resurrect everything?

I've always presumed and *hoped* that it would delete such files during
incremental restores. I've based that hope on the notion that since dump
and restore work with raw inodes, and not just file contents like tar and
cpio do, any incremental restore of a directory node that lost files since
the last incremental would in effect "overwrite" that directory table with
the newer copy that omits the deleted files, having the effect of 'rm' on
the files that were pointed to.

Otherwise, if that were not true, you would get the following negative
behavior during hard drive recovery:

Restore full backup (level 0): Undesired files are extracted to disk
(naturally, because they were there at the time of the full backup and
hadn't been deleted yet).

Restore incremental (level 1): Undesired files are not in level 1, but
they're left on the hard drive because you extracted them in level 0 and
the incremental restore just leaves them there. (After several such
incremental restores, you could potentially resurrect a lot of cruft this
way...)

Does dump avoid this problem? I have always hoped that it would, since it
would seem that one positive effect of extracting raw inodes is that it
would have the same effect as an 'rm' on any files that the inode pointed
to before that were not in the newer incremental being extracted.

2) If a file is copied onto a partition from another machine, and the file
has a datestamp that's older than the latest dump recorded in
/etc/dumpdates for that volume, will the next incremental catch the new
file, on the premise that it didn't exist there before? Or will it miss it
because of the older datestamp? This is actually a pretty common scenario,
because one can only trust the datestamp to catch everything if one can
assume that all files originate from the volume being backed up. As soon
as one starts bringing in files from elsewhere without changing their
datestamps, files could get missed if only the datestamp is used.

These are just a couple of points of curiosity... does anyone know?

--
+ Brent A. Busby,    "The killer application for Windows was Visual
+ CATMIND RECORDINGS  Basic. It allowed you to make your hokey, self-
+ br...@ca...         made applications that did something stupid for
+ Pekin, IL (USA)     your enterprise...." --Linus Torvalds
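
Both questions are easy to probe empirically on a scratch filesystem
before trusting real backups to the answer. The following is a minimal,
untested sketch for question 1, assuming loop-device support, e2fsprogs,
and root privileges; all paths are hypothetical:

# Build a small scratch filesystem (hypothetical paths throughout).
dd if=/dev/zero of=/tmp/fs.img bs=1M count=64
mke2fs -Fq /tmp/fs.img
mkdir -p /mnt/scratch && mount -o loop /tmp/fs.img /mnt/scratch
touch /mnt/scratch/keep /mnt/scratch/doomed

dump -0u -f /tmp/level0.dump /mnt/scratch   # full backup, recorded in dumpdates
rm /mnt/scratch/doomed                      # the deliberate deletion
dump -1u -f /tmp/level1.dump /mnt/scratch   # incremental against level 0

# Replay both levels in order into an empty directory and see whether
# 'doomed' comes back.
mkdir /tmp/rebuild && cd /tmp/rebuild
restore -rf /tmp/level0.dump
restore -rf /tmp/level1.dump
ls doomed    # "No such file or directory" here means deletions are tracked

The same scaffold, with a back-dated file copied in via touch -t before
the level 1 dump, would answer question 2.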
From: Florian Z. <fl...@gm...> - 2004-01-30 22:30:46

Hi,

> I've tried posting to dum...@li... .
> In that case I'm not even getting the confirmation mail.
[...]

I've recently subscribed to the list, and the subscription acknowledgement
took about two days to arrive after I had sent the confirmation of my
subscription to the list manager (mailman). The same for dump-devel took
"only" 1.5 h, BTW. Seems they haven't understood that internet thing yet
at sourceforge =:-)

Cyas,
Florian
From: Florian Z. <fl...@gm...> - 2004-01-30 20:42:19

Hi,

> sync
> dump -0 -f - /dev/hda1 -A archive_file |restore -uvxf -
> date "+%a %b %e %H:%M:%S %Y" > last_dump
>
> Later I'm doing a level 1 dump and writing over the level 0 dump
> files:
> dump -1 -T "`cat last_dump`" -f - /dev/hda1 -A archive_file.new \
> |restore -uvxf -

[files deleted on the source are not deleted on the mirror in that second
step above]

> Unfortunately the archive_file.new, created during the level 1 dump
> contains only the added/changed files and nothing about the removed
> ones.

It _does_ contain information about the removed ones - though indirectly.
Dump is not about backing up files (= directory entries and their
associated data) but about backing up inodes. By dumping all changed
inodes (including directories) relative to some previous backup, you
automatically also record which inodes from a previous backup are no
longer referenced and thus can be deleted upon restore of the incremental
backup.

The problem here is that inode numbers are not preserved by restore, which
is why restoring an incremental backup on top of a filesystem that was
filled by other means than by restoring a level 0 dump does not work -
incremental restore only works because restore keeps track of
corresponding inode numbers in /restoresymtable on the filesystem being
restored. This is how I understand it - if I should be wrong, please
correct me ...

So, after all, it should work if you initially create the mirror using
dump -0 | restore.

As an alternative to dump I'd recommend rsync, which is made exactly for
this kind of task. However, take care to disable checksumming for local
copying - maybe rsync does that automatically, though ...

Cyas,
Florian
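
For reference, a minimal rsync invocation for this kind of local mirror
might look like the sketch below; the paths are hypothetical, and the
trailing slashes matter (copy the contents of the directory, not the
directory itself). As far as I know, rsync disables its delta-transfer
checksumming automatically when both sides are local paths, so no extra
flag should be needed.

# Mirror /original into /mirror, propagating deletions (hypothetical paths).
# -a preserves permissions, times, symlinks, devices and owners; --delete
# removes files from the mirror that have disappeared from the source.
rsync -a --delete /original/ /mirror/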
From: Delian K. <kr...@kr...> - 2004-01-30 14:23:44

Hello,

Here's the test I've performed: I'm making a level 0 dump, and saving the
dump time to a different file, since I prefer updating it rather than
/etc/dumpdates. I'm also saving the files that are dumped, and trying to
use that list later:

sync
dump -0 -f - /dev/hda1 -A archive_file |restore -uvxf -
date "+%a %b %e %H:%M:%S %Y" > last_dump

Later I'm doing a level 1 dump and writing over the level 0 dump files:

dump -1 -T "`cat last_dump`" -f - /dev/hda1 -A archive_file.new \
|restore -uvxf -

The archive is kept on a separate partition which is supposed to be a
mirror of the original. The dumps are taken from an lvm snapshot, avoiding
the errors that a live filesystem might cause. The idea is to always
execute only the second command, and just after it to do

date "+%a %b %e %H:%M:%S %Y" > last_dump

This way I was hoping to be able to keep the mirror partition up to date.

The problem is that if a file is removed from the original partition, it
is impossible to detect that and remove the file from the mirror also.
That way the mirror is always growing in size and getting bigger and
bigger than the original.

A solution with, let's say, "find" is not appropriate, since there are
several hundreds of thousands of files and stating each one takes too much
time.

I might get the list of files from the level 0 dump by parsing the output
of:

cat archive_file |restore -tf -

Unfortunately the archive_file.new, created during the level 1 dump,
contains only the added/changed files and nothing about the removed ones.
It would be sufficient if I could generate a list of all the files that
are in the original partition, like the one from the level 0 dump, when
I'm doing the level 1 dump. If a file is on the first list but not on the
second, it has been removed, and I could remove it from the mirror
partition.

Any ideas?

Regards,
Delian Krustev

p.s. I've searched the archives thoroughly and found nothing about a
similar issue. I doubt no one has faced this; maybe I'm missing something
obvious? Thanks for the patience to read that long post.
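
One way the list comparison described above might be wired up, sketched
under the assumption that the level 0 archive file is available; the awk
filter, the xargs -d flag (GNU xargs), and all paths are hypothetical, the
exact restore -t output format may vary between versions, and filenames
containing newlines would break it:

# Names recorded in the level 0 archive (listing lines start with an
# inode number; the second field is the path, e.g. "./some/file").
restore -tf archive_file | awk '$1 ~ /^[0-9]+$/ { print $2 }' | sort > old.list

# Names currently on the source; a bare -print only reads directories,
# so no per-file stat() is needed.
( cd /original && find . -print ) | sort > new.list

# Anything in the old list but not the new one was deleted at the source,
# so delete it from the mirror too.
comm -23 old.list new.list | ( cd /mirror && xargs -d '\n' rm -rf )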
From: Delian K. <kr...@kr...> - 2004-01-29 22:07:06

Hi,

I've tried posting to dum...@li... . In that case I'm not even getting the
confirmation mail.

I've also tried subscribing from the web interface, and confirming the
subscription by email. I get the confirmation mail, but after confirming I
get no further response (welcome message). I've checked my subscription
status from the web interface and I've seen the subscription wasn't
successful.

I've tried sending the mails from two different mail servers. I've also
tried telneting to mail.sourceforge.net, port 25, and confirming the
subscription request from there.

In all the cases I've also checked that the messages were delivered.
Here's the proof:

2004-01-29 19:01:49.816057500 new msg 81978
2004-01-29 19:01:49.816164500 info msg 81978: bytes 515 from <krustev@krustev.net> qp 10350 uid 107
2004-01-29 19:01:49.853109500 starting delivery 9399: msg 81978 to remote dum...@li...
2004-01-29 19:01:49.853122500 status: local 0/10 remote 2/255
2004-01-29 19:01:52.263052500 delivery 9399: success: 66.35.250.206_accepted_message./Remote_host_said:_250_OK_id=1AmHQF-0002V5-NL/
2004-01-29 19:01:52.263229500 status: local 0/10 remote 1/255
2004-01-29 19:01:52.263299500 end msg 81978

None of the above works! I think the subscriptions for that list must be
automatic (not moderated). Can you please check what is going on?

Thank you.

Delian Krustev

p.s. I'm cc-ing the list itself, hoping someone there might help
From: Stelian P. <st...@po...> - 2004-01-20 10:56:27

On Sun, Jan 18, 2004 at 04:58:04PM -0600, Jon N. Brelie wrote:
[...]
> I can issue mt commands
> that function until I try them after a failed dump operation. Then they
> no longer work. Granted, I've only tried rewind, retension and rewoffl,
> so it could still be a drive issue. I plan on calling Dell to see if they
> can send me yet another drive to try out, as the one they sent was
> refurbished.
>
> Does anyone have any other ideas for me to try? Does any of the
> above info trigger a red flag for anyone?
[...]

Are you able to read the tape using a simple dd? Like in:

mt rewind
dd if=/dev/st0 of=/tmp/test bs=10k

I suspect this will behave exactly the same way as dump, showing that the
problem is at the kernel (or hardware) level. Contacting the tape driver
maintainer (Kai Makisara) directly is the best way to proceed in this
case...

Stelian.
--
Stelian Pop <st...@po...>
From: Jon N. B. <ma...@nv...> - 2004-01-19 17:25:21

I'm using one. I didn't have to configure anything. It just worked.

On Mon, 19 Jan 2004, Nick Garfield wrote:

> Hi Dump Users,
>
> I am thinking about buying an LTO tape drive. Is anyone out there using
> LTO, Linux and dump? If so does it work OK and is there anything I
> should particularly watch out for?
>
> Thanks in advance for any replies.
>
> Regards,
>
> Nick
>
> Nick Garfield
> IT/CS Campus Networking Section
> CERN
> Geneva
> Switzerland
>
> Tel: +41 22 76 74 533

--
*******************************
Jon N. Brelie
Information Systems Manager
NVE Corp.
*******************************
From: Nick G. <Nic...@ce...> - 2004-01-19 09:30:04

Hi Dump Users,

I am thinking about buying an LTO tape drive. Is anyone out there using
LTO, Linux and dump? If so does it work OK and is there anything I should
particularly watch out for?

Thanks in advance for any replies.

Regards,

Nick

Nick Garfield
IT/CS Campus Networking Section
CERN
Geneva
Switzerland

Tel: +41 22 76 74 533
From: Jon N. B. <ma...@nv...> - 2004-01-18 22:58:17

Well, I just had a terrifying ordeal that may shed some light on this
subject. When I came in today to try switching SCSI cables on the tape
drive, I got an alarm from the PERC RAID card. Turns out none of the
drives were bad; it had just gotten confused somehow as to what the
configuration was. I rebuilt the original offender just to be sure, and
the system is now up and running again with all three drives functioning
in the array.

I thought that this might have been the source of my problems dumping to
tape, but sadly, the problem persists. I can issue mt commands that
function until I try them after a failed dump operation. Then they no
longer work. Granted, I've only tried rewind, retension and rewoffl, so it
could still be a drive issue. I plan on calling Dell to see if they can
send me yet another drive to try out, as the one they sent was
refurbished.

Does anyone have any other ideas for me to try? Does any of the above info
trigger a red flag for anyone?

Thanks for the help.

~Jon.

On Wed, 14 Jan 2004, Jon N. Brelie wrote:

> Ah yes. Sorry, it was late. I should have been more clear. mt
> commands only fail *after* a failed dump operation. If I pop in a fresh
> tape, they work.
>
> The drive came with a Dell PE 1400. I think Seagate is the
> original manufacturer. Model number is STD2401LW. Connection is
> internal SCSI. I've tried different cables, different channels, checked
> connections and termination... I'm really stumped.
>
> On Wed, 14 Jan 2004, Antonios Christofides wrote:
>
> > Jon N. Brelie wrote:
> > > st0: Error with sense data: Info fld=0x0, Current st09:00: sense key
> > > Medium Error
> > > Additional sense indicates Excessive write errors
> > > st0: Error on write filemark.
> >
> > (This isn't a dump problem, of course, since mt also has problems; it
> > seems to be a hardware problem).
> >
> > What model/make is the drive? How is it connected? Is it a SCSI? What
> > model/make is the SCSI interface? Try a different cable, and
> > check/replace SCSI terminators. If this fails, shutdown and switch off
> > the machine and the drive, then switch them on again. If this also
> > fails, you should try a different interface.

--
*******************************
Jon N. Brelie
Information Systems Manager
NVE Corp.
*******************************
From: Jon N. B. <ma...@nv...> - 2004-01-14 17:32:07

Ah yes. Sorry, it was late. I should have been more clear. mt commands
only fail *after* a failed dump operation. If I pop in a fresh tape, they
work.

The drive came with a Dell PE 1400. I think Seagate is the original
manufacturer. Model number is STD2401LW. Connection is internal SCSI. I've
tried different cables, different channels, checked connections and
termination... I'm really stumped.

On Wed, 14 Jan 2004, Antonios Christofides wrote:

> Jon N. Brelie wrote:
> > st0: Error with sense data: Info fld=0x0, Current st09:00: sense key
> > Medium Error
> > Additional sense indicates Excessive write errors
> > st0: Error on write filemark.
>
> (This isn't a dump problem, of course, since mt also has problems; it
> seems to be a hardware problem).
>
> What model/make is the drive? How is it connected? Is it a SCSI? What
> model/make is the SCSI interface? Try a different cable, and
> check/replace SCSI terminators. If this fails, shutdown and switch off
> the machine and the drive, then switch them on again. If this also
> fails, you should try a different interface.

--
*******************************
Jon N. Brelie
Information Systems Manager
NVE Corp.
*******************************
From: Antonios C. <an...@it...> - 2004-01-14 07:31:53

Jon N. Brelie wrote:
> st0: Error with sense data: Info fld=0x0, Current st09:00: sense key
> Medium Error
> Additional sense indicates Excessive write errors
> st0: Error on write filemark.

(This isn't a dump problem, of course, since mt also has problems; it
seems to be a hardware problem).

What model/make is the drive? How is it connected? Is it a SCSI? What
model/make is the SCSI interface? Try a different cable, and check/replace
SCSI terminators. If this fails, shutdown and switch off the machine and
the drive, then switch them on again. If this also fails, you should try a
different interface.
From: Jon N. B. <ma...@nv...> - 2004-01-14 04:13:14

Hello all. It's been a LONG time since I've posted to this list because
dump has worked so well for years. It just quit on me though.

For about 2.5 years now I have been running scripts similar to this:

------
dump 0uBf 20000000 /dev/nst0 /
dump 0uBf 20000000 /dev/nst0 /boot
dump 0uBf 20000000 /dev/nst0 /usr
dump 0uBf 20000000 /dev/nst0 /var
mt -f /dev/nst0 rewoffl
------

... nearly every day. I am using dump 0.4b19 on an RH 6.2 system. Lately I
have been getting this (dmesg):

------
st0: Error with sense data: Info fld=0x0, Current st09:00: sense key
Medium Error
Additional sense indicates Excessive write errors
st0: Error on write filemark.
------

Most mt commands give me an I/O error as well. I've tried cleaning the
drive, tried new media, tried old media... I've even replaced the drive
with another (same model/make) and I continue to get these errors.

Anyone have any ideas? I am out. It literally just stopped working the way
that it had for 2 and a half years. I hadn't changed *anything* for months
when this started happening. Oh yeah, filesystem is ext2.

Anyone have any ideas? Thanks.

~Jon.

--
*******************************
Jon N. Brelie
Information Systems Manager
NVE Corp.
*******************************
From: Stelian P. <st...@po...> - 2004-01-03 20:18:23

On Sat, Jan 03, 2004 at 01:21:30PM -0600, Brent Busby wrote:
> Hello, I am trying to use 'restore' to put the contents of /, /usr, and
> /opt onto one / partition. (I also have a /var, /tmp, and /home
> partition, but I'm leaving those alone and only consolidating /usr and
> /opt.) My question regards the best type of restore operation to do this.
>
> Obviously I'm going to want to repartition and then mke2fs the new root
> partition, and do 'restore -rv' from tape to put the old contents of /
> onto the new big root partition. But then...
>
> What about /usr and /opt? The man page says that '-r' should only be used
> on a pristine volume that has just been formatted. That would no longer
> be the case for my root partition now that I've just restored the old /
> onto it.

Just use restore -r, it will work. You'll get warnings if you have some
name collisions (same filename on several files), but it will work just
fine. So:

mkfs /dev/...
mount /dev/... /mnt/new
cd /mnt/new
restore rf /path/to/root.dump
mkdir usr
cd usr
restore rf /path/to/usr.dump
cd ..
mkdir opt
cd opt
restore rf /path/to/opt.dump

> So I use -x instead? The man page for restore says the -x option will (if
> possible) restore "the owner, modification time, and mode." What about
> the creation time... last access time... group... hardlinks...
> special/sparse files? The author of the man page may have just been
> omitting such things for brevity, but it gives the impression that a -x
> type restore is quite a bit less complete than a -r. Which I shouldn't
> use (it says) on a filesystem that already has data on it.

You can also use -x. -x and -r are quite equivalent; the main difference
is that -r is optimised for speed when restoring all the data (-x gives
the opportunity to restore only a part of the data), and -r understands
incremental restores (which doesn't matter for you).

> That leaves -i as the only other option, which I suppose I would use by
> interactively selecting everything to simulate a -r without whatever
> nasty effects I would get from an actual -r.

-i will behave exactly like -x.

> I'm so confused. <g>
>
> What's the best thing to do here? How should one best restore an entire
> volume dump preserving as much of the original metadata and filesystem
> structure as possible into a subdirectory of an existing filesystem with
> data on it? -r? -x? -i with everything selected...?

See above.

Stelian.
--
Stelian Pop <st...@po...>
From: Brent B. <br...@vi...> - 2004-01-03 19:21:44

Hello, I am trying to use 'restore' to put the contents of /, /usr, and
/opt onto one / partition. (I also have a /var, /tmp, and /home partition,
but I'm leaving those alone and only consolidating /usr and /opt.) My
question regards the best type of restore operation to do this.

Obviously I'm going to want to repartition and then mke2fs the new root
partition, and do 'restore -rv' from tape to put the old contents of /
onto the new big root partition. But then...

What about /usr and /opt? The man page says that '-r' should only be used
on a pristine volume that has just been formatted. That would no longer be
the case for my root partition now that I've just restored the old / onto
it.

So I use -x instead? The man page for restore says the -x option will (if
possible) restore "the owner, modification time, and mode." What about the
creation time... last access time... group... hardlinks... special/sparse
files? The author of the man page may have just been omitting such things
for brevity, but it gives the impression that a -x type restore is quite a
bit less complete than a -r. Which I shouldn't use (it says) on a
filesystem that already has data on it.

That leaves -i as the only other option, which I suppose I would use by
interactively selecting everything to simulate a -r without whatever nasty
effects I would get from an actual -r.

I'm so confused. <g>

What's the best thing to do here? How should one best restore an entire
volume dump preserving as much of the original metadata and filesystem
structure as possible into a subdirectory of an existing filesystem with
data on it? -r? -x? -i with everything selected...?

--
Brent A. Busby    "The killer application for Windows was Visual
br...@ca...        Basic. It allowed you to make your hokey, self-
Pekin, IL (USA)    made applications that did something stupid for
                   your enterprise...." --Linus Torvalds
From: Stelian P. <st...@po...> - 2003-12-29 18:31:45

On Mon, Dec 29, 2003 at 04:26:19PM +0000, Marc Thomas wrote:
>
> Hi Kirk,
>
> Why not use another dump & ext2/3 feature - the "nodump" filesystem flag?

Indeed. However, in this particular case (files being created / removed
very often), I think the exclusion list is more appropriate (a bit quicker
too).

I would use the nodump flag to permanently mark, let's say, /tmp and
/var/tmp for exclusion (knowing that the inode numbers of those
directories won't change over time). But both methods are equivalent from
dump's point of view.

Stelian.
--
Stelian Pop <st...@po...>
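
For instance, the permanent /tmp exclusion described here might look like
the sketch below; the device and filesystem are hypothetical, and -h 0 is
used per the chattr/-h mechanics Marc lays out in his message below:

# Mark the scratch directories once; dump then skips them and everything
# beneath them.
chattr +d /tmp /var/tmp

# -h 0 tells dump to honour the nodump flag even at level 0
# (hypothetical tape device).
dump -0u -h 0 -f /dev/nst0 /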
From: Marc T. <ma...@dr...> - 2003-12-29 16:28:12

Hi Kirk,

Why not use another dump & ext2/3 feature - the "nodump" filesystem flag?

Use the chattr(1) command to set the "nodump" flag on files and
directories you wish to exclude from backups. Then use the "-h" dump
option to specify the minimum backup level which will honour this flag.
The default honour level is 1, meaning that full backups (level 0) will
include all files regardless of this flag; thus if you want a full backup
to exclude "nodump" files, specify -h0 on the dump command line. Have a
look at the chattr(1) and the dump(8) manpages for further details.

Example:

find /home -iname '*.mp3' -type f -print -exec chattr +d {} \;

... will list all the mp3 files in /home, and turn on the nodump flag for
them.

Note that if you set "nodump" on a directory, it will exclude everything
in that directory (and any subdirectories) regardless of their individual
settings (hence the "-type f" in the example above).

Hope that's of some help!

Regards,
Marc
From: Stelian P. <st...@po...> - 2003-12-23 13:38:56

On Mon, Dec 22, 2003 at 09:41:35PM -0700, Kirk Hoganson wrote:
> Greetings,
> I have run into a dump problem, and I am hoping for some suggestions.
>
> We allow users to store personal files in their home directories on a
> particular server. Consequently, scattered amongst the legitimate, more
> critical files are mp3's, mov's, avi's etc. I was hoping to exclude
> these files from the tape backup rotation by using find to generate an
> inode exclusion list. However, the list was over 800 entries, and will
> grow with more time. Has anyone encountered a similar situation that
> they might be able to enlighten me on? We need to back up the majority
> of the files stored on this file system, but we don't have the space to
> continuously back up these more frivolous files. I don't believe that
> the individual users in this situation can be relied upon to store these
> types of files in specific directories that I could exclude. The perfect
> solution (prior to becoming acquainted with the entry limit) was to use
> find to automatically generate a list of these dynamic files. Any other
> possible suggestions would be appreciated.

The latest version of dump, released two days ago, no longer has a limit
on the number of inodes that can be excluded.

Stelian.
--
Stelian Pop <st...@po...>
From: Kirk H. <kho...@co...> - 2003-12-23 04:40:24

Greetings,
I have run into a dump problem, and I am hoping for some suggestions.

We allow users to store personal files in their home directories on a
particular server. Consequently, scattered amongst the legitimate, more
critical files are mp3's, mov's, avi's etc. I was hoping to exclude these
files from the tape backup rotation by using find to generate an inode
exclusion list. However, the list was over 800 entries, and will grow with
more time. Has anyone encountered a similar situation that they might be
able to enlighten me on? We need to back up the majority of the files
stored on this file system, but we don't have the space to continuously
back up these more frivolous files. I don't believe that the individual
users in this situation can be relied upon to store these types of files
in specific directories that I could exclude. The perfect solution (prior
to becoming acquainted with the entry limit) was to use find to
automatically generate a list of these dynamic files. Any other possible
suggestions would be appreciated.

Kirk
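
With the entry limit lifted (per Stelian's reply above), the
find-generated exclusion list described here might be wired up as in this
sketch; the paths, patterns, and tape device are hypothetical, and it
assumes a dump version whose -E option reads newline-separated inode
numbers from a file:

# Collect the inode numbers of the files to skip (GNU find's %i prints
# the inode number).
find /home -type f \( -iname '*.mp3' -o -iname '*.mov' -o -iname '*.avi' \) \
    -printf '%i\n' > /tmp/exclude.inodes

# Level 0 dump of /home, excluding every inode in the list.
dump -0u -E /tmp/exclude.inodes -f /dev/nst0 /home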
From: Eric J. <eje...@sw...> - 2003-12-10 20:33:20

Hi Eros,

> I am trying to restore from a tape and I have the messages
>
> "restore: Tape read error on first record"

Sometimes it's a block-size mismatch. Try

mt -f TAPEDEVICE setblk 0

where TAPEDEVICE is whatever device points to your tape drive, maybe
/dev/nst0 or something like that. Then make sure the tape is rewound and
try the restore again and see if that helps. This has worked for me many
times on tapes that seem initially unreadable.

Eric
From: Eros A. <ph...@bo...> - 2003-12-10 17:11:42

I am trying to restore from a tape and I get the message

"restore: Tape read error on first record"

Any clue as to what it means?

Regards.

--
Eros Albertazzi
CNR-IMM, Sez. Bologna
Via P.Gobetti 101
I-40129 Bologna, Italy
Tel: (+39)-051-639 9179
Fax: (+39)-051-639 9216
From: Stelian P. <st...@po...> - 2003-12-09 08:16:01

On Mon, Dec 08, 2003 at 06:06:21PM -0500, James Roth wrote:
> More clues. Looking at a different partition from the same machine
> yields a different error:
>
> restore -iv
> Verify tape and initialize maps
> Input is from a local tape
> Tape block size is 32
> Dump date: Thu Dec 4 00:09:52 2003
> Dumped from: Tue Oct 28 15:15:21 2003
> Level 1 dump of / on orca:/dev/sda6
> Label: orca
> Extract directories from tape
> Checksum error 14534106460, inode 48294 file <directory file - name unknown>
> Mangled directory: reclen not multiple of 4
> resync restore, skipped 101 blocks
> Initialize symbol table.
> restore >
>
> Maybe this old 6.2 redhat box has some libc issues. Kernel 2.4.18 looks
> like GLIBC_2.1.3. Restoring from a newer machine does not help.

This really looks like your tapes are corrupt. Try cleaning your drive
head; reading in a different drive might also help. But if the issue is
really the magnetic media, I'm afraid you won't be able to restore them.

Just to be sure, take a new tape, do a dump on it and try to restore it
(using the same parameters - dd et al - you were using before). If what
goes wrong is in the setup, then you won't be able to restore it. On the
other hand, if you restore it just fine, then the problem lies with the
old medium...

Stelian.
--
Stelian Pop <st...@po...>
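
The round-trip test suggested here might look like the following sketch,
reusing the dd parameters from James's original setup; the device name,
test directory, and dump flags are hypothetical:

# Write a small level 0 dump to a scratch tape through the same dd
# pipeline that produced the failing tapes.
mt -f /dev/nst0 rewind
dump -0 -f - /some/small/dir | dd ibs=512 obs=512 of=/dev/nst0 conv=sync

# Read it straight back and check that restore can list it.
mt -f /dev/nst0 rewind
dd bs=512 if=/dev/nst0 of=/tmp/roundtrip.bin
restore -tvf /tmp/roundtrip.bin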
From: Stelian P. <st...@po...> - 2003-12-09 08:11:49

On Mon, Dec 08, 2003 at 05:28:57PM -0500, James Roth wrote:
> I tried -b 10 as well as 1, 2, 3, ... 32, 64, etc. From my backup log:
> "DUMP: Writing 10 Kilobyte records"
>
> Also, my script sets the block size to 512 before each dump.
>
> Does my "dd ibs=512 obs=512 of=/dev/tape conv=sync" look suspicious to
> anyone?

It does seem suspicious to me, as does the fact that you're using a 512
byte blocksize with a DDS4 drive, which should use much larger blocksizes
(32k for example) or variable block sizes (mt setblk 0) in order to gain
both speed and capacity.

Stelian.
--
Stelian Pop <st...@po...>
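
Concretely, the variable-blocksize setup suggested here might look like
this sketch (the device name and filesystem are hypothetical; dump's -b
option takes the record size in kilobytes):

# Let the drive operate in variable block size mode.
mt -f /dev/nst0 setblk 0

# Dump with 32 KB records, written directly to the tape device - no dd
# re-blocking in between.
dump -0u -b 32 -f /dev/nst0 /store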
From: James R. <ja...@kr...> - 2003-12-08 23:18:36

James Roth wrote:
>>> Hi,
>>>
>>> I'm getting the following message when restoring.
>>>
>>> $ restore -iv
>>> Verify tape and initialize maps
>>> Input is from a local tape
>>> Tape block size is 32
>>> Dump date: Fri Dec 5 00:12:05 2003
>>> Dumped from: Tue Oct 28 15:21:11 2003
>>> Level 1 dump of /store on orca:/dev/md0
>>> Label: orca
>>> restore: Cannot find file dump list
>>>
>>> The tape was created using a command like this:
>>> ssh orca dump -u -0 /store -f - | dd ibs=512 obs=512 of=/dev/tape conv=sync
>>>
>>> DDS4 tape drive
>>> Linux x86 with a heavily upgraded Redhat 6.2 distro
>>> restore 0.4b34 (using libext2fs 1.34 of 25-Jul-2003)
>>> Tape block size 512 bytes. Density code 0x26 (DDS-4 or QIC-4GB).
>>>
>>> Is there a way to recover this?
>>> I tried...
>>>
>>> dd bs=512 if=/dev/tape of=temp.bin
>>> restore -iv -f temp.bin

More clues. Looking at a different partition from the same machine yields
a different error:

restore -iv
Verify tape and initialize maps
Input is from a local tape
Tape block size is 32
Dump date: Thu Dec 4 00:09:52 2003
Dumped from: Tue Oct 28 15:15:21 2003
Level 1 dump of / on orca:/dev/sda6
Label: orca
Extract directories from tape
Checksum error 14534106460, inode 48294 file <directory file - name unknown>
Mangled directory: reclen not multiple of 4
resync restore, skipped 101 blocks
Initialize symbol table.
restore >

Maybe this old 6.2 redhat box has some libc issues. Kernel 2.4.18 looks
like GLIBC_2.1.3. Restoring from a newer machine does not help.

Thanks,
James
From: James R. <ja...@kr...> - 2003-12-08 22:41:15

Stelian Pop wrote:

> On Mon, Dec 08, 2003 at 03:16:03PM -0500, James Roth wrote:
>
>> Hi,
>>
>> I'm getting the following message when restoring.
>>
>> $ restore -iv
>> Verify tape and initialize maps
>> Input is from a local tape
>> Tape block size is 32
>> Dump date: Fri Dec 5 00:12:05 2003
>> Dumped from: Tue Oct 28 15:21:11 2003
>> Level 1 dump of /store on orca:/dev/md0
>> Label: orca
>> restore: Cannot find file dump list
>>
>> The tape was created using a command like this:
>> ssh orca dump -u -0 /store -f - | dd ibs=512 obs=512 of=/dev/tape conv=sync
>>
>> DDS4 tape drive
>> Linux x86 with a heavily upgraded Redhat 6.2 distro
>> restore 0.4b34 (using libext2fs 1.34 of 25-Jul-2003)
>> Tape block size 512 bytes. Density code 0x26 (DDS-4 or QIC-4GB).
>>
>> Is there a way to recover this?
>> I tried...
>>
>> dd bs=512 if=/dev/tape of=temp.bin
>> restore -iv -f temp.bin
>
> Try:
> restore -iv -f temp.bin -b 10
>
> but I'm not sure it will help...
>
> Are you sure the tape blocksize (as per mt setblk) hasn't changed
> between the dump and the restore?
>
> Stelian.

Thanks for the reply.

I tried -b 10 as well as 1, 2, 3, ... 32, 64, etc. From my backup log:
"DUMP: Writing 10 Kilobyte records"

Also, my script sets the block size to 512 before each dump.

Does my "dd ibs=512 obs=512 of=/dev/tape conv=sync" look suspicious to
anyone?

James
From: Stelian P. <st...@po...> - 2003-12-08 22:26:56

On Mon, Dec 08, 2003 at 03:16:03PM -0500, James Roth wrote:
> Hi,
>
> I'm getting the following message when restoring.
>
> $ restore -iv
> Verify tape and initialize maps
> Input is from a local tape
> Tape block size is 32
> Dump date: Fri Dec 5 00:12:05 2003
> Dumped from: Tue Oct 28 15:21:11 2003
> Level 1 dump of /store on orca:/dev/md0
> Label: orca
> restore: Cannot find file dump list
>
> The tape was created using a command like this:
> ssh orca dump -u -0 /store -f - | dd ibs=512 obs=512 of=/dev/tape conv=sync
>
> DDS4 tape drive
> Linux x86 with a heavily upgraded Redhat 6.2 distro
> restore 0.4b34 (using libext2fs 1.34 of 25-Jul-2003)
> Tape block size 512 bytes. Density code 0x26 (DDS-4 or QIC-4GB).
>
> Is there a way to recover this?
> I tried...
>
> dd bs=512 if=/dev/tape of=temp.bin
> restore -iv -f temp.bin

Try:

restore -iv -f temp.bin -b 10

but I'm not sure it will help...

Are you sure the tape blocksize (as per mt setblk) hasn't changed between
the dump and the restore?

Stelian.
--
Stelian Pop <st...@po...>