From: Bernhard E. <be...@be...> - 2000-06-26 23:15:22
Chuck Pierce wrote:
> Is there a way to compress a partition before you put it on to tape? I'd like
> to keep all of my data on one tape, and I don't really care how long it takes.
> Thanks - Chuck

/sbin/dump 0af - /partition | gzip -c | dd obs=32k of=$TAPE
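Bernhard's pipeline can be exercised without root or a tape drive by substituting a plain file for the dump stream and a file for $TAPE — a sketch that only verifies the gzip/dd stages round-trip data intact; the dump stage itself is unchanged from his command, and the /tmp file names are arbitrary:

```shell
#!/bin/sh
# Round-trip check of the gzip | dd stages of:
#   /sbin/dump 0af - /partition | gzip -c | dd obs=32k of=$TAPE
# A plain file stands in for the dump output stream here.
set -e
TAPE=/tmp/tapefile                        # stand-in for the real tape device
printf 'pretend this is dump output\n' > /tmp/dumpstream

# compress and reblock into 32 KB records, as if writing to tape
gzip -c < /tmp/dumpstream | dd obs=32k of="$TAPE" 2>/dev/null

# read back and decompress, the way a restore pipeline would consume it:
#   dd if=$TAPE | gzip -dc | restore -rf -
dd if="$TAPE" 2>/dev/null | gzip -dc > /tmp/roundtrip

cmp /tmp/dumpstream /tmp/roundtrip && echo "round-trip OK"
```

Swapping the file back for the real non-rewinding tape device restores Bernhard's original one-liner.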
From: Chuck P. <ch...@on...> - 2000-06-26 19:06:26
Is there a way to compress a partition before you put it on to tape? I'd like
to keep all of my data on one tape, and I don't really care how long it takes.
Thanks - Chuck
From: Stelian P. <st...@ca...> - 2000-06-26 08:18:31
On Mon, 26 Jun 2000, Linas Vepstas wrote:
> If I read between the lines, there seems to be no way to
> put two or more filesystems on one tape.

You're wrong. This was already discussed on this mailing list one or two
months ago...

> Yet I want to do this ... how? What's the best way?

Use the non-rewinding tape device (such as /dev/nst0) and:

mt -f /dev/nst0 rewind
dump 0f /dev/nst0 /dev/sda1
dump 0f /dev/nst0 /dev/sda2
dump 0f /dev/nst0 /dev/sda3
mt -f /dev/nst0 rewind

In order to do a restore, first go to the wanted place on the tape
(mt -f /dev/nst0 fsf 1, or fsf 2, or fsf 3), then do your preferred flavour
of restore (-i, -r, -x etc).

> Amanda seems like overkill ... (a hint on the manpage about this topic
> would be nice)

Yes, it would :)

Stelian.
--
Stelian Pop <sp...@ca...>     | Too many things happened today
Captimark                     | Too many words I don't wanna say
Paris, France                 | I wanna be cool but the heat's coming up
                              | I'm ready to kill 'cause enough is enough
PGP key available on request  | (Accept - "Up To The Limit")
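Stelian's sequence can be wrapped in a small script. A sketch, with a hypothetical partition list and the device name as assumptions to adjust locally; DRY_RUN=1 (the default here) only prints the commands, so it can be tried without a tape drive:

```shell
#!/bin/sh
# Dump several filesystems onto one tape through the non-rewinding
# device, one tape file per dump, as described above.
# DRY_RUN=1 (default) prints the commands instead of executing them.
TAPE_DEV=${TAPE_DEV:-/dev/nst0}              # non-rewinding device (check yours)
PARTITIONS="/dev/sda1 /dev/sda2 /dev/sda3"   # example list, adjust

run() {
    if [ "${DRY_RUN:-1}" = 1 ]; then
        echo "$@"                            # dry run: show the command
    else
        "$@"
    fi
}

run mt -f "$TAPE_DEV" rewind
for part in $PARTITIONS; do
    run dump 0f "$TAPE_DEV" "$part"          # each dump becomes one tape file
done
run mt -f "$TAPE_DEV" rewind
```

Set DRY_RUN=0 to actually execute. To restore a later dump, rewind first and skip forward with mt fsf before running restore, as Stelian describes.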
From: Linas V. <li...@li...> - 2000-06-25 23:30:21
Hi,

If I read between the lines, there seems to be no way to put two or more
filesystems on one tape. Yet I want to do this ... how? What's the best way?
Amanda seems like overkill ... (a hint on the manpage about this topic would
be nice)

--linas
From: John R. J. <jr...@ga...> - 2000-06-20 15:09:42
> | DUMP: master/slave protocol botched.

That probably means one of the dump child processes died. Start looking for
core files (try /tmp/amanda first).

John R. Jackson, Technical Software Specialist, jr...@pu...
From: Troy E. A. <ch...@me...> - 2000-06-20 13:47:41
I'm using 2.4.2-19991216-beta1 of amanda with Linux dump 0.4b17 (I had
0.4b15 and used it without this problem in the past), but now it's running
on kernel 2.4.0-test1-ac18 and I'm getting this kind of failure on several
filesystems. All of them have > 2 GB of data on them, but some of them fail
immediately. I'm cc'ing the linux-kernel list just in case someone knows of
any fundamental changes in dev kernels that would cause something like this.
Thanks.

/-- moonlight /home/media/misc lev 0 FAILED [/sbin/dump returned 3]
sendbackup: start [moonlight:/home/media/misc level 0]
sendbackup: info BACKUP=/sbin/dump
sendbackup: info RECOVER_CMD=/sbin/restore -f... -
sendbackup: info end
| DUMP: Date of this level 0 dump: Tue Jun 20 05:38:27 2000
| DUMP: Date of last level 0 dump: the epoch
| DUMP: Dumping /dev/mvg0/mmed (/home/media/misc) to standard output
| DUMP: Label: none
| DUMP: mapping (Pass I) [regular files]
| DUMP: mapping (Pass II) [directories]
| DUMP: estimated 6011317 tape blocks.
| DUMP: Volume 1 started at: Tue Jun 20 05:39:07 2000
| DUMP: dumping (Pass III) [directories]
| DUMP: dumping (Pass IV) [regular files]
| DUMP: 2.43% done, finished in 3:21
| DUMP: 5.49% done, finished in 2:52
| DUMP: 8.63% done, finished in 2:38
| DUMP: 11.79% done, finished in 2:29
| DUMP: 14.96% done, finished in 2:22
| DUMP: 17.89% done, finished in 2:17
| DUMP: 21.08% done, finished in 2:11
| DUMP: 24.22% done, finished in 2:05
| DUMP: 27.39% done, finished in 1:59
| DUMP: 30.54% done, finished in 1:53
| DUMP: 33.71% done, finished in 1:48
| DUMP: 36.87% done, finished in 1:42
| DUMP: 40.02% done, finished in 1:37
| DUMP: 43.16% done, finished in 1:32
| DUMP: 46.30% done, finished in 1:26
| DUMP: 49.46% done, finished in 1:21
| DUMP: 52.60% done, finished in 1:16
| DUMP: 55.73% done, finished in 1:11
| DUMP: 58.86% done, finished in 1:06
| DUMP: 62.02% done, finished in 1:01
| DUMP: 65.17% done, finished in 0:56
| DUMP: 68.31% done, finished in 0:51
| DUMP: 71.45% done, finished in 0:45
| DUMP: 74.62% done, finished in 0:40
| DUMP: 77.78% done, finished in 0:35
| DUMP: master/slave protocol botched.
| DUMP: The ENTIRE dump is aborted.
sendbackup: error [/sbin/dump returned 3]
\--------
From: Stelian P. <po...@cy...> - 2000-05-28 16:52:46
On Sun, 28 May 2000, José Juan Pina Camacho wrote:
> I'm new to this list and I'm looking for a dump HOWTO which would enable
> me to make a document. Does any of you know where I can find something
> interesting (apart from the man pages)?

There is currently a lack of dump documentation (except if you can read
Japanese :) ). But the man pages are rather complete and explicit, and since
dump is a UNIX standard tool, you can rely on any other documentation you
can find (not Linux specific), like a good UNIX book...

Stelian.
--
   /\
  /  \     Stelian Pop
 / DS \    Email: po...@cy...
 \____/
From: <jos...@wa...> - 2000-05-28 16:16:26
Hi!

I'm new to this list and I'm looking for a dump HOWTO which would enable me
to make a document. Does any of you know where I can find something
interesting (apart from the man pages)?

Thanks in advance,
Jose Juan
From: Stelian P. <po...@cy...> - 2000-05-28 15:07:09
On Mon, 17 Apr 2000, Kenneth Porter wrote:
> I back up 3 partitions to a tape, rewind it, and then verify the 3
> partitions with "restore C". I'd like to see a success report at the
> end of each verify, if all files compared equal, so I know that restore
> definitely didn't exit silently with an error. (Peace of mind issue.)

This should be rather simple to add to restore, but you should be aware
that this will work only if you are dumping a whole filesystem, not a
directory of a filesystem (because you will have 'file not found on tape'
errors, inherent to the dump design). Maybe you should analyse the restore
output with grep...

> Also, when I perform the verify pass, I have to issue 5 restore
> operations to verify 3 partitions. I have to issue a dummy "restore Cf
> /dev/nst0" between the real ones, which responds immediately with
> "restore: tape read error: Success". What's up with that? The system is
> Red Hat 6.1 with a 2.2.14 kernel from the 6.2 distribution. (It's done
> this since 5.2, though.) The drive is an HP SureStore DDS3 DAT
> (12/24GB). The SCSI controller is an Adaptec AIC7xxx controller.

This seems to me like a module problem... try to insmod the module before
launching dump...

Stelian.
--
   /\
  /  \     Stelian Pop
 / DS \    Email: po...@cy...
 \____/
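Stelian's grep suggestion can be scripted: save the "restore C" output to a file and report success only when no line looks like a miscompare. A sketch — the sample log content and the failure patterns are hypothetical and must be adjusted to the messages the local restore version actually prints:

```shell
#!/bin/sh
# Scan a saved "restore C" log and report success only when no
# comparison failures appear. The sample log text and the grep
# patterns below are hypothetical -- adjust them to the messages
# your restore version actually prints.
cat > /tmp/restoreC.log <<'EOF'
Dump   date: Tue May 23 07:45:07 2000
Dumped from: the epoch
Level 0 dump of / on moonlight:/dev/hda1
filesys = /
EOF

check_verify_log() {
    # fail when any line mentions a miscompare
    if grep -Eqi 'compare error|differs|missing' "$1"; then
        echo "VERIFY FAILED"
        return 1
    fi
    echo "VERIFY OK"
}

check_verify_log /tmp/restoreC.log
```

Running this after each verify pass gives the positive "all compared equal" signal Kenneth asked for, at the cost of keeping the pattern list in sync with restore's wording.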
From: Rob S. <rsi...@me...> - 2000-05-23 15:24:29
Thanks!

Robert Simmons
Systems Administrator
http://www.wlcg.com/

On Tue, 23 May 2000, Bernhard Erdmann wrote:
> Rob Simmons wrote:
> > I just tried to run dump into /dev/null with the same args that amanda
> > gives dump (except for the "u") and I get the following error:
> > $ dump -0sf 1048576 /dev/null /dev/hdb1
> > dump: illegal tape size -- f
>
> Don't use "-" before your parameter list:
>
> # /sbin/dump -0sf 1048576 /dev/null /dev/hdc5
> /sbin/dump: illegal tape size -- f
>
> # /sbin/dump 0sf 1048576 /dev/null /dev/hdc5
> DUMP: Date of this level 0 dump: Tue May 23 07:45:07 2000
> DUMP: Date of last level 0 dump: the epoch
> DUMP: Dumping /dev/hdc5 (/) to /dev/null
> DUMP: Label: none
> DUMP: mapping (Pass I) [regular files]
> DUMP: mapping (Pass II) [directories]
> DUMP: estimated 63332 tape blocks on 0.00 tape(s).
> DUMP: Volume 1 started at: Tue May 23 07:45:07 2000
> DUMP: dumping (Pass III) [directories]
> DUMP: dumping (Pass IV) [regular files]
> DUMP: Closing /dev/null
> DUMP: Volume 1 completed at: Tue May 23 07:45:16 2000
> DUMP: Volume 1 took 0:00:09
> DUMP: Volume 1 transfer rate: 7088 KB/s
> DUMP: 63792 tape blocks (62.30MB) on 1 volume(s)
> DUMP: finished in 9 seconds, throughput 7088 KBytes/sec
> DUMP: Date of this level 0 dump: Tue May 23 07:45:07 2000
> DUMP: Date this dump completed: Tue May 23 07:45:16 2000
> DUMP: Average transfer rate: 7088 KB/s
> DUMP: DUMP IS DONE
>
> Look carefully at your sendbackup.debug file:
>
> sendbackup: spawning "/sbin/dump" in pipeline
> sendbackup: argument list: "dump" "0usf" "1048576" "-" "/dev/hdb1"
From: Bernhard E. <be...@be...> - 2000-05-23 05:50:10
Rob Simmons wrote:
> I just tried to run dump into /dev/null with the same args that amanda
> gives dump (except for the "u") and I get the following error:
> $ dump -0sf 1048576 /dev/null /dev/hdb1
> dump: illegal tape size -- f

Don't use "-" before your parameter list:

# /sbin/dump -0sf 1048576 /dev/null /dev/hdc5
/sbin/dump: illegal tape size -- f

# /sbin/dump 0sf 1048576 /dev/null /dev/hdc5
DUMP: Date of this level 0 dump: Tue May 23 07:45:07 2000
DUMP: Date of last level 0 dump: the epoch
DUMP: Dumping /dev/hdc5 (/) to /dev/null
DUMP: Label: none
DUMP: mapping (Pass I) [regular files]
DUMP: mapping (Pass II) [directories]
DUMP: estimated 63332 tape blocks on 0.00 tape(s).
DUMP: Volume 1 started at: Tue May 23 07:45:07 2000
DUMP: dumping (Pass III) [directories]
DUMP: dumping (Pass IV) [regular files]
DUMP: Closing /dev/null
DUMP: Volume 1 completed at: Tue May 23 07:45:16 2000
DUMP: Volume 1 took 0:00:09
DUMP: Volume 1 transfer rate: 7088 KB/s
DUMP: 63792 tape blocks (62.30MB) on 1 volume(s)
DUMP: finished in 9 seconds, throughput 7088 KBytes/sec
DUMP: Date of this level 0 dump: Tue May 23 07:45:07 2000
DUMP: Date this dump completed: Tue May 23 07:45:16 2000
DUMP: Average transfer rate: 7088 KB/s
DUMP: DUMP IS DONE

Look carefully at your sendbackup.debug file:

sendbackup: spawning "/sbin/dump" in pipeline
sendbackup: argument list: "dump" "0usf" "1048576" "-" "/dev/hdb1"
From: Rob S. <rsi...@me...> - 2000-05-23 01:07:23
Whatever value I send to dump in the -s arg returns "illegal tape size":

$ dump -0sf <any number here> /dev/null /dev/hda1
dump: illegal tape size -- f

Is there a ./configure option that I missed, or is this a bug in dump?

Robert Simmons
Systems Administrator
http://www.wlcg.com/
From: Stelian P. <po...@cy...> - 2000-05-13 14:18:02
On Sat, 13 May 2000, Tim Thornley wrote:
> Hi,
>
> I am using a rather old version of DUMP on RedHat 4.2 and for some reason
> I have started getting the following error on one file system.
>
> Is this a bug that has been fixed in newer versions, or am I missing the
> meaning of "master/slave protocol botched"? If so, can you please explain
> it to me.

This reminds me of a very old bug, which was corrected a while ago. Try the
latest version, it should work (and MANY bugs were fixed, many features
added since dump 0.3 - which I believe your version is).

Stelian.
--
   /\
  /  \     Stelian Pop
 / DS \    Email: po...@cy...
 \____/
From: Tim T. <tho...@we...> - 2000-05-13 04:07:12
Hi,

I am using a rather old version of DUMP on RedHat 4.2 and for some reason I
have started getting the following error on one file system.

DUMP: Date of this level 1 dump: Sat May 13 05:53:21 2000
DUMP: Date of last level 0 dump: Wed Apr 5 06:06:13 2000
DUMP: Dumping /dev/hdb1 (/home) to /backup/sehm.1.1305.00
DUMP: mapping (Pass I) [regular files]
DUMP: mapping (Pass II) [directories]
DUMP: estimated 1199274 tape blocks on 1.67 tape(s).
DUMP: dumping (Pass III) [directories]
DUMP: master/slave protocol botched.
DUMP: The ENTIRE dump is aborted.

Is this a bug that has been fixed in newer versions, or am I missing the
meaning of "master/slave protocol botched"? If so, can you please explain
it to me.

Thanks
Tim
From: Eros A. <er...@la...> - 2000-05-08 15:39:57
I am seeking suggestions, clues or comments for a dump schedule that should
be a compromise between workload and safety. At present I am using a 28-day
alternating differential (taken from the manual examples of Digital UNIX):

0 5 4 7 6 9 8
1 5 4 7 6 9 8
2 5 4 7 6 9 8
3 5 4 7 6 9 8

with the dump at level 0 done on tape (manually) and the incrementals on
disk through cron scripts.

--
Eros Albertazzi
CNR-LAMEL, Via P.Gobetti 101, 40129 Bologna, Italy
Tel: (+39)-051-639 9179  Fax: (+39)-051-639 9216
E-mail: er...@la...
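One way to drive such a schedule from cron is a small script that looks up tonight's level in the 28-entry table. A sketch using exactly Eros's sequence; anchoring the cycle to the day of the year is an arbitrary assumption, and the function name is hypothetical:

```shell
#!/bin/sh
# Pick tonight's dump level from the 28-day alternating
# differential table quoted above, indexed by day of cycle.
LEVELS="0 5 4 7 6 9 8 1 5 4 7 6 9 8 2 5 4 7 6 9 8 3 5 4 7 6 9 8"

level_for_day() {
    day=$1                       # 1-based day within the 28-day cycle
    set -- $LEVELS               # positional params become the table
    shift $((day - 1))
    echo "$1"
}

# Example: derive the cycle day from the day of the year
# (assumes the cycle is anchored to Jan 1 -- an arbitrary choice).
doy=$(expr "$(date +%j)" + 0)    # expr strips the leading zeros
day=$(( (doy - 1) % 28 + 1 ))
echo "cycle day $day -> dump level $(level_for_day "$day")"
```

The cron entry then only needs to call this script and pass the computed level to dump, with the level-0 tape dumps still done by hand as Eros describes.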
From: Bernhard E. <be...@be...> - 2000-04-18 06:53:19
TJ McNeely wrote:
> I am sorry if this has been asked and answered already, I am new to the
> list.
>
> [tjm@ted tjm]$ df
> Filesystem  1k-blocks     Used  Available Use% Mounted on
> /dev/hda2      101517    41031      55245  43% /
> /dev/hdc1    16682168  3778868   12055876  24% /backup
> /dev/hda6      300603    42113     242969  15% /home
> /dev/hda5     1511968   844516     590644  59% /usr
> /dev/hda7     1051632    12156     986056   1% /var
>
> Here is the DUMP command we attempted to use:
> dump -0ua -b 32 -f /dev/ht0 /
>
> We got a nice looking directory structure, but no files :)

Why do you use a blocksize of 32 KB? Is it necessary for your tape drive?
How did you verify your "directory structure, but no files"? Does
/etc/dumpdates exist? Did you actually restore / from tape? Did you issue
"mkdir -p /tmp/restored; cd /tmp/restored; restore -rf /dev/ht0; du ."?

> I want to back the entire system "/" up to a tape... it worked fine on my
> other server cause it was all one partition, this one only backed up
> /dev/hda2 .. is there a slice 2 like in Solaris that gets the WHOLE
> DRIVE.. actually I need to get the backup partition too. Any help would
> be appreciated.

You didn't fully understand the concept of dump yet. Dump works at inode
level on a partition. Each partition has to be dumped separately. You'd
like to back up four partitions? So go and dump four partitions:

export TAPE=/dev/(n?)ht0   # the non-rewinding tape device, check for it!
dump 0ua /
dump 0ua /home
dump 0ua /var
dump 0ua /usr

> I would be willing to get all of the hda drive backed up to the hdc drive
> (if possible)... then just do all of HDC --> tape..

dump 0uaf /backup/root-20000418.dump /
dump 0uaf /backup/home-20000418.dump /home
...

If you have a 2 GB file limit on your system, you have to use the -M option
(see "man dump"). Another possibility is to use gzip on the fly:

dump 0uaf - /home | gzip -c > /backup/home-20000418.dump.gz

You can write the dump images with dd to your tape drive:

cd /backup
for i in *.dump
do
    dd if=$i of=/dev/(n?)ht0   # use the non-rewinding drive!
done

Compressed dump images can also be restored from tape:

dd if=$TAPE | gzip -dc | restore -rf -

(Maybe you have to specify of=- to dd; it depends on your OS and dd's
version.) How do you forward and rewind your tape drive? Can you use mt?
From the device /dev/ht0 I assume you have an IDE tape.

> PS: also, and I feel stupid asking this, but how do I make a cron run
> only on SUNDAY night (well Monday morning).. right now our server (bill)
> does a remote backup to a file on ted every morning at 2:30.. I really
> only need it once a week. Thanks!

0 1 * * 1 /usr/local/bin/backup-me-now

will run your script every Monday morning at 1:00 am. "man 5 crontab" says:

The time and date fields are:
field          allowed values
-----          --------------
minute         0-59
hour           0-23
day of month   1-31
month          1-12 (or names, see below)
day of week    0-7 (0 or 7 is Sun, or use names)
From: Kenneth P. <sh...@we...> - 2000-04-18 00:21:49
I back up 3 partitions to a tape, rewind it, and then verify the 3
partitions with "restore C". I'd like to see a success report at the end of
each verify, if all files compared equal, so I know that restore definitely
didn't exit silently with an error. (Peace of mind issue.) For two of the
partitions (/ and /var), I expect to get a few miscompares, but the /home
partition is normally quiet at night when the backup runs, and I'd like to
know by a positive message that the compare completed successfully.

Also, when I perform the verify pass, I have to issue 5 restore operations
to verify 3 partitions. I have to issue a dummy "restore Cf /dev/nst0"
between the real ones, which responds immediately with "restore: tape read
error: Success". What's up with that? The system is Red Hat 6.1 with a
2.2.14 kernel from the 6.2 distribution. (It's done this since 5.2, though.)
The drive is an HP SureStore DDS3 DAT (12/24GB). The SCSI controller is an
Adaptec AIC7xxx controller.

Ken
mailto:sh...@we...
http://www.sewingwitch.com/ken/
http://www.harrybrowne2000.org/
From: TJ M. <tj...@di...> - 2000-04-17 02:40:12
I am sorry if this has been asked and answered already, I am new to the
list.

[tjm@ted tjm]$ df
Filesystem  1k-blocks     Used  Available Use% Mounted on
/dev/hda2      101517    41031      55245  43% /
/dev/hdc1    16682168  3778868   12055876  24% /backup
/dev/hda6      300603    42113     242969  15% /home
/dev/hda5     1511968   844516     590644  59% /usr
/dev/hda7     1051632    12156     986056   1% /var

Here is the DUMP command we attempted to use:

dump -0ua -b 32 -f /dev/ht0 /

We got a nice looking directory structure, but no files :)

I want to back the entire system "/" up to a tape... it worked fine on my
other server cause it was all one partition; this one only backed up
/dev/hda2. Is there a slice 2 like in Solaris that gets the WHOLE DRIVE?
Actually I need to get the backup partition too. Any help would be
appreciated.

I would be willing to get all of the hda drive backed up to the hdc drive
(if possible)... then just do all of HDC --> tape..

Thanks,
Knop Head

PS: also, and I feel stupid asking this, but how do I make a cron run only
on SUNDAY night (well Monday morning).. right now our server (bill) does a
remote backup to a file on ted every morning at 2:30.. I really only need
it once a week. Thanks!
From: Knop <kn...@di...> - 2000-04-14 22:12:24
This is only for dumping to a file. Tapes are fine with large files... at
least I have not come across any issues (yet). Here is my "backup" script.
It backs up the entire / partition to a server named ted (which in turn
backs itself up to tape):

root@bill:~:# cat bin/backup
#!/bin/bash
printf "\n\n --- START FULL BACKUP --- \n\n\n"
/sbin/dump -0uM -B 2000000 -f ted:/backup/bill.dump /dev/hda1
printf "\n\n --- END --- \n\n"
root@bill:~:#

That basically limits dump to making 2 GB files (well, a little less, to
prevent issues)... I have, however, had issues with dump actually creating
the remote file... it is fine if I touch the file on ted ahead of time.
From: <cco...@sw...> - 2000-04-14 13:59:47
On Thu, Apr 13, 2000 at 01:37:27PM -0500, chuck wrote:
> Bernhard Erdmann wrote:
> > On Thu, Apr 13, 2000 at 01:21:12PM -0500, chuck wrote:
> > > anyone know how to dump/restore multiple partitions onto one tape??
> > > - Chuck
> >
> > No experience yet, but I'd simply use several dumps in serial order
> > onto a nonrewinding tape drive.
>
> yea, but if you do that, how do you do an interactive restore
> (restore -i)??

You'd just use the mt commands (or their equivalents) between restore
sessions:

mt rewind
restore -i
..
quit
mt fsf 1
restore -i
..
quit

etc...

best,
c
From: Bernhard E. <be...@be...> - 2000-04-14 07:01:57
On Thu, Apr 13, 2000 at 06:26:45PM -0600, TJ McNeely wrote:
> I for some reason cannot dump a file bigger than 2 GB.. is this a DUMP
> limitation or an E2FS limitation? or am I crazy... wait don't answer
> that :)

Your e2fs on a 32-bit machine is limited to 2 GB filesize. Look at option
-M in "man dump" to bypass it. (dump 0.4b16)
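The -M option works around the 2 GB limit by writing numbered volume files instead of one big image. The size-splitting idea can be illustrated in miniature with split — a toy sketch only; real -M volumes are read back with "restore -M", not cat, and the tiny sizes here stand in for the 2 GB limit:

```shell
#!/bin/sh
# Miniature illustration of the multi-volume idea behind dump -M:
# a stream bigger than the per-file limit is stored as numbered
# pieces, then reassembled. Real -M volumes are consumed by
# "restore -M"; cat is used here only to show the bytes survive.
set -e
dd if=/dev/zero bs=1024 count=5 2>/dev/null > /tmp/stream   # 5 KB "dump"
split -b 2048 /tmp/stream /tmp/vol.                         # 2 KB "volumes"
cat /tmp/vol.?? > /tmp/reassembled
cmp /tmp/stream /tmp/reassembled && echo "volumes reassemble OK"
```

With real dump, the equivalent is "dump -0uM -B <blocks> -f prefix /dev/hdX", which produces prefix001, prefix002, and so on below the per-file limit.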
From: TJ M. <tj...@di...> - 2000-04-14 00:39:52
I for some reason cannot dump a file bigger than 2 GB.. is this a DUMP
limitation or an E2FS limitation? Or am I crazy... wait, don't answer
that :)

Knop Head
From: chuck <ch...@on...> - 2000-04-13 18:44:13
Bernhard Erdmann wrote:
> On Thu, Apr 13, 2000 at 01:21:12PM -0500, chuck wrote:
> > anyone know how to dump/restore multiple partitions onto one tape??
> > - Chuck
>
> No experience yet, but I'd simply use several dumps in serial order
> onto a nonrewinding tape drive.

Yea, but if you do that, how do you do an interactive restore
(restore -i)?
From: chuck <ch...@on...> - 2000-04-13 18:26:52
Anyone know how to dump/restore multiple partitions onto one tape? - Chuck
From: Stelian P. <po...@cy...> - 2000-04-06 18:48:34
On Wed, 5 Apr 2000, Greg Thompson wrote:
> what are reasonable values for blocksize, density, and feet for one of
> these? i have a legacy dumping script that uses 126, 54000, and 13000,
> respectively, but none of those numbers make sense to me. are density and
> feet only used by dump to calculate how much data it can shove on a tape?
> should i just use "-a" rather than try to figure them out?

They are used only to calculate the amount of data a tape will manage. So
you can safely use -a for that (or -B).

> blocksize: what value should i be using? does it matter? will i get
> better throughput to the tapes if i pick some magic blocksize that's just
> right for this drive?

Exactly. The best thing to do is to experiment with different values,
setting your tape drive to variable blocksize and fixed blocksize (if it
accepts those settings), and see for yourself... Generally, a bigger
blocksize gives better performance. But if a tape block is corrupt, you
will lose more data...

> density: i'm using mt to set the density code for the drive to 21 (0x15),
> but i can't find anyplace that'll tell me what that corresponds to in
> bpi.
>
> length: even though exabyte recommends against it, i'm using 160m XL
> tapes. if m stands for meters, then that works out to just shy of 525
> feet, which is nowhere near the 13000 that someone put in this old script
> i'm using. what's up with that?

Leave the length/density stuff alone. The only interesting settings are -a
(or -B if -a is buggy with your tape drive) and -b. The length/density
options were used for old tape drives (and cartridges), and it's a complete
mess. If you are really interested, you can look at the source code to see
how this stuff is calculated. I also found an interesting page on different
tape drive settings somewhere around www.amanda.org...

Hope this made some things clearer for you (although I'm not convinced :) ).

Stelian.
--
   /\
  /  \     Stelian Pop
 / DS \    Email: po...@cy...
 \____/