From: Dragan K. <dk...@ly...> - 2007-11-12 01:10:29

Dump isn't the fastest backup software, but if you get such ridiculously
low transfer rates, there must be something wrong with your hardware. I
had a similar problem: "tar" managed quite respectable transfer rates
while "dump" reported rates as ridiculous as the ones you quote.

Why don't you try something else in the way of e2fs? Try an "fsck".
Mine needed close to 6 hours to complete on a 290 GB 15-disk RAID-6.

After I replaced the controller it was back to minutes, and "dump" ran
at about 50-60 MB/s.

Date: Wed, 07 Nov 2007 13:30:12 -0800
From: Todd and Margo Chester <Tod...@ve...>
Subject: Re: [Dump-users] slowwwwwww
To: James Roth <jam...@gm...>
Cc: Stelian Pop <st...@po...>, dum...@li...
Message-ID: <473...@ve...>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

[...]
From: Todd a. M. C. <Tod...@ve...> - 2007-11-07 21:30:45

> On Nov 6, 2007 7:59 AM, Stelian Pop <st...@po...> wrote:
> [...]

James Roth wrote:
> Did you use hdparm to check / enable DMA on the hard drive? What are
> the drives and how are they configured?

Hi Stelian and James,

I read through the man page for hdparm and could not figure out how to
read my DMA settings with hdparm. :'( How would I test this?

I take it "bonnie++" will mean nothing to any of us until I have data
from a CentOS 5 and a CentOS 4.5 machine to compare with. (I will be at
the 4.5 customer's site next Tuesday and will test his machine for a
comparison.)

Also, the problem occurs when backing up to tape as well as to hard
drive. Of the four computers below, all use RAID controllers as the
source of the data to be backed up.

And it is not just dump that is slow; everything is slow. dump is just
the easiest to get numbers from.

Basically, I have three identical computers in play: two with the
problem and one without (it is running CentOS 4.5). This is extensively
documented, including fresh dmesg's, at:

http://bugs.centos.org/view.php?id=2382

Also, there is another party having the same problem with a different
processor, motherboard, chipset, RAID controller and tape drive. He is
documented at:

http://www.centos.org/modules/newbb/viewtopic.php?topic_id=10659&start=0#forumpost34209

To summarize my three computers:

My CentOS 5: uname -r 2.6.18-8.1.14.el5

My (and my customer's) motherboards (3) are the Supermicro X7DAL-E:
http://www.supermicro.com/products/motherboard/Xeon1333/5000X/X7DAL-E.cfm

The LSI MegaRAID cards are the SATA-150-4 and SATA 300-4XLP:
http://www.lsi.com/storage_home/products_home/internal_raid/megaraid_sata/megaraid_sata_1504/index.html
http://www.lsi.com/storage_home/products_home/internal_raid/megaraid_sata/megaraid_sata_300_4xlp/index.html

The tape drive is an Exabyte VXA-3:
http://exabyte.com/products/products/get_products.cfm?prod_id=641
run from an Adaptec 19160 SCSI-U160 LVDS controller.

The backup hard drive is an external SATA drive (eSATA).

Computer A: VXA-3, CentOS 5, LSI SATA 300-4XLP

DUMP: 17815320 blocks (17397.77MB) on 1 volume(s)
DUMP: finished in 6473 seconds, throughput 2,752 kBytes/sec

DUMP: 5038380 blocks (4920.29MB) on 1 volume(s)
DUMP: finished in 1872 seconds, throughput 2,691 kBytes/sec
Note: 3 times slower than Computer B

Computer B: VXA-3, CentOS 4.5, LSI SATA-150-4

DUMP: 35933440 blocks (35091.25MB) on 1 volume(s)
DUMP: finished in 4396 seconds, throughput 8,174 kBytes/sec

DUMP: 27096640 blocks (26461.56MB) on 1 volume(s)
DUMP: finished in 3532 seconds, throughput 7,671 kBytes/sec

Computer C: eSATA, CentOS 4.4 & 5, LSI SATA-150-4

CentOS 4.4: 1:05 hours, approx 52 GB backup file, 13,333 kBytes/sec
CentOS 5.0: 3:16 hours, approx 43 GB backup file, 3,656 kBytes/sec
Note: 3.6 times slower

Computer C: misc speed tests

cp winxp.hdd /dev/null (7.7 GB), HDD=LSI: CentOS 5: 1:44 min, CentOS 4.4: 1:43 min
cp estat.hdd /dev/null (7.7 GB), HDD=eSATA: CentOS 5: 3:08 min, CentOS 4.4: 3:07 min
cp winxp.hdd eraseme (7.7 GB), HDD=LSI: CentOS 5: 4:50 min, CentOS 4.4: 4:48 min
cp estat.hdd eraseme (7.7 GB), HDD=eSATA: CentOS 5: 7:22 min, CentOS 4.4: 6:54 min

Parallels startup of winxp, HDD=LSI: 44 sec;
startup of Acrobat 8 Pro: 1st time 5 sec, 2nd time 2 sec

Parallels startup of eSATA guest, HDD=eSATA: 38 sec;
startup of Acrobat 8 Pro: 1st time 7 sec, 2nd time 1 sec

The [fourth] computer that the other reporter is using (cut and paste
from the CentOS posting):

MB: Tyan S2892
2x 3ware 9508 RAID controllers, 16x 300 GB SATA drives
1x AIC7902W onboard controller for connecting external SCSI devices
2x AXUS Brownie BR1600U3P SCSI Ultra 160 enclosures with 16x 250 GB IDE drives in each
1x HP StorageWorks Ultrium 460 LTO-2 tape drive
1x Quantum LTO-2 tape drive
1x HP StorageWorks Ultrium 920 LTO-3 tape drive

For backup we use tapes in these tape drives.

As you can see, this configuration uses RAID arrays, but not all of
them are on RAID cards; some of the RAIDs are in the RAID storage
enclosures.

I cannot give precise numbers on the slowdown yet, but here is some
data for analysis. We write different information to tape in a few
sessions; the session count varies, but the logs show the start time,
session count, and total data written:

RHEL3:
Start - End : Size : Sessions
2007/07/23-13:07:01 - 2007/07/23-16:40:30 : 115.737 GB : 5
2007/07/23-13:07:36 - 2007/07/23-16:52:54 : 163.738 GB : 8
2007/07/25-09:50:42 - 2007/07/25-12:26:21 : 190.159 GB : 8
2007/07/27-09:20:40 - 2007/07/27-10:44:21 : 115.603 GB : 7
2007/07/30-09:49:18 - 2007/07/30-12:10:03 : 182.304 GB : 10

CentOS 5:
2007/10/05-16:16:06 - 2007/10/06-16:32:30 : 98.534 GB : 4
2007/10/05-16:26:00 - 2007/10/08-16:37:49 : 100.837 GB : 2
2007/10/08-16:34:40 - 2007/10/09-17:12:23 : 176.883 GB : 9
2007/10/08-16:41:36 - 2007/10/09-05:17:06 : 100.837 GB : 4
2007/10/09-16:48:48 - 2007/10/09-22:22:14 : 73.3732 GB : 4

Many thanks,
-T
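The DUMP summary lines above are internally consistent: dump counts 1 KiB blocks, so the reported size and throughput can be re-derived from the raw block and second counts. A quick check of Computer A's first run:

```shell
# Re-derive Computer A's reported size and throughput from the raw
# numbers (dump counts 1 KiB blocks): 17815320 blocks over 6473 seconds.
blocks=17815320
seconds=6473
awk -v b="$blocks" -v s="$seconds" \
    'BEGIN { printf "%.2f MB, %.0f kBytes/sec\n", b / 1024, b / s }'
```

This reproduces the logged 17397.77 MB and 2,752 kBytes/sec, so the slowdown is real throughput, not a reporting quirk.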
From: Stelian P. <st...@po...> - 2007-11-06 12:59:13

Le dimanche 04 novembre 2007 à 15:11 -0800, Todd and Margo Chester a
écrit :
> 2.6.18-8.1.8.el5
>
> Hi All,

Hi,

> After I wiped my hard drives clean of CentOS 4.4
> (RHEL 4.4 clone) and installed CentOS 5 (RHEL 5),
> I noticed that everything was slower. Especially,
> dump, which was 3.5 times slower.
>
> [...]
>
> Has anyone else seen this? If so, were you able to fix it?

I guess you'll have to do some cross-tests yourself... The problem
could be related to the filesystem read, the output write, or dump
itself.

Try dumping to /dev/null, and compare the speeds on both OSes.

Try using the same dump version on both OSes to rule out dump problems.

If the two tests above do not give any indication, it is probably a
kernel problem (either in the filesystem driver or the disk driver).
Maybe running some filesystem performance tests (bonnie etc.) will show
the problem.

Stelian.
--
Stelian Pop <st...@po...>
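Stelian's /dev/null cross-test can be spelled out concretely. This sketch only prints the commands to run on each OS (the filesystem list is illustrative), since the timing has to happen on the real machines:

```shell
# Print the cross-test commands: dumping to /dev/null isolates the
# filesystem-read side from the tape/file writer.
for fs in / /usr /var; do
  echo "time dump -0 -a -f /dev/null $fs"
done
```

If the /dev/null rates differ between the two OSes the same way the tape rates do, the regression is on the read side (filesystem or disk driver), not in the writer.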
From: Todd a. M. C. <Tod...@ve...> - 2007-11-04 22:12:07

2.6.18-8.1.8.el5

Hi All,

After I wiped my hard drives clean of CentOS 4.4 (RHEL 4.4 clone) and
installed CentOS 5 (RHEL 5), I noticed that everything was slower.
Especially dump, which was 3.5 times slower.

So, I did some tests from my install CD/DVDs in "rescue mode" and the
System Rescue CD from http://www.sysresccd.org. I used the same backup
script (dump) that I use in normal boot mode. I stopped the backup
after dump gave the first estimate of time:

"dump" CentOS 4.4 (kernel 2.6.9-42.EL) install CD in rescue mode:
transfer rate 8468 KB/s, estimated finish 2:09

"dump" CentOS 5 (kernel 2.6.18) install DVD in rescue mode:
transfer rate 4422 KB/s, estimated finish 4:10

"dump" System Rescue CD (kernel 2.6.22.9):
transfer rate 4370 KB/s, estimated finish 4:15

Has anyone else seen this? If so, were you able to fix it?

-T
From: Israel G. <iga...@gm...> - 2007-09-28 13:38:21

On 9/28/07, Stelian Pop <st...@po...> wrote:
> Le vendredi 28 septembre 2007 à 07:40 -0500, Israel Garcia a écrit :
>> Hi, I have a box running CentOS 4.5 and an external hdd (100 GB)
>> connected as /dev/sda1. I'm dumping all filesystems to dump files on
>> the external hdd by running this:
>>
>> dump -0 -a -f /mounted_exthdd/filesystem.dmp /
>> dump -0 -a -f /mounted_exthdd/filesystem.dmp /usr
>> dump -0 -a -f /mounted_exthdd/filesystem.dmp /var
>>
>> and so on.
>
> Are you aware that you're overwriting your backups at each invocation?

Hi Stelian, thanks for your quick reply. Yes, I know that; I'm really
dumping all filesystems from Monday to Saturday.

>> So, when I'm going to restore I run:
>>
>> restore -i -A /mounted_exthdd/filesystem.dmp
>>
>> to enter interactive mode, but when I add/restore files I get this
>> error:
>>
>> restore > extract
>
> -A is used to specify a special "table of contents" file. What you're
> probably searching for is:
>
> restore -i -f /mounted_exthdd/filesystem.dmp

For example, I'm trying to restore a file from the /boot filesystem:

[root@domain]# /usr/local/sbin/restore -i -f /hdd/Fri/boot.dump
/usr/local/sbin/restore >
/usr/local/sbin/restore > what
Dump date: Fri Sep 28 08:44:09 2007
Dumped from: the epoch
Level 0 dump of /boot on sentai-test.cimex.com.cu:/dev/hda1
Label: /boot1
/usr/local/sbin/restore > verbose
verbose mode on
/usr/local/sbin/restore > ls
.:
     2 ./                      17 initrd-2.6.9-55.EL.img
     2 ../                     11 lost+found/
    14 System.map-2.6.9-55.EL  12 message
    15 config-2.6.9-55.EL      13 message.ja
  2009 grub/                   16 vmlinuz-2.6.9-55.EL
/usr/local/sbin/restore > add vmlinuz-2.6.9-55.EL
/usr/local/sbin/restore > extract
Extract requested files
You have not read any volumes yet.
Unless you know which volume your file(s) are on you should start
with the last volume and work towards the first.
Specify next volume # (none if no more volumes): none
You have not read any volumes yet.
Unless you know which volume your file(s) are on you should start
with the last volume and work towards the first.
Specify next volume # (none if no more volumes): none
./vmlinuz-2.6.9-55.EL: (inode 16) not found on tape

I'm getting the same error...

Regards,
Israel Garcia
From: Israel G. <iga...@gm...> - 2007-09-28 12:40:25

Hi, I have a box running CentOS 4.5 and an external hdd (100 GB)
connected as /dev/sda1. I'm dumping all filesystems to dump files on
the external hdd by running this:

dump -0 -a -f /mounted_exthdd/filesystem.dmp /
dump -0 -a -f /mounted_exthdd/filesystem.dmp /usr
dump -0 -a -f /mounted_exthdd/filesystem.dmp /var

and so on.

So, when I'm going to restore, I run:

restore -i -A /mounted_exthdd/filesystem.dmp

to enter interactive mode, but when I add/restore files I get this
error:

restore > extract
Mount tape volume 1
Enter ``none'' if there are no more tapes
otherwise enter tape name (default: /dev/tape) none
You have not read any volumes yet.
Unless you know which volume your file(s) are on you should start
with the last volume and work towards the first.
Specify next volume # (none if no more volumes): none
./local/bin/backup2file.sh: (inode 1782407) not found on tape

So I cannot restore anything from these dump files... Can you help me?
Thanks in advance.

--
Regards;
Israel Garcia
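One issue visible in the commands above is that every invocation writes to the same filesystem.dmp, so each dump overwrites the previous one. A sketch (paths illustrative; this only prints the commands it would run) that gives each filesystem its own file:

```shell
# Build one dump file name per filesystem instead of reusing a single
# file; this just prints the commands rather than executing them.
for fs in / /usr /var; do
  name=$(echo "$fs" | sed 's|^/$|root|; s|^/||; s|/|_|g')
  echo "dump -0 -a -f /mounted_exthdd/${name}.dmp $fs"
done
```

Each archive is then read with plain `restore -i -f /mounted_exthdd/usr.dmp` (the -A flag names a separate table-of-contents file, not the archive). Note also that at restore's "Specify next volume #" prompt, answering 1 rather than none tells restore to actually read the volume before extracting.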
From: Kenneth P. <sh...@se...> - 2007-09-19 12:26:34

--On Wednesday, September 19, 2007 1:13 PM +0200 Helmut Jarausch
<jar...@ig...> wrote:

> I'm piping dump's output via (e.g.) ttcp over the network to my
> server.
>
> Now I'd like to put the "Quick File Access" file generated by the
> option -Q on the server as well.
> Has anybody tried that? Probably I have to do it via a named pipe.

I just back up to a Samba mount, a removable USB drive on a neighboring
Windows server. The drive is FAT32, so it can't handle files bigger
than 2 GB. As a result, I use the -M option to split the backup into
1 GB chunks.
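Kenneth's split-backup setup can be written as one command line; the path here is illustrative and the sketch only prints it. With -M, dump treats the -f argument as a prefix and writes numbered volumes, each capped by -B:

```shell
# 1 GB volumes: -B takes a size in 1 KiB blocks, so 1048576 blocks = 1 GiB.
echo "dump -0 -M -B 1048576 -f /mnt/usbdrive/backup /home"
```

This produces files like backup001, backup002, and so on, each safely under the FAT32 file-size ceiling.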
From: Helmut J. <jar...@ig...> - 2007-09-19 11:14:06

Hi,

I'm piping dump's output via (e.g.) ttcp over the network to my server.

Now I'd like to put the "Quick File Access" file generated by the
option -Q on the server as well.

Has anybody tried that? Probably I have to do it via a named pipe.

Many thanks for any hints,

Helmut Jarausch
Lehrstuhl fuer Numerische Mathematik
RWTH - Aachen University
D 52056 Aachen, Germany
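Helmut's named-pipe idea written out as commands (an untested sketch; the host name, port, and FIFO path are made up, and this only prints the commands): dump writes the QFA file into a FIFO whose contents are shipped to the server alongside the dump stream.

```shell
# Ship the -Q "Quick File Access" file through a FIFO while the dump
# stream itself goes over ttcp, as in the original setup.
echo "mkfifo /tmp/qfa.fifo"
echo "nc server 9001 </tmp/qfa.fifo &"
echo "dump -0 -Q /tmp/qfa.fifo -f - / | ttcp -t server"
```

Whether dump's QFA writer is happy with a non-seekable FIFO would need to be verified; if not, writing the QFA file locally and copying it afterwards is the simple fallback.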
From: Tony N. <ton...@ge...> - 2007-08-27 20:54:36

At 11:57 AM -0400 8/27/07, Ubaidul Khan wrote:
> [...]
>
> [sysop@M1 ~]$ ssh -T -v M2 "/sbin/dump -0 -f M1:/dev/nst0 /boot"
>
> The error message I am getting is:
>
> DUMP: rcmd: socket: Permission denied
>
> However, I can successfully run dumps if I log in to M2 over ssh to an
> interactive shell (bash) and run:
>
> [sysop@M2 ~]$ /sbin/dump -0 -f M1:/dev/nst0 /boot
>
> Any help/suggestion is greatly appreciated.

Uhh, the second one is a login shell? WAG.

--
____________________________________________________________________
TonyN.: <mailto:ton...@ge...>
        <http://www.georgeanelson.com/>
From: Ubaidul K. <ukh...@ho...> - 2007-08-27 15:57:37

I am attempting to back up a Red Hat Linux machine (Linux
2.6.18-8.1.1.el5xen x86_64) without a tape drive to another Red Hat
Linux machine (Linux 2.6.18-8.1.1.el5xen x86_64) with a tape drive. I
have some backup scripts that log in to remote machines over ssh and
call scripts that start dump over ssh. The machines are set up in the
following way:

1. operator - This group exists on all machines.
2. sysop - This is the user invoking dump on all machines; this user is
   also part of the following groups: operator, disk, wheel.
3. The sysop environment includes the variable RSH, which is set to
   "/usr/bin/ssh".
4. Public key authentication has been set up between the machines so
   sysop can log in from one machine to the other and back and forth. I
   verified this and it works (I can ssh to the remote machine, and
   from that machine I can ssh to the machine with the tape drive and
   run commands).

The backup system works in the following manner:

M1 ---sysop-ssh-login(invoke dump)----> M2
M2 ---dump-over-ssh---> M1:/dev/nst0

[sysop@M1 ~]$ ssh -T -v M2 "/sbin/dump -0 -f M1:/dev/nst0 /boot"

The error message I am getting is:

DUMP: rcmd: socket: Permission denied

However, I can successfully run dumps if I log in to M2 over ssh to an
interactive shell (bash) and run:

[sysop@M2 ~]$ /sbin/dump -0 -f M1:/dev/nst0 /boot

Any help/suggestion is greatly appreciated.
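A hedged guess at the failure above: "rcmd: socket: Permission denied" is what dump prints when it falls back to the rcmd(3) library call, which needs a reserved port (i.e. root). That fallback happens when RSH is not present in dump's environment, and `ssh M2 "command"` runs a non-interactive shell that typically does not source the login profile where RSH is exported. One way to test is to set RSH on the remote command line itself (printed here, not executed):

```shell
# Pass RSH inline so the non-interactive remote shell sees it.
echo 'ssh -T M2 "RSH=/usr/bin/ssh /sbin/dump -0 -f M1:/dev/nst0 /boot"'
```

That would also explain why the interactive login shell on M2 works: there the profile runs and RSH is set.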
From: <pm...@fr...> - 2007-08-09 21:46:02

On Thu, 9 Aug 2007, Michael Walma wrote:

> My questions are: will using the '-e' option to exclude an inode
> associated with a directory exclude all its sub-directories as well
> and, more generally; will my scheme work?

Yes.

Another method:

chattr +d /boot /var; dump -h0 ...; chattr -d /boot /var

Cheers,
Peter
--
http://pmrb.free.fr/contact/
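Peter's chattr one-liner, expanded (printed rather than executed; the dump file path is made up). The -h 0 makes dump honor the nodump attribute even at level 0; by default the flag is honored only at level 1 and above:

```shell
# Mark the subtrees nodump, dump the root honoring the flag, then unmark.
echo "chattr +d /boot /var"
echo "dump -0 -h 0 -f /backup/root.dump /"
echo "chattr -d /boot /var"
```

Compared to -e, this avoids looking up inode numbers by hand, at the cost of briefly changing attributes on the live filesystem.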
From: Eric J. <eje...@sw...> - 2007-08-09 17:51:07

This should work fine. I've used -e before to exclude the inodes of
Mozilla cache directories, and it excludes the whole subdirectory tree.

You could make a dummy directory with a few files and some
subdirectories with files, and do a quick test dump of that if you want
to test it quickly.

Eric
From: Michael W. <mi...@wa...> - 2007-08-09 17:33:06

Hello,

I've been using dump/restore to 'clone' debian boxes and have been very
pleased with the results so far. This time, I want to try to do
something different. I want to copy the appropriate sub-trees from a
monolithic partition that contains the whole file tree into separate
partitions for /boot, /var and, of course, /. I was planning to dump
/boot and /var separately, and then dump / but exclude /boot and /var.
I would accomplish this by 'stat'ing /boot and /var to get their inode
numbers, and using the '-e' option of 'dump' to exclude them.

My questions are: will using the '-e' option to exclude an inode
associated with a directory exclude all its sub-directories as well
and, more generally, will my scheme work?

I know I could just try it, but I'd prefer to save myself the effort of
a doomed-to-fail attempt.

Many thanks in advance,
Michael

----------------------------------------------------------------
This message was sent using IMP, the Internet Messaging Program.
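Michael's scheme, written out as commands. To keep the sketch runnable anywhere, it stats two throwaway directories and prints the dump command it would build; on the real system /boot and /var would be stat'ed instead, and -e takes the comma-separated inode numbers:

```shell
# Get the inode numbers with stat, then exclude them with dump -e.
mkdir -p /tmp/exclude-demo/boot /tmp/exclude-demo/var
boot_ino=$(stat -c %i /tmp/exclude-demo/boot)
var_ino=$(stat -c %i /tmp/exclude-demo/var)
echo "dump -0 -e ${boot_ino},${var_ino} -f /backup/root.dump /"
```

Because -e excludes the directory inode itself, dump never descends into it, which is why the whole subtree disappears from the archive.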
From: Stelian P. <st...@po...> - 2007-07-28 19:43:11

Le vendredi 27 juillet 2007 à 09:28 -0700, Peter Van a écrit :

> I'll post my results after swapping the tape drive in case there is
> interest.

Sure.

Stelian.
--
Stelian Pop <st...@po...>
From: Peter V. <pe...@wn...> - 2007-07-27 16:28:12

Stelian,

Thanks for the reply. In response to your suggestions:

> In your case, I see only two possibilities:
> 1) there is a hardware incompatibility between the two drives.
> 2) there is a software (driver) incompatibility between the two
> drives.
> In order to rule out 1), you need to physically swap the tape drives
> (or install RH9 on the new machine on some removable media).

I am going to install the new drive in the RH9 system and see if I can
read old tapes.

> For 2) I guess you should ask the maintainer of the SCSI tape driver
> (Kai Makisara) for help, either directly or on the scsi mailing list.

Kai Makisara replied to my query just today. His response to my
questions follows:

> I'll insert a tape into the FC6 PC that was created on the RH9 PC
> and issue this command:
>
>   "mt -f /dev/nst0 fsf 1"
>
> And get back an error "/dev/nst0: Input/output error"
>
> and in dmesg I get: "st0: Current: sense key: Medium Error
>                      Additional sense: Unrecovered read error"

This comes from the drive. It can't read the tape. This is not a
problem you can solve by software.

> So just a basic position of the tape fails. However
> "mt -f /dev/nst0 rewind" yields no error.

Rewind does not need to be able to read much of the tape.

> The versions of /bin/mt are different. RH9's is 0.7 and FC6 is 0.9b
> but from looking at the man pages for mt, st and the README file
> for mt v0.9b I don't see any significant modifications or default
> differences.

The mt versions don't matter.

> What's also very interesting is that if I create multiple dumps on the
> FC6 PC I can restore from the FC6 PC. Positioning the tape
> also works.
>
> I'd like to get to the bottom of this - can you offer any suggestions
> on how to proceed? Are there any "mt" commands that could be helpful?

It seems that the drive reading the tape can't read what the other
drive has written. The DDS drives are quite well compatible. So, I
would say that either the drive that has written the tape or the drive
reading the tape is broken. If it is an alignment problem in the drive,
it is probable that the drive can read tapes written by the same drive
but has difficulties with tapes written by other drives.

Kai

~~~~~~~~

I'll post my results after swapping the tape drive in case there is
interest.

Thanks to all,
Peter
From: Stelian P. <st...@po...> - 2007-07-27 14:05:06

Le jeudi 26 juillet 2007 à 10:41 +0200, Helmut Jarausch a écrit :

> Hi,

Hi,

> is dump going to support the new Linux ext4 fs format?

I'm afraid I haven't had time to look at ext4 yet. But dump does almost
all filesystem access using libext2fs, so if libext2fs will support
ext4, dump will follow.

Stelian.
--
Stelian Pop <st...@po...>
From: Stelian P. <st...@po...> - 2007-07-27 13:57:52
On Thursday, July 26, 2007 at 09:47 -0700, Peter Van wrote:
> Hi Eric,
>
> I do use a block size of 0. I wish that would fix it. I think
> my problem is with mt-st since I can't even
> position the tape with "mt -f /dev/nst0 fsf 1" without
> an error. I'm not sure the block size even affects mt-st
> but just to be on the safe side it is set to 0.

mt is irrelevant here: mt is just the userspace program which configures the SCSI tape driver.

In your case, I see only two possibilities:
1) there is a hardware incompatibility between the two drives.
2) there is a software (driver) incompatibility between the two drives.

In order to rule out 1), you need to physically swap the tape drives (or install RH9 on the new machine on some removable media).

For 2) I guess you should ask for help from the maintainer of the SCSI tape driver (Kai Makisara), either directly or on the scsi mailing list.

Stelian.
--
Stelian Pop <st...@po...>
From: Peter V. <pe...@wn...> - 2007-07-26 16:47:15
Hi Eric,

I do use a block size of 0. I wish that would fix it. I think my problem is with mt-st since I can't even position the tape with "mt -f /dev/nst0 fsf 1" without an error. I'm not sure the block size even affects mt-st but just to be on the safe side it is set to 0.

Thanks for the reply.

Peter

----- Original Message -----
From: Eric Jensen
To: Peter Van
Sent: Thursday, July 26, 2007 6:42 AM
Subject: Re: [Dump-users] Problems restoring on FC6

Hi Peter,

I apologize if I already suggested this earlier in the thread - I can't recall - could it be something as simple as tape block size? Try "mt -f /dev/nst0 setblk 0" and try the read? Just a guess - good luck.

Eric
From: Helmut J. <jar...@ig...> - 2007-07-26 08:42:19
Hi,

is dump going to support the new Linux ext4 fs format?

Many thanks for any info,

Helmut Jarausch
Lehrstuhl fuer Numerische Mathematik
RWTH - Aachen University
D 52056 Aachen, Germany
From: Peter V. <pe...@wn...> - 2007-07-25 23:00:57
Hello,

I would be grateful for any additional input regarding this problem. I have DDS4 tapes created on RH9 (dump 0.4b36) that I am unable to restore on a newly built PC running FC6 with a new DDS4 tape drive. My first thought was to assume an incompatibility between the dump versions (dump 0.4b36 versus 0.4b41 for FC6), but Stelian points out below that this is unlikely. He does mention differences in the scsi driver:

"However, one piece of the puzzle did change: the scsi tape driver. It could be as simple as having different default settings - like the read ahead for example - or it could behave differently."

So I'm now trying my third tape drive and I'm seeing a pattern. I'll insert a tape into the FC6 PC that was created on the RH9 PC and issue this command:

"mt -f /dev/nst0 fsf 1"

And get back an error "/dev/nst0: Input/output error", and in dmesg I get:

"st0: Current: sense key: Medium Error
 Additional sense: Unrecovered read error"

So just a basic positioning of the tape fails. However, "mt -f /dev/nst0 rewind" yields no error.

The versions of /bin/mt are different: RH9's is 0.7 and FC6's is 0.9b, but from looking at the man pages for mt, st and the README file for mt v0.9b I don't see any significant modifications or default differences.

What's also very interesting is that if I create multiple dumps on the FC6 PC I can restore from the FC6 PC. Positioning the tape also works.

I'd like to get to the bottom of this - does anyone have a suggestion on how to proceed?

Thanks,
Pete

----- Original Message -----
From: Stelian Pop
To: Peter Van
Cc: dum...@li...
Sent: Thursday, June 14, 2007 2:16 PM
Subject: Re: [Dump-users] Problems restoring on FC6

On Wed, Jun 13, 2007 at 10:01:44AM -0700, Peter Van wrote:
> Hi,
>
> I have many DDS4 dump tapes created on Redhat 9 (dump 0.4b36) and
> have no problem restoring to the same RH9 computer or even to a
> different computer running FC2.
>
> I have just built a new computer with FC6 (dump/restore version 0.4b41)
> and am having limited success restoring tapes to this computer
> that were created on my RH9 computer.
>
> Some tapes restore fine but most do not and fail with :
>
> "Tape read error while skipping over inode 17008
> continue? [yn] "
>
> and the following in dmesg : "st0: Current: sense key: Medium Error
> Additional sense: Sequential positioning error"
>
> Or I get a "Tape read error on first record" message.
>
> Are there any differences between the versions of dump/restore on
> RH9 and FC6 that could account for this ?

I don't think there are (but you could try compiling 0.4b36 on FC6 just to confirm).

However, one piece of the puzzle did change: the scsi tape driver. It could be as simple as having different default settings - like the read ahead for example - or it could behave differently. Or maybe the tape drive (the hardware) is just less tolerant of tape errors.

Maybe you should try swapping the tape drives just to see...

Stelian.
--
Stelian Pop <st...@po...>
From: Todd a. M. C. <Tod...@ve...> - 2007-07-20 18:12:15
CentOS 5 (Red Hat Enterprise Linux clone)
#uname -r
2.6.18-8.1.8.el5
#rpm -qa \*dump\*
dump-0.4b41-2.fc6

Hi All,

Has anyone come across this problem, and how did you fix it?

I am using an LSI Megaraid SATA 150-4 RAID card (/dev/sda). My motherboard is a Supermicro X7DAL-E. I am backing up to a removable SATA drive (/dev/sdb). I wiped my CentOS 4.4 and installed CentOS 5. No hardware changes.

Under 4.4 a full backup (dump) took 1:05 hours (52 GB backup file). Under 5.0, the same backup takes 3:16 (note: 43 GB, yes it is smaller). In general, the system is now very slow. Windows XP running under Parallels Virtual Machine is remarkably more sluggish.

I made the following speed tests in CentOS 5 and from my first CentOS 4.4 boot disk in "rescue" mode (see below). I cannot see a difference that would explain this. Anyone have any ideas?

-T

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
CentOS 5:

#grep -i bogomips /var/log/dmesg
Calibrating delay using timer specific routine.. 4001.91 BogoMIPS (lpj=2000959)
Calibrating delay using timer specific routine.. 3999.58 BogoMIPS (lpj=1999791)
Total of 2 processors activated (8001.50 BogoMIPS).

#/sbin/hdparm -t /dev/sda
/dev/sda:
 Timing buffered disk reads: 236 MB in 3.01 seconds = 78.53 MB/sec

#/sbin/hdparm -t /dev/sdb
/dev/sdb:
 Timing buffered disk reads: 182 MB in 3.01 seconds = 60.37 MB/sec

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
CentOS 4.4 (linux rescue 2.6.9-42.EL):

#cat /proc/cpuinfo
bogomips : 4002.92

#/sbin/hdparm -t /dev/sda
/dev/sda:
 Timing buffered disk reads: 216 MB in 3.01 seconds = 71.87 MB/sec

#/sbin/hdparm -t /dev/sdb
/dev/sdb:
 Timing buffered disk reads: 184 MB in 3.01 seconds = 61.18 MB/sec
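One way to cross-check the hdparm figures above is a dd read/write pass, which exercises the kernel block layer rather than hdparm's own timing path. A minimal sketch, using a scratch file in place of the real devices (on the actual system the read test would target /dev/sda or /dev/sdb; note that re-reading a just-written file is largely served from the page cache, so treat the read figure as an upper bound, not raw disk speed):

```shell
# dd-based throughput check. A scratch file stands in for the raw
# device here; on the real system the read would use /dev/sda.
FILE=/tmp/ddtest.bin
# Write 64 MiB, forcing data to disk before dd reports its rate.
dd if=/dev/zero of="$FILE" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
# Read it back (mostly from cache, so this is an upper bound).
dd if="$FILE" of=/dev/null bs=1M 2>&1 | tail -n 1
rm -f "$FILE"
```

Running the same pair of commands under CentOS 4.4 rescue mode and under CentOS 5 would show whether plain sequential I/O regressed, or whether the slowdown is specific to dump's access pattern.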
From: Tony N. <ton...@ge...> - 2007-06-28 18:55:59
At 1:37 PM -0400 6/28/07, Jeffrey Ross wrote:
>moving a partition from one disk to another, the partition is just shy of
>300 GB (291GB) I used the syntax of "dump -0f - /dev/sdb2 | restore -rf -"
>
>The system is copying from one SATA drive to another SATA drive (3.0GB/s

3.0Gb/s?

>transfer rate is enabled). The OS is Fedora 7.
>
>after a half hour dump is giving me estimate times of 20+ hours with a
>transfer rate 3930kB/s
>
>Is this about right? or should I run the dump/restore commands with
>additional options for better throughput?

It seems reasonable to me. The source drive is doing a lot of seeking, a seek or two for each file, though the destination drive gets to write more smoothly.

Have you tried dd'ing some data across, just for testing the speed of copying? If you feel no need to defragment the partition and the destination is as large as the source, you could do the copy with dd and adjust the volume size later as needed.

If dump is the problem, you might try cpio instead. Or tar, if you're so inclined. I think they'll all be about the same speed, however.
--
____________________________________________________________________
   TonyN.:'                       <mailto:ton...@ge...>
      '                              <http://www.georgeanelson.com/>
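The tar alternative Tony mentions can mirror the dump pipe almost exactly. A minimal sketch, with placeholder directories standing in for the mounted source and destination partitions:

```shell
# Disk-to-disk copy with a tar pipe, the filesystem-level analogue of
# "dump -0f - /dev/sdb2 | restore -rf -".
# /tmp/srcdir and /tmp/dstdir are placeholders for the real mount points.
mkdir -p /tmp/srcdir /tmp/dstdir
echo "hello" > /tmp/srcdir/file.txt
(cd /tmp/srcdir && tar cf - .) | (cd /tmp/dstdir && tar xf -)
cat /tmp/dstdir/file.txt   # prints "hello"
```

Unlike dump, tar reads through the mounted filesystem rather than the raw device, and when run as root with the `p` option it preserves permissions and ownership; both tools ultimately pay the same per-file seek cost on the source drive, which is why Tony expects similar speeds.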
From: Jeffrey R. <je...@bu...> - 2007-06-28 17:38:09
moving a partition from one disk to another, the partition is just shy of 300 GB (291GB). I used the syntax of "dump -0f - /dev/sdb2 | restore -rf -"

The system is copying from one SATA drive to another SATA drive (3.0GB/s transfer rate is enabled). The OS is Fedora 7.

after a half hour dump is giving me estimate times of 20+ hours with a transfer rate of 3930kB/s

Is this about right? Or should I run the dump/restore commands with additional options for better throughput?

Thanks!
Jeff
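For what it's worth, the two numbers in the question are self-consistent: 291 GB at a sustained 3930 kB/s comes out at roughly the 20+ hour figure dump reports. A quick check (assuming binary units, i.e. 1 GB = 1024 * 1024 kB, which is an assumption about how dump counts):

```shell
# ETA check: 291 GB at 3930 kB/s, binary units assumed.
awk 'BEGIN {
    gb = 291; rate_kb = 3930
    secs = gb * 1024 * 1024 / rate_kb   # total kB / (kB per second)
    printf "%.1f hours\n", secs / 3600  # prints "21.6 hours"
}'
```

So the estimate is not a dump anomaly; the bottleneck is the effective transfer rate itself, which Tony's reply attributes to per-file seeking on the source drive.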
From: Stelian P. <st...@po...> - 2007-06-14 21:17:04
On Wed, Jun 13, 2007 at 10:01:44AM -0700, Peter Van wrote:
> Hi,
>
> I have many DDS4 dump tapes created on Redhat 9 (dump 0.4b36) and
> have no problem restoring to the same RH9 computer or even to a
> different computer running FC2.
>
> I have just built a new computer with FC6 (dump/restore version 0.4b41)
> and am having limited success restoring tapes to this computer
> that were created on my RH9 computer.
>
> Some tapes restore fine but most do not and fail with :
>
> "Tape read error while skipping over inode 17008
> continue? [yn] "
>
> and the following in dmesg : "st0: Current: sense key: Medium Error
> Additional sense: Sequential positioning error"
>
> Or I get a "Tape read error on first record" message.
>
> Are there any differences between the versions of dump/restore on
> RH9 and FC6 that could account for this ?

I don't think there are (but you could try compiling 0.4b36 on FC6 just to confirm).

However, one piece of the puzzle did change: the scsi tape driver. It could be as simple as having different default settings - like the read ahead for example - or it could behave differently. Or maybe the tape drive (the hardware) is just less tolerant of tape errors.

Maybe you should try swapping the tape drives just to see...

Stelian.
--
Stelian Pop <st...@po...>
From: Tony N. <ton...@ge...> - 2007-06-14 01:59:34
At 10:01 AM -0700 6/13/07, Peter Van wrote:
>Hi,
>
>I have many DDS4 dump tapes created on Redhat 9 (dump 0.4b36) and
>have no problem restoring to the same RH9 computer or even to a
>different computer running FC2.
>
>I have just built a new computer with FC6 (dump/restore version 0.4b41)
>and am having limited success restoring tapes to this computer
>that were created on my RH9 computer.
>
>Some tapes restore fine but most do not and fail with :
>
>"Tape read error while skipping over inode 17008
>continue? [yn] "
>
>and the following in dmesg : "st0: Current: sense key: Medium Error
> Additional sense: Sequential positioning error"
>
>Or I get a "Tape read error on first record" message.
>
>Are there any differences between the versions of dump/restore on
>RH9 and FC6 that could account for this ?

I don't know, myself. Perhaps you could copy the entire tape to a file with, say, dd, to make sure that the tape can be read completely. Also, if that works, then you would be able to try restore more times without adding wear to the tape.

If it almost works, you might be able to try dd several times and assemble a complete copy that way (dd halts on errors). There is a program called dd_rescue that could help with this, but it might add too much wear for a tape.
--
____________________________________________________________________
   TonyN.:'                       <mailto:ton...@ge...>
      '                              <http://www.georgeanelson.com/>
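The copy-then-assemble approach Tony describes can also be approximated in a single pass with dd's error-handling flags. A sketch, exercised against a scratch file since it assumes no tape hardware (with a real tape the input would be the non-rewinding device, e.g. /dev/nst0, with bs matching the tape block size):

```shell
# Image a possibly-flaky source with dd, skipping unreadable blocks:
#   conv=noerror - keep going after a read error instead of halting
#   conv=sync    - pad short reads so later blocks keep their offsets
# A scratch file stands in for /dev/nst0 in this sketch.
SRC=/tmp/faketape.img
dd if=/dev/zero of="$SRC" bs=32k count=8 2>/dev/null
dd if="$SRC" of=/tmp/tapecopy.img bs=32k conv=noerror,sync 2>/dev/null
cmp -s "$SRC" /tmp/tapecopy.img && echo "images match"
```

Once a complete image exists on disk, restore can be pointed at the file (restore -if /tmp/tapecopy.img) without putting further wear on the tape, and failed regions show up as zero-filled blocks rather than aborted reads.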