From: Dan L. <da...@la...> - 2016-02-08 22:42:25
|
Hello, I am working with an LTO-4 tape library. It has two drives, but I plan to write to only one for backups.

I will back up to disk first, on another SD. Later, I will copy the jobs to the tape library via this new SD, which is on another server. The copy jobs will be spooled to local SSD before being written to tape.

re http://bacula-users.narkive.com/QRkTVfEz/typical-tape-write-performance

Now what I'm thinking of is streaming multiple concurrent jobs to a single drive. Sure, the downside on restore is interleaving of blocks... but I don't see any other downsides to going down this path. I have yet to run any copy jobs to the new library, but it may be ready this week.

Comments?

--
Dan Langille - BSDCan / PGCon
da...@la... |
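For readers following the thread: running several concurrent jobs into one tape drive is mostly a matter of the Maximum Concurrent Jobs directives on the Director side. A minimal sketch with illustrative resource names and values, not Dan's actual configuration:

```
# bacula-dir.conf -- a sketch only; names and values are hypothetical
Job {
  Name = "CopyDiskToTape"                # illustrative copy job
  Type = Copy
  Selection Type = PoolUncopiedJobs      # copy every job not yet copied
  Pool = "DiskFull"                      # source (disk) pool
  Maximum Concurrent Jobs = 4            # allow several copy jobs at once
}

Storage {
  Name = "TapeLibrary"
  Address = tape-sd.example.org          # placeholder address
  Device = "Drive-0"
  Media Type = LTO-4
  Maximum Concurrent Jobs = 4            # concurrency must be allowed here too
}
```

The destination pool's Next Pool/Storage wiring is omitted, and whether the jobs truly run simultaneously also depends on the SD-side Device settings.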
From: Heitor F. <he...@ba...> - 2016-02-08 23:03:44
|
> Hello,

Hello, Dan.

> I am working with an LTO-4 tape library. It has two drives but I plan to write
> to only one for backups.
> I will backup to disk first, on another SD. Later, I will copy the jobs to the
> tape library on this new SD which is on another server. The copy jobs will be
> spooled to local SSD before being written to tape.
> re http://bacula-users.narkive.com/QRkTVfEz/typical-tape-write-performance
> Now what I'm thinking of is streaming multiple concurrent jobs to a single
> drive.

Did you test whether it is possible? (I can test for you in a dummy environment if you want.) I don't remember if it is, or if the quote below only applies to copy jobs...

"If the Migration control job finds a number of JobIds to migrate (e.g. it is asked to migrate one or more Volumes), it will start one new migration backup job for each JobId found on the specified Volumes. Please note that Migration doesn't scale too well since Migrations are done on a Job by Job basis. This if you select a very large volume or a number of volumes for migration, you may have a large number of Jobs that start. Because each job must read the same Volume, they will run consecutively (not simultaneously)."
(http://www.bacula.org/5.2.x-manuals/en/main/main/Migration_Copy.html)

> Sure, downside on restore is interleaving of blocks....

I also remember reading somewhere that you don't need to worry about that when doing copy jobs, but I think it refers to the lack of multiplexing quoted before.

> I don't see any downsides to going down this path. I have yet to run any copy
> jobs to the new library, but it may be ready this week.

Even without multiplexing, I think you will achieve high throughput, since you are copying sequential data. In this case, if you really want to multiplex the copy, I think you would have to create two different source and destination pools.

> Comments?
> --
> Dan Langille - BSDCan / PGCon
> da...@la...
Regards,
--
=======================================================================
Heitor Medrado de Faria - LPIC-III | ITIL-F | Bacula Systems Certified Administrator II
Upcoming live telepresence classes - February 15: http://www.bacula.com.br/agenda/
I provide in-company Bacula training and implementation: http://www.bacula.com.br/in-company/
Or watch my online video lessons: http://www.bacula.com.br/treinamento-bacula-ed/
61 8268-4220
Site: www.bacula.com.br | Facebook: heitor.faria
======================================================================== |
From: Heitor F. <he...@ba...> - 2016-02-08 23:09:56
|
>> Now what I'm thinking of is streaming multiple concurrent jobs to a single
>> drive.
> Did you test if it is possible? (I can test for you in a dummy environment if
> you want.) [...]

Never mind, Dan. Since version 7.0, I think multiplexing copy jobs is possible, and you will be just fine:

"Migration/Copy/VirtualFull Performance Enhancements

The Bacula Storage daemon now permits multiple jobs to simultaneously read the same disk Volume, which gives substantial performance enhancements when running Migration, Copy, or VirtualFull jobs that read disk Volumes. Our testing shows that when running multiple simultaneous jobs, the jobs can finish up to ten times faster with this version of Bacula. This is built-in to the Storage daemon, so it happens automatically and transparently."
Regards,
--
Heitor |
From: Heitor F. <he...@ba...> - 2016-02-08 23:12:52
|
>>> I will backup to disk first, on another SD. Later, I will copy the jobs to the
>>> tape library on this new SD which is on another server. The copy jobs will be
>>> spooled to local SSD before being written to tape.

Sorry about this mess. If you are using disk spooling, you don't have to worry about data interleaving, unless your job spool limit is too low.

Regards,
--
Heitor |
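The "job spool limit" Heitor refers to lives in the tape Device resource on the Storage daemon: if the per-job spool cap is much smaller than a typical job, the job despools repeatedly and its blocks interleave with those of other concurrent jobs. A sketch with made-up paths and sizes:

```
# bacula-sd.conf -- illustrative values; adjust for your hardware
Device {
  Name = "Drive-0"
  Media Type = LTO-4
  Archive Device = /dev/nst0         # tape device node (OS-dependent)
  Spool Directory = /spool/bacula    # put this on the local SSD
  Maximum Spool Size = 400G          # total spool space shared by all jobs
  Maximum Job Spool Size = 100G      # per-job cap; keep it >= a typical job
}
```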
From: Ana E. M. A. <emi...@gm...> - 2016-02-09 08:44:37
|
Hello Heitor and Dan,

When a job is despooling (disk -> tape), its file daemon will wait. It will begin spooling again only if necessary, i.e., if the space the job may use in the spool area is less than the amount of data to be backed up for that client. The other file daemons will keep spooling to disk.

So IMHO, the larger the spool area you can give each job, the less interleaving you will have.

It also depends on whether your jobs have very different amounts of backup data (total backup size per job). To minimize data interleaving, I would choose the highest total backup size a job could have for the Maximum Spool Size, or the average total backup size per job if I did not have enough disk space for the highest value.

Either way, you will speed up your backups, since data travels from client to storage over the network more slowly than it transfers from disk to tape (supposing your spool area data is not traveling through the network).

Best regards,
Ana

On Tue, Feb 9, 2016 at 12:12 AM, Heitor Faria <he...@ba...> wrote:

> I will backup to disk first, on another SD. Later, I will copy the jobs
> to the tape library on this new SD
> which is on another server. The copy jobs will be spooled to local SSD
> before being written to tape.
>
> Sorry about this mess. If you are using disk spooling you don't have to
> concern about data interleaving, unless your job spool limit is too low.
>
> Regards.
> -- > ======================================================================= > Heitor Medrado de Faria - LPIC-III | ITIL-F | Bacula Systems Certified > Administrator II > Próximas aulas telepresencial ao-vivo - 15 de fevereiro: > http://www.bacula.com.br/agenda/ > Ministro treinamento e implementação in-company Bacula: > http://www.bacula.com.br/in-company/ > Ou assista minhas videoaulas on-line: > http://www.bacula.com.br/treinamento-bacula-ed/ > 61 <%2B55%2061%202021-8260>8268-4220 <%2B55%2061%208268-4220> > Site: www.bacula.com.br | Facebook: heitor.faria > <http://www.facebook.com/heitor.faria> > ======================================================================== > > > ------------------------------------------------------------------------------ > Site24x7 APM Insight: Get Deep Visibility into Application Performance > APM + Mobile APM + RUM: Monitor 3 App instances at just $35/Month > Monitor end-to-end web transactions and take corrective actions now > Troubleshoot faster and improve end-user experience. Signup Now! > http://pubads.g.doubleclick.net/gampad/clk?id=272487151&iu=/4140 > _______________________________________________ > Bacula-users mailing list > Bac...@li... > https://lists.sourceforge.net/lists/listinfo/bacula-users > > |
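Ana's sizing advice maps onto the Job-level spooling directives in the Director configuration, where Spool Size can override the device-wide per-job limit. The values below are examples only, not recommendations:

```
# bacula-dir.conf -- per-job spooling; name and size are hypothetical
Job {
  Name = "CopyBigJob"
  Type = Copy
  Spool Data = yes       # spool to disk before writing to tape
  Spool Size = 120G      # roughly this job's average full-backup size
}
```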
From: Ana E. M. A. <emi...@gm...> - 2016-02-09 09:31:25
|
If you have very slow network file transfers from your file daemons to your disk spool area, it would be better not to have too large a data spooling area, because despooling would be unnecessarily delayed while waiting for a job to fill the spool space dedicated to it.

Best regards,
Ana

On Tue, Feb 9, 2016 at 9:44 AM, Ana Emília M. Arruda <emi...@gm...> wrote:

> Hello Heitor and Dan,
>
> When a Job is despooling (disk->tape), the file daemon will wait. It will
> just begin spooling again (if necessary, i.e., amount of space that can be
> used by the job in the spool area is less then the amount of data that will
> be backed up for this client). The others file daemons will be spooling to
> disk.
>
> [...]
|
From: Dan L. <da...@la...> - 2016-02-11 21:46:08
|
> On Feb 9, 2016, at 3:44 AM, Ana Emília M. Arruda <emi...@gm...> wrote:
>
> Hello Heitor and Dan,
>
> When a Job is despooling (disk->tape), the file daemon will wait. [...]
>
> In any way, you will speed up your backups since the network delay for the
> data travels from client to the storage is greater than the transfer speeds
> from disk to tape (supposing your spool area data is not traveling through
> network).

In my case, this will be an SD-to-SD copy job. No FD involved.

I back up to disk first. Then I use copy jobs to move from the disk backup on server A to the tape backup on server B. Server B has 500 GB of SSD (in a mirror). |
From: Josh F. <jf...@pv...> - 2016-02-09 13:30:10
|
On 2/8/2016 5:42 PM, Dan Langille wrote:
> Hello,
>
> I am working with an LTO-4 tape library. It has two drives but I plan
> to write to only one for backups.
>
> I will backup to disk first, on another SD. Later, I will copy the
> jobs to the tape library on this new SD
> which is on another server. The copy jobs will be spooled to local
> SSD before being written to tape.
>
> re
> http://bacula-users.narkive.com/QRkTVfEz/typical-tape-write-performance
>
> Now what I'm thinking of is streaming multiple concurrent jobs to a
> single drive

Are the two servers on a 10G or better network? Unless the disk subsystem on the other SD is slow, it will likely stream close to the 1G max of 125 MB/s, since it will be essentially sequential reads. I'm not convinced that concurrency will gain anything.

> Sure, downside on restore is interleaving of blocks....
>
> I don't see any downsides to going down this path. I have yet to run
> any copy jobs to the new library,
> but it may be ready this week.
>
> Comments?
>
> --
> Dan Langille - BSDCan / PGCon
> da...@la... |
From: Ana E. M. A. <emi...@gm...> - 2016-02-09 22:10:42
|
Hello Dan and Josh,

Sorry, I totally misunderstood the situation here.

It seems to me that data spooling is useful to keep the tape library from waiting for data from various slow clients. I think the LTO-4 drive write speed (120 MB/s for a full-height drive) and the 1Gb network will be the bottleneck. If you could use both drives for writing, they would be waiting for data on a 1Gb network (125 MB/s) with data coming from only one source (the SD holding the original disk volume backups).

With fast disks (or disk arrays) and both hosts using NIC bonding or a 10Gb network, there could be a gain in performance from concurrent copy jobs. The local SSD for data spooling, it seems to me, will bring you no gain.

Best regards,
Ana

On Tue, Feb 9, 2016 at 2:30 PM, Josh Fisher <jf...@pv...> wrote:
>
> On 2/8/2016 5:42 PM, Dan Langille wrote:
>
> [...]
>
> Are the two servers on a 10G or better network? Unless the disk subsystem
> on the other SD is slow, it will likely stream close to the 1G max of 125
> MB/s, since it will be essentially sequential reads. I'm not convinced that
> concurrency will gain anything.
>
> [...]
|
From: Dan L. <da...@la...> - 2016-02-11 21:44:18
|
> On Feb 9, 2016, at 8:30 AM, Josh Fisher <jf...@pv...> wrote:
>
> On 2/8/2016 5:42 PM, Dan Langille wrote:
>
> [...]
>
> Are the two servers on a 10G or better network? Unless the disk subsystem
> on the other SD is slow, it will likely stream close to the 1G max of 125
> MB/s, since it will be essentially sequential reads. I'm not convinced that
> concurrency will gain anything.

No, it's a 1G network in my house.

--
Dan Langille - BSDCan / PGCon
da...@la... |
From: compdoc <co...@ho...> - 2016-02-11 22:30:54
|
> Now what I'm thinking of is streaming multiple concurrent jobs to a single drive

I do that. On my LAN, I need to back up config files and MySQL, etc., on a few CentOS and Ubuntu servers, and also my Win7 Pro computer, which has large SSDs and takes hours to back up to an LTO-4 drive on my 1G network. Trying to schedule all that is a nightmare, so I just do it all at once.

The servers are only a few gigs each, and without spooling they are interleaved with the Windows backup. But that's fine, because they are located at the beginning of the tape, so searching for server files doesn't take very long.

Without spooling, the tape drive runs continuously with only occasional, brief stops or changes in speed. With spooling, the drive sits idle until the cache fills, then runs until the cache is empty, then waits again. It's a 2.5-hour backup without spooling, and a 4+ hour backup with it, if I recall.

At least that's how it works for me. A feature request might be a threaded cache? Not sure if that's the correct way to describe it... deleting files as they are recorded and fetching new files to fill the cache in the background.

Anyway, I think it works great as is, without spooling. Best to test for yourself. Good luck. |
From: Dan L. <da...@la...> - 2016-02-14 22:42:40
Attachments:
signature.asc
|
> On Feb 8, 2016, at 5:42 PM, Dan Langille <da...@la...> wrote:
>
> Hello,
>
> I am working with an LTO-4 tape library. It has two drives but I plan to write to only one for backups.
>
> [...]
>
> Comments?

There have been good suggestions, but I'm posting below my original email to maintain context.

I was just reading http://www.bacula.org/7.4.x-manuals/en/main/Data_Spooling.html. Main points:

With concurrent tape jobs, only one job will write to tape at a time:

• When Bacula begins despooling data spooled to disk, it takes exclusive use of the tape. This has the major advantage that in running multiple simultaneous jobs at the same time, the blocks of several jobs will not be intermingled.

Multiple jobs can spool concurrently:

• If you are running multiple simultaneous jobs, Bacula will continue spooling other jobs while one is despooling to tape, provided there is sufficient spool file space.

I ran my first copy job this morning:

* 18 minutes to copy 7.075 GB from one SD to the other, spooling onto SSD.
* 7 minutes to despool 577,884,981 bytes of attributes to the Director.

14-Feb 16:30 bacula-dir JobId 231106: Warning: FileSet MD5 digest not found.
14-Feb 16:30 bacula-dir JobId 231106: The following 1 JobId was chosen to be copied: 225267
14-Feb 16:30 bacula-dir JobId 231106: Copying using JobId=225267 Job=supernews_FP_msgs.2015-12-06_03.05.35_25
14-Feb 16:30 bacula-dir JobId 231106: Bootstrap records written to /usr/local/bacula/working/bacula-dir.restore.26.bsr
14-Feb 16:30 bacula-dir JobId 231106: Start Copying JobId 231106, Job=CopyToTape-Full-Just-One-tape-01.2016-02-14_16.30.41_38
14-Feb 16:30 bacula-dir JobId 231106: Using Device "vDrive-0" to read.
14-Feb 16:30 bacula-dir JobId 231107: Using Device "LTO_0" to write.
14-Feb 16:30 crey-sd JobId 231106: Ready to read from volume "FullAuto-3337" on file device "vDrive-0" (/usr/local/bacula/volumes).
14-Feb 16:30 crey-sd JobId 231106: Forward spacing Volume "FullAuto-3337" to file:block 0:1957810277.
14-Feb 16:31 crey-sd JobId 231106: End of Volume at file 1 on device "vDrive-0" (/usr/local/bacula/volumes), Volume "FullAuto-3337"
14-Feb 16:31 crey-sd JobId 231106: Ready to read from volume "FullAuto-3340" on file device "vDrive-0" (/usr/local/bacula/volumes).
14-Feb 16:31 crey-sd JobId 231106: Forward spacing Volume "FullAuto-3340" to file:block 0:64728.
14-Feb 16:31 crey-sd JobId 231106: End of Volume at file 1 on device "vDrive-0" (/usr/local/bacula/volumes), Volume "FullAuto-3340"
14-Feb 16:31 crey-sd JobId 231106: Ready to read from volume "FullAuto-3341" on file device "vDrive-0" (/usr/local/bacula/volumes).
14-Feb 16:31 crey-sd JobId 231106: Forward spacing Volume "FullAuto-3341" to file:block 0:216.
14-Feb 16:33 crey-sd JobId 231106: End of Volume at file 1 on device "vDrive-0" (/usr/local/bacula/volumes), Volume "FullAuto-3341"
14-Feb 16:33 crey-sd JobId 231106: Ready to read from volume "FullAuto-3351" on file device "vDrive-0" (/usr/local/bacula/volumes).
14-Feb 16:33 crey-sd JobId 231106: Forward spacing Volume "FullAuto-3351" to file:block 0:64728.
14-Feb 16:34 crey-sd JobId 231106: End of Volume at file 1 on device "vDrive-0" (/usr/local/bacula/volumes), Volume "FullAuto-3351"
14-Feb 16:34 crey-sd JobId 231106: Ready to read from volume "FullAuto-3357" on file device "vDrive-0" (/usr/local/bacula/volumes).
14-Feb 16:34 crey-sd JobId 231106: Forward spacing Volume "FullAuto-3357" to file:block 0:216.
14-Feb 16:36 crey-sd JobId 231106: End of Volume at file 1 on device "vDrive-0" (/usr/local/bacula/volumes), Volume "FullAuto-3357"
14-Feb 16:36 crey-sd JobId 231106: Ready to read from volume "FullAuto-3370" on file device "vDrive-0" (/usr/local/bacula/volumes).
14-Feb 16:36 crey-sd JobId 231106: Forward spacing Volume "FullAuto-3370" to file:block 0:64728.
14-Feb 16:37 crey-sd JobId 231106: End of Volume at file 1 on device "vDrive-0" (/usr/local/bacula/volumes), Volume "FullAuto-3370"
14-Feb 16:37 crey-sd JobId 231106: Ready to read from volume "FullAuto-3373" on file device "vDrive-0" (/usr/local/bacula/volumes).
14-Feb 16:37 crey-sd JobId 231106: Forward spacing Volume "FullAuto-3373" to file:block 0:64728.
14-Feb 16:38 crey-sd JobId 231106: End of Volume at file 1 on device "vDrive-0" (/usr/local/bacula/volumes), Volume "FullAuto-3373"
14-Feb 16:38 crey-sd JobId 231106: Ready to read from volume "FullAuto-3382" on file device "vDrive-0" (/usr/local/bacula/volumes).
14-Feb 16:38 crey-sd JobId 231106: Forward spacing Volume "FullAuto-3382" to file:block 0:216.
14-Feb 16:40 crey-sd JobId 231106: End of Volume at file 1 on device "vDrive-0" (/usr/local/bacula/volumes), Volume "FullAuto-3382"
14-Feb 16:40 crey-sd JobId 231106: Ready to read from volume "FullAuto-3391" on file device "vDrive-0" (/usr/local/bacula/volumes).
14-Feb 16:40 crey-sd JobId 231106: Forward spacing Volume "FullAuto-3391" to file:block 0:64728.
14-Feb 16:42 crey-sd JobId 231106: End of Volume at file 1 on device "vDrive-0" (/usr/local/bacula/volumes), Volume "FullAuto-3391"
14-Feb 16:42 crey-sd JobId 231106: Ready to read from volume "FullAuto-3393" on file device "vDrive-0" (/usr/local/bacula/volumes).
14-Feb 16:42 crey-sd JobId 231106: Forward spacing Volume "FullAuto-3393" to file:block 0:64728.
14-Feb 16:43 crey-sd JobId 231106: End of Volume at file 1 on device "vDrive-0" (/usr/local/bacula/volumes), Volume "FullAuto-3393"
14-Feb 16:43 crey-sd JobId 231106: Ready to read from volume "FullAuto-3397" on file device "vDrive-0" (/usr/local/bacula/volumes).
14-Feb 16:43 crey-sd JobId 231106: Forward spacing Volume "FullAuto-3397" to file:block 0:64728.
14-Feb 16:45 crey-sd JobId 231106: End of Volume at file 1 on device "vDrive-0" (/usr/local/bacula/volumes), Volume "FullAuto-3397"
14-Feb 16:45 crey-sd JobId 231106: Ready to read from volume "FullAuto-3398" on file device "vDrive-0" (/usr/local/bacula/volumes).
14-Feb 16:45 crey-sd JobId 231106: Forward spacing Volume "FullAuto-3398" to file:block 0:64728.
14-Feb 16:46 crey-sd JobId 231106: End of Volume at file 1 on device "vDrive-0" (/usr/local/bacula/volumes), Volume "FullAuto-3398"
14-Feb 16:46 crey-sd JobId 231106: Ready to read from volume "FullAuto-3401" on file device "vDrive-0" (/usr/local/bacula/volumes).
14-Feb 16:46 crey-sd JobId 231106: Forward spacing Volume "FullAuto-3401" to file:block 0:64728.
14-Feb 16:48 crey-sd JobId 231106: End of Volume at file 1 on device "vDrive-0" (/usr/local/bacula/volumes), Volume "FullAuto-3401"
14-Feb 16:48 crey-sd JobId 231106: End of all volumes.
14-Feb 16:48 crey-sd JobId 231106: Elapsed time=00:17:25, Transfer rate=6.770 M Bytes/second
14-Feb 16:48 tape01-sd JobId 231107: Elapsed time=00:17:26, Transfer rate=6.763 M Bytes/second
14-Feb 16:48 tape01-sd JobId 231107: Sending spooled attrs to the Director. Despooling 577,884,981 bytes ...
14-Feb 16:55 bacula-dir JobId 231106: Bacula bacula-dir 7.4.0 (16Jan16):
  Build OS:              amd64-portbld-freebsd10.2 freebsd 10.2-RELEASE-p8
  Prev Backup JobId:     225267
  Prev Backup Job:       supernews_FP_msgs.2015-12-06_03.05.35_25
  New Backup JobId:      231107
  Current JobId:         231106
  Current Job:           CopyToTape-Full-Just-One-tape-01.2016-02-14_16.30.41_38
  Backup Level:          Full
  Client:                tape01-fd
  FileSet:               "EmptyCopyToTape" 2011-02-20 20:53:31
  Read Pool:             "FullFile" (From Job resource)
  Read Storage:          "CreyFile" (From Pool resource)
  Write Pool:            "FullsLTO4" (From Job resource)
  Write Storage:         "tape01" (From Job resource)
  Catalog:               "MyCatalog" (From Client resource)
  Start time:            14-Feb-2016 16:30:44
  End time:              14-Feb-2016 16:55:13
  Elapsed time:          24 mins 29 secs
  Priority:              410
  SD Files Written:      1,567,729
  SD Bytes Written:      7,075,031,080 (7.075 GB)
  Rate:                  4816.2 KB/s
  Volume name(s):        000003L4
  Volume Session Id:     1168
  Volume Session Time:   1454189071
  Last Volume Bytes:     8,648,930,304 (8.648 GB)
  SD Errors:             0
  SD termination status: OK
  Termination:           Copying OK

14-Feb 16:55 bacula-dir JobId 231106: Begin pruning Jobs older than 3 years.
14-Feb 16:55 bacula-dir JobId 231106: No Jobs found to prune.
14-Feb 16:55 bacula-dir JobId 231106: Begin pruning Files.
14-Feb 16:55 bacula-dir JobId 231106: No Files found to prune.
14-Feb 16:55 bacula-dir JobId 231106: End auto prune.

--
Dan Langille - BSDCan / PGCon
da...@la... |