From: Martin S. <ma...@li...> - 2024-09-17 16:38:18
They are being sent to the Director by the client (nuc2). I suggest adding
some firewall rules on nuc2 to only allow connections to port 9102 from the
Director.

__Martin

>>>>> On Tue, 17 Sep 2024 11:41:14 +0100, Chris Wilkinson said:
>
> I keep getting security alerts from a remote client backup. The backups
> always run to success. The IPs that are listed in the job log are different
> every time and in various locations including some in Russia but also in
> London and European data centres. There are no entries at all in the remote
> client bacula log. This only happens with remote client backups, never with
> local client backups.
>
> It's not clear to me whether these alerts are coming from the DIR or being
> sent to the Director by the client.
>
> I'm not sure whether to just ignore these or take some steps to block them.
> Is there an FD directive that would reject these perhaps?
>
> Any advice welcomed.
>
> Thanks
>
> -Chris Wilkinson
>
> [...]
From: Bill A. <wa...@pr...> - 2024-09-17 14:07:28
On 9/17/24 4:41 AM, Chris Wilkinson wrote:
> I keep getting security alerts from a remote client backup. The backups
> always run to success. The IPs that are listed in the job log are different
> every time and in various locations including some in Russia but also in
> London and European data centres. There are no entries at all in the remote
> client bacula log. This only happens with remote client backups, never with
> local client backups.
>
> It's not clear to me whether these alerts are coming from the DIR or being
> sent to the Director by the client.
>
> I'm not sure whether to just ignore these or take some steps to block them.
> Is there an FD directive that would reject these perhaps?
>
> Any advice welcomed.
>
> Thanks
>
> -Chris Wilkinson

Hello Chris,

Since this FD "nuc2" is (obviously) exposed to the Internet, I would enable
the firewall on it, and only allow connections in from the Director on port
9102/TCP (default). Best/safest way IMHO.

Best regards,
Bill

--
Bill Arlofski
wa...@pr...
From: Dragan M. <ga...@pk...> - 2024-09-17 11:57:40
AFAIK there is no such feature on the FD side but I might be wrong.

On Tue, 17 Sept 2024 at 13:52, Chris Wilkinson <win...@gm...> wrote:
>
> Is that something that can be done in the FD or is it a job for iptables?
>
> -Chris Wilkinson
>
> On Tue, 17 Sep 2024, 12:48 Dragan Milivojević, <ga...@pk...> wrote:
>>
>> These are just automated scans. I would not run a FD open to the world.
>> Block anything but the DIR and SD from contacting the FD?
>>
>> On Tue, 17 Sept 2024 at 12:43, Chris Wilkinson <win...@gm...> wrote:
>> >
>> > I keep getting security alerts from a remote client backup. The backups
>> > always run to success. The IPs that are listed in the job log are
>> > different every time and in various locations including some in Russia
>> > but also in London and European data centres. There are no entries at
>> > all in the remote client bacula log. This only happens with remote
>> > client backups, never with local client backups.
>> >
From: Chris W. <win...@gm...> - 2024-09-17 10:41:32
I keep getting security alerts from a remote client backup. The backups
always run to success. The IPs that are listed in the job log are different
every time and in various locations including some in Russia but also in
London and European data centres. There are no entries at all in the remote
client bacula log. This only happens with remote client backups, never with
local client backups.

It's not clear to me whether these alerts are coming from the DIR or being
sent to the Director by the client.

I'm not sure whether to just ignore these or take some steps to block them.
Is there an FD directive that would reject these perhaps?

Any advice welcomed.

Thanks

-Chris Wilkinson

---------- Forwarded message ---------
From: Bacula <win...@gm...>
Date: Tue, 17 Sep 2024, 03:50
Subject: Bacula: Backup OK of Client:nuc2 Fileset:nuc2 Incremental
To: <root@localhost>

17-Sep 03:50 raspberrypi-dir JobId 7536: Start Backup JobId 7536, Job=nuc2.2024-09-17_03.50.00_03
17-Sep 03:50 raspberrypi-dir JobId 7536: Using Device "qnap-usb3" to write.
17-Sep 03:50 raspberrypi-dir JobId 7536: Sending Accurate information to the FD.
17-Sep 03:50 raspberrypi-sd JobId 7536: Volume "nuc2-incremental6040" previously written, moving to end of data.
17-Sep 03:50 raspberrypi-sd JobId 7536: Ready to append to end of Volume "nuc2-incremental6040" size=162,983,874
16-Sep 07:25 nuc2 JobId 0: Security Alert: job.c:548 FD expecting Hello got bad command from 87.236.176.167. Len=-4.
17-Sep 03:50 raspberrypi-sd JobId 7536: Elapsed time=00:00:01, Transfer rate=90.58 K Bytes/second
16-Sep 07:26 nuc2 JobId 0: Security Alert: job.c:548 FD expecting Hello got bad command from 87.236.176.159. Len=-4.
16-Sep 07:26 nuc2 JobId 0: Security Alert: job.c:548 FD expecting Hello got bad command from 87.236.176.148. Len=-4.
16-Sep 07:26 nuc2 JobId 0: Security Alert: job.c:548 FD expecting Hello got bad command from 87.236.176.154. Len=-4.
16-Sep 07:26 nuc2 JobId 0: Security Alert: job.c:548 FD expecting Hello got bad command from 87.236.176.155. Len=-2147483608.
16-Sep 07:27 nuc2 JobId 0: Security Alert: job.c:548 FD expecting Hello got bad command from 87.236.176.163. Len=49.
16-Sep 07:27 nuc2 JobId 0: Security Alert: job.c:548 FD expecting Hello got bad command from 87.236.176.163. Len=110.
16-Sep 07:27 nuc2 JobId 0: Security Alert: bsock.c:560 Read error from client:87.236.176.156:9102: ERR=No data available
16-Sep 07:27 nuc2 JobId 0: Security Alert: job.c:548 FD expecting Hello got bad command from 87.236.176.156. Len=0.
16-Sep 07:27 nuc2 JobId 0: Security Alert: job.c:548 FD expecting Hello got bad command from 87.236.176.161. Len=-4.
16-Sep 07:28 nuc2 JobId 0: Security Alert: job.c:548 FD expecting Hello got bad command from 87.236.176.178. Len=-4.
16-Sep 07:28 nuc2 JobId 0: Security Alert: job.c:548 FD expecting Hello got bad command from 87.236.176.156. Len=-4.
16-Sep 07:28 nuc2 JobId 0: Security Alert: job.c:548 FD expecting Hello got bad command from 87.236.176.170. Len=-4.
16-Sep 07:29 nuc2 JobId 0: Security Alert: job.c:548 FD expecting Hello got bad command from 87.236.176.159. Len=-4.
16-Sep 07:29 nuc2 JobId 0: Security Alert: job.c:548 FD expecting Hello got bad command from 87.236.176.152. Len=-4.
16-Sep 07:29 nuc2 JobId 0: Security Alert: job.c:548 FD expecting Hello got bad command from 87.236.176.156. Len=-4.
16-Sep 07:30 nuc2 JobId 0: Security Alert: job.c:548 FD expecting Hello got bad command from 87.236.176.170. Len=-4.
16-Sep 07:30 nuc2 JobId 0: Security Alert: job.c:548 FD expecting Hello got bad command from 87.236.176.168. Len=0.
16-Sep 07:30 nuc2 JobId 0: Security Alert: job.c:548 FD expecting Hello got bad command from 87.236.176.171. Len=0.
16-Sep 07:30 nuc2 JobId 0: Security Alert: job.c:548 FD expecting Hello got bad command from 87.236.176.166. Len=-4.
16-Sep 19:54 nuc2 JobId 0: Security Alert: job.c:548 FD expecting Hello got bad command from 80.66.76.134. Len=-4.
17-Sep 03:50 raspberrypi-sd JobId 7536: Sending spooled attrs to the Director. Despooling 6,131 bytes ...
17-Sep 03:50 raspberrypi-dir JobId 7536: Bacula raspberrypi-dir 11.0.6 (10Mar22):
  Build OS:               aarch64-unknown-linux-gnu debian 11.3
  JobId:                  7536
  Job:                    nuc2.2024-09-17_03.50.00_03
  Backup Level:           Incremental, since=2024-09-16 03:50:03
  Client:                 "nuc2" 11.0.6 (10Mar22) x86_64-pc-linux-gnu,debian,12.7
  FileSet:                "nuc2" 2023-09-26 03:50:00
  Pool:                   "nuc2-incremental" (From Job IncPool override)
  Catalog:                "MyCatalog" (From Pool resource)
  Storage:                "remote-clients" (From Job resource)
  Scheduled time:         17-Sep-2024 03:50:00
  Start time:             17-Sep-2024 03:50:05
  End time:               17-Sep-2024 03:50:13
  Elapsed time:           8 secs
  Priority:               10
  FD Files Written:       25
  SD Files Written:       25
  FD Bytes Written:       87,301 (87.30 KB)
  SD Bytes Written:       90,582 (90.58 KB)
  Rate:                   10.9 KB/s
  Software Compression:   50.0% 2.0:1
  Comm Line Compression:  None
  Snapshot/VSS:           no
  Encryption:             no
  Accurate:               yes
  Volume name(s):         nuc2-incremental6040
  Volume Session Id:      175
  Volume Session Time:    1725763550
  Last Volume Bytes:      163,075,762 (163.0 MB)
  Non-fatal FD errors:    0
  SD Errors:              0
  FD termination status:  OK
  SD termination status:  OK
  Termination:            Backup OK

17-Sep 03:50 raspberrypi-dir JobId 7536: Begin pruning Jobs older than 7 days.
17-Sep 03:50 raspberrypi-dir JobId 7536: Pruned 2 Jobs for client nuc2 from catalog.
17-Sep 03:50 raspberrypi-dir JobId 7536: Begin pruning Files.
17-Sep 03:50 raspberrypi-dir JobId 7536: No Files found to prune.
17-Sep 03:50 raspberrypi-dir JobId 7536: End auto prune.
17-Sep 03:50 raspberrypi-dir JobId 7536: shell command: run AfterJob "/home/pi/run-copy-job.sh nuc2-copy Incremental nuc2-Incremental nuc2-copy-Incremental"
17-Sep 03:50 raspberrypi-dir JobId 7536: AfterJob: Connecting to Director raspberrypi.fritz.box:9101
17-Sep 03:50 raspberrypi-dir JobId 7536: AfterJob: 1000 OK: 10002 raspberrypi-dir Version: 11.0.6 (10 March 2022)
17-Sep 03:50 raspberrypi-dir JobId 7536: AfterJob: Enter a period to cancel a command.
17-Sep 03:50 raspberrypi-dir JobId 7536: AfterJob: run yes job=nuc2-copy level=Incremental pool=nuc2-incremental nextpool=nuc2-copy-incremental
17-Sep 03:50 raspberrypi-dir JobId 7536: AfterJob: Using Catalog "MyCatalog"
17-Sep 03:50 raspberrypi-dir JobId 7536: AfterJob: Job queued. JobId=7537
17-Sep 03:50 raspberrypi-dir JobId 7536: AfterJob: You have messages.
From: B. S. <az...@sh...> - 2024-09-17 07:46:37
This is all running on TrueNAS, so BSD. The HBA is an LSI 9220-8i. The NIC
is 10Gb, but not relevant here because the data and the tape library are all
on the same system. The disks are SATA. 32GB RAM, but I don't see the system
running out of RAM while spooling/despooling. The JBOD enclosures are
DS4243s, and I have a 54 disk pool of SATA disks in raidz2 (six disks per
vdev).

Thanks.

On Tue, Sep 17, 2024 at 1:09 AM Gary R. Schmidt <gr...@mc...> wrote:
> On 17/09/2024 08:58, B. Smith wrote:
> > [...]
>
> Which ZFS are you using? Solaris, BSD, or Linux?
>
> What's the HBA? NICs?
>
> Are the disks SATA (so one-way only) or SAS (two-way)?
>
> How much physical RAM does the system have?
>
> I don't see this, but I am running Solaris with SAS disks in a RAIDZ
> pool and 72Gb of RAM.
>
> Cheers,
> Gary B-)
>
> _______________________________________________
> Bacula-users mailing list
> Bac...@li...
> https://lists.sourceforge.net/lists/listinfo/bacula-users
From: Gary R. S. <gr...@mc...> - 2024-09-17 05:08:47
On 17/09/2024 08:58, B. Smith wrote:
> Good evening,
>
> I have a ZFS pool as a dedicated Bacula spool. The pool contains six 4TB
> drives, configured as three mirrors of two striped disks. My tape drive
> is LTO8. All the data is local to the server. When I despool without
> simultaneously spooling another job, my despool rate is about 280
> MB/sec. However, I noticed that when I allow a second job to spool at
> the same time I despool, the despool rate goes down to as low as 75
> MB/sec, with the drive stopping relatively frequently to refill the cache.
>
> I'd like to try to figure out what the bottleneck is in this
> configuration. I looked at using fio, but I don't want to test random
> reads and writes... I want to test sequentially reading from one file on
> the pool while sequentially writing to another. Can anyone suggest a
> good way of testing this, or maybe point me to some other way of
> determining the bottleneck? I realize it could be as simple as the disks
> having poor simultaneous r/w performance, or it could be a limitation
> with the backplane, HBA, or motherboard. Just trying to drill-down to
> figure out what, if anything, I can upgrade.

Which ZFS are you using? Solaris, BSD, or Linux?

What's the HBA? NICs?

Are the disks SATA (so one-way only) or SAS (two-way)?

How much physical RAM does the system have?

I don't see this, but I am running Solaris with SAS disks in a RAIDZ pool
and 72Gb of RAM.

Cheers,
Gary B-)
From: B. S. <az...@sh...> - 2024-09-17 04:16:40
Apologies, I misstated the configuration. I do in fact have striped mirrors.

On Mon, Sep 16, 2024, 8:59 PM Phil Stracchino <ph...@ca...> wrote:
> On 9/16/24 18:58, B. Smith wrote:
> > Good evening,
> >
> > I have a ZFS pool as a dedicated Bacula spool. The pool contains six
> > 4TB drives, configured as three mirrors of two striped disks.
>
> OK, just one observation: That's generally considered the wrong way to
> do it. The normally preferred arrangement is to first build mirrored
> pairs, then stripe the mirrors.
>
> --
> Phil Stracchino
> Fenian House Publishing
> ph...@ca...
From: Phil S. <ph...@ca...> - 2024-09-17 00:58:21
On 9/16/24 18:58, B. Smith wrote:
> Good evening,
>
> I have a ZFS pool as a dedicated Bacula spool. The pool contains six 4TB
> drives, configured as three mirrors of two striped disks.

OK, just one observation: That's generally considered the wrong way to do
it. The normally preferred arrangement is to first build mirrored pairs,
then stripe the mirrors.

--
Phil Stracchino
Fenian House Publishing
ph...@ca...
ph...@co...
Landline: +1.603.293.8485
Mobile: +1.603.998.6958
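For reference, the layout Phil describes -- build mirrored pairs first, and let the pool stripe across them -- corresponds to listing each mirror vdev at pool-creation time. A sketch only; the pool name and device names (da0..da5) are placeholders for the actual spool disks:

```
# Sketch: three two-way mirror vdevs; ZFS stripes writes across the vdevs.
# Pool and device names are placeholders, not taken from the thread.
zpool create spool \
    mirror da0 da1 \
    mirror da2 da3 \
    mirror da4 da5
```

This is the "striped mirrors" arrangement the poster later confirms he actually has.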
From: Phil S. <ph...@ca...> - 2024-09-17 00:55:35
On 9/16/24 20:34, Dan Langille wrote:
> On Mon, Sep 16, 2024, at 5:46 PM, Phil Stracchino wrote:
>> On 9/16/24 15:17, Dan Langille wrote:
>>
>> delete from version order by version limit 1;
>
> bacula=*# delete from version order by version limit 1;
> ERROR:  syntax error at or near "order"
> LINE 1: delete from version order by version limit 1;

Doh, my bad. Forgot you can't use ORDER BY in a DELETE FROM. (Which means
you have to use shenanigans when you want to do that.)

--
Phil Stracchino
Fenian House Publishing
ph...@ca...
ph...@co...
Landline: +1.603.293.8485
Mobile: +1.603.998.6958
From: Dan L. <da...@la...> - 2024-09-17 00:34:38
On Mon, Sep 16, 2024, at 5:46 PM, Phil Stracchino wrote:
> On 9/16/24 15:17, Dan Langille wrote:
>>
>> Hello,
>>
>> I discovered this situation today when reviewing upgrade instructions
>> for Bacula 9 -> Bacula 15. FYI, Bacula 9 and 13 are both deprecated in
>> FreeBSD and will be removed from the ports tree at the end of the month.
>>
>> The problem: duplicate entries:
>>
>> bacula=# select * from version ;
>>  versionid
>> -----------
>>       1026
>>       1026
>> (2 rows)
>>
>> bacula=#
>>
>> I reckon I can just delete one of them. How to accomplish that is left
>> as an exercise for the reader.
>
> delete from version order by version limit 1;

bacula=*# delete from version order by version limit 1;
ERROR:  syntax error at or near "order"
LINE 1: delete from version order by version limit 1;

--
Dan Langille
da...@la...
From: B. S. <az...@sh...> - 2024-09-17 00:24:07
Good evening,

I have a ZFS pool as a dedicated Bacula spool. The pool contains six 4TB
drives, configured as three mirrors of two striped disks. My tape drive is
LTO8. All the data is local to the server. When I despool without
simultaneously spooling another job, my despool rate is about 280 MB/sec.
However, I noticed that when I allow a second job to spool at the same time
I despool, the despool rate goes down to as low as 75 MB/sec, with the
drive stopping relatively frequently to refill the cache.

I'd like to try to figure out what the bottleneck is in this configuration.
I looked at using fio, but I don't want to test random reads and writes...
I want to test sequentially reading from one file on the pool while
sequentially writing to another. Can anyone suggest a good way of testing
this, or maybe point me to some other way of determining the bottleneck? I
realize it could be as simple as the disks having poor simultaneous r/w
performance, or it could be a limitation with the backplane, HBA, or
motherboard. Just trying to drill-down to figure out what, if anything, I
can upgrade.

Thanks.
From: Phil S. <ph...@ca...> - 2024-09-16 21:47:11
On 9/16/24 15:17, Dan Langille wrote:
>
> Hello,
>
> I discovered this situation today when reviewing upgrade instructions for
> Bacula 9 -> Bacula 15. FYI, Bacula 9 and 13 are both deprecated in FreeBSD
> and will be removed from the ports tree at the end of the month.
>
> The problem: duplicate entries:
>
> bacula=# select * from version ;
>  versionid
> -----------
>       1026
>       1026
> (2 rows)
>
> bacula=#
>
> I reckon I can just delete one of them. How to accomplish that is left as
> an exercise for the reader.

delete from version order by version limit 1;

--
Phil Stracchino
Fenian House Publishing
ph...@ca...
ph...@co...
Landline: +1.603.293.8485
Mobile: +1.603.998.6958
From: Dan L. <da...@la...> - 2024-09-16 19:17:44
Hello,

I discovered this situation today when reviewing upgrade instructions for
Bacula 9 -> Bacula 15. FYI, Bacula 9 and 13 are both deprecated in FreeBSD
and will be removed from the ports tree at the end of the month.

The problem: duplicate entries:

bacula=# select * from version ;
 versionid
-----------
      1026
      1026
(2 rows)

bacula=#

I reckon I can just delete one of them. How to accomplish that is left as
an exercise for the reader.

Versions:

* PostgreSQL 16.4
* Bacula 15.0.2
* FreeBSD 14.1

--
Dan Langille
da...@la...
From: Arno L. <al...@it...> - 2024-09-15 09:45:27
Hi Marco,

Am 13.09.2024 um 17:42 schrieb Marco Gaiarin:
> Mandi! Marco Gaiarin
>   In chel di` si favelave...
>
>> EG, i'll come back on this on mid-september...
>
> Still working on script.

I have not looked up the discussion leading here.

> For the sake of mental health i want to mount snapshots on different
> mount points... so different file paths.
> But paths are defined in filesets, and i cannot define a different
> fileset for every mountpoint...
>
> I know i can use StripPrefix/AddPrefix on restore, but there's some way
> to use a 'StripPrefix' like option on jobs?

The File Set Include Option "Strip Path = <integer>" may be useful.

What I usually prefer is a combination of a Run Script on the client to
create a file list to back up, and a File= entry in the include list that
reads from file or program, i.e. the "\|" or "\<" ones.

However, you can do really interesting and slightly confusing things, too:

Job {
  Name = "magichomedirs:_home_b:10:27"
  Fileset = "magichomedirs"
  Level = Full
  Type = "Backup"
  Client = "homedirfileserver-fd"
  MaximumConcurrentJobs = 1
  Messages = "Standard"
  Pool = "short-lab"
  Priority = 10
}

FileSet {
  Name = magichomedirs
  Include {
    Options {
      Signature = SHA1
    }
    File = "\\|/opt/bacula/scripts/magicfileset.sh '%n'"
  }
}

# cat /opt/bacula/scripts/magicfileset.sh
#!/bin/bash
IFS=: read -r j p v b <<< $1
if [ -z "$b" ] ; then exit 0; fi
p=${p//_/\/}
# echo "Job $j prefix $p from $v to $b"
w=$(dirname "$p")
n=$(basename "$p")
# below is only one line, wrapped by mailer!
find "$w" -maxdepth 1 -type d -iname "$n"'*' | awk "/.*[0-9]+$/ {p=match(\$0, /[0-9]+\$/); s=0+substr(\$0, p); if (p && (s>=$v) && (s<$b)) print \$0}"

Which works as intended:

# ls /home/
arno  b004  b008  b012  b016  b020  b026  b030  b034  test
b001  b005  b009  b013  b017  b021  b027  b031  b035
b002  b006  b010  b014  b018  b024  b028  b032  b036
b003  b007  b011  b015  b019  b025  b029  b033  b037
# /opt/bacula/scripts/magicfileset.sh magichomedirs:_home_b:10:27
/home/b010
/home/b011
/home/b012
/home/b013
/home/b014
/home/b015
/home/b016
/home/b017
/home/b018
/home/b019
/home/b020
/home/b021
/home/b024
/home/b025
/home/b026
#

Usually, more explicit configuration is better -- you configure once,
usually under no pressure and with time to experiment, but you need this
stuff to have a predictable effect when you are under pressure.

Cheers,
Arno

> I hope i was clear...

--
Arno Lehmann
IT-Service Lehmann
Sandstr. 6, 49080 Osnabrück
From: Marco G. <ga...@li...> - 2024-09-15 09:10:16
Mandi! Marco Gaiarin
  In chel di` si favelave...

> EG, i'll come back on this on mid-september...

Still working on script.

For the sake of mental health I want to mount snapshots on different mount
points... so different file paths. But paths are defined in filesets, and I
cannot define a different fileset for every mountpoint...

I know I can use StripPrefix/AddPrefix on restore, but is there some way to
use a 'StripPrefix'-like option on jobs?

I hope I was clear...

--
From: Marcin H. <gan...@gm...> - 2024-09-15 07:03:08
Hello Everybody,

We are pleased to let you know about a new Bacularis release, 4.1.0.

We continue the action "Bacula for Everybody!" to make first contact with
Bacula easier for new users, and this time we prepared a new file storage
wizard. It enables creating both single storage devices and multi-device
autochangers as well.

The second significant change is support for plugins. This is a friendly
plugin interface to easily extend Bacularis with new functions. The first
step in this feature is support for web interface plugins. To prepare a
plugin you will need a bit of knowledge of the PHP language and
object-oriented programming. If you are not a strong PHP programmer, don't
worry, it is a really simple interface for creating new plugins. We also
prepared two new plugins: one for e-mail notifications and one for
Mattermost notifications.

Finally, we would like to thank the entire Community for bug reports, ideas
and for being active. With your involvement we can do much more.

Here you can find materials that present the new functions:

- [VIDEO] New file storage wizard: https://www.youtube.com/watch?v=OFOJ5P14jFI
- [VIDEO] Plugin support: https://www.youtube.com/watch?v=6Y-nQ8ysa10
- [DOC] Plugin interface documentation: https://bacularis.app/doc/plugins/basic.html

For more details, please look at the release announcement:
https://bacularis.app/news/91/36/New-release-Bacularis-4.1.0

Happy using Bacularis!

On behalf of the Bacularis Team
Marcin Haba (gani)
From: Arno L. <al...@it...> - 2024-09-13 07:35:11
|
Hi Guillermo,

On 10.09.2024 at 10:05, Guillermo Martin Cabanillas wrote:
> Hi,
>
> I would like to know if anyone is aware of the possibility to perform
> VMware snapshot backups using the Bacula Community version.

You would need to communicate with the hypervisor. The community edition has nothing included to do that. Obviously, you can always come up with the needed integration using tools you can access via RunScript{}, but knowing how much work it is to create, test, support and maintain such a thing, I doubt this is going to be a viable way forward.

If backups through the hypervisor are not a solution, the most realistic approach is indeed to use the FD inside the VM, and use whatever tools you have available to get consistent data for backup purposes -- for Windows, VSS, which is used by default, will get you far; for non-Windows systems, you could try to find and use the hook scripts the hypervisor systems usually call to prepare VM snapshots. You will hardly get the consistency "guarantees" that VMware provides, but you may get something better than just blindly backing up files. Also, file system snapshots would be something to use.

> We are currently using Bacula 11.0.6, but we can update to the latest
> version if needed.
>
> I know it's possible with Bacula Enterprise, but maybe there's a way to
> do it with the Bacula Community version.
>
> Does anyone know about the pricing for Bacula Enterprise Edition?

I do, but won't tell here. Just start a conversation using email, the contact form or the chat on the web site and you'll get in touch with people less shy ;-)

Cheers,
Arno
--
Arno Lehmann
IT-Service Lehmann
Sandstr. 6, 49080 Osnabrück |
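Arno's suggestion of preparing consistent data from inside the VM could be wired up roughly as in the sketch below. The script paths and their contents are hypothetical placeholders; RunScript and its sub-directives are standard Bacula Job syntax:

```conf
# Hedged sketch, not a tested configuration: quiesce an application
# inside the VM before the FD reads its files, and release it afterwards.
# The /usr/local/sbin/... paths are invented placeholders.
Job {
  Name = "vm-app-backup"
  JobDefs = "DefaultJob"
  Client = "myvm-fd"
  RunScript {
    RunsWhen = Before
    RunsOnClient = yes
    FailJobOnError = yes
    Command = "/usr/local/sbin/app-freeze.sh"   # e.g. flush and lock the DB
  }
  RunScript {
    RunsWhen = After
    RunsOnClient = yes
    Command = "/usr/local/sbin/app-thaw.sh"     # release the lock again
  }
}
```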
From: Arno L. <al...@it...> - 2024-09-13 07:26:58
|
Hi Guillermo,

I'm still not quite sure if I get the full picture here... for example, the process you have implemented may easily restore files with identical paths, essentially overwriting older data with newer files. Whether that is a problem depends on the data you backed up and on particular requirements which we are not aware of. However --

On 09.09.2024 at 17:31, Guillermo Martin Cabanillas wrote:
> Hi Arno,
>
> At the end we found an easy solution to restore all Full backups from
> one machine (client) from 2022 till today. We have to repeat this task
> with every machine (client).
>
> With a SQL select we get the full backup JobIds:
>
> select JobId from Job where JobStatus='T' and Type='B' and
> Level='F' and Name='XXXX';
>
> Then format the column results to commas.
>
> Then restore using this option:
>
> 3: Enter list of comma separated JobIds to select
>
> If anyone knows a better option to do this task please let us know, thanks!

If that works and solves the problem, it surely is a valid solution.

A variant I would probably use would be to use restore job preparation as intended for more complex user interfaces, setting up a catalog table containing the information about what to restore. You could do that using the .bvfs commands or a customized SQL query. None of that will be easier to understand than a piece of shell script including awk, sed, perl or whatever toolchain you like, but it may be useful if you expect such exercises to become routine.

In a more automated solution, I would probably restore each job or sequence of jobs into its own directory tree, so that newer file versions do not overwrite older ones.

All of this, however, is just a suggestion which may or may not be useful -- your solution, if it does what you need, is obviously good, and "better" depends a lot on factors we're not aware of.

Cheers,
Arno
--
Arno Lehmann
IT-Service Lehmann
Sandstr. 6, 49080 Osnabrück |
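The "format the column results to commas" step from Guillermo's procedure can be done in one pipeline. The psql invocation in the comment assumes a PostgreSQL catalog named `bacula` and is only a sketch; the join itself is plain POSIX:

```shell
# JobIds come back one per line from the catalog query, e.g. (assumed setup):
#   psql -d bacula -At -c "SELECT JobId FROM Job WHERE JobStatus='T' \
#       AND Type='B' AND Level='F' AND Name='XXXX';"
# Joining the lines into the comma-separated list that bconsole's
# "Enter list of comma separated JobIds" option expects:
printf '101\n205\n312\n' | paste -sd, -
# prints: 101,205,312
```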
From: Radosław K. <rad...@ko...> - 2024-09-11 12:36:50
|
On Fri, Sep 6, 2024 at 10:20, Andrea Venturoli <ml...@ne...> wrote:
> On 9/6/24 09:50, Anders Gustafsson wrote:
>
> > I am no expert, but retention is apparently set per job, not media.
> > Correct me if I am wrong.
>
> Retention is set per job *and* per media.
> But those are very different kinds of retention and do not apply to the same matter.

There is a third retention type -- File retention.

Job Retention means how long you want to keep the job execution details in the catalog. When it expires, only the catalog is updated; your backups are untouched and still occupy backup space. File retention is similar.

Volume (media) retention means how long you want to keep backup data of any kind, which includes the backup data on your media and all of its metadata in the catalog.

best regards
--
Radosław Korzeniewski
rad...@ko... |
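The three retention types map onto configuration directives roughly as in this sketch; the directive names are standard Bacula, while every value and resource name is illustrative only:

```conf
# Job/File retention live on the Client (catalog pruning only);
# Volume retention lives on the Pool (protects the data itself).
Client {
  Name = "myhost-fd"
  Address = myhost.example.com
  Password = "secret"                # placeholder
  Job Retention = 6 months           # how long job records stay in the catalog
  File Retention = 60 days           # how long file records stay in the catalog
  AutoPrune = yes
}

Pool {
  Name = "FullPool"
  Pool Type = Backup
  Volume Retention = 1 year          # how long volume data (and its catalog
                                     # metadata) is protected from recycling
  AutoPrune = yes
}
```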
From: Bill A. <wa...@pr...> - 2024-09-11 08:40:11
|
On 9/11/24 1:50 AM, Bruno Bartels (Intero Technologies) wrote:
> Hi Bill,
> thank you very much for your great answer!
> I am going to implement this soon and then get back to you.
> Thank you again for that valuable hint!
> Bruno

Hello Bruno,

You are welcome. I worked with this new feature when it was first introduced, but it has been a while since I touched it, so I look forward to your results. :)

Best regards,
Bill
--
Bill Arlofski
wa...@pr... |
From: Bill A. <wa...@pr...> - 2024-09-11 08:18:40
|
On 9/11/24 1:55 AM, Simon Flutura wrote:
> Hi,
>
> I inherited a bacula setup, backing several machines up on tape.
>
> Sadly the setup hangs: no network/cpu/io activity while jobs are running
> forever.
>
> We are running Bacula 11.0.6.
>
> Do you have any clue where to start in debugging the setup?
>
> Best,
>
> Simon

Hello Simon,

Most likely Bacula is waiting for something (media is my first guess).

In bconsole, a "status director" will be the first place to start.

Best regards,
Bill
--
Bill Arlofski
wa...@pr... |
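A first-pass bconsole session for a hang like this might look as follows; these are standard bconsole commands, and which one reveals the problem depends on where the job is actually stuck:

```conf
*status director    # shows running/queued jobs and what they wait for
*status storage     # shows SD device state, e.g. a mount request on a tape
*status client      # confirms the FD is reachable and what it reports
*messages           # flushes any pending operator messages (mount requests)
```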
From: Simon F. <sf...@de...> - 2024-09-11 08:11:59
|
Hi,

I inherited a bacula setup, backing several machines up on tape.

Sadly the setup hangs: no network/cpu/io activity while jobs are running forever.

We are running Bacula 11.0.6.

Do you have any clue where to start in debugging the setup?

Best,

Simon |
From: Bruno B. (I. Technologies) <ba...@in...> - 2024-09-11 07:51:08
|
Hi Bill,

thank you very much for your great answer! I am going to implement this soon and then get back to you. Thank you again for that valuable hint!

Bruno

-----Original Message-----
From: Bill <bac...@li...>
To: bacula-users <bac...@li...>
Date: Wednesday, September 11, 2024 09:46 CEST
Subject: Re: [Bacula-users] Automatically cancel backup jobs --> do not mark as fatal

On 9/10/24 5:21 AM, Bruno Bartels (Intero Technologies) via Bacula-users wrote:
> Hi all,
> I have adjusted Bacula to cancel jobs that are duplicates of one job (disable "AllowDuplicateJobs", or
> "DisableDuplicateJobs" AND "CancelQueuedDuplicates", which should be the same
> in my understanding).
> The reason for this is that there are some really big full backups running that take a few days to finish, and meanwhile
> there is the possibility that the same job can start at a lower level
> (incremental/differential).
> This works fine, except for one problem:
> When the new job gets canceled, this error is thrown in the logs:
>
> Fatal error: JobId XXX already running. Duplicate job not allowed.
>
> And Bacula is sending out a mail.
>
> Question: Is there a possibility to not log this as a FATAL ERROR?
>
> I want to receive mails concerning fatal errors, so setting the Messages resources isn't an option.
>
> Can you please help?
>
> Thank you in advance
>
> Bruno

Hello Bruno,

You cannot do this the way you are currently trying because, as you have seen, Bacula will cancel the job, it will show up as a "non good" (canceled) job in the catalog, and you will get the failed-job email.

Fortunately, there is a new feature which was added recently that should solve this issue for you. It is called 'Run Queue Advanced Control' and it adds a new "RunsWhen" setting for your job RunScripts. The new setting is "RunsWhen = queued".

The idea is that instead of using the AllowDuplicateJobs, DisableDuplicateJobs, and CancelQueuedDuplicates job options to control whether a job is allowed to be queued/started, you add a RunScript{} stanza to your job, set the RunScript's "RunsWhen = queued", and have the RunScript's "Command =" setting point to a custom script (we have examples). The script's return code/error level will determine whether the job enters the queue, or is just dropped and forgotten about - producing no canceled job, and no job error email.

Using this new advanced "RunsWhen = queued" level, you should be able to accomplish what you are looking to do:

- Prevent same jobs from being queued when the same job is already running
- Prevent duplicate jobs from being canceled and error emails from being sent

This new feature is available since Bacula Community version 15.0.x, which closely tracks Bacula Enterprise v16.0.x. It is documented here in the Enterprise manual:
https://docs.baculasystems.com/BETechnicalReference/Director/DirectorResourceTypes/JobResource/index.html#job-resource

The section you are looking for is named: "Notes about the Run Queue Advanced Control with RunsWhen=Queued"

Please make sure you are running Community version v15.0.2 first, then give this a try and let me know if this helps.

Best regards,
Bill
--
Bill Arlofski
wa...@pr...

_______________________________________________
Bacula-users mailing list
Bac...@li...
https://lists.sourceforge.net/lists/listinfo/bacula-users |
From: Bill A. <wa...@pr...> - 2024-09-11 07:42:15
|
On 9/10/24 5:21 AM, Bruno Bartels (Intero Technologies) via Bacula-users wrote:
> Hi all,
> I have adjusted Bacula to cancel jobs that are duplicates of one job (disable "AllowDuplicateJobs", or
> "DisableDuplicateJobs" AND "CancelQueuedDuplicates", which should be the same
> in my understanding).
> The reason for this is that there are some really big full backups running that take a few days to finish, and meanwhile
> there is the possibility that the same job can start at a lower level
> (incremental/differential).
> This works fine, except for one problem:
> When the new job gets canceled, this error is thrown in the logs:
>
> Fatal error: JobId XXX already running. Duplicate job not allowed.
>
> And Bacula is sending out a mail.
>
> Question: Is there a possibility to not log this as a FATAL ERROR?
>
> I want to receive mails concerning fatal errors, so setting the Messages resources isn't an option.
>
> Can you please help?
>
> Thank you in advance
>
> Bruno

Hello Bruno,

You cannot do this the way you are currently trying because, as you have seen, Bacula will cancel the job, it will show up as a "non good" (canceled) job in the catalog, and you will get the failed-job email.

Fortunately, there is a new feature which was added recently that should solve this issue for you. It is called 'Run Queue Advanced Control' and it adds a new "RunsWhen" setting for your job RunScripts. The new setting is "RunsWhen = queued".

The idea is that instead of using the AllowDuplicateJobs, DisableDuplicateJobs, and CancelQueuedDuplicates job options to control whether a job is allowed to be queued/started, you add a RunScript{} stanza to your job, set the RunScript's "RunsWhen = queued", and have the RunScript's "Command =" setting point to a custom script (we have examples). The script's return code/error level will determine whether the job enters the queue, or is just dropped and forgotten about - producing no canceled job, and no job error email.

Using this new advanced "RunsWhen = queued" level, you should be able to accomplish what you are looking to do:

- Prevent same jobs from being queued when the same job is already running
- Prevent duplicate jobs from being canceled and error emails from being sent

This new feature is available since Bacula Community version 15.0.x, which closely tracks Bacula Enterprise v16.0.x. It is documented here in the Enterprise manual:
https://docs.baculasystems.com/BETechnicalReference/Director/DirectorResourceTypes/JobResource/index.html#job-resource

The section you are looking for is named: "Notes about the Run Queue Advanced Control with RunsWhen=Queued"

Please make sure you are running Community version v15.0.2 first, then give this a try and let me know if this helps.

Best regards,
Bill
--
Bill Arlofski
wa...@pr... |
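Bill's suggestion could be sketched roughly like this. The check-script path and its internal logic are hypothetical placeholders; RunScript and "RunsWhen = Queued" are the documented directives this thread refers to:

```conf
# Hedged sketch, not a tested configuration (requires Bacula Community
# >= 15.0.x). /opt/bacula/scripts/check-queue.sh is an invented placeholder;
# a non-zero exit code from it silently drops the job instead of queuing it
# and later canceling it, so no error email is produced.
Job {
  Name = "big-full-backup"
  JobDefs = "DefaultJob"
  RunScript {
    RunsWhen = Queued
    RunsOnClient = no                                  # runs on the Director
    Command = "/opt/bacula/scripts/check-queue.sh %n"  # %n = job name
  }
}
```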
From: Stefan G. W. <li...@xu...> - 2024-09-11 07:01:00
|
On 03.09.24 at 08:05, Stefan G. Weichinger wrote:
> On 31.08.24 at 08:25, Stefan G. Weichinger wrote:
>
>> Logs show:
>>
>> # journalctl -u bacula-fd
>> Aug 28 20:07:16 samba bacula-fd[958]: samba-fd: job.c:3698-5189 Bad
>> response from SD to Append Data command. Wanted 3000 OK data
>> Aug 28 20:07:16 samba bacula-fd[958]: , got len=25 msg="3903 Error
>> append data: "
>> Aug 28 20:07:53 samba bacula-fd[958]: samba-fd: job.c:3698-5190 Bad
>> response from SD to Append Data command. Wanted 3000 OK data
>> Aug 28 20:07:53 samba bacula-fd[958]: , got len=25 msg="3903 Error
>> append data: "
>
> I see that every night now (!?) Should I open a new thread for this?

This behavior has occurred every day since then. The jobs look good... I don't know how to interpret this. |