From: Arno L. <al...@it...> - 2026-04-23 14:29:35

Hi Andrew,

just a very quick suggestion for now -- you could try disabling transport encryption and compression and see if you get better throughput.

And one question about the change: have you touched the client on the Solaris host or its configuration (beyond changing credentials for the DIR)? The reason I ask is that, as far as I recall, the OpenSSL and possibly compression libraries on Solaris may simply be slow compared to other platforms.

'status client=<the-solaris-host-fd>' should give a bit of information about the client build in its header, and that may become interesting when we try to find out if the FD is potentially problematic.

Cheers,
Arno

On 23.04.2026 at 13:28, Andrew Watkins wrote:
> Hello,
>
> Have a strange one here. I have moved to a new Bacula server and the
> read performance for a Solaris client is three times worse.
>
> All other clients (Windows, Linux) are good.
>
> Old setup:
>
> Solaris Bacula 13.04 server (slow disks for spooling), LTO5. When backing
> up a Solaris file server I get:
>
> iostat gives me about 100 MB/s
> Bacula backup report gives me ~54,664.4 KB/s
>
> New setup:
>
> Ubuntu Bacula 13.04 server (fast disks for spooling), LTO9. When backing up
> the same Solaris file server I get:
>
> iostat gives me about 30 MB/s
> Bacula backup report gives me ~4,661.3 KB/s
>
> But other clients (Windows & Linux) report good rates.
>
> The conf files are near enough the same, so any ideas why Solaris is so
> slow when backing up to a Linux server?
>
> Thanks,
>
> Andrew
>
> --
> *****************************************************
> ***** Support Request to...@dc... *****
> *****************************************************
> * Andrew Watkins *
> * Birkbeck, University of London *
> * Computing and Mathematical Sciences *
> * http://notallmicrosoft.blogspot.com *
> *****************************************************
>
> _______________________________________________
> Bacula-users mailing list
> Bac...@li...
> https://lists.sourceforge.net/lists/listinfo/bacula-users

--
Arno Lehmann
IT-Service Lehmann
Sandstr. 6, 49080 Osnabrück
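Arno's two suggestions can be sketched as configuration and console fragments. This is a sketch only -- the daemon name and paths are placeholders, and it assumes the standard `Comm Compression` and `TLS Enable` directives of the File Daemon resource:

```
# bacula-fd.conf on the Solaris host -- sketch only; name/paths are placeholders
FileDaemon {
  Name = solaris-host-fd
  FDport = 9102
  WorkingDirectory = /opt/bacula/working
  Pid Directory = /opt/bacula/working
  Comm Compression = no   # skip compression of the data channel
  TLS Enable = no         # skip transport encryption (Director side must not Require TLS)
}

# then, from bconsole on the Director, to see the client build info
# in the status header:
#   status client=solaris-host-fd
```

Restart the FD after the change; if throughput recovers, the bottleneck is likely the Solaris OpenSSL/compression libraries rather than the network or the SD.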
From: Andrew W. <a.w...@bb...> - 2026-04-23 14:01:39

Hello,

Have a strange one here. I have moved to a new Bacula server and the read performance for a Solaris client is three times worse.

All other clients (Windows, Linux) are good.

Old setup:

Solaris Bacula 13.04 server (slow disks for spooling), LTO5. When backing up a Solaris file server I get:

iostat gives me about 100 MB/s
Bacula backup report gives me ~54,664.4 KB/s

New setup:

Ubuntu Bacula 13.04 server (fast disks for spooling), LTO9. When backing up the same Solaris file server I get:

iostat gives me about 30 MB/s
Bacula backup report gives me ~4,661.3 KB/s

But other clients (Windows & Linux) report good rates.

The conf files are near enough the same, so any ideas why Solaris is so slow when backing up to a Linux server?

Thanks,

Andrew

--
*****************************************************
***** Support Request to...@dc... *****
*****************************************************
* Andrew Watkins *
* Birkbeck, University of London *
* Computing and Mathematical Sciences *
* http://notallmicrosoft.blogspot.com *
*****************************************************
From: A.F.M <a.f...@pr...> - 2026-04-22 11:28:41

Hi Heitor,

I hope this message finds you well. I am reaching out to request more detailed information regarding how your solution operates. From my understanding, it appears that there is a one-way synchronization involved. However, I would like to clarify how the database information is handled in this process. Are the databases also synchronized in the same manner?

Additionally, I noticed that you mentioned costs associated with your solution. Is there a dedicated product website where I can find more information about pricing and features?

Furthermore, if I understood correctly, your support seems to be focused primarily on Bacula Community. Could you elaborate on whether you support just the plugin, or do you also provide assistance for issues that arise within Bacula Community in conjunction with your plugin? In the event of a problem, will we need to rely solely on the community for support, or do you have a system in place to assist with these issues?

I also wanted to address your special offer, which claims that we can transfer our current license in favor of your product, such as Bacula Enterprise. However, when I compare the two solutions, I notice a significant gap between Bacula Community and Bacula Enterprise. Do you have plans to bridge this gap? As it stands, the current offer makes it challenging to consider replacing an existing product.

I look forward to receiving more information about your product, its features, and any future plans you may have. Thank you for your attention to these questions.
From: Heitor F. <hei...@gm...> - 2026-04-21 17:29:23

Hello Users,

Just FYI: Introducing **PodHeitor vSphere BRC** — the first plugin that brings enterprise-grade VMware vSphere Backup, Replication & Conversion to Bacula Community Edition.

THREE PRODUCTS IN ONE PLUGIN:

* BACKUP — Image-level Full/Incremental/Differential via VADP + CBT
* REPLICATION — CBT-based async VM replication with 10-mode DR failover
* CONVERSION — Cross-hypervisor restore: vSphere <-> Hyper-V <-> Proxmox/KVM

WHY SWITCH?

* No Bacula Enterprise license required — works with FREE Bacula Community
* 10-mode DR lifecycle: seed, push, failover (test/planned/unplanned/permanent), failback, reprotect
* Automatic network mapping and IP reconfiguration on failover
* TLS-encrypted DR protocol with constant-time authentication
* Snapshot-based restore points on replica VM
* Written in Rust — memory-safe, high-performance, zero GC pauses
* 12/12 replication tests PASSED — 100% success rate

WHAT YOU CAN REPLACE:

* Bacula Enterprise vSphere plugin — we do more, for less
* Veeam Backup & Replication — same features, fraction of the cost
* Commvault / Netbackup — enterprise features without enterprise pricing
* Zerto / VMware SRM — native replication built into your backup infrastructure

SPECIAL OFFER:

Bring your Bacula Enterprise, Veeam, Commvault or Netbackup quote or renewal proposal. We guarantee at least 50% discount, with far more features.

COMPATIBLE WITH:

* VMware ESXi 7.0 / 8.0 / 8.0U3
* vCenter 7.0 / 8.0 (optional — standalone ESXi supported)
* VDDK 8.0+ / 9.0+
* Oracle Linux 9 / RHEL 9 / Rocky Linux 9 / AlmaLinux 9
* Bacula Community 15.0.x

Ready to protect your VMware infrastructure at a fraction of the cost?

Contact us:
Heitor Faria — Creator of PodHeitor
Email: he...@op...
Phone: +1 789 726-1749
WhatsApp: +55 61 98268-4220

---
Copyright (c) 2026 Heitor Faria — All Rights Reserved

--
Regards,
Heitor faria (Miami)
https://www.youtube.com/@podheitor
WhatsApp: +1 786-726-1749 | +55 61 98268-4220
From: Marcin H. <gan...@gm...> - 2026-04-21 03:04:10

On Tue, 21 Apr 2026 at 04:58, Davide <df...@df...> wrote:
>
> Hello Marcin,
>
> Congratulations for this new release! 🎉💪

Thanks, Davide! Nice to hear :-)

Best regards,
Marcin
From: Davide <df...@df...> - 2026-04-21 02:58:50

Hello Marcin,

Congratulations for this new release! 🎉💪

Best regards
Davide

On Mon, Apr 20, 2026 at 7:53 AM Marcin Haba <gan...@gm...> wrote:
> Hello Community,
>
> This Monday is quite special for Bacularis, because today we've
> released a new version of the web interface.
>
> Version 6.1.0 brings several interesting features. Among them, we'd
> especially like to highlight the new global Bacula resource search. We
> designed it to be user-friendly, simple, and configurable.
>
> You can also read about the global search and other features in the
> official release notes for version 6.1.0:
>
> https://bacularis.app/news/160/36/New-release-Bacularis-6.1.0
>
> Feel free to check it out. Feedback and suggestions are very welcome.
>
> We wish you smooth installations and upgrades.
>
> On behalf of The Bacularis Team
> Marcin Haba (gani)
>
> _______________________________________________
> Bacula-users mailing list
> Bac...@li...
> https://lists.sourceforge.net/lists/listinfo/bacula-users
From: Heitor F. <hei...@gm...> - 2026-04-20 11:17:16

Hello,
How much time does your team spend waiting for Bacula restore jobs just to
recover a single accidentally deleted file?
If you manage Bacula Community Edition environments, this scenario is
all too familiar:
• Backup completed. ✓
• File lost or corrupted. ✗
• Restore job started. ⏳ (wait another 30 minutes...)
The PodHeitor BRC Plugin breaks this cycle for good.
───────────────────────────────────────────────────────────────────────────────
WHAT IS THE PODHEITOR BRC PLUGIN?
───────────────────────────────────────────────────────────────────────────────
A native Bacula Storage Daemon plugin that replicates files in real time —
during the backup itself — directly to a target filesystem directory.
→ No separate replication tools
→ No additional agents on client machines
→ No proprietary appliances or Bacula Enterprise licensing fees
→ 100% compatible with Bacula Community Edition 11.x, 14.x, and 15.x
When the backup finishes, files ARE ALREADY available at the destination.
Recovery becomes a directory browse — nothing more.
───────────────────────────────────────────────────────────────────────────────
3 PILLARS: BACKUP · REPLICATION · CONVERSION
───────────────────────────────────────────────────────────────────────────────
[B] NATIVE BACKUP
Full, Incremental, and Differential — all Bacula backup levels
supported.
Complete metadata fidelity: ACLs, xattrs, sparse files, UID/GID,
permissions, and timestamps.
[R] INTELLIGENT REPLICATION
✔ Mirror mode — 1:1 replica, orphans cleaned on Full jobs
✔ Retention mode — versioned file history
✔ Multi-site fan-out — replicate to multiple destinations in parallel
✔ BLAKE3 skip-unchanged — only writes files with changed content
✔ Bandwidth throttling — protect network links (K/M/G suffixes)
✔ Consistency groups — coordinated replication across multiple jobs
✔ Failover automation — automatic promotion/demotion hooks
[C] CONVERSION & COMPLIANCE
✔ Transparent stream decompression (zlib / LZ4)
✔ AES-256-GCM encryption at rest
✔ RPO/RTO compliance reporting (JSON + Markdown)
✔ Snapshot integration with LVM, ZFS, and Btrfs
───────────────────────────────────────────────────────────────────────────────
REAL-WORLD USE CASES
───────────────────────────────────────────────────────────────────────────────
→ Warm DR site: replicated files ready for immediate failover
→ Near-line instant recovery: access files without running a restore job
→ Dev/test dataset refresh: keep test environments in sync with production
→ Compliance and audit: automated RPO/RTO evidence generation
→ Multi-site replication: fan-out to multiple geographic locations
───────────────────────────────────────────────────────────────────────────────
COMPATIBILITY
───────────────────────────────────────────────────────────────────────────────
• Bacula Community Edition 11.x, 14.x, 15.x
• Linux x86_64 / aarch64
• RHEL / OEL / Rocky Linux 8 and 9
• Debian 11 and 12 | Ubuntu 22.04+
• Target filesystems: ext4, XFS, Btrfs (ACL-enabled recommended)
───────────────────────────────────────────────────────────────────────────────
INSTALLATION IN 3 STEPS
───────────────────────────────────────────────────────────────────────────────
1. sudo bash podheitor-replica-sd-0.4.0-linux-x86_64.run \
--target=/mnt/replica_dest
2. Add Plugin Directory to your Storage resource in bacula-sd.conf
3. sudo systemctl restart bacula-sd
Done. Your very next backup will replicate in real time.
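Step 2 above can be sketched as a bacula-sd.conf fragment. This is illustrative only -- the resource name and paths are placeholders, not taken from the announcement; `Plugin Directory` is the standard Storage-resource directive the SD uses to locate plugins:

```
# bacula-sd.conf -- illustrative fragment; adjust names and paths to your install
Storage {
  Name = backup1-sd                        # placeholder SD name
  SDPort = 9103
  WorkingDirectory = /opt/bacula/working
  Pid Directory = /opt/bacula/working
  Plugin Directory = /opt/bacula/plugins   # where the installer placed the plugin .so
}
```

After editing, restart the SD (step 3) so it scans the plugin directory on startup.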
───────────────────────────────────────────────────────────────────────────────
WHY CHOOSE THE PODHEITOR BRC PLUGIN?
───────────────────────────────────────────────────────────────────────────────
✦ No migration cost — leverages the Bacula infrastructure you already own
✦ No learning curve — minimal configuration, transparent operation
✦ No vendor lock-in — replicated files are plain filesystem files
✦ High performance — static Rust binary, zero runtime dependencies
✦ Security — AES-256-GCM encryption, data never travels in the clear
✦ Enterprise support — SLA, DR project consulting, corporate licensing
───────────────────────────────────────────────────────────────────────────────
INTERESTED? GET IN TOUCH
───────────────────────────────────────────────────────────────────────────────
Heitor Faria
E-mail: he...@op...
Phone: +1 789 726-1749
WhatsApp: +55 61 98268-4220
We offer enterprise licensing, priority support, implementation services,
and consulting for Disaster Recovery projects.
Request a demo or a free evaluation by replying to this e-mail.
───────────────────────────────────────────────────────────────────────────────
© 2026 Heitor Faria — All Rights Reserved.
PodHeitor BRC Plugin for Bacula Community Edition.
--
Atenciosamente,
Heitor faria (Miami)
https://www.youtube.com/@podheitor
WhatsApp: +1 786-726-1749 | +55 61 98268-4220
From: Marcin H. <gan...@gm...> - 2026-04-20 05:51:04

Hello Community,

This Monday is quite special for Bacularis, because today we've released a new version of the web interface.

Version 6.1.0 brings several interesting features. Among them, we'd especially like to highlight the new global Bacula resource search. We designed it to be user-friendly, simple, and configurable.

You can also read about the global search and other features in the official release notes for version 6.1.0:

https://bacularis.app/news/160/36/New-release-Bacularis-6.1.0

Feel free to check it out. Feedback and suggestions are very welcome.

We wish you smooth installations and upgrades.

On behalf of The Bacularis Team
Marcin Haba (gani)
From: Heitor F. <hei...@gm...> - 2026-04-16 01:32:31

Dear Bacula Users,

I'm proud to announce my unprecedented product for the Bacula Community.

PodHeitor Hyper-V Backup Plugin for Bacula

Full VM backup and restore for Microsoft Hyper-V — direct VHDX access, RCT block-level incremental, application-consistent.

Back up Hyper-V virtual machines including VHDX disks, configuration, checkpoints, and metadata. No Export-VM needed — reads VHDX files directly with production checkpoints for consistency, and uses Resilient Change Tracking (RCT) for true block-level incremental/differential backups.

Built with Rust — the backend engine (podheitor-hyperv-backend.exe) is implemented in Rust, delivering memory safety, zero-cost abstractions, and native performance. This is a state-of-the-art Hyper-V backup plugin — no PowerShell spaghetti, no fragile script chains. Production-grade systems engineering from the ground up.

Why Direct VHDX + RCT:

* Challenge: VHD files locked while VM runs
  Approach: Production checkpoint freezes parent VHDX for reading
* Challenge: Inconsistent state during copy
  Approach: VSS quiesce ensures application consistency
* Challenge: Export-VM requires full local copy
  Approach: No export — read VHDX in place, zero extra disk space
* Challenge: Full VHDX re-sent on each incremental
  Approach: RCT tracks changed blocks — send only modified regions
* Challenge: Fast incremental for large disks
  Approach: Block-level delta: 100GB disk with 2GB changed → only 2GB transferred
* Challenge: Manual snapshot management
  Approach: Plugin creates + removes checkpoint automatically
* Challenge: No catalog of backed-up files
  Approach: Bacula catalog tracks every file — restore individual VHDs

Datasheet:
https://baculaenterprise.com.br/wp-content/uploads/2026/04/PodHeitor-Hyper-V-Backup-Plugin-for-Bacula-.pdf

Please let me know in a private message if you need this plugin or any other. I'm taking orders for Bacula Community plugin development.

Regards,
Heitor faria (Miami)
https://www.youtube.com/@podheitor
WhatsApp: +1 786-726-1749 | +55 61 98268-4220
From: Lloyd B. <llo...@by...> - 2026-04-15 17:44:56

Thanks everyone for your responses. I'm working on creating a
demonstration system with a few VMs, to see what I can figure out. I'm
definitely running into a problem, but the messages are confusing, so
I'm hoping that someone here can spot it. All 3 hosts are running
bacula 15.0.3 on Debian 13, installed from the Debian 13/Trixie
repository. If anyone has any ideas what could be going wrong, I'd love
to hear them.
I'll attach the relevant config files, stripped of the unnecessary
comments, etc., and with passwords redacted. Here's the general structure:
Host: backupdir - backup director, running `bacula-dir`
Host: backupsddemo{1,2} - running both `bacula-sd` and `bacula-fd`
Both `backupsddemo*` hosts have a single local disk pool stored at
`/backupdata`. There are 2 pools defined as "FullPool" (on
`backupsddemo1`) and "IncPool" (on `backupsddemo2`). I was able to run a
Full and an Incremental for job `Test-backupsddemo1`. I wrote a file
between the Full and the Incremental, so I could do the restore for each
type and differentiate. The restores went perfectly, and I could tell
by the presence/absence of that file, that it restored the correct one.
When I try to run the VirtualFull, though, it fails. The relevant error
seems to be the following, taken from the director running with `-d 200`:
> backupdir-dir: msgchan.c:297-14 Rstore=backupsddemo2-sd
> backupdir-dir: msgchan.c:308-14 rstore >stored: use
> storage=backupsddemo2-sd media_type=File2 pool_name=IncPool
> pool_type=Backup append=0 copy=0 stripe=0 wait=1
> backupdir-dir: msgchan.c:315-14 >stored: use device=FileStorage-demo2-dev1
> backupdir-dir: getmsg.c:152-14 bget_dirmsg n=-1 msglen=-6 is_stop=0:
> backupdir-dir: getmsg.c:152-14 bget_dirmsg n=106 msglen=106 is_stop=0:
> 3924 Device "FileStorage-demo2-dev1" not in SD Device resources or no
> matching Media Type or is disabled.
>
> backupdir-dir: msgchan.c:321-14 <stored: 3924 Device
> "FileStorage-demo2-dev1" not in SD Device resources or no matching
> Media Type or is disabled.
For reference, the corresponding SD running with `-d 200` is showing no
output at all when the VirtualFull tries to run, but does show output
for things like `status storage...`, and when I do the Incremental to
it, or the restore from it.
At this point, I'm confused by the "3924 Device "FileStorage-demo2-dev1"
not in SD Device resources or no matching Media Type or is disabled.".
As far as I can tell, the Media Type ("File2"), the pool ("IncPool") and
the device ("FileStorage-demo2-dev1"), are all correct for that SD.
If there were any evidence that it was failing because it was trying to
read the Full backup (or the corresponding `Full-*` file) from SD2, that
would make sense, and confirm that I can't do the VFs this way. But
that's not the evidence I'm seeing.
Thanks,
Lloyd
On 4/8/26 13:24, Lloyd Brown wrote:
> Hey, all. I'm in the middle of designing the next generation of our
> backup system, which will likely be based on Bacula 15. This time,
> we'll be using both disk and tape, vs the disk-only we've run before,
> which brings up a question or two that I haven't encountered before.
>
> When running VirtualFull jobs, does the source material (previous
> Full, latest Differential, subsequent Incrementals), need to all
> reside in the same Pool? In multiple pools but available to the same SD?
>
> For this system, we're planning on 2 SD hosts, with local disks, and a
> tape library attached to one of them. We thought that we could keep
> the Fulls/VFs entirely on tape, only store the incrementals on disk,
> and write new VFs directly to tape. As a result, we trimmed down the
> size of our disk pool for cost reasons (HDDs and SSD are both crazy
> expensive right now). I don't think I'll have enough space in the
> disk pool to store Incrementals *and* a previous Full/VF. If that's
> true, I need to re-think my design a bit. It may mean we just do true
> Fulls to tape, and Incrementals on disk, and not get to use VFs at all.
>
> Thanks,
>
> Lloyd
>
--
Lloyd Brown
HPC Systems Administrator
Office of Research Computing
Brigham Young University
http://rc.byu.edu
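For anyone landing here via the same 3924 message: the SD matches the device name and media type the Director sends against its own Device resources, so the Director-side Storage resource and the SD-side Device resource have to agree field for field. A sketch using the names from this thread (address, password, and the non-essential Device options are placeholders, not Lloyd's actual configs):

```
# bacula-dir.conf (Director side) -- address/password are placeholders
Storage {
  Name = backupsddemo2-sd
  Address = backupsddemo2
  SDPort = 9103
  Password = "redacted"
  Device = FileStorage-demo2-dev1    # must name a Device defined on *this* SD
  Media Type = File2                 # must match that Device's Media Type
}

# bacula-sd.conf on backupsddemo2 -- the Device the Director refers to
Device {
  Name = FileStorage-demo2-dev1
  Media Type = File2
  Device Type = File
  Archive Device = /backupdata
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
}
```

If all three of name, Media Type, and enabled state line up and the error persists, it is worth checking (as the follow-up in this thread does) which SD actually received the "use storage" command.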
From: Lloyd B. <llo...@by...> - 2026-04-15 16:56:09

Hey, guys. This email was in response to an earlier email I sent. For
some reason, this one came through the list before its parent. Maybe
because it has an attachment? But you can see the text of the delayed
parent email below, if needed.

Lloyd
On 4/15/26 10:18, Lloyd Brown wrote:
> Never mind. By running `backupsddemo1` with the `-d 200` logging, I
> can now verify that the director is asking *it* to connect with
> `FileStorage-demo2-dev1`, which is not on that host. So, despite the
> fact that the director *claimed* it was talking to `backupsddemo2-sd`,
> it was actually trying to communicate with `backupsddemo1-sd`.
>
> Lloyd
>
>
> log excerpt:
>
>> backupsddemo1-sd: dircmd.c:254-21 Do command: bootstrap
>> backupsddemo1-sd: dircmd.c:1897-21 === Bootstrap file ===
>> backupsddemo1-sd: dircmd.c:1899-21 Storage="backupsddemo1-sd"
>> backupsddemo1-sd: dircmd.c:1899-21 Volume="Full-0001"
>> backupsddemo1-sd: dircmd.c:1899-21 MediaType="File1"
>> backupsddemo1-sd: dircmd.c:1899-21 Device="FileStorage-demo1-dev1"
>> backupsddemo1-sd: dircmd.c:1899-21 VolSessionId=3
>> backupsddemo1-sd: dircmd.c:1899-21 VolSessionTime=1776263278
>> backupsddemo1-sd: dircmd.c:1899-21 VolAddr=1016684-2033139
>> backupsddemo1-sd: dircmd.c:1899-21 FileIndex=1-1781
>> backupsddemo1-sd: dircmd.c:1899-21 FileIndex=1783-1803
>> backupsddemo1-sd: dircmd.c:1899-21 Count=1802
>> backupsddemo1-sd: dircmd.c:1899-21 Storage="backupsddemo2-sd"
>> backupsddemo1-sd: dircmd.c:1899-21 Volume="Inc-0017"
>> backupsddemo1-sd: dircmd.c:1899-21 MediaType="File2"
>> backupsddemo1-sd: dircmd.c:1899-21 Device="FileStorage-demo2-dev1"
>> backupsddemo1-sd: dircmd.c:1899-21 VolSessionId=1
>> backupsddemo1-sd: dircmd.c:1899-21 VolSessionTime=1776263582
>> backupsddemo1-sd: dircmd.c:1899-21 VolAddr=251-969
>> backupsddemo1-sd: dircmd.c:1899-21 FileIndex=1-2
>> backupsddemo1-sd: dircmd.c:1899-21 Count=2
>> backupsddemo1-sd: dircmd.c:1907-21 === end bootstrap file ===
>> Next : 0x38010da8
>> Root bsr : 0x38010be8
>> VolumeName : Full-0001
>> MediaType : File1
>> Device : FileStorage-demo1-dev1
>> Slot : 0
>> SessId : 3
>> SessTime : 1776263278
>> VolAddr : 1016684-2033139
>> FileIndex : 1-1781
>> FileIndex : 1783-1803
>> count : 1802
>> found : 0
>> done : no
>> positioning : 1
>> fast_reject : 1
>>
>> Next : 0x0
>> Root bsr : 0x38010be8
>> VolumeName : Inc-0017
>> MediaType : File2
>> Device : FileStorage-demo2-dev1
>> Slot : 0
>> SessId : 1
>> SessTime : 1776263582
>> VolAddr : 251-969
>> FileIndex : 1-2
>> count : 2
>> found : 0
>> done : no
>> positioning : 0
>> fast_reject : 0
>> backupsddemo1-sd: vol_mgr.c:152-21 add read_vol=Full-0001 JobId=21
>> backupsddemo1-sd: vol_mgr.c:152-21 add read_vol=Inc-0017 JobId=21
>> backupsddemo1-sd: dircmd.c:240-21 <dird: use storage=backupsddemo2-sd
>> media_type=File2 pool_name=IncPool pool_type=Backup append=0 copy=0
>> stripe=0 wait=1
>>
>> backupsddemo1-sd: dircmd.c:254-21 Do command: use storage=
>> backupsddemo1-sd: reserve.c:286-21 <dird: use
>> storage=backupsddemo2-sd media_type=File2 pool_name=IncPool
>> pool_type=Backup append=0 copy=0 stripe=0 wait=1
>> backupsddemo1-sd: reserve.c:315-21 <dird device: use
>> device=FileStorage-demo2-dev1
>> backupsddemo1-sd: reserve.c:248-21 Inx=1 mntVol=1 exact=1 chgonly=0
>> low_use=1 any=0
>> backupsddemo1-sd: reserve.c:466-21 Start find_suit_dev PrefMnt=1
>> exact=1 suitable=0 chgronly=0 any=0
>> backupsddemo1-sd: reserve.c:591-21 search res for FileStorage-demo2-dev1
>> backupsddemo1-sd: reserve.c:673-21 Try match res=FileStorage-demo1-dev1
>> backupsddemo1-sd: reserve.c:567-21 No usable device found.
>> backupsddemo1-sd: reserve.c:577-21 Leave find_suit_dev: no dev found.
>> backupsddemo1-sd: reserve.c:248-21 Inx=2 mntVol=1 exact=0 chgonly=1
>> low_use=1 any=0
>> backupsddemo1-sd: reserve.c:466-21 Start find_suit_dev PrefMnt=1
>> exact=0 suitable=0 chgronly=1 any=0
>> backupsddemo1-sd: reserve.c:591-21 search res for FileStorage-demo2-dev1
>> backupsddemo1-sd: reserve.c:567-21 No usable device found.
>> backupsddemo1-sd: reserve.c:577-21 Leave find_suit_dev: no dev found.
>> backupsddemo1-sd: reserve.c:248-21 Inx=3 mntVol=1 exact=0 chgonly=1
>> low_use=0 any=1
>> backupsddemo1-sd: reserve.c:466-21 Start find_suit_dev PrefMnt=1
>> exact=0 suitable=0 chgronly=1 any=1
>> backupsddemo1-sd: reserve.c:591-21 search res for FileStorage-demo2-dev1
>> backupsddemo1-sd: reserve.c:567-21 No usable device found.
>> backupsddemo1-sd: reserve.c:577-21 Leave find_suit_dev: no dev found.
>> backupsddemo1-sd: reserve.c:248-21 Inx=4 mntVol=1 exact=1 chgonly=0
>> low_use=1 any=0
>> backupsddemo1-sd: reserve.c:466-21 Start find_suit_dev PrefMnt=1
>> exact=1 suitable=0 chgronly=0 any=0
>> backupsddemo1-sd: reserve.c:591-21 search res for FileStorage-demo2-dev1
>> backupsddemo1-sd: reserve.c:673-21 Try match res=FileStorage-demo1-dev1
>> backupsddemo1-sd: reserve.c:567-21 No usable device found.
>> ...
>> backupsddemo1-sd: reserve.c:248-21 Inx=7 mntVol=1 exact=0 chgonly=0
>> low_use=0 any=1
>> backupsddemo1-sd: reserve.c:466-21 Start find_suit_dev PrefMnt=1
>> exact=0 suitable=0 chgronly=0 any=1
>> backupsddemo1-sd: reserve.c:591-21 search res for FileStorage-demo2-dev1
>> backupsddemo1-sd: reserve.c:673-21 Try match res=FileStorage-demo1-dev1
>> backupsddemo1-sd: reserve.c:567-21 No usable device found.
>> backupsddemo1-sd: reserve.c:577-21 Leave find_suit_dev: no dev found.
>> backupsddemo1-sd: reserve.c:417-21 Fail. !suitable_device ||
>> !wait_for_device
>> backupsddemo1-sd: reserve.c:432-21 >dird: 3924 Device
>> "FileStorage-demo2-dev1" not in SD Device resources or no matching
>> Media Type or is disabled.
>> backupsddemo1-sd: dircmd.c:257-21 Command use storage= requests quit
>> backupsddemo1-sd: vol_mgr.c:197-21 remove_read_vol=Full-0001 JobId=21
>> found=1
>> backupsddemo1-sd: vol_mgr.c:197-21 remove_read_vol=Inc-0017 JobId=21
>> found=1
>> backupsddemo1-sd: jcr.c:184-21 write_last_jobs seek to 192
>>
>
> On 4/15/26 09:52, Lloyd Brown wrote:
>> Thanks everyone for your responses. I'm working on creating a
>> demonstration system with a few VMs, to see what I can figure out.
>> I'm definitely running into a problem, but the messages are
>> confusing, so I'm hoping that someone here can spot it. All 3 hosts
>> are running bacula 15.0.3 on Debian 13, installed from the Debian
>> 13/Trixie repository. If anyone has any ideas what could be going
>> wrong, I'd love to hear them.
>>
>> I'll attach the relevant config files, stripped of the unnecessary
>> comments, etc., and with passwords redacted. Here's the general
>> structure:
>>
>> Host: backupdir - backup director, running `bacula-dir`
>>
>> Host: backupsddemo{1,2} - running both `bacula-sd` and `bacula-fd`
>>
>> Both `backupsddemo*` hosts have a single local disk pool stored at
>> `/backupdata`. There are 2 pools defined as "FullPool" (on
>> `backupsddemo1`) and "IncPool" (on `backupsddemo2`). I was able to
>> run a Full and an Incremental for job `Test-backupsddemo1`. I wrote a
>> file between the Full and the Incremental, so I could do the restore
>> for each type and differentiate. The restores went perfectly, and I
>> could tell by the presence/absence of that file, that it restored the
>> correct one.
>>
>> When I try to run the VirtualFull, though, it fails. The relevant
>> error seems to be the following, taken from the director running with
>> `-d 200`:
>>
>>
>>> backupdir-dir: msgchan.c:297-14 Rstore=backupsddemo2-sd
>>> backupdir-dir: msgchan.c:308-14 rstore >stored: use
>>> storage=backupsddemo2-sd media_type=File2 pool_name=IncPool
>>> pool_type=Backup append=0 copy=0 stripe=0 wait=1
>>> backupdir-dir: msgchan.c:315-14 >stored: use
>>> device=FileStorage-demo2-dev1
>>> backupdir-dir: getmsg.c:152-14 bget_dirmsg n=-1 msglen=-6 is_stop=0:
>>> backupdir-dir: getmsg.c:152-14 bget_dirmsg n=106 msglen=106
>>> is_stop=0: 3924 Device "FileStorage-demo2-dev1" not in SD Device
>>> resources or no matching Media Type or is disabled.
>>>
>>> backupdir-dir: msgchan.c:321-14 <stored: 3924 Device
>>> "FileStorage-demo2-dev1" not in SD Device resources or no matching
>>> Media Type or is disabled.
>>
>>
>> For reference, the corresponding SD running with `-d 200` is showing
>> no output at all when the VirtualFull tries to run, but does show
>> output for things like `status storage...`, and when I do the
>> Incremental to it, or the restore from it.
>>
>> At this point, I'm confused by the "3924 Device
>> "FileStorage-demo2-dev1" not in SD Device resources or no matching
>> Media Type or is disabled.". As far as I can tell, the Media Type
>> ("File2"), the pool ("IncPool") and the device
>> ("FileStorage-demo2-dev1"), are all correct for that SD.
>>
>> If there were any evidence that it was failing because it was trying
>> to read the Full backup (or the corresponding `Full-*` file) from
>> SD2, that would make sense, and confirm that I can't do the VFs this
>> way. But that's not the evidence I'm seeing.
>>
>>
>> Thanks,
>>
>> Lloyd
>>
>>
>> On 4/8/26 13:24, Lloyd Brown wrote:
>>> Hey, all. I'm in the middle of designing the next generation of our
>>> backup system, which will likely be based on Bacula 15. This time,
>>> we'll be using both disk and tape, vs the disk-only we've run
>>> before, which brings up a question or two that I haven't encountered
>>> before.
>>>
>>> When running VirtualFull jobs, does the source material (previous
>>> Full, latest Differential, subsequent Incrementals), need to all
>>> reside in the same Pool? In multiple pools but available to the
>>> same SD?
>>>
>>> For this system, we're planning on 2 SD hosts, with local disks, and
>>> a tape library attached to one of them. We thought that we could
>>> keep the Fulls/VFs entirely on tape, only store the incrementals on
>>> disk, and write new VFs directly to tape. As a result, we trimmed
>> down the size of our disk pool for cost reasons (HDDs and SSDs are
>>> both crazy expensive right now). I don't think I'll have enough
>>> space in the disk pool to store Incrementals *and* a previous
>>> Full/VF. If that's true, I need to re-think my design a bit. It
>>> may mean we just do true Fulls to tape, and Incrementals on disk,
>>> and not get to use VFs at all.
>>>
>>> Thanks,
>>>
>>> Lloyd
>>>
--
Lloyd Brown
HPC Systems Administrator
Office of Research Computing
Brigham Young University
http://rc.byu.edu
|
|
From: Lloyd B. <llo...@by...> - 2026-04-15 16:19:03
|
Never mind. By running `backupsddemo1` with the `-d 200` logging, I
can now verify that the director is asking *it* to connect with
`FileStorage-demo2-dev1`, which is not on that host. So, despite the
fact that the director *claimed* it was talking to `backupsddemo2-sd`,
it was actually trying to communicate with `backupsddemo1-sd`.
Lloyd
log excerpt:
> backupsddemo1-sd: dircmd.c:254-21 Do command: bootstrap
> backupsddemo1-sd: dircmd.c:1897-21 === Bootstrap file ===
> backupsddemo1-sd: dircmd.c:1899-21 Storage="backupsddemo1-sd"
> backupsddemo1-sd: dircmd.c:1899-21 Volume="Full-0001"
> backupsddemo1-sd: dircmd.c:1899-21 MediaType="File1"
> backupsddemo1-sd: dircmd.c:1899-21 Device="FileStorage-demo1-dev1"
> backupsddemo1-sd: dircmd.c:1899-21 VolSessionId=3
> backupsddemo1-sd: dircmd.c:1899-21 VolSessionTime=1776263278
> backupsddemo1-sd: dircmd.c:1899-21 VolAddr=1016684-2033139
> backupsddemo1-sd: dircmd.c:1899-21 FileIndex=1-1781
> backupsddemo1-sd: dircmd.c:1899-21 FileIndex=1783-1803
> backupsddemo1-sd: dircmd.c:1899-21 Count=1802
> backupsddemo1-sd: dircmd.c:1899-21 Storage="backupsddemo2-sd"
> backupsddemo1-sd: dircmd.c:1899-21 Volume="Inc-0017"
> backupsddemo1-sd: dircmd.c:1899-21 MediaType="File2"
> backupsddemo1-sd: dircmd.c:1899-21 Device="FileStorage-demo2-dev1"
> backupsddemo1-sd: dircmd.c:1899-21 VolSessionId=1
> backupsddemo1-sd: dircmd.c:1899-21 VolSessionTime=1776263582
> backupsddemo1-sd: dircmd.c:1899-21 VolAddr=251-969
> backupsddemo1-sd: dircmd.c:1899-21 FileIndex=1-2
> backupsddemo1-sd: dircmd.c:1899-21 Count=2
> backupsddemo1-sd: dircmd.c:1907-21 === end bootstrap file ===
> Next : 0x38010da8
> Root bsr : 0x38010be8
> VolumeName : Full-0001
> MediaType : File1
> Device : FileStorage-demo1-dev1
> Slot : 0
> SessId : 3
> SessTime : 1776263278
> VolAddr : 1016684-2033139
> FileIndex : 1-1781
> FileIndex : 1783-1803
> count : 1802
> found : 0
> done : no
> positioning : 1
> fast_reject : 1
>
> Next : 0x0
> Root bsr : 0x38010be8
> VolumeName : Inc-0017
> MediaType : File2
> Device : FileStorage-demo2-dev1
> Slot : 0
> SessId : 1
> SessTime : 1776263582
> VolAddr : 251-969
> FileIndex : 1-2
> count : 2
> found : 0
> done : no
> positioning : 0
> fast_reject : 0
> backupsddemo1-sd: vol_mgr.c:152-21 add read_vol=Full-0001 JobId=21
> backupsddemo1-sd: vol_mgr.c:152-21 add read_vol=Inc-0017 JobId=21
> backupsddemo1-sd: dircmd.c:240-21 <dird: use storage=backupsddemo2-sd
> media_type=File2 pool_name=IncPool pool_type=Backup append=0 copy=0
> stripe=0 wait=1
>
> backupsddemo1-sd: dircmd.c:254-21 Do command: use storage=
> backupsddemo1-sd: reserve.c:286-21 <dird: use storage=backupsddemo2-sd
> media_type=File2 pool_name=IncPool pool_type=Backup append=0 copy=0
> stripe=0 wait=1
> backupsddemo1-sd: reserve.c:315-21 <dird device: use
> device=FileStorage-demo2-dev1
> backupsddemo1-sd: reserve.c:248-21 Inx=1 mntVol=1 exact=1 chgonly=0
> low_use=1 any=0
> backupsddemo1-sd: reserve.c:466-21 Start find_suit_dev PrefMnt=1
> exact=1 suitable=0 chgronly=0 any=0
> backupsddemo1-sd: reserve.c:591-21 search res for FileStorage-demo2-dev1
> backupsddemo1-sd: reserve.c:673-21 Try match res=FileStorage-demo1-dev1
> backupsddemo1-sd: reserve.c:567-21 No usable device found.
> backupsddemo1-sd: reserve.c:577-21 Leave find_suit_dev: no dev found.
> backupsddemo1-sd: reserve.c:248-21 Inx=2 mntVol=1 exact=0 chgonly=1
> low_use=1 any=0
> backupsddemo1-sd: reserve.c:466-21 Start find_suit_dev PrefMnt=1
> exact=0 suitable=0 chgronly=1 any=0
> backupsddemo1-sd: reserve.c:591-21 search res for FileStorage-demo2-dev1
> backupsddemo1-sd: reserve.c:567-21 No usable device found.
> backupsddemo1-sd: reserve.c:577-21 Leave find_suit_dev: no dev found.
> backupsddemo1-sd: reserve.c:248-21 Inx=3 mntVol=1 exact=0 chgonly=1
> low_use=0 any=1
> backupsddemo1-sd: reserve.c:466-21 Start find_suit_dev PrefMnt=1
> exact=0 suitable=0 chgronly=1 any=1
> backupsddemo1-sd: reserve.c:591-21 search res for FileStorage-demo2-dev1
> backupsddemo1-sd: reserve.c:567-21 No usable device found.
> backupsddemo1-sd: reserve.c:577-21 Leave find_suit_dev: no dev found.
> backupsddemo1-sd: reserve.c:248-21 Inx=4 mntVol=1 exact=1 chgonly=0
> low_use=1 any=0
> backupsddemo1-sd: reserve.c:466-21 Start find_suit_dev PrefMnt=1
> exact=1 suitable=0 chgronly=0 any=0
> backupsddemo1-sd: reserve.c:591-21 search res for FileStorage-demo2-dev1
> backupsddemo1-sd: reserve.c:673-21 Try match res=FileStorage-demo1-dev1
> backupsddemo1-sd: reserve.c:567-21 No usable device found.
> ...
> backupsddemo1-sd: reserve.c:248-21 Inx=7 mntVol=1 exact=0 chgonly=0
> low_use=0 any=1
> backupsddemo1-sd: reserve.c:466-21 Start find_suit_dev PrefMnt=1
> exact=0 suitable=0 chgronly=0 any=1
> backupsddemo1-sd: reserve.c:591-21 search res for FileStorage-demo2-dev1
> backupsddemo1-sd: reserve.c:673-21 Try match res=FileStorage-demo1-dev1
> backupsddemo1-sd: reserve.c:567-21 No usable device found.
> backupsddemo1-sd: reserve.c:577-21 Leave find_suit_dev: no dev found.
> backupsddemo1-sd: reserve.c:417-21 Fail. !suitable_device ||
> !wait_for_device
> backupsddemo1-sd: reserve.c:432-21 >dird: 3924 Device
> "FileStorage-demo2-dev1" not in SD Device resources or no matching
> Media Type or is disabled.
> backupsddemo1-sd: dircmd.c:257-21 Command use storage= requests quit
> backupsddemo1-sd: vol_mgr.c:197-21 remove_read_vol=Full-0001 JobId=21
> found=1
> backupsddemo1-sd: vol_mgr.c:197-21 remove_read_vol=Inc-0017 JobId=21
> found=1
> backupsddemo1-sd: jcr.c:184-21 write_last_jobs seek to 192
>
On 4/15/26 09:52, Lloyd Brown wrote:
> Thanks everyone for your responses. I'm working on creating a
> demonstration system with a few VMs, to see what I can figure out.
> I'm definitely running into a problem, but the messages are confusing,
> so I'm hoping that someone here can spot it. All 3 hosts are running
> bacula 15.0.3 on Debian 13, installed from the Debian 13/Trixie
> repository. If anyone has any ideas what could be going wrong, I'd
> love to hear them.
>
> I'll attach the relevant config files, stripped of the unnecessary
> comments, etc., and with passwords redacted. Here's the general
> structure:
>
> Host: backupdir - backup director, running `bacula-dir`
>
> Host: backupsddemo{1,2} - running both `bacula-sd` and `bacula-fd`
>
> Both `backupsddemo*` hosts have a single local disk pool stored at
> `/backupdata`. There are 2 pools defined as "FullPool" (on
> `backupsddemo1`) and "IncPool" (on `backupsddemo2`). I was able to run
> a Full and an Incremental for job `Test-backupsddemo1`. I wrote a
> file between the Full and the Incremental, so I could do the restore
> for each type and differentiate. The restores went perfectly, and I
> could tell by the presence/absence of that file, that it restored the
> correct one.
>
> When I try to run the VirtualFull, though, it fails. The relevant
> error seems to be the following, taken from the director running with
> `-d 200`:
>
>
>> backupdir-dir: msgchan.c:297-14 Rstore=backupsddemo2-sd
>> backupdir-dir: msgchan.c:308-14 rstore >stored: use
>> storage=backupsddemo2-sd media_type=File2 pool_name=IncPool
>> pool_type=Backup append=0 copy=0 stripe=0 wait=1
>> backupdir-dir: msgchan.c:315-14 >stored: use
>> device=FileStorage-demo2-dev1
>> backupdir-dir: getmsg.c:152-14 bget_dirmsg n=-1 msglen=-6 is_stop=0:
>> backupdir-dir: getmsg.c:152-14 bget_dirmsg n=106 msglen=106
>> is_stop=0: 3924 Device "FileStorage-demo2-dev1" not in SD Device
>> resources or no matching Media Type or is disa
>> bled.
>>
>> backupdir-dir: msgchan.c:321-14 <stored: 3924 Device
>> "FileStorage-demo2-dev1" not in SD Device resources or no matching
>> Media Type or is disabled.
>
>
> For reference, the corresponding SD running with `-d 200` is showing
> no output at all when the VirtualFull tries to run, but does show
> output for things like `status storage...`, and when I do the
> Incremental to it, or the restore from it.
>
> At this point, I'm confused by the "3924 Device
> "FileStorage-demo2-dev1" not in SD Device resources or no matching
> Media Type or is disabled.". As far as I can tell, the Media Type
> ("File2"), the pool ("IncPool") and the device
> ("FileStorage-demo2-dev1"), are all correct for that SD.
>
> If there were any evidence that it was failing because it was trying
> to read the Full backup (or the corresponding `Full-*` file) from SD2,
> that would make sense, and confirm that I can't do the VFs this way.
> But that's not the evidence I'm seeing.
>
>
> Thanks,
>
> Lloyd
>
>
> On 4/8/26 13:24, Lloyd Brown wrote:
>> Hey, all. I'm in the middle of designing the next generation of our
>> backup system, which will likely be based on Bacula 15. This time,
>> we'll be using both disk and tape, vs the disk-only we've run before,
>> which brings up a question or two that I haven't encountered before.
>>
>> When running VirtualFull jobs, does the source material (previous
>> Full, latest Differential, subsequent Incrementals), need to all
>> reside in the same Pool? In multiple pools but available to the same
>> SD?
>>
>> For this system, we're planning on 2 SD hosts, with local disks, and
>> a tape library attached to one of them. We thought that we could
>> keep the Fulls/VFs entirely on tape, only store the incrementals on
>> disk, and write new VFs directly to tape. As a result, we trimmed
> down the size of our disk pool for cost reasons (HDDs and SSDs are
>> both crazy expensive right now). I don't think I'll have enough
>> space in the disk pool to store Incrementals *and* a previous
>> Full/VF. If that's true, I need to re-think my design a bit. It may
>> mean we just do true Fulls to tape, and Incrementals on disk, and not
>> get to use VFs at all.
>>
>> Thanks,
>>
>> Lloyd
>>
--
Lloyd Brown
HPC Systems Administrator
Office of Research Computing
Brigham Young University
http://rc.byu.edu
|
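[Editorial note: the "3924 Device ... not in SD Device resources" message generally means the SD that received the `use device=` command has no Device resource with that name and Media Type. A minimal sketch of how the Director's Storage resource must line up with the SD's Device resource follows; resource names mirror the thread, while the address, port, and password are placeholders, and the remaining directives are typical file-device settings rather than the poster's actual configs.]

```conf
# bacula-dir.conf (Director): one Storage resource per SD device
Storage {
  Name = backupsddemo2-sd
  Address = backupsddemo2          # placeholder hostname
  SD Port = 9103
  Password = "REDACTED"            # placeholder
  Device = FileStorage-demo2-dev1  # must name a Device on THAT SD
  Media Type = File2               # must match the SD's Device below
}

# bacula-sd.conf on backupsddemo2: the Device the Director refers to
Device {
  Name = FileStorage-demo2-dev1
  Media Type = File2               # must match the Director's Storage
  Archive Device = /backupdata
  Device Type = File
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
}
```

If the Director's `use` command instead reaches a different SD (here, `backupsddemo1-sd` was asked for `FileStorage-demo2-dev1`, a device it does not define), the name/Media Type lookup fails on that daemon and it returns exactly this 3924 message.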
|
From: Andrea V. <ml...@ne...> - 2026-04-15 07:31:27
|
On 4/14/26 20:47, Martin Simmons wrote:
> Maybe the filesystem has some corruption?
>
> I don't know if fsck (or something like it) is possible on macOS. You
> might need to boot into Recovery Mode.
I'll try an "S.O.S." next time I am on site.
bye & Thanks
av.
|
|
From: Martin S. <ma...@li...> - 2026-04-14 18:47:52
|
>>>>> On Tue, 14 Apr 2026 10:48:37 +0200, Andrea Venturoli said:
>
> > check with: xattr -l "$filename"
>
> This gives "xattr: [Errno 1] Operation not permitted".
>
> However "ls -lah@ $filename" gives:
> com.apple.quarantine    -1B
I suspect ls also gets an error and doesn't check, so it just prints -1
(instead of the size of the attribute).
Maybe the filesystem has some corruption?
I don't know if fsck (or something like it) is possible on macOS. You
might need to boot into Recovery Mode.
__Martin
|
|
From: Martin S. <ma...@li...> - 2026-04-14 18:38:45
|
>>>>> On Tue, 14 Apr 2026 10:48:41 +0200, Andrea Venturoli said:
>
> On 4/13/26 18:52, Martin Simmons wrote:
> > mount | awk '$1=="'$(df "/Users/.../.../.../NewMailboxPanel.nib" | awk 'NR==2 {print $1}')'" { print $0 }'
>
> /dev/disk3s5 on /System/Volumes/Data (apfs, local, journaled, nobrowse,
> protect, root data)
OK, that looks normal.
|
|
From: Josh F. <jf...@ja...> - 2026-04-14 16:23:17
|
On 4/13/26 11:57, Radosław Korzeniewski wrote:
> Hi,
>
> śr., 8 kwi 2026 o 23:00 Lloyd Brown <llo...@by...> napisał(a):
>
>     Hey, all. I'm in the middle of designing the next generation of
>     our backup system, which will likely be based on Bacula 15. This
>     time, we'll be using both disk and tape, vs the disk-only we've
>     run before, which brings up a question or two that I haven't
>     encountered before.
>
>     When running VirtualFull jobs, does the source material (previous
>     Full, latest Differential, subsequent Incrementals), need to all
>     reside in the same Pool?
>
> No. There are other limitations.
>
>     In multiple pools but available to the same SD?
>
> Good question. I would say you need all at the single SD to work. But
> it is a conservative approach. I have not tested VF on multiple SDs yet.

I believe you can have the fulls on tape volumes with one SD and the
incrementals on disk on another SD, but you cannot split the
incrementals across different SDs. But I too have not tested.

>     For this system, we're planning on 2 SD hosts, with local disks,
>     and a tape library attached to one of them. We thought that we
>     could keep the Fulls/VFs entirely on tape, only store the
>     incrementals on disk, and write new VFs directly to tape.
>
> Good choice!
>
>     As a result, we trimmed down the size of our disk pool for cost
>     reasons (HDDs and SSDs are both crazy expensive right now). I
>     don't think I'll have enough space in the disk pool to store
>     Incrementals *and* a previous Full/VF. If that's true, I need to
>     re-think my design a bit. It may mean we just do true Fulls to
>     tape, and Incrementals on disk, and not get to use VFs at all.
>
> No, not needed. You can have your fulls saved on your tapes and
> incremental on disks.
> One caveat, you have to setup your Full Pool to close (status in
> ['full', 'used']) before you start to write the next Virtual Full.
> This is to prohibit read/write for the same tape during the Virtual
> Full Job. This means your previous (V)Full needs to be on a separate
> (closed) tape than the one you write to (status=append).
>
> best regards
> --
> Radosław Korzeniewski
> rad...@ko...
>
> _______________________________________________
> Bacula-users mailing list
> Bac...@li...
> https://lists.sourceforge.net/lists/listinfo/bacula-users
|
|
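[Editorial note: the disk-incremental / tape-full split discussed above is normally expressed with the Pool resource's Next Pool directive — a VirtualFull reads the job's existing Full plus Incrementals and writes the consolidated Full to the pool Next Pool names. A hedged sketch follows; `DiskStorage` and `TapeLibrary` are hypothetical Storage resource names, not from the thread's configs.]

```conf
# bacula-dir.conf sketch: Incrementals land on disk; a VirtualFull
# consolidates them onto tape via Next Pool.
Pool {
  Name = IncPool
  Pool Type = Backup
  Storage = DiskStorage      # hypothetical disk SD Storage resource
  Next Pool = FullPool       # VirtualFull output is written here
}

Pool {
  Name = FullPool
  Pool Type = Backup
  Storage = TapeLibrary      # hypothetical tape SD Storage resource
}
```

With that in place, `run job=Test-backupsddemo1 level=VirtualFull` in bconsole would append the consolidated Full to FullPool. Per Radosław's caveat, the tape volume holding the previous Full should already be in status Full or Used, so the same tape is never opened for read and append in one job.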
From: Gilles V. V. <gil...@em...> - 2026-04-14 15:26:45
|
Hi Heitor,

Where can we find more information on this?

Greetings,
Gilles

On 4/13/26 14:39, Heitor Faria wrote:
> Hello All,
>
> I just developed a Bacula SFTP/SSH plugin. Send me a DM if you're
> interested.
>
> ------------------------------------------------------------------------
>
> *Agentless SFTP/SSH backup for Bacula Community Edition — zero
> deployment on remote hosts*
>
> Back up files from any server, NAS, switch, router, or cloud service
> accessible via SFTP/SSH.
> No Bacula agent required on the remote side.
>
> ------------------------------------------------------------------------
>
> *Why SFTP instead of mountpoints*
>
> • No stale mounts — each backup opens a fresh connection
> • No kernel dependencies — runs fully in userspace (Paramiko)
> • No credentials in /etc/fstab — stored securely in Bacula config
> • No full filesystem exposure — access limited to defined paths
> • Simple firewalling — single TCP port (22)
> • No hanging processes — per-operation timeouts
> • Works with network devices — switches, routers, firewalls
> • Native for cloud SFTP — no mount layer needed
>
> ------------------------------------------------------------------------
>
> *Features*
>
> • Agentless — SFTP/SSH only, nothing installed on remote hosts
> • Full / Incremental / Differential backups via mtime
> • Include/Exclude filters — glob patterns (*.pdf, *.tmp)
> • Metadata preserved — permissions, UID/GID, timestamps, symlinks
> • Authentication — SSH key (Ed25519, RSA, ECDSA), agent, or password
> • Host key verification — MITM protection via known_hosts
> • File listing — estimate, catalog queries, restore tree browsing
> • Multiple sources — multiple SFTP endpoints in a single FileSet
> • Compression & encryption — LZ4/GZIP + AES (native Bacula)
> • Metaplugin architecture — aligned with Bacula ecosystem
>
> --
> Atenciosamente,
>
> Heitor faria
> https://www.youtube.com/@podheitor <http://www.bacula.com.br>
> Tel.: +1 7867261749 | +55 61 982684220
>
> _______________________________________________
> Bacula-users mailing list
> Bac...@li...
> https://lists.sourceforge.net/lists/listinfo/bacula-users

--
CONFIDENTIAL: This email and any attachments are confidential and
intended solely for the use of the individual or entity to whom they are
addressed. If you've received this in error, please delete it and notify
the sender.
|
|
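[Editorial note: the plugin's actual selection code is not shown in the thread. Purely to illustrate the advertised behavior — Incrementals via mtime plus glob include/exclude filters — here is a small self-contained Python sketch; the `select_files` helper, its parameters, and the sample data are hypothetical, not the plugin's API.]

```python
import fnmatch
import os


def select_files(entries, since=None, include=("*",), exclude=()):
    """Pick paths the way an mtime-based Incremental with glob
    include/exclude filters might: keep a file when its basename matches
    an include pattern, matches no exclude pattern, and (for an
    Incremental) was modified after the `since` timestamp."""
    picked = []
    for path, mtime in entries:
        name = os.path.basename(path)
        if not any(fnmatch.fnmatch(name, pat) for pat in include):
            continue
        if any(fnmatch.fnmatch(name, pat) for pat in exclude):
            continue
        if since is not None and mtime <= since:
            continue  # unchanged since the last backup: skip it
        picked.append(path)
    return picked


entries = [("/data/report.pdf", 100.0),
           ("/data/scratch.tmp", 200.0),
           ("/data/notes.txt", 300.0)]

# Full backup: everything except *.tmp
full = select_files(entries, exclude=("*.tmp",))
# Incremental: only files changed after t=150, same exclusions
incr = select_files(entries, since=150.0, exclude=("*.tmp",))
```

Here `full` keeps `report.pdf` and `notes.txt` (the `*.tmp` scratch file is filtered out), while `incr` keeps only `notes.txt`, whose mtime is newer than the cutoff.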
From: Andrea V. <ml...@ne...> - 2026-04-14 08:49:02
|
On 4/13/26 18:12, Rob Gerber wrote:
> The other question is what you get back when you restore the files
> that had the errors?

I get a 0 size file!

> Actually, I wonder if the bacula-fd daemon has suitable permissions to
> access the files in question. I know the FD runs as root, but I
> remember that when running find operations on a Mac as root, I needed
> to continually give permission for terminal to access documents
> folder, calendar folder, etc.

You can't actually get full access (like root on UNIX) on MacOS. That's
some freedom Apple won't give you. Personally I find this unacceptable,
but my customers, of course, don't share my opinion.

That said, I've been through all the hoops to give bacula-fd as much
power as possible: it currently has "full disk access" (which still
means it cannot back up some things).

Still I get many "ERR=Operation not permitted" which are permission
problems. If "ERR=Attribute not found" is also a permission problem,
why a different error, then?

bye & Thanks
av.
|
|
From: Andrea V. <ml...@ne...> - 2026-04-14 08:48:57
|
On 4/13/26 18:52, Martin Simmons wrote:
> mount | awk '$1=="'$(df "/Users/.../.../.../NewMailboxPanel.nib" | awk 'NR==2 {print $1}')'" { print $0 }'
/dev/disk3s5 on /System/Volumes/Data (apfs, local, journaled, nobrowse,
protect, root data)
bye & Thanks
av.
|
|
From: Andrea V. <ml...@ne...> - 2026-04-14 08:48:53
|
On 4/13/26 18:07, Radosław Korzeniewski wrote:
> This should be extended attributes (afaik).

Exactly (as I said later in the previous message).

> check with: xattr -l "$filename"

This gives "xattr: [Errno 1] Operation not permitted".

However "ls -lah@ $filename" gives:
com.apple.quarantine    -1B

Again, not all the affected files have extended attributes.

bye & Thanks
av.
|
|
From: Davide <df...@df...> - 2026-04-14 04:24:43
|
Amazing 🎉
Thanks for sharing Heitor

Davide

On Mon, Apr 13, 2026 at 14:43 Heitor Faria <hei...@gm...> wrote:
> Hello All,
>
> I just developed a Bacula SFTP/SSH plugin. Send me a DM if you're
> interested.
>
> ------------------------------
>
> *Agentless SFTP/SSH backup for Bacula Community Edition — zero
> deployment on remote hosts*
>
> Back up files from any server, NAS, switch, router, or cloud service
> accessible via SFTP/SSH.
> No Bacula agent required on the remote side.
>
> ------------------------------
>
> *Why SFTP instead of mountpoints*
>
> • No stale mounts — each backup opens a fresh connection
> • No kernel dependencies — runs fully in userspace (Paramiko)
> • No credentials in /etc/fstab — stored securely in Bacula config
> • No full filesystem exposure — access limited to defined paths
> • Simple firewalling — single TCP port (22)
> • No hanging processes — per-operation timeouts
> • Works with network devices — switches, routers, firewalls
> • Native for cloud SFTP — no mount layer needed
>
> ------------------------------
>
> *Features*
>
> • Agentless — SFTP/SSH only, nothing installed on remote hosts
> • Full / Incremental / Differential backups via mtime
> • Include/Exclude filters — glob patterns (*.pdf, *.tmp)
> • Metadata preserved — permissions, UID/GID, timestamps, symlinks
> • Authentication — SSH key (Ed25519, RSA, ECDSA), agent, or password
> • Host key verification — MITM protection via known_hosts
> • File listing — estimate, catalog queries, restore tree browsing
> • Multiple sources — multiple SFTP endpoints in a single FileSet
> • Compression & encryption — LZ4/GZIP + AES (native Bacula)
> • Metaplugin architecture — aligned with Bacula ecosystem
>
> --
> Atenciosamente,
>
> Heitor faria
> https://www.youtube.com/@podheitor <http://www.bacula.com.br>
> Tel.: +1 7867261749 | +55 61 982684220
>
> _______________________________________________
> Bacula-users mailing list
> Bac...@li...
> https://lists.sourceforge.net/lists/listinfo/bacula-users
|
|
From: Radosław K. <rad...@ko...> - 2026-04-13 17:00:33
|
Hi,

pon., 13 kwi 2026 o 18:02 Andrea Venturoli <ml...@ne...> napisał(a):
> On 4/13/26 16:03, Rob Gerber wrote:
> > I wonder what you find when you investigate the files mentioned in
> > the error messages. Does an 'ls -lah $filename' show any details, or
> > give an error?
>
> No error and nothing else but the
> permissions/user/group/size/date/filename.
> E.g.
> ls -lah $filename
> -rw-r--r--@ 1 user staff 2.3M Aug 17 2022 $filename
>
> Notice some files have the "@", others don't.

This should be extended attributes (afaik).

check with:
xattr -l "$filename"

best regards
--
Radosław Korzeniewski
rad...@ko...
|
|
From: Martin S. <ma...@li...> - 2026-04-13 16:52:19
|
>>>>> On Wed, 8 Apr 2026 14:14:02 +0200, Andrea Venturoli said:
>
> Backing up some Mac clients, I see many errors like the following:
>
> > Cannot open "/Users/.../.../.../NewMailboxPanel.nib": ERR=Attribute not found.
>
> Any idea what this means?
> What attribute is it talking about?
> How to solve?
"ERR=Attribute not found" is a strange error to get when opening a file.
What is the filesystem for the files that get this error? E.g. use df to get
the mount point and then match it with the output of mount:
mount | awk '$1=="'$(df "/Users/.../.../.../NewMailboxPanel.nib" | awk 'NR==2 {print $1}')'" { print $0 }'
__Martin
|
|
From: Rob G. <ro...@cr...> - 2026-04-13 16:37:50
|
The other question is what you get back when you restore the files that
had the errors? Do their md5/sha256 hashes match the files found on
disk? I suppose this would be less frightening if the errors are only
about some sort of metadata, but the files themselves are being backed
up correctly. It is still a problem, though, because ideally you don't
want to have to ignore error conditions, because this could lead to
missing a more serious problem.

Actually, I wonder if the bacula-fd daemon has suitable permissions to
access the files in question. I know the FD runs as root, but I
remember that when running find operations on a Mac as root, I needed
to continually give permission for terminal to access documents folder,
calendar folder, etc.

Robert Gerber
402-237-8692
ro...@cr...

On Mon, Apr 13, 2026, 11:01 AM Andrea Venturoli <ml...@ne...> wrote:
> On 4/13/26 16:03, Rob Gerber wrote:
> > I wonder what you find when you investigate the files mentioned in
> > the error messages. Does an 'ls -lah $filename' show any details, or
> > give an error?
>
> No error and nothing else but the
> permissions/user/group/size/date/filename.
> E.g.
> ls -lah $filename
> -rw-r--r--@ 1 user staff 2.3M Aug 17 2022 $filename
>
> Notice some files have the "@", others don't.
>
> > Are the files all related to a specific program?
>
> I guess the answer is no...
> Some files are in "/Users/.../Library/Mail/", some in a folder which
> holds data taken from an older Mac.
>
> > Are all the files of type .nib, or are they different types?
>
> No.
> Most are .nib, but I also see one ".pdf", one ".strings", one
> ".itxib", one ".plist"...
>
> > I wonder also if it is talking about apple extended attributes, or
> > something else.
>
> That would be a good question.
> I hoped someone who knows would step up.
> If I had to bet on this, I'd say no, since only a few of these files
> seem to have extended attributes (see above for the @).
>
> > I think if you can predict what bacula will complain about, and if
> > you can predict that the files or types aren't important, you can
> > exclude them from your backup.
>
> Unfortunately I'm not sure these can be excluded.
> I'd have to check with the user (which will be a hard task).
>
> bye & Thanks
> av.
|
|
From: Radosław K. <rad...@ko...> - 2026-04-13 16:28:47
|
Hi,

pon., 13 kwi 2026 o 14:41 Heitor Faria <hei...@gm...> napisał(a):
> Hello All,
>
> I just developed a Bacula SFTP/SSH plugin. Send me a DM if you're
> interested.

Great work!

> ------------------------------
>
> *Agentless SFTP/SSH backup for Bacula Community Edition — zero
> deployment on remote hosts*
>
> Back up files from any server, NAS, switch, router, or cloud service
> accessible via SFTP/SSH.
> No Bacula agent required on the remote side.
>
> ------------------------------
>
> *Why SFTP instead of mountpoints*
>
> • No stale mounts — each backup opens a fresh connection
> • No kernel dependencies — runs fully in userspace (Paramiko)
> • No credentials in /etc/fstab — stored securely in Bacula config
> • No full filesystem exposure — access limited to defined paths
> • Simple firewalling — single TCP port (22)
> • No hanging processes — per-operation timeouts
> • Works with network devices — switches, routers, firewalls
> • Native for cloud SFTP — no mount layer needed
>
> ------------------------------
>
> *Features*
>
> • Agentless — SFTP/SSH only, nothing installed on remote hosts
> • Full / Incremental / Differential backups via mtime
> • Include/Exclude filters — glob patterns (*.pdf, *.tmp)
> • Metadata preserved — permissions, UID/GID, timestamps, symlinks
> • Authentication — SSH key (Ed25519, RSA, ECDSA), agent, or password
> • Host key verification — MITM protection via known_hosts
> • File listing — estimate, catalog queries, restore tree browsing
> • Multiple sources — multiple SFTP endpoints in a single FileSet
> • Compression & encryption — LZ4/GZIP + AES (native Bacula)
> • Metaplugin architecture — aligned with Bacula ecosystem

Yes, it was my first question when I saw Paramiko above - "I wonder if
he used metaplugin for this integration". :D

best regards
--
Radosław Korzeniewski
rad...@ko...
|