Oh, I guess I can answer this one myself already.

Creating dedicated Storages for Pools should just do the trick.
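For the archives, here is a sketch of the kind of configuration I mean (resource names, paths and the password are illustrative, adapt to your own setup): one Device per Pool in bacula-sd.conf, with a matching Storage resource in bacula-dir.conf that the Pool then points at via its Storage directive. Giving each file Device its own Media Type should keep volumes from one Pool being mounted on another Pool's device.

```
# bacula-sd.conf -- one Device per Pool, so jobs for different
# Pools no longer queue behind a single drive
Device {
  Name = pws.FileStorage
  Media Type = pws.File          # distinct Media Type per file device
  Archive Device = /var/lib/bacula/pws
  Device Type = File
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}

# bacula-dir.conf -- matching Storage resource for the Director
Storage {
  Name = pws-sd01
  Address = sd01.domain.org
  SDPort = 9103
  Password = "changeme"          # must match the SD's Director resource
  Device = pws.FileStorage
  Media Type = pws.File
  Maximum Concurrent Jobs = 10
}

# the Pool is then pinned to its dedicated Storage
Pool {
  Name = "pws Pool"
  Pool Type = Backup
  Storage = pws-sd01
  Recycle = yes
  AutoPrune = yes
}
```

With one such Device/Storage pair per Pool, jobs writing to different Pools reserve different drives and run in parallel instead of waiting on each other.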

If anyone knows a better solution, please do speak up.
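As for mixing Full and Incremental runs on the same day (the Monday question below), that turned out to be just a matter of Schedule resources; a rough sketch, with made-up names and times:

```
# bacula-dir.conf -- illustrative Schedule resources
Schedule {
  Name = "FullMonday"
  Run = Level=Full mon at 23:05
  Run = Level=Incremental tue-sun at 23:05
}

Schedule {
  Name = "FullSunday"
  Run = Level=Full sun at 23:05
  Run = Level=Incremental mon-sat at 23:05
}

# client_A gets its Full on Monday; the others keep Sunday
Job {
  Name = "client_A_FS"
  Client = client_A-fd
  Schedule = "FullMonday"
  # FileSet, Pool, Storage etc. as per the usual job definitions
}
```

Combined with dedicated Storages per Pool, the Full for one client no longer blocks the Incrementals for the others.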

On 30 January 2011 19:18, Bart Swedrowski <bart@timedout.org> wrote:
Hello everyone,

I have had Bacula configured and running for quite a while now, without any major issues.  However, as I keep adding client nodes, I am more and more often running into a situation where some jobs are waiting for others to finish.

The configuration I use is as follows:
- 1 Storage Daemon using disks as a backend; let's call it sd01
- this Storage Daemon has various Pools set up for various clients, e.g. client_A, client_B, client_C

Now, the situation I am running into is as follows:

*st dir
[...]
Running Jobs:
Console connected at 30-Jan-11 19:07
 JobId Level   Name                       Status
======================================================================
    39 Full    donkey_FS.2011-01-30_16.33.57_47 is running
    40 Full    ashes_DB.2011-01-30_19.05.14_52 is waiting on Storage sd01
    41 Full    ashes_FS.2011-01-30_19.05.52_53 is waiting on Storage sd01
====

When I check status of the sd01 storage, I am getting:

*status storage=sd01 
Connecting to Storage daemon sd01 at sd01.domain.org:9103

sd01.domain.org Version: 5.0.3 (04 August 2010) i686-redhat-linux-gnu redhat 
Daemon started 29-Jan-11 12:50. Jobs: run=19, running=2.
 Heap: heap=995,328 smbytes=371,231 max_bytes=852,215 bufs=188 max_bufs=328
Sizes: boffset_t=8 size_t=4 int32_t=4 int64_t=8

Running Jobs:
Writing: Full Backup job donkey_FS JobId=39 Volume="olartek-0005"
    pool="olartek-sd01 Pool" device="FileStorage" (/var/lib/bacula/storage/)
    spooling=0 despooling=0 despool_wait=0
    Files=491,226 Bytes=10,684,045,907 Bytes/sec=1,113,617
    FDReadSeqNo=5,983,303 in_msg=4163620 out_msg=5 fd=6
Writing: Full Backup job ashes_DB JobId=40 Volume="olartek-0005"
    pool="pws Pool" device="FileStorage" (/var/lib/bacula/storage/)
    spooling=0 despooling=0 despool_wait=0
    Files=0 Bytes=0 Bytes/sec=0
    FDSocket closed
Writing: Full Backup job ashes_FS JobId=41 Volume="olartek-0005"
    pool="pws Pool" device="FileStorage" (/var/lib/bacula/storage/)
    spooling=0 despooling=0 despool_wait=0
    Files=0 Bytes=0 Bytes/sec=0
    FDSocket closed
====

Jobs waiting to reserve a drive:
   3608 JobId=40 wants Pool="pws Pool" but have Pool="olartek-sd01 Pool" nreserve=0 on drive "FileStorage" (/var/lib/bacula/storage/).
   3608 JobId=41 wants Pool="pws Pool" but have Pool="olartek-sd01 Pool" nreserve=0 on drive "FileStorage" (/var/lib/bacula/storage/).
====

Terminated Jobs:
 JobId  Level    Files      Bytes   Status   Finished        Name 
===================================================================
    38  Full         11    76.64 M  OK       30-Jan-11 16:32 donkey_DB
====

Device status:
Device "FileStorage" (/var/lib/bacula/storage/) is mounted with:
    Volume:      olartek-0005
    Pool:        olartek-sd01 Pool
    Media type:  File
    Total Bytes=10,794,793,222 Blocks=167,331 Bytes/block=64,511
    Positioned at File=2 Block=2,204,858,629
Device "pws.FileStorage" (/var/lib/bacula/pws/) is not open.
Device "olartek-sd01" (/var/lib/bacula/olartek/) is not open.
====

Used Volume status:
olartek-0005 on device "FileStorage" (/var/lib/bacula/storage/)
    Reader=0 writers=1 devres=0 volinuse=1
====

Attr spooling: 1 active jobs, 0 bytes; 3 total jobs, 1,460,891 max bytes.
====

From this behaviour I understand that even though many concurrent jobs may write to the same Pool at the same time, a given device can have only one Pool mounted at a time.

Now, what is the best way to resolve this?  I would obviously want to be able to write to many Pools at the same time, as well as run combinations of Full and Incremental backups simultaneously (e.g. on Monday a Full for client_A, but only Incrementals for client_B and client_C).

Any advice much appreciated.

Kind regards,
Bart