
#30 Unspooler is extremely slow with large numbers of jobs

Status: open
Owner: nobody
Labels: None
Priority: 5
Updated: 2014-09-19
Created: 2007-01-04
Creator: Adam Crews
Private: No

I have a server where I frequently hold 2000+ jobs (sometimes as many as 15K) and then release them to a large Xerox production printer. Once the queue length reaches around the 2K mark, the unspooling process slows down dramatically. From straces, it appears this is because every cf and hf file for every job in the queue must be processed each time a single job is unspooled.

Oddly enough, this only seems to be a problem when you do something like lpc release <printer> all.

If you release a few jobs at a time (20 or so), those jobs un-spool quite quickly.

Discussion

  • grumpf_

    grumpf_ - 2007-02-04

    Logged In: YES
    user_id=1456210
    Originator: NO

    Hello Adam,
    the whole spool process is very slow with a large number of jobs, but it is very robust.
    Unfortunately this cannot be solved easily; e.g. in your case a simple rm Hf* Df* may remove jobs that have just arrived and should not be removed.
    But you are right that lpc release should not be the only tool that suffers.
    If you have access to the daemon, the easiest way is to stop it and remove the jobs by hand.
    Otherwise, creating a dummy queue and moving jobs from the congested queue to it should speed up the process.

    Thanks for your report.
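A minimal sketch of the dummy-queue workaround, assuming LPRng's lpc supports a "move <printer> <jobid> <target>" command (check "lpc help" on your version) and that a spare queue exists in the printcap. The function name, printer name, and queue name below are all hypothetical.

```shell
# park_jobs PRINTER DEST JOBID...
# Move held jobs out of a congested queue into a spare "dummy" queue,
# so the main queue's cf/hf scan stays small. Assumes LPRng's
# "lpc move" syntax -- verify against your version before use.
park_jobs() {
    printer=$1; dest=$2; shift 2
    lpc stop "$printer"                        # pause the congested queue
    for job in "$@"; do
        lpc move "$printer" "$job" "$dest"     # park one job in the dummy queue
    done
    lpc start "$printer"                       # resume printing
}
```

For example, park_jobs xerox dummy 101 102 would move jobs 101 and 102 from the (hypothetical) xerox queue into the dummy queue.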

     
  • Adam Crews

    Adam Crews - 2007-02-05

    Logged In: YES
    user_id=3017
    Originator: YES

    Let me clarify a little... An lpc release of 20 jobs from a queue of 15K un-spools those jobs with no noticeable delay between each job. An lpc release of the entire queue results in several seconds between each job as they are un-spooled. I have tried various things such as noatime on the file system and faster machines, but these tweaks make only a marginal difference. Running the spool from tmpfs is remarkably quick, but that has obvious risks. I know this is not a simple fix; however, I am sure there is room for some tuning at some point.

    I have written scripts to parse the queue and release a few jobs at a time, to help work around this issue when I need to drain a large queue.
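A minimal sketch of such a batching approach (not Adam's actual script), assuming "lpq -P" prints two header lines followed by one job per line with the job number in the third column; LPRng's lpq output varies by version, so the awk column may need adjusting. The function and printer names are hypothetical.

```shell
# release_in_batches PRINTER [BATCH_SIZE]
# Release held jobs a few at a time instead of "lpc release <printer> all",
# so the unspooler never rescans the whole queue between consecutive jobs.
# Assumes the job number is column 3 of "lpq -P" output after two header
# lines -- verify against your LPRng version before relying on this.
release_in_batches() {
    printer=$1
    batch=${2:-20}                           # jobs per lpc release call
    lpq -P "$printer" \
        | awk 'NR > 2 { print $3 }' \
        | xargs -n "$batch" lpc release "$printer"
}
```

For example, release_in_batches xerox 20 would release the held jobs twenty at a time.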

     
  • grumpf_

    grumpf_ - 2007-02-05

    Logged In: YES
    user_id=1456210
    Originator: NO

    Hi Adam,
    hmm, I will try to reproduce that. To do so I need:

    15K jobs in a queue
    lpc release id1 .. id10 /* should be fast */
    lpc release all /* should be slow */

    Is that right?
    BTW: I silently assumed you are using the latest version of LPRng; is that correct?

     
  • Adam Crews

    Adam Crews - 2007-02-05

    Logged In: YES
    user_id=3017
    Originator: YES

    This is correct. You can probably get the delays to happen with around 2K jobs; 3-4K would be a good starting point. The more jobs, the slower it goes.

    Also, the jobs I use are not especially large; they are usually one page of simple PostScript.

    I am using LPRng-3.8.28 from when Dr. Powell was maintaining the software. I have not yet tried the code on SourceForge.

     
