From: Carl K. <ku...@us...> - 2001-02-15 17:26:20
What if the documents within a job have different attributes, such as document format, number of copies, or duplex? One example might be a client-generated separator page followed by the document to be printed. The separator page might have totally different attributes than the document.

-Carl

Ben Woodard <be...@va...>@lists.sourceforge.net on 02/14/2001 10:52:27 PM
Sent by: lpr...@li...
To: lpr...@li...
cc:
Subject: [Lpr-discuss] job and document semantics

One of the biggest differences between Sun's interface and mine is that Sun's interface has two concepts, a job and a document, which I essentially merge into one concept: a job. There are many reasons you might want to keep the two concepts distinct. The first five that come to my mind are:

1) When printing multiple files, a user might want them to come out in the order they were specified on the command line.

2) When printing multiple files and using notification, a user might want to be notified when the last one comes out, not when any one of the documents completes.

3) Some network protocols, such as LPR and IPP, are set up to handle jobs and documents, and they work better if aggregate jobs end up being sent together.

4) If a queue is set up with multiple printers, a user might expect all the component documents to come out of the same printer.

5) When doing queue manipulation, such as removing a job or promoting a job within a queue, it is commonly expected that you refer to the job and the operation affects all of its component documents.

Still, with all these very good reasons, I believe that the print spooler and the interface to the print system don't need to specify two concepts. I could argue that a user may not care whether the component documents print in the same order or out of the same physical printer, and so we should allow this to be a bit of policy defined by either the user or the SA, but that kind of dodges the question rather than addressing any of the problems above.
Here is how I propose to do it. First of all, the key insight is that all of the necessary semantics have to do with some sort of grouping or ordering, which is not the domain of the spooler as I have defined it. They are all within the domain of one of the filters. So if we tag all of the jobs with three pieces of data (in the attributes), we can push the work out of the spooler into the filters. The three tags that I propose are:

	aggregate_sequence=x
	aggregate_total=y
	aggregate_previous=z

where y is the number of documents in this job, x is the sequence number of this particular document, and z is the previous document in the aggregate sequence.

Simple sequencing filter
------------------------
One filter would be the simple sequencing filter. You would need this filter if you want to preserve the sequencing semantics on aggregate jobs when you are not queuing. It would work like this:

	if aggregate_sequence && aggregate_sequence != 1
		while the job listed in aggregate_previous is not complete
			sleep some time
	send the job through

A more sophisticated mechanism than polling and sleeping can be engineered if necessary. Basically, the filter blocks before reading in the case where the previous job is not already completed.

Network protocol filters
------------------------
Another set of filters that would have to be aware of aggregation are the network protocol filters, such as sendipp and sendlpr:

	if !aggregate_sequence || aggregate_sequence == 1
		open the network socket and write the network protocol's
		notion of a hello and control information
		send your own job
	else
		connect to the unix domain socket representing the jobid
		stored in aggregate_previous (might have to wait and retry)
		read the file descriptor
		send your job down that file descriptor
	if !aggregate_sequence || aggregate_sequence == aggregate_total
		send the network protocol's notion of goodbye
		close the connection
		exit
	else
		bind a unix domain socket representing your jobid
		when a connection occurs, send the file descriptor of the
		network protocol socket to the connecting filter
		exit

Another way to create this same effect is to make z the list of all the previous documents in the aggregate job. Then:

	if !aggregate_sequence || aggregate_sequence == 1
		open the socket and send the network protocol's version of a hello
		send control information
		if aggregate_sequence
			open and bind a unix domain socket representing your jobid
		send your own job
		if aggregate_sequence
			accept connections from all the other documents in the sequence
			iterate through those sockets in order and send their data out the socket
		say goodbye to the network
		close the socket
	else
		connect to the unix domain socket representing the first jobid listed in z
		send your data to it
		close the socket

Queue filters
-------------
Queue filters will have to be aware of these aggregation attributes as well. They will need to make sure that they preserve order within the aggregate jobs.

-ben

_______________________________________________
Lpr-discuss mailing list
Lpr...@li...
http://lists.sourceforge.net/lists/listinfo/lpr-discuss
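[Editor's note] The simple sequencing filter above can be sketched concretely. This is a minimal illustration, not part of the proposal: `is_complete` and `send_job` are hypothetical callables standing in for a spooler status query and the downstream pipe, and the attribute names are taken from the three tags defined in the post.

```python
import time

def should_wait(attrs):
    # Only a non-first document of an aggregate job has to wait;
    # a job with no aggregate_sequence attribute passes straight through.
    seq = attrs.get("aggregate_sequence")
    return seq is not None and int(seq) != 1

def sequencing_filter(attrs, is_complete, send_job, poll_interval=0.01):
    # Block until the document named in aggregate_previous is done,
    # then forward this document unchanged.
    if should_wait(attrs):
        prev = attrs["aggregate_previous"]
        while not is_complete(prev):
            time.sleep(poll_interval)
    send_job()
```

As the post notes, the polling loop is the crude version; the same structure works if the spooler can offer a blocking wait on job completion instead of `is_complete`.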
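[Editor's note] The "read the file descriptor / send your job down that file descriptor" step in the network protocol filter is a descriptor hand-off over a unix domain socket, which on modern systems is an SCM_RIGHTS ancillary message. A minimal sketch, assuming Python 3.9+ for `socket.send_fds`/`socket.recv_fds`; the socketpair stands in for the named unix domain socket that would represent the jobid, and the pipe stands in for the network protocol socket being shared:

```python
import os
import socket

def send_fd(sock, fd):
    # Pass one open file descriptor to the peer via SCM_RIGHTS.
    socket.send_fds(sock, [b"F"], [fd])

def recv_fd(sock):
    # Receive one file descriptor from the peer (the kernel dups it
    # into this process, so the returned number may differ).
    _, fds, _, _ = socket.recv_fds(sock, 1, maxfds=1)
    return fds[0]

# Demo: the first document's filter hands its "network" socket
# (here, the write end of a pipe) to the next document's filter.
first, second = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
read_end, write_end = os.pipe()
send_fd(first, write_end)       # first filter shares its socket
received = recv_fd(second)      # next filter picks it up
os.write(received, b"document 2 data")
os.close(received)
os.close(write_end)
```

The point of the hand-off is that the second filter writes on the very same open connection the first filter established, so the remote end sees one protocol session covering the whole aggregate job.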