From: Kristian H. <Kri...@ge...> - 2001-04-27 16:17:18
It's nice to know that functionality exists in some form; it's pretty complex stuff, and also required in anything that integrates across multiple systems.

The way I'd approach this would be to have some kind of shared queue, probably held in a database, that sessions can write to and a cron-based process can read from and act upon. There may be some scalability issues with this.

The callbacks are an interesting problem that can be approached in a couple of ways: on page presentment the user session could poll the queue to see the status of requests, but I'd be inclined to have the queue call an HTTP page with details of the transaction, user session and result. Polling just seems like bad design :)

The most difficult thing to implement would be some generic method for maintaining multi-step integrity in transactions. Perhaps every transaction has a corresponding rollback method, and the queue rolls back all completed transactions in the event of a failure halfway through.

Kristian

-----Original Message-----
From: Alex Black [mailto:en...@tu...]
Sent: 27 April 2001 16:43
To: binarycloud-dev
Subject: Re: [binarycloud-dev] asynchronous queuing

hi kristian,

Much of the capability you describe below exists in the EntityManager... of course you would have no way of knowing that given that the spec for it is a little anemic :)

As far as complex tasks in queues, this is of course a big issue in php given that it's part of a forking webserver process. I think the best way to do things is with "cgi" style hooks and cron. There are many other approaches, but that's solid and it's proven elsewhere.

_alex

> I've been lurking for a while, just thought I'd throw in something I've
> learnt from experience to think about.
>
> If you are looking at using xmlrpc or another API to interface with systems,
> then you should really look at transaction queues / managers.
> For instance, say you need to hit three different systems: pull data from
> the first system, update data in the second system with the data you just
> grabbed, and fire an unrelated event to the third system. Sounds easy enough,
> right? What happens if each of these systems takes 15 seconds to respond? You
> now have the user waiting 45 seconds for a response - they'll probably hit
> refresh and kick off a second transaction, corrupting your data.
>
> You can improve the response times by parallelising the requests: bundle the
> first two requests into a transaction and run the third request
> concurrently - the response time is now 30 seconds.
>
> Now suppose that the user won't be looking at the results; maybe they just
> wanted to synchronise the systems and get back to using the site. The
> response isn't relevant to the task the user is performing. A further
> optimisation would be a fire-and-forget system. For example, put the request on
> a queue and get an immediate response confirming the request. When the
> transaction is complete, a callback could be used to return success or
> failure, and in the event of failure a rollback could be attempted. The work
> is still done and the user can go about their business without long waits.
>
> Of course the implementation of something like this is where the fun starts;
> it really stretches PHP, but it is possible, I think.
>
> Kristian
>
> _______________________________________________
> binarycloud-dev mailing list
> bin...@li...
> http://lists.sourceforge.net/lists/listinfo/binarycloud-dev

--
alex black, ceo
en...@tu...

the turing studio, inc.
http://www.turingstudio.com
vox+510.666.0074
fax+510.666.0093
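[Editor's note: the database-backed queue with per-step rollbacks discussed above can be sketched roughly as below. The thread is about PHP, but the idea is language-agnostic; this is an illustrative Python sketch, and the table layout, handler registry, and function names are all assumptions, not anything from binarycloud.]

```python
import json
import sqlite3

def init_queue(conn):
    # Hypothetical schema: one row per queued job, holding the session that
    # submitted it and a JSON list of step names to execute in order.
    conn.execute("""CREATE TABLE IF NOT EXISTS job_queue (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        session_id TEXT NOT NULL,
        steps TEXT NOT NULL,
        status TEXT NOT NULL DEFAULT 'pending')""")
    conn.commit()

def enqueue(conn, session_id, steps):
    """Called from the web request: fire-and-forget, returns a job id at once."""
    cur = conn.execute(
        "INSERT INTO job_queue (session_id, steps) VALUES (?, ?)",
        (session_id, json.dumps(steps)))
    conn.commit()
    return cur.lastrowid

def run_pending(conn, handlers, rollbacks):
    """Called from cron. Every step has a corresponding rollback; if a step
    fails partway through, the completed steps are rolled back in reverse."""
    rows = conn.execute(
        "SELECT id, steps FROM job_queue WHERE status = 'pending'").fetchall()
    for job_id, steps_json in rows:
        steps, done = json.loads(steps_json), []
        try:
            for step in steps:
                handlers[step]()      # the real remote call would go here
                done.append(step)
            status = 'ok'
        except Exception:
            for step in reversed(done):
                rollbacks[step]()     # compensate each completed step
            status = 'failed'
        conn.execute("UPDATE job_queue SET status = ? WHERE id = ?",
                     (status, job_id))
        conn.commit()
```

The worker would also be the natural place to fire the HTTP callback Kristian mentions, reporting the final status back with the job and session ids.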
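[Editor's note: the 45-second-to-30-second parallelisation in the quoted example can also be sketched. Again an illustrative Python sketch with made-up system calls: the dependent pair (pull, then update) runs in order while the unrelated third event runs concurrently, so the total wait is max(pair, event) rather than the sum. The sleeps stand in for slow remote XML-RPC calls, scaled down for demonstration.]

```python
import time
from concurrent.futures import ThreadPoolExecutor

def pull_from_first():
    time.sleep(0.2)            # stand-in for a slow remote call
    return {"data": 42}

def update_second(data):
    time.sleep(0.2)
    return "updated"

def fire_third_event():
    time.sleep(0.2)
    return "event sent"

def synchronise():
    with ThreadPoolExecutor() as pool:
        # The unrelated event starts immediately in the background...
        event = pool.submit(fire_third_event)
        # ...while the dependent pair runs sequentially alongside it.
        data = pull_from_first()
        result = update_second(data)
    return result, event.result()
```

With the example's 15-second calls this is exactly the 45 s vs 30 s difference described: the pair costs two calls, the event overlaps with them for free.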