From: Alex B. <en...@tu...> - 2001-04-27 08:20:13
|
I've cc'd the list, after I re-read this. I think it's worth people reading.

> Just to give you some background info, Nick is a member of the phpWebsite
> team. I have been evangelizing about Binarycloud over there (see
> http://phpwebsite.appstate.edu/article.php?sid=50&mode=flat&order=0 ) and
> he's actually come for more info...! While I don't think that BC as it
> stands could be converted into a PHP-Nuke rival, I do think that some of

Thanks for the post :)

There's no intention to do so ("compete" with phpNuke) - any more than I would try to compete with phpLib. binarycloud is a very specialized system for building big, deep things. It has all kinds of design requirements implicit in the system that your average web application just doesn't care about.

If I'm building a forum application in isolation, all I care about is a few tables, some session stuff, and a bit of html. If, however, I need to be able to make that forum app _aware_ of a larger system - possibly registering events with the core eventManager, running queries through a central query engine, etc., etc. - then bc starts to be relevant. You need a certain amount of "scale critical mass" to make full use of a system like bc.

The point of the system is advanced integration, which is why I gave up on phpLib and the like. In isolation, they all work well. As soon as I needed them to work together, I had to download 50 little snippets of code and fight with them to get things working. And I couldn't make important changes. Which is why bc was born.

So, bc probably won't ever have the kind of developer audience that phpLib or phpNuke have. But those that need the kind of functionality r2 offers will instantly recognize the difference in scale and complexity. Also, as part of r2, there are going to be a couple apps that will be so smokin' that people can't resist... I'm going to publish the first rev of the editor interface soon, on that note.
> The only drawback I can find with r2 is that it needs so many special PHP
> modules installed.

Ehfoo? Maybe I should update the docs, because there's a good, standard recommended list of compiled modules, but the only things really required are sablot and mcrypt so far. I prefer the kitchen sink approach to php installs, but there will be few "core" php module dependencies.

> It is possible to install r1 on a virtual host - I've
> done it at http://www.binarycloud.f2s.com and you cannot access the
> binarycloud/ directory (have a go) which is in htdocs, can you? - but this
> won't be possible with r2 unless the dependencies on all these extra
> modules are optional.

Again, ehfoo? I run binarycloud on virtual hosts all the time, and have _absolutely_no_ intention of _ever_ disabling that. I actually can't imagine how I _would_, now that I think about it. If anything, I'm going to make sure it has fewer requirements of the environment: r1 assumes a bunch of things because that's the way I work, but there's no reason why you shouldn't be able to install r2 in a vhost setup and run it just fine, assuming some basic php.ini config requirements are met.

> Maybe I'm looking for the wrong thing in BC, or maybe I just don't need
> something of the complexity of BC. So far I haven't found anything which
> really suits my needs (managing small business websites), which is why I am
> excited about the phpWebsite proposal. Whether it works or not remains to
> be seen!

I think the above is the best response to that. If you're building relatively simple applications, you're not going to see the most benefit the system can offer. This is especially true of r2, with a rule engine, entities, etc. - it's too much of a pain in the butt if you're just trying to run a simple cart and some forums. But if you need to build an exchange, or a hefty commerce platform, or a publishing site with 50 article types, you'll love it.

_alex

--
alex black, ceo
en...@tu...
the turing studio, inc.
http://www.turingstudio.com
vox+510.666.0074
fax+510.666.0093
|
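The "registering events with the core eventManager" idea mentioned above can be sketched minimally. Nothing in this thread shows the actual binarycloud API, so every class and method name below is invented for illustration, and Python is used for brevity: the point is only how a module that works in isolation becomes _aware_ of a larger system by registering handlers for core events.

```python
# Hypothetical sketch of a central event manager. A forum module stays
# useful on its own, but gains system awareness by registering interest
# in core events. All names are invented; this is not binarycloud code.

class EventManager:
    def __init__(self):
        self._handlers = {}

    def register(self, event, callback):
        """Associate a callback with a named event."""
        self._handlers.setdefault(event, []).append(callback)

    def fire(self, event, payload):
        """Invoke every handler registered for the event, collecting results."""
        return [cb(payload) for cb in self._handlers.get(event, [])]


manager = EventManager()

# The forum module registers a handler for a core event: when a user is
# deleted anywhere in the system, the forum cleans up that user's posts.
manager.register("user.deleted", lambda user: f"forum: purged posts by {user}")

results = manager.fire("user.deleted", "nick")
print(results[0])  # → forum: purged posts by nick
```

The same registration call could come from any module (commerce, publishing, etc.), which is the "advanced integration" point: the core fires one event and every interested subsystem reacts.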
From: Kristian H. <kri...@ge...> - 2001-04-27 09:15:28
|
Hi,

I've been lurking for a while, just thought I'd throw in something I've learnt from experience to think about.

If you are looking at using xmlrpc or another API to interface with systems, then you should really look at transaction queues / managers.

For instance, say you need to hit three different systems - pull data from the first system, update data in the second system with the data you just grabbed, and fire an unrelated event to the third system. Sounds easy enough, right? What happens if each of these systems takes 15 seconds to respond? You now have the user waiting 45 seconds for a response - they'll probably hit refresh and kick off a second transaction, corrupting your data.

You can improve the response times by parallelizing the requests: bundle the first two requests into a transaction and run the third request concurrently - the response time is now 30 seconds.

Now suppose that the user won't be looking at the results; maybe they just wanted to synchronise the systems and get back to using the site. The response isn't relevant to the task the user is performing. A further optimisation would be a fire-and-forget system. For example, put the request on a queue and get an immediate response confirming the request. When the transaction is complete, a callback could be used to return success or failure, and in the event of failure a rollback could be attempted. The work is still done and the user can go about their business without long waits.

Of course, the implementation of something like this is where the fun starts - it really stretches PHP, but is possible I think.

Kristian
|
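Kristian's fire-and-forget scheme can be sketched as follows: the caller enqueues a transaction and gets an immediate acknowledgement; a background worker runs the slow work and reports success or failure through a callback. This is an illustrative Python sketch of the idea, not binarycloud code - all names are invented, and a thread stands in for whatever out-of-process worker a PHP deployment would actually use.

```python
import queue
import threading

# Sketch of a fire-and-forget transaction queue: submit() returns at once,
# a worker thread drains the queue, and the outcome reaches the caller via
# a callback. In the failure case the caller could attempt a rollback.

class TransactionQueue:
    def __init__(self):
        self._queue = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def submit(self, task, on_done):
        """Queue a transaction and return an immediate acknowledgement."""
        self._queue.put((task, on_done))
        return "accepted"  # the user never waits on the slow systems

    def _run(self):
        while True:
            task, on_done = self._queue.get()
            try:
                result = task()
                on_done(True, result)     # success callback
            except Exception as exc:
                on_done(False, str(exc))  # failure: caller may roll back
            finally:
                self._queue.task_done()


outcomes = []
tq = TransactionQueue()

# Simulates "synchronise the systems": the slow work runs off the request path.
ack = tq.submit(lambda: "systems synchronised",
                lambda ok, res: outcomes.append((ok, res)))
print(ack)          # immediate response: accepted
tq._queue.join()    # (demo only) wait so we can observe the callback fire
print(outcomes[0])  # → (True, 'systems synchronised')
```

The 45-second sequential case becomes a sub-second acknowledgement for the user; whether the three backend calls then run serially or in parallel inside the worker is a separate optimisation.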
From: Alex B. <en...@tu...> - 2001-04-27 15:43:33
|
hi kristian,

Much of the capability you describe below exists in the EntityManager... of course you would have no way of knowing that, given that the spec for it is a little anemic :)

As far as complex tasks in queues go, this is of course a big issue in php, given that it runs as part of a forking webserver process. I think the best way to do things is with "cgi"-style hooks and cron. There are many other approaches, but that one is solid and it's proven elsewhere.

_alex

> I've been lurking for a while, just thought I'd throw in something I've
> learnt from experience to think about.
>
> If you are looking at using xmlrpc or other API to interface with systems
> then you should really look at transaction queues / managers.
>
> For instance, say you need to hit three different systems - pull data from
> the first system, update data in the second system with the data you just
> grabbed and fire an unrelated event to the third system. Sounds easy enough
> right? What happens if each of these systems take 15 seconds to respond? You
> now have the user waiting 45 seconds for a response - they'll probably hit
> refresh and kick off a second transaction, corrupting your data.
>
> You can improve the response times by parallelizing the requests, bundle the
> first two requests into a transaction and run the third request
> concurrently - the response time is now 30 seconds.
>
> Now suppose that the user won't be looking at the results, maybe they just
> wanted to synchronise the systems and get back to using the site. The
> response isn't relevant to the task the user is performing. A further
> optimisation would be a fire and forget system. Example, put the request on
> a queue and get an immediate response confirming the request. When the
> transaction is complete, a callback could be used to return success or
> failure, and in the event of failure a rollback could be attempted. The work
> is still done and the user can go about their business without long waits.
>
> Of course the implementation of something like this is where the fun starts,
> it really stretches PHP but is possible I think.
>
> Kristian
>
> _______________________________________________
> binarycloud-dev mailing list
> bin...@li...
> http://lists.sourceforge.net/lists/listinfo/binarycloud-dev

--
alex black, ceo
en...@tu...
the turing studio, inc.
http://www.turingstudio.com
vox+510.666.0074
fax+510.666.0093
|
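The "cgi-style hooks and cron" approach Alex favors usually amounts to this: the web request only writes a job record to a spool and returns, and a separate script run from cron drains the backlog outside the forking webserver process. A minimal sketch of that pattern, with a file-based spool - paths, function names, and job fields are all invented for illustration:

```python
import json
import os
import tempfile

# Sketch of the cron-and-hooks pattern: the web process appends a job to
# a spool directory and returns immediately; a cron-driven script calls
# drain() to process the backlog. All names here are illustrative.

def enqueue(spool_dir, job):
    """Called from the web request: record the job and return at once."""
    fd, path = tempfile.mkstemp(suffix=".job", dir=spool_dir)
    with os.fdopen(fd, "w") as f:
        json.dump(job, f)
    return path

def drain(spool_dir, handler):
    """Called from cron: process and delete every queued job."""
    done = 0
    for name in sorted(os.listdir(spool_dir)):
        if not name.endswith(".job"):
            continue
        path = os.path.join(spool_dir, name)
        with open(path) as f:
            handler(json.load(f))
        os.remove(path)
        done += 1
    return done


spool = tempfile.mkdtemp()
enqueue(spool, {"action": "sync", "target": "system-3"})
processed = []
print(drain(spool, processed.append))  # → 1
```

A crontab entry like `* * * * * php drain_queue.php` (hypothetical script name) would then give the fire-and-forget behaviour Kristian describes without holding a webserver child process for the duration of the work.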