There seems to be a trend, okay, maybe not a trend, but the last few open source projects I've evaluated (or am in the process of evaluating) have all been configured under the assumption that they will run on a dedicated server, usually a dedicated Linux server. If it's not quite a dedicated-server configuration, it's very close: the server has to be set up to run the OSS project first, and only once the project is running can you add your extra stuff.
I'd give examples, but the point of this post is not to criticize; it's more philosophical. I'm wondering whether the programmers and producers of these projects are working from the assumption that a production-level version of their project running in your average library will have its own server.
IMO, this is a fairly bad assumption. I think it's unlikely that production-level services are being hosted on individual Linux boxes sitting under the reference desk, or under anyone else's desk for that matter. That may happen in smaller institutions that don't have the luxury of a large IT budget or department, but certainly at ARL libraries, production services are hosted on larger servers, possibly a cluster environment configured for high availability, where projects are separated by virtual hosting. Those servers are typically going to be a Solaris or possibly WinNT environment. Of course I could be wrong about this, but to my mind, if you're serious about a production service, it will need some kind of fail-over mechanism in addition to continual backup, both in terms of power (UPS) and storage, not to mention the necessary security. None of these pieces are generally part of an out-of-the-box Linux configuration.
The reason I bring all this up is that not one project I've evaluated in the last year included documentation on how to configure the product alongside existing web services via virtual hosting.
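For what it's worth, the kind of documentation I have in mind wouldn't need to be elaborate. On an Apache server that's already hosting other services, fitting a project in usually comes down to a name-based virtual host entry rather than a whole machine. A rough sketch (the hostname and paths here are made up for illustration, and assume an Apache 1.3/2.0-style setup):

```apache
# Hypothetical name-based virtual host for an OSS project
# sharing one Apache server with other existing services.
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName project.lib.example.edu
    DocumentRoot /var/www/oss-project/htdocs
    ErrorLog logs/oss-project-error_log
    # Keep the project's CGI directory separate from the
    # server's other virtual hosts.
    ScriptAlias /cgi-bin/ /var/www/oss-project/cgi-bin/
</VirtualHost>
```

The trouble is that projects which hard-code absolute paths, or assume they own the document root and port 80, make even this simple arrangement a chore to work out on your own.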
Any thoughts, comments?
Tim Mori
NCSU Libraries
Systems Department