From: Gerhard H. <gh...@gh...> - 2003-04-25 11:55:58
Dick Kniep wrote:
> On Thu 2003-04-24, at 14:53, Gerhard Haering wrote:
>> Dick Kniep wrote:
>>
>>> Hi list,
>>>
>>> We are developing a system with a central database and many clients.
>>> The server and all clients run Linux. The database runs on the
>>> central server (surprise, surprise). To run pyPgSQL on the clients,
>>> we need to install it on all clients (is that so?), but to compile it
>>> (on the client), it needs PostgreSQL files. My question is which
>>> files are needed, and where should I locate them...
>>
>> I can't tell which files exactly are needed to build against
>> PostgreSQL's libpq. Building pyPgSQL works best if you have the
>> PostgreSQL headers and libraries installed (and not just in a
>> PostgreSQL source tree). For most Linux distributions, there are
>> several PostgreSQL packages; in particular, you'll need the
>> development package.
>> [...]
>> The correct paths should be picked up by setup.py automatically for
>> most major distributions (Debian, SuSE, Redhat). If not, you'll have
>> to fire up an editor with setup.py and
>>
>> - set USE_CUSTOM = 1
>> - set include_dirs and library_dirs appropriately

> Thanks for the advice. I am using Slackware, which is a bit different
> from Redhat and others. I see 3 alternatives:
>
> - Compile on the server and also RUN on the server, in which case I
>   will have to adjust the PYTHONPATH on the client (is that the only
>   required change? and does it perform well?)

You mean using pyPgSQL via a remote directory mounted via NFS? Sure,
that's possible. Performance isn't a problem - the import of the pyPgSQL
module is a little slower, but after the import Python doesn't care
where the module came from. (A sketch of the client-side setup is at the
end of this mail.)

> - Compile on the server (which has all files naturally) and distribute
>   libpq etc. to the clients

You'd only need to distribute libpq.so.$version to the clients. I'd try
to statically link against libpq, though. That saves you distributing
the libpq shared library at all.

> - Compile on the clients after installing PostgreSQL completely (ugly)

That's what you get for Slackware's nonexis^wprimitive package
management :-P I use a similar setup on FreeBSD and haven't found it a
problem, though. Just don't start the PostgreSQL server when you don't
need it. It'll waste some disk space, but that's all :-)

> Both from a performance and maintenance point of view I would prefer
> the first option, but I do not know exactly how to go about it.

Do you have any software distribution system in place? If not, here's my
recommended way:

- compile pyPgSQL statically against libpq
- tar-gz the pyPgSQL build directory up
- un-tar-gz it on each client
- python setup.py install

The ideal way would be to package pyPgSQL for Slackware (it's already
packaged at least for FreeBSD and for RPM- and .deb-based systems).

> Maintenance, of course, because there is only one installation of the
> package, which is always easier to maintain.
> I think it is also the best option on the performance side, as it
> processes all actual I/O and conversion on the server and passes only
> the results back to the client; only the actual import of the package
> requires extra network traffic.

I don't understand this paragraph; could it be there's a logic error in
it? Just because pyPgSQL is imported from a network share doesn't mean
the work will actually be done on the network server?! The module's code
executes in the client's Python process no matter where the files live;
only the SQL itself runs on the database server.

I don't want to sound like the wise guy (I don't have that much
experience administering Unix), but I'd recommend you get *some*
software distribution system in place if you have several machines.
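Back to the NFS variant for a second: the client-side change really is
just the module search path. A minimal sketch, assuming the shared tree
is mounted at /net/dbserver/pylib (the mount point is only an example):

    import sys
    sys.path.insert(0, "/net/dbserver/pylib")  # prepend the NFS-mounted tree
    from pyPgSQL import PgSQL                   # loaded over NFS, runs locally

Setting PYTHONPATH in the users' shell profiles has the same effect and
keeps the scripts themselves unchanged.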
If you already have your software in a packaged form, that's not an
issue: just set up a repository where the clients can download and
install their .deb/.rpm/.pkg or whatever. For Slackware, something like
rdist over ssh might be a simple yet powerful option. Just mounting the
"local software" repository via NFS and setting paths appropriately is
another simple alternative.

-- 
Gerhard
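P.S. In case it helps, here is roughly what the USE_CUSTOM edit in
setup.py from my earlier mail looks like. The paths below are just
examples for a hand-built PostgreSQL under /usr/local/pgsql; adjust them
to wherever your headers and libraries actually live:

    # near the top of pyPgSQL's setup.py
    USE_CUSTOM = 1
    include_dirs = ["/usr/local/pgsql/include"]  # must contain libpq-fe.h
    library_dirs = ["/usr/local/pgsql/lib"]      # must contain the libpq library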