From: Kenneth M. <kt...@ri...> - 2007-03-29 19:29:48
On Thu, Mar 29, 2007 at 09:09:58PM +0200, Lionel Bouton wrote:
> Dave Strickler wrote the following on 29.03.2007 20:54 :
>
> Please don't answer other threads to start new ones...
>
> > We have a lot (read: millions) of records in our Connect table
> > inside of 24 hours. Purging it down to a reasonable size for speed
> > is very difficult as we are using Postgres.
>
> What is the problem?
>
> > We are considering having multiple Connect tables in different
> > databases - perhaps databases that begin with A through Z, thus
> > giving us Connect tables that are 26 times smaller, and thus very
> > fast and manageable.
>
> Do you really want multiple servers? You may get roughly the same
> performance by adding more RAM to your existing server or tuning
> PostgreSQL (26x less data should mean only a 3x or 4x speedup, not
> 26x, as query time follows a log(tablesize) rule, and I very much
> doubt you'd want 26 database servers to maintain just for
> greylisting). Plus SQLgrey will have to regularly clean multiple
> tables instead of only one...
>
> Note that the connect table usually shrinks down a little after 24
> hours (the autowhitelists kick in).
>
> Do you use pg_autovacuum (or the integrated autovacuum daemon in 8.2
> or later versions)? It should help.
>
> Lionel.

Dave,

I agree with Lionel. You would be better off adding a bit more memory
to your DB server and tuning it rather than trying to split it up
like that. How long is a typical connect table lookup taking? An
insert? An update?

You should definitely be using a recent PostgreSQL and be running
with autovacuum enabled. You should also tune it to be more
aggressive than the default settings.

How many queries/sec are you seeing? What is your bottleneck, I/O or
CPU? Please post a few more configuration details, both hardware and
software, and we can provide more germane recommendations.

Ken
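
P.S. If you do enable the integrated autovacuum on 8.2, these are the
postgresql.conf knobs I would start with. The values below are only
illustrative starting points for a table as write-heavy as connect,
not settings tuned to your hardware:

    # postgresql.conf (8.2 -- autovacuum is off by default there)
    stats_start_collector = on   # both stats settings are required
    stats_row_level = on         # for autovacuum to work on 8.2
    autovacuum = on
    autovacuum_naptime = 60      # seconds between autovacuum checks
    # vacuum sooner than the defaults: a greylisting connect table
    # churns through most of its rows every day
    autovacuum_vacuum_scale_factor = 0.1
    autovacuum_analyze_scale_factor = 0.05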
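
P.P.S. For the lookup/insert timings I asked about, a quick script
along these lines gives rough numbers. It is only a sketch: it
assumes psycopg2 is installed and a database named "sqlgrey", and the
WHERE clause uses a made-up column name -- substitute the actual
columns from your SQLgrey connect table:

    # rough timing of connect-table queries (sketch; assumes psycopg2
    # and placeholder names -- adapt to the real SQLgrey schema)
    import time
    import psycopg2

    conn = psycopg2.connect("dbname=sqlgrey user=sqlgrey")
    cur = conn.cursor()

    def timed(label, sql, params=()):
        # run one statement and report wall-clock time in milliseconds
        start = time.time()
        cur.execute(sql, params)
        conn.commit()
        print("%-8s %8.2f ms" % (label, (time.time() - start) * 1000))

    # 'src' is a hypothetical column name, used only for illustration
    timed("lookup", "SELECT 1 FROM connect WHERE src = %s", ("10.0.0.1",))
    timed("count", "SELECT count(*) FROM connect")

Running the same SELECT under EXPLAIN ANALYZE in psql will also tell
you whether the planner is using an index or seq-scanning the whole
table, and watching vmstat/iostat during a busy period will show
whether the box is I/O- or CPU-bound.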