From: McDonald, D. <Dan...@au...> - 2007-05-24 20:17:18
|
I've been running sqlgrey 1.6.6 for a while, and am comfortable with how it works. One of my spam-tossers lost a disk, and I am rebuilding the system from scratch (it's been in operation basically unchanged for a year - with this sort of application it's good to get a nice clean slate now and again). Should I stick with 1.6.6 or go with the latest experimental release (1.7.5)? -- Daniel J McDonald, CCIE # 2495, CISSP # 78281, CNX Austin Energy http://www.austinenergy.com |
From: Matteo N. <mat...@st...> - 2007-05-24 14:26:18
|
Brian Wong wrote: > I have found my sqlgrey process to consume more and more memory until > it dies. Previous posts to this list indicate that it is a memory leak > in the DBI driver. Yes, there's a memory leak in DBI version < 1.5.2. After upgrading to 1.5.5 the memory leak has been partially solved. Now sqlgrey eats about 1 MByte of RAM per day, instead of 1 MByte every 10 minutes :) |
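A quick way to confirm which DBI and driver versions are actually loaded (a minimal sketch; run it as the user sqlgrey runs as, and substitute DBD::Pg if you use PostgreSQL):

    # print the installed DBI and MySQL driver versions
    perl -MDBI -e 'print "DBI $DBI::VERSION\n"'
    perl -MDBD::mysql -e 'print "DBD::mysql $DBD::mysql::VERSION\n"'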
From: Brian W. <bw...@gm...> - 2007-05-24 14:05:49
|
On 5/23/07, Ryan Ivey <iv...@gm...> wrote: > SLES 10 > sqlgrey 1.6.7 > postfix > mailwatch > mailscanner > > I have sqlgrey setup and configured and all appears fine when starting > sqlgrey in daemon mode and it logs and writes to the mysql database and > greylists each incoming message for the default 6 minutes, but as soon > as the 6 minutes is up and it begins allowing messages, sqlgrey dies and > the default port of 2501 closes. I run 'socklist |grep 2501' and see > several sqlgrey processes and can telnet 127.0.0.1 2501 without any > problems during the initial 6 minutes, but all processes drop and the > connection is refused once sqlgrey dies. I cannot figure out why > sqlgrey is dying. I restart sqlgrey when I see it stop, but it quickly > dies again and I can restart postfix, but this too doesn't fully resolve > the problem, besides it seems somewhat barbaric to setup a cron job to > run every 5 or 6 minutes to restart everything. Before I switch to > using postfix greylisting, does anyone have any pointers? I don't have > a problem with syslog, as I've noticed in other posts, besides I believe > 1.6.7 resolved that issue. > > Thanks, > Ryan I have found my sqlgrey process to consume more and more memory until it dies. Previous posts to this list indicate that it is a memory leak in the DBI driver. Monitor your memory usage. Perhaps you are suffering from the same problem? I use PgSQL. I had to resort to restarting the sqlgrey process every night. I didn't want to do this but I couldn't find a good alternative for sqlgrey. Hopefully everything will be sorted out soon. |
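For anyone stuck with the nightly-restart workaround, a crontab entry along these lines does the job (a sketch - the init script path is an assumption; adjust to wherever your distribution installs sqlgrey's service script):

    # restart sqlgrey every night at 03:00
    0 3 * * * /etc/init.d/sqlgrey restart >/dev/null 2>&1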
From: Ryan I. <iv...@gm...> - 2007-05-24 01:12:19
|
SLES 10
sqlgrey 1.6.7
postfix
mailwatch
mailscanner

I have sqlgrey setup and configured and all appears fine when starting sqlgrey in daemon mode and it logs and writes to the mysql database and greylists each incoming message for the default 6 minutes, but as soon as the 6 minutes is up and it begins allowing messages, sqlgrey dies and the default port of 2501 closes. I run 'socklist |grep 2501' and see several sqlgrey processes and can telnet 127.0.0.1 2501 without any problems during the initial 6 minutes, but all processes drop and the connection is refused once sqlgrey dies. I cannot figure out why sqlgrey is dying. I restart sqlgrey when I see it stop, but it quickly dies again and I can restart postfix, but this too doesn't fully resolve the problem, besides it seems somewhat barbaric to setup a cron job to run every 5 or 6 minutes to restart everything. Before I switch to using postfix greylisting, does anyone have any pointers? I don't have a problem with syslog, as I've noticed in other posts, besides I believe 1.6.7 resolved that issue.

Thanks,
Ryan |
From: Matteo N. <mat...@st...> - 2007-05-03 19:53:16
|
Andrew Diederich wrote: > Hello Matteo, > > This has been discussed on the list a few times. As I recall the leak > is in DBI somewhere. You can either backrev your DBI install, or just > keep restarting sqlgrey. I restart sqlgrey several times a day, just > to make sure. > > To get more specifics, search for my name and you'll find the threads. > Thanks to all! |
From: Andrew D. <and...@gm...> - 2007-05-03 19:44:45
|
Hello Matteo, This has been discussed on the list a few times. As I recall the leak is in DBI somewhere. You can either backrev your DBI install, or just keep restarting sqlgrey. I restart sqlgrey several times a day, just to make sure. To get more specifics, search for my name and you'll find the threads. On Thursday, May 3, 2007, 10:45:58 AM, Matteo Niccoli wrote: > I'm running sqlgrey 1.7.4 with mysql support on a gentoo linux server > with postfix 2.3.8 > I have 3 mysql server, 1 is master and the other are slaves. > I see that each time I restart sqlgrey it uses about 12Mbyte of RAM. > After some days, > 3-4, it uses 110Mbyte of RAM. If I don't restart it in 3 weeks, it uses > something like > 1.1Gbyte of RAM :) > At the moment I use a workaround based on a simple restart of sqlgrey > once a week. > Anyone of you can help me to find the solution to this problem? > Thanks. -- Best regards, Andrew Diederich |
From: Lionel B. <lio...@bo...> - 2007-05-03 17:57:17
|
Matteo Niccoli wrote the following on 03.05.2007 18:45 : > I'm running sqlgrey 1.7.4 with mysql support on a gentoo linux server > with postfix 2.3.8 > I have 3 mysql server, 1 is master and the other are slaves. > > I see that each time I restart sqlgrey it uses about 12Mbyte of RAM. > After some days, > 3-4, it uses 110Mbyte of RAM. If I don't restart it in 3 weeks, it uses > something like > 1.1Gbyte of RAM :) > > At the moment I use a workaround based on a simple restart of sqlgrey > once a week. > > Anyone of you can help me to find the solution to this problem? > The problem is usually a bad version of DBI or the MySQL DBD driver. I tried various methods in SQLgrey to avoid some code I suspected to leak memory but never managed to solve the problem (in fact I could only make it worse). The only thing to do is to try newer or older versions of these perl libraries (I managed to solve the same problem by reverting to an older version of the PostgreSQL driver once). Lionel |
From: Matteo N. <mat...@st...> - 2007-05-03 16:45:07
|
I'm running sqlgrey 1.7.4 with mysql support on a gentoo linux server with postfix 2.3.8. I have 3 MySQL servers: 1 is master and the others are slaves. I see that each time I restart sqlgrey it uses about 12 MByte of RAM. After some days, 3-4, it uses 110 MByte of RAM. If I don't restart it for 3 weeks, it uses something like 1.1 GByte of RAM :) At the moment I use a workaround based on a simple restart of sqlgrey once a week. Can anyone help me find the solution to this problem? Thanks. |
From: Philipp H. <ma...@ph...> - 2007-04-30 13:09:44
|
Hey guys, A) I'm using version 1.6.7 and I read in the feature list about SPF. Is my version querying the domains for their SPF records? Or is this only supported in the devel-version? I tried to send a mail from a gmail-account and saw that the mail was greylisted although gmail.com has an SPF record. Why? Well, I didn't write the gmail-IPs in my whitelist because I'd like to try if it does a query or not. So it seems as if SQLGrey doesn't :( So is SPF querying available? What do I have to change in my settings? B) If it doesn't work with this version, but with the devel-version, how stable is it? I'd like to use it on a customer server, so it has to work properly! C) If SPF queries are not supported by any version, I'd like to write a little script that automatically queries the big domains (manually added to a DB) and adds the IPs to the whitelist and notifies the admin of whitelist candidates (more than x mails from a domain). To do this, it's important to know how I can add IP-ranges like for gmx.de 213.165.64.0/23 (213.165.64.1 - 213.165.65.254) or something like a.b.c.0/27 (a.b.c.1 - a.b.c.30). So is this possible? Or do I have to add every IP, which would be really annoying. Thanks for any answer.. Greets, Philipp |
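If a given sqlgrey version's IP whitelist won't take CIDR ranges directly, one crude workaround is to expand the range into individual addresses (a sketch only - the whitelist path follows the convention mentioned elsewhere on this list, and whether your version needs this at all is an assumption; check the sample whitelist files shipped with your release):

    #!/bin/bash
    # expand 213.165.64.0/23 (213.165.64.1 - 213.165.65.254) into
    # single addresses and append them to the local IP whitelist
    WL=/etc/sqlgrey/clients_ip_whitelist.local   # path assumed
    for third in 64 65; do
        for fourth in $(seq 1 254); do
            echo "213.165.${third}.${fourth}" >> "$WL"
        done
    done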
From: Lionel B. <lio...@bo...> - 2007-04-26 17:18:13
|
A little gift for Solaris users out there. |
From: David L. <da...@la...> - 2007-04-19 21:00:14
|
Dan Faerch wrote: > Brian Wong wrote: >> Is there a way I can obtain any clues from the system besides noticing >> that the daemon isn't running anymore? Thanks. I observe periodic (2-3 times a month) crashes in sqlgrey (weird garbage collection panics) but I can't see anything glaringly obvious that could be the cause. I wrote a program that I run out of cron every five minutes, that restarts the daemon if it discovers that there's nothing listening on the socket. I filed a bug report about this on the SF page, and provided the program that does the watching. You should be able to adapt it pretty easily to suit your environment. But other than that, my hack is good enough, and so I haven't the tuits to investigate the issue further. David |
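David's actual watcher is attached to his SF bug report; in the same spirit, a minimal cron-driven sketch of the idea might look like this (the paths, netcat flags, and init script location are all assumptions for your platform):

    #!/bin/bash
    # restart sqlgrey if nothing is listening on its policy port
    HOST=127.0.0.1
    PORT=2501    # sqlgrey's default policy port
    if ! nc -z -w 5 "$HOST" "$PORT" >/dev/null 2>&1; then
        logger -t sqlgrey-watchdog "no listener on port ${PORT}, restarting"
        /etc/init.d/sqlgrey restart
    fi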
From: Dan F. <da...@ha...> - 2007-04-19 18:54:57
|
Brian Wong wrote: > Is there a way I can obtain any clues from the system besides noticing > that the daemon isn't running anymore? Thanks. > Well we DID have an issue where sqlgrey would die if it attempts to write to syslog when syslog isn't running.. On many systems, syslog is shut down during logrotation. Don't know if that's a hint? Other than that, check the logs, raise the loglevel and so forth to see what happens. - Dan |
From: Brian W. <bw...@gm...> - 2007-04-18 19:40:48
|
List, I was wondering if anyone is able to give me insight as to why SQLgrey randomly dies on me. I am using version 1.6.7 and Net::Server 0.94 against PostgreSQL 8.2.3. I see a couple of archived threads regarding this subject but cannot access the content at the moment (sourceforge having problems). Is there a way I can obtain any clues from the system besides noticing that the daemon isn't running anymore? Thanks. |
From: <as...@ko...> - 2007-04-14 15:19:36
|
Hi. Has anyone created Debian packages of SQLgrey? The closest I have come to finding some is a 200-day-old ITP: <http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=389472> Best regards, Adam -- "They misunderestimated me." Adam Sjøgren as...@ko... |
From: Daniel J M. <dan...@au...> - 2007-04-10 20:11:59
|
Microsoft's Hosted Exchange Services appear to sit behind a set of postfix boxes with the name frontbridge.net. Companies that use these servers have their messages sent from random servers. It does not appear to retry beyond checking multiple MX records. I have added *.frontbridge.net to my /etc/sqlgrey/clients_fqdn_whitelist.local file. I really think it is funny that Microsoft front-ends their boxes with postfix, as can be seen in these headers:

Microsoft Mail Internet Headers Version 2.0
Received: from sa.austinenergy.com ([10.10.10.3]) by smtp.austinenergy.com with Microsoft SMTPSVC(6.0.3790.1830); Mon, 9 Apr 2007 09:39:06 -0500
Received: from localhost (sa.austinenergy.com [127.0.0.1]) by sa.austinenergy.com (Postfix) with ESMTP id 6ADED4B for <Bar...@au...>; Mon, 9 Apr 2007 09:39:05 -0500 (CDT)
X-Virus-Scanned: amavisd-new at austinenergy.com
X-Spam-Score: 0.733
X-Spam-Level:
X-Spam-Status: No, score=0.733 tagged_above=-999 required=4.5 tests=[AWL=-0.093, EXTRA_MPART_TYPE=0.815, HTML_MESSAGE=0.001, RELAYCOUNTRY_US=0.01]
Received: from sa.austinenergy.com ([127.0.0.1]) by localhost (sa.austinenergy.com [127.0.0.1]) (amavisd-new, port 10025) with LMTP id 5kfoyvddosXj for <Bar...@au...>; Mon, 9 Apr 2007 09:39:01 -0500 (CDT)
X-Greylist: from auto-whitelisted by SQLgrey-1.6.6
Received: from outbound2-cpk-R.bigfish.com (outbound-cpk.frontbridge.com [207.46.163.16]) by sa.austinenergy.com (Postfix) with ESMTP id 18F0E19 for <Bar...@au...>; Mon, 9 Apr 2007 09:39:01 -0500 (CDT)
Received: from outbound2-cpk.bigfish.com (localhost [127.0.0.1]) by outbound2-cpk-R.bigfish.com (Postfix) with ESMTP id CC3BF421FB2 for <Bar...@au...>; Mon, 9 Apr 2007 14:37:24 +0000 (UTC)
Received: from mail59-cpk-R.bigfish.com (unknown [10.2.40.3]) by outbound2-cpk.bigfish.com (Postfix) with ESMTP id C969120804D for <Bar...@au...>; Mon, 9 Apr 2007 14:37:24 +0000 (UTC)
Received: from mail59-cpk (localhost [127.0.0.1]) by mail59-cpk-R.bigfish.com (Postfix) with ESMTP id 47A031100C6 for <Bar...@au...>; Mon, 9 Apr 2007 14:37:48 +0000 (UTC)
X-BigFish: VP
Received: by mail59-cpk (MessageSwitch) id 1176129467833765_22884; Mon, 9 Apr 2007 14:37:47 +0000 (UCT)
Received: from USFOXSRVXCH236.ipscorp.invensys.com (unknown [65.204.211.16]) (using TLSv1 with cipher RC4-MD5 (128/128 bits)) (No client certificate requested) by mail59-cpk.bigfish.com (Postfix) with ESMTP id 35E3D918052 for <Bar...@au...>; Mon, 9 Apr 2007 14:37:47 +0000 (UTC)
Received: from USFOXSRVXCH232.ipscorp.invensys.com ([10.155.18.232]) by USFOXSRVXCH236.ipscorp.invensys.com with Microsoft SMTPSVC(6.0.3790.1830); Mon, 9 Apr 2007 10:37:46 -0400

-- Daniel J McDonald, CCIE # 2495, CISSP # 78281, CNX Austin Energy http://www.austinenergy.com |
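For reference, the fix amounts to a single line in the file Daniel names (restarting sqlgrey afterwards is a safe way to make sure it is picked up; whether a restart is strictly required may depend on your version):

    echo '*.frontbridge.net' >> /etc/sqlgrey/clients_fqdn_whitelist.local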
From: Gary C. <cas...@na...> - 2007-03-31 03:47:55
|
I'm trying to install sqlgrey version 1.7.5 on Solaris 10, and while it will start in standalone mode, it gives me this error when started with '-d': no connection to syslog available - /dev/conslog is not a socket at /opt/csw/share/perl/site_perl/Net/Server.pm line 1238 Has anyone else run into this? Thanks, _Gary |
From: Michael S. <Mic...@lr...> - 2007-03-30 10:18:49
|
Hi Dave, On Thu, 29 Mar 2007, Dave Strickler wrote: > The problem we are having is a very old one. In about 24 hours, our > Connect table will grow to 6 million records. We can purge it down, but > that takes time, and what really hurts us is doing a VACUUM FULL which > would write/read lock the database for long periods of time, therefore > pulling SQLGrey "offline". We are not using pg_autovacuum, and perhaps > this would be the solution for now. > In addition, to get smooth cleaning and restructuring, you could try to reduce the amount of entries in the connect table. Do you use throttling? We use it, and in our case instead of having more than 2 million entries in connect we only have 800,000 (statistic from today). If you are not using throttling, you could analyze your connect table to find out how many entries could be avoided. Another thing would be to use more aggressive tests before a request is made to Sqlgrey. Michael Storz -- ====================================================== Leibniz-Rechenzentrum | <mailto:St...@lr...> Boltzmannstr. 1 | Fax: +49 89 35831-9700 85748 Garching / Germany | Tel: +49 89 35831-8840 ====================================================== |
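To see how much of the connect table is avoidable churn, a couple of ad-hoc queries can help (a sketch - the database name and the 'src' column follow sqlgrey's default schema, which may differ on your install):

    # total rows currently in the connect table
    psql -d sqlgrey -c 'SELECT count(*) FROM connect;'
    # which source addresses contribute the most entries
    psql -d sqlgrey -c 'SELECT src, count(*) AS n FROM connect GROUP BY src ORDER BY n DESC LIMIT 20;'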
From: Kenneth M. <kt...@ri...> - 2007-03-30 00:14:31
|
On Thu, Mar 29, 2007 at 03:47:40PM -0400, Dave Strickler wrote: > > However, I have a larger question that asks, "what do we do when the Connect table reaches 10 million, or 20 million?" How large can it grow before I'm in trouble? Since I don't know its upper limits, I fear I may run into them randomly one day. > Dave, Like Lionel stated, you need to avoid VACUUM FULL and use just a regular VACUUM instead. Autovacuum will run it for you and ANALYZE your tables when needed. As I previously mentioned, you should reduce the size of the autovacuum thresholds so that it will work when a smaller fraction of the table has changed. The size at which performance will start to drop is when the working set of data that is being changed is larger than your system cache footprint. The lower bound is going to be the size of the index for the connection table, since that is what is used to speed access to the data in the table. If you calculate the average size of each field that comprises the index and add them together, that will give you the size of a typical index entry. Then divide that number into the system memory size to generate an absolute lower bound on the size. Ken |
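Rather than estimating entry sizes by hand, PostgreSQL (8.1 and later) can report on-disk sizes directly (a sketch; 'connect' is sqlgrey's default table name, and the exact index name is an assumption - list yours with \di in psql):

    # table plus all of its indexes
    psql -d sqlgrey -c "SELECT pg_size_pretty(pg_total_relation_size('connect'));"
    # a single index, by name (name assumed - check with \di)
    psql -d sqlgrey -c "SELECT pg_size_pretty(pg_relation_size('connect_idx'));"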
From: Dave S. <dst...@ma...> - 2007-03-29 20:51:57
|
Not a problem - Merci bien de votre aide !! (thanks a lot for your help!!) ;-) Dave Strickler MailWise LLC 617-933-5810 (direct) www.mailwise.com ( http://www.mailwise.com/ ) "Intelligent E-mail Protection" |
From: Dave S. <dst...@ma...> - 2007-03-29 20:51:27
|
As usual, excellent advice from you - Thanks for your help! We will try this out over the next week and let you know the results. Dave Strickler MailWise LLC 617-933-5810 (direct) www.mailwise.com ( http://www.mailwise.com/ ) "Intelligent E-mail Protection" |
From: Lionel B. <lio...@bo...> - 2007-03-29 20:40:47
|
Lionel Bouton wrote the following on 29.03.2007 22:28 : > [...] > - shared_buffer (attention au delà d'un certain seuil l'OS ne peut pas suivre et PostgreSQL ne démarrera pas), I must be tired, I switched to French without realizing it. Translation: beware that past a threshold, the OS will return an error and PostgreSQL won't start. |
From: Lionel B. <lio...@bo...> - 2007-03-29 20:28:25
|
Dave Strickler wrote the following on 29.03.2007 21:47 : > (sorry on the answering-old-threads-to-start-new-ones... It's an old, > bad habit and I will stop) > > The problem we are having is a very old one. In about 24 hours, our > Connect table will grow to 6 million records. We can purge it down, > but that takes time, and what really hurts us is doing a VACUUM FULL > which would write/read lock the database for long periods of time, > therefore pulling SQLGrey "offline". We are not using pg_autovacuum, > and perhaps this would be the solution for now. Do not do a VACUUM FULL. This is for extreme conditions only (when the disk space used becomes huge). VACUUM ANALYZE should be enough and won't block SQLgrey (but probably slow it down a bit). pg_autovacuum will do it for you but not as soon as you launch it, as it is designed to wait for enough changes to a table to analyze it, so you'll have to launch a VACUUM ANALYZE manually the first time. 1/ Note that if you stop PostgreSQL or prevent SQLgrey from connecting to it, it will switch to passthrough (handy if you need to take PostgreSQL offline to tweak it for a few seconds/minutes). 2/ If you never ANALYZE, PostgreSQL might very well do full table scans instead of using indexes, which obviously will slow down SQLgrey tremendously. > > However, I have a larger question that asks, "what do we do when the > Connect table reaches 10 million, or 20 million?" How large can it > grow before I'm in trouble? Since I don't know its upper limits, I > fear I may run into them randomly one day. It depends on your actual setup. I've run one old PostgreSQL 7.1 instance with around this number of lines years ago and it didn't break. But it could get too slow. You should compare the time needed to query the connect table and the rate of connections to your SMTP servers. If your query time is less than one tenth of the average delay between 2 connections you should be safe. If not, time to optimize. Throw memory at PostgreSQL and increase:
- shared_buffer (attention au delà d'un certain seuil l'OS ne peut pas suivre et PostgreSQL ne démarrera pas), which should help maintain the index in memory
- max_fsm_pages and max_fsm_relations (which should prevent too much fragmentation of the on-disk data); run VACUUM VERBOSE to check if these parameters are enough - at the end you will see something like:
INFO: free space map: 12 relations, 2319 pages stored; 2464 total pages needed
DETAIL: Allocated FSM size: 1000 relations + 20000 pages = 182 kB shared memory.
Be sure to be at least around the values reported by the INFO line.
- commit_delay, which should help commit several writes at the same time
- reduce commit_siblings to 2 (with SQLgrey you can have at most as many siblings as SQLgrey servers; if you have only one, commit_* won't change your performance).
Lionel. |
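A minimal maintenance pass along the lines Lionel describes (a sketch - the database and table names follow sqlgrey's defaults and may differ on your install):

    # regular, non-locking maintenance - NOT 'VACUUM FULL'
    psql -d sqlgrey -c 'VACUUM ANALYZE connect;'
    # check whether max_fsm_pages / max_fsm_relations are large enough:
    # the INFO/DETAIL lines near the end report free space map usage
    psql -d sqlgrey -c 'VACUUM VERBOSE;' 2>&1 | tail -n 4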
From: Dave S. <dst...@ma...> - 2007-03-29 19:48:36
|
(sorry on the answering-old-threads-to-start-new-ones... It's an old, bad habit and I will stop) The problem we are having is a very old one. In about 24 hours, our Connect table will grow to 6 million records. We can purge it down, but that takes time, and what really hurts us is doing a VACUUM FULL which would write/read lock the database for long periods of time, therefore pulling SQLGrey "offline". We are not using pg_autovacuum, and perhaps this would be the solution for now. However, I have a larger question that asks, "what do we do when the Connect table reaches 10 million, or 20 million?" How large can it grow before I'm in trouble? Since I don't know its upper limits, I fear I may run into them randomly one day. To be clear, SQLGrey is running great. It's when Postgres slows in dealing with a "not VACUUMed in the past 6 hours" Connect table that the problems start to arise, and SQLGrey slows down because of this. Again, maybe your pg_autovacuum idea is the right choice... I think we should try that first. |
From: Kenneth M. <kt...@ri...> - 2007-03-29 19:29:48
|
On Thu, Mar 29, 2007 at 09:09:58PM +0200, Lionel Bouton wrote: > Dave Strickler wrote the following on 29.03.2007 20:54 : > > Please don't answer other threads to start new ones... > > > We have a lot (read: millions) of records in our Connect table inside > > of 24 hours. Purging it down to a reasonable size for speed is very > > difficult as we are using Postgres. > > What is the problem? > > > We are considering having multiple Connect tables in different > > databases - perhaps databases that begin with A through Z, thus giving > > us Connect tables that are 26 times as small, and thus very fast and > > manageable. > > Do you really want multiple servers ? You may get roughly the same > performance by adding more RAM to your existing server or tuning > PostgreSQL (26 less data should mean only 3x or 4x speedup, not 26x as > the query time follows a log(tablesize) rule and I very much doubt you'd > want 26 database servers to maintain just for greylisting). Plus SQLgrey > will have to regularly clean multiple tables instead of only one... > > Note that the connect table usually shrinks down a little after 24 hours > (the autowhitelists kick in). > > Do you use pg_autovacuum (or the integrated autovacuum daemon in 8.2 or > later versions)? It should help. > > Lionel. > Dave, I agree with Lionel. You would be better off adding a bit more memory to your DB server and tuning it rather than trying to split it up like that. How long is a typical connect table lookup taking? an insert? an update? You should definitely be using a recent PostgreSQL and be running with autovacuum enabled. You should also tune it to be more aggressive than the default settings. How many queries/sec are you seeing? What is your bottleneck, I/O or CPU? Please post a few more configuration details, both hardware and software, and we can provide more germane recommendations. Ken |
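On releases with the integrated daemon (8.1 and later), "more aggressive than the defaults" might translate into postgresql.conf settings like these (illustrative values only, not recommendations tuned for any particular load; the defaults vary by release):

    # fire autovacuum after a smaller fraction of the table changes
    autovacuum = on
    autovacuum_vacuum_scale_factor = 0.05    # vacuum after ~5% of rows change
    autovacuum_analyze_scale_factor = 0.02   # analyze after ~2% of rows change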
From: Lionel B. <lio...@bo...> - 2007-03-29 19:10:10
|
Dave Strickler wrote the following on 29.03.2007 20:54 : Please don't answer other threads to start new ones... > We have a lot (read: millions) of records in our Connect table inside > of 24 hours. Purging it down to a reasonable size for speed is very > difficult as we are using Postgres. What is the problem? > We are considering having multiple Connect tables in different > databases - perhaps databases that begin with A through Z, thus giving > us Connect tables that are 26 times as small, and thus very fast and > manageable. Do you really want multiple servers? You may get roughly the same performance by adding more RAM to your existing server or tuning PostgreSQL (26x less data should mean only a 3x or 4x speedup, not 26x, as the query time follows a log(tablesize) rule, and I very much doubt you'd want 26 database servers to maintain just for greylisting). Plus SQLgrey will have to regularly clean multiple tables instead of only one... Note that the connect table usually shrinks down a little after 24 hours (the autowhitelists kick in). Do you use pg_autovacuum (or the integrated autovacuum daemon in 8.2 or later versions)? It should help. Lionel. |