From: Dan F. <da...@ha...> - 2010-11-18 07:17:48
It's possible to test sqlgrey by talking to it the way Postfix does, using telnet or netcat. Example:

$ nc localhost 2501
request=smtpd_access_policy
protocol_state=RCPT
protocol_name=SMTP
client_address=66.102.13.104
client_name=unknown
reverse_client_name=ez-in-f104.1e100.net
helo_name=ez-in-f104.1e100.net
sender=te...@ez...
recipient=te...@ez...
< hit return to add a blank line >

And the server will respond with its verdict:

action=451 Greylisted for 1 minutes (10)

It should be fairly easy to use this to validate a whitelist entry. Simply modify the appropriate fields in the above and paste it to the sqlgrey port. Hit return once more to make a blank line at the end and read the output. Additional information may also be in the log, depending on your loglevel.

- Dan Faerch
-- http://www.phpappwall.com

On 2010-11-17 22:32, Douglas Mortensen wrote:
> Besides making a connection attempt from a whitelisted IP/FQDN & watching the sqlgrey logging, is there any way that I can simply query the locally loaded whitelist for the IP/FQDN in question, or have it output the entire currently loaded whitelist to the console or logging?
>
> Thanks,
> Doug Mortensen
> Network Consultant
> Impala Networks Inc
> CCNA, MCSA, Security+, A+
> Linux+, Network+, Server+
> www.impalanetworks.com
> P: (505) 327-7300
> F: (505) 327-7545
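The manual netcat exchange above can also be scripted. The sketch below is a hypothetical helper (not part of sqlgrey) that builds a request in the Postfix policy-delegation format and extracts the `action=` verdict from the reply; the host, port, and attribute values are assumptions matching Dan's example.

```python
import socket

def build_policy_request(**attrs):
    """Format a Postfix policy-delegation request: one name=value per
    line, terminated by an empty line."""
    return "".join("%s=%s\n" % kv for kv in attrs.items()) + "\n"

def parse_policy_response(text):
    """Return the verdict following 'action=' in a policy response,
    or None if no action line is present."""
    for line in text.splitlines():
        if line.startswith("action="):
            return line[len("action="):]
    return None

def query_sqlgrey(host="localhost", port=2501, **attrs):
    """Send one policy request to a running sqlgrey instance and return
    its verdict. Requires a live server listening on host:port."""
    with socket.create_connection((host, port)) as s:
        s.sendall(build_policy_request(**attrs).encode())
        return parse_policy_response(s.recv(4096).decode())
```

Against a live sqlgrey you might call, for example, `query_sqlgrey(request="smtpd_access_policy", protocol_state="RCPT", client_address="66.102.13.104", sender="te...@ez...", recipient="te...@ez...")` and check whether the verdict starts with `451` (greylisted) or not (whitelisted/passed).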
From: Bruce B. <bb...@bo...> - 2010-11-17 22:15:07
Doug,

On Nov 17, 2010, at 3:32 PM, Douglas Mortensen wrote:
> Besides making a connection attempt from a whitelisted IP/FQDN & watching the sqlgrey logging, is there any way that I can simply query the locally loaded whitelist for the IP/FQDN in question, or have it output the entire currently loaded whitelist to the console or logging?

Have you had a look at the "Sqlgrey WebInterface" (sgwi)? http://www.vanheusden.com/sgwi/

You can also see a screenshot here: www.beebeec.nl/sgwi/

B. Bodger
From: Karl O. P. <ko...@me...> - 2010-11-17 21:43:13
On 11/17/2010 03:32:05 PM, Douglas Mortensen wrote:
> Besides making a connection attempt from a whitelisted IP/FQDN & watching the sqlgrey logging, is there any way that I can simply query the locally loaded whitelist for the IP/FQDN in question, or have it output the entire currently loaded whitelist to the console or logging?

All the data's in PostgreSQL. Use psql to write an SQL query, connecting as the postgresql user or some other user with rights.

Karl <ko...@me...>
Free Software: "You don't pay back, you pay forward." -- Robert A. Heinlein
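One caveat to Karl's suggestion: the database holds sqlgrey's auto-whitelist (AWL) tables, while the static client whitelists are loaded from flat files, so a DB query answers the AWL half of Doug's question. The sketch below uses an in-memory SQLite stand-in with the `from_awl` schema quoted elsewhere in this archive (the sample row is invented for illustration); against PostgreSQL you would run the same SELECT through psql.

```python
import sqlite3

# In-memory stand-in for the real sqlgrey database. Schema copied from
# the CREATE TABLE from_awl statement discussed elsewhere in this archive.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE from_awl (
    sender_name   varchar(64)  NOT NULL,
    sender_domain varchar(255) NOT NULL,
    src           varchar(39)  NOT NULL,
    first_seen    timestamp    NOT NULL,
    last_seen     timestamp    NOT NULL,
    PRIMARY KEY (src, sender_domain, sender_name))""")
# Hypothetical sample row, loosely based on the IRS example in this archive.
db.execute("INSERT INTO from_awl VALUES "
           "('vaftp03', 'qai.irs.gov', '66.77.65.237', '2010-11-01', '2010-11-17')")

def awl_entries_for(conn, ip):
    """Return auto-whitelist rows whose source address matches ip."""
    return conn.execute(
        "SELECT sender_name, sender_domain, src FROM from_awl WHERE src = ?",
        (ip,)).fetchall()
```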
From: Douglas M. <do...@im...> - 2010-11-17 21:33:16
Besides making a connection attempt from a whitelisted IP/FQDN & watching the sqlgrey logging, is there any way that I can simply query the locally loaded whitelist for the IP/FQDN in question, or have it output the entire currently loaded whitelist to the console or logging?

Thanks,
Doug Mortensen
Network Consultant
Impala Networks Inc
CCNA, MCSA, Security+, A+
Linux+, Network+, Server+
www.impalanetworks.com
P: (505) 327-7300
F: (505) 327-7545
From: Douglas M. <do...@im...> - 2010-11-17 21:31:46
The IRS has an automated email that goes out to professional independent tax preparers when they sign up on the IRS website. It does not retry after SQLgrey initially defers it. Is it advisable to whitelist them by IP or by FQDN? Is the only advantage of one over the other that IP is quicker because it doesn't require a DNS lookup (is a DNS lookup even done, or is the FQDN based only on the SMTP HELO hostname?), whereas FQDN is dynamic and will continue to work if the IPs are ever changed?

66.77.65.237 is the only IP I've seen, out of about 4 connection attempts over 2 days. The FQDN shown in the postfix connection logs is vaftp03.qai.irs.gov. Its DNS record also resolves to the IP just mentioned (66.77.65.237), and the PTR record for this IP resolves to the same hostname. So is it recommended to whitelist 66.77.65.237, 66.77.65, or vaftp03.qai.irs.gov?

Thanks,
Doug Mortensen
Network Consultant
Impala Networks Inc
CCNA, MCSA, Security+, A+
Linux+, Network+, Server+
www.impalanetworks.com
P: (505) 327-7300
F: (505) 327-7545
From: Bob T. <ta...@re...> - 2010-10-28 19:50:45
Quick question regarding time resolution on reconnect_delay:

# 1 minute delay
reconnect_delay = 1

# 30 sec delay?
reconnect_delay = .5

Would a .5 give me a 30s reconnect delay? In the past I hacked sqlgrey to give me seconds as a reconnect_delay.

--
Bob Tanner <ta...@re...> | Phone: (952) 943-8700
http://www.real-time.com, Linux, OSX, VMware | Fax: (952) 943-8500
Key fingerprint = F785 DDFC CF94 7CE8 AA87 3A9D 3895 26F1 0DDB E378
From: Paweł M. <paw...@pr...> - 2010-10-03 21:15:42
On Sun, 3 Oct 2010 22:52:07 +0200, Lionel Bouton <lio...@bo...> wrote:
> On Sun, 03 Oct 2010 22:27:02 +0200, Paweł Madej <paw...@pr...> wrote:
>> [...] Now if I want to use my database I need to edit sqlgrey main file and add prefixes to tables.
>
> I don't see any valid reason why an admin could not use a dedicated database for SQLgrey. Every officially supported DBMS (PostgreSQL, MySQL and SQLite) supports multiple databases, it's a core feature every DBA should expect and use.
> For software designed to share a common database (with a set of commonly used reference tables for example) I can understand prefixes. In SQLgrey's case it seems to me it's complexity for no good reason.
>
> Lionel

Hello,

The reason I need sqlgrey with prefixed tables is that I have foreign keys set to other mail tables which manage opt-in/out and so on. This is for the consistency of my mail database. Prefixes also give me better readability of tables. With separate databases that isn't possible.

--
Greets
Pawel
From: Lionel B. <lio...@bo...> - 2010-10-03 21:09:16
On Sun, 03 Oct 2010 22:27:02 +0200, Paweł Madej <paw...@pr...> wrote:
> [...] Now if I want to use my database I need to edit sqlgrey main file and add prefixes to tables.

I don't see any valid reason why an admin could not use a dedicated database for SQLgrey. Every officially supported DBMS (PostgreSQL, MySQL and SQLite) supports multiple databases; it's a core feature every DBA should expect and use.

For software designed to share a common database (with a set of commonly used reference tables, for example) I can understand prefixes. In SQLgrey's case it seems to me it's complexity for no good reason.

Lionel
From: Paweł M. <paw...@pr...> - 2010-10-03 20:25:51
On Sun, 03 Oct 2010 15:55:26 +0200, Dan Faerch <da...@ha...> wrote:
> I don't remember if there's an option for table prefixes.
> But as for the second part of your question, can't you just give permissions to create tables and indexes on first run? Then revoke these when the tables have been created?

I know that there are no prefixes for tables, but this was rather a request to the developers, who I hope subscribe to this list, as sqlgrey is a great piece of antispam software.

Yes, I can do this, but if there were a sqlgrey.sql schema it would be easier for me to prepare a full DB schema for all the services I use and load it into the MySQL server afterwards. For now, if I want to use my own database I need to edit the sqlgrey main file and add prefixes to the tables. If it is possible to change this, great; if not, I will edit it on every server I set up. But compared with other software that stores data in MySQL, it's rather common to have the above features.

--
Greets
Pawel
From: Dan F. <da...@ha...> - 2010-10-03 14:12:10
On 2010-10-03 09:12, Paweł Madej wrote:
> Hello,
>
> Is it possible to reconfigure sqlgrey script to store default tables in itself but allow to add table prefixes on my own? also it will be helpfull to get db structure in attached file not auto create because of permissions to db. I do not want to to give sqlgrey user permissions to operate on creating tables indexes and so on. only plain usage on previously created tables.

I don't remember if there's an option for table prefixes.

But as for the second part of your question, can't you just give permissions to create tables and indexes on the first run, then revoke them when the tables have been created?

- Dan
From: Paweł M. <paw...@pr...> - 2010-10-03 07:29:57
Hello,

Is it possible to reconfigure the sqlgrey script to store the default tables in itself but allow adding table prefixes on my own? It would also be helpful to get the DB structure in an attached file rather than auto-created, because of permissions to the DB. I do not want to give the sqlgrey user permissions to create tables, indexes and so on, only plain usage of previously created tables.

Greets
Pawel Madej
From: David D. <da...@en...> - 2010-06-16 12:43:47
Hi all,

The FreeBSD port of SQLgrey is still on 1.7.6; is anyone able to update it to 1.8.0? I've no idea how to write a FreeBSD port, but we could really use the improved IPv6 handling.

Regards,
--
David Derrick
Entanet International Ltd
T: 0333 101 0600
W: http://www.enta.net
From: Gary S. <gar...@ho...> - 2010-04-27 00:42:38
> > You could try talking with the load balancing folk.
>
> I'm working with them on this as well. As for right now, sqlgrey is the only service that I am having problems with. I had issues with mysql as well, but fixing the arp issue seemed to resolve it for that server. It did not however resolve it for sqlgrey. I'm pretty sure that it has something to do with the return close from postfix to the load balancer. I don't think that the close is actually making it back. At the same time, postfix enters a FIN_WAIT for a minute or so, then it falls off.
>
> Anyway, I will also check with the postfix group as well as there could be something in the closure logic for policy maps that's only brought forward during this type of scenario.

Things work much better now. The lost connections were because of iptables. I have this rule early on for the server that has the director. I guess the ACK FIN is technically an invalid state...

-A INPUT -p tcp -m conntrack --ctstate INVALID -j LOG --log-prefix "FW-I BF: "
-A INPUT -p tcp -m conntrack --ctstate INVALID -j REJECT --reject-with icmp-port-unreachable

Apr 26 04:36:02 wall1 kernel: FW-I BF: IN=br0 OUT= PHYSIN=eth1 MAC=00:50:56:b1:63:bc:00:0c:29:92:be:b7:08:00 SRC=10.80.66.24 DST=10.80.55.11 LEN=52 TOS=0x08 PREC=0x00 TTL=64 ID=40835 DF PROTO=TCP SPT=52114 DPT=3917 WINDOW=363 RES=0x00 ACK FIN URGP=0

Gary Smith
From: Karl O. P. <ko...@me...> - 2010-04-27 00:06:32
On 04/26/2010 05:36:15 PM, Gary Smith wrote:
> > You could try talking with the load balancing folk.
>
> I'm working with them on this as well. As for right now, sqlgrey is the only service that I am having problems with. [...] I'm pretty sure that it has something to do with the return close from postfix to the load balancer. I don't think that the close is actually making it back. At the same time, postfix enters a FIN_WAIT for a minute or so, then it falls off.
>
> Anyway, I will also check with the postfix group as well as there could be something in the closure logic for policy maps that's only brought forward during this type of scenario.

If postfix is in FIN_WAIT then it thinks the tcp connection is closed. IIRC FIN_WAIT is a TCP delay that avoids accidentally injecting old packets that are still on the wire into a new TCP session. (See the TCP RFC.)

Karl <ko...@me...>
Free Software: "You don't pay back, you pay forward." -- Robert A. Heinlein
From: Gary S. <gar...@ho...> - 2010-04-26 22:36:26
> You could try talking with the load balancing folk.

I'm working with them on this as well. As for right now, sqlgrey is the only service that I am having problems with. I had issues with mysql as well, but fixing the arp issue seemed to resolve it for that server. It did not however resolve it for sqlgrey. I'm pretty sure that it has something to do with the return close from postfix to the load balancer. I don't think that the close is actually making it back. At the same time, postfix enters a FIN_WAIT for a minute or so, then it falls off.

Anyway, I will also check with the postfix group as well, as there could be something in the closure logic for policy maps that's only brought forward during this type of scenario.

Gary Smith
From: Gary S. <gar...@ho...> - 2010-04-26 20:04:44
> Ive not tried LVS'ing sqlgrey but i do use LVS (ipvsadm) alot.
> I run sqlgrey on the MTA, and then LVS to the MTA's. The MTA's then communicate with sqlgrey on localhost. Sqlgrey reads from MySQL on localhost and writes to a master for replication.
> [...]
> Of course, not knowing you LVS setup and config, i can only guess. But a typical misconfigure would be using "Direct Routing" (the "gatewaying" option in ipvsadm. which is default), without taking precautions against the cluster nodes ARP'ing the virutal ip. If this is the case, all cluster nodes would "battle" for the virtual ip, making the ip "hop" around the nodes.
> That would, i imagine, leave ESTABLISHED connections behind.

Dan,

I'll look into this. The ARP problem makes sense; we have been doing some work with the ARP configurations on these, and this is a new cluster environment. As for the routing, we are currently using NAT (-m) for all nodes. I'm pretty sure that it's the load balancer that has introduced the problems. We do have other IPVS nodes running (which have been running for years) and this is the first set where I have run into the lingering connection problems.

Thanks for the pointers.

> Regards
> - Dan Faerch
From: Karl O. P. <ko...@me...> - 2010-04-26 20:03:30
On 04/26/2010 02:43:26 PM, Gary Smith wrote:
> I will also try some of the options on the load balancer, but if persistent connections didn't resolve it, then its doubtful that other options will.

You could try talking with the load balancing folk.

Karl <ko...@me...>
Free Software: "You don't pay back, you pay forward." -- Robert A. Heinlein
From: Dan F. <da...@ha...> - 2010-04-26 19:59:43
Gary Smith wrote:
>> I have setup two sqlgrey servers load balanced with ipvsadm. Load balancing is operating but I end up with a lot of orphaned ESTABLISHED connections on the real servers. In a period of 48 hours, I received ~500 requests (per real server) and there were about ~250 established connections per server.

I've not tried LVS'ing sqlgrey, but I do use LVS (ipvsadm) a lot. I run sqlgrey on the MTAs, and then LVS to the MTAs. The MTAs then communicate with sqlgrey on localhost. Sqlgrey reads from MySQL on localhost and writes to a master for replication.

When you say ESTABLISHED, I assume you mean from looking at the output of netstat or something like it. If so, I can't imagine that it can be an application-level problem. If postfix deliberately closes the connection (e.g. due to a timeout), it should/would transmit either a TCP RST or FIN. The receiving party (your cluster node) will handle RST and FIN in the kernel's IP stack, not in the application. Upon receiving FIN, "ESTABLISHED" should have changed to "CLOSE-WAIT" in netstat.

Which suggests that your cluster node does not actually receive a FIN, which again suggests that the connection drops before your cluster node, i.e. at the load balancer.

Of course, not knowing your LVS setup and config, I can only guess. But a typical misconfiguration would be using "Direct Routing" (the "gatewaying" option in ipvsadm, which is the default) without taking precautions against the cluster nodes ARP'ing the virtual IP. If this is the case, all cluster nodes would "battle" for the virtual IP, making the IP "hop" around the nodes. That would, I imagine, leave ESTABLISHED connections behind.

Regards
- Dan Faerch
From: Gary S. <gar...@ho...> - 2010-04-26 19:44:37
> SQLgrey is doing the correct thing in this case. It does not know why the connection is gone or even if it is gone for a while. The load balancer should close the connection to the remote SQLgrey when the frontends go away or depending on how it works, when all connections from the frontend are closed. This will keep SQLgrey from holding old connections around until they are reclaimed. It is useful to have timeouts such as you mention to handle other bits of poorly designed software.

I will probably implement some level of optional timeouts in the codebase that I have, and I will provide it as a patch for those who might be interested. I think adding it as an optional config variable has merit; default it to 0, which would just bypass the timeout for normal operation.

For the most part, postfix->sqlgrey has always worked flawlessly, and it's this load balancing that is disrupting the normal flow. I will also try some of the options on the load balancer, but if persistent connections didn't resolve it, then it's doubtful that other options will. If we are seeing this issue in test (~500 msg/day), then production might fail entirely (~250 msg/minute). That's where my concern really is.
From: Kenneth M. <kt...@ri...> - 2010-04-26 19:10:16
On Mon, Apr 26, 2010 at 11:38:34AM -0700, Gary Smith wrote:
> Okay, isolating it to a single real server node in the load balanced cluster still causes the same result. It appears that after N seconds, postfix hangs up on the connection but it's not realized by sqlgrey, probably because of the load balancer. So it is then up to the OS TCP TTL settings to kill the TCP connection.
>
> The concern here is that sqlgrey isn't reacting gracefully when connections are abandoned (that is closed, but never receiving notification). It stands out when something like load balancing is put in place (my observation, I could still be wrong here).
>
> It might be useful to put some type of sanity timeout check in place for a case like this.

SQLgrey is doing the correct thing in this case. It does not know why the connection is gone, or even whether it is gone for a while. The load balancer should close the connection to the remote SQLgrey when the frontends go away or, depending on how it works, when all connections from the frontend are closed. This will keep SQLgrey from holding old connections around until they are reclaimed. It is useful to have timeouts such as you mention to handle other bits of poorly designed software.

Cheers,
Ken
From: Gary S. <gar...@ho...> - 2010-04-26 18:38:45
> I have setup two sqlgrey servers load balanced with ipvsadm. Load balancing is operating but I end up with a lot of orphaned ESTABLISHED connections on the real servers. In a period of 48 hours, I received ~500 requests (per real server) and there were about ~250 established connections per server.
>
> When I bypass ipvsadm and just go direct to a single server, I see only a few connections established (and there is a corresponding connection on the postfix side).
>
> Does anyone else on the list run sqlgrey in an ipvsadm load balanced scenario? If so, any pointers? Postfix seems to have no complaint on this, but I think by design it reconnects when the connection is gone.

This might be helpful for people on the list.

Okay, isolating it to a single real server node in the load-balanced cluster still causes the same result. It appears that after N seconds, postfix hangs up on the connection, but this is not noticed by sqlgrey, probably because of the load balancer. So it is then up to the OS TCP TTL settings to kill the TCP connection.

I have put a dirty hack in place just to test. I have set up $mux->set_timeout in mux_input, and in the mux_timeout callback I close the current filehandle ($fh). I do this after the processing has taken place (immediately after the while loop). It's probably the wrong thing to do, but the connection is closed after the timeout, and even this closure is seen by postfix and the load balancer.

The concern here is that sqlgrey isn't reacting gracefully when connections are abandoned (that is, closed but without notification ever being received). It stands out when something like load balancing is put in place (my observation; I could still be wrong here).

It might be useful to put some type of sanity timeout check in place for a case like this. If you have a reasonably configured default TTL for TCP at the OS level then the impact is probably minimal.

I have been using sqlgrey for some years now, and I am migrating it to a separate cluster (currently it lives on each MTA, but I'm trying to break that, as we have a resource need to move this to its own cluster).

Thoughts?
From: Gary S. <gar...@ho...> - 2010-04-26 16:57:09
I have setup two sqlgrey servers load balanced with ipvsadm. Load balancing is operating, but I end up with a lot of orphaned ESTABLISHED connections on the real servers. In a period of 48 hours, I received ~500 requests (per real server) and there were about ~250 established connections per server.

When I bypass ipvsadm and just go direct to a single server, I see only a few connections established (and there is a corresponding connection on the postfix side).

Does anyone else on the list run sqlgrey in an ipvsadm load-balanced scenario? If so, any pointers? Postfix seems to have no complaint about this, but I think by design it reconnects when the connection is gone.
From: Gary S. <gar...@ho...> - 2010-04-26 16:41:06
I had a need to send sqlgrey output to a separate log facility, so I hacked the code and created a patch. You might find it useful for the mainline, as not all people want to log to mail. It defaults to mail, though. In sqlgrey.conf you would just add:

log_facility = local7

Gary Smith
From: Kyle L. <la...@uc...> - 2010-04-07 03:39:02
Daniel McDonald wrote:
> ERROR 1071 (42000): Specified key was too long; max key length is 1000 bytes
>
> That error didn't make much sense to me, as 64+255+39 < 1000

This is probably because Unicode is involved. See also:
http://www.xaprb.com/blog/2006/04/17/max-key-length-in-mysql/

64+255+39 = 358 > 333.

--Kyle
From: Daniel M. <dan...@au...> - 2010-04-06 22:47:35
I'm trying to bring up a new instance of sqlgrey, and it is unhappy creating the from_awl table:

Apr 6 17:09:49 ma sqlgrey: dbaccess: warning: couldn't do query: CREATE TABLE from_awl (sender_name varchar(64) NOT NULL, sender_domain varchar(255) NOT NULL, src varchar(39) NOT NULL, first_seen timestamp NOT NULL, last_seen timestamp NOT NULL, PRIMARY KEY (src, sender_domain, sender_name)): , reconnecting to DB

I tried to run this command interactively and was returned:

mysql> use sqlgrey
Database changed
mysql> CREATE TABLE from_awl (sender_name varchar(64) NOT NULL, sender_domain varchar(255) NOT NULL, src varchar(39) NOT NULL, first_seen timestamp NOT NULL, last_seen timestamp NOT NULL, PRIMARY KEY (src, sender_domain, sender_name));
ERROR 1071 (42000): Specified key was too long; max key length is 1000 bytes

That error didn't make much sense to me, as 64+255+39 < 1000.

mysql> quit
Bye

Sqlgrey is version 1.7.6, from a locally rebuilt SRPM on Mandriva Enterprise Server 5.1:

$ rpm -qi sqlgrey
Name: sqlgrey          Version: 1.7.6          Release: 2mdvmes5.1
Vendor: Austin Energy
Build Date: Thu 01 Apr 2010 08:57:20 AM CDT
Install Date: Thu 01 Apr 2010 09:18:18 AM CDT
Group: System/Servers
Source RPM: sqlgrey-1.7.6-2mdvmes5.1.src.rpm
Size: 147603          License: GPL
URL: http://sqlgrey.sourceforge.net
Summary: Postfix grey-listing policy service
Description: SQLgrey is a Postfix grey-listing policy service with auto-white-listing written in Perl with SQL database as storage backend. Greylisting stops 50 to 90% of junk mail (spam and viruses) before it reaches your Postfix server (saving bandwidth, user time and CPU time).

MySQL is version 5.0.89, using the distro-supplied RPM:

$ rpm -qi mysql-max
Name: mysql-max        Version: 5.0.89         Release: 0.1mdvmes5
Vendor: Mandriva
Build Date: Sun 17 Jan 2010 09:43:11 AM CST
Install Date: Thu 01 Apr 2010 07:31:50 AM CDT
Build Host: titan.mandriva.com
Group: System/Servers
Source RPM: mysql-5.0.89-0.1mdvmes5.src.rpm
Size: 8645155         License: GPL
Packager: Mandriva Linux Security Team <sec...@ma...>
URL: http://www.mysql.com
Summary: MySQL - server with extended functionality

Clues welcome!
--
Daniel J McDonald, CCIE # 2495, CISSP # 78281