Thread: [mod-security-users] Ideas for future features..
|
From: Zach R. <ad...@li...> - 2006-02-23 07:49:57
|
I know at least a few of us that use mod_security to enhance security in a shared webhosting environment have tried to tackle the problem of comment spam. The idea of using mod_security rules to block it isn't new. See gotroot.com's blacklist.conf for their attempt at it. The problem is that the idea of using flatfiles for a blacklist cannot possibly be sustained indefinitely as more of this comment spam surfaces. Even blocking the robots by IP will be nearly impossible using firewalls or flatfiles, as even firewalls will start to slow down servers after tens of thousands of IPs are added. The current solutions for blogs such as WordPress involve running a PHP script that accesses MySQL for each attempt and then blocking it based on certain criteria. While it works for now, I would hate to see the day when this type of spam is as common as email spam, with ten attempts per second each trying to run PHP and MySQL. In my opinion what is needed is support for dnsbl-type blacklists. Blar's mod_access_rbl was one attempt at this, but the results aren't cached so it isn't very efficient. A rule such as: SecFilterSelective "ARG_url" "^(http|https):/" lookup:combined.surbl.org,denyonfail Even better would be a way for mod_security to extract the domain from the argument and then pass it to the SURBL. Another rule might be: SecFilterSelective REMOTE_ADDR "regex_to_check_valid_ip" lookup:sbl-xbl.spamhaus.org,denyonfail I think you can see where I'm going with this. Zach |
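For illustration, the query-name construction such a lookup action would need can be sketched in Python (the zone names are taken from the rules above; the two-label domain extraction is a naive assumption, since real code would need a public-suffix list, and `is_listed` is a hypothetical helper):

```python
import socket
from urllib.parse import urlparse

def dnsbl_query_name(ip, zone="sbl-xbl.spamhaus.org"):
    """Build the query name for an IP-based DNSBL: 192.0.2.1 -> 1.2.0.192.<zone>."""
    return ".".join(reversed(ip.split("."))) + "." + zone

def surbl_query_name(url, zone="combined.surbl.org"):
    """Build the query name for a URL blacklist from a request argument.
    Naively keeps the last two labels of the host; a real implementation
    would consult the public-suffix list."""
    host = urlparse(url).hostname or ""
    return ".".join(host.split(".")[-2:] + [zone])

def is_listed(name):
    """A successful resolution (an A record) means 'listed'; NXDOMAIN means 'clean'."""
    try:
        socket.gethostbyname(name)
        return True
    except socket.gaierror:
        return False
```

A `denyonfail`-style rule would then deny the request whenever `is_listed()` returns True for the constructed name.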
|
From: Tom A. <tan...@oa...> - 2006-02-23 14:27:50
|
Zach Roberts wrote: > I know at least a few of us that use mod_security to enhance security in > a shared webhosting environment have tried to tackle the problem of > comment spam. The idea of using mod_security rules to block it isn't > new. See gotroot.com's blacklist.conf for their attempt at it. ... > In my opinion what is needed is support for dnsbl type blacklists. > Blar's mod_access_rbl was one attempt at this but, the results aren't > cached so it isn't very efficient. If I were running a blog or forum, I might pipe requests through Bogofilter or another statistical filter in order to remove spam. That would be more effective than playing cat and mouse with IP addresses. I'd put an admin-only button on my software to allow me to flag a post as spam, thus training the filter with it. Questionable or "unsure" posts would go into a non-public holding list which could then be approved manually, and upon doing so, train the filter with them. On my mail server, DNSBLs remove maybe half of all incoming spam. That's great and efficient, but not enough. The rest, up to about 99.9%, is removed through Bogofilter and a few helper scripts that do things like tag the email with the ASN of the sender and run any links through URLBLs, tagging the email with a token if they match. Out of thousands of spams per week, I only receive 1-2 false negatives and about 3-4 unsures, with no false positives. I'm sure very similar measures could be employed to quash comment spam. Trying to do it with mod_security is probably going to be about as effective as filtering email with procmail. It looks like a good idea at first, and even works a little bit, but quickly becomes unmanageable and ineffective. Tom |
|
From: Jason E. <jed...@ca...> - 2006-02-23 18:36:47
|
Zach Roberts wrote: > I know at least a few of us that use mod_security to enhance security > in a shared webhosting environment have tried to tackle the problem of > comment spam. The idea of using mod_security rules to block it isn't > new. See gotroot.com's blacklist.conf for their attempt at it. > > The problem is that the idea of using flatfiles for a blacklist cannot > possibly be sustained indefinitely as more of this comment spam > surfaces. Even blocking the robots by IPs will be nearly impossible > using firewalls or flatfiles as even firewalls will start to slow down > servers after tens of thousands of IPs are added. I haven't encountered the problem of too many blacklisted IPs yet. For that problem, we may want a non-flat-file option such as Berkeley DB, SQLite or something similar. Even sendmail compiles its aliases file. The thing I have noticed is that there is no way to reload the file besides restarting Apache. If you don't have firewall access and block IPs using mod_security (which I don't), it would be nice to be able to have the file reloaded periodically. Something like checking for an updated file every 5 minutes (configurable). > The current solutions for blogs such as WordPress involve running a > PHP script that accesses MySQL for each attempt and then blocking it > based on certain criteria. While it works for now I would hate to see > the day when this type of spam is as common as email spam getting ten > attempts per second while attempting to run PHP and MySQL. WordPress already does this by using a plugin called Spam Karma. > In my opinion what is needed is support for dnsbl type blacklists. > Blar's mod_access_rbl was one attempt at this but, the results aren't > cached so it isn't very efficient. > > A rule such as.. > > SecFilterSelective "ARG_url" "^(http|https):/" > lookup:combined.surbl.org,denyonfail > > Even a way of mod_security extracting the domain from the arguement > and then passing it to the surbl would be even better. 
> > Another rule might be.. > > SecFilterSelective REMOTE_ADDR "regex_to_check_valid_ip" > lookup:sbl-xbl.spamhaus.org,denyonfail > > I think you can see where I'm going with this. DNS lookups can drastically affect the performance of your server. It may take one second or longer to do the first lookup for an IP. The latency is noticeable. I use blacklists, but only on POST requests or after the request has been served. I have a cron job that runs every minute and uses the blacklist utility from http://www.apachesecurity.net to block IPs on the DNS blacklists, among other things. The blacklist utility keeps my blocklist from growing too large by expiring the entries. The other concern is that waiting on a DNS lookup before serving a request leaves you more open to a DoS attack. It would be nice if mod_security had this, but I would be very careful about implementing it. Jason Edgecombe |
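The expiry step Jason describes could look roughly like this (a hypothetical helper for illustration, not the actual blacklist utility from apachesecurity.net):

```python
import time

def expire_blocklist(blocklist, max_age, now=None):
    """Drop entries older than max_age seconds.
    blocklist maps IP address -> time the entry was added (epoch seconds)."""
    now = time.time() if now is None else now
    return {ip: added for ip, added in blocklist.items() if now - added <= max_age}
```

Running something like this from the per-minute cron job keeps the blocklist bounded, since each blocked IP ages out after `max_age` seconds.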
|
From: Michael S. <mi...@go...> - 2006-02-24 14:53:38
|
Jason Edgecombe wrote: > Zach Roberts wrote: >> I know at least a few of us that use mod_security to enhance security >> in a shared webhosting environment have tried to tackle the problem >> of comment spam. The idea of using mod_security rules to block it >> isn't new. See gotroot.com's blacklist.conf for their attempt at it. >> >> The problem is that the idea of using flatfiles for a blacklist >> cannot possibly be sustained indefinitely as more of this comment >> spam surfaces. Even blocking the robots by IPs will be nearly >> impossible using firewalls or flatfiles as even firewalls will start >> to slow down servers after tens of thousands of IPs are added. > I haven't encountered the problem of too many blacklisted IP's yet. > For that problem, we may want a non-flat-file option such as berkely > db, sqlite or something similar. Even sendmail compiles it's aliases > file. > > The thing I have noticed is that there is no way to reload the file > besides restarting apache. If you don't have firewall access and block > Ip's using mod_security (which I don't), it would be nice to be able > have the file reloaded periodically. something like check for an > updated file every 5 minutes (configurable). For those that are having problems with lots of IPs via the blacklist.conf rules (either as firewall rules, or using mod_security), I am setting up a special set of RBLs for those IPs (spammers, attackers, owned boxes, etc.). This will use a modified mod_access, which will allow for real time lookups of the IPs. rsync access will also be available to the zones for secondaries, and sites that wish to use the IPs for firewalling purposes. |
|
From: Ivan R. <iv...@we...> - 2006-02-24 15:52:56
|
Michael Shinn wrote: > > attackers, owned boxes, etc.). This will use a modified mod_access, > which will allow for real time lookups of the IPs. rsync access will > also be available to the zones for secondaries, and sites that wish to > use the IPs for firewalling purposes. How long do the lookups take to complete when the information is cached in a local DNS? -- Ivan Ristic, Technical Director Thinking Stone, http://www.thinkingstone.com ModSecurity: Open source Web Application Firewall |
|
From: Michael S. <mi...@go...> - 2006-02-24 18:46:24
|
On Fri, 2006-02-24 at 15:53 +0000, Ivan Ristic wrote: > Michael Shinn wrote: > > > > attackers, owned boxes, etc.). This will use a modified mod_access, > > which will allow for real time lookups of the IPs. rsync access will > > also be available to the zones for secondaries, and sites that wish to > > use the IPs for firewalling purposes. > > How long do the lookups take to complete when the information > is cached in a local DNS? Good question, I'll have to generate some official stats to quantify the performance, but I've been running a modified mod_access doing lookups against the spamhaus.org RBL for about a year and I've never noticed a measurable change in performance as a user. In short, a local DNS (in my case BIND) seems to do a good job of caching the lookups, in the same manner that a local DNS seems to do a good enough job with SMTP RBLs. Based on this anecdotal experience, I don't think that an internal cache would be necessary for mod_security to support RBL lookups. It may not even be optimal in some cases, as TTLs may require one lookup to be retried in 1 minute, another in 3600 seconds, etc., further complicating the code in mod_security to age these out differently. But, I definitely could see some users not being aware of the need to set up a local DNS and experiencing a significant performance problem and then perceiving that as a mod_security issue. Regardless, a local DNS in my experience does seem to do an adequate job, so my vote would be to add the RBL lookup capability to mod_security minus the cache for the initial testing release, and if the performance seems suboptimal with a local DNS, then add in a cache for the next phase of testing. -- Michael T. Shinn KeyID:0xDAE2EC86 Key Fingerprint: 1884 E657 A6DF DF1B BFB9 E2C5 DCC6 5297 DAE2 EC86 http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xDAE2EC86 Got Root? http://www.gotroot.com modsecurity rules: http://www.modsecurityrules.com Troubleshooting Firewalls: http://troubleshootingfirewalls.com |
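The per-entry TTL ageing that complicates an internal cache can be sketched in a few lines (purely illustrative; this is the bookkeeping a local DNS already does for free):

```python
def cache_get(cache, name, now):
    """Return a cached verdict if its TTL has not expired, else None.
    cache maps query name -> (verdict, expires_at)."""
    entry = cache.get(name)
    if entry and entry[1] > now:
        return entry[0]
    cache.pop(name, None)  # age out per-entry, since each lookup has its own TTL
    return None

def cache_put(cache, name, verdict, ttl, now):
    """Store a verdict with its own expiry, honouring the record's TTL."""
    cache[name] = (verdict, now + ttl)
```

One entry may expire in 60 seconds and its neighbour in 3600, which is exactly why a resolver-side cache is simpler than building this into mod_security.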
|
From: Zach R. <ad...@li...> - 2006-02-25 12:24:32
|
I apologize for being absent for most of the discussion. My schedule has been quite full lately. I have been using a forked mod_access_rbl for about a year now. While I don't use it to scan every request that comes in, I do use it to control access to two or three files that are accessed quite a bit. For these three files I am using seven different blacklists and I've noticed no drop in performance. I don't think DNS lookups are all that heavy in terms of resource usage when compared to PHP/MySQL being run for every spambot request, but using dnsrbls to deny access to an entire website could be fairly resource intensive. The idea of an internal cache was mostly to save additional DNS lookups to the local DNS server, but it isn't necessary. The performance of the existing modules with no internal cache is enough that it shouldn't be a problem for protecting a few files, but if this were going to be deployed to scan every request or every argument sent to the server it could be a problem. ---- The issue of IPs in a firewall that I mentioned wasn't really directed at 5000 - 10000 IPs but more along the lines of 40000 - 50000 IPs. It scales for now, but we need a better solution for the future. --- > As a matter of fact, ModSecurity 1.8.x-dev was able to interface > with external spam checkers. I announced it on the list (I think) > but since no one used it I removed it prior to 1.9 final. I believe this sort of checking needs to be internal. Accessing an external Perl script for example would be far too resource intensive if it were used to scan a very large number of incoming connections. I can see you guys have a good handle on the situation. The future features of 2.0.0 look very promising with functionality similar to mod_evasive. If the functionality works with FrontPage too (mod_evasive does not) it will be all that much better. 
Zach Michael Shinn wrote: >On Fri, 2006-02-24 at 15:53 +0000, Ivan Ristic wrote: > > >>Michael Shinn wrote: >> >> >>>attackers, owned boxes, etc.). This will use a modified mod_access, >>>which will allow for real time lookups of the IPs. rsync access will >>>also be available to the zones for secondaries, and sites that wish to >>>use the IPs for firewalling purposes. >>> >>> >> How long do the lookups take to complete when the information >> is cached in a local DNS? >> >> > >Good question, I'll have to generate some official stats to quantify the >performance, but I've been running a modified mod_access doing lookups >against the spamhaus.org RBL for about a year and I've never noticed a >measurable change in performance as a user. In short, a local DNS (in >my case bind) seems to do an good job of caching the lookups, in the >same manner that a local DNS seems to do a good enough job with SMTP >RBLs. > >Based on this anecdotal experience,I don't think that an internal cache >would be necessary for mod_security to support RBL lookups. It may not >even be optimal in some cases, as TTLs may require one lookup to be >retried in 1 minute, another in 3600, etc. further complicating the code >in mod_security to age these out differently. But, I definitely could >see some users not being aware of the need to setup a local DNS and >experiencing a significant performance problem and then perceiving that >as a mod_security issue. > >Regardless, a local DNS in my experience does seem to do an adequate >job, so my vote would be to add the RBL lookup capability to >mod_security minus the cache for the initial testing release, and if the >performance seems suboptimal with a local DNS, then add in a cache for >the next phase of testing. > > > |
|
From: Ivan R. <iv...@we...> - 2006-02-25 16:14:14
|
Zach Roberts wrote: > I apologize for being absent for most of the discussion. My schedule has > been quite full lately. > > I have been using a forked mod_access_rbl for about a year now. While I > don't use it to scan every request that comes in I do use it to control > access to two or three files that are accessed quite a bit. For these > three files I am using seven different blacklists and I've noticed no > drop in performance. Without a local cache? > As a matter of fact, ModSecurity 1.8.x-dev was able to interface > with external spam checkers. I announced it on the list (I think) > but since no one used it I removed it prior to 1.9 final. > > I believe this sort of checking needs to be internal. Accessing an > external Perl script for example would be far too resource intensive if > it were used to scan a very large number of incoming connections. Forking to execute a Perl script might not be feasible, but talking to an already-running daemon may be better. I'd really hate to see ModSecurity integrate a spam checker :) > I can see you guys have a good handle on the situation. The future > features of 2.0.0 look very promising with functionality similar to > mod_evasive. BTW, even now you can have protection better than with mod_evasive using httpd-guardian (http://www.apachesecurity.net/tools/). And, in terms of performance, probably faster than what will be available in ModSecurity v2.0. > If the functionality works with Frontpage too (mod_evasive > does not) it will be all that much better. That's interesting. What is the problem with FrontPage? -- Ivan Ristic, Technical Director Thinking Stone, http://www.thinkingstone.com ModSecurity: Open source Web Application Firewall Apache Security (O'Reilly): http://www.apachesecurity.net |
|
From: Zach R. <ad...@li...> - 2006-02-25 21:59:47
|
Ivan Ristic wrote: >Zach Roberts wrote: > > >>I apologize for being absent for most of the discussion. My schedule has >>been quite full lately. >> >>I have been using a forked mod_access_rbl for about a year now. While I >>don't use it to scan every request that comes in I do use it to control >>access to two or three files that are accessed quite a bit. For these >>three files I am using seven different blacklists and I've noticed no >>drop in performance. >> >> > > Without a local cache? > > > Just a local DNS cache. >> As a matter of fact, ModSecurity 1.8.x-dev was able to interface >> with external spam checkers. I announced it on the list (I think) >> but since no one used it I removed it prior to 1.9 final. >> >>I believe this sort of checking needs to be internal. Accessing an >>external Perl script for example would be far too resource intensive if >>it were used to scan a very large number of incoming connections. >> >> > > Forking to execute a Perl script might not be feasible, but > talking to an already-running daemon may be better. I'd really > hate to see ModSecurity integrate a spam checker :) > > > > I would hate to see the spam checker daemon die for some reason and then Apache serve broken pages. Perhaps backreferences and RBL lookups built internally for the sake of the system administrators? ;) >>I can see you guys have a good handle on the situation. The future >>features of 2.0.0 look very promising with functionality similar to >>mod_evasive. >> >> > > BTW, even now you can have protection better than with mod_evasive > using httpd-guardian (http://www.apachesecurity.net/tools/). And, > in terms of performance, probably faster than what will be available > in ModSecurity v2.0. > > > > I'll look at it. It might prove useful. >>If the functionality works with Frontpage too (mod_evasive >>does not) it will be all that much better. >> >> > > That's interesting. What is the problem with FrontPage? 
> > > It interferes with publishing content via port 80. Nothing critical in my eyes since it gave me a good excuse to get rid of the Frontpage extensions completely. ;) Zach |
|
From: Ivan R. <iv...@we...> - 2006-02-25 22:46:17
|
Zach Roberts wrote: > Ivan Ristic wrote: > >> Zach Roberts wrote: >> >> >>> I apologize for being absent for most of the discussion. My schedule has >>> been quite full lately. >>> >>> I have been using a forked mod_access_rbl for about a year now. While I >>> don't use it to scan every request that comes in I do use it to control >>> access to two or three files that are accessed quite a bit. For these >>> three files I am using seven different blacklists and I've noticed no >>> drop in performance. >>> >> >> Without a local cache? >> > Just a local DNS cache. Just to double-check: and by that you mean the cache that's in the libresolv library, not a local caching DNS server? >> BTW, even now you can have protection better than with mod_evasive >> using httpd-guardian (http://www.apachesecurity.net/tools/). And, >> in terms of performance, probably faster than what will be available >> in ModSecurity v2.0. >> > I'll look at it. It might prove useful. I am looking for testers. You can even cluster it using Spread. >>> If the functionality works with Frontpage too (mod_evasive >>> does not) it will be all that much better. >>> >> >> That's interesting. What is the problem with FrontPage? >> > It interferes with publishing content via port 80. I meant to ask if you had any specific knowledge of how FrontPage triggers mod_evasive. Does it perform too many requests in a short period of time? Anything that would help me avoid the problem ;) -- Ivan Ristic, Technical Director Thinking Stone, http://www.thinkingstone.com ModSecurity: Open source Web Application Firewall Apache Security (O'Reilly): http://www.apachesecurity.net |
|
From: Michael S. <mi...@go...> - 2006-02-26 04:02:37
|
Zach Roberts wrote: >> >> Forking to execute a Perl script might not be feasible, but >> talking to an already-running daemon may be better. I'd really >> hate to see ModSecurity integrate a spam checker :) >> >> >> >> > I would hate to see the spam checker daemon die for some reason and > then Apache serve broken pages. > > Perhaps backreferences and RBL lookups built internally for the sake > of the system administrators? ;) Or just fail safe with a pass. |
|
From: Michael S. <mi...@go...> - 2006-02-26 04:01:46
|
Ivan Ristic wrote: >> I believe this sort of checking needs to be internal. Accessing an >> external Perl script for example would be far too resource intensive if >> it were used to scan a very large number of incoming connections. >> > > > Forking to execute a Perl script might not be feasible, but > talking to an already-running daemon may be better. I'd really > hate to see ModSecurity integrate a spam checker :) > Oh god, please no internal spam checker. :-) |
|
From: Zach R. <ad...@li...> - 2006-02-26 04:04:36
|
Internal spam checker meaning a means of calling dnsrbl lookups from the surbl based on backreferences or dnsrbl lookups on the remote address to try to prevent open proxy access? Zach Michael Shinn wrote: >Ivan Ristic wrote: > > >>>I believe this sort of checking needs to be internal. Accessing an >>>external Perl script for example would be far too resource intensive if >>>it were used to scan a very large number of incoming connections. >>> >>> >>> >> Forking to execute a Perl script might not be feasible, but >> talking to an already-running daemon may be better. I'd really >> hate to see ModSecurity integrate a spam checker :) >> >> >> > >Oh god, please no internal spam checker. :-) > > |
|
From: Ivan R. <iv...@we...> - 2006-02-26 10:13:07
|
Zach Roberts wrote: > Internal spam checker meaning a means of calling dnsrbl lookups from the > surbl based on backreferences or dnsrbl lookups on the remote address to > try to prevent open proxy access? That I will add. I was referring to e.g. having a Bayesian filter built into ModSecurity. -- Ivan Ristic, Technical Director Thinking Stone, http://www.thinkingstone.com ModSecurity: Open source Web Application Firewall Apache Security (O'Reilly): http://www.apachesecurity.net |
|
From: Zach R. <ad...@li...> - 2006-02-26 11:20:14
|
A Bayesian-style filter is much better suited to an external daemon. I hope you caught what Michael wrote about a softfail setting: if the external daemon fails, have mod_security allow the connection anyway. Zach Ivan Ristic wrote: >Zach Roberts wrote: > > >>Internal spam checker meaning a means of calling dnsrbl lookups from the >>surbl based on backreferences or dnsrbl lookups on the remote address to >>try to prevent open proxy access? >> >> > > That I will add. I was referring to e.g. having a Bayesian filter > built into ModSecurity. > > > |
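The softfail behaviour being asked for amounts to a small fail-open wrapper around the external check; a sketch (the `checker` callable is hypothetical):

```python
def check_with_failopen(checker, request):
    """Consult an external spam-checking daemon but fail open: if the
    daemon is down or errors out, allow the request rather than have
    Apache serve broken pages. True = allow, False = block."""
    try:
        return checker(request)
    except Exception:
        return True  # soft-fail: checker unreachable, let the request through
```

The same wrapper inverted (return False on error) would be the `denyonfail` variant from the original proposal.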
|
From: Ivan R. <iv...@we...> - 2006-02-25 16:08:19
|
Michael Shinn wrote: > > Based on this anecdotal experience,I don't think that an internal cache > would be necessary for mod_security to support RBL lookups. I agree. Still, I will have a cache which will be used for other things (e.g. for brute force attack detection). If this cache is enabled then there is practically no further cost for the RBL cache. -- Ivan Ristic, Technical Director Thinking Stone, http://www.thinkingstone.com ModSecurity: Open source Web Application Firewall Apache Security (O'Reilly): http://www.apachesecurity.net |
|
From: Ivan R. <iv...@we...> - 2006-02-23 19:18:32
|
Zach Roberts wrote:
>
> The problem is that the idea of using flatfiles for a blacklist cannot
> possibly be sustained indefinitely as more of this comment spam
> surfaces. Even blocking the robots by IPs will be nearly impossible
> using firewalls or flatfiles as even firewalls will start to slow down
> servers after tens of thousands of IPs are added.
That's a problem because these devices are rule-based and the rules
need to be processed sequentially.
Some news: the 2.0.0 code in the CVS supports blacklisting on the
Apache level. The IP addresses are stored in a SDBM database and
only one lookup is needed per request to establish whether it is
blacklisted or not.
There is also a new action - "blockip:DURATION". This may not be
very useful at the moment but:
1. 2.0.0 will also add a rating mechanism, similar to that used
by spam filters.
2. I want to enable ModSecurity to keep track of IP, user, session,
and address ratings.
So, for example, if you get too many hits from the same IP address
you can choose to block it for a while.
OK, now back to the original proposal. There are two ways to approach
it:
1. At the moment the database contains only the blacklisted
addresses. It is possible to start caching clean IP addresses.
That would replace one or multiple DNS resolution attempts with
a single lookup.
2. ModSecurity v2.0.0 is also likely to have an API (web-based)
to allow IP addresses to be added and removed from the list.
An external tool could be used to add/remove the IP addresses.
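The single-lookup, duration-based blocking described above (an SDBM store plus "blockip:DURATION") could be sketched, purely illustratively, with Python's dbm module standing in for SDBM:

```python
import time

def block_ip(db, ip, duration, now=None):
    """Record ip as blocked until now + duration ("blockip:DURATION" semantics)."""
    now = time.time() if now is None else now
    db[ip] = str(now + duration)

def is_blocked(db, ip, now=None):
    """One key lookup per request, as with the SDBM store described above.
    An expired entry is treated as not blocked."""
    now = time.time() if now is None else now
    expires = db.get(ip.encode())
    return expires is not None and float(expires) > now
```

An external tool (or the web-based API mentioned in option 2) would simply write and delete keys in the same database.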
> Blar's mod_access_rbl was one attempt at this but, the results aren't
> cached so it isn't very efficient.
This is option 1 above - it's pretty trivial to add to ModSecurity.
> A rule such as..
>
> SecFilterSelective "ARG_url" "^(http|https):/"
> lookup:combined.surbl.org,denyonfail
What would the above look up? The contents of parameter "url"?
Perhaps it is a better idea to use regex backreferences for
this...
> Even a way of mod_security extracting the domain from the arguement and
> then passing it to the surbl would be even better.
Right, backreferences.
--
Ivan Ristic, Technical Director
Thinking Stone, http://www.thinkingstone.com
ModSecurity: Open source Web Application Firewall
|
|
From: Ivan R. <iv...@we...> - 2006-02-23 19:43:09
|
Tom Anderson wrote: > > If I were running a blog or forum, I might pipe requests through > Bogofilter or another statistical filter in order to remove spam. As a matter of fact, ModSecurity 1.8.x-dev was able to interface with external spam checkers. I announced it on the list (I think) but since no one used it I removed it prior to 1.9 final. If there's interest it can go back. However, I certainly don't have time to test the effectiveness of that approach. -- Ivan Ristic, Technical Director Thinking Stone, http://www.thinkingstone.com ModSecurity: Open source Web Application Firewall |
|
From: Tom A. <tan...@oa...> - 2006-02-23 20:13:14
|
Ivan Ristic wrote: > As a matter of fact, ModSecurity 1.8.x-dev was able to interface > with external spam checkers. I announced it on the list (I think) > but since no one used it I removed it prior to 1.9 final. > > If there's interest it can go back. However, I certainly don't > have time to test the effectiveness of that approach. Sounds interesting and worthwhile. I'm not a candidate for testing it out right now, but maybe Zach Roberts would be interested. Tom |
|
From: BassPlayer <bas...@an...> - 2006-02-23 20:17:48
|
I'd love to be able to pipe content through dspam! BP Ivan Ristic wrote: > Tom Anderson wrote: >> >> If I were running a blog or forum, I might pipe requests through >> Bogofilter or another statistical filter in order to remove spam. > > As a matter of fact, ModSecurity 1.8.x-dev was able to interface > with external spam checkers. I announced it on the list (I think) > but since no one used it I removed it prior to 1.9 final. > > If there's interest it can go back. However, I certainly don't > have time to test the effectiveness of that approach. > > -- > Ivan Ristic, Technical Director > Thinking Stone, http://www.thinkingstone.com > ModSecurity: Open source Web Application Firewall > |
|
From: Zach R. <ad...@li...> - 2006-02-26 02:04:47
|
Ivan Ristic wrote: >Zach Roberts wrote: > > >>Ivan Ristic wrote: >> >> >> >>>Zach Roberts wrote: >>> >>> >>> >>> >>>>I apologize for being absent for most of the discussion. My schedule has >>>>been quite full lately. >>>> >>>>I have been using a forked mod_access_rbl for about a year now. While I >>>>don't use it to scan every request that comes in I do use it to control >>>>access to two or three files that are accessed quite a bit. For these >>>>three files I am using seven different blacklists and I've noticed no >>>>drop in performance. >>>> >>>> >>>> >>> Without a local cache? >>> >>> >>> >>Just a local DNS cache. >> >> > > Just to double-check: and by that you mean the cache that's in the > libresolve library, not a local caching DNS server? > > > > I meant a local caching DNS server. >>> BTW, even now you can have protection better than with mod_evasive >>> using httpd-guardian (http://www.apachesecurity.net/tools/). And, >>> in terms of performance, probably faster than what will be available >>> in ModSecurity v2.0. >>> >>> >>> >>I'll look at it. It might prove useful. >> >> > > I am looking for testers. You can even cluster it using Spread. > > > > I'll try to look at it within the next week as I get time. >>>>If the functionality works with Frontpage too (mod_evasive >>>>does not) it will be all that much better. >>>> >>>> >>>> >>> That's interesting. What is the problem with FrontPage? >>> >>> >>> >>It interferes with publishing content via port 80. >> >> > > I meant to ask if you had any specific knowledge of how > FrontPage triggers mod_evasive. Does it perform too many > request in a short period of time? Anything that would help > me avoid the problem ;) > > > When I wrote that I meant that the method it uses to detect incoming DoS attacks interferes with Frontpage's operation. Most likely the reason being that it sees Frontpage's requests as a DoS due to the amount of connections Frontpage uses to publish. Zach |
|
From: Ryan B. <rcb...@gm...> - 2006-02-26 02:53:48
|
On 2/25/06, Zach Roberts <ad...@li...> wrote: > > > I meant to ask if you had any specific knowledge of how > > FrontPage triggers mod_evasive. Does it perform too many > > request in a short period of time? Anything that would help > > me avoid the problem ;) > > > > > > > When I wrote that I meant that the method it uses to detect incoming DoS > attacks interferes with Frontpage's operation. Most likely the reason > being that it sees Frontpage's requests as a DoS due to the amount of > connections Frontpage uses to publish. I am assuming that you would be using FrontPage to allow a small group of people to upload files. With this in mind, you can tweak mod_evasive in two ways: 1) use the whitelist directive to tell mod_evasive to ignore those authorized addresses who are using FrontPage, and/or 2) tweak the DOSSiteCount/DOSSiteInterval and DOSPageCount/DOSPageInterval ratios to a threshold that will allow FrontPage to work but will still trigger when someone launches a DoS attack. I had to tweak these settings in my environment to allow some of our own web monitoring tools to work. Just my $0.02. -- Ryan C. Barnett Web Application Security Consortium (WASC) Member CIS Apache Benchmark Project Lead SANS Instructor: Securing Apache GCIA, GCFA, GCIH, GSNA, GCUX, GSEC Author: Preventing Web Attacks with Apache |
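The two tweaks above map onto mod_evasive's directives roughly as follows (the address and thresholds are illustrative examples, not recommendations; tune them for your own traffic):

```apache
<IfModule mod_evasive20.c>
    # 1) Ignore authorized publishers (example address)
    DOSWhitelist        192.0.2.10

    # 2) Raise the ratios so FrontPage's burst of publishing
    #    connections stays under the trigger while a real flood does not
    DOSPageCount        10
    DOSPageInterval     1
    DOSSiteCount        150
    DOSSiteInterval     1
    DOSBlockingPeriod   60
</IfModule>
```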