Re: [mod-security-users] DDOS on the appl level, timeouts and blacklisting
From: Ivan R. <iva...@gm...> - 2006-09-18 14:18:57
On 9/16/06, Christian Folini <chr...@ti...> wrote:

> Hello,
>
> ...
>
> A private exchange with Ryan and Ivan evolved. This directed me
> away from mod_security.

No, no, no, we have to find a way to get you back now :)

> A single apache thread or process lacks the "whole picture" and
> it can not do a proper assessment of the probability of an attack.

With ModSecurity 2.0 you actually get a persistent data store that is
shared across processes and threads. But even that will not solve the
problem if you have a cluster of web servers. (At least not until I
release a cluster-ready version of ModSecurity, that is.)

> Therefore i propose to defend in 3 seperate steps:
> - reconnaissance (apache log)
> - analysis (external process scanning logfiles in realtime)
> - defense (ip blacklisting or anything else)

I believe it was Ryan who mentioned httpd_guardian in response to your
recent email. On a single box httpd_guardian connects to Apache via the
piped logging mechanism. If you need to support a cluster it will work
with the Spread toolkit to receive the required information in real
time. This utility is at this point capable of detecting brute force
attacks, but it can (and should) be extended to handle the DoS attacks
you describe.

> It can be argued, that 2 and 3 should be within a single
> mechanism, but the important thing is to seperate reconnaissance
> and analysis.

I fully agree.

> What do we need to notice the request delaying attack when
> it is happening? We need information about the state of
> a request. This is what mod_status is about, but mod_status
> does not tell us the ip address of a client host during the
> read phase and furthermore, accessing http://localhost/status
> may be a hopeless task during a DoS attack.

Slightly off-topic but potentially useful to some: mod_backdoor
(http://people.apache.org/~trawick/) keeps a separate process ready for
access in cases like that.

> The problem is, that there is nothing like a close connection hook.
> However, in http://hypermail.linklord.com/new-httpd/2004/Feb/3597.html
> i have found a patch from Joe Orton, that brings a finish_connection hook.

Such a hook might not be necessary, though. Each connection comes with
its own memory pool, which is destroyed after the connection is closed.
You can simply register a callback with this memory pool and it will be
invoked just before the pool is destroyed (there is a small sketch of
this further down).

> I do not know, if this makes any sense at all. I wonder what you guys
> think and welcome any feedback.

I have some concerns (some of which you identified already):

1. A large number of messages may be needed per connection. This could
   be a performance bottleneck for large installations.

2. I don't think the Apache error log is the right place to write these
   messages. The sheer volume of messages would hide other potentially
   interesting log entries.

3. It is not possible for such an external checker to take care of
   large request bodies and file uploads.

How about this:

1. We keep the log-oriented approach.

2. In addition to working with Apache hooks as you did, we write an
   Apache input filter that looks at the raw data (there is a rough
   sketch of what I mean below). We look at the number of bytes
   received on every invocation and the time it took for the data to
   arrive. This would allow us to enforce:

   2.1 A time limit for the request headers to arrive.
   2.2 A minimum communication speed.

   A log entry is made only when a problem is detected.

3. Because log entries now appear only on attacks, we can use the
   error log for them.
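
To illustrate the pool clean-up idea from above, here is a rough and
untested sketch. It is not ModSecurity code; the module and function
names are made up, and it only logs the connection duration, but the
clean-up callback is where the connection state could be handed over
to whatever does the analysis:

#include "httpd.h"
#include "http_config.h"
#include "http_connection.h"
#include "http_log.h"
#include "apr_pools.h"
#include "apr_time.h"

typedef struct {
    conn_rec   *c;
    apr_time_t  started;
} conn_watch_t;

/* Runs just before the connection pool is destroyed, i.e. right
 * after the connection is closed. */
static apr_status_t conn_watch_cleanup(void *data)
{
    conn_watch_t *cw = data;
    apr_time_t duration = apr_time_now() - cw->started;

    /* This is where the connection state would be reported to the
     * analysis process; logging is just a placeholder. */
    ap_log_error(APLOG_MARK, APLOG_NOTICE, 0, cw->c->base_server,
                 "connection from %s lasted %" APR_TIME_T_FMT " usec",
                 cw->c->remote_ip, duration);

    return APR_SUCCESS;
}

static int conn_watch_pre_connection(conn_rec *c, void *csd)
{
    conn_watch_t *cw = apr_pcalloc(c->pool, sizeof(*cw));
    cw->c = c;
    cw->started = apr_time_now();

    /* No special hook needed; the cleanup fires when c->pool dies. */
    apr_pool_cleanup_register(c->pool, cw, conn_watch_cleanup,
                              apr_pool_cleanup_null);
    return OK;
}

static void conn_watch_register_hooks(apr_pool_t *p)
{
    ap_hook_pre_connection(conn_watch_pre_connection, NULL, NULL,
                           APR_HOOK_MIDDLE);
}

module AP_MODULE_DECLARE_DATA conn_watch_module = {
    STANDARD20_MODULE_STUFF,
    NULL, NULL, NULL, NULL, NULL,
    conn_watch_register_hooks
};

I am not sure pre_connection is the best place to register the
cleanup, but it shows the idea.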
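
And to make the filter idea in point 2 a bit more concrete, something
along these lines might do (again a rough, untested sketch; the filter
name, the limits and the log message are all made up for illustration):

#include "httpd.h"
#include "http_config.h"
#include "http_connection.h"
#include "http_log.h"
#include "util_filter.h"
#include "apr_time.h"

/* Made-up limits, purely for illustration. */
#define SPEED_TIME_LIMIT    apr_time_from_sec(30)
#define SPEED_MIN_BYTES_SEC 32

typedef struct {
    apr_time_t started;
    apr_off_t  bytes_so_far;
} speed_ctx_t;

static apr_status_t speed_in_filter(ap_filter_t *f, apr_bucket_brigade *bb,
                                    ap_input_mode_t mode,
                                    apr_read_type_e block,
                                    apr_off_t readbytes)
{
    speed_ctx_t *ctx = f->ctx;
    apr_off_t length = 0;
    apr_time_t elapsed;
    apr_status_t rv;

    if (ctx == NULL) {
        f->ctx = ctx = apr_pcalloc(f->c->pool, sizeof(*ctx));
        ctx->started = apr_time_now();
    }

    rv = ap_get_brigade(f->next, bb, mode, block, readbytes);
    if (rv != APR_SUCCESS) return rv;

    /* How much arrived this time, and how long have we been waiting? */
    apr_brigade_length(bb, 1, &length);
    ctx->bytes_so_far += length;
    elapsed = apr_time_now() - ctx->started;

    /* Log only when the client is too slow; stay silent otherwise. */
    if (elapsed > SPEED_TIME_LIMIT
        && ctx->bytes_so_far < SPEED_MIN_BYTES_SEC * apr_time_sec(elapsed))
    {
        ap_log_error(APLOG_MARK, APLOG_WARNING, 0, f->c->base_server,
                     "slow client %s: %" APR_OFF_T_FMT " bytes in %"
                     APR_TIME_T_FMT " usec",
                     f->c->remote_ip, ctx->bytes_so_far, elapsed);
    }

    return APR_SUCCESS;
}

static int speed_pre_connection(conn_rec *c, void *csd)
{
    ap_add_input_filter("SPEED_IN", NULL, NULL, c);
    return OK;
}

static void speed_register_hooks(apr_pool_t *p)
{
    ap_register_input_filter("SPEED_IN", speed_in_filter, NULL,
                             AP_FTYPE_CONNECTION);
    ap_hook_pre_connection(speed_pre_connection, NULL, NULL,
                           APR_HOOK_MIDDLE);
}

module AP_MODULE_DECLARE_DATA speed_filter_module = {
    STANDARD20_MODULE_STUFF,
    NULL, NULL, NULL, NULL, NULL,
    speed_register_hooks
};

The filter only measures and reports; the decision to block would
still live in the external analysis process, in line with your
three-step proposal.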
One drawback of the filter approach, though, is that it is passive - it
can only do its thing when the next batch of data arrives. This
shouldn't be a big problem provided the Apache timeout value is low
enough.

-- 
Ivan Ristic, Technical Director
Thinking Stone, http://www.thinkingstone.com
ModSecurity: Open source Web Application Firewall