From: Bert v. L. <ber...@gm...> - 2005-05-25 10:24:52
|
Perhaps it is not possible with snort-inline, that is what I am trying to
determine. I was hoping the "per-flow" rules would somehow be able to turn
off after a while, e.g. after n bytes of content had been seen etc., but I
guess there will always be many "per packet" rules checking for things like
illegal IP fragments and IP header weirdness etc.

I was not considering using custom signatures or any other customised
solution. The more I think about this, what I wanted to achieve doesn't
sound possible, at least not with snort, and perhaps not at all.

Is there perhaps a kind of IP_QUEUE mmap solution which will save cpu by not
having to actually copy the packet from kernel to userspace? (e.g. similar
to mmap pcap where ring buffers in the kernel are mapped to userspace
directly)

Roland Turner said:

> I'm not sure that I understand how this is possible. The fact that the
> first n packets of a flow don't match any signatures does not mean that no
> subsequent packets will.
>
> Do you have a specific situation in mind? (Custom signatures perhaps?)
>
>
> - Raz
>
|
From: Will M. <wil...@gm...> - 2005-05-25 13:10:09
|
>Is there perhaps a kind of IP_QUEUE mmap solution which will save cpu by not
>having to actually copy the packet from kernel to userspace? (e.g. similar to
>mmap pcap where ring buffers in the kernel are mapped to userspace directly)

hmmmm, not currently. You want to write one?

Regards,
Will

On 5/25/05, Bert van Leeuwen <ber...@gm...> wrote:
> Perhaps it is not possible with snort-inline, that is what I am trying to
> determine. I was hoping the "per-flow" rules would somehow be able to turn
> off after a while, e.g. after n bytes of content had been seen etc., but I
> guess there will always be many "per packet" rules checking for things like
> illegal IP fragments and IP header weirdness etc.
>
> I was not considering using custom signatures or any other customised
> solution. The more I think about this, what I wanted to achieve doesn't
> sound possible, at least not with snort, and perhaps not at all.
>
> Is there perhaps a kind of IP_QUEUE mmap solution which will save cpu by not
> having to actually copy the packet from kernel to userspace? (e.g. similar
> to mmap pcap where ring buffers in the kernel are mapped to userspace
> directly)
>
> Roland Turner said:
>
> > I'm not sure that I understand how this is possible. The fact that the
> > first n packets of a flow don't match any signatures does not mean that no
> > subsequent packets will.
> >
> > Do you have a specific situation in mind? (Custom signatures perhaps?)
> >
> >
> > - Raz
> >
>
|
From: Roland T. (SourceForge) <raz...@co...> - 2005-05-26 08:42:29
|
Bert van Leeuwen said:
> Perhaps it is not possible with snort-inline, that is what I am trying
> to determine. I was hoping the "per-flow" rules would somehow be able
> to turn off after a while, e.g. after n bytes of content had been seen
> etc., but I guess there will always be many "per packet" rules
> checking for things like illegal IP fragments and IP header weirdness
> etc.

I'm not clear on what you mean by a "per-flow" rule. Snort's basic inline
operation is to examine each datagram that arrives to see whether it
matches any signatures ("rules") and pass it through if not. This is
necessarily per-datagram; if it were to hold up traffic until all of a
"flow" had passed, it would disrupt the very communication that it's
trying to protect.

> Is there perhaps a kind of IP_QUEUE mmap solution which will save cpu
> by not having to actually copy the packet from kernel to userspace?
> (e.g. similar to mmap pcap where ring buffers in the kernel are mapped
> to userspace directly)

I suspect that IP_QUEUE already does exactly this. The major performance
problem isn't data copying, it's the kernel-user-kernel roundtrip for
_each_ datagram. There is apparently work in progress to extend libipq
(and the corresponding kernel interface) to allow batches of datagrams to
be passed across in a single kernel-user transition.

- Raz
|
From: Bert v. L. <ber...@gm...> - 2005-05-26 10:12:52
|
On 5/26/05, Roland Turner (SourceForge) <raz...@co...> wrote:
>
> I'm not clear on what you mean by a "per-flow" rule. Snort's basic inline
> operation is to examine each datagram that arrives to see whether it
> matches any signatures ("rules") and pass it through if not. This is
> necessarily per-datagram; if it were to hold up traffic until all of a
> "flow" had passed, it would disrupt the very communication that it's
> trying to protect.

I didn't mean that it should hold up traffic, but by "per-flow" rules I
meant things that maintain some sort of per flow state, e.g. the stream4
preprocessor. However, I realise now that there are many rules/signatures
that MUST operate on a per packet basis, since some attacks can also appear
in datagrams that appear to be "mid-flow", so what I originally wanted to do
is probably impossible. I guess the only viable alternative currently is to
write the iptables rules in such a way that not all traffic goes to the
QUEUE, but certain IPs or ports are considered "safe" (e.g. for intra-site
backups or DB replication or whatever) and bypass the QUEUE (and thus snort
too), but this could be dangerous (and security through obscurity is no
security at all).

> > Is there perhaps a kind of IP_QUEUE mmap solution which will save cpu
> > by not having to actually copy the packet from kernel to userspace?
>
> I suspect that IP_QUEUE already does exactly this. The major performance
> problem isn't data copying, it's the kernel-user-kernel roundtrip for
> _each_ datagram. There is apparently work in progress to extend libipq
> (and the corresponding kernel interface) to allow batches of datagrams to
> be passed across in a single kernel-user transition.
>

I'm not too sure about this, but from what I can see, the packets ARE
actually physically copied from kernel memory to userspace memory, and this
CAN have a measurable performance hit for a high speed network (e.g. gigE).
Try writing a simple userspace program which simply copies 1500 bytes many
times, you'll see what I mean.

If one looks at the libipq code in snort, it initialises the ipq by doing:

ipq_set_mode(ipqh, IPQ_COPY_PACKET, PKT_BUFSIZE);

and when it reads a packet, it needs to supply a user allocated buffer:

ipq_read(ipqh, buf, PKT_BUFSIZE, 0);

The packets are at least not copied back from userspace to kernel space
though, unless the packet content has been modified, in which case
ipq_set_verdict must be called with a non-null buf parameter.

The only alternative to the IPQ_COPY_PACKET mode is the IPQ_COPY_META mode,
which allows the userspace program to only access the metadata of the packet
(not very useful for content based filtering).

It would be nice if there were an IPQ_MMAP_PACKET mode too, and, as you
mentioned, a mode where several packets can be batched at once to also save
thread context switches.

--
BvL
|
From: Roland T. (SourceForge) <raz...@co...> - 2005-05-26 12:12:30
|
Bert van Leeuwen said:
> The only alternative to the IPQ_COPY_PACKET mode is the IPQ_COPY_META
> mode, which allows the userspace program to only access the metadata
> of the packet (not very useful for content based filtering).
>
> It would be nice if there were an IPQ_MMAP_PACKET mode too, and, as you
> mentioned, a mode where several packets can be batched at once to also
> save thread context switches.

Urk! You are quite right, it is copying to a user-allocated buffer and, I
agree, an MMAP option would offer a performance improvement.

- Raz
|