From: Victor J. <vi...@nk...> - 2006-01-19 13:21:39

ni...@el... wrote:
> Hi, list
>
> Following is my machine configuration:
> Intel(R) Celeron(R) CPU 2.00GHz with 128KB cache and an Intel 10/100Mb NIC
> Memory: 1GB
>
> After patching snort 2.3.3 with the snort_inline patch, I have two
> different configurations for stream4:
>
> 1) preprocessor stream4: disable_evasion_alerts
>
>    In this case my CPU stays below 10% for a given set of traffic.
>
> 2) preprocessor stream4: disable_evasion_alerts, stream4inline,
>    memcap 134217728, timeout 3600, midstream_drop_alerts
>
>    In this case my CPU hits 50% at intervals (I don't know whether the
>    interval is random or fixed) with the same set of traffic.
>
> Is it due to the inline modifications in stream4?

Yes, that is possible, since stream4inline does a lot more work than normal stream4 (even in inline mode). This is because it constantly scans a reassembled buffer, which is more costly. However, you don't need to enable the stream4inline option to use stream4 in inline mode. I do think, though, that with the stream4inline option enabled there is less chance that you miss an attack.

Regards,
Victor
From: <ni...@el...> - 2006-01-19 12:58:22

Hi, list

Following is my machine configuration:
Intel(R) Celeron(R) CPU 2.00GHz with 128KB cache and an Intel 10/100Mb NIC
Memory: 1GB

After patching snort 2.3.3 with the snort_inline patch, I have two different configurations for stream4:

1) preprocessor stream4: disable_evasion_alerts

   In this case my CPU stays below 10% for a given set of traffic.

2) preprocessor stream4: disable_evasion_alerts, stream4inline, memcap 134217728, timeout 3600, midstream_drop_alerts

   In this case my CPU hits 50% at intervals (I don't know whether the interval is random or fixed) with the same set of traffic.

Is it due to the inline modifications in stream4?

Regards,
Nishit Shah.
From: christopher <ch...@sy...> - 2006-01-17 03:43:15

For me, I am using iptables; forward all the FORWARD traffic to the queue:

iptables -A FORWARD -j QUEUE

cheer!

On Mon, 2006-01-16 at 09:33 -0300, Rigo wrote:
> Ok. Actually I have bridging configured (bridge-utils under Linux). My
> question is: what's the difference? In both modes do I have to use
> iptables? How can I send the traffic to the queue using ebtables or
> bridge tools? My box is working using bridging, but I'm using iptables
> to send the traffic to the queue...
>
> On 1/15/06, christopher <ch...@sy...> wrote:
> > Bridge mode you have to configure under Linux (if you are using
> > Linux); there are lots of articles on the internet that teach you
> > how to do it (you can google it).
> >
> > A bridge just acts like an interface under Linux, therefore the
> > snort_inline configuration is almost the same.
> >
> > cheer!
> >
> > On Fri, 2006-01-13 at 09:29 -0300, Rigo wrote:
> > > Hi,
> > >
> > > I've read in this list that snort_inline can run in nat or bridge
> > > mode. My question is: where can I find docs on how to configure
> > > these modes?
> > >
> > > Regards
> > > Rigo
> > >
> > > -------------------------------------------------------
> > > This SF.net email is sponsored by: Splunk Inc. Do you grep through
> > > log files for problems? Stop! Download the new AJAX search engine
> > > that makes searching your log files as easy as surfing the web.
> > > DOWNLOAD SPLUNK!
> > > http://ads.osdn.com/?ad_id=7637&alloc_id=16865&op=click
> > > _______________________________________________
> > > Snort-inline-users mailing list
> > > Sno...@li...
> > > https://lists.sourceforge.net/lists/listinfo/snort-inline-users
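For the bridge half of Rigo's question above: on a kernel with bridge-netfilter support, bridged frames still traverse the iptables FORWARD chain, so the same QUEUE rule works and ebtables is not required. A minimal sketch, assuming bridge-utils and the ip_queue module; the interface names are hypothetical:

```shell
# Build a two-port bridge out of the inside and outside NICs.
brctl addbr br0
brctl addif br0 eth0
brctl addif br0 eth1
ifconfig eth0 0.0.0.0 up
ifconfig eth1 0.0.0.0 up
ifconfig br0 up

# Load the userspace packet queue and divert bridged traffic to it;
# snort_inline then reads and verdicts the packets from ip_queue.
modprobe ip_queue
iptables -A FORWARD -j QUEUE
```

These are configuration commands (they need root and the right kernel modules), so treat them as a template rather than a script to run as-is.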
From: Michael W C. <co...@ca...> - 2006-01-16 16:41:22

On Fri, 13 Jan 2006 12:31:30 +0100, you wrote:
> I am not totally sure, but I think only NEW traffic is passed to the
> QUEUE. As soon as it is ESTABLISHED, it will be ACCEPTed by the above
> '-A net2fw -m state --state RELATED,ESTABLISHED -j ACCEPT' rule. But
> like I said, I'm not completely sure, so you'd better check the
> shorewall support channels for that. If I am right, snort_inline will
> hardly see any traffic, so then it is not so strange that it doesn't
> cause alerts...
>
> Another way to check this is to enable the enforce_state option in
> stream4. If that blocks all your traffic, you can be pretty sure
> snort_inline sees only a part of the traffic...
>
> Hope this helps,
> Victor

Hi Victor,

As you'll probably have seen elsewhere, I did indeed have snort_inline configured to check only the first packet in a transaction, but that was my goof, not shorewall's. I never got around to upgrading my rules file when version 3.0 of shorewall shipped. Fixed, and the how-to has been corrected.

I do still have a question about the reporting capabilities of snort_inline, though: I'm unsure whether snort_inline contains the same logging and output capabilities as snort.

Mike-
--
If you're not confused, you're not trying hard enough.
--
Please note - Due to the intense volume of spam, we have installed site-wide spam filters at catherders.com. If email from you bounces, try non-HTML, non-encoded, non-attachments.
From: Michael W C. <co...@ca...> - 2006-01-16 16:24:29

Corrected an error in setting the shoreline rules file to pass packets through snort-inline. If you configured using the previous instructions, please review this document:

"Installing snort-Inline in series with the Shoreline firewall on SuSE 9.3" is available at http://www.catherders.com/tikiwiki-1.9.1/tiki-read_article.php?articleId=47
From: christopher <ch...@sy...> - 2006-01-16 02:29:11

Bridge mode you have to configure under Linux (if you are using Linux); there are lots of articles on the internet that teach you how to do it (you can google it).

A bridge just acts like an interface under Linux, therefore the snort_inline configuration is almost the same.

cheer!

On Fri, 2006-01-13 at 09:29 -0300, Rigo wrote:
> Hi,
>
> I've read in this list that snort_inline can run in nat or bridge
> mode. My question is: where can I find docs on how to configure these
> modes?
>
> Regards
> Rigo
From: Rigo <ri...@gm...> - 2006-01-13 12:29:12

Hi,

I've read in this list that snort_inline can run in nat or bridge mode. My question is: where can I find docs on how to configure these modes?

Regards
Rigo
From: Will M. <wil...@gm...> - 2006-01-13 06:22:28

Eeeeeee, anybody out there know anything about Shorewall?

<begin plug>
Did you know that Victor Julien, one of the snort-inline code junkies, has his own ncurses-based firewall management program that you might want to check into? I can't pronounce it, so I lovingly refer to it as "the other white meat"... since it takes time away from him coding "the pig". It must be late, I'm making bad jokes. Below is the link:

http://vuurmuur.sourceforge.net
<end plug>

Regards,
Will

On 1/12/06, Michael W Cocke <co...@ca...> wrote:
> On Thu, 12 Jan 2006 18:30:19 -0600, you wrote:
>
> > Hmmmmm, are you running bridge or nat mode? If you start with -v do
> > you see traffic passing? If you are in NAT mode, are you allowing
> > stream4 to see both sides of the conversation, i.e. queueing in both
> > INPUT and OUTPUT?
> >
> > Regards,
> >
> > Will
>
> This is going to sound stupid, but there's a reason for it. I'm in
> NAT mode, but I'm not entirely sure how to check for bidirectional
> queuing. See, I'm not using native iptables, I'm using shorewall
> 3.04. As for the -v startup, I do get some stuff on the screen that
> looks like packet details (src & dest IP address, UDP, etc.), but to
> be brutally honest, I can't make heads or tails of it. There doesn't
> seem to be an indication of what rule it hit (if any) or whether I'm
> just seeing traffic.
>
> Mike-
From: Michael W C. <co...@ca...> - 2006-01-13 01:33:39

On Thu, 12 Jan 2006 18:30:19 -0600, you wrote:
> Hmmmmm, are you running bridge or nat mode? If you start with -v do
> you see traffic passing? If you are in NAT mode, are you allowing
> stream4 to see both sides of the conversation, i.e. queueing in both
> INPUT and OUTPUT?
>
> Regards,
>
> Will

This is going to sound stupid, but there's a reason for it. I'm in NAT mode, but I'm not entirely sure how to check for bidirectional queuing. See, I'm not using native iptables, I'm using shorewall 3.04. As for the -v startup, I do get some stuff on the screen that looks like packet details (src & dest IP address, UDP, etc.), but to be brutally honest, I can't make heads or tails of it. There doesn't seem to be an indication of what rule it hit (if any) or whether I'm just seeing traffic.

Mike-
From: Will M. <wil...@gm...> - 2006-01-13 00:30:26

Hmmmmm, are you running bridge or nat mode? If you start with -v, do you see traffic passing? If you are in NAT mode, are you allowing stream4 to see both sides of the conversation, i.e. queueing in both INPUT and OUTPUT?

Regards,

Will

On 1/12/06, Michael W Cocke <co...@ca...> wrote:
> I've managed to make snort_inline 2.4.3-RC3 work - I think. The ascii
> log files are empty (yes, I started with -K ascii), and I don't know
> if/how I can log to BASE or sguil (I think that's how it's spelled).
> Ideally I'd like to do both, but I'll settle for either.
>
> I'm pretty sure something should be in the logs at this point, because
> when I accidentally made snort work I had a dozen incidents in an
> hour. I'm fairly certain that it's working, because I've configured my
> firewall to dump to ip_queue and I can still connect, but I'd be
> happier with confirmation. 8-)>
>
> Thanks for any assistance.
>
> Mike-
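Will's INPUT/OUTPUT point can be sketched as plain iptables rules. Shorewall generates its own chains, so this only shows the shape the resulting ruleset needs to take; it assumes the ip_queue QUEUE target of 2.4-era kernels:

```shell
# NAT mode: queue both directions so stream4 can reassemble the whole
# conversation instead of seeing only one side of it.
iptables -A INPUT   -j QUEUE   # traffic terminating at the gateway
iptables -A OUTPUT  -j QUEUE   # traffic originating from the gateway
iptables -A FORWARD -j QUEUE   # traffic NATed through the gateway
```

If only one of these chains feeds the queue, stream4 sees half-sessions, which is exactly the symptom discussed in this thread.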
From: Michael W C. <co...@ca...> - 2006-01-12 19:11:58

I've managed to make snort_inline 2.4.3-RC3 work - I think. The ascii log files are empty (yes, I started with -K ascii), and I don't know if/how I can log to BASE or sguil (I think that's how it's spelled). Ideally I'd like to do both, but I'll settle for either.

I'm pretty sure something should be in the logs at this point, because when I accidentally made snort work I had a dozen incidents in an hour. I'm fairly certain that it's working, because I've configured my firewall to dump to ip_queue and I can still connect, but I'd be happier with confirmation. 8-)>

Thanks for any assistance.

Mike-
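For context, a start-up along these lines is what the message describes; the flags match snort's standard options of the era (-Q for inline/ip_queue mode, -c, -l, -K), but the paths are hypothetical and builds differ, so verify against your own snort_inline's usage output:

```shell
# Hypothetical invocation: read packets from ip_queue (-Q), log alerts
# and packets in ascii format (-K ascii) under the given directory.
snort_inline -Q -v \
    -c /etc/snort_inline/snort_inline.conf \
    -l /var/log/snort_inline \
    -K ascii
```

Empty ascii logs with this kind of invocation usually mean either no rule fired or no traffic reached the queue, which is what the rest of the thread goes on to diagnose.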
From: <ni...@el...> - 2006-01-11 06:07:19

Well, I am using all the rules, Dave. Anyway, last night I looked at the traffic in tcpdump, as per Gulfie, and found that something was indeed wrong: I was seeing tiny packets. After that I changed the SMTP server, ran tcpdump again, and now it works for me; the CPU is under control as well. It seems to me like some problem related to path MTU.

I have also tested with snort 2.3.3 and 2.4.3, and there is a definite performance gain with snort 2.4.3.

Also, from the last few mails it seems there are few options left to improve performance by changing or modifying any of the software components. What do you say? :)

Regards,
Nishit Shah.

> On Mon, 9 Jan 2006, sno...@li... wrote:
>
> + I am running snort_inline process on Pentium 4 2.4 GHz machine with
> + kernel 2.4 with 100 Mbps card.
>
> Is the P4 hyperthread capable? If not, get one that is.
>
> + Now, the problem I am facing is, in case of heavy traffic of
> + interactive protocols like SMTP, POP3, IMAP etc (i.e. 5-8 Mbps), the
> + number of context switches between userspace and kernel space
> + increases due to the large number of netlink send and recvfrom
> + calls. Each send and recvfrom call contains very few bytes due to
> + the nature of the interactive protocol, so the number of packets
> + snort_inline has to process is very high while the number of bytes
> + is very low; thus even at a load of 5-8 Mbps my CPU hits 50-70%.
>
> ip_queue, as Will replied, is pretty hefty CPU-wise. netfilter_queue,
> the 2.6.14+ replacement, isn't any lighter. Also, as Will indicates,
> there are no "easy" answers to these problems.
>
> You don't indicate what rules you're using; we've found that some of
> the PCRE rules are deadly time-wasters, especially for certain
> protocols (SMTP being one). Also, as Gulfie replied, there are a
> number of things about the hardware that are tunable (better NICs,
> IRQ tuning, where cards are plugged in, binding processes to
> different processors, etc.).
>
> + On the other hand, in case of bulk transfer protocols like
> + ftp-data, context switches due to netlink send and recvfrom are not
> + as frequent as with interactive protocols, because each call
> + contains a large number of bytes; thus even at a load of 70-80 Mbps
> + my snort_inline process hits 30 to 35% CPU.
> + So basically I am suffering the problem of high CPU in case of
> + interactive protocols.
>
> Welcome to the club 8-/.
From: Dave R. <da...@re...> - 2006-01-10 16:34:13

On Mon, 9 Jan 2006, sno...@li... wrote:

+ I am running snort_inline process on Pentium 4 2.4 GHz machine with
+ kernel 2.4 with 100 Mbps card.

Is the P4 hyperthread capable? If not, get one that is.

+ Now, the problem I am facing is, in case of heavy traffic of
+ interactive protocols like SMTP, POP3, IMAP etc (i.e. 5-8 Mbps), the
+ number of context switches between userspace and kernel space
+ increases due to the large number of netlink send and recvfrom calls.
+ Each send and recvfrom call contains very few bytes due to the nature
+ of the interactive protocol, so the number of packets snort_inline
+ has to process is very high while the number of bytes is very low;
+ thus even at a load of 5-8 Mbps my CPU hits 50-70%.

ip_queue, as Will replied, is pretty hefty CPU-wise. netfilter_queue, the 2.6.14+ replacement, isn't any lighter. Also, as Will indicates, there are no "easy" answers to these problems.

You don't indicate what rules you're using; we've found that some of the PCRE rules are deadly time-wasters, especially for certain protocols (SMTP being one). Also, as Gulfie replied, there are a number of things about the hardware that are tunable (better NICs, IRQ tuning, where cards are plugged in, binding processes to different processors, etc.).

+ On the other hand, in case of bulk transfer protocols like ftp-data,
+ context switches due to netlink send and recvfrom are not as frequent
+ as with interactive protocols, because each call contains a large
+ number of bytes; thus even at a load of 70-80 Mbps my snort_inline
+ process hits 30 to 35% CPU.
+ So basically I am suffering the problem of high CPU in case of
+ interactive protocols.

Welcome to the club 8-/.
From: Gulfie <gu...@gr...> - 2006-01-10 09:30:21

On Tue, Jan 10, 2006 at 12:22:29PM +0530, ni...@el... wrote:
> Well, traffic profile is very simple. I am sending a 20-25 MB file
> through SMTP.

Try multiple files transferred at the same time. You may be running up against the bandwidth-delay product of the single TCP session. Or is your specific problem that you have to make a single TCP session go faster?

> Actually right now I don't have any useful statistics for different
> traffic profiles and different configurations of the snort_inline.conf
> file. But as per my tests there is not a big difference in latency
> between snort_inline.conf.full and snort_inline.conf.null, or even
> with a simple userspace utility that reads packets from IP_QUEUE and
> just issues an NF_ACCEPT verdict. The difference is in CPU usage.

The difference in latency will be minimal until the machine starts running out of CPU and queueing packets. A good TCP stack will have several packets outstanding at any given time. Latency is only one factor in determining TCP speed.

> The thing is, in case of ftp traffic each call of netlink recv
> contains 1500 to 1600 bytes of data, versus 150 to 170 bytes in case
> of SMTP traffic (results from strace -p on the snort_inline process
> id). Thus you can see that 6-7 Mbps of SMTP traffic is able to produce
> the same effect as 70 to 80 Mbps of ftp traffic, or even more (more
> ACKs etc.).

strace -p will slow down the device. Try gathering perf information from the perfmonitor preprocessor.

How is the SMTP traffic getting generated? An average packet size of 150-170 bytes for a 25 megabyte file seems rather abnormal. Maybe there is something like a misbehaving virus/spam filter in the way. Is the traffic fragmented? A tcpdump would tell you. There may be a way to trick the kernel into defragging the packets for you.

> Thus in case of SMTP traffic, the number of packets/sec that has to be
> processed by snort_inline is definitely much higher than for ftp
> traffic, and if snort itself takes more CPU cycles per packet, more
> packets will definitely increase CPU usage.

This is correct. Snort_inline performance tends to be more related to packets per second than to the size of the packets. Most software IDS/IPSes perform this way, i.e.:

http://grotto-group.com/~gulfie/tmp/snort_inline/ganamead/ir_wm_snort_inline.all_preproc/all.snmp.outofdut.lgraph.png

> Regards,
> Nishit Shah.
>
> > It is not that bad, okay, maybe it is. There are a few
> > optimizations that can be done first.
> >
> > 1) Ditch the 100BT cards. They kill you. [see data below]
> >    32-bit Intel EPro 1000s work well; 64-bit work better.
> >    They sell dual and quad cards; they work.
> >    If possible, move the NIC as physically close to the CPU /
> >    bridge as you can. It's silly, but it works for a 10% speed gain
> >    in some scenarios.
> >
> > 2) Get a real physical second processor. If you can get one CPU
> >    doing the kernel packet shoveling and one doing the snorting,
> >    you win a small level of parallelization.
> >
> > 3) You can use netfilter rules to discard some traffic going to
> >    snort.
> >
> > 4) Go parallel above snort. Place several inline devices inline
> >    with each other (or in parallel with the right routers/switches)
> >    and have each only work on a fraction of the traffic. You can
> >    split traffic by protocol, IP ranges, whatever you can think of.
> >
> > Previous results:
> >
> > Take it with a grain of salt; these are very unfinished results
> > from tests I was doing 6 months ago. I'm sorry for the english, I'm
> > a native speaker, there is no excuse.
> >
> > http://www.grotto-group.com/~gulfie/tmp/snort_inline/finer_detail.html
> >
> > There were two primary test boxes, the realtek_dual and the
> > intel_dual. Each is a 2 GHz Celeron with 128 KB caches. One has a
> > pair of r8169 cards, the other has a pair of Intel e1000 Pro cards.
> > The traffic was http across a lan, some of it with hosts having
> > smaller MTUs.
> >
> > It's not SMTP/POP/IMAP, but it's a start.
> >
> > I'm interested to know a few things about your traffic profile:
> >
> > 0) Take the IP_QUEUE out: how fast can the machine switch/forward
> >    packets without snort even running?
> > 1) Use a null snort.conf and find out how fast it can go without
> >    processing any of the packets.
> >    How fast is that? (pps, mbits, sys/sec, alerts/sec, etc.)
> > 2) 5-8 Mbits, but how many kpkts/sec?
> > 3) What is the average bytes/packet your sensor is seeing?
> > 4) How many syns/second are you seeing on the SMTP / POP / IMAP
> >    workload?
> >
> > On Mon, Jan 09, 2006 at 08:15:29AM -0600, Will Metcalf wrote:
> > > Ummmm, unless you want to rewrite ip_queue you are probably out
> > > of luck. IP_QUEUE and NFQUEUE were not really built with speed in
> > > mind. We had ideas in the past that we have not implemented due
> > > to lack of time, things like an mmapable QUEUE target, but like I
> > > said we have not had the time, and we do not pretend to be kernel
> > > hackers. If you come up with anything let us know ;-)
> > >
> > > Regards,
> > >
> > > Will
> > >
> > > On 1/9/06, ni...@el... <ni...@el...> wrote:
> > > > Hi,
> > > >
> > > > I am running snort_inline process on Pentium 4 2.4 GHz machine
> > > > with kernel 2.4 with 100 Mbps card.
> > > > Now, the problem I am facing is, in case of heavy traffic of
> > > > interactive protocols like SMTP, POP3, IMAP etc (i.e. 5-8
> > > > Mbps), the number of context switches between userspace and
> > > > kernel space increases due to the large number of netlink send
> > > > and recvfrom calls. Each send and recvfrom call contains very
> > > > few bytes due to the nature of the interactive protocol, so the
> > > > number of packets snort_inline has to process is very high
> > > > while the number of bytes is very low; thus even at a load of
> > > > 5-8 Mbps my CPU hits 50-70%.
> > > > On the other hand, in case of bulk transfer protocols like
> > > > ftp-data, context switches due to netlink send and recvfrom are
> > > > not as frequent, because each call contains a large number of
> > > > bytes; thus even at a load of 70-80 Mbps my snort_inline
> > > > process hits 30 to 35% CPU.
> > > > So basically I am suffering the problem of high CPU in case of
> > > > interactive protocols.
> > > >
> > > > Waiting for your help...
> > > >
> > > > Regards,
> > > > Nishit Shah.
From: <ni...@el...> - 2006-01-10 07:40:42

My traffic pattern is simple: I am sending a 20-30 MB file through SMTP, and my CPU hits 60 to 80% :(

I have tested on 100 Mbps cards, and snort_inline.conf.null and snort_inline.conf.full did not vary much in performance as far as latency is concerned (maybe I am wrong in that, but I don't feel any vast difference in latency between the two cases). There is a definite change in CPU usage, though: snort_inline.conf.full took 60 to 80% CPU and snort_inline.conf.null took 8 to 15% CPU.

After that I straced the snort_inline process in both the ftp and SMTP cases, and the difference I found is:

1) In case of ftp data, each call of netlink recvfrom contains 1500 to 1600 bytes of data (traffic at 60 to 80 Mbps).
2) In case of smtp data, each call of netlink recvfrom contains 150 to 170 bytes of data (traffic at 6-8 Mbps).

Thus what I get is that the number of packets/sec in case of SMTP is much higher than in case of ftp (there is also the overhead of more ACKs with interactive protocols), and the snort process itself is packets/sec sensitive, so my CPU usage is far higher.

So, what I feel is that even multiple packets in a single call of netlink would not help me that much. I have also thought of some kind of packet reassembly for interactive protocols in the ip_queue code, but that was not the approach either, because I would have to wait for ACKs from the server and all the other related stuff.

Regards,
Nishit Shah.

> It is not that bad, okay, maybe it is. There are a few
> optimizations that can be done first.
>
> 1) Ditch the 100BT cards. They kill you. [...]
> 2) Get a real physical second processor. [...]
> 3) You can use netfilter rules to discard some traffic going to
>    snort. [...]
> 4) Go parallel above snort. [...]
From: <ni...@el...> - 2006-01-10 07:38:25
|
Well, the traffic profile is very simple: I am sending a 20-25 MB file through SMTP. Actually, right now I don't have any useful statistics for different traffic profiles and different configurations of the snort_inline.conf file. But as per my tests there is not a big difference in latency between snort_inline.conf.full and snort_inline.conf.null, or even with a simple userspace utility that reads packets from IP_QUEUE and just issues an NF_ACCEPT verdict. The difference is in CPU usage.

The thing is, in the case of ftp traffic each netlink recv call contains 1500 to 1600 bytes of data, versus 150 to 170 bytes in the case of SMTP traffic (results from strace -p on the snort_inline process id), so you can see that 6-7 Mbps of SMTP traffic produces the same effect as 70 to 80 Mbps of ftp traffic, or even more (more ACKs, etc.). Thus with SMTP traffic the number of packets/sec that snort_inline has to process is definitely much higher than with ftp traffic, and since snort spends CPU cycles per packet, more packets definitely mean more CPU usage.

Regards,
Nishit Shah.

>      It is not that bad, okay maybe it is. There are a few
> optimizations that can be done first.
>
> 1) Ditch the 100BT cards. They kill you. [see data below]
>    32-bit Intel EPro 1000s work well. 64-bit work better.
>    They sell dual and quad cards; they work.
>    If possible, move the NIC as physically close to the CPU /
>    bridge as you can.
>    It's silly, but it works for a 10% speed gain in
>    some scenarios.
>
> 2) Get a real physical second processor. If you can get one CPU
>    doing the kernel packet shoveling and one doing the snorting, you
>    win a small level of parallelization.
>
> 3) You can use netfilter rules to discard some traffic going to
>    snort.
>
> 4) Go parallel above snort. Place several inline devices inline
>    with each other (or in parallel with the right routers/switches),
>    and have each only work on a fraction of the traffic. You can
>    split traffic by protocol, IP ranges, whatever you can think of.
>
>
> Previous results:
>
> Take it with a grain of salt; these are very unfinished results from
> tests I was doing 6 months ago. I'm sorry for the
> english, I'm a native speaker, there is no excuse.
>
>
> http://www.grotto-group.com/~gulfie/tmp/snort_inline/finer_detail.html
>
> There were two primary test boxes, the realtek_dual and the
> intel_dual. Each is a 2 GHz Celeron with 128 KB
> caches. One has a pair of r8169 cards, the other has a pair of Intel e1000
> Pro cards. The traffic was http across a
> lan, some of it with hosts having smaller MTUs.
>
> It's not SMTP/POP/IMAP, but it's a start.
>
> I'm interested to know a few things about your traffic profile.
>
> 0) Take the IP_QUEUE out: how fast can the machine switch/forward
>    packets without snort even running?
> 1) Use a null snort.conf and find out how fast it can go without
>    processing any of the packets.
>    How fast is that? (pps, mbits, sys/sec, alerts/sec, etc.)
> 2) 5-8 Mbits, but how many kpkts/sec?
> 3) What is the average bytes/packet your sensor is seeing?
> 4) How many syns/second are you seeing on the SMTP/POP/IMAP
>    workload?
>
>
> On Mon, Jan 09, 2006 at 08:15:29AM -0600, Will Metcalf wrote:
>> Ummmm unless you want to rewrite ip_queue you are probably out of
>> luck. IP_QUEUE and NFQUEUE were not really built with speed in mind.
>> We had ideas in the past that we have not implemented due to lack of
>> time. Things like an mmapable QUEUE target, but like I said we have
>> not had the time, and we do not pretend to be kernel hackers. If you
>> come up with anything let us know ;-)
>>
>> Regards,
>>
>> Will
>> On 1/9/06, ni...@el... <ni...@el...> wrote:
>> > Hi,
>> >
>> > I am running the snort_inline process on a Pentium 4 2.4 GHz machine with
>> > kernel 2.4 and a 100 Mbps card.
>> > Now, the problem I am facing is that in the case of heavy traffic from
>> > interactive protocols like SMTP, POP3, IMAP etc. (i.e. 5-8 Mbps), the number
>> > of context switches between userspace and kernel space increases due
>> > to the large number of netlink send and recvfrom calls. Each send and
>> > recvfrom call contains very few bytes due to the nature of the
>> > interactive protocol, and thus the number of packets that snort_inline has to
>> > process is very high but the number of bytes is very low; even at a load
>> > of 5-8 Mbps my CPU hits 50-70%.
>> > On the other hand, in the case of bulk transfer protocols like ftp-data,
>> > context switches due to netlink send and recvfrom are not as frequent as
>> > with interactive protocols, because each call contains a large
>> > number of bytes; even at a load of 70-80 Mbps my snort_inline
>> > process hits 30 to 35% CPU.
>> > So basically I am suffering from high CPU usage in the case of
>> > interactive protocols.
>> >
>> > Waiting for your help.
>> >
>> >
>> > Regards,
>> > Nishit Shah.
>> >
>> >
>> > -------------------------------------------------------
>> > This SF.net email is sponsored by: Splunk Inc. Do you grep through log files
>> > for problems? Stop! Download the new AJAX search engine that makes
>> > searching your log files as easy as surfing the web. DOWNLOAD SPLUNK!
>> > http://ads.osdn.com/?ad_id=7637&alloc_id=16865&op=click
>> > _______________________________________________
>> > Snort-inline-users mailing list
>> > Sno...@li...
>> > https://lists.sourceforge.net/lists/listinfo/snort-inline-users
|
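[A quick sanity check of the figures in this thread: converting the quoted bit rates and the per-recv byte counts observed with strace into packets per second shows that 6-7 Mbps of SMTP really does generate about as many netlink round trips as 70-80 Mbps of ftp. This is a back-of-the-envelope sketch, not analysis from the thread itself.]

```python
# Rough packets-per-second estimate from the figures quoted in this thread.
# Assumes one netlink recv per packet, which is how IP_QUEUE behaves.

def pkts_per_sec(mbps, bytes_per_packet):
    """Convert a bit rate (Mbps) and average packet size to packets/sec."""
    return (mbps * 1_000_000 / 8) / bytes_per_packet

smtp_pps = pkts_per_sec(7, 160)    # ~6-7 Mbps SMTP, 150-170 bytes per recv
ftp_pps = pkts_per_sec(75, 1550)   # ~70-80 Mbps ftp, 1500-1600 bytes per recv

print(round(smtp_pps))  # ~5469 packets/sec
print(round(ftp_pps))   # ~6048 packets/sec
```

So the two workloads push a comparable number of packets (and userspace/kernel context switches) per second, which is consistent with the similar CPU load reported.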
From: Gulfie <gu...@gr...> - 2006-01-09 23:58:30
|
     It is not that bad, okay maybe it is. There are a few
optimizations that can be done first.

1) Ditch the 100BT cards. They kill you. [see data below]
   32-bit Intel EPro 1000s work well. 64-bit work better.
   They sell dual and quad cards; they work.
   If possible, move the NIC as physically close to the CPU /
   bridge as you can.
   It's silly, but it works for a 10% speed gain in
   some scenarios.

2) Get a real physical second processor. If you can get one CPU
   doing the kernel packet shoveling and one doing the snorting, you
   win a small level of parallelization.

3) You can use netfilter rules to discard some traffic going to
   snort.

4) Go parallel above snort. Place several inline devices inline
   with each other (or in parallel with the right routers/switches),
   and have each only work on a fraction of the traffic. You can
   split traffic by protocol, IP ranges, whatever you can think of.


Previous results:

Take it with a grain of salt; these are very unfinished results from
tests I was doing 6 months ago. I'm sorry for the
english, I'm a native speaker, there is no excuse.


http://www.grotto-group.com/~gulfie/tmp/snort_inline/finer_detail.html

There were two primary test boxes, the realtek_dual and the
intel_dual. Each is a 2 GHz Celeron with 128 KB
caches. One has a pair of r8169 cards, the other has a pair of Intel e1000
Pro cards. The traffic was http across a
lan, some of it with hosts having smaller MTUs.

It's not SMTP/POP/IMAP, but it's a start.

I'm interested to know a few things about your traffic profile.

0) Take the IP_QUEUE out: how fast can the machine switch/forward
   packets without snort even running?
1) Use a null snort.conf and find out how fast it can go without
   processing any of the packets.
   How fast is that? (pps, mbits, sys/sec, alerts/sec, etc.)
2) 5-8 Mbits, but how many kpkts/sec?
3) What is the average bytes/packet your sensor is seeing?
4) How many syns/second are you seeing on the SMTP/POP/IMAP
   workload?


On Mon, Jan 09, 2006 at 08:15:29AM -0600, Will Metcalf wrote:
> Ummmm unless you want to rewrite ip_queue you are probably out of
> luck. IP_QUEUE and NFQUEUE were not really built with speed in mind.
> We had ideas in the past that we have not implemented due to lack of
> time. Things like an mmapable QUEUE target, but like I said we have
> not had the time, and we do not pretend to be kernel hackers. If you
> come up with anything let us know ;-)
>
> Regards,
>
> Will
> On 1/9/06, ni...@el... <ni...@el...> wrote:
> > Hi,
> >
> > I am running the snort_inline process on a Pentium 4 2.4 GHz machine with
> > kernel 2.4 and a 100 Mbps card.
> > Now, the problem I am facing is that in the case of heavy traffic from
> > interactive protocols like SMTP, POP3, IMAP etc. (i.e. 5-8 Mbps), the number
> > of context switches between userspace and kernel space increases due
> > to the large number of netlink send and recvfrom calls. Each send and
> > recvfrom call contains very few bytes due to the nature of the
> > interactive protocol, and thus the number of packets that snort_inline has to
> > process is very high but the number of bytes is very low; even at a load
> > of 5-8 Mbps my CPU hits 50-70%.
> > On the other hand, in the case of bulk transfer protocols like ftp-data,
> > context switches due to netlink send and recvfrom are not as frequent as
> > with interactive protocols, because each call contains a large
> > number of bytes; even at a load of 70-80 Mbps my snort_inline
> > process hits 30 to 35% CPU.
> > So basically I am suffering from high CPU usage in the case of
> > interactive protocols.
> >
> > Waiting for your help.
> >
> >
> > Regards,
> > Nishit Shah.
|
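[Suggestion 3 above — using netfilter rules to keep some traffic away from snort — can be sketched as below. This is an illustrative fragment, not a ruleset from the thread; the port choices are assumptions for the SMTP/ftp workload being discussed.]

```sh
# Illustrative sketch (ports are assumptions, not from the thread):
# only hand the traffic you want inspected to snort_inline via QUEUE,
# and forward bulk transfers without a userspace round trip.

# Bulk ftp-data: accept directly, never queued to userspace.
iptables -A FORWARD -p tcp --dport 20 -j ACCEPT

# Interactive mail protocols (SMTP/POP3/IMAP): queue for inspection.
iptables -A FORWARD -p tcp -m multiport --dports 25,110,143 -j QUEUE

# Everything else: forward uninspected.
iptables -A FORWARD -j ACCEPT
```

Every packet that avoids the QUEUE target also avoids one netlink send/recv pair, which is the per-packet cost dominating the CPU numbers in this thread.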
From: Will M. <wil...@gm...> - 2006-01-09 14:15:37
|
Ummmm unless you want to rewrite ip_queue you are probably out of luck. IP_QUEUE and NFQUEUE were not really built with speed in mind. We had ideas in the past that we have not implemented due to lack of time. Things like an mmapable QUEUE target, but like I said we have not had the time, and we do not pretend to be kernel hackers. If you come up with anything let us know ;-)

Regards,

Will

On 1/9/06, ni...@el... <ni...@el...> wrote:
> Hi,
>
> I am running the snort_inline process on a Pentium 4 2.4 GHz machine with
> kernel 2.4 and a 100 Mbps card.
> Now, the problem I am facing is that in the case of heavy traffic from
> interactive protocols like SMTP, POP3, IMAP etc. (i.e. 5-8 Mbps), the number
> of context switches between userspace and kernel space increases due
> to the large number of netlink send and recvfrom calls. Each send and
> recvfrom call contains very few bytes due to the nature of the
> interactive protocol, and thus the number of packets that snort_inline has to
> process is very high but the number of bytes is very low; even at a load
> of 5-8 Mbps my CPU hits 50-70%.
> On the other hand, in the case of bulk transfer protocols like ftp-data,
> context switches due to netlink send and recvfrom are not as frequent as
> with interactive protocols, because each call contains a large
> number of bytes; even at a load of 70-80 Mbps my snort_inline
> process hits 30 to 35% CPU.
> So basically I am suffering from high CPU usage in the case of
> interactive protocols.
>
> Waiting for your help.
>
>
> Regards,
> Nishit Shah.
|
From: <ni...@el...> - 2006-01-09 06:21:36
|
Hi,

I am running the snort_inline process on a Pentium 4 2.4 GHz machine with kernel 2.4 and a 100 Mbps card.

Now, the problem I am facing is that in the case of heavy traffic from interactive protocols like SMTP, POP3, IMAP etc. (i.e. 5-8 Mbps), the number of context switches between userspace and kernel space increases due to the large number of netlink send and recvfrom calls. Each send and recvfrom call contains very few bytes due to the nature of the interactive protocol, and thus the number of packets that snort_inline has to process is very high while the number of bytes is very low; even at a load of 5-8 Mbps my CPU hits 50-70%.

On the other hand, in the case of bulk transfer protocols like ftp-data, context switches due to netlink send and recvfrom are not as frequent as with interactive protocols, because each call contains a large number of bytes; even at a load of 70-80 Mbps my snort_inline process hits only 30 to 35% CPU.

So basically I am suffering from high CPU usage in the case of interactive protocols.

Waiting for your help.

Regards,
Nishit Shah.
|
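[The "read from IP_QUEUE and just issue NF_ACCEPT" userspace utility mentioned elsewhere in this thread can be sketched with libipq roughly as below. This is a sketch against the libipq(3) API, not code from the thread; it needs root and the ip_queue kernel module, and each read/verdict pair is exactly the netlink round trip whose per-packet cost is being discussed.]

```c
/* Minimal IP_QUEUE drain: read each queued packet, issue NF_ACCEPT.
 * Sketch based on the libipq(3) API; build with: gcc -o drain drain.c -lipq
 * Requires root and the ip_queue kernel module (2.4/2.6 era). */
#include <linux/netfilter.h>   /* NF_ACCEPT */
#include <libipq.h>

#define BUFSIZE 2048

int main(void)
{
    unsigned char buf[BUFSIZE];
    struct ipq_handle *h = ipq_create_handle(0, PF_INET);
    if (!h) {
        ipq_perror("ipq_create_handle");
        return 1;
    }
    /* Ask the kernel to copy packet metadata plus payload to userspace. */
    if (ipq_set_mode(h, IPQ_COPY_PACKET, BUFSIZE) < 0) {
        ipq_perror("ipq_set_mode");
        ipq_destroy_handle(h);
        return 1;
    }
    for (;;) {
        /* Each ipq_read()/ipq_set_verdict() pair is one netlink round
         * trip -- the per-packet cost discussed in this thread. */
        if (ipq_read(h, buf, BUFSIZE, 0) < 0)
            break;
        if (ipq_message_type(buf) == IPQM_PACKET) {
            ipq_packet_msg_t *m = ipq_get_packet(buf);
            ipq_set_verdict(h, m->packet_id, NF_ACCEPT, 0, NULL);
        }
    }
    ipq_destroy_handle(h);
    return 0;
}
```

Strace on such a loop shows one recvfrom and one send per packet, which is why small-packet SMTP traffic costs so much more CPU per megabit than ftp-data.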
From: Will M. <wil...@gm...> - 2006-01-03 22:01:05
|
Sounds like something is messed up with DNAT/SNAT; send me the output of iptables -L -t nat after a rule is added.

Regards,

Will

On 1/3/06, JT <oi...@si...> wrote:
> Here's another example from the bleeding-virus.rules file.
>
> When used with snort_inline with the baitandswitch preprocessor turned on,
> this rule will drop the packets that match the rule and will log the event
> without any problem.
>
> drop tcp $HOME_NET any -> 200.18.132.166 any (msg:"BLEEDING-EDGE VIRUS
> W97M.Nometz.A Sending Info Home"; flags: S,12; threshold:type limit, track
> by_src, count 1, seconds 60;
> reference:url,securityresponse.symantec.com/avcenter/venc/data/w97m.nometz.a.html;
> classtype:trojan-activity; sid:2002360; rev:1; )
>
> However, if I add the baitandswitch part to the rule, like so,
>
> drop tcp $HOME_NET any -> 200.18.132.166 any (msg:"BLEEDING-EDGE VIRUS
> W97M.Nometz.A Sending Info Home"; flags: S,12; threshold:type limit, track
> by_src, count 1, seconds 60;
> reference:url,securityresponse.symantec.com/avcenter/venc/data/w97m.nometz.a.html;
> classtype:trojan-activity; sid:2002360; rev:1;
> bait-and-switch:60,src,172.16.99.1; )
>
> baitandswitch will immediately start dropping all traffic - no matter if
> the traffic matches the rule or not. When this happens alerts are not
> generated.
>
> Other rules that seem to also cause this sort of activity are ones that
> contain flowbits:isnotset - such as:
>
> alert tcp $EXTERNAL_NET any -> $HOME_NET 135 (msg: "BLEEDING-EDGE Worm
> Rbot.Gen Infection Attempt"; flowbits:isnotset,tagged; content:"|4d 45 4f
> 57|"; nocase; offset: 122; depth: 4; content:"|cc cc cc cc|"; nocase; tag:
> host,5,packets,src; flowbits: set,tagged;
> reference:url,www.f-secure.com/v-descs/rbot.shtml; classtype: trojan-activity;
> sid: 2001554; rev:4; )
>
> This rule works fine with an "alert" or "drop" keyword, but if I use drop
> and the baitandswitch entry, the same "drop everything" activity happens.
>
> Anybody else use baitandswitch with these rules?
>
> JT wrote:
> > Are there any howtos for writing/adjusting signatures for snort_inline?
> > I can find plenty of documentation for writing rules for the stock
> > snort, just didn't know what changes have to be made to the sigs for the
> > inline version - if any. I've only found a few references to it in
> > README.INLINE.
|
From: JT <oi...@si...> - 2006-01-03 21:53:45
|
Here's another example from the bleeding-virus.rules file.

When used with snort_inline with the baitandswitch preprocessor turned on, this rule will drop the packets that match the rule and will log the event without any problem.

drop tcp $HOME_NET any -> 200.18.132.166 any (msg:"BLEEDING-EDGE VIRUS W97M.Nometz.A Sending Info Home"; flags: S,12; threshold:type limit, track by_src, count 1, seconds 60; reference:url,securityresponse.symantec.com/avcenter/venc/data/w97m.nometz.a.html; classtype:trojan-activity; sid:2002360; rev:1; )

However, if I add the baitandswitch part to the rule, like so,

drop tcp $HOME_NET any -> 200.18.132.166 any (msg:"BLEEDING-EDGE VIRUS W97M.Nometz.A Sending Info Home"; flags: S,12; threshold:type limit, track by_src, count 1, seconds 60; reference:url,securityresponse.symantec.com/avcenter/venc/data/w97m.nometz.a.html; classtype:trojan-activity; sid:2002360; rev:1; bait-and-switch:60,src,172.16.99.1; )

baitandswitch will immediately start dropping all traffic - no matter if the traffic matches the rule or not. When this happens alerts are not generated.

Other rules that seem to also cause this sort of activity are ones that contain flowbits:isnotset - such as:

alert tcp $EXTERNAL_NET any -> $HOME_NET 135 (msg: "BLEEDING-EDGE Worm Rbot.Gen Infection Attempt"; flowbits:isnotset,tagged; content:"|4d 45 4f 57|"; nocase; offset: 122; depth: 4; content:"|cc cc cc cc|"; nocase; tag: host,5,packets,src; flowbits: set,tagged; reference:url,www.f-secure.com/v-descs/rbot.shtml; classtype: trojan-activity; sid: 2001554; rev:4; )

This rule works fine with an "alert" or "drop" keyword, but if I use drop and the baitandswitch entry, the same "drop everything" activity happens.

Anybody else use baitandswitch with these rules?

JT wrote:
> Are there any howtos for writing/adjusting signatures for snort_inline?
> I can find plenty of documentation for writing rules for the stock
> snort, just didn't know what changes have to be made to the sigs for the
> inline version - if any. I've only found a few references to it in
> README.INLINE.
|
From: JT <oi...@si...> - 2006-01-02 22:43:35
|
An example:

drop tcp !$SMTP_SERVERS any -> !$HOME_NET 25 (msg: "BLEEDING-EDGE POLICY Outbound Multiple Non-SMTP Server Emails"; flags: S,12; threshold: type threshold, track by_src, count 10, seconds 120; classtype: misc-activity; sid: 2000328; rev:7; bait-and-switch:20,src,172.16.99.1; )

This rule will make baitandswitch drop all traffic - even traffic not on port 25. Nothing is ever logged in the alert files. Am I doing something stupid?? I have a few other examples. I'll post them when I have access to that box.

Thanks.

JT wrote:
> Are there any howtos for writing/adjusting signatures for snort_inline?
> I can find plenty of documentation for writing rules for the stock
> snort, just didn't know what changes have to be made to the sigs for the
> inline version - if any. I've only found a few references to it in
> README.INLINE.
|
From: Will M. <wil...@gm...> - 2005-12-09 17:25:05
|
This shouldn't affect your build. I see this all the time on my redhat/fedora boxes. You could probably just ignore it. Does it not build properly for you?

Regards,

Will

On 12/9/05, cross <cr...@sm...> wrote:
> When I execute 'autojunk.sh' the problem follows:
> --
> configure.in:169: warning: underquoted definition of SN_CHECK_DECL
> run info '(automake)Extending aclocal'
> or see http://sources.redhat.com/automake/automake.html#Extending-aclocal
> configure.in:202: warning: underquoted definition of SN_CHECK_DECLS
> configure.in:298: warning: underquoted definition of FAIL_MESSAGE
> --
> How to fix?
>
> Regards,
> cross
|
From: cross <cr...@sm...> - 2005-12-09 09:25:30
|
When I execute 'autojunk.sh' the problem follows:
--
configure.in:169: warning: underquoted definition of SN_CHECK_DECL
run info '(automake)Extending aclocal'
or see http://sources.redhat.com/automake/automake.html#Extending-aclocal
configure.in:202: warning: underquoted definition of SN_CHECK_DECLS
configure.in:298: warning: underquoted definition of FAIL_MESSAGE
--
How to fix?

Regards,
cross
|
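[For readers hitting the same warning: "underquoted definition" is harmless aclocal noise, and the usual fix is to quote the macro name in its AC_DEFUN call in the package's m4 files. The snippet below shows the generic pattern only; it is not the actual contents of snort's configure.in.]

```m4
dnl Underquoted: newer aclocal warns about this form.
AC_DEFUN(SN_CHECK_DECL, [macro body here])

dnl Quoted: wrapping the macro name in [ ] silences the warning.
AC_DEFUN([SN_CHECK_DECL], [macro body here])
```

Quoting the name keeps m4 from expanding it prematurely when the macro is later redefined or traced, which is why aclocal flags the unquoted form.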