simple-evcorr-users Mailing List for Simple Event Correlator
From: Risto V. <ris...@gm...> - 2020-03-25 16:52:32
hi Richard,

if CPU utilization has reached 100%, no rules or log file events would be skipped, but SEC would simply not be able to process events at their arrival rate and would fall behind. If your events include timestamps, you would probably see events with past timestamps in the dump file (among other data, the SEC dump file reports the last processed event for each input file).

As for debugging the reasons for high CPU utilization, I would recommend looking into the rule match statistics, and making sure that the rules with the most matches appear at the top of their rule files (if possible). However, current versions of SEC do not report the CPU time spent on matching each pattern against events.

Just out of curiosity -- how many rules do you currently have in your rule base, and are all these rules connected to each other? How many events are you currently receiving per second? Also, do all 50 input files contain the same event types (e.g., httpd events) that need to be processed by all rules? If this is not the case, and each input file contains different events which are processed by different rules, I would strongly recommend considering a hierarchical setup for your rule files. The principles of the hierarchical setup are described in the official SEC documentation, for example: http://simple-evcorr.github.io/man.html#lbBE. Also, there is a recent paper which provides a relevant example: https://ristov.github.io/publications/cogsima15-sec-web.pdf.

In addition, you could also consider running several instances of SEC for your input files. For example, if some input files contain messages from a specific application which are processed by a few specific rule files, a separate SEC process could be started for handling these messages with the given rule files. In that way, it might be possible to divide the rule files and input files into several independent groups, and having a separate SEC process for each group allows the load to be balanced across several CPUs.

hope this helps,
risto
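For illustration, a minimal sketch of the hierarchical setup described above, with a Jump rule routing events to a dedicated rule file (the file names, set name, and patterns here are hypothetical, not from the thread):

# main.sec -- route httpd events to their own rule file set
type=Jump
ptype=RegExp
pattern=httpd\[\d+\]:
desc=route httpd events
cfset=httpd-rules

# httpd.sec -- joins the set; sees only events submitted by the Jump rule
type=Options
procallin=none
joincfset=httpd-rules

type=Single
ptype=RegExp
pattern=httpd\[\d+\]: (.+)
desc=httpd event $1
action=logonly

With procallin=none, the rules in httpd.sec are never matched against the default input stream, so events from unrelated input files skip them entirely.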
From: Clayton D. <cd...@lo...> - 2020-03-25 16:48:33
Hi Richard,

I'd love it if you could share your rules. We have implemented SEC into our product (with Risto's permission of course!) and I want to help build a user community with shared rules.

FWIW, you could use LogZilla to help with this problem because it matches on single event patterns before calling SEC. For example: only send a Cisco Change Event to SEC instead of all incoming events for SEC to figure out if there's a match.

P.S. Please don't take this as me being opportunistic to sell our product; I am most interested in helping the users of SEC. We are a small company and I built LogZilla to solve the problems with crappy NetOps/logging tools -- which includes scalability, event enrichment from external sources, and the ability to automate repairs/tasks based on triggered events (and, of course, Event Correlation 😊).

Our GitHub repo for shared stuff is at https://github.com/logzilla/extras (but we don't have any SEC rules yet). You can see how we implement SEC here: http://demo.logzilla.net/help/event_correlation/intro_to_event_correlation

Clayton Dukes
CEO, LogZilla Corp
m: 919-600-3198
a: 4819 Emperor Boulevard, Raleigh, NC 27703
w: logzilla.net
e: cd...@lo...
From: Richard O. <ric...@gm...> - 2020-03-25 15:06:30
Hello friends,

I have SEC monitoring over 50 log files with various correlations, and it is consuming 100% of a single CPU (luckily on a 10-CPU machine, so the whole system is not affected, as SEC is a single-CPU application).

This could mean that SEC cannot keep up with processing all rules, and I am curious what the possible effects are: increasing delays (first in, processing, first out), skipping some lines from input files, or anything else?

And how can I troubleshoot this and find the bottlenecks? I can see the quantities of log messages per context or log file in sec.dump, which is one indicator. Are there also other indicators? Is it possible to somehow also see the processing times of patterns (per rule)?

Thank you in advance.

Richard
From: John P. R. <ro...@cs...> - 2020-03-20 19:56:16
Hi Martin:

In message <CAK...@ma...>, Martin Etcheverry writes:

>I have a question, i have this
>
>type=PairWithWindow
>ptype=RegExp
>pattern=\w{3}\W*\d{1,2}\W\d{2}\W\d{2}\W\d{2}\W\d*\W\d*\W\d*\W\d*\W\W*something:\Wstarted\W(.*)\W\W(.*)
>desc= $1
>action=pipe '%s' telegram -C '$1 something something $2 ';pipe '%s' mail -s '$1' som...@so...
>ptype2=RegExp
>pattern2=\w{3}\W*\d{1,2}\W\d{2}\W\d{2}\W\d{2}\W\d*\W\d*\W\d*\W\d*\W\W*something:\Whas\Wbeen\Wresolved\W(.*)\W\W*(.*)*
>desc2=event for $1 was cancelled
>action2=logonly
>window=420
>
>In the first pattern i have two groups $1 and $2, $2 is a unique
>event code. I want to use $2 in pattern2 in substitution of the
>red part.

You can use $2 in pattern2. However, for context2, desc2 etc. you need to use %2. See https://simple-evcorr.github.io/man.html#lbAP, specifically:

  In order to access match variables set by pattern, %-prefixed match
  variables have to be used in context2, desc2, and action2 fields. For
  example, if pattern and pattern2 are regular expressions, then %1 in
  the desc2 field refers to the value set by the first capture group in
  pattern (i.e., it has the same value as $1 in the desc field).

Also, the example shows:

type=PairWithWindow
ptype=RegExp
pattern=sshd\[\d+\]: Failed .+ for (\S+) from ([\d.]+) port \d+ ssh2
desc=User $1 has been unable to log in from $2 over SSH during 1 minute
action=pipe '%t: %s' /bin/mail root@localhost
ptype2=RegExp
pattern2=sshd\[\d+\]: Accepted .+ for $1 from $2 port \d+ ssh2
desc2=SSH login successful for %1 from %2 after initial failure
action2=logonly
window=60

$1, $2 are defined by 'pattern' up to the point 'pattern2' matches. Once pattern2 has matched, desc2, action2 and context2 need to use %2 to reference the matched groups from 'pattern'; $1, $2 etc. are redefined/overwritten by the matches from pattern2.

Reading the man page a few times is required to use sec efficiently.

Have fun.

--
-- rouilj
John Rouillard
===========================================================================
My employers don't acknowledge my existence much less my opinions.
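Applied to Martin's rule, pattern2 can thus embed the event code captured by pattern. A sketch with the long timestamp prefixes of both regular expressions omitted (the real patterns from the post would be kept as-is):

type=PairWithWindow
ptype=RegExp
pattern=something:\Wstarted\W(.*)\W\W(.*)
desc=$1 started with code $2
action=pipe '%s' mail -s '$1' som...@so...
ptype2=RegExp
pattern2=something:\Whas\Wbeen\Wresolved\W(.*)\W\W$2
desc2=event for %1 (code %2) was cancelled
action2=logonly
window=420

Note that $2 is substituted into pattern2 verbatim, so if the event code can contain regular expression metacharacters, it may need escaping.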
From: Martin E. <ma...@et...> - 2020-03-20 19:42:01
Hi SEC users.

I have a question, I have this:

type=PairWithWindow
ptype=RegExp
pattern=\w{3}\W*\d{1,2}\W\d{2}\W\d{2}\W\d{2}\W\d*\W\d*\W\d*\W\d*\W\W*something:\Wstarted\W(.*)\W\W(.*)
desc= $1
action=pipe '%s' telegram -C '$1 something something $2 ';pipe '%s' mail -s '$1' som...@so...
ptype2=RegExp
pattern2=\w{3}\W*\d{1,2}\W\d{2}\W\d{2}\W\d{2}\W\d*\W\d*\W\d*\W\d*\W\W*something:\Whas\Wbeen\Wresolved\W(.*)\W\W*(.*)*
desc2=event for $1 was cancelled
action2=logonly
window=420

In the first pattern I have two groups, $1 and $2; $2 is a unique event code. I want to use $2 in pattern2 in substitution of the red part. I don't know if it's possible or what the syntax would be; any help will be deeply appreciated.

Thanks

Martin
From: Michael R. <mic...@no...> - 2020-03-10 15:21:22
Risto,

Huh! These rules were written years ago (this had been working for some time, but then the server had a disk corruption and I had to set it up again), but you're right about the backticks. I never noticed that. 'useradd' and 'new group' is actually okay, because it's looking for the group creation in conjunction with a new user. There should be (but isn't) another rule to look for groupadd.

So ... my command line looks like this:

/usr/local/bin/sec --input=/var/log/sec --pid=/var/run/sec.pid --log=/var/log/sec.log --conf "/etc/sec/*.sec" --detach --debug=6

Since you found issues with my regexps, I decided to go through them all again. I don't think the syntax has changed since I first did these (in 2012), but maybe something in the interpreters did, because they totally failed. They were coming up with a hostname of "Mar" for everything (from the date at the beginning), and for a while they matched everything, which was fun. Ultimately I found that I had to escape the '|' characters, which I didn't do on RHEL 7. But the new server is RHEL 8, and changing every "||" to "\|\|" made all the difference. Now the rules are working as intended.

Thank you!

<MR>
-----------------------------------
Michael Raugh, BCH, CI, VCP5, LPIC, ITILv2011 Foundation
NOAA/NESDIS-HQ Sr. Systems Engineer
NIIS - Team ActioNet - NESDIS
Office: 301-713-0519
Contractor
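For illustration, the first of the rules with the '||' field separators matched explicitly and escaped, as Michael describes. This is an untested sketch against the '||'-delimited log format quoted in his original message below:

type=Single
ptype=regexp
pattern=(\d+\.\d+\.\d+\.\d+)\|\|[\d,]+\|\|useradd\|\| new user: name=(.+?),
desc=New account created: $2
action=shellcmd /usr/local/bin/newalert.pl -r $1 -c AccountAudit -m "%s"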
From: Risto V. <ris...@gm...> - 2020-03-09 20:29:45
hi Michael,

let me provide my quick feedback below:

> This is weird. I'm not a real experienced user, but I thought I was doing
> it right.
>
> All I'm doing is running a few simple rules to pull "interesting" events
> out and post them. I have one machine receiving all logs and writing them
> to a named pipe and a file in parallel (for debugging). These are really
> simple rules, like:
>
> ...
> type=Single
> ptype=regexp
> pattern=([\w\.,]+).+useradd.+new group: name=(.+?),
> desc=New group created: $2
> action=shellcmd /usr/local/bin/newalert.pl -r $1 -c AccountAudit -m "%s"

This regular expression assumes that a new group is created with the useradd program, but the groupadd program appears in the example events. My guess is that the program name probably needs changing in the regular expression.

> type=Single
> ptype=regexp
> pattern=([\w\.,]+).+userdel.+delete user `(\w+)'
> desc=Account $2 deleted
> action=shellcmd /usr/local/bin/newalert.pl -r $1 -c AccountAudit -m "%s"
>
> type=Single
> ptype=regexp
> pattern=([\w\.,]+).+userdel.+removed group `(\w+)'
> desc=Group $2 deleted
> action=shellcmd /usr/local/bin/newalert.pl -r $1 -c AccountAudit -m "%s"

In the above rules, the backtick symbol ` has been used in the regular expressions before the (\w+) capture group, but the apostrophe symbol ' appears in that position in the actual events. For making the regular expressions match the events, the backticks need to be replaced with apostrophes.

> When I do a test action on a workstation, the log entry shows up in the
> text file like this:
>
> Mar 9 14:56:23||seker||140.90.236.53||10,6||groupadd|| group added to /etc/group: name=tcpdump, GID=72
> Mar 9 14:56:23||seker||140.90.236.53||10,6||groupadd|| group added to /etc/gshadow: name=tcpdump
> Mar 9 14:56:23||seker||140.90.236.53||10,6||groupadd|| new group: name=tcpdump, GID=72
> Mar 9 14:56:23||seker||140.90.236.53||10,6||useradd|| new user: name=tcpdump, UID=72, GID=72, home=/, shell=/sbin/nologin
> Mar 9 14:56:23||seker||140.90.236.53||1,6||yum|| Installed: 14:tcpdump-4.9.2-4.el7_7.1.x86_64
> Mar 9 14:56:41||seker||140.90.236.53||1,6||yum|| Erased: 14:tcpdump-4.9.2-4.el7_7.1.x86_64
> Mar 9 14:56:57||seker||140.90.236.53||10,6||userdel|| delete user 'tcpdump'
> Mar 9 14:56:57||seker||140.90.236.53||10,6||userdel|| removed group 'tcpdump' owned by 'tcpdump'
> Mar 9 14:56:57||seker||140.90.236.53||10,6||userdel|| removed shadow group 'tcpdump' owned by 'tcpdump'
>
> But no action is taken by SEC -- the "newalert.pl" script is never run.
> Nothing shows up in the SEC log with debugging at maximum. Yet when I grep
> my text file with the regexp I'm using for, for example, adding a user it
> totally matches. And sometimes, at seemingly random, it does actually work
> -- but mostly not. Am I missing something obvious here?

From the above events you have provided, the fourth one should actually match the first rule in the rule base. Do you see that particular match happening? (It worked for me when I tested the ruleset.) If there is no match, there is something else which needs fixing in the setup. In order to make troubleshooting easier, can you also provide the command line flags that have been given to SEC? Also, when you send the USR1 signal to the SEC process, does it report that it has the input file open and all rules loaded?

kind regards,
risto
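For reference, the corrected form of the two userdel rules with apostrophes in place of the backticks, as suggested above (a sketch; per Michael's later message, the '||' separators may additionally need escaping on RHEL 8):

type=Single
ptype=regexp
pattern=([\w\.,]+).+userdel.+delete user '(\w+)'
desc=Account $2 deleted
action=shellcmd /usr/local/bin/newalert.pl -r $1 -c AccountAudit -m "%s"

type=Single
ptype=regexp
pattern=([\w\.,]+).+userdel.+removed group '(\w+)'
desc=Group $2 deleted
action=shellcmd /usr/local/bin/newalert.pl -r $1 -c AccountAudit -m "%s"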
From: Michael R. - N. A. <mic...@no...> - 2020-03-09 20:04:45
This is weird. I'm not a real experienced user, but I thought I was doing it right.

All I'm doing is running a few simple rules to pull "interesting" events out and post them. I have one machine receiving all logs and writing them to a named pipe and a file in parallel (for debugging). These are really simple rules, like:

type=Single
ptype=regexp
pattern=(\d+\.\d+\.\d+\.\d+).+useradd.+new user: name=(.+?),
desc=New account created: $2
action=shellcmd /usr/local/bin/newalert.pl -r $1 -c AccountAudit -m "%s"

type=Single
ptype=regexp
pattern=([\w\.,]+).+useradd.+new group: name=(.+?),
desc=New group created: $2
action=shellcmd /usr/local/bin/newalert.pl -r $1 -c AccountAudit -m "%s"

type=Single
ptype=regexp
pattern=([\w\.,]+).+userdel.+delete user `(\w+)'
desc=Account $2 deleted
action=shellcmd /usr/local/bin/newalert.pl -r $1 -c AccountAudit -m "%s"

type=Single
ptype=regexp
pattern=([\w\.,]+).+userdel.+removed group `(\w+)'
desc=Group $2 deleted
action=shellcmd /usr/local/bin/newalert.pl -r $1 -c AccountAudit -m "%s"

When I do a test action on a workstation, the log entry shows up in the text file like this:

Mar 9 14:56:23||seker||140.90.236.53||10,6||groupadd|| group added to /etc/group: name=tcpdump, GID=72
Mar 9 14:56:23||seker||140.90.236.53||10,6||groupadd|| group added to /etc/gshadow: name=tcpdump
Mar 9 14:56:23||seker||140.90.236.53||10,6||groupadd|| new group: name=tcpdump, GID=72
Mar 9 14:56:23||seker||140.90.236.53||10,6||useradd|| new user: name=tcpdump, UID=72, GID=72, home=/, shell=/sbin/nologin
Mar 9 14:56:23||seker||140.90.236.53||1,6||yum|| Installed: 14:tcpdump-4.9.2-4.el7_7.1.x86_64
Mar 9 14:56:41||seker||140.90.236.53||1,6||yum|| Erased: 14:tcpdump-4.9.2-4.el7_7.1.x86_64
Mar 9 14:56:57||seker||140.90.236.53||10,6||userdel|| delete user 'tcpdump'
Mar 9 14:56:57||seker||140.90.236.53||10,6||userdel|| removed group 'tcpdump' owned by 'tcpdump'
Mar 9 14:56:57||seker||140.90.236.53||10,6||userdel|| removed shadow group 'tcpdump' owned by 'tcpdump'

But no action is taken by SEC -- the "newalert.pl" script is never run. Nothing shows up in the SEC log with debugging at maximum. Yet when I grep my text file with the regexp I'm using for, for example, adding a user, it totally matches. And sometimes, seemingly at random, it does actually work -- but mostly not. Am I missing something obvious here?

<MR>
-----------------------------------
Michael Raugh, BCH, CI, VCP5, LPIC, ITILv2011 Foundation
NOAA/NESDIS-HQ Sr. Systems Engineer
NIIS - Team ActioNet - NESDIS
Office: 301-713-0519
Contractor
From: James L. <jl...@sl...> - 2020-03-04 21:17:34
Brilliant... thank you so much Risto!

James
From: Risto V. <ris...@gm...> - 2020-03-04 20:13:51
hi James,

you are observing this behavior since the --detach option involves changing the working directory to the root directory (that's a standard part of turning the process into a daemon). Actually, when you look into the debug messages from SEC, there is also a message about the directory change in the terminal. Since the input file and rule file names are relative, they will not be found in /, and to fix this issue, absolute paths have to be used with the --detach option.

hope this helps,
risto
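With absolute paths, the failing invocation from James's message below would become (assuming the two files live in /home/james):

sec --conf=/home/james/sec-test.conf --input=/home/james/tabbedfile --detach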
From: James L. <jl...@sl...> - 2020-03-04 19:57:40
Hey all,

So... I'm testing some stuff... when I have an entry with logonly, things work fine. I'm trying to run sec against a file and have the action of pipe work, but the pipe command is echoed to the screen. As I was testing I noticed something odd... in a dir I have sec-test.conf and tabbedfile. Command:

sec --conf=sec-test.conf --input=tabbedfile
SEC (Simple Event Correlator) 2.8.2
Reading configuration from sec-test.conf
4 rules loaded from sec-test.conf
No --bufsize command line option or --bufsize=0, setting --bufsize to 1
Opening input file tabbedfile
Interactive process, SIGINT can't be used for changing the logging level

however when I try --detach I get:

sec --conf=sec-test.conf --input=tabbedfile --detach
SEC (Simple Event Correlator) 2.8.2
Changing working directory to /
Reading configuration from sec-test.conf
Can't open configuration file sec-test.conf (No such file or directory)
No --bufsize command line option or --bufsize=0, setting --bufsize to 1
Opening input file tabbedfile
Input file tabbedfile does not exist!

No matter where in the command I put --detach, tabbedfile suddenly is not found. Odd behaviour. Thank you.

James
From: Risto V. <ris...@gm...> - 2020-02-20 20:24:01
hi Richard,

I think this scenario is best addressed by creating a relevant SEC context when the 'addinput' action is called. In fact, handling such scenarios is one of the purposes of contexts, and here is an example rule which illustrates this idea:

type=single
ptype=regexp
pattern=start monitoring (\S+)
context=!FILE_$1_MONITORED
desc=add $1 to list of inputs
action=addinput $1; create FILE_$1_MONITORED

Whenever a "start monitoring <filename>" event appears, the rule will match only if the context FILE_<filename>_MONITORED does not exist. If the rule matches, it executes the 'addinput' action for the given file and creates the context, in order to manifest the fact that 'addinput' has already been executed for the given file. Also, as you can see from the above rule, the presence of the context for a file will prevent the execution of 'addinput' again for this file.

In order to keep contexts in sync with the files that are monitored, the context for a file should be deleted when the 'dropinput' action is executed for it. Note that when a HUP signal is received, SEC will stop monitoring input files set up with the 'addinput' action. However, on receiving a HUP signal SEC will also drop all its contexts, so there is no need to take any extra steps in that case.

Hope this helps,
risto
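A companion rule for the 'dropinput' case, following the same naming convention (a sketch; the "stop monitoring" trigger event is hypothetical):

type=single
ptype=regexp
pattern=stop monitoring (\S+)
context=FILE_$1_MONITORED
desc=remove $1 from list of inputs
action=dropinput $1; delete FILE_$1_MONITORED

Here the context field requires FILE_<filename>_MONITORED to exist, so 'dropinput' is only executed for files that were previously added.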
From: Richard O. <ric...@gm...> - 2020-02-20 19:42:23
Hello Risto and friends,

Having a mechanism for dynamically opening (addinput) and closing (dropinput) files, I would like to be able to check whether a file is already opened before trying to open it again, to avoid doing so. That way I would like to eliminate this error message from the SEC log (present also at debug level 3):

Dynamic input file '/path/to/file.log' already exists in the list of inputs, can't add

This information is present in sec.dump, but maybe there exists a more instant and straightforward way to achieve it (without parsing intermediary files).

Thank you.

Richard
From: Dusan S. <dus...@ho...> - 2020-02-20 12:11:45
Hi Risto,

Thank you for your explanation. All works well for me now. I am using SEC v2.7.12, therefore I see that compilation error with lcall and the :> operator.

Thank you,
Dusan
From: Risto V. <ris...@gm...> - 2020-02-19 13:52:23
hi Dusan,

you can find my comments below:

> I try to add a new variable using "context" and the :> operator, also using the "lcall" action, but no luck.
> Any idea how to achieve this?
>
> This is what I have produced so far:
>
> Config file: dusko.sec
> ----------------------------
> rem=Rule 1
> type=Single
> ptype=RegExp
> pattern=^(?<EVENT>\S+) (?<SEVERITY>\S+)$
> varmap=MY_EVENT
> continue=TakeNext
> desc=Parsing Event
> action=write - R1: Parsing event: $+{EVENT} $+{SEVERITY}
>
> rem=Rule 2
> type=Single
> ptype=Cached
> pattern=MY_EVENT
> context=MY_EVENT :> ( sub { return $_[0]->{"NEW"} = "new_entry"; } )
> desc=Introducing new variable
> action=lcall %o MY_EVENT -> ( sub { $_[0]->{"NEW"} = "value" } ); \
>        write - R2: NEW = $+{NEW}

Rule #2 is not having the expected effect, since SEC rule matching involves several steps in the following order:

1) the pattern is matched against an incoming event
2) if the pattern matched the event, match variable values are collected for substitutions (e.g., substitutions in the 'context' field of the rule)
3) the context expression of the rule (provided with the 'context' field) is evaluated

If any new match variables are created during step 3, they are not used during substitutions within the current rule, since the set of match variables and their values was fixed during the previous step. However, the match variable would be visible in the following rules. In order to make the variable visible immediately in the current rule, you can enclose the context expression in square brackets [ ], which means that the context expression is evaluated *before* the pattern match (in other words, step 3 is taken before step 1 now). For example:

rem=Rule 2
type=Single
ptype=Cached
pattern=MY_EVENT
context=[ MY_EVENT :> ( sub { return $_[0]->{"NEW"} = "new_entry"; } ) ]
desc=Introducing new variable
action=write - R2: NEW = $+{NEW}

The use of the [ ] operator involves one caveat -- since match variables (e.g., $1 or $2) are produced by the pattern match, they will not have any values yet when the context expression is evaluated, and are therefore not substituted. However, this is not a problem for the above rule, since the context expression in this rule contains no references to match variables (such as $1 or $+{NEW}).

> Also if I want to replace "->" with ":>" for the lcall action:
>
> action=lcall %o MY_EVENT :> ( sub { $_[0]->{"NEW"} = "value" } ); \
>        write - R2: NEW = $+{NEW}
>
> I got a compilation error:
>
> Rule in ./dusko.sec at line 10: Eval '{"NEW"} = "value" } )' didn't return a code reference: syntax error at (eval 9) line 1, near "} ="
> Unmatched right curly bracket at (eval 9) line 1, at end of line
> Rule in ./dusko.sec at line 10: Invalid action list ' lcall %o MY_EVENT :> ( sub { $_[0]->{"NEW"} = "value" } ); write - R2: NEW = $+{NEW} '

This is because the :> operator for the 'lcall' action was introduced in sec-2.8.0, and is not supported by previous versions (such as sec-2.7.X). When I tried your rule with sec-2.8.2, everything worked fine, but testing it with sec-2.7.12 produced the same error message. Therefore I suspect that you have an earlier version than 2.8.0, and would recommend upgrading to 2.8.2 (the latest version). But with the above workaround, you would not need the 'lcall %o MY_EVENT :> ( sub { $_[0]->{"NEW"} = "value" } )' action anyway.

Hope this helps,
risto
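If the workaround behaves as described, repeating Dusan's test from the message below should now substitute the new variable within the same rule (an untested sketch of the expected session):

# start sec reading from standard input
sec -input=- -conf=./dusko.sec
# then type: Event1 Normal
# expected output:
R1: Parsing event: Event1 Normal
R2: NEW = new_entry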
From: Dusan S. <dus...@ho...> - 2020-02-19 12:48:22
Hi SEC users,

I want to create / introduce a new match variable in my rules. I searched the forum posts and found this:

"Once you have cached match results, they become visible across all rules and you can modify them. In order to do this, you have to use the :> context expression operator for getting a reference to the set of cached match variables. Once you have the reference, you can not only modify individual variables, but you can also delete existing match variables, and even introduce new variables (for example, $_[0]->{"newvariable"} = 1 would set the variable $+{newvariable} to 1)."

I tried to add a new variable using "context" and the :> operator, also using the "lcall" action, but no luck. Any idea how to achieve this?

This is what I have produced so far:

Config file: dusko.sec
----------------------------
rem=Rule 1
type=Single
ptype=RegExp
pattern=^(?<EVENT>\S+) (?<SEVERITY>\S+)$
varmap=MY_EVENT
continue=TakeNext
desc=Parsing Event
action=write - R1: Parsing event: $+{EVENT} $+{SEVERITY}

rem=Rule 2
type=Single
ptype=Cached
pattern=MY_EVENT
context=MY_EVENT :> ( sub { return $_[0]->{"NEW"} = "new_entry"; } )
desc=Introducing new variable
action=lcall %o MY_EVENT -> ( sub { $_[0]->{"NEW"} = "value" } ); \
       write - R2: NEW = $+{NEW}

Start sec
-----------
sec -input=- -conf=./dusko.sec -intevents -intcontexts --debug=6

Put in this input event:
---------------------------
Event1 Normal

Result:
----------------
R1: Parsing event: Event1 Normal
R2: NEW =

Also if I want to replace "->" with ":>" for the lcall action:

action=lcall %o MY_EVENT :> ( sub { $_[0]->{"NEW"} = "value" } ); \
       write - R2: NEW = $+{NEW}

I got a compilation error:

Rule in ./dusko.sec at line 10: Eval '{"NEW"} = "value" } )' didn't return a code reference: syntax error at (eval 9) line 1, near "} ="
Unmatched right curly bracket at (eval 9) line 1, at end of line
Rule in ./dusko.sec at line 10: Invalid action list ' lcall %o MY_EVENT :> ( sub { $_[0]->{"NEW"} = "value" } ); write - R2: NEW = $+{NEW} '

Thanks for any help,
Dusan
From: Fulvio S. <tra...@gm...> - 2020-02-12 18:45:10
Hello Stuart.

Since you are, from what I can see, trying to match portions of text between ASCII spaces with ".*?", why don't you avoid non-greedy quantifiers and opt for a clearer \S+ or [^ ]+, matching specifically on sequences of 1 or more non-whitespace characters (I guess there will always be more than one)? That's more specific and avoids the repeated backtracking of non-greedy quantifiers.

As to why the suppression does not take place, reading from the sec man page in the SingleWithSuppress rule description:

"When an event has matched the rule, SEC evaluates the operation description string given with the desc field. If the operation for the given string and rule does not exist, SEC will create it with the lifetime of T seconds, and the operation immediately executes an action list. If the operation exists, it consumes the matching event without any action."

Now, correct me if I am wrong, but your "desc" contains the exact time, from hour to second, of each event. Therefore it can be unique only once every day, and every event has a different desc string. For instance, for the log line you've posted, the desc string ought to be "Native VLAN mismatch reported on 05:01:47". The following event might, for instance, have a desc string such as "Native VLAN mismatch reported on 05:02:37", which is different from the previous desc line and therefore is not correlated to the first as a new instance of the same event. Did you perhaps wish to use the hostname ($2) in the desc line?

(I hope I didn't make any glaring mistake in my interpretation. I guess Risto might be able to correct me in that case.)

Have a good day,

Fulvio Scapin
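Putting both suggestions together, the first stanza from Stuart's message below might become (a sketch, untested against the full log stream):

type=SingleWithSuppress
ptype=regexp
pattern=T(\d\d:\d\d:\d\d)\S* (\S+) .*%CDP-4-NATIVE_VLAN_MISMATCH: (.*)
desc=Native VLAN mismatch reported on $2
action=write /home/tocops/.tocpipe ops $1 $2 $3
window=3600

With the hostname in desc, all mismatch events from the same device map to one operation, so the action fires at most once per 3600-second window per host.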
From: Stuart K. <st...@al...> - 2020-02-12 17:42:00
Given the following log line:

2020-02-12T05:01:47.606728-08:00 5n-2-esx-mgmt 32231: 032195: Feb 12 05:01:46.600 pst: %CDP-4-NATIVE_VLAN_MISMATCH: Native VLAN mismatch discovered on GigabitEthernet3/0/18 (64), with 5n-1-esx.corp.alleninstitute.org GigabitEthernet2/0/42 (60).

And the following sec.conf stanza:

type=SingleWithSuppress
ptype=regexp
pattern=T(\d\d:\d\d:\d\d).*? (.*?) .*%CDP-4-NATIVE_VLAN_MISMATCH: (.*)
desc=Native VLAN mismatch reported on $1
action=write /home/tocops/.tocpipe ops $1 $2 $3
window=3600

I would have predicted that the 'action' would be performed once per hour (given a steady stream of these messages, which is what I am seeing).

In fact, the action does not get performed.

In contrast, the following snippet does result in the action being executed (although the suppression window isn't honored):

type=singleWithSuppress
ptype=regexp
pattern=T(\d\d:\d\d:\d\d).*? (.*?) .*%CDP-4-NATIVE_VLAN_MISMATCH: Native VLAN mismatch discovered on (\S+) \((\d+)\), with (\S+) (\S+) \((\d+)\)
desc=Native VLAN mismatch reported between $1 interface $2 (native VLAN $3) and host $4 interface $5 (native VLAN $6)
action=write /home/tocops/.tocpipe ops $1 $2 Native VLAN Mismatch on interface $3 (native VLAN $4) and $5 interface $6 (native VLAN $7)
window=3600

Is there some aspect of pattern matching on ".*" that I am not understanding?

--sk
From: Risto V. <ris...@gm...> - 2020-02-07 14:58:58
hi Richard,

> In this context I am also curious what the effect of using
> --check-timeout / --poll-timeout would be, whether the log file will be
> closed or remain open during the timeout... I am trying to find a way to
> use SEC in a "close after read" mode -- I used this mode in a previous log
> event correlation solution, because keeping log files "always open" causes
> the described problem with their deletion (by an external archivation
> script) on NFS...
>
> From the SEC manual: "Each input file is tracked both by its name and
> i-node, and input file rotations are handled seamlessly. If the input file
> is recreated or truncated, SEC will reopen it and process its content from
> the beginning. If the input file is removed (i.e., there is just an i-node
> left without a name), SEC will keep the i-node open and wait for the input
> file recreation."
>
> Maybe it would be sufficient to have an option to (immediately?) close
> a (re)moved file, instead of keeping the original i-node open until its
> recreation in its original location.

This behavior is intentional and necessary, in order to not miss events that are written into an input file. For example, consider the following situation:

1) process X is running and writing its events into a log file which is monitored by SEC
2) the log rotation tool (e.g., logrotate) deletes the log file
3) the log rotation tool sends a signal to process X, forcing the process to reopen the log file (this step recreates the log file on disk)

Note that after step 2 we have a situation where process X is still writing into a nameless file and could log additional events that SEC needs to process. Therefore, closing the log file immediately, without waiting for the appearance of the new log file on disk, involves the risk of missing events. That risk increases with custom log rotation scripts which might involve a larger time gap between steps 2 and 3. One could also imagine other similar scenarios, like the accidental removal of a log file from disk, and that is the reason why SEC does not close the log file when its name disappears from the directory tree.

Hope this helps,
risto
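For reference, a typical logrotate configuration that produces exactly the 2)-3) sequence described above (the path, pid file, and signal are hypothetical):

/var/log/myapp.log {
    daily
    rotate 7
    postrotate
        # tell the writing process to reopen its log file,
        # which recreates /var/log/myapp.log on disk
        /bin/kill -HUP `cat /var/run/myapp.pid`
    endscript
}

Between the rotation itself and the postrotate script, the writer keeps logging into the now-nameless i-node, which is why SEC holds it open.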
From: Richard O. <ric...@gm...> - 2020-02-07 13:03:50
In this context I am also curious what the effect of using --check-timeout / --poll-timeout would be, and whether the log file will be closed or remain open during the timeout... I am trying to find a way to use SEC in a "close after read" mode -- I used this mode in a previous log event correlation solution, because keeping log files "always open" causes the described problem with their deletion (by an external archivation script) on NFS...

From the SEC manual: "Each input file is tracked both by its name and i-node, and input file rotations are handled seamlessly. If the input file is recreated or truncated, SEC will reopen it and process its content from the beginning. If the input file is removed (i.e., there is just an i-node left without a name), SEC will keep the i-node open and wait for the input file recreation."

Maybe it would be sufficient to have an option to (immediately?) close a (re)moved file, instead of keeping the original i-node open until its recreation in its original location.
From: Richard O. <ric...@gm...> - 2020-02-07 11:38:25
hi Risto,

thank you for your helpful explanation of the inner functionality of SEC. The closed files in my case did not exist; that was the reason.

Richard
From: Risto V. <ris...@gm...> - 2020-02-04 22:09:53
|
hi Richard,

I have never used SEC for monitoring files on NFS file systems, but I can provide a few short comments on how input files are handled.

After SEC has successfully opened an input file, it keeps the file open permanently. When the input file is removed or renamed, it is still kept open (at that point the file handle points to disk blocks that no longer have a name). When a new file appears on disk under the same name as the original input file, SEC closes the file handle pointing to the nameless disk blocks and opens the new input file, processing it from the beginning. However, this operation is atomic, and the input file will never show up as "Closed" in the dump file.

The status "Closed" in the dump file usually indicates that SEC was unable to open the input file when it was started or restarted with a signal (a common situation when the file does not exist or there is no permission to open it), but it can also indicate that SEC has closed the file due to an IO error while reading from it (that should normally not happen and usually means a serious disk issue).

To find out why an input file is in the closed state, I would recommend checking the SEC log (you can set it up with the --log command line option). For example, if the input file did not exist when SEC was started or restarted with a signal, the log should contain the following lines:

Opening input file test.log
Input file test.log does not exist!

Also, if a file is closed due to an IO error, there is a relevant message in the log, for example: Input file test.log IO error (Input/output error), closing the file.

Hope these comments are helpful,
risto
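As an illustration of the reopen logic described above, here is a minimal Python sketch (SEC itself is written in Perl, so this is not SEC's actual code) that detects when a name on disk points to a different file than the currently open handle, by comparing device and inode numbers:

import os

def reopen_if_replaced(path, fh):
    """Reopen path if the name now points to a different file than
    the currently open handle (e.g. after rotation or replacement)."""
    try:
        st_path = os.stat(path)       # file currently stored under this name
    except FileNotFoundError:
        return fh                     # name is gone; keep the old handle open
    st_fh = os.fstat(fh.fileno())     # file the open handle actually refers to
    if (st_path.st_dev, st_path.st_ino) != (st_fh.st_dev, st_fh.st_ino):
        fh.close()                    # old handle pointed to nameless disk blocks
        fh = open(path, "r")          # process the new file from the beginning
    return fh
|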
From: Richard O. <ric...@gm...> - 2020-02-04 20:23:05
|
Hi Risto and friends,

I have a conceptual question about how SEC keeps monitored files open.

When SEC runs as a systemd service and files stored on NFS (opened via addinput) are moved elsewhere and their removal is then attempted, NFS persistently keeps .nfsNUMBER files in existence, making it impossible to remove the folder containing them. This NFS behaviour is described e.g. here:
https://www.ibm.com/support/pages/what-are-nfs-files-accumulate-and-why-cant-they-be-deleted-even-after-stopping-cognos-8

It seems that SEC running as a service (not re-opening and closing each monitored file on every pass) keeps the watched files permanently open, which is not desirable in such a setup.

But when I look into the dump file, some files have "status: Open" and some "status: Closed", so perhaps my assumption that SEC keeps log files permanently open is not correct - I am confused.

How does it actually work? Does anybody have experience with the described behaviour (SEC+NFS, though it could arise in other setups too), and how can it be overcome?

Thank you.

Richard
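The .nfsNUMBER files come from the NFS client's "silly rename" mechanism: a local filesystem can keep an unlinked-but-open file alive invisibly, while the NFS client has to keep a name for it. The effect can be reproduced with a few lines of Python on any NFS mount (the path below is hypothetical):

import os

path = "/mnt/nfs/test.log"    # hypothetical NFS-mounted file

fh = open(path, "w")          # hold the file open, like a long-running daemon
os.remove(path)               # unlink it while the handle is still open

# On a local filesystem the directory would now be empty; on NFS the client
# silly-renames the file, so the listing shows a hidden .nfsXXXX placeholder
# that cannot be deleted until the last handle is closed.
print(os.listdir(os.path.dirname(path)))

fh.close()                    # the .nfsXXXX file disappears after this close
|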
From: Risto V. <ris...@gm...> - 2020-02-04 16:38:42
|
hi Richard,

That's an interesting question. Dump files in JSON format have been supported only across a few recent versions (from 2.8.0 to 2.8.2) and don't have a long history behind them, but so far their format has stayed the same.

As for dump files in text format, there have been a number of changes since early versions of SEC, but most of these changes have involved adding sections with new data into the dump file, not changing the format of existing information. I did a quick test with version 2.6.2 (released in 2012) and version 2.8.2 (the most recent version), and the only difference which can influence parsing is how rule usage statistics are represented.

With version 2.6.2, a single space character is used for separating words in each printed line, and rule usage statistics look as follows:

Statistics for the rules from test.sec (loaded at Tue Feb 4 17:54:34 2020)
------------------------------------------------------------
Rule 1 at line 1 (test) has matched 1 events
Rule 2 at line 7 (test) has matched 0 events
Rule 3 at line 16 (test) has matched 0 events
Rule 4 at line 23 (test) has matched 0 events

However, in version 2.8.2 the numerals in rule usage statistics are space-padded on the left, and the width of each numeric field is determined by the number of characters in the largest numeral (that format change was introduced in 2013 for readability reasons: https://sourceforge.net/p/simple-evcorr/mailman/message/30185995/). For example, if you have three rules that have matched 1, 823 and 7865 times, the first numeral is printed as "   1", the second one as " 823", and the third one as "7865". Here is an example of rule usage statistics for version 2.8.2:

Statistics for the rules from test.sec (loaded at Tue Feb 4 17:53:30 2020)
------------------------------------------------------------
Rule 1 line 1 matched 1 events (test)
Rule 2 line 7 matched 0 events (test)
Rule 3 line 16 matched 0 events (test)
Rule 4 line 23 matched 0 events (test)

The other differences between dump files are some extra data printed by version 2.8.2 (for example, effective user and group IDs of the SEC process). So one can say that the format of the textual dump file has not gone through major changes during the last 7-8 years, and there are no plans for such changes in upcoming versions.

kind regards,
risto
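Since the goal is to monitor SEC through its dump files, here is a small Python sketch that parses the rule usage statistics lines and tolerates both of the layouts shown above (the dump file path is just an example, and the string in parentheses is taken to be the rule description):

import re

# 2.6.2 layout: Rule 1 at line 1 (test) has matched 1 events
# 2.8.2 layout: Rule 2 line 7 matched    0 events (test)
LINE_262 = re.compile(r'^Rule (\d+) at line (\d+) \((.+)\) has matched\s+(\d+) events')
LINE_282 = re.compile(r'^Rule (\d+) line (\d+) matched\s+(\d+) events \((.+)\)')

def parse_rule_stats(dump_path):
    """Return a list of (rule_number, line_number, description, match_count)."""
    stats = []
    with open(dump_path) as f:
        for line in f:
            m = LINE_262.match(line)
            if m:
                rule, lineno, desc, count = m.groups()
                stats.append((int(rule), int(lineno), desc, int(count)))
                continue
            m = LINE_282.match(line)
            if m:
                rule, lineno, count, desc = m.groups()
                stats.append((int(rule), int(lineno), desc, int(count)))
    return stats

# Example: print the busiest rules first
for row in sorted(parse_rule_stats("/tmp/sec.dump"), key=lambda r: -r[3]):
    print(row)
|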
From: Richard O. <ric...@gm...> - 2020-02-04 14:37:37
|
Hi Risto,

thank you for the positive answer - a dumping period of minutes is enough; dumps many times per second are not needed. And thank you also for the useful tips - I had already noticed the JSON option, but I was considering that the default text format may be more universally usable, because extra modules (JSON.pm) may not be installed or installable on every system (not only for technical reasons, but also because of security policies - more code to audit). Perhaps the question should also be how frequently dump file formats change during SEC development, since such changes could break monitoring implemented for a particular version of SEC.

Richard

On Tue, 28. 1. 2020 at 12:40, Risto Vaarandi <ris...@gm...> wrote:

> hi Richard,
>
> as I understand from your post, you would like to create SEC dump files periodically, in order to monitor the performance of SEC based on these dump files. Let me first provide some comments on the performance-related question. Essentially, creation of a dump file involves a pass over all major internal data structures, so that summaries of internal data structures can be written into the dump file. In fact, SEC traverses all internal data structures for housekeeping purposes anyway once a second (you can change this interval with the --cleantime command line option). Therefore, if you re-create the dump file after reasonable time intervals (say, after a couple of minutes), it won't noticeably increase CPU consumption (things would become different if dumps were generated many times a second).
>
> For production use, I would offer a couple of recommendations. Firstly, you could consider generating dump files in JSON format, which might make their parsing easier. That feature was requested a couple of years ago specifically for monitoring purposes (https://sourceforge.net/p/simple-evcorr/mailman/message/36334142/), and you can activate JSON format for the dump file with the --dumpfjson command line option. You could also consider using the --dumpfts option, which adds a numeric timestamp suffix to dump file names (e.g., /tmp/sec.dump.1580210466). SEC does not overwrite the dump file if it already exists, so having timestamps in file names avoids the need for dump file rotation (you would still need to delete old dump files from time to time, though).
>
> Hopefully this information is helpful.
>
> kind regards,
> risto
>
> Richard Ostrochovský (<ric...@gm...>) wrote on Mon, 27. 1. 2020 at 22:04:
>
>> Greetings, friends,
>>
>> there is an idea to monitor high-volume event flows (per rules or log files, e.g. in bytes per time unit) from dump files produced by *periodic dumping*.
>>
>> The question is whether this is recommended for *production use* - the answer depends at least on whether dump creation somehow slows down or stops SEC processing, or whether its overhead is insignificant (also for thousands of rules and complex configurations).
>>
>> Are there some best practices for (or experiences with) performance self-monitoring and tuning of SEC?
>>
>> Thank you.
>>
>> Richard
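For the periodic dumping itself, SEC writes its dump file upon receiving the SIGUSR1 signal (per the SEC documentation), so a small wrapper can combine dump creation with cleanup of the old timestamped dumps mentioned above. A minimal Python sketch, with the pid file and dump locations assumed (the pid file path is set with SEC's --pid option; the timestamped names come from --dumpfts):

import glob
import os
import signal
import time

PID_FILE = "/var/run/sec.pid"     # assumed; set with SEC's --pid option
DUMP_GLOB = "/tmp/sec.dump.*"     # timestamped dumps produced with --dumpfts
KEEP_SECONDS = 24 * 3600          # delete dumps older than one day

while True:
    with open(PID_FILE) as f:
        pid = int(f.read().strip())
    os.kill(pid, signal.SIGUSR1)  # ask SEC to write a new dump file

    # prune old timestamped dump files so they do not accumulate forever
    cutoff = time.time() - KEEP_SECONDS
    for dump in glob.glob(DUMP_GLOB):
        if os.path.getmtime(dump) < cutoff:
            os.remove(dump)

    time.sleep(120)               # re-create the dump every couple of minutes
|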