simple-evcorr-users Mailing List for Simple Event Correlator (Page 4)
Brought to you by: ristov
Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec
---|---|---|---|---|---|---|---|---|---|---|---|---
2001 | | | | | | | | | | | | (5)
2002 | (6) | (14) | (8) | (13) | (6) | (24) | (7) | (9) | (20) | (7) | (1) |
2003 | (10) | | (6) | (12) | (12) | (44) | (39) | (10) | (28) | (35) | (66) | (51)
2004 | (21) | (33) | (32) | (59) | (59) | (34) | (6) | (39) | (13) | (13) | (19) | (9)
2005 | (18) | (36) | (24) | (18) | (51) | (34) | (9) | (34) | (52) | (20) | (11) | (12)
2006 | (20) | (3) | (68) | (41) | (11) | (39) | (17) | (34) | (40) | (42) | (25) | (33)
2007 | (6) | (28) | (32) | (25) | (11) | (20) | (8) | (12) | (13) | (42) | (37) | (16)
2008 | (25) | (1) | (28) | (34) | (16) | (23) | (45) | (26) | (5) | (5) | (20) | (39)
2009 | (14) | (24) | (40) | (47) | (11) | (19) | (15) | (13) | (7) | (34) | (27) | (24)
2010 | (14) | (5) | (16) | (12) | (25) | (43) | (13) | (12) | (10) | (40) | (23) | (29)
2011 | (25) | (7) | (28) | (36) | (18) | (26) | (7) | (16) | (21) | (29) | (13) | (36)
2012 | (26) | (13) | (12) | (13) | (12) | (2) | (3) | (15) | (34) | (49) | (25) | (23)
2013 | (1) | (35) | (32) | (6) | (11) | (68) | (15) | (8) | (58) | (27) | (19) | (15)
2014 | (40) | (49) | (21) | (8) | (26) | (9) | (33) | (35) | (18) | (7) | (13) | (8)
2015 | (12) | (2) | (16) | (33) | (4) | (25) | (20) | (9) | (10) | (40) | (15) | (17)
2016 | (16) | (16) | (4) | (40) | (9) | (21) | (9) | (16) | (13) | (17) | (14) | (26)
2017 | (9) | (6) | (23) | (7) | (1) | (6) | (11) | (17) | | (1) | (3) | (1)
2018 | (14) | (2) | (12) | (10) | (1) | (12) | (6) | (1) | (1) | (9) | (3) | (6)
2019 | (1) | | (5) | | (3) | (3) | (2) | (9) | (11) | (7) | (10) | (11)
2020 | (9) | (14) | (15) | (26) | (1) | | | (4) | | (6) | | (6)
2021 | | (1) | (11) | (1) | (5) | | (4) | | | | (1) |
2022 | | | (6) | (3) | (2) | | | | | | (6) | (5)
2023 | | | (11) | | | (1) | | (2) | (9) | | | (1)
2024 | | | | (6) | | (1) | | | | | (1) |
2025 | (1) | | | | | | | | | | |
From: Risto V. <ris...@gm...> - 2021-03-13 11:19:30
hi all,

this email provides a more detailed description of the major new features in SEC-2.9.alpha1.

Firstly, one can use the 'egptype' and 'egpattern' fields in the EventGroup rule, which specify an additional event group matching condition on top of the conventional threshold conditions. The 'egptype' and 'egpattern' fields define the *event group pattern*, which can be a regular expression, a string pattern, or a Perl function. The event group pattern is matched against the *event group string*, which reflects all events the event correlation operation has seen within its event correlation window. For example, consider an EventGroup2 operation which has observed three events, so that the earliest event has matched its 'pattern' field and the following two events its 'pattern2' field. In that case, the event group string is "1 2 2". The event group string is matched against the event group pattern only after *all* traditional numeric threshold conditions have evaluated true.

To illustrate how event group patterns work, consider the following EventGroup2 rule:

type=EventGroup2
ptype=SubStr
pattern=EVENT_A
thresh=2
ptype2=SubStr
pattern2=EVENT_B
thresh2=2
desc=Sequence of two or more As and Bs with 'A B' at the end
action=write - %s
egptype=RegExp
egpattern=1 2$
window=60

Also, suppose the following events occur, and each event timestamp reflects the time SEC observes the event:

Mar 10 12:05:31 EVENT_B
Mar 10 12:05:32 EVENT_B
Mar 10 12:05:38 EVENT_A
Mar 10 12:05:39 EVENT_A
Mar 10 12:05:42 EVENT_B

When these events are observed by the above EventGroup2 rule, the rule starts an event correlation operation at 12:05:31. When the fourth event appears at 12:05:39, all threshold conditions (thresh=2 and thresh2=2) become satisfied. Note that without the 'egptype' and 'egpattern' rule fields, the operation would execute the 'write' action.
However, since these fields are present, the following event group string is built from the first four events: "2 2 1 1". This string is matched against the regular expression 1 2$ (the event group pattern provided with the 'egpattern' field). Since there is no match, the operation will *not* execute the 'write' action given with the 'action' field. When the fifth event appears at 12:05:42, all threshold conditions are again satisfied, and all observed events produce the following event group string: "2 2 1 1 2". Since this time the event group string matches the regular expression given with the 'egpattern' field, the operation will write the string "Sequence of two or more As and Bs with 'A B' at the end" to standard output with the 'write' action. To summarize, the 'egptype' and 'egpattern' fields allow specific event sequences to be matched within a given time window (e.g., one can verify that events appear in a specific order).

The 2.9.alpha1 version also supports five new actions: 'cmdexec', 'spawnexec', 'cspawnexec', 'pipeexec', and 'reportexec'. These actions are similar to the 'shellcmd', 'spawn', 'cspawn', 'pipe', and 'report' actions, but they execute command lines without shell interpretation. For example, consider the following action definition:

cmdexec rm /tmp/report*

This action will execute the command line 'rm /tmp/report*', but unlike with the 'shellcmd' action, the asterisk is not treated as a file pattern but simply as a character in the file name. Therefore, the action will remove the file named "/tmp/report*", and not the files /tmp/report1 and /tmp/report2 if they are present in the /tmp directory. The new actions in 2.9.alpha1 allow external programs to be executed in a more secure way, avoiding unexpected side effects if shell metacharacters are injected into command lines. Also, the SingleWithScript rule has an additional 'shell' rule field in the new version for running external programs with or without shell interpretation.
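To make the contrast concrete, here is a minimal hypothetical rule pair (the EVENT_CLEANUP event name and the /tmp/report* files are invented for this sketch): the first rule hands its command line to the shell, so the asterisk is expanded; the second executes it without shell interpretation, so the asterisk stays literal.

```
# Hypothetical sketch -- EVENT_CLEANUP and the /tmp/report* file names
# are made up for illustration.

# With 'shellcmd', the command line goes through the shell, so the
# asterisk is expanded and all matching files are removed:
type=Single
ptype=SubStr
pattern=EVENT_CLEANUP
desc=remove all report files via the shell
action=shellcmd rm /tmp/report*

# With 'cmdexec', no shell is involved, so rm receives the literal
# argument '/tmp/report*' and removes only a file with that exact name:
type=Single
ptype=SubStr
pattern=EVENT_CLEANUP
desc=remove the single file literally named /tmp/report*
action=cmdexec rm /tmp/report*
```

The same trade-off applies to the other action pairs: the *exec variants are the safer choice whenever parts of the command line come from matched event data.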
Finally, the new version no longer uses the Perl JSON module and has switched to JSON::PP, since unlike the JSON module, JSON::PP is part of the standard Perl distribution. This means that SEC now uses only standard Perl modules which come together with Perl, and does not require any additional modules. Since the Sys::Syslog and JSON::PP modules might be missing from some old Perl distributions (Perl versions 5.8, 5.10 and 5.12 usually don't have them installed by default), the presence of these modules is not mandatory. If you have such an old Perl distribution and don't want to install Sys::Syslog and JSON::PP manually, SEC will simply run with a couple of non-essential features disabled, producing a warning message if you attempt to use these features.

As for other new features and changes in SEC-2.9.alpha1, please see the changelog of the new version.

kind regards,
risto
From: Risto V. <ris...@gm...> - 2021-03-12 19:37:43
hi all,

SEC-2.9.alpha1 has been released and is available for download from the SEC home page (link for direct download: https://github.com/simple-evcorr/sec/releases/download/2.9.alpha1/sec-2.9.alpha1.tar.gz). This version is an alpha version of the upcoming 2.9 major release, and it introduces a number of improvements, including five new actions and enhancements to the EventGroup rule. Trying out the new version and providing feedback is very much appreciated :)

Here is the changelog for the 2.9.alpha1 version:

* added support for 'cmdexec', 'spawnexec', 'cspawnexec', 'pipeexec' and 'reportexec' actions.
* added support for the 'shell' field in SingleWithScript rules.
* added support for 'egptype' and 'egpattern' fields in EventGroup rules.
* added support for the %.sp built-in action list variable.
* added ipv6 support for 'tcpsock' and 'udpsock' actions.
* bugfixes for 'write', 'writen', 'owritecl', 'udgram', 'ustream', 'udpsock' and 'tcpsock' actions.
* starting from this version, a program provided with the --timeout-script command line option is executed without shell interpretation.
* starting from this version, SEC uses the Perl JSON::PP module instead of the JSON module (JSON::PP is included in the standard Perl installation).

kind regards,
risto
From: Risto V. <ris...@gm...> - 2021-02-28 11:33:14
hi all,

for your information, the SEC FAQ has been updated with an example about matching input lines in UTF-8 and other encodings: https://simple-evcorr.github.io/FAQ.html#23

kind regards,
risto
From: Risto V. <ris...@gm...> - 2020-12-16 18:03:52
hi Agustin,

Currently, there are no variables which could be set from one rule and be accessible in *all* fields of other rules. However, the same action list variable can be accessed in all rules, although the use of action list variables is limited to action* rule fields only.

kind regards,
risto
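The action-list-variable approach described above can be sketched roughly as follows (a hypothetical illustration: the MSG1 event name and the %variable1 name are made up). One rule sets the variable with the 'assign' action, and a later rule references it in its own action field; note that %variable1 is visible in action* fields only, not in pattern or desc fields.

```
# Hypothetical sketch of sharing an action list variable between rules.

# Set %variable1 when the (invented) MSG1 event is seen:
type=Single
ptype=SubStr
pattern=MSG1
desc=set the shared action list variable
action=assign %variable1 msg 1

# Reference %variable1 in the action field of another rule:
type=Single
ptype=RegExp
pattern=sshd\[\d+\]: Failed .+ for (\S+) from ([\d.]+) port \d+ ssh2
desc=use the shared variable in an action field
action=logonly $1 %variable1
```

This gives behaviour close to the global variables asked about below, as long as the variable is only needed inside action lists.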
From: Agustín L. R. <ag...@ho...> - 2020-12-16 10:19:07
Hi Risto,

Is it possible to use global variables? For example:

VARIABLE1 = msg 1
VARIABLE2 = msg 2

and afterwards use these variables in different rules:

type=Single
ptype=RegExp
pattern=sshd\[\d+\]: Failed .+ for (\S+) from ([\d.]+) port \d+ ssh2
desc=use of VARIABLE1
action=logonly $1 $VARIABLE1

type=Single
ptype=RegExp
pattern=sshd\[\d+\]: Failed .+ for (\S+) from ([\d.]+) port \d+ ssh2
desc=use of VARIABLE2
action=logonly $1 $VARIABLE2

Thank you!
From: Risto V. <ris...@gm...> - 2020-12-16 09:18:17
...one additional note -- the official SEC documentation on the 'while' action has other relevant examples about processing context event stores (you can find them at the end of the "ACTIONS, ACTION LISTS AND ACTION LIST VARIABLES" section of the SEC man page: https://simple-evcorr.github.io/man.html#lbAI).

kind regards,
risto
From: Risto V. <ris...@gm...> - 2020-12-15 21:06:53
> For better testing, it would be cool if SEC's idea of the current time could be derived from the timestamps in the log file instead of wall-clock time, so that context actions happen at the right time relative to log messages (rather than 30 seconds after the program ends! :-), but that's probably a bit too much to ask for.

Implementing this idea is actually quite complex, since the internal state of SEC is not affected only by new events in input log files. There are many state changes which are associated with the system clock, for example, actions which are executed by Calendar rules, PairWithWindow rules, expiring contexts, etc. While some actions might not necessarily change the state of SEC (e.g., sending an email to someone does not influence event processing), there are actions which have an impact on state. For example, if an expiring context creates a synthetic event, this event might match other rules and trigger a non-trivial event processing scheme. Also, if a Calendar rule executes the 'addinput' action, a new input file is opened which might bring a large number of new input events into play, while a context created by an expiring PairWithWindow operation might disable several frequently matching rules in the rule base.

Therefore, a simple artificial clock which is incremented according to the timestamp of each new input event does not allow a large part of the event correlation functionality to be tested. To do that, one must also identify the relevant time moments in between event timestamps from input files, and carefully replay each such time moment. In short, a proper implementation of an artificial clock is a complex issue, and requires too much effort for too limited value.

kind regards,
risto
From: Risto V. <ris...@gm...> - 2020-12-15 19:46:05
hi Penelope,

since 'obsolete' is a SEC action, it cannot be called from Perl; you rather need some sort of loop written in the SEC rule language. Fortunately, SEC supports the 'while' action that executes an action list as long as the given action list variable evaluates true in a boolean context. That allows you to write a loop for processing a context event store, since there is a 'getsize' action for finding the number of events in the store, and a 'shift' (or 'pop') action for removing an element from the beginning (or end) of the store. To take advantage of this functionality for your task, you just have to write the relevant context names into the event store of some context, and then process this context with a loop.

Here is an example ruleset that illustrates the idea:

type=Single
ptype=SubStr
pattern=SEC_SHUTDOWN
context=SEC_INTERNAL_EVENT
desc=Save contexts msg_* into /tmp/report.* on shutdown
action=lcall %ret -> ( sub { join("\n", grep { /^msg_/ } keys %main::context_list) } ); \
       fill BUFFER %ret; getsize %size BUFFER; \
       while %size ( shift BUFFER %name; obsolete %name; getsize %size BUFFER )

type=single
ptype=regexp
pattern=create (\S+)
desc=create the $1 context
action=create $1 3600 ( report $1 /bin/cat > /tmp/report.$1 )

type=single
ptype=regexp
pattern=add (\S+) (.+)
desc=add string $2 to the $1 context
action=add $1 $2

The 'lcall' action in the first rule executes the following Perl code:

join("\n", grep { /^msg_/ } keys %main::context_list)

This code matches all context names with the "msg_" prefix and joins such names into a multiline string. The following 'fill' action splits this multiline string by newline, and writes the individual context names into the event store of the BUFFER context. The number of context names in the event store is then established with getsize %size BUFFER, and then the 'while' loop gets executed:

while %size ( shift BUFFER %name; obsolete %name; getsize %size BUFFER )

Inside the loop, context names are taken from the event store one by one, and the 'obsolete' action is called for each context name.

One note of caution -- 'obsolete' triggers the 'report' action which forks a separate process, and a forked process has 3 seconds to finish its work before receiving a TERM signal from SEC (if the process has to run longer, a signal handler must be set up for TERM).

Hopefully the above rule example is useful.

kind regards,
risto
From: <sec...@ch...> - 2020-12-14 23:38:34
Hello!

I'm dabbling with SEC, experimenting with adding lines into contexts and deciding what to do with a context only when it is finished. Essentially it's taking a look at the group of log messages emitted by sendmail for every connection, looking for behaviour that is not consistent with being an honored guest on the internet, and blocking the source with iptables and ipset.

The problem is that I'm testing with the same input file over and over, but the 'report' actions aren't running because the entire log file is processed in less than 10 seconds:

sec --conf sendmail.test \
    --input /tmp/all.logs \
    --fromstart \
    --notail \
    --bufsize=1 \
    --log=- \
    --intevents \
    --intcontexts \
    --debug=50

Rather than write some perl to run on the SEC_SHUTDOWN internal event to write the context buffers to files, I'd really rather just run the 'obsolete' action on all contexts. Is there a straightforward way to do that?

type=Single
ptype=SubStr
pattern=SEC_SHUTDOWN
context=SEC_INTERNAL_EVENT
desc=Save contexts msg_* into /tmp/report.* on shutdown
action=logonly; lcall %ret -> ( sub { my($context); \
       foreach $context (keys %main::context_list) { obsolete $context; } \
       } )

Mon Dec 14 14:49:34 2020: Code 'CODE(0x560fca302fb8)' runtime error: Can't locate object method "obsolete" via package "msg_sendmail[4208]" (perhaps you forgot to load "msg_sendmail[4208]"?) at (eval 9) line 1.

For better testing, it would be cool if SEC's idea of the current time could be derived from the timestamps in the log file instead of wall-clock time, so that context actions happen at the right time relative to log messages (rather than 30 seconds after the program ends! :-), but that's probably a bit too much to ask for.

Thanks!

--
Penelope Fudd
sec...@ch...
From: Risto V. <ris...@gm...> - 2020-10-18 18:15:31
hi Michael, thanks a lot for sharing examples from your rulebase! I am sure they will be helpful for people who have to tackle similar tasks in the future, and will be searching the mailing list for relevant examples. kind regards, risto Risto- > > > > Thank you for taking time to respond so thoroughly. Your examples were > quite clear. I would not have thought to try to use global variables. > Ultimately I chose your cached/context approach, which worked great. > > > > I ended up taking a monolithic config file of nearly 400 hard to > maintain/organize rules and spread them more logically across 29 (and > counting) configuration files. The impetus was needing to re-use some (but > not all) of the rules in a second network that I admin. > > > > In an attempt to give back, I included below my general approach in case > others find it useful. > > > > Thanks again, > > -Michael > > > > ==============/============= > > > > # 000-main.config > > # this file will be the same on each network > > > > type=Single > > ptype=SubStr > > pattern=SEC_STARTUP > > desc=Started SEC > > action=assign %a /destinationfile.log > > > > type=Single > > ptype=RegExp > > pattern=^\w+\s+[0-9]+ [0-9]+:[0-9]+:[0-9]+ (.+?) .+ > > varmap=SYSLOG; hostname=1 > > desc=hostname > > action=none > > continue=TakeNext > > > > type=Jump > > ptype=cached > > pattern=SYSLOG > > context=$+{hostname} -> ( sub { return 1 if $_[0]; return 0 } ) > > cfset=001-all > > continue=EndMatch > > > > # This is a catch-all rule to dump to the logfile anything that didn't > match above.. 
> > type=Single > > ptype=RegExp > > pattern=.* > > desc=$0 > > action=write %a $0 > > > > ---------/-------- > > > > #001-all.config > > # this file will be different between networks due to differences in > vendors, device naming standards, etc > > > > type=Options > > joincfset=001-all > > procallin=no > > > > type=Jump > > ptype=cached > > pattern=SYSLOG > > context=$+{hostname} -> ( sub { return 1 if $_[0] =~ m/^r-/ } ) > > cfset=050-juniper > > continue=EndMatch > > > > type=Jump > > ptype=cached > > pattern=SYSLOG > > context=$+{hostname} -> ( sub { return 1 if $_[0] =~ m/^[ts]-/ } ) > > > > # This is a catch-all rule to dump to the logfile anything that doesn't > match above.. > > type=Single > > ptype=RegExp > > pattern=.* > > desc=$0 > > action=write %a $0 > > > > ---------/-------- > > > > # 050-juniper.config > > # these files will be the same between networks, and where most of the > rules and re-use will come in. > > > > type=Options > > joincfset=050-juniper > > procallin=no > > > > # near top due to frequency.. > > type=Jump > > ptype=RegExp > > pattern=.+ > > cfset=110-juniper-mx104 > > continue=TakeNext > > > > # two examples, I have many stanzas like this for individual JunOS daemons > [hence many files] > > type=Jump > > ptype=RegExp > > pattern= mgd\[[0-9]+\]: > > cfset=150-juniper-mgd > > continue=EndMatch > > > > type=Jump > > ptype=RegExp > > pattern= rpd\[[0-9]+\]: > > cfset=150-juniper-rpd > > continue=EndMatch > > > > # things that don’t match specific daemons > > type=Jump > > ptype=RegExp > > pattern=.+ > > cfset=100-juniper-nodaemon > > continue=TakeNext > > > > # This is a catch-all rule to dump to the logfile anything that survived > > type=Single > > ptype=RegExp > > pattern=.* > > desc=$0 > > action=write %a $0 > > > > [FIN] > > > > *From:* ris...@gm... <ris...@gm...> > *Sent:* Saturday, October 17, 2020 5:13 PM > *To:* Michael Hare <mic...@wi...> > *Cc:* sim...@li... 
> *Subject:* Re: [Simple-evcorr-users] using variables learned in rule A in > rule B's perlfunc: possible? > > > > hi Michael, > > > > there are a couple of ways to address this problem. Firstly, instead of > using sec match variables, one can set up Perl's native variables for > sharing data between rules. For example, the regular expression pattern of > the first rule can be easily converted into perlfunc pattern, so that the > pattern would assign the hostname to Perl global variable $hostname. This > global variable can then be accessed in perfunc patterns of other rules. > Here is an example that illustrates the idea: > > > > type=Single > ptype=perlfunc > pattern=sub { if ($_[0] =~ /^\w+\s+[0-9]+ [0-9]+:[0-9]+:[0-9]+ (.+?) /) \ > { $hostname = $1; } else { $hostname = ""; } return 1; } > desc=hostname > action=none > continue=TakeNext > > type=Jump > ptype=perlfunc > pattern=sub { return 1 if $hostname =~ m/^first-use-case/ } > cfset=rules-for-this-match-1 > > type=Jump > ptype=perlfunc > pattern=sub { return 1 if $hostname =~ m/^second-use-case/ } > cfset=rules-for-this-match-2 > > > > Since SEC supports caching the results of pattern matching, one could also > store the matching result from the first rule into cache, and then retrieve > the result from cache with the 'cached' pattern type. Since this pattern > type assumes that the name of the cached entry is provided in the 'pattern' > field, the hostname check with a perl function has to be implemented in a > context expression (the expression is evaluated after the pattern match). > > > > Here is an example which creates an entry named SYSLOG in a pattern match > cache, so that all match variables created in the first rule can be > retrieved in later rules. Note that the entry is created in the 'varmap' > field which also sets up named match variable $+{hostname}. 
In further > rules, the 'cached' pattern type is used for retrieving the SYSLOG entry > from cache, and creating all match variables from this entry. In order to > check the hostname, the $+{hostname} variable that was originally set in > the first rule is passed into perlfunc patterns in the second and third > rule. Also, if you need to check more than just few match variables in > perlfunc pattern, it is more efficient to pass a reference to the whole > cache entry into the Perl function, so that individual cached match > variables can be accessed through the reference (I have added an additional > fourth rule into the example which illustrates this idea): > > > > type=Single > ptype=RegExp > pattern=^\w+\s+[0-9]+ [0-9]+:[0-9]+:[0-9]+ (.+?) .+ > varmap=SYSLOG; hostname=1 > desc=hostname > action=none > continue=TakeNext > > type=Jump > ptype=cached > pattern=SYSLOG > context=$+{hostname} -> ( sub { return 1 if $_[0] =~ m/^first-use-case/ } ) > cfset=rules-for-this-match-1 > > type=Jump > ptype=cached > pattern=SYSLOG > context=$+{hostname} -> ( sub { return 1 if $_[0] =~ m/^second-use-case/ } > ) > cfset=rules-for-this-match-2 > > type=Jump > ptype=cached > pattern=SYSLOG > context=SYSLOG :> ( sub { return 1 if $_[0]->{"hostname"} =~ > m/^third-use-case/ } ) > cfset=rules-for-this-match-3 > > > > A small side-note about the first rule -- you can also create the > $+{hostname} variable in the regular expression itself by rewriting it as > follows: > > ^\w+\s+[0-9]+ [0-9]+:[0-9]+:[0-9]+ (?<hostname>.+?) .+ > > And in that case, you can rewrite the 'varmap=SYSLOG; hostname=1' > statement in the first rule simply as 'varmap=SYSLOG'. > > > > Hopefully these examples are useful and help you to tackle the problem. > > > > kind regards, > > risto > > > > > > > > Hi- > > I'm sorry to ask what is probably very basic question, but I have > struggling with this for awhile (I have perused the manual a lot and the > mailing list a bit) and could use some guidance. 
> > The short version is: is there a way to take the results of a pattern > match in one rule and use that value in a perlfunc in another? > > More verbosely, at this time I use SEC for network syslog exclusion; > nothing fancy. I would like to start using Jump rules based on hostname. > Hostname is derived from the incoming log line. > > I thought I would be clever and use a single rule to determine if there > was a hostname or not, save it somewhere reusable, and then launch jump > rules based on that. > > something like > > type=Single > ptype=RegExp > pattern=^\w+\s+[0-9]+ [0-9]+:[0-9]+:[0-9]+ (.+?) .+ > varmap= hostname=1 > desc=hostname > action=assign %r $+{hostname} > continue=TakeNext > > type=Jump > ptype=perlfunc > pattern=sub { return 1 if $+{hostname} =~ m/^first-use-case/ } > cfset=rules-for-this-match-1 > > type=Jump > ptype=perlfunc > pattern=sub { return 1 if $+{hostname} =~ m/^second-use-case/ } > cfset=rules-for-this-match-2 > > I know this doesn't work. I understand that '%r' is not a perl hash, and > is an action list variable, and that $+{hostname} is undef inside the > type=Jump rule perlfunc. I also know that %r is being set correctly, I see > it in "variables -> r" if I do SIGUSR1 dump. > > So is it possible stash away a variable from one rule and use it in a Jump > rule like above? I can work around this easily by using a single rule like > below, but if I have for example 20 jump permutations, it seems quite > redundant to keep recalculating the hostname for comparison. > > type=Jump > ptype=perlfunc > pattern=sub { return 0 unless (defined($_[1]) && $_[0] =~ /^\w+\s+[0-9]+ > [0-9]+:[0-9]+:[0-9]+ (.+?) .+/); return 1 if $1 =~ m/^first-use-case/} > cfset=all-rules > > Thanks in advance, > -Michael > > > _______________________________________________ > Simple-evcorr-users mailing list > Sim...@li... > https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users > > |
From: Michael H. <mic...@wi...> - 2020-10-18 15:33:51
|
Risto-

Thank you for taking the time to respond so thoroughly. Your examples were quite clear. I would not have thought to try to use global variables. Ultimately I chose your cached/context approach, which worked great. I ended up taking a monolithic config file of nearly 400 hard-to-maintain rules and spreading them more logically across 29 (and counting) configuration files. The impetus was needing to re-use some (but not all) of the rules in a second network that I admin. In an attempt to give back, I have included my general approach below in case others find it useful.

Thanks again,
-Michael

==============/=============
# 000-main.config
# this file will be the same on each network

type=Single
ptype=SubStr
pattern=SEC_STARTUP
desc=Started SEC
action=assign %a /destinationfile.log

type=Single
ptype=RegExp
pattern=^\w+\s+[0-9]+ [0-9]+:[0-9]+:[0-9]+ (.+?) .+
varmap=SYSLOG; hostname=1
desc=hostname
action=none
continue=TakeNext

type=Jump
ptype=cached
pattern=SYSLOG
context=$+{hostname} -> ( sub { return 1 if $_[0]; return 0 } )
cfset=001-all
continue=EndMatch

# This is a catch-all rule to dump to the logfile anything that didn't match above..
type=Single
ptype=RegExp
pattern=.*
desc=$0
action=write %a $0

---------/--------
#001-all.config
# this file will be different between networks due to differences in vendors, device naming standards, etc

type=Options
joincfset=001-all
procallin=no

type=Jump
ptype=cached
pattern=SYSLOG
context=$+{hostname} -> ( sub { return 1 if $_[0] =~ m/^r-/ } )
cfset=050-juniper
continue=EndMatch

type=Jump
ptype=cached
pattern=SYSLOG
context=$+{hostname} -> ( sub { return 1 if $_[0] =~ m/^[ts]-/ } )

# This is a catch-all rule to dump to the logfile anything that doesn't match above..
type=Single
ptype=RegExp
pattern=.*
desc=$0
action=write %a $0

---------/--------
# 050-juniper.config
# these files will be the same between networks, and where most of the rules and re-use will come in.
type=Options
joincfset=050-juniper
procallin=no

# near top due to frequency..
type=Jump
ptype=RegExp
pattern=.+
cfset=110-juniper-mx104
continue=TakeNext

# two examples, I have many stanzas like this for individual JunOS daemons [hence many files]
type=Jump
ptype=RegExp
pattern= mgd\[[0-9]+\]:
cfset=150-juniper-mgd
continue=EndMatch

type=Jump
ptype=RegExp
pattern= rpd\[[0-9]+\]:
cfset=150-juniper-rpd
continue=EndMatch

# things that don't match specific daemons
type=Jump
ptype=RegExp
pattern=.+
cfset=100-juniper-nodaemon
continue=TakeNext

# This is a catch-all rule to dump to the logfile anything that survived
type=Single
ptype=RegExp
pattern=.*
desc=$0
action=write %a $0

[FIN]

From: ris...@gm... <ris...@gm...> Sent: Saturday, October 17, 2020 5:13 PM To: Michael Hare <mic...@wi...> Cc: sim...@li... Subject: Re: [Simple-evcorr-users] using variables learned in rule A in rule B's perlfunc: possible? hi Michael, there are a couple of ways to address this problem. Firstly, instead of using sec match variables, one can set up Perl's native variables for sharing data between rules. For example, the regular expression pattern of the first rule can be easily converted into perlfunc pattern, so that the pattern would assign the hostname to Perl global variable $hostname. This global variable can then be accessed in perfunc patterns of other rules. Here is an example that illustrates the idea: type=Single ptype=perlfunc pattern=sub { if ($_[0] =~ /^\w+\s+[0-9]+ [0-9]+:[0-9]+:[0-9]+ (.+?) 
/) \ { $hostname = $1; } else { $hostname = ""; } return 1; } desc=hostname action=none continue=TakeNext type=Jump ptype=perlfunc pattern=sub { return 1 if $hostname =~ m/^first-use-case/ } cfset=rules-for-this-match-1 type=Jump ptype=perlfunc pattern=sub { return 1 if $hostname =~ m/^second-use-case/ } cfset=rules-for-this-match-2 Since SEC supports caching the results of pattern matching, one could also store the matching result from the first rule into cache, and then retrieve the result from cache with the 'cached' pattern type. Since this pattern type assumes that the name of the cached entry is provided in the 'pattern' field, the hostname check with a perl function has to be implemented in a context expression (the expression is evaluated after the pattern match). Here is an example which creates an entry named SYSLOG in a pattern match cache, so that all match variables created in the first rule can be retrieved in later rules. Note that the entry is created in the 'varmap' field which also sets up named match variable $+{hostname}. In further rules, the 'cached' pattern type is used for retrieving the SYSLOG entry from cache, and creating all match variables from this entry. In order to check the hostname, the $+{hostname} variable that was originally set in the first rule is passed into perlfunc patterns in the second and third rule. Also, if you need to check more than just few match variables in perlfunc pattern, it is more efficient to pass a reference to the whole cache entry into the Perl function, so that individual cached match variables can be accessed through the reference (I have added an additional fourth rule into the example which illustrates this idea): type=Single ptype=RegExp pattern=^\w+\s+[0-9]+ [0-9]+:[0-9]+:[0-9]+ (.+?) 
.+ varmap=SYSLOG; hostname=1 desc=hostname action=none continue=TakeNext type=Jump ptype=cached pattern=SYSLOG context=$+{hostname} -> ( sub { return 1 if $_[0] =~ m/^first-use-case/ } ) cfset=rules-for-this-match-1 type=Jump ptype=cached pattern=SYSLOG context=$+{hostname} -> ( sub { return 1 if $_[0] =~ m/^second-use-case/ } ) cfset=rules-for-this-match-2 type=Jump ptype=cached pattern=SYSLOG context=SYSLOG :> ( sub { return 1 if $_[0]->{"hostname"} =~ m/^third-use-case/ } ) cfset=rules-for-this-match-3 A small side-note about the first rule -- you can also create the $+{hostname} variable in the regular expression itself by rewriting it as follows: ^\w+\s+[0-9]+ [0-9]+:[0-9]+:[0-9]+ (?<hostname>.+?) .+ And in that case, you can rewrite the 'varmap=SYSLOG; hostname=1' statement in the first rule simply as 'varmap=SYSLOG'. Hopefully these examples are useful and help you to tackle the problem. kind regards, risto Hi- I'm sorry to ask what is probably very basic question, but I have struggling with this for awhile (I have perused the manual a lot and the mailing list a bit) and could use some guidance. The short version is: is there a way to take the results of a pattern match in one rule and use that value in a perlfunc in another? More verbosely, at this time I use SEC for network syslog exclusion; nothing fancy. I would like to start using Jump rules based on hostname. Hostname is derived from the incoming log line. I thought I would be clever and use a single rule to determine if there was a hostname or not, save it somewhere reusable, and then launch jump rules based on that. something like type=Single ptype=RegExp pattern=^\w+\s+[0-9]+ [0-9]+:[0-9]+:[0-9]+ (.+?) 
.+ varmap= hostname=1 desc=hostname action=assign %r $+{hostname} continue=TakeNext type=Jump ptype=perlfunc pattern=sub { return 1 if $+{hostname} =~ m/^first-use-case/ } cfset=rules-for-this-match-1 type=Jump ptype=perlfunc pattern=sub { return 1 if $+{hostname} =~ m/^second-use-case/ } cfset=rules-for-this-match-2 I know this doesn't work. I understand that '%r' is not a perl hash, and is an action list variable, and that $+{hostname} is undef inside the type=Jump rule perlfunc. I also know that %r is being set correctly, I see it in "variables -> r" if I do SIGUSR1 dump. So is it possible stash away a variable from one rule and use it in a Jump rule like above? I can work around this easily by using a single rule like below, but if I have for example 20 jump permutations, it seems quite redundant to keep recalculating the hostname for comparison. type=Jump ptype=perlfunc pattern=sub { return 0 unless (defined($_[1]) && $_[0] =~ /^\w+\s+[0-9]+ [0-9]+:[0-9]+:[0-9]+ (.+?) .+/); return 1 if $1 =~ m/^first-use-case/} cfset=all-rules Thanks in advance, -Michael _______________________________________________ Simple-evcorr-users mailing list Sim...@li...<mailto:Sim...@li...> https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users |
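To make the per-daemon layout above concrete, one of the files Michael references (150-juniper-mgd.config) might look like the following sketch. The joincfset/procallin header follows his own convention; the example rule and the JunOS message tag UI_COMMIT_PROGRESS are hypothetical illustrations, not part of his actual configuration:

```
type=Options
joincfset=150-juniper-mgd
procallin=no

# hypothetical exclusion rule: consume routine mgd commit progress messages
# so they never reach the catch-all writer (default continue is DontCont)
type=Single
ptype=RegExp
pattern= mgd\[[0-9]+\]: UI_COMMIT_PROGRESS:
desc=ignore routine commit progress chatter
action=none
```

Any line not consumed by such a rule falls through to the catch-all Single rule at the end of 050-juniper.config and gets written to the logfile.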
From: Risto V. <ris...@gm...> - 2020-10-17 22:13:31
|
hi Michael,

there are a couple of ways to address this problem.

Firstly, instead of using sec match variables, one can set up Perl's native variables for sharing data between rules. For example, the regular expression pattern of the first rule can be easily converted into a perlfunc pattern, so that the pattern would assign the hostname to the Perl global variable $hostname. This global variable can then be accessed in perlfunc patterns of other rules. Here is an example that illustrates the idea:

type=Single
ptype=perlfunc
pattern=sub { if ($_[0] =~ /^\w+\s+[0-9]+ [0-9]+:[0-9]+:[0-9]+ (.+?) /) \
        { $hostname = $1; } else { $hostname = ""; } return 1; }
desc=hostname
action=none
continue=TakeNext

type=Jump
ptype=perlfunc
pattern=sub { return 1 if $hostname =~ m/^first-use-case/ }
cfset=rules-for-this-match-1

type=Jump
ptype=perlfunc
pattern=sub { return 1 if $hostname =~ m/^second-use-case/ }
cfset=rules-for-this-match-2

Since SEC supports caching the results of pattern matching, one could also store the matching result from the first rule into the cache, and then retrieve the result from the cache with the 'cached' pattern type. Since this pattern type assumes that the name of the cached entry is provided in the 'pattern' field, the hostname check with a Perl function has to be implemented in a context expression (the expression is evaluated after the pattern match). Here is an example which creates an entry named SYSLOG in the pattern match cache, so that all match variables created in the first rule can be retrieved in later rules. Note that the entry is created in the 'varmap' field, which also sets up the named match variable $+{hostname}. In further rules, the 'cached' pattern type is used for retrieving the SYSLOG entry from the cache and creating all match variables from this entry. In order to check the hostname, the $+{hostname} variable that was originally set in the first rule is passed into the perlfunc patterns in the second and third rule. 
Also, if you need to check more than just a few match variables in a perlfunc pattern, it is more efficient to pass a reference to the whole cache entry into the Perl function, so that individual cached match variables can be accessed through the reference (I have added an additional fourth rule into the example which illustrates this idea):

type=Single
ptype=RegExp
pattern=^\w+\s+[0-9]+ [0-9]+:[0-9]+:[0-9]+ (.+?) .+
varmap=SYSLOG; hostname=1
desc=hostname
action=none
continue=TakeNext

type=Jump
ptype=cached
pattern=SYSLOG
context=$+{hostname} -> ( sub { return 1 if $_[0] =~ m/^first-use-case/ } )
cfset=rules-for-this-match-1

type=Jump
ptype=cached
pattern=SYSLOG
context=$+{hostname} -> ( sub { return 1 if $_[0] =~ m/^second-use-case/ } )
cfset=rules-for-this-match-2

type=Jump
ptype=cached
pattern=SYSLOG
context=SYSLOG :> ( sub { return 1 if $_[0]->{"hostname"} =~ m/^third-use-case/ } )
cfset=rules-for-this-match-3

A small side note about the first rule -- you can also create the $+{hostname} variable in the regular expression itself by rewriting it as follows:

^\w+\s+[0-9]+ [0-9]+:[0-9]+:[0-9]+ (?<hostname>.+?) .+

And in that case, you can rewrite the 'varmap=SYSLOG; hostname=1' statement in the first rule simply as 'varmap=SYSLOG'.

Hopefully these examples are useful and help you to tackle the problem.

kind regards,
risto

Hi- > > I'm sorry to ask what is probably very basic question, but I have > struggling with this for awhile (I have perused the manual a lot and the > mailing list a bit) and could use some guidance. > > The short version is: is there a way to take the results of a pattern > match in one rule and use that value in a perlfunc in another? > > More verbosely, at this time I use SEC for network syslog exclusion; > nothing fancy. I would like to start using Jump rules based on hostname. > Hostname is derived from the incoming log line. 
> > I thought I would be clever and use a single rule to determine if there > was a hostname or not, save it somewhere reusable, and then launch jump > rules based on that. > > something like > > type=Single > ptype=RegExp > pattern=^\w+\s+[0-9]+ [0-9]+:[0-9]+:[0-9]+ (.+?) .+ > varmap= hostname=1 > desc=hostname > action=assign %r $+{hostname} > continue=TakeNext > > type=Jump > ptype=perlfunc > pattern=sub { return 1 if $+{hostname} =~ m/^first-use-case/ } > cfset=rules-for-this-match-1 > > type=Jump > ptype=perlfunc > pattern=sub { return 1 if $+{hostname} =~ m/^second-use-case/ } > cfset=rules-for-this-match-2 > > I know this doesn't work. I understand that '%r' is not a perl hash, and > is an action list variable, and that $+{hostname} is undef inside the > type=Jump rule perlfunc. I also know that %r is being set correctly, I see > it in "variables -> r" if I do SIGUSR1 dump. > > So is it possible stash away a variable from one rule and use it in a Jump > rule like above? I can work around this easily by using a single rule like > below, but if I have for example 20 jump permutations, it seems quite > redundant to keep recalculating the hostname for comparison. > > type=Jump > ptype=perlfunc > pattern=sub { return 0 unless (defined($_[1]) && $_[0] =~ /^\w+\s+[0-9]+ > [0-9]+:[0-9]+:[0-9]+ (.+?) .+/); return 1 if $1 =~ m/^first-use-case/} > cfset=all-rules > > Thanks in advance, > -Michael > > > _______________________________________________ > Simple-evcorr-users mailing list > Sim...@li... > https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users > |
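Assembled in full, the named-capture variant that Risto's side note describes would turn the first rule into the following (a sketch composed from the pieces given in his message):

```
type=Single
ptype=RegExp
pattern=^\w+\s+[0-9]+ [0-9]+:[0-9]+:[0-9]+ (?<hostname>.+?) .+
varmap=SYSLOG
desc=hostname
action=none
continue=TakeNext
```

With the capture group named directly in the regular expression, $+{hostname} is created without an explicit 'hostname=1' mapping in the 'varmap' field.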
From: Michael H. <mic...@wi...> - 2020-10-17 19:53:28
|
Hi-

I'm sorry to ask what is probably a very basic question, but I have been struggling with this for a while (I have perused the manual a lot and the mailing list a bit) and could use some guidance.

The short version is: is there a way to take the results of a pattern match in one rule and use that value in a perlfunc in another?

More verbosely, at this time I use SEC for network syslog exclusion; nothing fancy. I would like to start using Jump rules based on hostname. The hostname is derived from the incoming log line.

I thought I would be clever and use a single rule to determine if there was a hostname or not, save it somewhere reusable, and then launch jump rules based on that. Something like:

type=Single
ptype=RegExp
pattern=^\w+\s+[0-9]+ [0-9]+:[0-9]+:[0-9]+ (.+?) .+
varmap= hostname=1
desc=hostname
action=assign %r $+{hostname}
continue=TakeNext

type=Jump
ptype=perlfunc
pattern=sub { return 1 if $+{hostname} =~ m/^first-use-case/ }
cfset=rules-for-this-match-1

type=Jump
ptype=perlfunc
pattern=sub { return 1 if $+{hostname} =~ m/^second-use-case/ }
cfset=rules-for-this-match-2

I know this doesn't work. I understand that '%r' is not a perl hash but an action list variable, and that $+{hostname} is undef inside the type=Jump rule perlfunc. I also know that %r is being set correctly; I see it in "variables -> r" if I do a SIGUSR1 dump.

So is it possible to stash away a variable from one rule and use it in a Jump rule like above? I can work around this easily by using a single rule like the one below, but if I have, for example, 20 jump permutations, it seems quite redundant to keep recalculating the hostname for comparison.

type=Jump
ptype=perlfunc
pattern=sub { return 0 unless (defined($_[1]) && $_[0] =~ /^\w+\s+[0-9]+ [0-9]+:[0-9]+:[0-9]+ (.+?) .+/); return 1 if $1 =~ m/^first-use-case/}
cfset=all-rules

Thanks in advance,
-Michael
|
From: Risto V. <ris...@gm...> - 2020-10-08 15:24:53
|
hi Agustin, if you want to reset the entire state of SEC (not just event correlation operations, but also contexts, action list variables and other data), you can use 'sigemul HUP' action. This action will emulate the reception of the HUP signal which is used to reset all internal state of SEC. However, if you want to reset only the event correlation operations for one or more rule files, you can use the following trick -- run an external script (for example, with 'shellcmd' action) that uses 'touch' utility for updating the timestamps of these rule files (for example, touch -c /etc/sec/myrules*.sec), and then sends the ABRT signal to SEC process (for example, kill -ABRT `cat /run/sec.pid`). The ABRT signal forces SEC to reset event correlation operations for all rule files with updated modification timestamps, and also forces SEC to reload rules from these rule files. kind regards, risto Hi Risto, My name is Agustín, > > Is it possible to reset all the rules when an event is received? > > Kind regards, > Agustín > _______________________________________________ > Simple-evcorr-users mailing list > Sim...@li... > https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users > |
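For the first option Risto mentions, a minimal rule sketch that performs a full state reset when a trigger event arrives might look as follows. The trigger string RESET_SEC_STATE is a hypothetical example; only the 'sigemul HUP' action comes from the message above:

```
# hypothetical trigger event; any pattern you control will do
type=Single
ptype=SubStr
pattern=RESET_SEC_STATE
desc=reset all internal SEC state on demand
action=sigemul HUP
```

When the trigger event is seen, SEC behaves as if it had received the HUP signal and resets all of its internal state (operations, contexts, and action list variables).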
From: Agustín L. R. <ag...@ho...> - 2020-10-08 07:31:48
|
Hi Risto, My name is Agustín, Is it possible to reset all the rules when an event is received? Kind regards, Agustín |
From: Risto V. <ris...@gm...> - 2020-08-06 18:21:13
|
hi Suat,

one possible solution for addressing this task is to combine the EventGroup rule with contexts. Since the EventGroup rule allows matching unordered event groups (e.g., events A, B and C can appear in any order), the purpose of the contexts is to force a specific event matching order. The example given below processes events EVENT_A, EVENT_B and EVENT_C, and for the sake of simplicity, these events are matched with SubStr patterns. Also, to simplify the cleanup procedure, the example rule sets up only one context EVCONT for event correlation, and employs the context aliases EVCONT_A and EVCONT_B for indicating that events A and B have already been observed (deleting EVCONT would automatically remove both alias names). Since three events need to be processed, the rule type has been set to EventGroup3:

type=EventGroup3
init=create EVCONT
end=delete EVCONT
slide=reset 0; delete EVCONT
ptype=SubStr
pattern=EVENT_A
context=!EVCONT_A
count=alias EVCONT EVCONT_A
ptype2=SubStr
pattern2=EVENT_B
context2=EVCONT_A && !EVCONT_B
count2=alias EVCONT EVCONT_B
ptype3=SubStr
pattern3=EVENT_C
context3=EVCONT_A && EVCONT_B
desc=sequence A, B and C observed
action=write - %s; reset 0; delete EVCONT
window=600

When the above rule starts an event correlation operation, the operation creates the context EVCONT (see the 'init' field), and when the operation ends, EVCONT is deleted (see the 'end' field). Also, the window of the operation is not sliding, and when not all events have been observed by the end of the 600 second window, the operation terminates itself with the 'reset' action and deletes the EVCONT context (see the 'slide' field). As mentioned before, the EVCONT context has two alias names -- EVCONT_A indicates that event A has already been observed, while EVCONT_B manifests the fact that event B has been seen. The above rule has three patterns ('pattern', 'pattern2' and 'pattern3' fields) and normally, a match against any of these patterns would start the event correlation operation. 
However, due to the 'context2' and 'context3' fields, events B and C will not match this rule initially, and only event A can match the rule in the beginning (because unlike the expressions in the 'context2' and 'context3' fields, the boolean expression in the 'context' field evaluates true). When event A appears, the event correlation operation is started, and after the event has been processed, the operation also creates the EVCONT_A alias name (see the 'count' field). Note that the creation of this alias name means that event A no longer matches this rule (since the boolean expression provided by the 'context' field now evaluates false), but event B now matches (since the boolean expression in the 'context2' field now evaluates true). In other words, the rule is now expecting to see event B. Similarly, after event B has been observed, the creation of the alias name EVCONT_B makes the rule expect event C (see the 'count2' and 'context3' fields). Finally, when event C appears, the string "sequence A, B and C observed" is written to standard output, and the event correlation operation terminates itself with the 'reset' action and deletes the EVCONT context.

The above solution has one drawback -- when several instances of the same event can appear in the sequence, some sequences might pass unnoticed. For example, consider the following scenario of four events with timestamps:

12:00:00 event A
12:00:30 event A
12:05:00 event B
12:10:05 event C

The above solution would start an event correlation operation at 12:00:00 which would fail to see the expected sequence by 12:10:00 and thus terminate. On the other hand, there is a valid sequence in the event stream that starts from 12:00:30. If you want to handle advanced cases with repeated events in sequences, a more complex solution is needed.

kind regards,
risto

Kontakt Suat Toksöz (<st...@gm...>) kirjutas kuupäeval N, 6. august 2020 kell 10:19: > Thanks for the answer. 
I am looking for window based detection, simple it > is going to be something like SIEM log correlation. Within 10 min event A,B > and C must occur and this three event must be in order (first A, then B > last C) > > Thanks > Suat Toksoz > > On Wed, Aug 5, 2020 at 11:58 PM Risto Vaarandi <ris...@gm...> > wrote: > >> hi Suat, >> >> are you interested in some rule examples about detecting event sequences, >> or are you investigating opportunities for creating a new rule type for >> matching sequences of events? Many event sequences can be handled by >> combining existing rules and contexts, so a new rule type might not be >> needed for the task that you have. To clarify the task a little bit, should >> the solution apply a sliding window based detection if the entire sequence >> has not been observed within 10 minutes, or is it not important and >> incomplete sequence after 10 minutes (say, A and B are present but C is >> missing) terminates the event correlation scheme? >> >> kind regards, >> risto >> >> Kontakt Suat Toksöz (<st...@gm...>) kirjutas kuupäeval K, 5. >> august 2020 kell 15:52: >> >>> hi all, >>> >>> is it possible to have multiple (3,4..) correlation rule on SEC? >>> >>> For example, If event *A* happens then event *B* happens then event *C* >>> happens and all events happen within 10 min. >>> >>> -- >>> >>> Best regards, >>> >>> *Suat Toksoz* >>> _______________________________________________ >>> Simple-evcorr-users mailing list >>> Sim...@li... >>> https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users >>> >> > > -- > > Best regards, > > *Suat Toksoz* > |
From: Suat T. <st...@gm...> - 2020-08-06 07:19:24
|
Thanks for the answer. I am looking for window-based detection; simply put, it is going to be something like SIEM log correlation. Within 10 min, events A, B and C must occur, and these three events must be in order (first A, then B, last C).

Thanks
Suat Toksoz

On Wed, Aug 5, 2020 at 11:58 PM Risto Vaarandi <ris...@gm...> wrote: > hi Suat, > > are you interested in some rule examples about detecting event sequences, > or are you investigating opportunities for creating a new rule type for > matching sequences of events? Many event sequences can be handled by > combining existing rules and contexts, so a new rule type might not be > needed for the task that you have. To clarify the task a little bit, should > the solution apply a sliding window based detection if the entire sequence > has not been observed within 10 minutes, or is it not important and > incomplete sequence after 10 minutes (say, A and B are present but C is > missing) terminates the event correlation scheme? > > kind regards, > risto > > Kontakt Suat Toksöz (<st...@gm...>) kirjutas kuupäeval K, 5. august > 2020 kell 15:52: > >> hi all, >> >> is it possible to have multiple (3,4..) correlation rule on SEC? >> >> For example, If event *A* happens then event *B* happens then event *C* >> happens and all events happen within 10 min. >> >> -- >> >> Best regards, >> >> *Suat Toksoz* >> _______________________________________________ >> Simple-evcorr-users mailing list >> Sim...@li... >> https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users >> > -- Best regards, *Suat Toksoz* |
From: Risto V. <ris...@gm...> - 2020-08-05 20:58:51
|
hi Suat, are you interested in some rule examples about detecting event sequences, or are you investigating opportunities for creating a new rule type for matching sequences of events? Many event sequences can be handled by combining existing rules and contexts, so a new rule type might not be needed for the task that you have. To clarify the task a little bit, should the solution apply a sliding window based detection if the entire sequence has not been observed within 10 minutes, or is it not important and incomplete sequence after 10 minutes (say, A and B are present but C is missing) terminates the event correlation scheme? kind regards, risto Kontakt Suat Toksöz (<st...@gm...>) kirjutas kuupäeval K, 5. august 2020 kell 15:52: > hi all, > > is it possible to have multiple (3,4..) correlation rule on SEC? > > For example, If event *A* happens then event *B* happens then event *C* > happens and all events happen within 10 min. > > -- > > Best regards, > > *Suat Toksoz* > _______________________________________________ > Simple-evcorr-users mailing list > Sim...@li... > https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users > |
From: Suat T. <st...@gm...> - 2020-08-05 12:52:18
|
hi all,

is it possible to have a multi-event (3, 4, ...) correlation rule in SEC?

For example: if event *A* happens, then event *B* happens, then event *C* happens, and all events happen within 10 min.

--

Best regards,

*Suat Toksoz*
|
From: Risto V. <ris...@gm...> - 2020-05-02 13:40:13
|
hi all,

SEC version 2.8.3 has been released, and here is the change log for the new version:

* added support for collecting rule performance data, and the --ruleperf and --noruleperf command line options.
* improved dump file generation in JSON format (some numeric fields that were reported as JSON strings are now reported as JSON numbers).

The new version can be downloaded from the SEC home page (here is the link for downloading it directly: https://github.com/simple-evcorr/sec/releases/download/2.8.3/sec-2.8.3.tar.gz ).

Version 2.8.3 introduces the --ruleperf command line option which enables performance data (CPU time) collection for rules, with the performance data being reported in the dump file. The CPU time reported for a rule reflects not only the cost of matching the rule against input events, but the full processing cost. For example, if there is a frequently matching Single rule, its CPU time also includes the cost of executing the rule's action list.

Thanks to John Rouillard for suggesting the rule performance profiling feature!

kind regards,
risto
|
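As a usage sketch, rule profiling could be enabled at startup and inspected via the dump file. The rule file, input file, and pid file paths below are placeholder examples, not prescriptions:

```shell
# start SEC with rule performance profiling enabled (paths are examples)
sec --conf=/etc/sec/rules.sec --input=/var/log/messages \
    --pid=/run/sec.pid --ruleperf

# later, ask SEC to produce its dump file, which now includes
# per-rule CPU time figures
kill -USR1 "$(cat /run/sec.pid)"
```

Since dump generation is triggered by the same USR1 signal as before, no change to existing operational procedures is needed beyond adding the --ruleperf flag.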
From: Risto V. <ris...@gm...> - 2020-04-08 12:54:18
|
hi Richard, it is good to hear the rule example is useful for you. Just one small note -- the execution of 'lcall' action takes place in SEC process which is single threaded, and custom perl code should not contain blocking function calls (or anything that would run for significant amount of time). Otherwise, the custom code would block the SEC process from doing anything else. This note is also relevant for my previous example, since perl 'open()' function will block if it is invoked for a named pipe and the pipe does not have a writer. For avoiding such situations, make sure that the file pattern in the code will match regular files only, or check if the file is a regular file before attempting to open it. Here is an example modification to 'lcall' code from my previous post: @files = grep { -f $_ } glob("*.log"); This code applies perl grep() function to original list of files, in order to filter out files which are not regular files. kind regards, risto Thank you, Risto, we could try to integrate lcall with calendar into our > setup. SELinux was just theoretical example, we have no problem with that > now, I was just seeking for bullet-proof solution. > > Richard > > st 8. 4. 2020 o 12:05 Risto Vaarandi <ris...@gm...> > napísal(a): > >> hi Richard, >> >> if you want to find input files which can not be opened because of >> permission issues, and want to conduct all checks specifically from SEC >> process without forking anything, I would recommend to set up an 'lcall' >> action that runs all checks from a Perl function. Since this function is >> executed within SEC process, it will have the same permissions for >> accessing the file as SEC itself. If you return one or more values from the >> function, 'lcall' will set the output variable by joining return values >> into single string, so that newline is acting as a separator between return >> values. Also, if no values are returned from the function, output variable >> will be set to perl 'undef'. 
After collecting the output from the function >> in this way, you can provide the output variable to 'event' action that >> creates synthetic event from its content. Note that if multi-line string is >> submitted to 'event' action, several synthetic events are generated (one >> for each line). And now that you have synthetic events about files that are >> not accessible, you can process them further in any way you like (e.g., an >> e-mail could be sent to admin about input file with wrong permissions). >> >> Here is an example rule that executes the input file check once a minute >> from Calendar rule: >> >> type=Calendar >> time=* * * * * >> desc=check files >> action=lcall %events -> ( sub { my(@files, @list, $file); \ >> @files = glob("*.log"); foreach $file (@files) { \ >> if (!open(FILE, $file)) { push @list, "$file is not accessible"; >> } else { close(FILE); } \ >> } return @list; } ); \ >> if %events ( event %events ) >> >> The action list of the Calendar rule consists of two actions -- the >> 'lcall' action and 'if' action, with 'if' action executing 'event' action >> conditionally. The 'lcall' action tries to open given input files and >> verify that each open succeeds. For all files that it could not open, >> 'lcall' returns a list of messages "<file> is not accessible", and the list >> is assigned to %events variable as a multi-line string (each line in the >> multi-line string represents one message for some file). If the list is >> empty, %events variable is set to perl 'undef' value. After 'lcall' action, >> the 'if' action is executed which verifies that %events is true in perl >> boolean context (if %events has been previously set to 'undef', that check >> will fail). If the check succeeds (in other words, we have some messages to >> report), the 'event' action will generate synthetic events from messages. >> >> That is one way how to check input files without forking anything from >> SEC. 
However, if the root cause of the problem is related to SELinux, it is >> probably much better to adjust SELinux configuration (perhaps by changing >> file contexts), so that the problem would not appear. >> >> kind regards, >> risto >> >> >> Kontakt Richard Ostrochovský (<ric...@gm...>) kirjutas >> kuupäeval E, 6. aprill 2020 kell 17:21: >> >>> Hello friends, >>> >>> I am thinking about how to monitor not only events from log files, but >>> also those files existence and accessibility (for user running SEC) - in >>> cases, where this is considered to be a problem. >>> >>> As I saw in the past, these were logged into SEC log file, but higher >>> debug level was required, so it is not suitable for production. >>> >>> There are handful of other options how to monitor it "externally", e.g. >>> via some script, or other monitoring agent, but this is not ultimate >>> solution, as e.g. SELinux may be configured, allowing external script or >>> agent (running under the same user as SEC) to see nad open file for >>> reading, but not SEC (theoretical caveat of such solution). >>> >>> So, has somebody some "best practise" for reliable and production-ready >>> way, how to "self-monitor" log files being accessed by SEC, if: >>> >>> - they exist >>> - they are accessible by SEC >>> >>> ? >>> >>> Thanks in advance. >>> >>> Richard >>> _______________________________________________ >>> Simple-evcorr-users mailing list >>> Sim...@li... >>> https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users >>> >> |
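Putting the grep() filter into the earlier Calendar rule gives the following variant. This is simply Risto's rule from the quoted message with the non-regular files filtered out before any open() attempt, so a named pipe among the matching files can no longer block the SEC process:

```
type=Calendar
time=* * * * *
desc=check files
action=lcall %events -> ( sub { my(@files, @list, $file); \
       @files = grep { -f $_ } glob("*.log"); \
       foreach $file (@files) { \
         if (!open(FILE, $file)) { push @list, "$file is not accessible"; } \
         else { close(FILE); } \
       } return @list; } ); \
       if %events ( event %events )
```

The only change from the original is the grep { -f $_ } wrapper around glob(), which keeps directories, named pipes, and other special files out of the list before the accessibility checks run.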
From: Richard O. <ric...@gm...> - 2020-04-08 11:49:32
|
Thank you, Risto, we could try to integrate 'lcall' with a Calendar rule
into our setup. SELinux was just a theoretical example; we have no problem
with that now, I was just looking for a bullet-proof solution.

Richard

On Wed, 8 Apr 2020 at 12:05, Risto Vaarandi <ris...@gm...> wrote:

> hi Richard,
>
> if you want to find input files which cannot be opened because of
> permission issues, and want to conduct all checks specifically from the
> SEC process without forking anything, I would recommend setting up an
> 'lcall' action that runs all checks from a Perl function. Since this
> function is executed within the SEC process, it will have the same
> permissions for accessing the file as SEC itself.
>
> [...]
>
> That is one way to check input files without forking anything from SEC.
> However, if the root cause of the problem is related to SELinux, it is
> probably much better to adjust the SELinux configuration (perhaps by
> changing file contexts), so that the problem would not appear.
>
> kind regards,
> risto
>
|
From: Risto V. <ris...@gm...> - 2020-04-08 10:05:21
|
hi Richard,

if you want to find input files which cannot be opened because of
permission issues, and want to conduct all checks specifically from the
SEC process without forking anything, I would recommend setting up an
'lcall' action that runs all checks from a Perl function. Since this
function is executed within the SEC process, it will have the same
permissions for accessing the file as SEC itself. If you return one or
more values from the function, 'lcall' will set the output variable by
joining the return values into a single string, with newline acting as a
separator between return values. Also, if no values are returned from the
function, the output variable will be set to the perl 'undef' value.
After collecting the output from the function in this way, you can
provide the output variable to the 'event' action that creates a
synthetic event from its content. Note that if a multi-line string is
submitted to the 'event' action, several synthetic events are generated
(one for each line). And now that you have synthetic events about files
that are not accessible, you can process them further in any way you like
(e.g., an e-mail could be sent to the admin about an input file with
wrong permissions).

Here is an example rule that executes the input file check once a minute
from a Calendar rule:

type=Calendar
time=* * * * *
desc=check files
action=lcall %events -> ( sub { my(@files, @list, $file); \
       @files = glob("*.log"); foreach $file (@files) { \
       if (!open(FILE, $file)) { push @list, "$file is not accessible"; } \
       else { close(FILE); } \
       } return @list; } ); \
       if %events ( event %events )

The action list of the Calendar rule consists of two actions -- the
'lcall' action and the 'if' action, with the 'if' action executing the
'event' action conditionally. The 'lcall' action tries to open the given
input files and verifies that each open succeeds. For all files that it
could not open, 'lcall' returns a list of messages "<file> is not
accessible", and the list is assigned to the %events variable as a
multi-line string (each line in the multi-line string represents one
message for some file). If the list is empty, the %events variable is set
to the perl 'undef' value. After the 'lcall' action, the 'if' action is
executed, which verifies that %events is true in perl boolean context (if
%events has been previously set to 'undef', that check will fail). If the
check succeeds (in other words, we have some messages to report), the
'event' action will generate synthetic events from the messages.

That is one way to check input files without forking anything from SEC.
However, if the root cause of the problem is related to SELinux, it is
probably much better to adjust the SELinux configuration (perhaps by
changing file contexts), so that the problem would not appear.

kind regards,
risto


Richard Ostrochovský (<ric...@gm...>) wrote on Mon, 6 April 2020
at 17:21:

> Hello friends,
>
> I am thinking about how to monitor not only events from log files, but
> also those files' existence and accessibility (for the user running
> SEC) - in cases where this is considered to be a problem.
>
> As I saw in the past, these were logged into the SEC log file, but a
> higher debug level was required, so it is not suitable for production.
>
> There are a handful of other options for monitoring it "externally",
> e.g. via some script, or another monitoring agent, but this is not an
> ultimate solution, as e.g. SELinux may be configured to allow an
> external script or agent (running under the same user as SEC) to see
> and open a file for reading, but not SEC (a theoretical caveat of such
> a solution).
>
> So, does somebody have some "best practice" for a reliable and
> production-ready way to "self-monitor" the log files being accessed by
> SEC, i.e. whether:
>
> - they exist
> - they are accessible by SEC
>
> ?
>
> Thanks in advance.
>
> Richard
> _______________________________________________
> Simple-evcorr-users mailing list
> Sim...@li...
> https://lists.sourceforge.net/lists/listinfo/simple-evcorr-users
>
|
From: Agustín L. R. <ag...@ho...> - 2020-04-07 15:45:28
|
Hi Risto and John,

I think the context solution with duration is the best solution.
Thank you very much for your contributions!

Kind regards

________________________________

Hi Risto:

In message <CAG...@ma...>,
Risto Vaarandi writes:
>hi Agustin,
>> Hi Risto,
>>
>> Thank you very much for your help.
>> I have another question related to this problem.
>>
>> Suppose we have the following input in less than 60 seconds:
>> EVENT_TYPE_A 1.1.1.1 <--- the beginning of input for SEC
>> EVENT_TYPE_A 2.2.2.2
>> EVENT_TYPE_B 1.1.1.1
>> EVENT_TYPE_B 2.2.2.2
>> EVENT_TYPE_C 1.1.1.1
>> FINISH <--- (FINISH is also an event) the end of input for SEC
>>
>> We have the following rules:
>>
>> Rule 1:
>> type=EventGroup2
>> ptype=RegExp
>> pattern=EVENT_TYPE_A ([\d.]+)
>> continue=TakeNext
>> ptype2=RegExp
>> pattern2=EVENT_TYPE_B ([\d.]+)
>> continue2=TakeNext
>> desc=Events A and B observed for IP $1 within 60 seconds
>> action=logonly Events A and B observed for IP $1
>> window=60
>>
>> Rule 2:
>> type=EventGroup3
>> ptype=RegExp
>> pattern=EVENT_TYPE_A ([\d.]+)
>> continue=TakeNext
>> ptype2=RegExp
>> pattern2=EVENT_TYPE_B ([\d.]+)
>> continue2=TakeNext
>> ptype3=RegExp
>> pattern3=EVENT_TYPE_C ([\d.]+)
>> continue3=TakeNext
>> desc=Events A, B and C observed for IP $1 within 60 seconds
>> action=logonly Events A , B and C observed for IP $1
>> window=60
>>
>> We get the following output:
>> Events A and B observed for IP 1.1.1.1
>> Events A and B observed for IP 2.2.2.2
>> Events A , B and C observed for IP 1.1.1.1
>>
>> I am expecting the following output:
>> Events A and B observed for IP 2.2.2.2
>> Events A , B and C observed for IP 1.1.1.1
>>
>> The idea is to reduce the output.
>>
>
>One approach that is relatively easy to implement is the following --
>when the first message for an IP address is generated, a context is
>created for this IP address which will prevent further matches by rules
>for this IP. Also, in all rule definitions, pattern* fields would be
>complemented with context* fields which verify that the context for the
>current IP address does not exist. For example, in the following ruleset
>repeated messages for the same IP address are suppressed for 300 seconds
>(that's the lifetime of SUPPRESS_<ip> contexts):
>
>type=EventGroup2
>ptype=RegExp
>pattern=EVENT_TYPE_A ([\d.]+)
>context=!SUPPRESS_$1
>continue=TakeNext
>ptype2=RegExp
>pattern2=EVENT_TYPE_B ([\d.]+)
>context2=!SUPPRESS_$1
>continue2=TakeNext
>desc=Events A and B observed for IP $1 within 60 seconds
>action=write - %s; create SUPPRESS_$1 300
>window=60
>
>type=EventGroup3
>ptype=RegExp
>pattern=EVENT_TYPE_A ([\d.]+)
>context=!SUPPRESS_$1
>continue=TakeNext
>ptype2=RegExp
>pattern2=EVENT_TYPE_B ([\d.]+)
>context2=!SUPPRESS_$1
>continue2=TakeNext
>ptype3=RegExp
>pattern3=EVENT_TYPE_C ([\d.]+)
>context3=!SUPPRESS_$1
>continue3=TakeNext
>desc=Events A, B and C observed for IP $1 within 60 seconds
>action=write - %s; create SUPPRESS_$1 300
>window=60
>
>With these rules, the following output events would be produced for your
>example input:
>Events A and B observed for IP 1.1.1.1 within 60 seconds
>Events A and B observed for IP 2.2.2.2 within 60 seconds
>
>However, if you would like to suppress the output message that is
>generated on the 3rd input event and rather generate an output message
>"Events A , B and C observed for IP 1.1.1.1" on the 5th input event, it
>is not possible to achieve that goal with EventGroup (or any other)
>rules, since after seeing the 3rd event, it is not possible to know in
>advance what events will appear in the future. In other words, SEC rules
>execute actions immediately when a first matching set of events has been
>seen, and it is neither possible to reprocess past events nor to
>postpone actions in the hope of a better future match (which might never
>occur).

I agree SEC has no built-in delay mechanism for actions, but you can
create a context that expires in the future and runs an action on
expiration.

How about creating a delayed reporting context: rather than writing

  Events A and B observed for IP 1.1.1.1

in EventGroup2, use an action like:

  create report_$1_event_A_and_B 60 (write - %s)

and add the action

  destroy report_$1_event_A_and_B

to EventGroup3. This should prevent the reporting of A and B when A, B
and C are seen. It will delay the reporting of A and B until the window
to find A, B and C is done. This may not be desirable, but the laws of
physics won't permit another option.

Is there a variable that records how much time is left in the window? If
so, you can have the report_$1_event_A_and_B context expire in that many
seconds rather than in 60 seconds. (I assume the window expiration times
of both EventGroups will be the same since they are triggered by the same
event.)

This is a bit of a hack but I think it will work to delay the write
action.

Thoughts?

Have a great week.
--
-- rouilj
John Rouillard
===========================================================================
My employers don't acknowledge my existence much less my opinions.
|
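The suppression idea discussed above (emit the first message for an IP address, create a SUPPRESS_&lt;ip&gt; context with a lifetime, and drop later matches while the context exists) can be sketched outside SEC. The following Python snippet is only an illustration of the mechanism under our own naming, not SEC's implementation:

```python
import time

class ContextStore:
    """Minimal sketch of SEC-style contexts with finite lifetimes."""
    def __init__(self):
        self._expiry = {}

    def create(self, name, lifetime):
        # context exists for `lifetime` seconds from now
        self._expiry[name] = time.time() + lifetime

    def exists(self, name):
        exp = self._expiry.get(name)
        if exp is not None and exp > time.time():
            return True
        self._expiry.pop(name, None)  # drop expired contexts
        return False

def report(store, ip, message, lifetime=300):
    """Emit the message only if no SUPPRESS_<ip> context is active,
    then suppress further reports for this IP for `lifetime` seconds
    (the role played by 'context=!SUPPRESS_$1' and
    'create SUPPRESS_$1 300' in the ruleset above)."""
    if store.exists("SUPPRESS_%s" % ip):
        return None
    store.create("SUPPRESS_%s" % ip, lifetime)
    return message
```

With this model, a second report for the same IP within the lifetime returns nothing, while reports for other IP addresses pass through, matching the output Risto describes.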
From: Risto V. <ris...@gm...> - 2020-04-06 22:22:08
|
hi John,

> Hi Risto:
>
> ...
>
> >However, if you would like to suppress the output message that is
> >generated on the 3rd input event and rather generate an output message
> >"Events A , B and C observed for IP 1.1.1.1" on the 5th input event,
> >it is not possible to achieve that goal with EventGroup (or any other)
> >rules, since after seeing the 3rd event, it is not possible to know in
> >advance what events will appear in the future. In other words, SEC
> >rules execute actions immediately when a first matching set of events
> >has been seen, and it is neither possible to reprocess past events nor
> >to postpone actions in the hope of a better future match (which might
> >never occur).
>
> I agree SEC has no built-in delay mechanism for actions, but you can
> create a context that expires in the future and runs an action on
> expiration.
>
> How about creating a delayed reporting context: rather than writing
>
>   Events A and B observed for IP 1.1.1.1
>
> in EventGroup2, use an action like:
>
>   create report_$1_event_A_and_B 60 (write - %s)
>
> and add the action
>
>   destroy report_$1_event_A_and_B
>
> to EventGroup3.

That's another nice way of handling the problem. In my previous post, I
suggested storing all events for the same IP in the same context and then
using some condition to select the most relevant event, but this approach
looks more lightweight and easy to use.

> This should prevent the reporting of A and B when A, B and C are seen.
> It will delay the reporting of A and B until the window to find A, B
> and C is done. This may not be desirable, but the laws of physics won't
> permit another option.
>
> Is there a variable that records how much time is left in the window?
> If so, you can have the report_$1_event_A_and_B context expire in that
> many seconds rather than in 60 seconds. (I assume the window expiration
> times of both EventGroups will be the same since they are triggered by
> the same event.)
>
> This is a bit of a hack but I think it will work to delay the write
> action.
>
> Thoughts?

If one wants to query the number of remaining seconds in the event
correlation window, there is no such variable, but there is the 'getwpos'
action for finding the beginning of the event correlation window of any
operation (in seconds since the Epoch). For example, if one includes the
following action in the rule definition

getwpos %window 0

and this action gets executed by the event correlation operation, then
the %window variable is set to the time when the calling operation
started. Since the %u action list variable holds a timestamp for the
current moment (in seconds since the Epoch), one can calculate the
remaining seconds in the 60 second window as follows:

getwpos %window 0; lcall %size %u %window -> ( sub { 60 - ($_[0] - $_[1]) } )

The above function will find the difference between the %u and %window
variables, and subtract the result from 60. If the above action list gets
invoked during the last second of the operation's lifetime, the function
will return 0, and using this value directly for a context lifetime will
create a context with an infinite lifetime. To handle this special case,
one could modify the above function to always return a positive value
(e.g., 1 instead of 0).

kind regards,
risto

> Have a great week.
> --
> -- rouilj
> John Rouillard
> ===========================================================================
> My employers don't acknowledge my existence much less my opinions.
>
|
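The arithmetic in Risto's 'lcall' expression above (60 minus the time elapsed since the window started, with the zero result replaced by a positive value so the context never gets an infinite lifetime) can be written as a small function. A sketch in Python for illustration, with the window length as a parameter (the function name is ours):

```python
def remaining_lifetime(now, window_start, window=60):
    """Seconds left in an event correlation window that started at
    window_start, as computed by `60 - (%u - %window)` above; results
    of 0 or less are clamped to 1 so that a context created with this
    lifetime never becomes infinite."""
    left = window - (now - window_start)
    return left if left > 0 else 1
```

Using the clamped value as the lifetime of the report_$1_event_A_and_B context would make John's delayed-reporting context expire together with the EventGroup window rather than a fixed 60 seconds later.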