Re: [Simple-evcorr-users] Counting several messages but..
From: Norberto A. <nal...@re...> - 2005-06-30 17:32:45
When everything fails, read the manual... From the HP-UX 11i syslogd
man page:

  -r  Don't suppress duplicate messages.

The FreeBSD option would be even better:

  -c  Disable the compression of repeated instances of the same line
      into a single line of the form ``last message repeated N times''
      when the output is a pipe to another program. If specified
      twice, disable this compression in all cases.

Many thanks for your help. I will test the rules following your advice.

Regards,
Norberto

---------- Original Message -----------
From: "John P. Rouillard" <ro...@cs...>
To: "Norberto Altalef" <nal...@re...>
Cc: sim...@li...
Sent: Thu, 30 Jun 2005 12:11:08 -0400
Subject: Re: [Simple-evcorr-users] Counting several messages but..

> In message <200...@re...>,
> "Norberto Altalef" writes:
> >> In message <200...@re...>,
> >> "Norberto Altalef" writes:
> >>
> >> >Hi John, Risto. Thank you both for the answers.
> >> >
> >> >Actually, the "repeated n times" messages are generated by a
> >> >standard syslog daemon on an HP-UX 11.0 server. I use syslog-ng on
> >> >the central syslog server. In addition, the same "repeated n times"
> >> >message is shown after any other message. I need a way to rebuild
> >> >the original msg, or repeat the original msg n times, before
> >> >counting... not easy.
> >>
> >> Just to clarify: you are seeing the "last message repeated..." on
> >> your syslog master running syslog-ng for hosts that run HP-UX? If
> >> so, it sounds like its syslog is as brain-damaged as the Ultrix and
> >> old AIX ones. Sigh.
> >>
> >Exactly. The "last message repeated..." lines are in the HP-UX syslog
> >and in the central syslog-ng. I may check if there is a patch or a
> >new version for the HP-UX syslog daemon.
> >
> >You mean that the "correct" behavior would be to use this "last
> >message..." compression in the local syslog, but send the complete
> >message stream to the remote syslog server?
>
> In my opinion, yes. The whole event stream should be sent to the
> central syslog server. The client should not send "last message
> repeated N times" messages to the central server. The reason for the
> duplicate suppression, as Risto mentioned earlier, was to save disk
> space in the event of a message storm. If you are not saving to disk,
> but instead sending your event stream to another program that can do
> its own duplicate suppression (if it chooses), the syslog daemon
> should provide the raw, unprocessed events. Imagine the fun if you
> chain multiple syslogs together, so it looks like:
>
>   client -> workgroup master -> site master -> company master
>
> which is actually how one site had its event logging framework set
> up. Having all the intermediate syslog daemons doing duplicate
> suppression made analysis at the company master server quite fun 8-).
>
> >> To get around it, you can make a simple modification to Risto's
> >> example, keeping a lastline for each host. Maybe something like:
> >>
> >>   type=Single
> >>   ptype=RegExp
> >>   context=!_INTERNAL_EVENT
> >>   pattern=([^ ]+) last message repeated (\d+) times
> >>   desc='Decompress' $2 repeated messages compressed by syslogd
> >>   action=call %events %decompr_msg_func %lastline_$1 $2; event %events
> >>
> >>   # if the current syslog line is NOT a "last message repeated N
> >>   # times" message, write it to the variable %lastline, and pass
> >>   # it to the following rules
> >>
> >>   type=Single
> >>   ptype=RegExp
> >>   context=!_INTERNAL_EVENT && !SEC_INTERNAL_EVENT
> >>   pattern=^... +[0-9]+ [0-9:]+ ([^ ]*) *.*
> >>   continue=TakeNext
> >>   desc=Keep the last syslog line in memory
> >>   action=assign %lastline_$1 $0
> >>
> >> where $1 in both cases matches the host name in the syslog entry.
> >> This keeps a variable %lastline_<hostname> that is updated each
> >> time. I am not sure whether %lastline_$1 will actually create the
> >> appropriate variable name (say %lastline_hostname); you may have
> >> to use a context to store the last line (add lastline_$1 $0) and
> >> then assign it to a variable using "empty" (empty lastline_$1
> >> %lastline) before calling:
> >>
> >>   call %events %decompr_msg_func %lastline $2; event %events
> >>
> >Sorry if my question is so basic (I'm beginning to play with SEC).
>
> Not a problem, we were all where you are now at some point.
>
> >After these rules, how can I raise an alarm if the number of messages
> >exceeds a threshold? Using a normal SingleWithThreshold?
>
> Correct, use a normal SingleWithThreshold. All these rules are doing
> is trying to reconstruct the event stream as though there was no
> duplicate suppression going on within syslog.
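>
> An untested sketch of what such a rule might look like; the "error
> of interest" pattern, the window, and the threshold are placeholders
> you would tune for your own messages:
>
>   # hypothetical: alarm when a host logs the error of interest
>   # 10 or more times within 60 seconds
>   type=SingleWithThreshold
>   ptype=RegExp
>   pattern=^... +[0-9]+ [0-9:]+ ([^ ]+) .*error of interest
>   desc=Error threshold exceeded on $1
>   action=write - $1 logged the error 10 times within 60 seconds
>   window=60
>   thresh=10
>
> Because $1 appears in desc, SEC keeps a separate counter for each
> host.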
>
> You may have to experiment a bit to get the rules right. I would
> suggest setting up an experimental SEC instance to test your rules.
>
> Copy and paste the syslog lines you want to test with into one or
> more files (file1, file2, file3... fileN). Set up the SEC rules and
> run sec against a file in the local directory (don't use the real
> syslog messages file at this point), say:
>
>   sec -conf=MyTestRules -input=MyTestInput
>
> Then cat each of your files of syslog messages, appending them to
> the MyTestInput file. You can even write a shell script with sleep
> commands etc. to simulate the arrival of data over time, or you can
> just type the "cat file1 >> MyTestInput" commands manually for each
> fileN.
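>
> For example, a throwaway feeder script along these lines (the file
> names and the five-second sleep are just placeholders):
>
>   #!/bin/sh
>   # Append captured syslog batches to the file SEC is tailing,
>   # pausing between batches to simulate events arriving over time.
>   # (Create MyTestInput before starting sec on it.)
>   for f in file1 file2 file3
>   do
>     cat "$f" >> MyTestInput
>     sleep 5
>   done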
>
> Also, to test patterns if your rules aren't running quite right, I
> use perl directly on the command line with:
>
>   perl -ne 'print "Passed: $_" if /pattern to test/;' file1
>
> where "pattern to test" is replaced by one of the patterns in the
> SEC rule file.
>
> -- rouilj
> John Rouillard
> ===========================================================================
> My employers don't acknowledge my existence much less my opinions.
------- End of Original Message -------