From: 梁健 <lj2...@ne...> - 2001-09-18 06:13:51

confirm 463781

===================================================
http://sms.163.com  NetEase SMS - straight from the heart
http://love.163.com  Take her hand and share life's joys! NEW!
http://alumni.163.com  Blue skies, yellow leaves - campus memories as ever
From: Leigh P. <Lei...@in...> - 2000-04-19 00:20:57
Guys,

Well, I'm still waiting for some funds so I can acquire a couple of test machines. However, here are a few initial observations / comments prior to me starting to play with the source.

1) Streaming capability

One of the major problems I found with the Solaris BSM audit capability was the lack of targeted auditing. Sure, you could say:

  * only audit lo, ad, fc, or
  * audit lo, ad in general, fc for these particular users, or even
  * create your own class 'zz' which includes a subset of many Solaris default classes.

However, you could not do something like: "tell me all file reads that occur in /data/secretstuff".

Now, I was prepared to accept the processor overhead of a global (non-root) file-access event being enabled, but what I wasn't prepared to accept was the HUGE volume of audit records written to disk, which require significant post-processing to draw out the particular events I am interested in (i.e. /data/secretstuff). On a decent system, with a fair number of users, the volume of file access events is amazing, particularly since most Unixes search the library path for libraries on each executable load. So every time someone runs 'awk', for example, the OS scans each member of the library path for something like 'libawk.1' (a poor example - I don't know if awk even accesses a library, but you get the idea):

  file access: /lib/libawk.1 (fail)
  file access: /usr/lib/libawk.1 (fail)
  file access: /usr/X11/lib/libawk.1 (fail)
  file access: /usr/local/lib/libawk.1 (success!)

So that's four file accesses just for a library - plus the one to access the executable, plus the one to access the file that awk acts on: six file accesses for one command. It quickly adds up.

What I did while at my previous employer was to put in a modification to auditd to allow a streaming capability - i.e. rather than writing to /etc/security/audit/<hostname>/files/xxxx, it would pipe the records out to stdout. As such, you would have to run:

  auditd -stream | praudit -l | followonprocessing.pl >somefile

What this allowed us to do was something like:

  auditd -stream | praudit -l | grep "/data/secretstuff/" >file

(pseudo-code command). This allowed us to filter out the events we were not interested in BEFORE they made it onto disk. Not an ideal solution - a better one would be to have BSM capable of turning audit on for specific areas of the file system; however, that is beyond the scope of linux-bsm for the moment.

A better solution than the 'auditd -stream' idea above is a direct write to /dev/audit. This is the mechanism that AIX uses by default, and it works very well. A follow-on process can then effectively 'tail -f' /dev/audit, grab out the events of interest, and go from there. (A rough sketch of such a follow-on filter appears at the end of this message.)

Summary: Suggest incorporating a streaming function to /dev/audit for raw audit events.

2) Enforcing appropriate delimiters

Solaris BSM praudit had a REALLY annoying 'feature': it would use the comma delimiter out of context. Normally, the comma is used to delimit elements in the audit stream. As such, you'd get an event something like this (can't remember the exact syntax):

  99,header,sysinfo(2),99,99,17 Jan 2000 +12345, ....

Therefore you could count on the date being the 4th element after the 'header' tag. However, when printing file open events, praudit throws in a comma to tell you what sort of open event we are talking about, which throws off the field positions for any later processing, e.g.:

  99,header,open(2),read,99,99,17 Jan 2000 13:13:01 +12345
  99,header,open(2),read,write,exec,99,99,17 Jan 2000 13:13:02 +12345

This really sucks from a processing point of view; the parsing sketch just below shows the sort of special-casing it forces on you.
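Purely as an illustration, a minimal parsing sketch in Perl. The record layout is only approximated from the example records above, and the list of mode words (read/write/exec/creat/trunc) is a guess rather than the real praudit vocabulary:

  #!/usr/bin/perl
  # Sketch only: field offsets approximated from the example records;
  # the mode-word list below is assumed, not taken from praudit itself.
  use strict;
  use warnings;

  while (my $line = <STDIN>) {
      chomp $line;
      my @f = split /,/, $line;
      my $syscall = $f[2];
      my @modes;
      if ($syscall =~ /^open/) {
          # open(2) records carry one or more extra mode words between the
          # syscall name and the rest of the record, so skim them off before
          # trusting any positional offsets.
          while (@f > 3 && $f[3] =~ /^(?:read|write|exec|creat|trunc)$/) {
              push @modes, splice(@f, 3, 1);
          }
      }
      # With the extra tokens removed, the date is back at the 4th element
      # after the 'header' tag, as it is for every other event type.
      my $stamp = $f[5];
      printf "%-12s %-20s %s\n", $syscall, join('/', @modes) || '-', $stamp;
  }

Every event type that sprays in extra comma-separated tokens needs another special case like the open(2) one above, which is exactly why the parsing code balloons.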
Suggest changing the commas to colons or similar. Taking into account all these little bugs means that the token-parsing code for Solaris auditing runs to approximately 425 lines of Perl. We could get rid of half of those lines if praudit were more consistent.

Summary: Ensure delimiters are used as they are intended.

3) Speaking of bugs, I think Sun has fixed this one up as of Solaris 7, but I'd better bring it up anyway. In the date field, some routines in the Solaris audit code put a comma between the date/time and the number of milliseconds, i.e.:

  17 Jan 2000 13:13:01 +12345   = normal
  17 Jan 2000 13:13:01, +12345  = buggy

Since you're not working from the Solaris source code, you should have avoided this one, but I thought I'd mention it anyway.

Summary: Beware buggy comma placement.

Ok, that will do for the moment.

BTW: Please add user 'intersect' into the project on SourceForge.

Regards,

Leigh.

----
Leigh Purdie, Director - InterSect Alliance Pty Ltd
Lei...@in... - http://www.intersectalliance.com/
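Purely as an illustration, a minimal sketch of the kind of follow-on filter described in point 1, assuming 'praudit -l' delivers one flattened event per line on stdin; the script name reuses the followonprocessing.pl placeholder from the message, and the path-prefix argument and usage line are hypothetical:

  #!/usr/bin/perl
  # Sketch only: a stand-in for the followonprocessing.pl step in point 1.
  # Assumes praudit -l has already flattened each raw event from the
  # auditd -stream pipe (or /dev/audit) onto a single line of text.
  # Usage (hypothetical):
  #   auditd -stream | praudit -l | followonprocessing.pl /data/secretstuff/ >>secretstuff.log
  use strict;
  use warnings;

  my $watch = shift @ARGV
      or die "usage: followonprocessing.pl <path-prefix>\n";

  $| = 1;    # unbuffered, so events of interest reach the log promptly
  while (my $event = <STDIN>) {
      # Keep only events that mention the watched area of the filesystem;
      # everything else is dropped before it ever reaches the audit trail.
      print $event if index($event, $watch) >= 0;
  }

The point of the design is simply that the filtering happens in the pipeline, before anything is written to disk, rather than in post-processing of an enormous audit trail.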