[Sudoscript-devel] Re: SudoScript update, multi-user
From: Howard O. <hb...@eg...> - 2002-05-21 21:58:09
I dunno how big they could get. I just know that they'll grow
significantly faster than the equivalent sudo syslogs would, since
sudoscript captures the output too. Since I could imagine cases where
both large and small amounts of data would be logged, I designed the
rotation system to check the size, and rotate on that basis. That
handles both cases nicely.

On the issue of loggers and log rotation, what the 1.0 daemon did was
to check the log size, then fork off a child to do the compression
before returning to reading the FIFO. With this model, sudoshell never
knows the rotation has taken place. It just keeps writing to the FIFO,
which is the main reason it's there.

When I considered moving to a daemon-per-session model, I thought
about the problem of rotating the individual log files. In that setup
there would be several daemons, each with their own FIFOs. They could
manage their own log files in a manner similar to the 1.0 daemon, but
then no one would be looking out for the overall log sizes. And it's
open-ended: 10 daemons would produce 10 times as much data, all other
things being equal. So it doesn't scale.

Thinking about a solution for this, I considered that the master
daemon could look after overall sizes. But then it would have to
suspend the individual session daemons while it started the
rotator/compressors. (This is because the session daemons would be
logging to the files directly, and would have to be told to close and
reopen the log, instead of figuring it out for themselves as in the
single-threaded case.) Add complications like dead session daemons
leaving large log files around that the master daemon would have to
deal with, and you start to worry about overall reliability.

So I finally hit on the idea of merging back into a single logging
daemon. Now the session daemons don't have to worry about log files;
they use FIFOs for both input and output. The backend daemon acts very
similarly to the single-threaded 1.0 daemon, so there's minimal
disruption in that code. (Well, I made some other improvements while I
was at it.) It doesn't have to lock the files or cat /dev/null onto
them, since it "owns" the log. It can just move them out of the way.
The master daemon takes on the traffic-cop role, tracking session
startup and teardown.
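To make the 1.0 model concrete, here's a simplified sketch of the
check-then-fork rotation loop. The paths, the size threshold, and the
details are made up for illustration - this isn't the actual
sudoscriptd code, and it elides things like writers closing the FIFO:

    #!/usr/bin/perl -w
    use strict;
    use IO::Handle;
    use POSIX qw(strftime);

    my $fifo_path = '/var/run/sudoscript.fifo';  # hypothetical path
    my $log       = '/var/log/sudoscript/log';   # hypothetical path
    my $maxsize   = 2 * 1024 * 1024;             # rotate past 2MB, say

    $SIG{CHLD} = 'IGNORE';    # auto-reap the compressor children

    open my $fifo, '<',  $fifo_path or die "open $fifo_path: $!";
    open my $out,  '>>', $log       or die "open $log: $!";
    $out->autoflush(1);       # so -s sees the real size

    while (my $line = <$fifo>) {
        print $out $line;
        next unless -s $log > $maxsize;

        # The daemon "owns" the log, so it can just move it aside:
        # no flock, no 'cat /dev/null > log' needed.
        close $out;
        my $old = $log . strftime('.%Y%m%d%H%M%S', localtime);
        rename $log, $old or die "rename: $!";
        open $out, '>>', $log or die "reopen $log: $!";
        $out->autoflush(1);

        # Fork off the compression and get right back to the FIFO.
        # sudoshell never notices; it just keeps writing.
        my $pid = fork;
        die "fork: $!" unless defined $pid;
        if ($pid == 0) {
            exec 'gzip', $old;
            die "exec gzip: $!";
        }
    }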
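And a sketch of the session-daemon side of the new design: it just
relays between its FIFOs and the backend daemon, tagging each line
with a session identifier so interleaved sessions don't turn into an
out-of-sequence jumble. The tag format and names here are my
shorthand, not the real code:

    use strict;
    use warnings;
    use IO::Handle;

    my $session_id   = 'hbo-12345';                        # e.g. user-pid
    my $session_fifo = "/var/run/sudoscript/$session_id";  # from sudoshell
    my $backend_fifo = '/var/run/sudoscript/backend';      # to the logger

    open my $in,  '<',  $session_fifo or die "open $session_fifo: $!";
    open my $out, '>>', $backend_fifo or die "open $backend_fifo: $!";
    $out->autoflush(1);    # one write() per line

    while (my $line = <$in>) {
        # Writes to a FIFO of up to PIPE_BUF bytes are atomic, so
        # tagged lines from different sessions interleave cleanly
        # instead of tearing mid-line in the shared log.
        print $out "$session_id $line";
    }

The backend daemon can then demultiplex on the tag, or just log the
tagged lines as-is and let grep sort them out later.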
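On your signal-hook suggestion below: that drops into this model
pretty naturally. A handler just sets a flag, and the daemon
checkpoints at the next safe point in its loop; cron can then drive
date-based rotation with a plain kill. A sketch only, using your
YYYY:MM:DD:HH:MM stamp idea:

    use strict;
    use warnings;
    use POSIX qw(strftime);

    my $rotate_requested = 0;
    $SIG{USR1} = sub { $rotate_requested = 1 };  # flag only, rotate later

    sub checkpoint {
        my ($log) = @_;
        # YYYY:MM:DD:HH:MM sorts correctly as a plain string
        my $stamp = strftime('%Y:%m:%d:%H:%M', localtime);
        rename $log, "$log-$stamp" or warn "rename $log: $!";
        # (the caller would reopen the log handle afterward, as in
        # the rotation loop above)
        $rotate_requested = 0;
    }

    # In the main loop, after each write:
    #     checkpoint($log) if $rotate_requested;

Cron could then do the 'cronological' part with something like
kill -USR1 `cat /var/run/sudoscriptd.pid` (that pid file path is
made up too).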
--On Tuesday, May 21, 2002 03:17:49 PM -0500 Tommy Smith
<ts...@ea...> wrote:

> Howard:
>
> No, I haven't checked CVS lately (since a few weeks ago); I was busy
> working on these modifications yesterday afternoon.
>
> To address your points:
>
> 1) As far as monitoring size on these files, how big could they get?
> I mean, we have some processes that generate several hundred MB in a
> day or week, but it would take a lot of shell activity (esp. as
> root) to bring the files above a level where it would impact disk
> space on a system. Also, I want to rotate logs via cron in a
> systematic 'cronological' way (checkpointing them by an absolute
> date string).
>
> 2) Multiple files/multiple logging daemons: how do the daemons know
> to lay off and compress now? For a time I had the
> 'check_sudoscriptd' function turned off for debugging - it does work
> per-user and just checks the {user}-pid file when a new instance is
> requested, but it's not seeing itself running, since the name of the
> daemon is hardcoded into the file, and so it starts another D - I
> haven't fixed this yet. This would still run into the same problems
> with logging multiple sessions (if one was logged in twice or had
> multiple terms open, which we are apt to do often) to the same file
> and generating an out-of-sequence log without your session
> identifier tags, so I'm going to check those out. As far as telling
> the loggingD to rotate or checkpoint the file, if it can do it for
> $log it should be able to do it for any $log?? I didn't test it
> though.
>
> So it would be rotating the logs to {logfile}-0,1,2 etc., where the
> names are not unique globally?? -- I often rotate logs by way of
> 'cp logfile /backup-dir/logfile-{date}; cat /dev/null > logfile',
> which preserves the filehandle for any running processes, and its
> only drawback is that it could clobber some data. (Which could be a
> big drawback depending on the application, but for most, we can
> avoid HUPing the daemon by using this technique.) There should be a
> way to 'flock' the file also & just block for the short time
> /dev/null is being cat'ed to it. Compressing logfiles is strictly
> the function of some other task on some other server (IMHO; hate to
> see a production box being under load by gzip for a big logfile,
> rather pop it over the net and compress elsewhere). I can do without
> that function and would rather have it handled externally, if even
> necessary. Perhaps we could build a 'signal' into the daemon for it
> to checkpoint/rotate the logfile.
>
> 3) Re: session tagging and date tagging: I am probably going to look
> at tagging the date in a
>
> YYYY:MM:DD:HH:MM format so that it will sort properly by the first
> field and be more compact.
>
> 4) The end product would be to integrate these things into the ssh
> login process, so when you log in to the system, it starts up a
> sudoscriptD for the account automatically & cleans up on exit or
> interval. Having a signal hook on the daemon would also make it
> somewhat scriptable. (That's the other reason I like multiple
> {user}-logfiles; the grep work is already done and ready for a
> script/browser/human to call the filename.)
>
> On other fronts, I am also working on 'standardizing' the way all
> logfiles are handled across our systems and from different programs.
> Seems as if every one has a different way of rotating and archiving,
> if they even do any rotating.
>
> Still have not checked out your new stuff on CVS, but am going to
> check out the diagram you sent (I've printed it) -- thanks!
>
> The two modified codes are attached. Take a look and see what you
> think. (I didn't modify much, but removed most of the comments and
> documentation so I can print it out and eliminate extraneous
> distractions.) Perhaps another 'branch' to the CVS tree might be in
> order? In any case, I'm going to continue developing this for our
> site. I hope you find my modifications interesting; I appreciate
> your feedback and writing the code in the first place.
>
> : )
> Tommy Smith
> System Admin
> Eatel Data Network Operations.

Howard Owen                         "Even if you are on the right
EGBOK Consultants                    track, you'll get run over if you
hb...@eg...  +1-650-339-5733         just sit there." - Will Rogers