From: Ben C. <Be...@cl...> - 2004-09-30 07:46:06
Hi James,

The reason for the hash error can only be that the file (serviceperf.log)
has been altered. I have done testing on a file of this size without error.
Is it possible that the file is being altered rather than just appended to?

I think you may want to try the new parser from Yves. It does not restart
the log file. This will be the only parser in later versions of PP.

Alternatively, there is another way of killing the log file. After each run,
execute:

  echo -n "" > serviceperf.log

This will lose three or four lines of log, but it does not restart Nagios.
If you can live with this, then this is another suggestion.

Regards,
Ben.

James Ochs wrote:
> A little more information on this issue as well... since nagios 2 beta
> crashes on the kill -HUP signal, I turned off the log rotation via
> perfparse. It appears after some testing last night that perfparse is
> not correctly calculating hashes on my system. I upgraded to 100.6,
> dropped the existing database, and used perfparse to load the current
> log file. It stopped (it was running under the daemon shell script)
> halfway through and I restarted perfparse. It then said that it was
> moving to the line that it had stopped on, after which it said that the
> hash was incorrect and started reloading the data from the beginning of
> the file. I suspect that this is a regular occurrence because over the
> last month a 1.5 million line logfile generated 62 million rows in the
> perfdata_service_bin table.
>
> I am using perfparse 100.6 on Fedora Core 2 if that helps any...
>
> James
>
> -----Original Message-----
> From: Jeff Scott [mailto:je...@sk...]
> Sent: Wednesday, September 29, 2004 9:14 AM
> To: James Ochs; per...@li...
> Subject: Re: [Perfparse-users] database size....
>
> I'm in a similar situation, but with more servers... looking at about
> 600 servers eventually, and I'd like to store about a year's worth of
> data for post analysis and trending. What kind of storage am I going to
> need? I'm looking at 86 million rows inserted per month... ouch. Also,
> has anyone ever used Oracle with PerfParse? What would be involved in
> making it modular enough to use any database backend?
>
>
> On Tue, 28 Sep 2004 17:21:49 -0700, James Ochs <jo...@re...>
> wrote:
>
> > Hi all,
> >
> > I sent an email to the list about a month ago on this issue and
> > haven't gotten a response.
> >
> > I have perfparse and am tracking about 80 servers, with about 10 or
> > so monitors per server, all running at five-minute intervals. I have
> > 30 days worth of data in the database.
> >
> > Currently the perfdata_service_raw table is 45M rows and over 800 MB.
> > The perfdata_service_bin table is 62M rows and about 1.6 GB (this is
> > due to not purging for a long time before I got the deletion policies
> > working; I think it's about 900 MB of actual data).
> >
> > Due to the number of rows, perfparse-db-purge actually crashes MySQL
> > by running out of buffer space... I have upped this to 16M and it
> > still crashes.
> >
> > I'd also like to be able to monitor trends in services over a longer
> > period of time, say the last month, the last year, or the last 5
> > years, similar to the way mrtg does, but with the dataset the way it
> > is, that is currently not feasible.
> >
> > How much data are other people retaining? Has anyone else run into
> > similar issues? Does anyone have a solution?
> >
> > Thanks,
> >
> > James
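
P.S. If it helps, below is a rough sketch of wrapping that truncation into a
small script run after each load. The log path and the load command are
placeholders only, not the actual commands from your install; substitute
whatever you normally use to run perfparse:

  #!/bin/sh
  # Sketch only: load the perf log into the database, then empty the file.
  # LOGFILE is an assumed path -- adjust it to your Nagios installation.
  LOGFILE=/usr/local/nagios/var/serviceperf.log

  # Hypothetical load step: replace with however you normally invoke
  # perfparse (daemon script, cron job, etc.).
  /path/to/run-perfparse "$LOGFILE" || exit 1

  # Truncate in place rather than deleting: Nagios keeps writing to the
  # same open file, so no restart or kill -HUP is needed. Any lines written
  # between the load and this point are lost (the three or four lines
  # mentioned above).
  echo -n "" > "$LOGFILE"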