From: Ben C. <Be...@cl...> - 2004-09-30 07:52:07
> I'm in a similar situation, but more servers....looking at about 600
> servers eventually, and I'd like to store about a year's worth of data
> for post analysis and trending. What kind of storage am I going to
> need? I'm looking at 86mil rows inserted per month....ouch. Also, has
> anyone ever used Oracle with Perfparse? What would be involved in
> writing it modular to use any database backend?

I can answer some of this. You may have to do some research.

Get to the MySQL shell and enter:

  show table status\G

Or:

  show table status;

Look for the two tables perfdata_service_bin and perfdata_service_raw.
You can see there the average row length and the number of rows. If you
know the number of rows being added, you can therefore calculate the
space needed for a year's data.

The %_raw table holds one row for each service check. If you have one
service entry a minute from all 600 services, this would be 84MB/day.

The %_bin table holds one row for each metric. See how many metrics per
service you have from the output of 'perfparse -r'. If you have 1.5
metrics per service, this is 93MB/day.

Also take into account the deletion policies: some services/metrics
will therefore have a fixed size.

A general note to all readers
=============================

Are there any SQL developers out there who would want a go at writing
some tools to monitor and predict database size for PP? Or an Excel
spreadsheet into which users can enter their setup details and get back
size figures? If so, you'd make a valuable addition to this product.

Regards,

Ben.

Jeff Scott wrote:
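For anyone wanting to sanity-check the arithmetic above, here is a minimal back-of-envelope sketch. The row lengths (102 and 75 bytes) are assumptions picked to reproduce the 84MB/day and 93MB/day figures quoted; substitute the Avg_row_length values from your own `show table status` output:

```python
# Back-of-envelope PerfParse table growth estimate.
# Row lengths below are ASSUMED values for illustration; read the real
# Avg_row_length for perfdata_service_raw / perfdata_service_bin from
# `show table status\G` on your own database.

def daily_growth_bytes(services, checks_per_day, avg_row_len, metrics_per_service=1.0):
    """Rows added per day multiplied by the average row length."""
    return services * checks_per_day * metrics_per_service * avg_row_len

SERVICES = 600
CHECKS_PER_DAY = 24 * 60     # one check per minute
RAW_ROW_LEN = 102            # assumed Avg_row_length of perfdata_service_raw
BIN_ROW_LEN = 75             # assumed Avg_row_length of perfdata_service_bin

# %_raw: one row per service check
raw_per_day = daily_growth_bytes(SERVICES, CHECKS_PER_DAY, RAW_ROW_LEN)

# %_bin: one row per metric, here 1.5 metrics per service
bin_per_day = daily_growth_bytes(SERVICES, CHECKS_PER_DAY, BIN_ROW_LEN,
                                 metrics_per_service=1.5)

print("raw: %.0f MB/day" % (raw_per_day / 2**20))
print("bin: %.0f MB/day" % (bin_per_day / 2**20))
print("one year, both tables: %.0f GB" % ((raw_per_day + bin_per_day) * 365 / 2**30))
```

Note this ignores index overhead (Index_length in the same `show table status` output) and assumes no deletion policy is trimming the tables, so treat it as an upper-bound sketch rather than a precise figure.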
> On Tue, 28 Sep 2004 17:21:49 -0700, James Ochs <jo...@re...> wrote:
>
>> Hi all,
>>
>> I sent an email to the list about a month ago on this issue and
>> haven't gotten a response.
>>
>> I have perfparse and am tracking about 80 servers and about 10 or so
>> monitors per server, all running at five minute intervals. I have 30
>> days worth of data in the database.
>>
>> Currently the perfdata_service_raw table is 45M rows and over 800M.
>> The perfdata_service_bin table is 62M rows and about 1.6G (this is
>> due to not purging for a long time before I got the deletion policies
>> working; I think it's about 900M of actual data).
>>
>> Due to the number of rows, the perfparse-db-purge actually crashes
>> mysql due to running out of buffer space... I have upped this to 16M
>> and it still crashes.
>>
>> I'd also like to be able to monitor trends in services over a longer
>> period of time, like say the last month, the last year, the last 5
>> years, similar to the way mrtg does, but with the dataset the way it
>> is that is currently not feasible.
>>
>> How much data are other people retaining? Has anyone else run into
>> similar issues? Does anyone have a solution?
>>
>> Thanks,
>>
>> James
>
> _______________________________________________
> Perfparse-users mailing list
> Per...@li...
> https://lists.sourceforge.net/lists/listinfo/perfparse-users