From: Yves M. <yme...@li...> - 2004-10-22 14:28:30
> Yves,
>
> *sigh* I started off good (not): I attached the wrong .c file to my previous
> post. Sorry about that. :)
> So here's a second try (zipped this time). As for using patch format, I
> suppose this is just the output from the diff command? Are there any options
> to use?

I use -bruaN:

$ ls
perfparse-{version}.old perfparse-{version}
$ diff -bruaN perfparse-{version}.old perfparse-{version} > perfparse-20041022.diff

Then I edit the diff file to remove some files like configure or Makefile :)
To remove a file, just remove all the lines between 2 lines starting with "diff".
This works only if diff was run with the -u option.

Well, this is how I do it. There are other ways, and none is better or worse :)

> I'll look into these next things next week:
> - changelog/last modified entries
> - perfparse-db-tool
> - deletion policies
>
> I don't think it would be wise to release my code yet. It might be better if
> others have a look at it first (I'm the careful type :) ).

OK. So you can try to maintain the patch against the latest versions here:
http://pagesperso.laposte.net/ymettier/perfparse-devel/
(0.102.0ym5 is there now :)

Also send Ben and me a patch when you want to show us some important change.

For perfparse-db-tool, tell me when you start, because I don't know yet if I
will put modules in 0.103 or in 0.104 :)

I will tell you when I make changes in the mysql storage module, but I don't
see any for now. For the CGI, implementing modules is more work and will
probably not be done for a while.

Have a nice week-end, all of you :)

Yves
--
- Homepage - http://ymettier.free.fr - http://www.logicacmg.com -
- GPG key - http://ymettier.free.fr/gpg.txt -
- Maitretarot - http://www.nongnu.org/maitretarot/ -
- Perfparse - http://perfparse.sf.net/ -
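The patch workflow Yves describes can be sketched end to end as follows. The directory and file names here are invented for illustration; substitute the real version numbers:

```shell
# Set up a toy source tree (names are invented for illustration).
mkdir -p perfparse-0.102.0
printf 'int main(void){return 0;}\n' > perfparse-0.102.0/storage_mysql.c

# Keep a pristine copy, then hack on the live tree.
cp -r perfparse-0.102.0 perfparse-0.102.0.old
printf '/* summary table support */\n' >> perfparse-0.102.0/storage_mysql.c

# -b ignores whitespace changes, -r recurses, -u emits unified hunks,
# -a treats all files as text, -N includes files present on one side only.
# diff exits 1 when the trees differ, so mask the status for set -e shells.
diff -bruaN perfparse-0.102.0.old perfparse-0.102.0 > perfparse-20041022.diff || true
```

Because the output is unified (-u), each file's changes start with a `diff`/`---`/`+++` header block, which is what makes Yves' trick of deleting everything between two `diff` lines work for dropping generated files like configure or Makefile.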
From: Ben C. <Be...@cl...> - 2004-10-22 14:04:50
No worries, thanks for the later version. See my other comments. It's up to
you where we go from here, so please let us know what you think.

In all the analysis and checking for consistency/bugs, don't let me forget to
thank anyone for a contribution. It's wonderful to see the product develop
for the users, by the users, and I am grateful for everything!

Ben

Tim Wuyts wrote:
> Yves,
>
> *sigh* I started off good (not): I attached the wrong .c file to my previous
> post. Sorry about that.
> So here's a second try (zipped this time). As for using patch format, I
> suppose this is just the output from the diff command? Are there any options
> to use?
>
> I'll look into these next things next week:
> - changelog/last modified entries
> - perfparse-db-tool
> - deletion policies
>
> I don't think it would be wise to release my code yet. It might be better if
> others have a look at it first (I'm the careful type :) ).
>
> Tim
From: Ben C. <Be...@cl...> - 2004-10-22 14:01:21
Hi Tim,

Forgive me if I repeat anything: three-way conversations, three time zones,
one Friday afternoon!

One small fix: can you make your delete policy default to 'host'?

I had a look at the code. This will be a great addition to PP. I want to get
this available to all users as soon as we can! I think we are agreed that it
cannot be included currently, as there is no way to control the data and no
way to view it. So let's sort this out...

-----------------------------------

The deletion policies are no haste. The CGI code to define them needs
amending. (That code is a picture of a tortured mind on another Friday
afternoon, rather than the crafted coding Yves consistently contributes.)
Since you have followed the definition of the policies exactly, we only need
to copy some code, change the table names, and add a new menu option.

My first question, Tim: do you want to have a look and see what you think?
The code is cgi_del_policy.c. We need to copy:

  void displayDeletePolicyBin()

to a new function:

  void displayDeletePolicySummaryBin()

As I said, ignore all the logic; just change the table names and all should
work. This is linked into the menu in perfgraph.c.

-----------------------------------

Then the deletion policies need to be actioned. Again, copy the similar code
in perfparse-db-purge. This should be a very simple operation.

-----------------------------------

Next, the --no-raw-data flag. Yves suggested this could be another callback
to store the summary data. This worries me:

1. The callback will be identical to the existing one.
2. As we add more ways of analyzing data, more and more callbacks will be
   needed.

As an alternate option with one callback, I wonder whether the attached
modules should have their own option set, which is added to the main option
set and displayed together when --help is shown, set by the same code. But
this is complex for a small problem :)

In the short term, I suggest making the option array visible to modules.
Sorry, another global variable. Modules can read the options and action them
as they see fit. In this case we can now have two flags:

  --no-raw-data
  --no-raw-summary-data

which can be checked in the function that stores lines of data. What do you
all think? This is one for myself or Yves to hack.

-----------------------------------

There needs to be a function to automatically amend users' tables and update
the schema number. See upgrade.c; no magic there. We also need to amend
mysql_create.sql and mysql_delete.sql. This can be released now, as it does
not affect the functioning of the current versions, as I have done with the
binary summary tables.

-----------------------------------

Finally, the display of data. I can think of various ways of displaying it:

- Report of activity for a day with summary.
- Line graph showing percentage activity of each state.
- Pie chart of summary...
- Exported CSV or XML reports.

Tim, what do you want to see?

-----------------------------------

Regards, Ben.

Tim Wuyts wrote:
> Hi,
>
> I've made some additions to storage_mysql.c to provide a summary table of
> raw data (availability data) to perfparse. I also added two tables,
> perfdata_summary_raw and perfdata_summary_raw_data. It's rather
> straightforward stuff, so I guess it should be easy to incorporate into the
> official version (if you think it's useful).
> I followed Ben's ideas for the summary table, so that it can contain a
> summary for different 'epochs' (periods).
>
> Things on the TODO list:
> - deletion policies: haven't read the docs on this yet. Are they still
>   up to date?
> - make summaries optional
> - defaults (every service now gets a default summary with an epoch of 1 day)
> - cgi report (-> I don't intend to write one. I don't really like C code
>   that printf's HTML ;))
>
> And finally an important question: how do I submit my changes? I've never
> used SourceForge (as a developer) before. For the moment, I just attached
> the files to this email...
>
> Feedback always welcome!
>
> Regards,
> Tim
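Ben's short-term suggestion (a global option array that storage modules can read, with per-flag checks in the line-storing function) might look something like the minimal sketch below. Every name here is hypothetical, invented for illustration; this is not the actual perfparse API:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical global option table, visible to storage modules. */
struct pp_option {
    const char *name;
    bool        enabled;
};

static struct pp_option pp_options[] = {
    { "no-raw-data",         false },
    { "no-raw-summary-data", false },
};

/* Look up a flag by name; unknown flags count as disabled. */
static bool pp_option_enabled(const char *name)
{
    for (size_t i = 0; i < sizeof pp_options / sizeof pp_options[0]; i++)
        if (strcmp(pp_options[i].name, name) == 0)
            return pp_options[i].enabled;
    return false;
}

/* A storage module's line handler can then skip only the work it was
   told to skip, instead of needing one callback per kind of data. */
static void storage_store_line(const char *line)
{
    if (!pp_option_enabled("no-raw-data")) {
        /* ... insert the raw sample ... */
    }
    if (!pp_option_enabled("no-raw-summary-data")) {
        /* ... update the raw summary row ... */
    }
    (void)line; /* placeholder: a real module would parse the line */
}
```

This matches Ben's trade-off: one shared table (one more global) instead of a growing family of near-identical callbacks, at the cost of every module knowing the flag names.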
From: Tim W. <tim...@pi...> - 2004-10-22 13:52:27
Yves,

*sigh* I started off good (not): I attached the wrong .c file to my previous
post. Sorry about that.

So here's a second try (zipped this time). As for using patch format, I
suppose this is just the output from the diff command? Are there any options
to use?

I'll look into these next things next week:
- changelog/last modified entries
- perfparse-db-tool
- deletion policies

I don't think it would be wise to release my code yet. It might be better if
others have a look at it first (I'm the careful type :) ).

Tim
From: Yves M. <yme...@li...> - 2004-10-22 13:13:26
> Hi,
>
> I saw my name mentioned eight times, let me add to this discussion :)

Respect to our leader :)

> I totally agree about submitting patches. There is a facility to do
> this on http://perfparse.sf.net. I have not used this, but this might
> guarantee all of us can see the patch and gives you somewhere to post
> it. If there are more than a few of us, might be time to use it.

OK, so I will apply the patch for the 0.103.0ymX versions, and Tim and I will
work together to have something for 0.103.1 :)

> In the future there will be development and live versions. Right now I

This is not future but present :) Development versions are here:
http://pagesperso.laposte.net/ymettier/perfparse-devel/
Well, you mean official development versions :)

[...]

> Before this is included in the release version, we need to think about
> making the provided facilities concurrent with the existing services.
>
> Let me list these below and we can comment on whether these are needed
> for release or can wait:
>
> 1/
>
> We need to ensure that we have the control option on the parsing, to set
> whether the user gets this data or not. On current methods, I suggest a
> --no-raw-summary-data flag.
>
> At the moment the single --no-raw-data flag will block both raw and raw
> summary. :)

This will be an important point. When --no-raw-data is enabled (meaning the
feature is disabled - I hate negative options), the storage module is not
even called. So the discussion will be: do we ask the module to test
--no-raw-data, or do we add a callback for summary?

In the discussion, we will also have to think about how to have raw data in
one module and no raw data in another module. This is not very simple, and
the impact can be large :)

> 2/
>
> The deletion policies can wait a while. After all, the point of these
> files is for long term storage. But this must come at some time.

I don't agree. I have a small database here, and I get a huge amount of perf
data. This is not tuned (still testing nagios), but it's good for testing
the limits of nagios and perfparse :)

Well, this is my problem. But my problem is also that I have to keep only 2
or 3 days of data.

Yves
--
- Homepage - http://ymettier.free.fr - http://www.logicacmg.com -
- GPG key - http://ymettier.free.fr/gpg.txt -
- Maitretarot - http://www.nongnu.org/maitretarot/ -
- Perfparse - http://perfparse.sf.net/ -
From: Ben C. <Ben...@ro...> - 2004-10-22 12:44:20
Hi,

I saw my name mentioned eight times, let me add to this discussion :)

Love the code, nice work.

I totally agree about submitting patches. There is a facility to do this on
http://perfparse.sf.net. I have not used this, but it might guarantee all of
us can see the patch, and it gives you somewhere to post it. If there are
more than a few of us, it might be time to use it.

In the future there will be development and live versions. Right now I ask
you to submit a new patch against each new release version until it's ready.
If you think this is a good idea! Hopefully this will not be for too much
time.

I think it may be too late for the current version. I am in discussion with
Yves about releasing over the weekend.

Before this is included in the release version, we need to think about
making the provided facilities concurrent with the existing services.

Let me list these below and we can comment on whether these are needed for
release or can wait:

1/

We need to ensure that we have the control option on the parsing, to set
whether the user gets this data or not. On current methods, I suggest a
--no-raw-summary-data flag.

At the moment the single --no-raw-data flag will block both raw and raw
summary. :)

2/

The deletion policies can wait a while. After all, the point of these files
is for long term storage. But this must come at some time.

3/

You are using a fixed epoch of one day. You comment about whether this
should be configurable. If you decide it should be, we need a way of
defining the epochs in use, and of deleting those that are no longer needed.

4/

Reports. Do you want any CGI reports? Do you want to copy the ones that
exist, or write your own, or would you like me to write them for you? The
final option may be the slowest :)

Anything else?

Anyway, busy time for me. I'll look over your code and make any coding
comments directly with you.

Regards, Ben.

Yves Mettier wrote:
> Hi Tim,
>
>> Hi,
>>
>> I've made some additions to storage_mysql.c to provide a summary table of
>
> Thanks for contributing :)
> I will let Ben accept or reject the patch after reviewing it.
>
>> And finally an important question: How do I submit my changes? I've never
>> used sourceforge (as a developer) before. For the moment, I just attached
>> the files to this email...
>
> I don't know if Ben uses sourceforge. I think that the best way is to:
> 1/ send a patch (not the full file)
> 2/ compress your patches and files (with respect to those who still have
>    low speed access to the internet)
> 3/ describe what it does (you did it: good point :)
> 4/ add one line in ChangeLog: we are sure not to forget to add it. Don't
>    forget to specify your initials :)
> 5/ add your name in the "Last Modified" line at the beginning. Without it,
>    we consider that you give the copyright of your work to the author of
>    the file
>
> Well, that's a lot to think about. If you do only 1/, 2/ and 3/, this is
> correct. 4/ and 5/ are rather for you, not for the project :)
>
> We prefer a patch because it's easier to:
> - patch the current development version and not the current stable release :)
> - see the changes
>
> OK, you asked and I gave you a full (but personal) answer to your question :)
>
> For both Ben and you:
> - I made the patch. Find it attached.
> - the coding style is the same as the rest. Very good.
> - the patch applies successfully on my latest development version
>   (http://pagesperso.laposte.net/ymettier/perfparse-devel/)
> - I have not tried to understand what it does. Ben, you have to check that :)
>
> I suggest that someone cat mysql_summary_raw.sql >> mysql_create.sql
>
> And I don't think this will be accepted for the next version, because it
> is already ready and Ben can release it this evening if he has time.
> And there is something missing in your changes: perfparse-db-tool has to
> be updated to create your 2 new tables.
>
> Maybe Ben will accept it for 0.102.2 or 0.103.1?
>
> If Ben accepts, I can also accept further patches to include them in my
> development versions.
> Well, when perfparse-0.102.0ym5.tar.gz can be found on
> http://pagesperso.laposte.net/ymettier/perfparse-devel/, Ben can take it
> and release 0.102.1 :)
>
> Yves
From: Yves M. <yme...@li...> - 2004-10-22 11:40:36
Hi Tim,

> Hi,
>
> I've made some additions to storage_mysql.c to provide a summary table of

Thanks for contributing :)
I will let Ben accept or reject the patch after reviewing it.

> And finally an important question: How do I submit my changes? I've never
> used sourceforge (as a developer) before. For the moment, I just attached
> the files to this email...

I don't know if Ben uses sourceforge. I think that the best way is to:

1/ send a patch (not the full file)
2/ compress your patches and files (with respect to those who still have low
   speed access to the internet)
3/ describe what it does (you did it: good point :)
4/ add one line in ChangeLog: we are sure not to forget to add it. Don't
   forget to specify your initials :)
5/ add your name in the "Last Modified" line at the beginning. Without it,
   we consider that you give the copyright of your work to the author of the
   file

Well, that's a lot to think about. If you do only 1/, 2/ and 3/, this is
correct. 4/ and 5/ are rather for you, not for the project :)

We prefer a patch because it's easier to:
- patch the current development version and not the current stable release :)
- see the changes

OK, you asked and I gave you a full (but personal) answer to your question :)

For both Ben and you:
- I made the patch. Find it attached.
- the coding style is the same as the rest. Very good.
- the patch applies successfully on my latest development version
  (http://pagesperso.laposte.net/ymettier/perfparse-devel/)
- I have not tried to understand what it does. Ben, you have to check that :)

I suggest that someone cat mysql_summary_raw.sql >> mysql_create.sql

And I don't think this will be accepted for the next version, because it is
already ready and Ben can release it this evening if he has time.
And there is something missing in your changes: perfparse-db-tool has to be
updated to create your 2 new tables.

Maybe Ben will accept it for 0.102.2 or 0.103.1?

If Ben accepts, I can also accept further patches to include them in my
development versions.
Well, when perfparse-0.102.0ym5.tar.gz can be found on
http://pagesperso.laposte.net/ymettier/perfparse-devel/, Ben can take it and
release 0.102.1 :)

Yves
--
- Homepage - http://ymettier.free.fr - http://www.logicacmg.com -
- GPG key - http://ymettier.free.fr/gpg.txt -
- Maitretarot - http://www.nongnu.org/maitretarot/ -
- Perfparse - http://perfparse.sf.net/ -
From: Tim W. <tim...@sc...> - 2004-10-22 10:47:16
Hi,

I've made some additions to storage_mysql.c to provide a summary table of
raw data (availability data) to perfparse. I also added two tables,
perfdata_summary_raw and perfdata_summary_raw_data. It's rather
straightforward stuff, so I guess it should be easy to incorporate into the
official version (if you think it's useful).

I followed Ben's ideas for the summary table, so that it can contain a
summary for different 'epochs' (periods).

Things on the TODO list:
- deletion policies: haven't read the docs on this yet. Are they still up to
  date?
- make summaries optional
- defaults (every service now gets a default summary with an epoch of 1 day)
- cgi report (-> I don't intend to write one. I don't really like C code
  that printf's HTML ;))

And finally an important question: how do I submit my changes? I've never
used SourceForge (as a developer) before. For the moment, I just attached
the files to this email...

Feedback always welcome!

Regards,
Tim
From: Ben C. <be...@cl...> - 2004-10-15 16:14:31
Dear friends,

It's my pleasure to release a new version, 0.101.1.

http://prdownloads.sourceforge.net/perfparse/perfparse-0.101.1.tar.gz?download

The main parser has been recoded by our friend Yves into a modular unit
which can be endlessly extended and can accept data in many different ways.
Don't worry if you like it as-is: there is still a perfparse.sh, and if you
like running from the command line, where you used to use 'perfparse -r',
it's now 'perfparse-log2db -r -s'.

For those people who don't like killing Nagios on every run, this offers a
solution to that problem. In fact, two solutions. Please have a look at the
UPGRADE document, and read the other documents re-written by Yves in the
docs directory.

Finally, I have to note that this is of course a major rewrite. If you are
the first to spot an error, please let us know so that you are the last to
spot it as well :)

More to come over the next few versions, including *instant* Nagios ->
PerfParse storage, as well as exciting optional MySQL -> PerfParse
connections instead of the PerfParse -> MySQL we have now.

If anybody wants to have a go, an RRD export module is now very simple.
Please look at the storage code 'storage-print' files. Simply copy and edit
for any storage method you desire...

Please enjoy our new version,

Regards,
Ben Clewett.
From: Flo G. <fl...@bi...> - 2004-10-13 23:18:40
Hi,

Yes, I did some work to create a menu like the existing one in PHP. I expect
easier integration and customization in nagios and, if someone wants it,
some themes (html/css).

I use 2 libraries: Smarty and ADOdb.

Smarty is a template engine. This separates logic and design in the sources,
so we can have software developers and HTML designers: the software devs
change the .php files, the designers change the files under the templates
dir.

ADOdb is a database abstraction layer. We could use pear::DB too; I'm used
to using ADOdb, that's all.

The libs are included in the source:
http://bier.kicks-ass.net/php-perfparse/php-perfparse.tar.gz

Install: on default nagios/perfparse installations, just unpack the file and
move all files within the newly created php-perfparse dir to
/usr/local/nagios/share. You'll probably have to create a templates_c
directory where Smarty can create compiled templates, so templates_c should
be writable by apache.

What works: the first 2 menu options work, and the display of graphs works.
All the rest is easy, but we have to do it one day :-)

If there is an HTML designer out there: look at the templates dir and hack
the files within there. Just don't touch the things in curly braces {}. I
hope we can start a collaboration here!

I know jpgraph a very little bit. The problem we had with tikiwiki (another
project I work on) is the license of jpgraph. I think the C graph drawing
has some advantages. But we'll probably have 2 alternatives in the future.
We'll see.

Flo

On Tue, 12 Oct 2004, Ben Clewett wrote:

> William,
>
> I have CC'd your email to Flo, who is also working on a PHP front end, as
> well as the perfparse-developers new group, hope you don't mind. :)
>
> Williams, P. Lane wrote:
>
>> Thanks Ben,
>>
>> We are currently trying to build a PHP/JPGRAPH front-end for the
>> perfparse database info. It is quite a bit of work, but I think in the
>> end it will be worth it... Perfparse itself has saved a ton of
>> programming time on my end... The CGIs, for me, are more eye-candy for
>> management. If our CGI works out, I'll pass it along.
>
> I am glad you like it. Can you tell me more about your graph front end?
> Possibly (privately if you want) send me a URL of an installation?
>
> I also want to, if at all possible, avoid two PHP front ends! I wonder if
> yourself and Flo want to see whether there are any obvious areas of
> overlap which can be looked at. Possibly starting a new official PP
> project to provide a PHP front end?
>
> All the best,
>
> Ben Clewett.
>
>> Lane
>>
>> -----Original Message-----
>> From: Ben Clewett [mailto:Be...@cl...]
>> Sent: Tuesday, October 12, 2004 5:41 AM
>> To: Williams, P. Lane
>> Cc: 'yme...@li...'
>> Subject: Re: perfparse
>>
>> Hi Lane,
>>
>> There is currently no way of doing this.
>>
>> I have a design for building a collection of comparable metrics, which
>> can and will be displayed together. But I have not had time to look at
>> it recently. So it's in the plan, but not planned into the work yet,
>> sorry.
>>
>> Ben
>>
>> Williams, P. Lane wrote:
>>
>>> Is there a way to get perfparse to display multiple points of data in a
>>> single graph? My 'disk checks' contain 5 points of data. I would like
>>> to know if there is a code change I can make to graph all five points
>>> in one graph.
>>>
>>> Thanks..... Once again, this is a great add-on for Nagios,
>>>
>>> Lane
>
> _______________________________________________
> Perfparse-devel mailing list
> Per...@li...
> https://lists.sourceforge.net/lists/listinfo/perfparse-devel
From: Ben C. <Be...@cl...> - 2004-10-12 15:05:06
William,

I have CC'd your email to Flo, who is also working on a PHP front end, as
well as the perfparse-developers new group, hope you don't mind. :)

Williams, P. Lane wrote:
> Thanks Ben,
>
> We are currently trying to build a PHP/JPGRAPH front-end for the perfparse
> database info. It is quite a bit of work, but I think in the end it will
> be worth it... Perfparse itself has saved a ton of programming time on my
> end... The CGIs, for me, are more eye-candy for management. If our CGI
> works out, I'll pass it along.

I am glad you like it. Can you tell me more about your graph front end?
Possibly (privately if you want) send me a URL of an installation?

I also want to, if at all possible, avoid two PHP front ends! I wonder if
yourself and Flo want to see whether there are any obvious areas of overlap
which can be looked at. Possibly starting a new official PP project to
provide a PHP front end?

All the best,

Ben Clewett.

> Lane
>
> -----Original Message-----
> From: Ben Clewett [mailto:Be...@cl...]
> Sent: Tuesday, October 12, 2004 5:41 AM
> To: Williams, P. Lane
> Cc: 'yme...@li...'
> Subject: Re: perfparse
>
> Hi Lane,
>
> There is currently no way of doing this.
>
> I have a design for building a collection of comparable metrics, which
> can and will be displayed together. But I have not had time to look at
> it recently. So it's in the plan, but not planned into the work yet,
> sorry.
>
> Ben
>
> Williams, P. Lane wrote:
>
>> Is there a way to get perfparse to display multiple points of data in a
>> single graph? My 'disk checks' contain 5 points of data. I would like
>> to know if there is a code change I can make to graph all five points in
>> one graph.
>>
>> Thanks..... Once again, this is a great add-on for Nagios,
>>
>> Lane
From: Ben C. <Be...@cl...> - 2004-10-12 07:21:06
Tim Wuyts wrote:
> Thanks for your reply. A few remarks:
> - My first step is to make sure I get something in the db that I can use
>   for reporting (in my case some sort of summary table). As for the
>   report itself, I'm actually considering using a report generator, like
>   Crystal Reports (comes with Visual Studio .NET, more on that below).
> - The idea of using epoch_start & epoch_length for a summary table is a
>   good one, I'll try to incorporate that.
> - You are right about the UDFs, it's a pain to get them compiled. You
>   need the source tree from MySQL, and I spent half a day trying to
>   figure out the compile options on my Solaris machine.
> - Referential integrity: never leave home without it! (I was planning to
>   add it, honestly.)

Great! Maybe a partnership. If you do what you need to do, the rest of us
can make the leap to PP.

The option of using Crystal Reports and some .NET is exciting. We may be
able to use you to complete a more general reporting tool for PP! May I ask
which language you use? May I also ask whether your code will be compatible
with Mono, and therefore runnable under Linux and Mac OS?

> But my most important question: what do you guys use for a development
> environment? Currently I do all my development work in Visual Studio
> .NET. It has everything (code editor, debugger, CVS integration) in one
> single powerful tool. Of course it is Windows based, but I also tend to
> use it for all things Perl and (small) C programs that end up on
> Unix/Linux machines. I have no idea if there is something like an IDE
> (Integrated Development Environment) for native Unix/Linux C development,
> what a good editor might be, the best tool for debugging, etc... On a
> Unix box I fall back to good old vi and the command line :) So any advice
> in this area is more than welcome.

Personally I feel I have done my time with 'vi' :) I mainly use the Windows
program EditPlus2. This is not free; some readers of this list will condemn
me for that. I do recommend it as a very fine editor, if your company will
pay for it. It can edit via FTP, which is a great ability for OS projects,
and it works well on UNIX using 'Wine'.

There are lots of other good projects out there. KDevelop is similar to
Visual Studio in look and feel, but tailored to OS projects. I quite like a
small editor called SciED, as it really understands the code and therefore
helps you construct bug-free, structured programs...

I feel this is the start of an interesting thread!

Regards, Ben.

> Kind regards,
> Tim.
>
> -----Original Message-----
> From: Ben Clewett [mailto:Be...@cl...]
> Sent: maandag 11 oktober 2004 10:26
> To: Tim Wuyts
> Cc: per...@li...
> Subject: Re: [Perfparse-devel] Summary tables
>
> Hi Tim,
>
> Tim Wuyts wrote:
>
>> Greetings Programs!
>>
>> I noticed you have a few tables in the PP db layout that are meant for
>> summarizing data. What are the plans here?
>
> These tables are present to satisfy a requirement to extract very large
> time frames (epochs) of data, e.g. several years. They will store a
> summary representation of the binary data: the average, standard
> deviation, max, min etc. for a specific epoch, e.g. the summary for a
> day's data in one record. Therefore a graph for a year is an easy 364
> points, not the impossible half a million if every minute's data were to
> be plotted, as well as a drastic cut in the database size.
>
> This has not been completed due to lack of developer time, and discussion
> about whether this is the correct method of completing this requirement.
> If there is a real need for this, then I can try and force some
> development time.
>
>> I'm currently working on an 'availability' report. It is based on the
>> raw plugin data, i.e. it only looks at the nagios_status field in the
>> _raw table. Basically, I'm creating a report showing ALL hostgroups,
>> hosts and services with their resp. uptime (%) and downtime, similar
>> to the summary on the raw history report.
>
> I am excited to hear you are giving us a report! I look forward to
> getting it merged into the product :)
>
>> My first approach was to create a MySQL UDF (user-defined function),
>> called 'status_time', that can be used in a 'group by' clause. The
>> resulting SQL then looks something like this:
>>
>> select host_name, service_description,
>>   status_time(unix_timestamp(ctime), nagios_status, 0) as UPTIME,
>>   status_time(unix_timestamp(ctime), nagios_status, 1) as WARNTIME,
>>   status_time(unix_timestamp(ctime), nagios_status, 2) as CRITICALTIME,
>>   status_time(unix_timestamp(ctime), nagios_status, 3) as UNDEFTIME
>> from perfdata_service_raw
>> where ctime > '2004-07-31' and ctime < '2004-09-01'
>> group by host_name, service_description order by ctime
>
> My first comment is a worry. I have not used UDFs, but looking at
> http://dev.mysql.com/doc/mysql/en/Adding_UDF.html I see that this is a
> complex task involving recompiling MySQL. For those people using RPMs,
> this may be hard. But there may be other ways.
>
>> This works, but performance is bad (just think about how many records
>> need to be processed for a few 100 services and 30 days of data!), and
>> I am relying on MySQL to give me the data in chronological order (so
>> far it's always been correct, but I'm not comfortable with it).
>>
>> Since the data is not changing once it was entered, I thought of
>> simplifying by 'data-warehousing' the raw data. Using the query above,
>> I summarize the availability data every x time (x could be daily,
>> weekly, monthly), and store the results in a table, e.g.
>> perfdata_summary_raw. I'm planning on creating a more elaborate script
>> (in Perl; I could try C, but it's been a while) to do this.
>
> Great idea!
>
>> Q1: did you plan on making something similar? If so, is there any code
>> I could look at/use/improve?
>
> I know of no plans to create summary data from the raw output.
>
> Rather than writing a cron job to do this, it may be best to add the
> data to your summary during the initial addition to MySQL. Wait until
> version 0.101.1, or get a pre-version from:
>
> http://ymettier.chez.tiscali.fr/perfparse-devel/index.php
>
> Look at storage_mysql.c and the function 'storage_mysql_store_line'.
> This will be the place to add data to your summary.
>
> Yves will have to reserve you a configuration option to use in your
> code, e.g. --Store_Summary_Raw. But I am getting ahead of myself here.
>
>> Q2: what should the summary_raw table look like? Here's my current
>> proposal:
>>
>> CREATE TABLE `perfdata_summary_availability` (
>>   `id` int(11) NOT NULL auto_increment,
>>   `period` varchar(10) NOT NULL default '',
>>   `host_name` varchar(75) NOT NULL default '',
>>   `service_description` varchar(75) NOT NULL default '',
>>   `sum_uptime` bigint(20) NOT NULL default '0',
>>   `sum_warntime` bigint(20) NOT NULL default '0',
>>   `sum_criticaltime` bigint(20) NOT NULL default '0',
>>   `sum_undeftime` bigint(20) NOT NULL default '0',
>>   PRIMARY KEY (`id`),
>>   UNIQUE KEY `perfdata_summary_availability_ix1`
>>     (`period`,`host_name`,`service_description`),
>>   KEY `perfdata_summary_availability_ix0`
>>     (`host_name`,`service_description`)
>> ) TYPE=InnoDB;
>
> Looks good. Very much the same as our own workings on the binary
> summary.
>
> From the work we did on that, I will suggest a few things, which you can
> ignore if you wish!
>
> The 'period VARCHAR(10)': I am not sure what this is for, or what you
> will store in this field. I can suggest another way of defining a
> period.
>
> If this table will always cover a time span of a constant number of
> seconds (e.g. 1 hour = 3600), you can store the start time of the epoch
> as part of an alternate PK (storing it as UNIX time):
>
>   epoch_start UNSIGNED LONG,
>   PRIMARY KEY (epoch_start, host, service)
>
> You can then work out which time period to use by subtracting from the
> time its modulus against the epoch duration (the summary sample period):
>
>   file.c: epoch_start = ctime - (ctime % epoch_length);
>
> Your update query is then something like:
>
>   UPDATE perfdata_summary_availability
>   SET sum_uptime = sum_uptime + 1, sum_warn ....
>   WHERE epoch_start = ctime - MOD(ctime,epoch_length)
>   AND host = ....
>
> (If the summary record already exists.)
>
> A final idea: if you include the epoch_length as part of the primary
> key, this enables you to store data for many sample durations in the
> same table, e.g. 1 hour, 1 day, 1 week etc.
>
> If you were to consider these ideas, it would make the table something
> like:
>
>   epoch_period UNSIGNED INT NOT NULL,
>   epoch_start UNSIGNED LONG NOT NULL,
>   host_name VARCHAR(75) NOT NULL,
>   service_description VARCHAR(75) NOT NULL,
>   PRIMARY KEY (epoch_period,
>     epoch_start, host_name, service_description),
>   ...
>
> Lastly, a few housekeeping bits:
>
> The host and service should really be foreign key references to the host
> and service tables. This stops the parent host and service tables being
> deleted without this table being sorted.
>
> Finally, there need to be some deletion policies. See:
>
> http://sourceforge.net/docman/display_doc.php?docid=23729&group_id=109355
>
> and look at the other tables. Both of these would eventually need code
> in the deletion policy program as well, at some distant later date.
>
> I hope this has not put you off. Please feel free to ignore any of this,
> to get yourself the report you desire to start with. I would like to see
> this report as part of the PP suite, so maybe you can start and we can
> look at it after this?
>
> Good luck,
>
> Regards, Ben.
From: Yves M. <yme...@li...> - 2004-10-11 22:47:51
|
> But my most important question: what do you guys use for development > environment? Currently I do all my development work in Visual Studio .Net. [...] > be, the best tool for debugging, etc... On a Unix box I fall back to good > old vi and the command line :) So any advice in this area is more than > welcome. I'm using vi, gcc and man pages :) Well, if you know vi, have a look at vim and disable vi compatibility (:set nocp) Also enable syntax highlighting (:syn on) And maybe you will like gvim like me ? :) Yves (http://ymettier.chez.tiscali.fr/perfparse-devel/) -- - Homepage - http://ymettier.free.fr - http://www.logicacmg.com - - GPG key - http://ymettier.free.fr/gpg.txt - - Maitretarot - http://www.nongnu.org/maitretarot/ - - GTKtalog - http://www.nongnu.org/gtktalog/ - |
From: Tim W. <tim...@pi...> - 2004-10-11 20:44:06
|
Ben, Thanks for your reply. A few remarks: - My first step is to make sure I get something in the db that I can use for reporting (in my case some sort of summary table). As for the report itself, I'm actually considering using a report generator, like Crystal Reports (comes with Visual Studio .Net, more on that below). - The idea of using epoch_start & epoch_length for a summary table is a good one, I'll try to incorporate that. - you are right about the UDFs, it's a pain to get them compiled. You need the source tree from MySQL, and I spent half a day trying to figure out the compile options on my Solaris machine. - referential integrity: never leave home without it! (I was planning to add it, honestly) But my most important question: what do you guys use for a development environment? Currently I do all my development work in Visual Studio .Net. It has everything (code editor, debugger, CVS integration) in one single powerful tool. Of course it is Windows based, but I also tend to use it for all things Perl and (small) C programs that end up on Unix/Linux machines. I have no idea if there is something like an IDE (Integrated Development Environment) for native Unix/Linux C development, what a good editor might be, the best tool for debugging, etc... On a Unix box I fall back to good old vi and the command line :) So any advice in this area is more than welcome. Kind regards, Tim. -----Original Message----- From: Ben Clewett [mailto:Be...@cl...] Sent: maandag 11 oktober 2004 10:26 To: Tim Wuyts Cc: per...@li... Subject: Re: [Perfparse-devel] Summary tables Hi Tim, Tim Wuyts wrote: > Greetings Programs! > > I noticed you have a few tables in the PP db layout that are meant for > summarizing data. What are the plans here? These tables are present to satisfy a requirement to extract very large time frames (epochs) of data. Eg, several years. These will store a summary representation of the binary data. 
The average, standard deviation, max, min etc, for a specific epoch. Eg, the summary for a day's data in one record. Therefore a graph for a year is an easy 364 points, not the impossible half a million if every minute's data were to be plotted. As well as a drastic cut in the database size. This has not been completed due to lack of developer time, and discussion about whether this is the correct method of completing this requirement. If there is a real need for this, then I can try and force some development time. > I'm currently working on an 'availability' report. It is based on the > raw plugin data, i.e. it only looks at the nagios_status field in the > _raw table. Basically, I'm creating a report showing ALL hostgroups, > hosts and services with their resp. uptime (%) and downtime, similar > to the summary on > the raw history report. I am excited to hear you are giving us a report! I look forward to getting it merged into the product :) > My first approach was to create a Mysql UDF (user-defined function), called > 'status_time' that can be used in a 'group by' clause. The resulting SQL > then looks something like this: > select host_name, service_description, > status_time(unix_timestamp(ctime), nagios_status, 0) as UPTIME, > status_time(unix_timestamp(ctime), nagios_status, 1) as WARNTIME, > status_time(unix_timestamp(ctime), nagios_status, 2) as CRITICALTIME, > status_time(unix_timestamp(ctime), nagios_status, 3) as UNDEFTIME > from perfdata_service_raw where ctime > '2004-07-31' and ctime < > '2004-09-01' > group by host_name, service_description order by ctime My first comment is a worry. I have not used UDFs, but looking at: http://dev.mysql.com/doc/mysql/en/Adding_UDF.html I see that this is a complex task involving recompiling MySQL. For those people using RPMs, this may be hard. But there may be other ways. 
> This works, but performance is bad (just think about how many records need > to be processed for a few 100 services and 30 days of data!), and I am > relying on MySQL to give me the data in chronological order (so far it's > always been correct, but I'm not comfortable with it) > > Since the data is not changing once it was entered, I thought of simplifying > by 'data-warehousing' the raw data. Using the query above, I summarize the > availability data every x time (x could be daily, weekly, monthly), and > store them in a table, e.g. perfdata_summary_raw. I'm planning on creating a > more elaborate script (in Perl, I could try C, but it's been a while) to do > this. Great idea! > Q1: did you plan on making something similar? If so, is there any code I > could look at/use/improve ? I know of no plans to create summary data from the raw output. Rather than writing a cron to do this, it may be best to add the data to your summary during the initial addition to MySQL. Wait until version 0.101.1, or get a pre-version from: http://ymettier.chez.tiscali.fr/perfparse-devel/index.php Look at storage_mysql.c and function 'storage_mysql_store_line'. This will be the place to add data to your summary. Yves will have to reserve you a configuration option to use in your code. Eg --Store_Summary_Raw. But getting ahead of myself here. > Q2: what should the summary_raw table look like? 
Here's my current proposal: > CREATE TABLE `perfdata_summary_availability` ( > `id` int(11) NOT NULL auto_increment, > `period` varchar(10) NOT NULL default '', > `host_name` varchar(75) NOT NULL default '', > `service_description` varchar(75) NOT NULL default '', > `sum_uptime` bigint(20) NOT NULL default '0', > `sum_warntime` bigint(20) NOT NULL default '0', > `sum_criticaltime` bigint(20) NOT NULL default '0', > `sum_undeftime` bigint(20) NOT NULL default '0', > PRIMARY KEY (`id`), > UNIQUE KEY `perfdata_summary_availability_ix1` > (`period`,`host_name`,`service_description`), > KEY `perfdata_summary_availability_ix0` > (`host_name`,`service_description`) > ) TYPE=InnoDB; Looks good. Very much the same as our own workings on the binary summary. From the work we did on that, I will suggest a few things, which you can ignore if you wish! The 'period VARCHAR(10)'. I am not sure what this will be for, or what you will store in this field. I can suggest another way of defining a period: If this field will always be for a time span of a constant number of seconds. Eg, 1 hour = 3600. You can store the start time of the epoch as part of an alternate PK: (Storing as UNIX time) epoch_start UNSIGNED LONG, PRIMARY KEY (epoch_start, host, service) You can then work out which time period to use by subtracting the modulus of the epoch duration (summary sample time period) against the time: file.c: epoch_start = ctime - (ctime % epoch_length); Or your update query is then something like: UPDATE perfdata_summary_availability SET sum_uptime = sum_uptime + 1, sum_warn .... WHERE epoch_start = ctime - MOD(ctime,epoch_length) AND host = .... (If the summary record already exists.) A final idea. If you include the epoch_length as part of the primary key, this enables you to store data for many sample durations in the same table. Eg, 1 hour, 1 day, 1 week etc... 
If you were to consider these ideas, it would make the table something like: epoch_period UNSIGNED INT NOT NULL, epoch_start UNSIGNED LONG NOT NULL, host_name VARCHAR(75) NOT NULL, service_description VARCHAR(75) NOT NULL, PRIMARY KEY (epoch_period, epoch_start, host_name, service_description), ... Lastly, a few housekeeping bits: The host and service should really be foreign key references to the host and service tables. This stops the parent host and service tables being deleted without this table being sorted. Finally, there needs to be some deletion policies. See: http://sourceforge.net/docman/display_doc.php?docid=23729&group_id=109355 And look at other tables. Both of these would eventually need code in the deletion policy program as well, at some distant later date. I hope this has not put you off. Please feel free to ignore any of this to get yourself the report you desire to start with. I would like to see this report as part of the PP suite, so maybe you can start and we can look at it after this? Good luck, Regards, Ben. |
From: Ben C. <Be...@cl...> - 2004-10-11 08:25:55
|
Hi Tim, Tim Wuyts wrote: > Greetings Programs! > > I noticed you have a few tables in the PP db layout that are meant for > summarizing data. What are the plans here? These tables are present to satisfy a requirement to extract very large time frames (epochs) of data. Eg, several years. These will store a summary representation of the binary data. The average, standard deviation, max, min etc, for a specific epoch. Eg, the summary for a day's data in one record. Therefore a graph for a year is an easy 364 points, not the impossible half a million if every minute's data were to be plotted. As well as a drastic cut in the database size. This has not been completed due to lack of developer time, and discussion about whether this is the correct method of completing this requirement. If there is a real need for this, then I can try and force some development time. > I'm currently working on an 'availability' report. It is based on the raw > plugin data, i.e. it only looks at the nagios_status field in the _raw > table. Basically, I'm creating a report showing ALL hostgroups, hosts and > services with their resp. uptime (%) and downtime, similar to the summary on > the raw history report. I am excited to hear you are giving us a report! I look forward to getting it merged into the product :) > My first approach was to create a Mysql UDF (user-defined function), called > 'status_time' that can be used in a 'group by' clause. The resulting SQL > then looks something like this: > select host_name, service_description, > status_time(unix_timestamp(ctime), nagios_status, 0) as UPTIME, > status_time(unix_timestamp(ctime), nagios_status, 1) as WARNTIME, > status_time(unix_timestamp(ctime), nagios_status, 2) as CRITICALTIME, > status_time(unix_timestamp(ctime), nagios_status, 3) as UNDEFTIME > from perfdata_service_raw where ctime > '2004-07-31' and ctime < > '2004-09-01' > group by host_name, service_description order by ctime My first comment is a worry. 
I have not used UDFs, but looking at: http://dev.mysql.com/doc/mysql/en/Adding_UDF.html I see that this is a complex task involving recompiling MySQL. For those people using RPMs, this may be hard. But there may be other ways. > This works, but performance is bad (just think about how many records need > to be processed for a few 100 services and 30 days of data!), and I am > relying on MySQL to give me the data in chronological order (so far it's > always been correct, but I'm not comfortable with it) > > Since the data is not changing once it was entered, I thought of simplifying > by 'data-warehousing' the raw data. Using the query above, I summarize the > availability data every x time (x could be daily, weekly, monthly), and > store them in a table, e.g. perfdata_summary_raw. I'm planning on creating a > more elaborate script (in Perl, I could try C, but it's been a while) to do > this. Great idea! > Q1: did you plan on making something similar? If so, is there any code I > could look at/use/improve ? I know of no plans to create summary data from the raw output. Rather than writing a cron to do this, it may be best to add the data to your summary during the initial addition to MySQL. Wait until version 0.101.1, or get a pre-version from: http://ymettier.chez.tiscali.fr/perfparse-devel/index.php Look at storage_mysql.c and function 'storage_mysql_store_line'. This will be the place to add data to your summary. Yves will have to reserve you a configuration option to use in your code. Eg --Store_Summary_Raw. But getting ahead of myself here. > Q2: what should the summary_raw table look like? 
Here's my current proposal: > CREATE TABLE `perfdata_summary_availability` ( > `id` int(11) NOT NULL auto_increment, > `period` varchar(10) NOT NULL default '', > `host_name` varchar(75) NOT NULL default '', > `service_description` varchar(75) NOT NULL default '', > `sum_uptime` bigint(20) NOT NULL default '0', > `sum_warntime` bigint(20) NOT NULL default '0', > `sum_criticaltime` bigint(20) NOT NULL default '0', > `sum_undeftime` bigint(20) NOT NULL default '0', > PRIMARY KEY (`id`), > UNIQUE KEY `perfdata_summary_availability_ix1` > (`period`,`host_name`,`service_description`), > KEY `perfdata_summary_availability_ix0` > (`host_name`,`service_description`) > ) TYPE=InnoDB; Looks good. Very much the same as our own workings on the binary summary. From the work we did on that, I will suggest a few things, which you can ignore if you wish! The 'period VARCHAR(10)'. I am not sure what this will be for, or what you will store in this field. I can suggest another way of defining a period: If this field will always be for a time span of a constant number of seconds. Eg, 1 hour = 3600. You can store the start time of the epoch as part of an alternate PK: (Storing as UNIX time) epoch_start UNSIGNED LONG, PRIMARY KEY (epoch_start, host, service) You can then work out which time period to use by subtracting the modulus of the epoch duration (summary sample time period) against the time: file.c: epoch_start = ctime - (ctime % epoch_length); Or your update query is then something like: UPDATE perfdata_summary_availability SET sum_uptime = sum_uptime + 1, sum_warn .... WHERE epoch_start = ctime - MOD(ctime,epoch_length) AND host = .... (If the summary record already exists.) A final idea. If you include the epoch_length as part of the primary key, this enables you to store data for many sample durations in the same table. Eg, 1 hour, 1 day, 1 week etc... 
If you were to consider these ideas, it would make the table something like: epoch_period UNSIGNED INT NOT NULL, epoch_start UNSIGNED LONG NOT NULL, host_name VARCHAR(75) NOT NULL, service_description VARCHAR(75) NOT NULL, PRIMARY KEY (epoch_period, epoch_start, host_name, service_description), ... Lastly, a few housekeeping bits: The host and service should really be foreign key references to the host and service tables. This stops the parent host and service tables being deleted without this table being sorted. Finally, there needs to be some deletion policies. See: http://sourceforge.net/docman/display_doc.php?docid=23729&group_id=109355 And look at other tables. Both of these would eventually need code in the deletion policy program as well, at some distant later date. I hope this has not put you off. Please feel free to ignore any of this to get yourself the report you desire to start with. I would like to see this report as part of the PP suite, so maybe you can start and we can look at it after this? Good luck, Regards, Ben. |
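Ben's epoch arithmetic (epoch_start = ctime - (ctime % epoch_length)) is easy to sanity-check outside the database. The sketch below is an illustration only, not PerfParse code; the key layout simply mirrors the (epoch_period, epoch_start, host_name, service_description) proposal in this thread.

```python
# Illustration (not PerfParse code) of the epoch bucketing described
# in the thread: a sample's timestamp maps to the start of its
# fixed-length period, and (epoch_length, epoch_start, host, service)
# acts as the summary table's natural key, so several durations
# (hourly, daily, weekly) can share one table.

def epoch_start(ctime, epoch_length):
    """Start (UNIX time) of the epoch_length-second period containing ctime."""
    return ctime - (ctime % epoch_length)

def summary_key(ctime, host, service, epoch_length):
    """Natural key of the summary row this sample belongs to."""
    return (epoch_length, epoch_start(ctime, epoch_length), host, service)

# A sample at 01:01:00 falls in the hourly bucket starting at 01:00:00:
# epoch_start(3660, 3600) == 3600
```

Samples from the same hour all produce the same key, so the UPDATE-if-exists query Ben sketches only ever touches one summary row per period.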
From: Tim W. <tim...@pi...> - 2004-10-08 07:54:17
|
Greetings Programs! I noticed you have a few tables in the PP db layout that are meant for summarizing data. What are the plans here? I'm currently working on an 'availability' report. It is based on the raw plugin data, i.e. it only looks at the nagios_status field in the _raw table. Basically, I'm creating a report showing ALL hostgroups, hosts and services with their resp. uptime (%) and downtime, similar to the summary on the raw history report. My first approach was to create a Mysql UDF (user-defined function), called 'status_time' that can be used in a 'group by' clause. The resulting SQL then looks something like this: select host_name, service_description, status_time(unix_timestamp(ctime), nagios_status, 0) as UPTIME, status_time(unix_timestamp(ctime), nagios_status, 1) as WARNTIME, status_time(unix_timestamp(ctime), nagios_status, 2) as CRITICALTIME, status_time(unix_timestamp(ctime), nagios_status, 3) as UNDEFTIME from perfdata_service_raw where ctime > '2004-07-31' and ctime < '2004-09-01' group by host_name, service_description order by ctime This works, but performance is bad (just think about how many records need to be processed for a few 100 services and 30 days of data!), and I am relying on MySQL to give me the data in chronological order (so far it's always been correct, but I'm not comfortable with it) Since the data is not changing once it was entered, I thought of simplifying by 'data-warehousing' the raw data. Using the query above, I summarize the availability data every x time (x could be daily, weekly, monthly), and store them in a table, e.g. perfdata_summary_raw. I'm planning on creating a more elaborate script (in Perl, I could try C, but it's been a while) to do this. Q1: did you plan on making something similar? If so, is there any code I could look at/use/improve ? Q2: what should the summary_raw table look like? 
Here's my current proposal: CREATE TABLE `perfdata_summary_availability` ( `id` int(11) NOT NULL auto_increment, `period` varchar(10) NOT NULL default '', `host_name` varchar(75) NOT NULL default '', `service_description` varchar(75) NOT NULL default '', `sum_uptime` bigint(20) NOT NULL default '0', `sum_warntime` bigint(20) NOT NULL default '0', `sum_criticaltime` bigint(20) NOT NULL default '0', `sum_undeftime` bigint(20) NOT NULL default '0', PRIMARY KEY (`id`), UNIQUE KEY `perfdata_summary_availability_ix1` (`period`,`host_name`,`service_description`), KEY `perfdata_summary_availability_ix0` (`host_name`,`service_description`) ) TYPE=InnoDB; Thanks for the input. Tim |
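The per-status accounting that the status_time UDF performs can be sketched in plain Python for clarity. This is a hedged illustration of the idea, not the UDF's actual C implementation; it assumes samples arrive in chronological order (the very ordering Tim is uncomfortable relying on) and that each sample's status holds until the next sample.

```python
# Pure-Python sketch of what a status_time()-style accumulator computes:
# given (ctime, nagios_status) samples in chronological order, credit
# the interval up to each next sample to the status reported at its
# start.  Status codes follow Nagios: 0=OK, 1=WARNING, 2=CRITICAL,
# 3=UNKNOWN.  Illustration only, not the UDF's C code.

def status_times(samples):
    """samples: iterable of (unix_time, status); returns {status: seconds}."""
    totals = {0: 0, 1: 0, 2: 0, 3: 0}
    prev_time, prev_status = None, None
    for ctime, status in samples:
        if prev_time is not None:
            totals[prev_status] += ctime - prev_time
        prev_time, prev_status = ctime, status
    return totals

# Example: 60 s OK, then 30 s WARNING, ending on a CRITICAL sample:
# status_times([(0, 0), (60, 1), (90, 2)]) -> {0: 60, 1: 30, 2: 0, 3: 0}
```

Pre-aggregating these totals per period, as proposed, turns the expensive full-table scan into a cheap sum over summary rows.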
From: Ben C. <Be...@cl...> - 2004-10-04 12:54:17
|
New version 0.100.7 out. This version does not contain Yves' new parser, which I invite you to check out. This will come soon. This version has some of the options Jeff Scott requested. The 'Saved Graph' options are now far more competent. Jeff also asked for reports of multiple graphs. This is a long term idea I have not had time to develop, but it will come soon. There is now an option to permanently delete a host and all attached data. There are also fixes for two big bugs. My apologies to any users who experienced these. 1. New users might not have got *any* raw plugin output reporting to work. This is fixed. 2. Deletion Policies for Binary data may have crashed. This is also fixed. Regards, Ben. |
From: Yves M. <yme...@li...> - 2004-09-30 13:12:36
|
Hi developers ! After much work, Ben and I are proud to announce that perfparse-0.101.0ym10 is the first release candidate for the next version 0.101.1. This version and other 0.101.0 versions can be downloaded here : http://ymettier.chez.tiscali.fr/perfparse-devel/index.php This version contains major changes in the perfparse scanner, and nearly no changes in the database and cgis. The perfparse scanner has been rewritten, nearly from scratch. New features : - more modular. -> Have a look at storage_print.c to see how you can play with it :) -> output modules like storage_print and storage_mysql will probably be extracted and become stand-alone plugins one day. -> new perfdata parser that may become a stand-alone library one day - raw output : perfparse can output what it reads, in optionally rotated files. - input : perfparse can still read a file, but also reads from stdin - file deletion for nagios logs is no longer supported. This is not the job of perfparse. You can do it with scripts. Future versions will probably make that need obsolete (using sockets...) - file positioning is no longer supported either. This was probably a nice feature of perfparse, but it was hard to implement cleanly, and we probably will not need it in the future. Ben plans to add new features to the CGIs. They may be included in 0.101.1 or not. Wait and see :) Please test 0.101.0ym10 and tell us about bugs. 0.101.0ym10 does not have too many new features compared to all that we can implement in it now. The reason is that we want to validate that it works. If yes, future developments for the parser will be, in priority : - client-server ability to avoid nagios log files (me) - new storage modules (contributions welcome, particularly oracle and postgresql storage, rrd output...) 
- extraction of the parser into a standalone library (me and Ben) - stand-alone storage modules instead of built-in ones (me) - optional loading of stand-alone storage modules (me). - ... I don't mention Ben's ideas for the CGI and the database, but there are many too :) Happy perfparsing :) Yves -- - Homepage - http://ymettier.free.fr - http://www.logicacmg.com - - GPG key - http://ymettier.free.fr/gpg.txt - - Maitretarot - http://www.nongnu.org/maitretarot/ - - GTKtalog - http://www.nongnu.org/gtktalog/ - |
From: Yves M. <yme...@li...> - 2004-09-17 13:26:52
|
Hi Flo, hi, everybody, I'm taking an old mail from Flo from this list. Who has already tested this ? Garry, could you write that method in the documentation as an alternate method to run perfparse, please ? More below... > - xpdfile* configuration in nagios did not work for me, but you should > note in README that it also works if you use Flo: this is probably because you compiled nagios without --with-file-perfdata > > host_perfdata_command=process-host-perfdata > service_perfdata_command=process-service-perfdata > > in nagios.cfg and > define command{ > command_name process-host-perfdata > command_line /usr/bin/printf "%b" > "$TIMET$\t$HOSTNAME$\t$OUTPUT$\t$PERFDATA$\n" >> > /usr/local/nagios/var/host-perfdata.out > } > define command{ > command_name process-service-perfdata > command_line /usr/bin/printf "%b" > "$TIMET$\t$HOSTNAME$\t$SERVICEDESC$\t$OUTPUT$\t$SERVICESTATE$\t$PERFDATA$\n" >> > /usr/local/nagios/var/service-perfdata.out > } > > in misccommands.cfg > > Flo I just tested it. It works. But it is not so easy to do. Here are the traps to avoid : 1/ recompile nagios with --with-default-perfdata instead of --with-file-perfdata. Those are exclusive and if you put both, it will not work like we want. 2/ Do what Flo describes above. 3/ You cannot remove that file without sending a signal to nagios to restart it. I wanted to prevent some problems with that, but in fact, this is a limitation. Things will change in the future. So what you do is a script: #!/bin/sh mv /usr/local/nagios/var/service-perfdata.out /tmp/service-perfdata.out perfparse (where the file to scan is /tmp/service-perfdata.out) rm /tmp/service-perfdata.out In the future (not so far), I think I will no longer support sending a signal to nagios to restart it. The two reasons are that it is a limitation that I would like to get rid of, and because some users have problems with it. So the new way to run perfparse will be that one. 
And of course, perfparse will be able to mv and rm the file if you want to delete it. You won't need the script. In the future, perfparse will also be able to read from the command line, allowing the use of a pipe instead of a temporary file. In the future(*), perfparse will also be able to write a compressed and rotated log with what it parsed (without nagios needing to write it somewhere) Does anybody have a good reason to keep the current model with a signal sent to nagios ? Any comment, idea ? If anybody wants to code some rrd or xml output right now, please contact me :) Thanks Flo and others for the ideas, Yves (*) that is already implemented in http://ymettier.chez.tiscali.fr/perfparse-devel/perfparse-0.101.0ym3.tar.gz. But that version does not connect to mysql yet. Work to do... -- - Homepage - http://ymettier.free.fr - http://www.logicacmg.com - - GPG key - http://ymettier.free.fr/gpg.txt - - Maitretarot - http://www.nongnu.org/maitretarot/ - - GTKtalog - http://www.nongnu.org/gtktalog/ - |
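The move-then-parse cycle in Yves' shell script can also be expressed in a few lines of Python. The paths and the parse callback here are placeholders, not PerfParse's real interface; this is only a sketch of the workflow.

```python
# Sketch (with placeholder paths) of the rotate-then-parse cycle
# described above: move the perfdata file aside, process the moved
# copy, then remove it.  os.rename is atomic on a single filesystem,
# so the parser never reads a file under the name the writer is
# still appending to.
import os

def rotate_and_parse(live_path, work_path, parse):
    """Rename live_path to work_path, run parse(work_path), then delete it."""
    if not os.path.exists(live_path):
        return False  # nothing written since the last run
    os.rename(live_path, work_path)
    try:
        parse(work_path)
    finally:
        os.remove(work_path)
    return True
```

The try/finally mirrors the script's unconditional rm: the temporary copy is deleted even if the parse step fails, so a bad batch never blocks the next rotation.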
From: Ben C. <Be...@cl...> - 2004-09-16 07:40:53
|
New version 0.100.6. Version from yesterday would not compile cleanly on some systems. This version does. There are no other changes to version 0.100.5. Please enjoy! Ben |
From: Harper M. <hm...@it...> - 2004-09-16 00:05:08
|
Forgot to mention: Need to re-run ./configure after installing glib2-devel. - Harper Harper Mann Groundwork Open Source Solutions 510-599-2075 (cell) -----Original Message----- From: per...@li... [mailto:per...@li...] On Behalf Of barry maclean Sent: Wednesday, September 15, 2004 1:58 PM To: per...@li... Subject: [Perfparse-devel] make problems I'm trying to compile on a redhat enterprise 3 server. I get the following errors. What I'm I missing???? [root@testnagios perfparse-0.100.5]# make make all-recursive make[1]: Entering directory `/downloads/perfparse-0.100.5' Making all in libperfparse make[2]: Entering directory `/downloads/perfparse-0.100.5/libperfparse' if /bin/sh ../libtool --mode=compile gcc -DHAVE_CONFIG_H -I. -I. -I.. -I/usr/include/mysql -mc pu=i486 -fno-strength-reduce '-DSYSCONFDIR="/usr/local/nagios/etc"' -g -O2 -Wall -MT libpe rfparse_la-config_file.lo -MD -MP -MF ".deps/libperfparse_la-config_file.Tpo" -c -o libperfpar se_la-config_file.lo `test -f 'config_file.c' || echo './'`config_file.c; \ then mv -f ".deps/libperfparse_la-config_file.Tpo" ".deps/libperfparse_la-config_file.Plo"; el se rm -f ".deps/libperfparse_la-config_file.Tpo"; exit 1; fi gcc -DHAVE_CONFIG_H -I. -I. -I.. -I/usr/include/mysql -mcpu=i486 -fno-strength-reduce -DSYSCO NFDIR=\"/usr/local/nagios/etc\" -g -O2 -Wall -MT libperfparse_la-config_file.lo -MD -MP -MF .d eps/libperfparse_la-config_file.Tpo -c config_file.c -fPIC -DPIC -o .libs/libperfparse_la-con fig_file.o config_file.c:39:18: glib.h: No such file or directory config_file.c: In function `conf_new': config_file.c:146: `FALSE' undeclared (first use in this function) config_file.c:146: (Each undeclared identifier is reported only once config_file.c:146: for each function it appears in.) 
config_file.c: In function `conf_load': config_file.c:208: `TRUE' undeclared (first use in this function) config_file.c:208: `FALSE' undeclared (first use in this function) make[2]: *** [libperfparse_la-config_file.lo] Error 1 make[2]: Leaving directory `/downloads/perfparse-0.100.5/libperfparse' make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory `/downloads/perfparse-0.100.5' make: *** [all] Error 2 [root@testnagios perfparse-0.100.5]# Many thanks Barry ------------------------------------------------------- This SF.Net email is sponsored by: thawte's Crypto Challenge Vl Crack the code and win a Sony DCRHC40 MiniDV Digital Handycam Camcorder. More prizes in the weekly Lunch Hour Challenge. Sign up NOW http://ad.doubleclick.net/clk;10740251;10262165;m _______________________________________________ Perfparse-devel mailing list Per...@li... https://lists.sourceforge.net/lists/listinfo/perfparse-devel |
From: Harper M. <hm...@it...> - 2004-09-16 00:04:23
|
You need glib2-devel Run "up2date glib2 glib2-devel" Cheers, - Harper Harper Mann Groundwork Open Source Solutions 510-599-2075 (cell) -----Original Message----- From: per...@li... [mailto:per...@li...] On Behalf Of barry maclean Sent: Wednesday, September 15, 2004 1:58 PM To: per...@li... Subject: [Perfparse-devel] make problems I'm trying to compile on a redhat enterprise 3 server. I get the following errors. What I'm I missing???? [root@testnagios perfparse-0.100.5]# make make all-recursive make[1]: Entering directory `/downloads/perfparse-0.100.5' Making all in libperfparse make[2]: Entering directory `/downloads/perfparse-0.100.5/libperfparse' if /bin/sh ../libtool --mode=compile gcc -DHAVE_CONFIG_H -I. -I. -I.. -I/usr/include/mysql -mc pu=i486 -fno-strength-reduce '-DSYSCONFDIR="/usr/local/nagios/etc"' -g -O2 -Wall -MT libpe rfparse_la-config_file.lo -MD -MP -MF ".deps/libperfparse_la-config_file.Tpo" -c -o libperfpar se_la-config_file.lo `test -f 'config_file.c' || echo './'`config_file.c; \ then mv -f ".deps/libperfparse_la-config_file.Tpo" ".deps/libperfparse_la-config_file.Plo"; el se rm -f ".deps/libperfparse_la-config_file.Tpo"; exit 1; fi gcc -DHAVE_CONFIG_H -I. -I. -I.. -I/usr/include/mysql -mcpu=i486 -fno-strength-reduce -DSYSCO NFDIR=\"/usr/local/nagios/etc\" -g -O2 -Wall -MT libperfparse_la-config_file.lo -MD -MP -MF .d eps/libperfparse_la-config_file.Tpo -c config_file.c -fPIC -DPIC -o .libs/libperfparse_la-con fig_file.o config_file.c:39:18: glib.h: No such file or directory config_file.c: In function `conf_new': config_file.c:146: `FALSE' undeclared (first use in this function) config_file.c:146: (Each undeclared identifier is reported only once config_file.c:146: for each function it appears in.) 
config_file.c: In function `conf_load': config_file.c:208: `TRUE' undeclared (first use in this function) config_file.c:208: `FALSE' undeclared (first use in this function) make[2]: *** [libperfparse_la-config_file.lo] Error 1 make[2]: Leaving directory `/downloads/perfparse-0.100.5/libperfparse' make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory `/downloads/perfparse-0.100.5' make: *** [all] Error 2 [root@testnagios perfparse-0.100.5]# Many thanks Barry ------------------------------------------------------- This SF.Net email is sponsored by: thawte's Crypto Challenge Vl Crack the code and win a Sony DCRHC40 MiniDV Digital Handycam Camcorder. More prizes in the weekly Lunch Hour Challenge. Sign up NOW http://ad.doubleclick.net/clk;10740251;10262165;m _______________________________________________ Perfparse-devel mailing list Per...@li... https://lists.sourceforge.net/lists/listinfo/perfparse-devel |
From: barry m. <bar...@sh...> - 2004-09-16 00:01:44
|
I'm trying to compile on a redhat enterprise 3 server. I get the following errors. What am I missing?

[root@testnagios perfparse-0.100.5]# make
make all-recursive
make[1]: Entering directory `/downloads/perfparse-0.100.5'
Making all in libperfparse
make[2]: Entering directory `/downloads/perfparse-0.100.5/libperfparse'
if /bin/sh ../libtool --mode=compile gcc -DHAVE_CONFIG_H -I. -I. -I.. -I/usr/include/mysql -mcpu=i486 -fno-strength-reduce '-DSYSCONFDIR="/usr/local/nagios/etc"' -g -O2 -Wall -MT libperfparse_la-config_file.lo -MD -MP -MF ".deps/libperfparse_la-config_file.Tpo" -c -o libperfparse_la-config_file.lo `test -f 'config_file.c' || echo './'`config_file.c; \
then mv -f ".deps/libperfparse_la-config_file.Tpo" ".deps/libperfparse_la-config_file.Plo"; else rm -f ".deps/libperfparse_la-config_file.Tpo"; exit 1; fi
gcc -DHAVE_CONFIG_H -I. -I. -I.. -I/usr/include/mysql -mcpu=i486 -fno-strength-reduce -DSYSCONFDIR=\"/usr/local/nagios/etc\" -g -O2 -Wall -MT libperfparse_la-config_file.lo -MD -MP -MF .deps/libperfparse_la-config_file.Tpo -c config_file.c -fPIC -DPIC -o .libs/libperfparse_la-config_file.o
config_file.c:39:18: glib.h: No such file or directory
config_file.c: In function `conf_new':
config_file.c:146: `FALSE' undeclared (first use in this function)
config_file.c:146: (Each undeclared identifier is reported only once
config_file.c:146: for each function it appears in.)
config_file.c: In function `conf_load':
config_file.c:208: `TRUE' undeclared (first use in this function)
config_file.c:208: `FALSE' undeclared (first use in this function)
make[2]: *** [libperfparse_la-config_file.lo] Error 1
make[2]: Leaving directory `/downloads/perfparse-0.100.5/libperfparse'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/downloads/perfparse-0.100.5'
make: *** [all] Error 2
[root@testnagios perfparse-0.100.5]#

Many thanks

Barry
|
From: Ben C. <Be...@cl...> - 2004-09-15 08:56:38
|
New version 0.100.5.

This version adds proper date selection to the raw history reports. It also fixes a bug in the same report where the summary reported bad data.

Regards,

Ben.
|
From: Ben C. <Be...@cl...> - 2004-09-08 08:20:17
|
New version 0.100.4.

Some users have reported strange messages in the graph and slow response times. These stem from an attempt to have the graph image calculate the statistics and then display them in the calling CGI code. This approach does not work with proxy servers and other as yet unknown factors. This release therefore does not display graph statistics until we have worked out a working model for this.

Please keep the excellent feedback coming in!

Regards,

Ben.
|