From: Ben C. <be...@cl...> - 2004-12-19 13:58:35
Dear users,

We need some help with documentation. This is a plea for any users who would like to contribute to PerfParse. If you can do any of the following, we would be very interested in hearing from you:

- Proof-read documentation.
- Provide documentation for DBA tasks, including dumping and restoring PerfParse (which involves deleting the InnoDB table space). If you can provide a script, even better :)
- Provide in-depth reports of setup examples you have found to work, which may be of use to other users.
- Translate documentation.

Any work completed will of course be credited in your name if you so wish.

Kind regards,
Ben Clewett.
From: Tim W. <tim...@gm...> - 2004-12-10 14:00:53
This might help (from the MySQL documentation):

    To make it easier to reload dump files for tables that have foreign key
    relationships, mysqldump automatically includes a statement in the dump
    output to set FOREIGN_KEY_CHECKS to 0 as of MySQL 4.1.1. This avoids
    problems with tables having to be reloaded in a particular order when
    the dump is reloaded. For earlier versions, you can disable the
    variable manually within mysql when loading the dump file like this:

    mysql> SET FOREIGN_KEY_CHECKS = 0;
    mysql> SOURCE dump_file_name
    mysql> SET FOREIGN_KEY_CHECKS = 1;

    This allows you to import the tables in any order if the dump file
    contains tables that are not correctly ordered for foreign keys. It
    also speeds up the import operation. FOREIGN_KEY_CHECKS is available
    starting from MySQL 3.23.52 and 4.0.3.

So you can easily keep the reference...

Tim

On Fri, 10 Dec 2004 13:48:30 +0000, Ben Clewett <bcl...@pe...> wrote:
> More database design decisions.
>
> This is a problem which may affect many users, so I would really like to
> get it right with the new schema.
>
> We currently have a cyclic referential link in our tables. Yves has
> reported that this makes PP data in MySQL impossible to dump and reload.
>
> This reference is as follows on this ERD:
>
>        <- 1. Last value recorded
>   +------------------------------+
>   |                              |
>   |                              |
> [Binary Data]               [Metrics]
>  \|/                            |
>   |                             |
>   +------------------------------+
>        2. Metric details ->
>
> The '1. Last Value Recorded' link allows the binary report, showing for
> each metric when and what the last reading was. Useful information to
> have for current and future work.
>
> The '2. Metric Details' link is used for referential integrity, stopping
> a metric being deleted while there is data for that metric, and therefore
> avoiding hundreds of MB of unreferenced data. If this is not used, the
> user could end up with GB of orphaned data which they cannot access or
> delete.
>
> I can think of several possible solutions to this:
>
> (i) Not worry about referential integrity: lose link 2, add the RI
> programmatically, and hope that our programs work.
>
> (ii) Drop the binary report: lose link 1.
>
> (iii) Hope that newer versions of MySQL have a dump / import program
> which works with cyclically referenced tables.
>
> (iv) Program our own dump program which works.
>
> I am tending towards option (iv)...
>
> As I know there are people on this list who know many times more about
> databases than myself, I would really like to hear from you!
>
> Regards,
>
> Ben.
>
> --
> Ben Clewett bcl...@pe...
> PerfParse http://www.perfparse.org
> PP FAQ http://wiki.perfparse.org/tiki-list_faqs.php
>
> -------------------------------------------------------
> SF email is sponsored by - The IT Product Guide
> Read honest & candid reviews on hundreds of IT Products from real users.
> Discover which products truly live up to the hype. Start reading now.
> http://productguide.itmanagersjournal.com/
> _______________________________________________
> Perfparse-devel mailing list
> Per...@li...
> https://lists.sourceforge.net/lists/listinfo/perfparse-devel
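The FOREIGN_KEY_CHECKS trick Tim quotes can be sketched with SQLite's equivalent `foreign_keys` pragma. This is an analogue for illustration only, not MySQL itself, and the `parent`/`child` tables are invented; the point is the same: with enforcement on, rows must load parent-first, while disabling checks lets a dump reload in any order.

```python
import sqlite3

# Autocommit mode, so the PRAGMA statements take effect immediately
# (SQLite ignores a foreign_keys PRAGMA issued inside an open transaction).
con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("PRAGMA foreign_keys = ON")  # SQLite enforcement is off by default
con.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
con.execute(
    "CREATE TABLE child (id INTEGER PRIMARY KEY,"
    " parent_id INTEGER REFERENCES parent(id))"
)

# With enforcement on, loading child rows before their parent rows fails.
try:
    con.execute("INSERT INTO child VALUES (1, 10)")
    out_of_order_failed = False
except sqlite3.IntegrityError:
    out_of_order_failed = True

# Turning enforcement off first (the FOREIGN_KEY_CHECKS = 0 trick) lets the
# same rows load in any order; re-enable checks once the restore is done.
con.execute("PRAGMA foreign_keys = OFF")
con.execute("INSERT INTO child VALUES (1, 10)")
con.execute("INSERT INTO parent VALUES (10)")
con.execute("PRAGMA foreign_keys = ON")
print(out_of_order_failed)  # True
```

Note that unlike MySQL's session variable, SQLite's pragma is a no-op while a transaction is open, hence the autocommit connection.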
From: Ben C. <bcl...@pe...> - 2004-12-10 13:48:34
More database design decisions.

This is a problem which may affect many users, so I would really like to get it right with the new schema.

We currently have a cyclic referential link in our tables. Yves has reported that this makes PP data in MySQL impossible to dump and reload. This reference is as follows on this ERD:

       <- 1. Last value recorded
  +------------------------------+
  |                              |
  |                              |
[Binary Data]               [Metrics]
 \|/                            |
  |                             |
  +------------------------------+
       2. Metric details ->

The '1. Last Value Recorded' link allows the binary report, showing for each metric when and what the last reading was. Useful information to have for current and future work.

The '2. Metric Details' link is used for referential integrity, stopping a metric being deleted while there is data for that metric, and therefore avoiding hundreds of MB of unreferenced data. If this is not used, the user could end up with GB of orphaned data which they cannot access or delete.

I can think of several possible solutions to this:

(i) Not worry about referential integrity: lose link 2, add the RI programmatically, and hope that our programs work.

(ii) Drop the binary report: lose link 1.

(iii) Hope that newer versions of MySQL have a dump / import program which works with cyclically referenced tables.

(iv) Program our own dump program which works.

I am tending towards option (iv)...

As I know there are people on this list who know many times more about databases than myself, I would really like to hear from you!

Regards,

Ben.

--
Ben Clewett bcl...@pe...
PerfParse http://www.perfparse.org
PP FAQ http://wiki.perfparse.org/tiki-list_faqs.php
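Option (i) above, enforcing referential integrity in application code instead of through a schema link, can be sketched as follows. This is a hypothetical illustration, not PerfParse code: the `metrics`/`binary_data` tables and the `delete_metric` helper are invented, and SQLite stands in for MySQL.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Hypothetical simplified tables: no FK from metrics back to binary_data,
# so the cycle is gone and a plain dump/reload works.
con.execute("CREATE TABLE metrics (metric_id INTEGER PRIMARY KEY, last_ctime TEXT)")
con.execute("CREATE TABLE binary_data (metric_id INTEGER, ctime TEXT, value REAL)")
con.execute("INSERT INTO metrics VALUES (1, '2004-12-10 09:00:00')")
con.execute("INSERT INTO binary_data VALUES (1, '2004-12-10 09:00:00', 0.5)")

def delete_metric(con, metric_id):
    """Refuse to delete a metric that still owns binary data: the
    referential-integrity check of link 2, done in code (option i)."""
    (n,) = con.execute(
        "SELECT COUNT(*) FROM binary_data WHERE metric_id = ?", (metric_id,)
    ).fetchone()
    if n:
        raise ValueError(f"metric {metric_id} still has {n} data rows")
    con.execute("DELETE FROM metrics WHERE metric_id = ?", (metric_id,))
```

With this guard, deleting metric 1 raises until its `binary_data` rows are removed first, which is exactly the orphaned-data protection the schema link provided, at the cost of trusting every writer to go through the helper.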
From: Ben C. <bcl...@pe...> - 2004-12-10 10:20:47
|
Tim Wuyts wrote: > Ben, > If I understand correctly, every record in the METRIC table has a > unique id, plus a reference to a SERVICE record (using the > service_id). Every record in the SERVICE table has a unique id and a > reference to a HOST record (using the host_id). > > In that case, good _relational_ db design commands that you use option > 2, since the other option is again duplication of information. > > Tim. Tim, These are my feeling as well. However the complex key structure was requested by a member of this group, in order that data for an entire service or host could be selected more easily. Not a use-case which is currently used, but may be needed by some user :) I'll wait a while to see if anybody else has strong feelings on the subject. > PS: Nice ERD ;) Thanks! Ben > > > On Fri, 10 Dec 2004 09:49:16 +0000, Ben Clewett <bcl...@pe...> wrote: > >>Hi Tim, >> >>No problem. >> >>The host, service and metric is a unique ID of a real host, service or >>metric entries. Referenced in host, service and metric tables. >> >>The host table, for example: >> >>CREATE TABLE IF NOT EXISTS perfdata_host ( >> host_name VARCHAR(75) NOT NULL PRIMARY KEY, >> host_id INT NOT NULL, >> UNIQUE(host_id), >> .... >> >>(host_name is PK because it is already in current tables. host_id is a >>new field.) 
>> >>The full table definition of the new binary tables may be one of these: >> >>Option 1: >> >>CREATE TABLE IF NOT EXISTS perfdata_bin ( >> host_id INT NOT NULL, >> service_id INT NOT NULL, >> metric_id INT NOT NULL, >> ctime DATETIME NOT NULL, >> PRIMARY KEY (host_id, service_id, metric_id, ctime), >> value DOUBLE, >> state TINYINT NOT NULL, >> bin_extra_id INT >> ) TYPE=InnoDB; >> >>Option 2: >> >>CREATE TABLE IF NOT EXISTS perfdata_bin ( >> metric_id INT NOT NULL, >> ctime DATETIME NOT NULL, >> PRIMARY KEY (metric_id, ctime), >> value DOUBLE, >> state TINYINT NOT NULL, >> bin_extra_id INT >> ) TYPE=InnoDB; >> >>The difference is that the second is: >>- Smaller >>- Faster >>- Easier to construct queries. >> >>A simple ERD: >> >>Option 1: >> >>[host] ---< [service] ---< [metric] >> | | | >> | +----------+ | >> | | | >> | | +-----------------------+ >> | | | >> ^ ^ ^ >>[bin data] >> v >> | >>[extra data] >> >>Option 2: >> >>[host] ---< [service] ---< [metric] >> | >> | >> | >> +-----------------------+ >> | >> ^ >>[bin data] >> v >> | >>[extra data] >> >> >>Ben. >> >> >> >> >>Tim Wuyts wrote: >> >> >>>Ben, >>>For the sake of keeping the discussion clear, could you provide us >>>with a complete database schema? For the moment, it is not clear what >>>e.g. metric is (it's declared INT, so I suppose it references some >>>other table, but this is not defined). >>> >>>A graphical representation (ERD) would be nice :) >>> >>>Thx, >>>Tim >>> >>>On Fri, 10 Dec 2004 08:39:43 +0000, Ben Clewett <bcl...@pe...> wrote: >>> >>> >>>>Dear PP development community. >>>> >>>>I am recoding our main database to be smaller and faster. As discussed >>>>in this document: >>>> >>>>http://wiki.perfparse.org/tiki-index.php?page=DatabaseConversionSpecification >>>> >>>>I want to look again at the key structure for the main binary table. 
>>>> >>>>The two options are: >>>> >>>>1: >>>> host INT, service INT, metric INT, ctime DATETIME, >>>> PRIMARY KEY (host, service, metric, ctime) >>>> >>>>2: >>>> metric INT, ctime DATETIME, >>>> PRIMARY KEY (metric, ctime) >>>> >>>>Originally we were going to use (2). However several of you commented >>>>that (1) would be more useful for extracting all data for, say, a host >>>>or a service. >>>> >>>>I am looking at this again and am moving back towards structure instead >>>>(2). This gives: >>>> >>>>- Smaller table space. >>>>- Faster keyed access. >>>>- Easier to construct queries. >>>> >>>>I am also looking with respect to the only two likely use-cases in the >>>>near future: >>>> >>>>- Extracting data for one metric. Eg, a graph. >>>>- Extracting data for two or more random metrics. A graph of multiple >>>>metrics. >>>> >>>>Neither of these options require the large complex key. >>>> >>>>If it was ever needed to get all data for a host, this can be completed >>>>simply using a JOIN. Slightly slower, but the number of times this may >>>>be used is small enough that the disadvantaged are not significant. >>>>Where as we all want fast graphs. >>>> >>>>Since it was users on this group who suggested using the longer key >>>>structure (1), I would very much like to know how you feel before I decide. >>>> >>>>Regards, >>>> >>>>Ben Clewett. >>>> >>>>-- >>>>Ben Clewett bcl...@pe... >>>>PerfParse http://www.perfparse.org >>>>PP FAQ http://wiki.perfparse.org/tiki-list_faqs.php >>>> >>>>------------------------------------------------------- >>>>SF email is sponsored by - The IT Product Guide >>>>Read honest & candid reviews on hundreds of IT Products from real users. >>>>Discover which products truly live up to the hype. Start reading now. >>>>http://productguide.itmanagersjournal.com/ >>>>_______________________________________________ >>>>Perfparse-devel mailing list >>>>Per...@li... 
>>>>https://lists.sourceforge.net/lists/listinfo/perfparse-devel >>>> >>> >>> >> >>-- >> >> >>Ben Clewett bcl...@pe... >>PerfParse http://www.perfparse.org >>PP FAQ http://wiki.perfparse.org/tiki-list_faqs.php >> > > > > ------------------------------------------------------- > SF email is sponsored by - The IT Product Guide > Read honest & candid reviews on hundreds of IT Products from real users. > Discover which products truly live up to the hype. Start reading now. > http://productguide.itmanagersjournal.com/ > _______________________________________________ > Perfparse-devel mailing list > Per...@li... > https://lists.sourceforge.net/lists/listinfo/perfparse-devel > -- Ben Clewett bcl...@pe... PerfParse http://www.perfparse.org PP FAQ http://wiki.perfparse.org/tiki-list_faqs.php |
From: Tim W. <tim...@gm...> - 2004-12-10 09:59:03
|
Ben, If I understand correctly, every record in the METRIC table has a unique id, plus a reference to a SERVICE record (using the service_id). Every record in the SERVICE table has a unique id and a reference to a HOST record (using the host_id). In that case, good _relational_ db design commands that you use option 2, since the other option is again duplication of information. Tim. PS: Nice ERD ;) On Fri, 10 Dec 2004 09:49:16 +0000, Ben Clewett <bcl...@pe...> wrote: > Hi Tim, > > No problem. > > The host, service and metric is a unique ID of a real host, service or > metric entries. Referenced in host, service and metric tables. > > The host table, for example: > > CREATE TABLE IF NOT EXISTS perfdata_host ( > host_name VARCHAR(75) NOT NULL PRIMARY KEY, > host_id INT NOT NULL, > UNIQUE(host_id), > .... > > (host_name is PK because it is already in current tables. host_id is a > new field.) > > The full table definition of the new binary tables may be one of these: > > Option 1: > > CREATE TABLE IF NOT EXISTS perfdata_bin ( > host_id INT NOT NULL, > service_id INT NOT NULL, > metric_id INT NOT NULL, > ctime DATETIME NOT NULL, > PRIMARY KEY (host_id, service_id, metric_id, ctime), > value DOUBLE, > state TINYINT NOT NULL, > bin_extra_id INT > ) TYPE=InnoDB; > > Option 2: > > CREATE TABLE IF NOT EXISTS perfdata_bin ( > metric_id INT NOT NULL, > ctime DATETIME NOT NULL, > PRIMARY KEY (metric_id, ctime), > value DOUBLE, > state TINYINT NOT NULL, > bin_extra_id INT > ) TYPE=InnoDB; > > The difference is that the second is: > - Smaller > - Faster > - Easier to construct queries. > > A simple ERD: > > Option 1: > > [host] ---< [service] ---< [metric] > | | | > | +----------+ | > | | | > | | +-----------------------+ > | | | > ^ ^ ^ > [bin data] > v > | > [extra data] > > Option 2: > > [host] ---< [service] ---< [metric] > | > | > | > +-----------------------+ > | > ^ > [bin data] > v > | > [extra data] > > > Ben. 
> > > > > Tim Wuyts wrote: > > > Ben, > > For the sake of keeping the discussion clear, could you provide us > > with a complete database schema? For the moment, it is not clear what > > e.g. metric is (it's declared INT, so I suppose it references some > > other table, but this is not defined). > > > > A graphical representation (ERD) would be nice :) > > > > Thx, > > Tim > > > > On Fri, 10 Dec 2004 08:39:43 +0000, Ben Clewett <bcl...@pe...> wrote: > > > >>Dear PP development community. > >> > >>I am recoding our main database to be smaller and faster. As discussed > >>in this document: > >> > >>http://wiki.perfparse.org/tiki-index.php?page=DatabaseConversionSpecification > >> > >>I want to look again at the key structure for the main binary table. > >> > >>The two options are: > >> > >>1: > >> host INT, service INT, metric INT, ctime DATETIME, > >> PRIMARY KEY (host, service, metric, ctime) > >> > >>2: > >> metric INT, ctime DATETIME, > >> PRIMARY KEY (metric, ctime) > >> > >>Originally we were going to use (2). However several of you commented > >>that (1) would be more useful for extracting all data for, say, a host > >>or a service. > >> > >>I am looking at this again and am moving back towards structure instead > >>(2). This gives: > >> > >>- Smaller table space. > >>- Faster keyed access. > >>- Easier to construct queries. > >> > >>I am also looking with respect to the only two likely use-cases in the > >>near future: > >> > >>- Extracting data for one metric. Eg, a graph. > >>- Extracting data for two or more random metrics. A graph of multiple > >>metrics. > >> > >>Neither of these options require the large complex key. > >> > >>If it was ever needed to get all data for a host, this can be completed > >>simply using a JOIN. Slightly slower, but the number of times this may > >>be used is small enough that the disadvantaged are not significant. > >>Where as we all want fast graphs. 
> >> > >>Since it was users on this group who suggested using the longer key > >>structure (1), I would very much like to know how you feel before I decide. > >> > >>Regards, > >> > >>Ben Clewett. > >> > >>-- > >>Ben Clewett bcl...@pe... > >>PerfParse http://www.perfparse.org > >>PP FAQ http://wiki.perfparse.org/tiki-list_faqs.php > >> > >>------------------------------------------------------- > >>SF email is sponsored by - The IT Product Guide > >>Read honest & candid reviews on hundreds of IT Products from real users. > >>Discover which products truly live up to the hype. Start reading now. > >>http://productguide.itmanagersjournal.com/ > >>_______________________________________________ > >>Perfparse-devel mailing list > >>Per...@li... > >>https://lists.sourceforge.net/lists/listinfo/perfparse-devel > >> > > > > > > > -- > > > Ben Clewett bcl...@pe... > PerfParse http://www.perfparse.org > PP FAQ http://wiki.perfparse.org/tiki-list_faqs.php > |
From: Ben C. <bcl...@pe...> - 2004-12-10 09:49:22
Hi Tim,

No problem.

The host, service and metric are the unique IDs of real host, service or metric entries, referenced in the host, service and metric tables.

The host table, for example:

CREATE TABLE IF NOT EXISTS perfdata_host (
  host_name VARCHAR(75) NOT NULL PRIMARY KEY,
  host_id INT NOT NULL,
  UNIQUE(host_id),
  ....

(host_name is the PK because it is already in the current tables. host_id is a new field.)

The full table definition of the new binary tables may be one of these:

Option 1:

CREATE TABLE IF NOT EXISTS perfdata_bin (
  host_id INT NOT NULL,
  service_id INT NOT NULL,
  metric_id INT NOT NULL,
  ctime DATETIME NOT NULL,
  PRIMARY KEY (host_id, service_id, metric_id, ctime),
  value DOUBLE,
  state TINYINT NOT NULL,
  bin_extra_id INT
) TYPE=InnoDB;

Option 2:

CREATE TABLE IF NOT EXISTS perfdata_bin (
  metric_id INT NOT NULL,
  ctime DATETIME NOT NULL,
  PRIMARY KEY (metric_id, ctime),
  value DOUBLE,
  state TINYINT NOT NULL,
  bin_extra_id INT
) TYPE=InnoDB;

The difference is that the second is:
- Smaller
- Faster
- Easier to construct queries against.

A simple ERD:

Option 1:

[host] ---< [service] ---< [metric]
   |            |             |
   +------+     |     +-------+
          |     |     |
          ^     ^     ^
          [bin data]
               v
               |
          [extra data]

Option 2:

[host] ---< [service] ---< [metric]
                              |
              +---------------+
              |
              ^
          [bin data]
               v
               |
          [extra data]

Ben.

Tim Wuyts wrote:
> Ben,
> For the sake of keeping the discussion clear, could you provide us
> with a complete database schema? For the moment, it is not clear what
> e.g. metric is (it's declared INT, so I suppose it references some
> other table, but this is not defined).
>
> A graphical representation (ERD) would be nice :)
>
> Thx,
> Tim
>
> On Fri, 10 Dec 2004 08:39:43 +0000, Ben Clewett <bcl...@pe...> wrote:
>
>> Dear PP development community.
>>
>> I am recoding our main database to be smaller and faster. As discussed
>> in this document:
>>
>> http://wiki.perfparse.org/tiki-index.php?page=DatabaseConversionSpecification
>>
>> I want to look again at the key structure for the main binary table.
>>
>> The two options are:
>>
>> 1:
>>   host INT, service INT, metric INT, ctime DATETIME,
>>   PRIMARY KEY (host, service, metric, ctime)
>>
>> 2:
>>   metric INT, ctime DATETIME,
>>   PRIMARY KEY (metric, ctime)
>>
>> Originally we were going to use (2). However several of you commented
>> that (1) would be more useful for extracting all data for, say, a host
>> or a service.
>>
>> I am looking at this again and am moving back towards structure (2).
>> This gives:
>>
>> - Smaller table space.
>> - Faster keyed access.
>> - Easier to construct queries.
>>
>> I am also looking with respect to the only two likely use-cases in the
>> near future:
>>
>> - Extracting data for one metric. E.g. a graph.
>> - Extracting data for two or more random metrics. A graph of multiple
>> metrics.
>>
>> Neither of these options requires the large complex key.
>>
>> If it was ever needed to get all data for a host, this can be completed
>> simply using a JOIN. Slightly slower, but the number of times this may
>> be used is small enough that the disadvantage is not significant,
>> whereas we all want fast graphs.
>>
>> Since it was users on this group who suggested using the longer key
>> structure (1), I would very much like to know how you feel before I decide.
>>
>> Regards,
>>
>> Ben Clewett.
>>
>> --
>> Ben Clewett bcl...@pe...
>> PerfParse http://www.perfparse.org
>> PP FAQ http://wiki.perfparse.org/tiki-list_faqs.php
>>
>> -------------------------------------------------------
>> SF email is sponsored by - The IT Product Guide
>> Read honest & candid reviews on hundreds of IT Products from real users.
>> Discover which products truly live up to the hype. Start reading now.
>> http://productguide.itmanagersjournal.com/
>> _______________________________________________
>> Perfparse-devel mailing list
>> Per...@li...
>> https://lists.sourceforge.net/lists/listinfo/perfparse-devel

--
Ben Clewett bcl...@pe...
PerfParse http://www.perfparse.org
PP FAQ http://wiki.perfparse.org/tiki-list_faqs.php
From: Tim W. <tim...@gm...> - 2004-12-10 09:36:29
|
Ben, For the sake of keeping the discussion clear, could you provide us with a complete database schema? For the moment, it is not clear what e.g. metric is (it's declared INT, so I suppose it references some other table, but this is not defined). A graphical representation (ERD) would be nice :) Thx, Tim On Fri, 10 Dec 2004 08:39:43 +0000, Ben Clewett <bcl...@pe...> wrote: > Dear PP development community. > > I am recoding our main database to be smaller and faster. As discussed > in this document: > > http://wiki.perfparse.org/tiki-index.php?page=DatabaseConversionSpecification > > I want to look again at the key structure for the main binary table. > > The two options are: > > 1: > host INT, service INT, metric INT, ctime DATETIME, > PRIMARY KEY (host, service, metric, ctime) > > 2: > metric INT, ctime DATETIME, > PRIMARY KEY (metric, ctime) > > Originally we were going to use (2). However several of you commented > that (1) would be more useful for extracting all data for, say, a host > or a service. > > I am looking at this again and am moving back towards structure instead > (2). This gives: > > - Smaller table space. > - Faster keyed access. > - Easier to construct queries. > > I am also looking with respect to the only two likely use-cases in the > near future: > > - Extracting data for one metric. Eg, a graph. > - Extracting data for two or more random metrics. A graph of multiple > metrics. > > Neither of these options require the large complex key. > > If it was ever needed to get all data for a host, this can be completed > simply using a JOIN. Slightly slower, but the number of times this may > be used is small enough that the disadvantaged are not significant. > Where as we all want fast graphs. > > Since it was users on this group who suggested using the longer key > structure (1), I would very much like to know how you feel before I decide. > > Regards, > > Ben Clewett. > > -- > Ben Clewett bcl...@pe... 
> PerfParse http://www.perfparse.org > PP FAQ http://wiki.perfparse.org/tiki-list_faqs.php > > ------------------------------------------------------- > SF email is sponsored by - The IT Product Guide > Read honest & candid reviews on hundreds of IT Products from real users. > Discover which products truly live up to the hype. Start reading now. > http://productguide.itmanagersjournal.com/ > _______________________________________________ > Perfparse-devel mailing list > Per...@li... > https://lists.sourceforge.net/lists/listinfo/perfparse-devel > |
From: Ben C. <bcl...@pe...> - 2004-12-10 08:39:51
Dear PP development community.

I am recoding our main database to be smaller and faster, as discussed in this document:

http://wiki.perfparse.org/tiki-index.php?page=DatabaseConversionSpecification

I want to look again at the key structure for the main binary table. The two options are:

1:
  host INT, service INT, metric INT, ctime DATETIME,
  PRIMARY KEY (host, service, metric, ctime)

2:
  metric INT, ctime DATETIME,
  PRIMARY KEY (metric, ctime)

Originally we were going to use (2). However, several of you commented that (1) would be more useful for extracting all data for, say, a host or a service.

I am looking at this again and am moving back towards structure (2). This gives:

- Smaller table space.
- Faster keyed access.
- Easier construction of queries.

I am also looking at the only two likely use-cases in the near future:

- Extracting data for one metric, e.g. a graph.
- Extracting data for two or more arbitrary metrics: a graph of multiple metrics.

Neither of these use-cases requires the large complex key.

If it were ever needed to get all data for a host, this can be done simply using a JOIN. Slightly slower, but it would be used rarely enough that the disadvantage is not significant, whereas we all want fast graphs.

Since it was users on this group who suggested using the longer key structure (1), I would very much like to know how you feel before I decide.

Regards,

Ben Clewett.

--
Ben Clewett bcl...@pe...
PerfParse http://www.perfparse.org
PP FAQ http://wiki.perfparse.org/tiki-list_faqs.php
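The JOIN Ben mentions, recovering per-host data under the short (metric, ctime) key of option (2), can be sketched as follows. SQLite stands in for MySQL here, and the sample rows ('web01', 'PING', 'rta') are invented for illustration; only the `perfdata_*` table shapes come from the thread.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE perfdata_host    (host_id INTEGER PRIMARY KEY, host_name TEXT);
CREATE TABLE perfdata_service (service_id INTEGER PRIMARY KEY,
                               host_id INTEGER, service_name TEXT);
CREATE TABLE perfdata_metric  (metric_id INTEGER PRIMARY KEY,
                               service_id INTEGER, metric_name TEXT);
CREATE TABLE perfdata_bin     (metric_id INTEGER, ctime TEXT, value REAL,
                               PRIMARY KEY (metric_id, ctime));
INSERT INTO perfdata_host    VALUES (1, 'web01');
INSERT INTO perfdata_service VALUES (10, 1, 'PING');
INSERT INTO perfdata_metric  VALUES (100, 10, 'rta');
INSERT INTO perfdata_bin     VALUES (100, '2004-12-10 08:00:00', 0.42);
""")

# All data for one host under the short (metric_id, ctime) key: walk the
# host -> service -> metric chain with JOINs instead of widening the key.
rows = con.execute("""
    SELECT h.host_name, m.metric_name, b.ctime, b.value
    FROM perfdata_bin b
    JOIN perfdata_metric m  ON m.metric_id  = b.metric_id
    JOIN perfdata_service s ON s.service_id = m.service_id
    JOIN perfdata_host h    ON h.host_id    = s.host_id
    WHERE h.host_name = 'web01'
""").fetchall()
print(rows)  # [('web01', 'rta', '2004-12-10 08:00:00', 0.42)]
```

The common single-metric graph query still hits the compact primary key directly; only the rare "everything for a host" report pays for the three extra joins.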
From: Ben C. <BCl...@pe...> - 2004-11-29 11:02:07
Dear users,

We are happy to release the next version, 0.104.1. This has many interesting additions:

- Dynamic libraries for perfparse-log2* and perfparsed.
- perfparse-log2any can load more than one storage module; the other perfparse-log2* programs are links, shortcuts to perfparse-log2any with one module only.
- PerfParse is now internationalized, with a first French translation to be improved and others to be done (contributions welcome).
- perfgraph.cgi renamed to perfparse.cgi. To avoid confusion, upgraders should remove their old perfgraph.cgi.

I will repeat the last change: perfgraph.cgi is now renamed to perfparse.cgi. Please delete perfgraph.cgi and change your links.

Contributions will be particularly appreciated on:

- translations (including improving the French one)
- helping Garry with the documentation
- porting PerfParse to systems other than GNU/Linux and Solaris
- making storage modules for other databases or RRD files

Regards,
PP Development team.
From: barry m. <bar...@sh...> - 2004-11-25 16:46:02
When I try to run the command:

[root@localhost etc]# /usr/local/nagios/bin/perfparse-log2db -s -r
No position mark path was specified.  Either disable saving the position or specify a mark path.
[root@localhost etc]#

I also have the error_log defined in the .cfg, see below:

# Error handling :
Error_Log        = "/usr/local/nagios/var/perfparse.log"
Error_Log_Rotate = "Yes"
Drop_File        = "/tmp/perfparse.drop"
Drop_File_Rotate = "Yes"

I have left the script running overnight. Is there any way I can enable debugging?

Many thanks for all your help.

Barry

----- Original Message -----
From: Ben Clewett <BCl...@pe...>
Date: Thursday, November 25, 2004 2:13 am
Subject: Re: [Perfparse-devel] perfparse.sh problems

> Hi Barry,
>
> Sorry to hear things are not working so well for you. I have moved this
> posting to perfparse-users where more users may be able to help you
> identify this problem.
>
> One reason I can think of is that the script is doing something, it's
> just taking a long time. I guess you are using Method 1 described in:
>
> http://perfparse.sourceforge.net/docs.php#id265256
>
> Can you extract the perfparse command from perfparse.sh and try
> running it manually with the -s and -r flags. This will be something like:
>
> $ /usr/local/nagios/bin/perfparse-log2db -s -r
>
> Can you tell me what is returned?
>
> Ben
>
> barry maclean wrote:
>
>> I'm trying to use perfparse with nagios 1.2 on redhat enterprise 3
>> without great success.
>>
>> When I try and run the script perfparse.sh I get the following error.
>>
>> [root@localhost bin]# ./perfparse.sh
>> Error_Log undefined in /usr/local/nagios/etc/perfparse.cfg
>> Cannot continue
>> [root@localhost bin]#
>>
>> I then create a perfparse.log and path it in the perfparse.cfg when
>> the script just hangs and doesn't do anything.
>>
>> Help
>>
>> Thanks Barry
>>
>> -------------------------------------------------------
>> SF email is sponsored by - The IT Product Guide
>> Read honest & candid reviews on hundreds of IT Products from real users.
>> Discover which products truly live up to the hype. Start reading now.
>> http://productguide.itmanagersjournal.com/
>> _______________________________________________
>> Perfparse-devel mailing list
>> Per...@li...
>> https://lists.sourceforge.net/lists/listinfo/perfparse-devel
From: Viljo M. <na...@ma...> - 2004-11-25 14:26:13
Hey,

Just an idea: I once had the exact same problem, and the reason was that 'Error_Log' wasn't defined in perfparse.cfg; some other name was given for it, or the perfparse.sh script searched for a different error-log name. I don't remember exactly.

HTH,
Viljo

Ben Clewett wrote:
> Hi Barry,
>
> Sorry to hear things are not working so well for you. I have moved this
> posting to perfparse-users where more users may be able to help you
> identify this problem.
>
> One reason I can think of is that the script is doing something, it's
> just taking a long time. I guess you are using Method 1 described in:
>
> http://perfparse.sourceforge.net/docs.php#id265256
>
> Can you extract the perfparse command from perfparse.sh and try
> running it manually with the -s and -r flags. This will be something like:
>
> $ /usr/local/nagios/bin/perfparse-log2db -s -r
>
> Can you tell me what is returned?
>
> Ben
>
> barry maclean wrote:
>
>> I'm trying to use perfparse with nagios 1.2 on redhat enterprise 3
>> without great success.
>>
>> When I try and run the script perfparse.sh I get the following error.
>>
>> [root@localhost bin]# ./perfparse.sh
>> Error_Log undefined in /usr/local/nagios/etc/perfparse.cfg
>> Cannot continue
>> [root@localhost bin]#
>>
>> I then create a perfparse.log and path it in the perfparse.cfg when
>> the script just hangs and doesn't do anything.
From: Ben C. <BCl...@pe...> - 2004-11-25 09:13:15
Hi Barry,

Sorry to hear things are not working so well for you. I have moved this posting to perfparse-users, where more users may be able to help you identify the problem.

One reason I can think of is that the script is doing something; it's just taking a long time. I guess you are using Method 1 described in:

http://perfparse.sourceforge.net/docs.php#id265256

Can you extract the perfparse command from perfparse.sh and try running it manually with the -s and -r flags? This will be something like:

$ /usr/local/nagios/bin/perfparse-log2db -s -r

Can you tell me what is returned?

Ben

barry maclean wrote:
> I'm trying to use perfparse with nagios 1.2 on redhat enterprise 3 without great success.
>
> When I try and run the script perfparse.sh I get the following error.
>
> [root@localhost bin]# ./perfparse.sh
> Error_Log undefined in /usr/local/nagios/etc/perfparse.cfg
> Cannot continue
> [root@localhost bin]#
>
> I then create a perfparse.log and path it in the perfparse.cfg when the script just hangs and doesn't do anything.
>
> Help
>
> Thanks Barry
>
> -------------------------------------------------------
> SF email is sponsored by - The IT Product Guide
> Read honest & candid reviews on hundreds of IT Products from real users.
> Discover which products truly live up to the hype. Start reading now.
> http://productguide.itmanagersjournal.com/
> _______________________________________________
> Perfparse-devel mailing list
> Per...@li...
> https://lists.sourceforge.net/lists/listinfo/perfparse-devel
From: barry m. <bar...@sh...> - 2004-11-24 21:37:47
I'm trying to use perfparse with nagios 1.2 on Red Hat Enterprise 3 without great success.

When I try to run the script perfparse.sh I get the following error:

[root@localhost bin]# ./perfparse.sh
Error_Log undefined in /usr/local/nagios/etc/perfparse.cfg
Cannot continue
[root@localhost bin]#

I then created a perfparse.log and set its path in perfparse.cfg, but the script just hangs and doesn't do anything.

Help!

Thanks, Barry
From: Ben C. <BCl...@pe...> - 2004-11-18 09:23:44
|
Dear Users, New version 0.103.2. This is mainly a bug fix release for the following problems. If these problems are still unfixed, please let us know. 1. Compilation problems with libXpm.so sorted for Philipp. 2. printf format for BSD fixed for Julian. Please let me know if this is any better. 3. Fixed bug in tool to reset deletion policies: perfparse-db-tools --reset-host-delete-policy This now correctly accepts a host name. Therefore you can use this to reset the deletion policies of all data for a named host. For example: $ perfparse-db-tools --reset-host-delete-policy h% Will reset all data from hosts which start with 'h'. The deletion policy will be set to the binary and raw default. Please check what this is before using. 4. Fixed compilation bugs where variables were used before all variables were defined. Strict ANSI C compilers need this. 5. Amended CGI Host Delete Policy to allow faster editing. Please enjoy and report all problems. The next exciting version from Yves will be a language translation into French. We are looking for translators into other languages. Please let us know if you can give some time for this; you will be fully credited for all work. Regards, Yves and Ben. |
From: Yves <yme...@pe...> - 2004-11-10 15:12:19
|
Hi, I just put perfparse-0.104.0ym8 on http://pagesperso.laposte.net/ymettier/perfparse-devel/perfparse-0.104.0/ . I will be offline for 5 days, so play with that devel release and report bugs to Ben :) The main changes are: 1/ modules are now libraries and are loaded at runtime. 2/ internationalization beginning 3/ fixes and new features by Ben 1 -> to work on a new module, you should work only in modules/ now and it should be easier to understand how to hack a module. There is no documentation about how to make a module, but you should be able to hack the "print" module easily. 2 -> perfparse-log2*, perfparsed, perfparse-db-* and modules are ready for translation. The CGIs still need some work. If you want to translate, that should not stop you and you can send Ben and me your LL.po files. Copy po/perfparse.pot, rename it as LL.po (where LL is the 2 letters for your language, pt for Portuguese, es for Spanish...) and edit it. If you want to contribute for the CGI, ask Ben. 3 -> Read the ChangeLog, and ask Ben if necessary :) Yves -- - Homepage - http://ymettier.free.fr - http://www.logicacmg.com - - GPG key - http://ymettier.free.fr/gpg.txt - - Maitretarot - http://www.nongnu.org/maitretarot/ - - Perfparse - http://perfparse.sf.net/ - |
From: Yves M. <yme...@li...> - 2004-11-08 09:15:09
|
> Hello, > > I don't know when this change was made, but in glib.h function name was > changed from g_string_hash() to g_str_hash() and this has to be changed > in perfparse/log_reader.c line 533 to compile correctly. > > I use glib-1.2 from Debian unstable. Not really. In glib-2.0, you have 2 functions with 2 different prototypes and with different goals: guint g_string_hash(GString *str) -> computes a hash for the GString* guint g_str_hash(gconstpointer v) -> computes a hash for the NULL-terminated string In glib-1.2, g_string_hash does not seem to exist. Well, could you write this in log_reader.h ? #ifdef USE_GLIB12 #define g_string_hash(x) (g_str_hash((x)->str)) #endif > > P.S. I don't know what's the problem, but I just can't get perfparsed > work with pipe file. Should this work by simply echoing data to pipe > file and perfparsed then inserts this to mysql (or whatever) db? I try > echo for testing, of course. And when perfparse-service.log file is not > pipe then check info appears normally there. > > Can I somehow trace what perfparsed is doing, or why it's not doing what > it is supposed to be doing? First check in your drop file if you don't make mistakes when echoing. Lines with wrong syntax go to the DropFile (default is /tmp/perfparse.drop). Then, the tools are strace (truss on Solaris), and if gdb is a friend of yours, gdb :) strace perfparse-log2* Yves -- - Homepage - http://ymettier.free.fr - http://www.logicacmg.com - - GPG key - http://ymettier.free.fr/gpg.txt - - Maitretarot - http://www.nongnu.org/maitretarot/ - - Perfparse - http://perfparse.sf.net/ - |
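A quick way to check the pipe mechanics themselves, independent of perfparse, is a minimal sketch like the following; the path here is a placeholder, not the real perfparse configuration:

```shell
# Demonstrate the named-pipe behaviour perfparsed relies on.
# A writer to a FIFO blocks until a reader opens the other end,
# which is why a plain `echo` into the pipe can appear to hang.
dir=$(mktemp -d)
pipe="$dir/perfparse-service.log"   # placeholder path
mkfifo "$pipe"
# Writer in the background (it blocks until the reader arrives):
echo "test-line" > "$pipe" &
# Reader side -- in real life this is perfparsed:
head -n 1 "$pipe"
wait
rm -rf "$dir"
```

If an `echo` into the real pipe hangs forever, the likely cause is that perfparsed never opened the pipe for reading, which matches the symptom described above.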
From: Viljo M. <na...@ma...> - 2004-11-08 08:51:00
|
Hello, I don't know when this change was made, but in glib.h the function name was changed from g_string_hash() to g_str_hash(), and this has to be changed in perfparse/log_reader.c line 533 to compile correctly. I use glib-1.2 from Debian unstable. P.S. I don't know what the problem is, but I just can't get perfparsed to work with a pipe file. Should this work by simply echoing data to the pipe file, with perfparsed then inserting it into the mysql (or whatever) db? I use echo for testing, of course. And when the perfparse-service.log file is not a pipe, the check info appears there normally. Can I somehow trace what perfparsed is doing, or why it's not doing what it is supposed to be doing? Rgds, Viljo |
From: Yves M. <yme...@li...> - 2004-11-03 15:13:39
|
Hi, perfparse-0.104.0ym1 is the 1st package for the new 0.104 development branch. You can get it at http://pagesperso.laposte.net/ymettier/perfparse-devel/perfparse-0.104.0/ ChangeLog : - extraction of libnagios_perfdata_parser from perfparse/ - extraction of the storage modules from perfparse/ - dynamic load of storage modules (new option --storage_module_load) - perfparse-log2* are removed and replaced with perfparse-log2any - perfparsed is no longer linked with mysql About perfparse-log2*, I would say that perfparse-log2db is equivalent to perfparse-log2any --storage_module_load mysql All the new libraries are still inside perfparse, but I'd like to ask you if I should extract libnagios_perfdata_parser out of the perfparse project, keep it inside the perfparse project but in some separate package, or leave it like this, inside the perfparse package. Same question for storage modules. What do you think ? What are your ideas ? Yves -- - Homepage - http://ymettier.free.fr - http://www.logicacmg.com - - GPG key - http://ymettier.free.fr/gpg.txt - - Maitretarot - http://www.nongnu.org/maitretarot/ - - Perfparse - http://perfparse.sf.net/ - |
From: Yves M. <yme...@li...> - 2004-11-02 14:44:56
|
> One day all DBMS will be ANSI. So the mountain in this case will come > to Mohamed. In the mean while, we can try to always keep as close as > possible.... When all DBMS are ANSI, and if you use only the ANSI subset, you will not benefit from their specific features, and will run slower than you could. Keep close, but don't hesitate to use specificities when it is good. The most important thing is to make it easy to port perfparse to other dbms. > > 2.5 Store the range of warning and critical, as either an inside or > > outside range. A value of NULL will indicate infinity. A range > > type of NULL will indicate no value. > > > > > >>>>>>>>>>>>>> Not precise enough. > > > > > > I suggest NULL for infinite (-inf for start range and +inf for > > end range, because you cannot have +inf for start range, and you > > cannot have -inf for end range :) > > > > I suggest the default values when not recorded (0 for start range > > and +inf for end range, as specified on the nagiosplug plugins > > specification page). > > > > You also need something to say that the range is inverted (the @ > > character in the range) > > > This section is vague. Can you look at section 4.3 where these ideas > are expanded, and see whether this is acceptable? Looks good. > > 4.1 The extraction of data for a graph between two dates can be > > completed as: > > > > SELECT * ... > > > >>>>>>>>>>>>>> For information, this is also possible with perfparsed > > since 0.103.1 if you enable file_output storage : telnet the > > perfparsed server and run this: > > history tm_start tm_end '1' '2' '3' > > > I haven't tested this yet myself, but I am sure this will lead to some > interesting applications. If this is the start of a new thread, we'll split. I just see one application: the CGI that draws the graphs. It can get the data directly from the server, without support of the database. Tests of speed will be necessary to see if it is better or worse than asking the database (like now).
If speed is correct, maybe those who only need to graph performance data can do it without the support of any database ? This is not the death of database storage because statistics need it. And asking the database can be easier when the request is more complex than getting the data between two dates. > > 2.2 A check against the extra data will be completed to find out if > > this extra data has been entered, against the key 'extra_id0'. If > > this data has not been entered before, this should take place. > > > >>>>>>>>>>>>>> I suggest some option to do that check or not. You can > > also remove duplicates every night with perfparse-db-tool. Depends on > > how much performance you need when inserting data in the database. > > > Do you suggest writing *all* the extra data, one entry for each, then > reducing the data in a nightly parse? Ooops, forget that point :) I was thinking of parsing the same line twice and entering it twice into the database :) > I think this one will need to be settled experimentally. Although your > suggestion doesn't require a lookup on every line. It does require > massive insertion and deletion, which are both heavy activities for a > database. When we get there, we can test it. With innodb, forget it. innodb databases cannot reduce their size. > Another thing you may be suggesting is a flag to ignore *all* the extra > data? Leave the linkage as NULL? This is an interesting idea if users > did not want the warn, crit, max or min. ? Bad remarks because of misunderstanding (me) can give very good ideas (you) :) Yves -- - Homepage - http://ymettier.free.fr - http://www.logicacmg.com - - GPG key - http://ymettier.free.fr/gpg.txt - - Maitretarot - http://www.nongnu.org/maitretarot/ - - Perfparse - http://perfparse.sf.net/ - |
From: Ben C. <BCl...@pe...> - 2004-11-02 14:15:49
|
Hi, Let me address your comments. > binary usually means "$£*%F|éè@*%" data :) Good point. The name 'Binary' referred to the way the data is stored, as binary number types not text, and not the way Nagios sends the data. I'll add a sentence... >>>>>>>>>>>> Not a Number values can be considered as missing values and are coded NULL > > > too. So there is no problem with NaN values :) I'll add a comment to clarify this point. >>>>>>>>> Use some integer to store a timestamp. This should be enough. > > > However, this is not very simple on sybase (I don't know for > other dbms). So don't try to be too portable ! Keep It Simple > and Stupid :) I am moving toward using the ANSI-SQL standard of DATETIME. The same as we have at the moment. It is likely that each DBMS will have its own best way of handling date/time, and queries which cannot be made ANSI-SQL can optimize around this. One day all DBMS will be ANSI. So the mountain in this case will come to Mohamed. In the mean while, we can try to always keep as close as possible.... > Infinite value means that there is no mini or no maxi. Nothing > recorded also means no mini or no maxi. So I agree with using NULL > to store not recorded value. However, there will be a choice to do > when you read NULL in the database : will you present it as not > recorded or as infinite ? Well, this is presentation and has > nothing to do here. Good point. Classic Database Theory problem: does NULL mean No Data or Bad Data? Nagios does not support infinite range, so NULL in this case means no data. One of the advantages of this twin-table format is the ease with which extra data fields can be added. When PP is used for more than just Nagios, extra fields to clarify the meaning can be added. > 2.5 Store the range of warning and critical, as either an inside or > outside range. A value of NULL will indicate infinity. A range > type of NULL will indicate no value.
> > >>>>>>>>>>>>>> Not precise enough. > > > I suggest NULL for infinite (-inf for start range and +inf for > end range, because you cannot have +inf for start range, and you > cannot have -inf for end range :) > > I suggest the default values when not recorded (0 for start range > and +inf for end range, as specified on the nagiosplug plugins > specification page). > > You also need something to say that the range is inverted (the @ > character in the range) This section is vague. Can you look at section 4.3 where these ideas are expanded, and see whether this is acceptable? > 2.8 Use ANSI SQL where ever possible. > > >>>>>>>>>>>>>>> ...where ever the performance of insertion and queries > are not reduced > > > too much. > > You will never be able to write SQL that can be ported on mysql, > postgresql, oracle, > sybase, odbc, sqlite and other ones, even writing 100% ANSI SQL. Or > you will write some > very poor SQL and program yourself things that SQL dataservers can do > for you. You are right that ANSI SQL cannot be used in all cases. So some form of re-writing for different platforms will be required. By some mechanism not yet decided. However, things will be a lot easier in the future if ANSI-SQL is at least used where possible. I have changed the wording of this. > On that reflexion (that I made and that is the origin of storage > modules in perfparse), you will have to create an abstraction layer > (or use an existing one) like storage modules for perfparse-db-tools > (I will do that one) and CGIs (maybe me too, maybe not...). > > However, when the SQL code is close to ANSI SQL, porting some module > to talk to another database server is easier. So using ANSI SQL is > really recommended. Just don't drop performances, and don't reinvent > the wheel that SQL servers have already invented :) I totally agree. > 2.9 Referential integrity will not be important against the > data tables. > > 2.10 Duplicate data should be impossible to add.
> >>>>>>>>>>>>>> For 2.9 and 2.10, I agree. However, some tools are > here to purge the databases : they can also do some additional > integrity checking and duplicates removing. Ok good, we can get back the RI in program space, and gain the performance in the database. > 4.1 The extraction of data for a graph between two dates can be > completed as: > > SELECT * ... >>>>>>>>>>>>>> For information, this is also possible with perfparsed > since 0.103.1 if you enable file_output storage : telnet the > perfparsed server and run this: > history tm_start tm_end '1' '2' '3' I haven't tested this yet myself, but I am sure this will lead to some interesting applications. > 2.2 A check against the extra data will be completed to find out if > this extra data has been entered, against the key 'extra_id0'. If > this data has not been entered before, this should take place. >>>>>>>>>>>>>> I suggest some option to do that check or not. You can > also remove duplicates every night with perfparse-db-tool. Depends on > how much performance you need when inserting data in the database. Do you suggest writing *all* the extra data, one entry for each, then reducing the data in a nightly parse? I think this one will need to be settled experimentally. Although your suggestion doesn't require a lookup on every line. It does require massive insertion and deletion, which are both heavy activities for a database. When we get there, we can test it. Another thing you may be suggesting is a flag to ignore *all* the extra data? Leave the linkage as NULL? This is an interesting idea if users did not want the warn, crit, max or min. ? Another version enclosed reflecting your comments. Ben |
From: Ben C. <Be...@cl...> - 2004-11-02 12:17:43
|
Dear Users, New version 0.103.1. This includes new abilities. All existing code from the previous version remains the same. The MySQL compilation problem has also been fixed. New version, as explained by the author: Perfparsed, when run as a TCP/IP server (which was already possible with 0.102.1), implements a new command: "history". If the file_output storage module is enabled and working, you can ask perfparsed for data from the archives between two dates, with optional filters (host/metric/key). As a hidden flavour, with the "history" command, perfparsed can read gzipped files if you decided to compress them :) Perfparse can read perf data from a new source: ">host:port". A source with the first character ">", followed by the host name and port of a running perfparsed, can be asked for data that way thanks to the new "history" feature of perfparsed. New options were added to get the data between two dates, with the optional filters host/metric/key. These two features open the door to new ways to install perfparse: the choice between storing the data in a mysql database to show graphs and compute statistics, and storing data on the remote host as plain text files for easier manipulation, retrieving the data only when needed. Yves & Ben. |
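The "history" request described above can be sketched from the shell. This is only an illustration: the host, port and filter values below are assumptions, and the timestamp format is taken as epoch seconds rather than confirmed by the release notes.

```shell
# Build the request line for perfparsed's "history" command.
# 1096588800 / 1099180800 are literal epoch timestamps (October 2004);
# 'myhost', 'PING' and 'rta' are placeholder host/service/metric filters.
START=1096588800
END=1099180800
printf "history %s %s '%s' '%s' '%s'\n" "$START" "$END" myhost PING rta
# To send it to a running perfparsed you could pipe this line into
# telnet or nc, e.g.:  ... | nc perfparsed-host 1976   (port assumed)
```

The same request is what perfparse-log2* issues internally when given a ">host:port" source with --history_start_tm and --history_end_tm.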
From: Yves M. <yme...@li...> - 2004-11-02 11:24:08
|
Comments inline... Proposal for conversion of PerfParse binary data. 1. Brief. ========= Binary data extracted from the performance data from Nagios is stored >>>>>>>> Say that binary data is in fact the performance data, eg the list of "key=valueUOM;ranges..." binary usually means "$£*%F|éè@*%" data :) 2. Requirements. ================ The functional requirements of the new structure can be listed: 2.1 To hold the metric value as a DOUBLE. Allow NULL to indicate a missing value which should be shown as a space on a graph. >>>>>>>>>>> Not a Number values can be considered as missing values and are coded NULL too. So there is no problem with NaN values :) 2.3 Store the time in a format easy to program and portable over different DBMS. >>>>>>>> Use some integer to store a timestamp. This should be enough. However, this is not very simple on sybase (I don't know for other dbms). So don't try to be too portable ! Keep It Simple and Stupid :) 2.4 Store the maximum and minimum range of the metric with every value. This may be NULL if not recorded. >>>>>>>>>>>> Thinking... Infinite value means that there is no mini or no maxi. Nothing recorded also means no mini or no maxi. So I agree with using NULL to store not recorded value. However, there will be a choice to do when you read NULL in the database : will you present it as not recorded or as infinite ? Well, this is presentation and has nothing to do here. 2.5 Store the range of warning and critical, as either an inside or outside range. A value of NULL will indicate infinity. A range type of NULL will indicate no value.
I suggest NULL for infinite (-inf for start range and +inf for end range, because you cannot have +inf for start range, and you cannot have -inf for end range :) I suggest the default values when not recorded (0 for start range and +inf for end range, as specified on the nagiosplug plugins specification page). You also need something to say that the range is inverted (the @ character in the range) 2.6 Offer the correct keys to enable likely queries. Including: 6.1 Extraction of an ordered series of data between two times. 6.2 Extraction of the last entered value for each metric. 6.3 Extraction of data relative to a specific host, service or metric. 2.7 Store the data in an efficient format, being small and fast to extract. 2.8 Use ANSI SQL wherever possible. >>>>>>>>>>>>>> ...wherever the performance of insertion and queries are not reduced too much. You will never be able to write SQL that can be ported on mysql, postgresql, oracle, sybase, odbc, sqlite and other ones, even writing 100% ANSI SQL. Or you will write some very poor SQL and program yourself things that SQL dataservers can do for you. On that reflexion (that I made and that is the origin of storage modules in perfparse), you will have to create an abstraction layer (or use an existing one) like storage modules for perfparse-db-tools (I will do that one) and CGIs (maybe me too, maybe not...). However, when the SQL code is close to ANSI SQL, porting some module to talk to another database server is easier. So using ANSI SQL is really recommended. Just don't drop performances, and don't reinvent the wheel that SQL servers have already invented :) 2.9 Referential integrity will not be important against the data tables. 2.10 Duplicate data should be impossible to add. >>>>>>>>>>>>> For 2.9 and 2.10, I agree. However, some tools are here to purge the databases : they can also do some additional integrity checking and duplicates removing. 3. Table Schema =============== The two new tables will be defined as: >>>>>>>>>>>> I'm too bad in SQL... No comments here. 4. Proposed Use Snapshots ========================= 4.1 The extraction of data for a graph between two dates can be completed as: SELECT * FROM perfdata_bin JOIN perfdata_bin_extra ON perfdata_bin.bin_extra_id = perfdata_bin_extra.id WHERE host = 1 AND service = 2 AND metric = 3 AND ctime BETWEEN '2004-01-01' AND '2004-12-31' ORDER BY ctime; >>>>>>>>>>>>> For information, this is also possible with perfparsed since 0.103.1 if you enable file_output storage : telnet the perfparsed server and run this : history tm_start tm_end '1' '2' '3' However, the SQL language makes it more flexible. When both are implemented, choose the one that best fits your needs ! :) 4.2 Addition of data will be completed as follows: 2.1 Calculation of the hash of the extra data. Eg, MD5(";1;2;3;4") This value will be supplied by the parser. 2.2 A check against the extra data will be completed to find out if this extra data has been entered, against the key 'extra_id0'. If this data has not been entered before, this should take place. In either case, the 'id' of the extra will be registered. >>>>>>>>>>>>> I suggest some option to do that check or not. You can also remove duplicates every night with perfparse-db-tool. Depends on how much performance you need when inserting data in the database. >>>>>>>>>>>>>>>>> No other comments :) Yves -- - Homepage - http://ymettier.free.fr - http://www.logicacmg.com - - GPG key - http://ymettier.free.fr/gpg.txt - - Maitretarot - http://www.nongnu.org/maitretarot/ - - Perfparse - http://perfparse.sf.net/ - |
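The hashing step in 4.2 (step 2.1 of the procedure) can be sketched from the shell. The string ";1;2;3;4" is the proposal's own example; using `md5sum` here is only an illustration of computing the same digest the parser would supply.

```shell
# Compute the MD5 of the "extra" portion of a perf-data entry, as in
# the proposal's example MD5(";1;2;3;4"). The resulting 32-character
# hex digest would be the lookup key checked against 'extra_id0'.
EXTRA=";1;2;3;4"
printf '%s' "$EXTRA" | md5sum | awk '{print $1}'
```

Note the use of printf rather than echo, so no trailing newline is included in the hashed bytes; the parser and the database tools must agree on exactly which bytes are hashed for the dedup lookup to work.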
From: Ben C. <be...@cl...> - 2004-10-29 20:11:05
|
Dear developers, Attached is a proposed specification for the conversion of the PerfParse binary data. I have tried to take into account the conversations on this list, however I am sure there is room for improvement. I will not undertake any coding until the users of PP are happy with the changes we would like to make, and why we would like to make them. So please let me know... Enjoy :) Regards, Ben. |
From: Ben C. <Ben...@ro...> - 2004-10-29 14:35:08
|
Version 0.103.1 will probably be released on Tuesday. To give time for bug feedback from the pre-release. Ben Yves Mettier wrote: > Hi all, > > perfparse-0.103.0ym3 is here : > http://pagesperso.laposte.net/ymettier/perfparse-devel/perfparse-0.103.0/ > > I just want to write a new storage module to print lines in the same format they are > scanned. With the ton of ways to configure perfparse (see the now 6 methods in the doc), > there is one missing : input the data from nagios somewhere, and output it somewhere > else in the same syntax it was input :) > > While I'm working on that, any beta-testing is welcome before Ben releases 0.103.0ym4 as > 0.103.1 :) > > ChangeLog was updated. > > Run perfparsed, run perfparse-log2* with servicelog as ">host:port" and use the options > --history_start_tm and --history_end_tm. > Don't forget to enable file_output module on the server side. > > Updating the doc would be very appreciated (Garry, are you still busy ? :) > > Yves > |
From: Yves M. <yme...@li...> - 2004-10-29 13:33:59
|
Hi all, perfparse-0.103.0ym3 is here: http://pagesperso.laposte.net/ymettier/perfparse-devel/perfparse-0.103.0/ I just want to write a new storage module to print lines in the same format they are scanned. With the ton of ways to configure perfparse (see the now 6 methods in the doc), there is one missing: input the data from nagios somewhere, and output it somewhere else in the same syntax it was input :) While I'm working on that, any beta-testing is welcome before Ben releases 0.103.0ym4 as 0.103.1 :) ChangeLog was updated. Run perfparsed, run perfparse-log2* with servicelog as ">host:port" and use the options --history_start_tm and --history_end_tm. Don't forget to enable the file_output module on the server side. Updating the doc would be very appreciated (Garry, are you still busy? :) Yves -- - Homepage - http://ymettier.free.fr - http://www.logicacmg.com - - GPG key - http://ymettier.free.fr/gpg.txt - - Maitretarot - http://www.nongnu.org/maitretarot/ - - Perfparse - http://perfparse.sf.net/ - |