ground-user Mailing List for The Ground Report
Status: Beta
Brought to you by: calumfodder
Archive (messages per month):
2008: Aug (2)
2009: Sep (2)
2010: Mar (17), Apr (21)
2011: Feb (8), Mar (20), Apr (10), May (4), Sep (7), Oct (5), Nov (1)
2012: Nov (1)
2013: Dec (3)
From: Calum F. <cal...@gm...> - 2013-12-08 13:23:14
Hiya,

I've had a look and the XML is complete. All the information (data) is there, so you should have had a PDF output. Normally this sort of FOP error gets output if there is missing data or if there is unexpected data (e.g. an extra cell in a row of a table). Can I ask whether you posted the complete stack trace for the FOP error? Can I also ask you to try and run an IndividualTestArticle report? This is to see if you get the same error.

I've never had this error reported for the ground report before; however, another user had an issue with a corrupted download of the ground report….that was running on Windows, and he was getting a different error. It is always possible to try substituting the Xalan parser for the Saxon one and see if this makes a difference.

Thanks for the info about the extra column in Grinder 3.11. I have been aware of it but have not had the time to put together a new release to address it. I am currently maintaining this project on minimal life support as I have other commitments that take priority, unfortunately. So I'm afraid you will need to carry on deleting the extra column for now.

Cheers
Cal

On 5 Dec 2013, at 15:11, Cody Coleman <co...@qu...> wrote:

> Attached is the xml for the Summary report type. The test duration was 50 minutes running a bunch of tests repeated about 480 times with 5 clients.
>
> Also, since you are looking at things, with Grinder 3.11 it adds a "New Connections" column to the data that I have to delete in order for the file upload part to work. It's not a big deal once I figured out that was the problem, but it could catch other people off guard.
> <summaryReport.xml>
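Cal's suggestion to swap parsers can be tried without touching the report code, since XSLT engines are picked up through the standard JAXP factory lookup. A minimal sketch from Jython (the class names below are the stock Saxon and Xalan factory implementations, not anything taken from the ground report itself; xalan.jar is assumed to be on the classpath):

    # Force the JAXP transformer to Xalan instead of Saxon (Jython 2.x).
    from java.lang import System
    from javax.xml.transform import TransformerFactory

    # Default here would be Saxon: net.sf.saxon.TransformerFactoryImpl.
    System.setProperty("javax.xml.transform.TransformerFactory",
                       "org.apache.xalan.processor.TransformerFactoryImpl")

    factory = TransformerFactory.newInstance()
    # Should now report the Xalan implementation.
    print factory.getClass().getName()

If the endElement error disappears under Xalan, that points at a Saxon/FOP interaction rather than the report data itself.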
From: Calum F. <cal...@gm...> - 2013-12-05 06:29:19
Hiya,

Apologies if you have been waiting for a while for a response on the discussion. There is supposed to be an alert sent out when someone posts; this appears not to have happened.

Right, on to solving the issue. Please could you change the output type to XML and then post the resultant file? For some reason there is a missing section which is causing challenges.

How long was the test run that is causing the issues?

Cheers
Cal
From: Cody C. <co...@qu...> - 2013-12-04 17:12:17
I've posted this on the discussion, but maybe the mailing list is watched more closely.

When running ./generateReport.sh, towards the end in the Post-Processing Article bit (which I assume is what makes the pdf report) I get the following:

Dec 02, 2013 12:47:00 AM org.apache.fop.fo.FOTreeBuilder$MainFOHandler endElement
WARNING: Mismatch: root (http://www.w3.org/1999/XSL/Format) vs. page-sequence (http://www.w3.org/1999/XSL/Format)
Dec 02, 2013 12:47:00 AM org.apache.fop.fo.FOTreeBuilder fatalError
SEVERE: org.xml.sax.SAXParseException; systemId: file:///Users/auto_test/ground_report-1.5/style/xsl/docbook-xsl-ns-1.75.0/fo/docbook.xsl; lineNumber: 222; columnNumber: 59; java.lang.IllegalStateException: endElement() called for fo:root where there is no current element.
file:///Users/auto_test/ground_report-1.5/style/xsl/docbook-xsl-ns-1.75.0/fo/docbook.xsl; Line #222; Column #59; java.lang.IllegalStateException: endElement() called for fo:root where there is no current element.
Time Taken = 00 Hours 00 Minutes 04 Seconds

Needless to say the pdf that shows up in the output directory is corrupted. The rest of the graphs look to be alright though... any ideas as to what is causing the endElement error?
From: Calum F. <cal...@gm...> - 2012-11-19 13:41:30
Hi,

A couple of things. First, this is not the right place to be looking for support for the ground report: that project has its own support forums and mailing list. Secondly, it looks like your contractor has taken the original code and developed, at the least, a new database backend for it based on Oracle. There is no way for us to know what else this contractor has modified, which will make support extremely challenging.

Your error message states that whatever you are using for a JVM does not recognise the options being passed to it. This would point either to something not being right with the syntax of the arguments being passed to the JVM, or to a problem with the variant of JVM being used. Is it a Sun JVM?

Cheers
Cal

On 17 Nov 2012, at 18:32, <FM...@te...> wrote:

> Can someone help me please with grinder report set up? Thanks
>
> From: Mehran, Fariba
> Sent: Friday, November 16, 2012 09:11 AM
> To: 'gri...@li...' <gri...@li...>
> Subject: Problem with Ground Reports SetUp
>
> From: Mehran, Fariba
> Sent: Friday, November 16, 2012 9:03 AM
> To: 'gri...@li...'
> Subject:
>
> Can you please see attached and help me to set up my ground report successfully? The contractor who worked for us did set up a series of Oracle tables (shown here), but I don't know how to load my data log to these tables and connect ground reports to it to produce reports and graphs.
> Thanks for your help
> <image001.png>
>
> <mime-attachment>
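For anyone hitting a similar "unrecognised option" error, a quick way to confirm which JVM variant the scripts are actually picking up is to dump the standard system properties; a throwaway Jython check (the property keys are standard java.lang.System ones):

    # Print the JVM variant in use before chasing argument-syntax problems.
    from java.lang import System

    for key in ("java.vendor", "java.version", "java.vm.name"):
        print "%s = %s" % (key, System.getProperty(key))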
From: Calum F. <cal...@gm...> - 2011-11-10 13:52:36
Hiya,

The reason for the error that you are receiving is that the docbook XML being produced for the report has an empty table, or rather a table with some table headers but no data rows. It should handle an absence of data gracefully, so this is a bug. Please raise a bug report.

In what order did you load the files? And if you loaded the descriptions file after the data file, did you refresh the materialised views?

Regarding the descriptions file (and apologies that the documentation is not clear): for the use case of HTTP data, the third column is for categorising the test numbers. The categories are:

- Page, P: used for composite tests (those tests ending with 00 if the plugin was used to generate the test numbers);
- Static, S: used for tests getting static html, css, images, javascript, pdf, etc. (pregenerated content);
- Dynamic, D: used for tests where the backend system has to generate the response.

In your case your web service calls would be categorised as dynamic, d, tests. However, since your entire load run is made up of a single category of test, there is no need to use the descriptions file to separate out the results. If you ran a report without the descriptions file, the element graphs and data results would be the same as if you used a descriptions file and looked at the dynamic element graphs and data results.

Hope this helps

Cheers
Cal

On 8 Nov 2011, at 20:31, Ouray Viney wrote:

> Hi:
>
> I am seeing issues generating a report with a http description file uploaded. I have reviewed the Ground Reports docs but don't see enough details =(. I am hoping that this is a matter of configuration changes required to the groundReport.data file.
>
> A little background information:
> =========================
> - The test data files are generated using the Grinder HTTP client plugin, so the http description file is a must (the norm desc. file is for non-http tests).
> - The http description file looks like the example snippet below.
> - I really don't know what the third column value should be as these are all web service calls over http. I assigned them all as static; though I am not sure if that is correct.
>
> "Test","Description","SorDorP"
> 1101,"Small Group - Domestic -> POST Create Shipment",s
> 1201,"Small Group - Domestic -> GET Shipment",s
> 1301,"Small Group - Domestic -> GET Shipment Artifact",s
> 1401,"Small Group - Domestic -> POST Transmit Shipments",s
> 1501,"Small Group - Domestic -> GET Manifest",s
> 1601,"Small Group - Domestic -> GET Manifest Artifact",s
>
> Right now I try to generate a report with:
>
> 1) a grinder run uploaded and
> 2) an http description file uploaded
>
> Here is the full output from the command line:
>
> GRAPH CREATION
> ________________
>
> Generating Graphs
> /n Time Taken = 00 Hours 00 Minutes 01 Seconds/n
>
> REPORT CREATION
> _________________
>
> Summary Article Information:
>
> Generating Graphs
> /n Time Taken = 00 Hours 00 Minutes 07 Seconds/n
>
> Generating Article
> /n Time Taken = 00 Hours 00 Minutes 00 Seconds/n
>
> Post-Processing Article
> Nov 8, 2011 3:21:37 PM org.apache.fop.render.rtf.RTFHandler startPageSequence
> WARNING: Only simple-page-masters are supported on page-sequences: body
> Nov 8, 2011 3:21:37 PM org.apache.fop.render.rtf.RTFHandler startPageSequence
> WARNING: Using default simple-page-master from page-sequence-master...
> Error on line 499 of formal.xsl:
> SXCH0003: org.apache.fop.fo.ValidationException: file:/D:/Headwall_Software/Projects/foobar/development/ground_report-1.5/style/xsl/docbook-xsl-ns-1.75.0/fo/formal.xsl:518:-1: Error(518/-1): fo:table-body is missing child elements.
> Required Content Model: marker* (table-row+|table-cell+)
> at xsl:apply-templates (file:/D:/Headwall_Software/Projects/foobar/development/ground_report-1.5/style/xsl/docbook-xsl-ns-1.75.0/fo/sections.xsl#127)
> processing /article/section[6]/table[1]
> at xsl:call-template name="section.content" (file:/D:/Headwall_Software/Projects/foobar/development/ground_report-1.5/style/xsl/docbook-xsl-ns-1.75.0/fo/sections.xsl#60)
> at xsl:apply-templates (file:/D:/Headwall_Software/Projects/foobar/development/ground_report-1.5/style/xsl/docbook-xsl-ns-1.75.0/fo/component.xsl#720)
> processing /article/section[6]
> at xsl:apply-templates (file:///D:/Headwall_Software/Projects/foobar/development/ground_report-1.5/style/xsl/docbook-xsl-ns-1.75.0/fo/docbook.xsl#310)
> processing /article
> in built-in template rule
> at xsl:apply-templates (file:///D:/Headwall_Software/Projects/foobar/development/ground_report-1.5/style/xsl/docbook-xsl-ns-1.75.0/fo/docbook.xsl#222)
> processing /
> Traceback (most recent call last):
> File "D:/Headwall_Software/Projects/foobar/development/ground_report-1.5/lib/groundReport.py", line 351, in <module>
> eval(articleType)()
> File "D:/Headwall_Software/Projects/foobar/development/ground_report-1.5/lib/groundReport.py", line 351, in <module>
> eval(articleType)()
> File "D:\Headwall_Software\Projects\foobar\development\ground_report-1.5\lib\articleFactory.py", line 463, in createSummaryArticle
> self.postProcessArticle(output, fullChartList, filename)
> File "D:\Headwall_Software\Projects\foobar\development\ground_report-1.5\lib\articleFactory.py", line 76, in postProcessArticle
> xt.articleFOPTransform()
> File "D:\Headwall_Software\Projects\foobar\development\ground_report-1.5\lib\xmlUtilities.py", line 94, in articleFOPTransform
> transformer.transform(source, result)
> net.sf.saxon.trans.XPathException: org.apache.fop.fo.ValidationException: file:/D:/Headwall_Software/Projects/foobar/development/ground_report-1.5/style/xsl/docbook-xsl-ns-1.75.0/fo/formal.xsl:518:-1: Error(518/-1): fo:table-body is missing child elements.
> Required Content Model: marker* (table-row+|table-cell+)
> at net.sf.saxon.event.ContentHandlerProxy.handleSAXException(ContentHandlerProxy.java:521)
> at net.sf.saxon.event.ContentHandlerProxy.endElement(ContentHandlerProxy.java:393)
> at net.sf.saxon.event.NamespaceReducer.endElement(NamespaceReducer.java:213)
> at net.sf.saxon.event.ComplexContentOutputter.endElement(ComplexContentOutputter.java:432)
> at net.sf.saxon.tinytree.TinyElementImpl.copy(TinyElementImpl.java:407)
> at net.sf.saxon.tinytree.TinyDocumentImpl.copy(TinyDocumentImpl.java:293)
> at net.sf.saxon.instruct.CopyOf.processLeavingTail(CopyOf.java:444)
> at net.sf.saxon.instruct.Choose.processLeavingTail(Choose.java:686)
> at net.sf.saxon.expr.LetExpression.processLeavingTail(LetExpression.java:549)
> at net.sf.saxon.instruct.Block.processLeavingTail(Block.java:556)
> at net.sf.saxon.instruct.Template.applyLeavingTail(Template.java:203)
> at net.sf.saxon.instruct.ApplyTemplates.applyTemplates(ApplyTemplates.java:345)
> at net.sf.saxon.instruct.ApplyTemplates$ApplyTemplatesPackage.processLeavingTail(ApplyTemplates.java:527)
> at net.sf.saxon.instruct.CallTemplate.process(CallTemplate.java:259)
> at net.sf.saxon.instruct.CallTemplate.processLeavingTail(CallTemplate.java:281)
> at net.sf.saxon.instruct.Block.processLeavingTail(Block.java:556)
> at net.sf.saxon.instruct.Instruction.process(Instruction.java:93)
> at net.sf.saxon.instruct.ElementCreator.processLeavingTail(ElementCreator.java:296)
> at net.sf.saxon.instruct.Choose.processLeavingTail(Choose.java:686)
> at net.sf.saxon.expr.LetExpression.processLeavingTail(LetExpression.java:549)
> at net.sf.saxon.instruct.Choose.processLeavingTail(Choose.java:686)
> at net.sf.saxon.instruct.Template.applyLeavingTail(Template.java:203)
> at net.sf.saxon.instruct.ApplyTemplates.applyTemplates(ApplyTemplates.java:345)
> at net.sf.saxon.instruct.ApplyTemplates.apply(ApplyTemplates.java:210)
> at net.sf.saxon.instruct.ApplyTemplates.processLeavingTail(ApplyTemplates.java:174)
> at net.sf.saxon.instruct.Block.processLeavingTail(Block.java:556)
> at net.sf.saxon.expr.LetExpression.processLeavingTail(LetExpression.java:549)
> at net.sf.saxon.instruct.Block.processLeavingTail(Block.java:556)
> at net.sf.saxon.instruct.Instruction.process(Instruction.java:93)
> at net.sf.saxon.instruct.ElementCreator.processLeavingTail(ElementCreator.java:296)
> at net.sf.saxon.instruct.Block.processLeavingTail(Block.java:556)
> at net.sf.saxon.instruct.Instruction.process(Instruction.java:93)
> at net.sf.saxon.instruct.ElementCreator.processLeavingTail(ElementCreator.java:296)
> at net.sf.saxon.expr.LetExpression.processLeavingTail(LetExpression.java:549)
> at net.sf.saxon.instruct.Template.applyLeavingTail(Template.java:203)
> at net.sf.saxon.instruct.ApplyTemplates.applyTemplates(ApplyTemplates.java:345)
> at net.sf.saxon.instruct.ApplyTemplates.apply(ApplyTemplates.java:210)
> at net.sf.saxon.instruct.ApplyTemplates.processLeavingTail(ApplyTemplates.java:174)
> at net.sf.saxon.instruct.Block.processLeavingTail(Block.java:556)
> at net.sf.saxon.instruct.Instruction.process(Instruction.java:93)
> at net.sf.saxon.instruct.ElementCreator.processLeavingTail(ElementCreator.java:296)
> at net.sf.saxon.instruct.Block.processLeavingTail(Block.java:556)
> at net.sf.saxon.expr.LetExpression.processLeavingTail(LetExpression.java:549)
> at net.sf.saxon.instruct.Template.applyLeavingTail(Template.java:203)
> at net.sf.saxon.instruct.ApplyTemplates.applyTemplates(ApplyTemplates.java:345)
> at net.sf.saxon.instruct.ApplyTemplates.defaultAction(ApplyTemplates.java:378)
> at net.sf.saxon.instruct.ApplyTemplates.applyTemplates(ApplyTemplates.java:333)
> at net.sf.saxon.instruct.ApplyTemplates$ApplyTemplatesPackage.processLeavingTail(ApplyTemplates.java:527)
> at net.sf.saxon.Controller.transformDocument(Controller.java:1812)
> at net.sf.saxon.Controller.transform(Controller.java:1621)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
>
> net.sf.saxon.trans.XPathException: net.sf.saxon.trans.XPathException: org.apache.fop.fo.ValidationException: file:/D:/Headwall_Software/Projects/foobar/development/ground_report-1.5/style/xsl/docbook-xsl-ns-1.75.0/fo/formal.xsl:518:-1: Error(518/-1): fo:table-body is missing child elements.
> Required Content Model: marker* (table-row+|table-cell+)
>
> --
> Ouray Viney
> http://www.viney.ca
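Given the category scheme Cal describes, Ouray's file would want 'd' rather than 's' in the third column. A throwaway sketch (Python 2 era, to match the project's Jython tooling) that re-tags an existing descriptions file; the file names are hypothetical and the three-column layout follows the snippet quoted in the thread, not the ground report documentation:

    # Re-tag every test in an HTTP descriptions file as dynamic ('d'),
    # per the P/S/D scheme described above. File names are hypothetical.
    import csv

    with open("descriptions.csv", "rb") as src:
        rows = list(csv.reader(src))

    header, body = rows[0], rows[1:]
    for row in body:
        row[2] = "d"  # web service calls: the backend generates each response

    with open("descriptions_fixed.csv", "wb") as dst:
        csv.writer(dst).writerows([header] + body)

As Cal notes, with a single category of test the descriptions file can simply be omitted; the element graphs then match what the dynamic graphs would have shown.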
From: Calum F. <cal...@gm...> - 2011-10-26 22:49:12
Hiya,

Sorry about the slight delay in response, and thank you for the stats that you have supplied; they are really interesting. I have been doing some research into the options.

It looks like you got some important performance gains by using pgtune. On my dev system (postgres 8.3) the time taken for one of my sample load runs, with about 1.75 million rows, is 3 minutes 14 seconds, and the time taken for the materialised views to refresh is 2 minutes. Refreshing just the element_buckets_http_mv took 41 seconds. Loading the same file as a second load run (taking the number of rows in raw_data to 3.5 million rows) took 3 minutes 41 seconds, and the materialised views refresh took 5 minutes 14 seconds. Refreshing just the element_buckets_http_mv took 2 minutes.

Do you use a descriptions file? If so then this will push times out further, as some of the materialised views that don't do anything when there are no descriptions present will kick in and have to do some processing.

So comparing your stats and mine, things are looking fairly similar, or rather in line with expectations, based on the current way the db works and some assumptions about your usage and your reported volumes in the db. Also, looking at the results, it is the full refresh of the materialised views for all the load runs in the db each time that starts to hurt. The upload time for the file seems to remain fairly static.

Looking at options to speed things up: how to speed up uploads, and how to speed up materialised view refreshes.

The upload to the raw_data table is only really going to be improved by using an unlogged table. Having had a look at partitioning, I'm not sure there will be a performance improvement for uploads by implementing it; in fact, due to the row based triggers needed (as the COPY command is used to upload from a file), upload times will be adversely impacted….and arguably this will counteract any performance improvements to materialised view refreshes there might be.

As for the materialised view refresh improvements, the best way to improve performance would be to only update the tables based on the newly uploaded load run, and not do a full refresh. Looking at the very lazy materialised views model, it looks like it might be really complicated to implement. There are a large number of materialised views which would each require child tables to track changes in the underlying tables/materialised views, with triggers on those underlying tables which each might have to update multiple tables tracking changes for their parent materialised views. Whilst the updates might be lighter weight when the refresh happens, the extra IO that has to happen to keep track of changes will probably mean that overall the gains might not be that great. It also just looks too complicated. <chuckle>

So my thoughts are coming back in line with my original future plans….remove the views and materialised views and replace them with results tables that are populated via stored procedures. These stored procedures can then perform updates to the tables rather than full refreshes. Simple really.. <cough> …though thinking about it, it would just be a case of rewriting each of the views' SQL in the form of a stored procedure with some modifications….however there are a lot of views :-)

I'm also guessing that implementing the command line interface would speed up reloading of the data into the db if needed.

Cal

On 20 Oct 2011, at 18:18, Kelvin Ward wrote:

> Recently I managed to corrupt the postgres database when there were VM problems, sigh. Following that I found that I actually had postgres 8.4 installed on the VM; version 9 was only on my development machine. I've since reinstalled postgres 9.1 on the VM and I'm re-filling the tables with ground report data. I've been uploading data to it for many hours to bring about the performance issues again. After uploading about 170 small (15 minute) load runs the database is about half the size it was before I corrupted it.
>
> When I first started uploading a report, it took < 25 seconds. When I got to report ~160 the worst case was about 5m30s. At that point I ran pgtune, setting it as a data warehouse type of DB. It has helped a bit (~40% improvement) as the worst upload time for the next few reports was 3m42s.
>
> Running select refresh_matview('element_buckets_http_mv') took about 55 seconds worst case, whereas before the tuning it was 84 seconds.
>
> The raw_data_http table is about 259MB, 2.247 million rows. It was about 5.82 million rows before the corruption, so I've got a way to go before it builds up again.
> The whole DB is 970MB at the moment. The corrupted one was ~2.3GB.
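Since serial refreshes beat the concurrent ones on Kelvin's I/O-bound box, and the tiers must run in order, the interim change discussed in this thread amounts to something like the sketch below. psycopg2 is assumed, and the tier grouping shown is purely illustrative; the authoritative ordering lives in refreshMaterialisedViewsMT in multithreadDatabaseFactory.py:

    # Serial, tier-ordered refresh of the report's materialised views.
    # Assumes psycopg2; refresh_matview() is the function the ground
    # report installs. The tier contents here are illustrative only.
    import psycopg2

    TIERS = [
        # tier 1: views built directly from raw_data_http
        ["element_summary_http_mv", "element_buckets_http_mv"],
        # tier 2: views that read tier 1 output
        ["element_percentile_http_mv", "element_stderror_http_mv"],
    ]

    conn = psycopg2.connect("dbname=groundreport")
    cur = conn.cursor()
    for tier in TIERS:          # tiers are order dependent
        for view in tier:       # views within a tier are not
            cur.execute("SELECT refresh_matview(%s)", (view,))
        conn.commit()           # commit tier by tier
    conn.close()

On a disk-bound system this trades the concurrency for far less thrashing, which is exactly the effect Kelvin measured.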
From: Calum F. <cal...@gm...> - 2011-10-16 18:14:17
Hiya,

You are using the ground report as it should be used. I was getting confused when you said that you were deleting the raw data…I assumed you were truncating the raw_data table within the db, not deleting the data log files…mea culpa.

Pgtune looks like your best bet in the short term. The next best thing would be to try and implement the very lazy materialised views….this would give you a decent boost when adding in new load runs….I will have a look into how feasible it is to implement this when I get 5 minutes to spare.

How big is your raw_data_http (or norm) table? Since you have about 80 load runs loaded, it would be useful to know how many rows it holds and what size the table is (in GB). Another performance improvement to add to the to do list would be to see if it is possible to create the raw_data table as a partitioned table, split on the load run id. This would improve the performance when querying the raw_data table by load run id.

Cheers
Cal

On 14 Oct 2011, at 10:57, Kelvin Ward wrote:

> I use option 4 of databaseInterface.bash to upload the data from a run (which always refreshes the materialised views). After that I use generateReport.bash to make graphs for specific load runs. I'm not doing anything else to change the database.
>
> The raw data_*.log files from which the data is uploaded are usually deleted. From what I understand, I don't need to keep the data log files because the useful facts like mean, 95th percentile and transaction throughput are preserved for each load run in the DB and a report can be generated at any time. Do you typically use the option to delete the database instead of keeping all load runs in the database? I understand that would definitely be faster because there is only one run in the DB at one time, but it requires preserving the data log files and re-uploading them each time a report is needed.
>
> I'll definitely check out pgtune.
>
> Thanks.
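On the partitioning idea, Postgres of this era had no declarative partitioning, so splitting raw_data by load run would mean inheritance-style child tables. A rough sketch of the shape of it, assuming psycopg2 and simplified table/column names (the ground report's real schema may differ):

    # Inheritance-based partitioning of raw data by load run id.
    # Assumes psycopg2; table and column names are simplified guesses.
    import psycopg2

    def add_partition(cur, run_id):
        # One child table per load run; the CHECK constraint lets the
        # planner skip irrelevant partitions when constraint_exclusion
        # is on and queries filter by load_run_id.
        cur.execute("""
            CREATE TABLE raw_data_http_run_%(id)d (
                CHECK (load_run_id = %(id)d)
            ) INHERITS (raw_data_http)
        """ % {"id": int(run_id)})

    conn = psycopg2.connect("dbname=groundreport")
    cur = conn.cursor()
    add_partition(cur, 81)  # next load run
    conn.commit()

Note the caveat Cal raises in the 2011-10-26 message still applies: COPY into the parent table would need a row-level trigger to route each row to the right child, which is exactly the per-row overhead that could cancel out the query-side gains.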
From: Calum F. <cal...@gm...> - 2011-10-12 11:21:22
Begin forwarded message:

> From: Calum Fitzgerald <cal...@gm...>
> Subject: Re: [Ground-user] Refreshing Materialized Views Performance
> Date: 12 October 2011 12:09:52 GMT+01:00
> To: Kelvin Ward <kel...@go...>
>
> Hi Kelvin,
>
> Are you deleting the raw data after every data upload? If you are doing this then the materialised views will only contain results data from the load runs for which there was raw data in the db at the time of the refresh. Admittedly I have never used the ground report in the fashion that you have; do you have historic results in the materialised views from the previous load runs for which you have deleted the raw data?
>
> If you don't have the historic data then it might well be worth recreating the db for each load run that you will be producing a report for, but this will mess up the load run numbering (if you are using that data) as the sequence would be reset each time you recreated the db.
>
> If you are only processing the raw data for the load run in question each time, then changing the method of the materialised view generation within the DB will not improve performance. If the serial processing of the materialised views works better for you then you could change the jython code to perform a serial refresh rather than the concurrent one it uses presently. However you must perform the refreshes in sequence, otherwise the resulting data will be incorrect.
>
> Otherwise, to improve performance for your current setup you need to look at tweaking the VM/OS/DB settings. Under load, what is the CPU utilisation like? Is only one core of your 4 core VM pegged at 100%, or is the load spread amongst the cores? Can the speed of the CPU for your VM be increased? Would you get better performance running under a single fast CPU VM versus a multicore VM? Can the OS be changed to buffer the net app storage, or could the DB settings be changed to do similar, since I/O is an issue?
>
> If the I/O is not great then your best bet is to buffer the data as much as possible. Try using the pgtune script; I use it for my initial db setup these days. If your VM is a dedicated DB VM and you can allocate as much RAM as possible to the DB, then the pgtune scripts will create more aggressive settings to use. This should have a beneficial impact on your I/O performance as it should increase the buffering.
>
> Cheers
> Cal
>
> On 12 Oct 2011, at 11:41, Kelvin Ward wrote:
>
>> I will try to find some time to look into the issues and hack it. I've limited DB experience so it could be something new and interesting.
>>
>> Right now I am deleting the raw data after it's stored in the DB, but it could be archived instead.
>> The DB is running on a xen virtual machine with 4 CPUs and 4GB RAM. The I/O isn't great because the storage is a netapp shared with other VMs.
>>
>> Thanks
>>
>> On 11 October 2011 16:42, Calum Fitzgerald <cal...@gm...> wrote:
>> Hi Kelvin,
>>
>> It is great to hear that the ground report is being so heavily used.
>>
>> I was/am aware of the bottleneck in the uploading of the data and how the materialised views refresh themselves. The serial vs concurrent processing information is interesting; in my usage it had been the other way around. Also, I'm not 100% sure that the requests are truly concurrent, and it is on the todo list to investigate that further: the calls are issued concurrently within the jython code, however they are going down a single DB connection/cursor and I am not sure if that reverts them back to serial operations. It is on the todo list to investigate creating a connection pool and having multiple cursors to handle the refreshes.
>>
>> Have a look at the following jython code: 'refreshMaterialisedViewsMT' in the file multithreadDatabaseFactory.py. This will show you the order in which the materialised views need to be run to produce the correct results from the raw data. The views in each tier are order independent, but the tiers must be processed in a serial fashion. This is due to dependencies between the materialised views regarding the input data for each materialised view.
>>
>> I had been planning to look at rewriting that whole mechanism for creating the report data. Currently there are materialised views that are built off the raw data, and each time that the materialised views are refreshed the raw data needs to be reprocessed. The reason is that the materialised view mechanism that was written is a very simple one, as creating a materialised view that updated based on changes to the underlying tables was a more complex and time consuming task to take on than I had time for….there was no native materialised view functionality in postgres at the time of creation of the ground report….and I'm not sure there is native functionality now…
>>
>> If you feel like hacking, here is a link to some code that you could use to change the implementation of the materialised views so that it would not create everything from scratch again: http://tech.jonathangardner.net/wiki/PostgreSQL/Materialized_Views#Very_Lazy_Materialized_Views
>>
>> I use the snapshot view implementation of materialised views from the above wiki in the ground report.
>>
>> Postgres 9 brings some interesting changes/additions to the table. My thoughts had been to rewrite the upload mechanism to make use of an unlogged or temporary table for the raw data, and stored procedures to update the report tables (analogous to the current materialised views), then dumping the temp table after the data had been processed. This would keep the overall DB smaller (though you would need to be able to handle the uploaded data volume) and there would not be the reprocessing of the historic data. It should also be faster than the materialised view approach as the concurrency would be programmed into the stored procedures. The only disadvantage is that you would lose the historical record of the raw data from within the db… whether this is a problem or not depends on whether you delete the data files from which the raw data originates. Would you miss having the raw data record within the reporting db?
>>
>> It would be interesting to have some runtime stats from postgres during the running of the materialised views when the ground report jython script is used….it would be good to see if there is any locking going on or whether IO is the limiting factor.
>>
>> How much memory do you have in your box? Have a look at running this script: http://pgfoundry.org/projects/pgtune/ . It outputs an optimised postgresql.conf based on information that you feed it. It has not been updated in a while, however it may make some suggestions that could improve your db performance. If you can throw more memory at the DB process it may be able to buffer the disk better and reduce your IO.
>>
>> Cheers
>> Cal
>>
>> On 11 Oct 2011, at 13:09, Kelvin Ward wrote:
>>
>>> Hi
>>>
>>> I've been using ground report a fair bit and I've noticed a performance bottleneck in uploading report data. Specifically, after data from a load run has been put into the database, each database 'view' is refreshed using the postgres function refresh_matview. This is an I/O heavy function, and running each request serially I see the following times to complete:
>>>
>>> SELECT refresh_matview('element_summary_http_mv'); 31s
>>> ...
>>> element_buckets_http_mv 269s !
>>> element_percentile_http_mv 98s
>>> total_element_per_second_stats_http_mv 57s
>>> concurrent_users_byrun_http_mv 54s
>>> thread_stop_buckets_http_mv 49s
>>> individual_element_per_second_stats_http_mv 43s
>>> element_distribution_http_mv 43s
>>> ...
>>> element_totalbandwidth_http_mv 2s
>>> page_distribution_http_mv 6s
>>> page_summary_http_mv 6s
>>> element_max_time_http_mv 2s
>>> element_min_time_http_mv 2s
>>> static_element_percentile_http_mv 7s
>>> dynamic_element_percentile_http_mv 7s
>>> page_percentile_http_mv < 1s
>>> page_buckets_http_mv 3s
>>> page_stderror_http_mv 3s
>>> element_stderror_http_mv < 1s
>>> thread_start_buckets_http_mv 1s
>>> total_page_per_second_stats_http_mv 1s
>>> individual_page_per_second_stats_http_mv 1s
>>> static_element_per_second_stats_http_mv 1s
>>> dynamic_element_per_second_stats_http_mv 1s
>>> element_95confidence_http_mv 1s
>>> mean_load_throughput_stats_http_mv 1s
>>> concurrent_users_vs_tps_http_mv 1s
>>> concurrent_users_vs_pps_http_mv 1s
>>> concurrent_users_vs_statictps_http_mv 1s
>>> concurrent_users_vs_dynamictps_http_mv 1s
>>> concurrent_users_vs_totalrtime_http_mv 1s
>>> concurrent_users_vs_pagertime_http_mv 1s
>>> concurrent_users_vs_staticrtime_http_mv 1s
>>> concurrent_users_vs_dynamicrtime_http_mv 1s
>>>
>>> Right now I have about 80 load runs in the database. I know the I/O performance of my hardware is not great, but even with 80 load runs there are about 5.8 million rows in the slowest view, 'element_buckets_http_mv'.
>>> I'm using postgres9, which has automatic vacuuming of the database, and CPU or memory is not hit hard when running refresh_matview(); it's definitely a disk bottleneck. As ground report seems to do 10 concurrent calls to refresh_matview(), the disk is being heavily thrashed, and running in parallel seems to be slower than serial in this case - it's taking 19 minutes to complete refreshing all the views.
>>>
>>> Trying to understand the ground report python code and postgres, which I'm new to, it seems that refresh_matview deletes all rows in a table like 'element_buckets_http_mv' before copying in new data to it. I'm wondering if that can be improved?
>>>
>>> Cheers
>>> Kelvin.
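Cal's Postgres 9 idea above (stage the raw data in an unlogged table, fold it into the report tables with stored procedures, then throw the staging data away) might look roughly like this. psycopg2 is assumed; the staging table, log file and update_report_tables() procedure are hypothetical names, and UNLOGGED tables need PostgreSQL 9.1 or later:

    # Staged-upload sketch: UNLOGGED skips WAL so the COPY is cheap, and
    # only the new run is folded into the report tables. Table, file and
    # procedure names here are hypothetical.
    import psycopg2

    conn = psycopg2.connect("dbname=groundreport")
    cur = conn.cursor()

    cur.execute("CREATE UNLOGGED TABLE IF NOT EXISTS raw_data_staging "
                "(LIKE raw_data_http)")
    with open("data_agent0.log") as log:
        cur.copy_expert("COPY raw_data_staging FROM STDIN WITH CSV HEADER", log)

    # Hypothetical stored procedure: appends the staged run's results to
    # the report tables instead of rebuilding them from all raw data.
    cur.execute("SELECT update_report_tables()")

    cur.execute("TRUNCATE raw_data_staging")  # keep the db small
    conn.commit()
    conn.close()

The trade-off is the one Cal names: the db no longer holds the historical raw data, so the original data log files become the only archive.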
From: Calum F. <cal...@gm...> - 2011-10-11 15:43:30
|
Hi Kelvin, It is great to hear that the ground report is being so heavily used. I was/am aware of the bottleneck in the uploading of the data and how the materialised views refresh themselves. The serial vs concurrent processing information is interesting; in my usage it had been the other way around. Also I'm not 100% sure that the requests are truly concurrent and it is on the todo list to investigate that further: the calls are concurrent within the jython code, however they are going down a single DB connection/cursor and I was not sure if that reverted them back to serial operations. It is on the todo list to investigate creating a connection pool and having multiple cursors to handle the refreshes. Have a look at the following jython code: 'refreshMaterialisedViewsMT' in the file multithreadDatabaseFactory.py. This will show you the order in which the materialised views need to be run to produce the correct results from the raw data. The views in each tier are order independent but the tiers must be processed in a serial fashion. This is due to dependencies between the materialised views regarding the input data for each materialised view. I had been planning to look at rewriting that whole mechanism for creating the report data. Currently there are materialised views that are built off the raw data, and each time that the materialised views are refreshed the raw data needs to be reprocessed. The reason is that the materialised view mechanism that was written is a very simple one, as creating a materialised view that updated based on changes to the underlying tables was a more complex and time consuming task to take on than I had time for….there was no native materialised view functionality in postgres at the time of creation of the ground report….and I'm not sure there is native functionality now… If you feel like hacking here is a link to some code that you could use to change the implementation of the materialised views so that it would not create everything from scratch again. http://tech.jonathangardner.net/wiki/PostgreSQL/Materialized_Views#Very_Lazy_Materialized_Views I use the snapshot view implementation of materialised views from the above wiki in the ground report. Postgres 9 brings some interesting changes/additions to the table. My thoughts had been to rewrite the upload mechanism to make use of an unlogged or temporary table for the raw data and stored procedures to update the report tables (analogous to the current materialised views), and then dumping the temp table after the data had been processed. This would keep the overall DB smaller (though you would need to be able to handle the uploaded data volume) and there would not be the reprocessing of the historic data. It should also be faster than the materialised view approach as the concurrency would be programmed into the stored procedures. The only disadvantage is that you would lose the historical record of the raw data from within the db… whether this is a problem or not depends on whether you delete the data files from which the raw data originates. Would you miss having the raw data record within the reporting db? It would be interesting to have some runtime stats from postgres during the running of the materialised views when the ground report jython script is used….it would be good to see if there is any locking going on or whether IO is the limiting factor. How much memory do you have in your box? Have a look at running this script http://pgfoundry.org/projects/pgtune/ . 
It outputs an optimised postgresql.conf based on information that you feed it. It has not been updated in a while, however it may make some suggestions that could improve your db performance. If you can throw more memory at the DB process it may be able to buffer the disk better and reduce your IO. Cheers Cal On 11 Oct 2011, at 13:09, Kelvin Ward wrote: > Hi > > I've been using ground report a fair bit and I've noticed a performance bottleneck in uploading report data. Specifically, after data from a load run has been put into the database each database 'view' is refreshed using the postgres function refresh_matview. This is an I/O heavy function and running each request serially I see the following times to complete: > > SELECT refresh_matview('element_summary_http_mv'); 31s > ... > element_buckets_http_mv 269s ! > element_percentile_http_mv 98s > total_element_per_second_stats_http_mv 57s > concurrent_users_byrun_http_mv 54s > thread_stop_buckets_http_mv 49s > individual_element_per_second_stats_http_mv 43s > element_distribution_http_mv 43s > ... > element_totalbandwidth_http_mv 2s > page_distribution_http_mv 6s > page_summary_http_mv 6s > element_max_time_http_mv 2s > element_min_time_http_mv 2s > static_element_percentile_http_mv 7s > dynamic_element_percentile_http_mv 7s > page_percentile_http_mv < 1s > page_buckets_http_mv 3s > page_stderror_http_mv 3s > element_stderror_http_mv < 1s > thread_start_buckets_http_mv 1s > total_page_per_second_stats_http_mv 1s > individual_page_per_second_stats_http_mv 1s > static_element_per_second_stats_http_mv 1s > dynamic_element_per_second_stats_http_mv 1s > element_95confidence_http_mv 1s > mean_load_throughput_stats_http_mv 1s > concurrent_users_vs_tps_http_mv 1s > concurrent_users_vs_pps_http_mv 1s > concurrent_users_vs_statictps_http_mv 1s > concurrent_users_vs_dynamictps_http_mv 1s > concurrent_users_vs_totalrtime_http_mv 1s > concurrent_users_vs_pagertime_http_mv 1s > concurrent_users_vs_staticrtime_http_mv 1s > concurrent_users_vs_dynamicrtime_http_mv 1s > > Right now I have about 80 load runs in the database. I know the I/O performance of my hardware is not great, but even with 80 load runs there's about 5.8 million rows in the slowest view 'element_buckets_http_mv'. > I'm using postgres9 which has automatic vacuuming of the database and CPU or memory is not hit hard when running refresh_matview(), it's definitely a disk bottleneck. As ground report seems to do 10 concurrent calls to refresh_matview() the disk is being heavily thrashed and running in parallel seems to be slower that serial in this case - it's taking 19 minutes to complete refreshing all the views. > > Trying to understand the ground report python code and postgres, which I'm new to, it seems that refresh_matview deletes all rows in a table like 'element_buckets_http_mv' before copying in new data to it. I'm wondering if that can be improved? > > Cheers > Kelvin. > > > ------------------------------------------------------------------------------ > All the data continuously generated in your IT infrastructure contains a > definitive record of customers, application performance, security > threats, fraudulent activity and more. Splunk takes this data and makes > sense of it. Business sense. IT sense. Common sense. > http://p.sf.net/sfu/splunk-d2d-oct_______________________________________________ > Ground-user mailing list > Gro...@li... > https://lists.sourceforge.net/lists/listinfo/ground-user |
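As a concrete illustration of the "very lazy" variant from the wiki page Calum links: where the snapshot implementation deletes every row and re-aggregates all historic raw data, the lazy version deletes and recomputes only the slice belonging to the new load run. A minimal sketch, using a zxJDBC-style connection as in the earlier sketch; the matview name and its "Load Run" column come from the thread, but the backing view name element_buckets_http_v is an assumption.

def lazy_refresh(conn, load_run):
    # Recompute only the rows belonging to one load run instead of
    # rebuilding the whole materialised view table from scratch.
    c = conn.cursor()
    c.execute('DELETE FROM element_buckets_http_mv WHERE "Load Run" = ?',
              [load_run])
    c.execute('INSERT INTO element_buckets_http_mv '
              'SELECT * FROM element_buckets_http_v WHERE "Load Run" = ?',
              [load_run])
    conn.commit()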
From: Kelvin W. <kel...@go...> - 2011-10-11 12:09:48
|
Hi

I've been using ground report a fair bit and I've noticed a performance bottleneck in uploading report data. Specifically, after data from a load run has been put into the database each database 'view' is refreshed using the postgres function refresh_matview. This is an I/O heavy function and, running each request serially, I see the following times to complete:

SELECT refresh_matview('element_summary_http_mv'); 31s
...
element_buckets_http_mv 269s !
element_percentile_http_mv 98s
total_element_per_second_stats_http_mv 57s
concurrent_users_byrun_http_mv 54s
thread_stop_buckets_http_mv 49s
individual_element_per_second_stats_http_mv 43s
element_distribution_http_mv 43s
...
element_totalbandwidth_http_mv 2s
page_distribution_http_mv 6s
page_summary_http_mv 6s
element_max_time_http_mv 2s
element_min_time_http_mv 2s
static_element_percentile_http_mv 7s
dynamic_element_percentile_http_mv 7s
page_percentile_http_mv < 1s
page_buckets_http_mv 3s
page_stderror_http_mv 3s
element_stderror_http_mv < 1s
thread_start_buckets_http_mv 1s
total_page_per_second_stats_http_mv 1s
individual_page_per_second_stats_http_mv 1s
static_element_per_second_stats_http_mv 1s
dynamic_element_per_second_stats_http_mv 1s
element_95confidence_http_mv 1s
mean_load_throughput_stats_http_mv 1s
concurrent_users_vs_tps_http_mv 1s
concurrent_users_vs_pps_http_mv 1s
concurrent_users_vs_statictps_http_mv 1s
concurrent_users_vs_dynamictps_http_mv 1s
concurrent_users_vs_totalrtime_http_mv 1s
concurrent_users_vs_pagertime_http_mv 1s
concurrent_users_vs_staticrtime_http_mv 1s
concurrent_users_vs_dynamicrtime_http_mv 1s

Right now I have about 80 load runs in the database. I know the I/O performance of my hardware is not great, but even with 80 load runs there's about 5.8 million rows in the slowest view 'element_buckets_http_mv'. I'm using postgres 9, which has automatic vacuuming of the database, and CPU or memory is not hit hard when running refresh_matview(); it's definitely a disk bottleneck. As ground report seems to do 10 concurrent calls to refresh_matview() the disk is being heavily thrashed, and running in parallel seems to be slower than serial in this case - it's taking 19 minutes to complete refreshing all the views.

Trying to understand the ground report python code and postgres (which I'm new to), it seems that refresh_matview deletes all rows in a table like 'element_buckets_http_mv' before copying in new data to it. I'm wondering if that can be improved?

Cheers
Kelvin. |
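Sketching the upload rewrite Calum floats above (stage the raw data in a temporary or unlogged table, fold it into the report tables with stored procedures, then drop the staging data). Nothing here is shipped ground report code: load_datafile_into and update_report_tables are hypothetical names standing in for the existing upload code and the proposed stored procedure.

def upload_load_run(conn, datafile):
    c = conn.cursor()
    # Stage the new raw rows in a session-local table; on Postgres 9 an
    # UNLOGGED table would serve the same purpose across sessions.
    c.execute("CREATE TEMP TABLE raw_staging (LIKE raw_data_http)")
    # hypothetical: the existing upload code, pointed at the staging table
    load_datafile_into(c, "raw_staging", datafile)
    # Fold just the staged rows into the report tables (the stored
    # procedure is the proposed analogue of today's materialised views),
    # then discard the staging rows -- no reprocessing of historic data.
    c.execute("SELECT update_report_tables()")   # hypothetical stored procedure
    c.execute("DROP TABLE raw_staging")
    conn.commit()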
From: Calum F. <cal...@gm...> - 2011-09-28 08:48:56
|
Great news. BTW you might want to look at using the development version that is uploaded to the git archive. This fixes a couple of problems that are in the general release. But if everything that you are using is working then no need. Cheers Cal On 26 Sep 2011, at 09:59, Gerry Brennan wrote: > Hi Calum > > This got me sorted > > Thank you > > > Sent from my iPhone > > On 24 Sep 2011, at 19:43, Calum Fitzgerald <cal...@gm...> wrote: > >> Hi Gerry, >> >> Apologies about the delay in response, I had some connectivity issues which I have now resolved. >> >> Please can you try upgrading the version of postgres to 8.2 or above. I would recommend trying version 9. >> >> The software was never tested on version 8.1 and it would be nice to rule out version incompatibility in the first instance. >> >> There are centos rpms available for postgres: >> >> http://yum.pgrpms.org/ >> >> here is the how-to page which is useful for integrating the repository with yum: >> >> http://yum.pgrpms.org/howtoyum.php >> >> Cheers >> Cal >> >> On 21 Sep 2011, at 09:15, Gerry Brennan wrote: >> >>> Hi Calum, >>> >>> >>> >>> psql (PostgreSQL) 8.1.23 Installed via YUM ... Just checking again... >>> >>> yum install postgresql postgresql-server >>> Loaded plugins: fastestmirror >>> Loading mirror speeds from cached hostfile >>> * base: archive.cs.uu.nl >>> * extras: archive.cs.uu.nl >>> * updates: centos.mirror.transip.nl >>> Setting up Install Process >>> Package postgresql-8.1.23-1.el5_6.1.i386 already installed and latest version >>> Package postgresql-server-8.1.23-1.el5_6.1.i386 already installed and latest version >>> Nothing to do >>> >>> >>> CREATE DATABASE ground_stats ENCODING 'UTF8'; > database command interpreter. >>> >>> createlang plpgsql ground_stats > via unix command console. >>> >>> CentOS release 5.6 (Final) >>> uname -a -> Linux localhost.localdomain 2.6.18-238.9.1.el5 #1 SMP Tue Apr 12 18:10:56 EDT 2011 i686 i686 i386 GNU/Linux >>> >>> Gerry >>> >>> On Tue, Sep 20, 2011 at 9:22 PM, Calum Fitzgerald <cal...@gm...> wrote: >>> Hi Gerry, >>> >>> Looks like an interesting problem....and not one that I have come >>> across before unfortunately. >>> >>> Please can you let me know which version of Postgres you are running against? >>> is the PL/PGSQL language installed? >>> Also which OS are you using? >>> >>> Cheers >>> Cal >>> >>> On 19 September 2011 15:49, Gerry Brennan <ger...@gm...> wrote: >>> > Hi Calum, >>> > >>> > I got past this issue. Thank you very much for your response. >>> > >>> > I am currently on another one around the database. (in my opinion) >>> > >>> > >>> > When I run in to Databaseinterface.bash and press 1 to install database i >>> > get the following. >>> > >>> > I have manipulated some code in the file databaseFactory.py file to >>> > highlight the problems ONLY... >>> > >>> > When i identify them i can revert back to the original. 
>>> > >>> > DATABASE CONTROL MENU >>> > >>> > ----------------------- >>> > >>> > >>> > >>> > Select action to perform: >>> > >>> > >>> > >>> > 1 ) install Database >>> > >>> > 2 ) delete Database >>> > >>> > 3 ) reset Database >>> > >>> > 4 ) upload Files >>> > >>> > 5 ) refresh Materialised Views >>> > >>> > 6 ) drop Data >>> > >>> > >>> > >>> > Experimental >>> > >>> > ------------ >>> > >>> > 7 ) cluster Database >>> > >>> > 8 ) refresh Extended Stats Materialised Views >>> > >>> > >>> > >>> > Information >>> > >>> > ------------ >>> > >>> > 9 ) show loadrun information >>> > >>> > 10) show data file information >>> > >>> > >>> > >>> > Type 'exit' to escape the menu >>> > >>> > >>> > >>> > Enter action: 1 >>> > >>> > >>> > >>> > Create Database Selected >>> > >>> > >>> > >>> > Traceback (innermost last): >>> > >>> > File "~/ground_report-1.5/lib/databaseInterface.py", line 455, in ? >>> > >>> > File "~/ground_report-1.5/lib/databaseInterface.py", line 197, in >>> > startDBMenu >>> > >>> > File "~/ground_report-1.5/lib/databaseFactory.py", line 35, in createDB >>> > >>> > Error: ERROR: syntax error at or near "(" [SQLCode: 0], [SQLState: 42601] >>> > >>> > >>> > >>> > Then when I look at the line of code in lib/databaseFactory.py >>> > >>> > >>> > >>> > >>> > >>> > c.execute('''CREATE TABLE "raw_data_http" >>> > >>> > 17 ( >>> > >>> > 18 "RAW_ID" integer NOT NULL DEFAULT >>> > nextval(('public."raw_http_seq"'::text)::regclass), >>> > >>> > 19 "Load Run" integer NOT NULL, >>> > >>> > 20 "Data File" integer NOT NULL, >>> > >>> > 21 "Thread" integer NOT NULL, >>> > >>> > 22 "Run" integer NOT NULL, >>> > >>> > 23 "Test" bigint NOT NULL, >>> > >>> > 24 "Milliseconds Since Epoch" bigint NOT NULL, >>> > >>> > 25 "Test Time" integer NOT NULL, >>> > >>> > 26 "Errors" integer NOT NULL, >>> > >>> > 27 "HTTP Response Code" integer NOT NULL, >>> > >>> > 28 "HTTP Response Length" integer NOT NULL, >>> > >>> > 29 "HTTP Response Errors" integer NOT NULL, >>> > >>> > 30 "Time To Resolve Host" integer NOT NULL, >>> > >>> > 31 "Time To Establish Connection" integer NOT NULL, >>> > >>> > 32 "Time To First Byte" integer NOT NULL, >>> > >>> > 33 CONSTRAINT raw_http_pkey PRIMARY KEY ("RAW_ID") >>> > >>> > 34 ) >>> > >>> > 35 WITH (OIDS=FALSE); ''') >>> > >>> > >>> > >>> > In the database log file I get >>> > >>> > >>> > >>> > ERROR: syntax error at or near "(" at character 904 >>> > >>> > LOG: unexpected EOF on client connection >>> > >>> > >>> > >>> > >>> > >>> > When I remove the component >>> > >>> > >>> > >>> > WITH (OIDS=FALSE); >>> > >>> > >>> > >>> > It moves on to the next problem statement. >>> > >>> > >>> > >>> > WITH (OIDS=FALSE); >>> > >>> > >>> > >>> > So I remove them all of them across Across all the file. >>> > >>> > >>> > >>> > Finally I get to an PL/PgSQL problem. 
>>> > >>> > >>> > >>> > NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index >>> > "raw_http_pkey" for table "raw_data_http" >>> > >>> > NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index >>> > "upload_http_pkey" for table "http_upload_info" >>> > >>> > NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index >>> > "datafile_http_pkey" for table "http_data_files" >>> > >>> > NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index >>> > "descfile_http_pkey" for table "http_desc_files" >>> > >>> > NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index >>> > "descriptions_http_pkey" for table "descriptions_http" >>> > >>> > ERROR: syntax error at or near "$1" at character 46 >>> > >>> > QUERY: SELECT sum("Started Threads") INTO STRICT $1 FROM >>> > thread_start_buckets_http_mv WHERE "Load Run" = $2 AND "Time Bucket" <= >>> > $3 >>> > >>> > CONTEXT: SQL statement in PL/PgSQL function "concurrent_users_http" near >>> > line 9 >>> > >>> > Traceback (innermost last): >>> > File "/home/devweb/ground_report-1.5/lib/databaseInterface.py", line 455, >>> > in ? >>> > File "/home/devweb/ground_report-1.5/lib/databaseInterface.py", line 197, >>> > in startDBMenu >>> > File "/home/devweb/ground_report-1.5/lib/databaseFactory.py", line 180, in >>> > createDB >>> > Error: ERROR: syntax error at or near "$1" [SQLCode: 0], [SQLState: 42601] >>> > >>> > Your help is greatly appreciated. >>> > >>> > Gerry. >>> > >>> > >>> > On Mon, Sep 19, 2011 at 12:53 PM, Calum Fitzgerald >>> > <cal...@gm...> wrote: >>> >> >>> >> Hi Gerry, >>> >> >>> >> The issue is to do with the fact that the ground report cannot find >>> >> the Jython installation: >>> >> >>> >> Some initial steps to try with respects to troubleshooting: >>> >> >>> >> 1. Have you installed Jython onto your machine? >>> >> 2. Have you updated the configuration files with the location of your >>> >> Jython installation? >>> >> >>> >> >>> >> Cheers >>> >> Cal >>> >> >>> >> >>> >> On 16 September 2011 15:32, Gerry Brennan <ger...@gm...> wrote: >>> >> > >>> >> > Hi Group, >>> >> > >>> >> > I am trying to install ground report I have got as far as trying to run >>> >> > databaseInterface >>> >> > >>> >> > ./databaseInterface.bash >>> >> > Exception in thread "main" java.lang.NoClassDefFoundError: >>> >> > org/python/util/jython >>> >> > Caused by: java.lang.ClassNotFoundException: org.python.util.jython >>> >> > at java.net.URLClassLoader$1.run(URLClassLoader.java:217) >>> >> > at java.security.AccessController.doPrivileged(Native Method) >>> >> > at java.net.URLClassLoader.findClass(URLClassLoader.java:205) >>> >> > at java.lang.ClassLoader.loadClass(ClassLoader.java:319) >>> >> > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294) >>> >> > at java.lang.ClassLoader.loadClass(ClassLoader.java:264) >>> >> > at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:332) >>> >> > Could not find the main class: org.python.util.jython. Program will >>> >> > exit. >>> >> > >>> >> > >>> >> > Can anyone help >>> >> > >>> >> > >>> >> > ------------------------------------------------------------------------------ >>> >> > BlackBerry® DevCon Americas, Oct. 18-20, San Francisco, CA >>> >> > http://p.sf.net/sfu/rim-devcon-copy2 >>> >> > _______________________________________________ >>> >> > Ground-user mailing list >>> >> > Gro...@li... >>> >> > https://lists.sourceforge.net/lists/listinfo/ground-user >>> >> > >>> >> > >>> > >>> > >>> > >>> > >>> > >>> >>> >>> >>> -- >>> Gerry Brennan. 
>>> "Shalom" >>> Lipstown >>> Narraghmore >>> Co. Kildare >>> Ireland >>> >>> Tel : + 353 86 0200999. >>> Tel : + 353 87 9783902. >>> >>> Personal Mail: ger...@gm... >> |
From: Calum F. <cal...@gm...> - 2011-09-24 18:43:38
|
Hi Gerry, Apologies about the delay in response, I had some connectivity issues which I have now resolved. Please can you try upgrading the version of postgres to 8.2 or above. I would recommend trying version 9. The software was never tested on version 8.1 and it would be nice to rule out version incompatibility in the first instance. There are centos rpms available for postgres: http://yum.pgrpms.org/ here is the how-to page which is useful for integrating the repository with yum: http://yum.pgrpms.org/howtoyum.php Cheers Cal On 21 Sep 2011, at 09:15, Gerry Brennan wrote: > Hi Calum, > > > > psql (PostgreSQL) 8.1.23 Installed via YUM ... Just checking again... > > yum install postgresql postgresql-server > Loaded plugins: fastestmirror > Loading mirror speeds from cached hostfile > * base: archive.cs.uu.nl > * extras: archive.cs.uu.nl > * updates: centos.mirror.transip.nl > Setting up Install Process > Package postgresql-8.1.23-1.el5_6.1.i386 already installed and latest version > Package postgresql-server-8.1.23-1.el5_6.1.i386 already installed and latest version > Nothing to do > > > CREATE DATABASE ground_stats ENCODING 'UTF8'; > database command interpreter. > > createlang plpgsql ground_stats > via unix command console. > > CentOS release 5.6 (Final) > uname -a -> Linux localhost.localdomain 2.6.18-238.9.1.el5 #1 SMP Tue Apr 12 18:10:56 EDT 2011 i686 i686 i386 GNU/Linux > > Gerry > > On Tue, Sep 20, 2011 at 9:22 PM, Calum Fitzgerald <cal...@gm...> wrote: > Hi Gerry, > > Looks like an interesting problem....and not one that I have come > across before unfortunately. > > Please can you let me know which version of Postgres you are running against? > is the PL/PGSQL language installed? > Also which OS are you using? > > Cheers > Cal > > On 19 September 2011 15:49, Gerry Brennan <ger...@gm...> wrote: > > Hi Calum, > > > > I got past this issue. Thank you very much for your response. > > > > I am currently on another one around the database. (in my opinion) > > > > > > When I run in to Databaseinterface.bash and press 1 to install database i > > get the following. > > > > I have manipulated some code in the file databaseFactory.py file to > > highlight the problems ONLY... > > > > When i identify them i can revert back to the original. > > > > DATABASE CONTROL MENU > > > > ----------------------- > > > > > > > > Select action to perform: > > > > > > > > 1 ) install Database > > > > 2 ) delete Database > > > > 3 ) reset Database > > > > 4 ) upload Files > > > > 5 ) refresh Materialised Views > > > > 6 ) drop Data > > > > > > > > Experimental > > > > ------------ > > > > 7 ) cluster Database > > > > 8 ) refresh Extended Stats Materialised Views > > > > > > > > Information > > > > ------------ > > > > 9 ) show loadrun information > > > > 10) show data file information > > > > > > > > Type 'exit' to escape the menu > > > > > > > > Enter action: 1 > > > > > > > > Create Database Selected > > > > > > > > Traceback (innermost last): > > > > File "~/ground_report-1.5/lib/databaseInterface.py", line 455, in ? 
> > > > File "~/ground_report-1.5/lib/databaseInterface.py", line 197, in > > startDBMenu > > > > File "~/ground_report-1.5/lib/databaseFactory.py", line 35, in createDB > > > > Error: ERROR: syntax error at or near "(" [SQLCode: 0], [SQLState: 42601] > > > > > > > > Then when I look at the line of code in lib/databaseFactory.py > > > > > > > > > > > > c.execute('''CREATE TABLE "raw_data_http" > > > > 17 ( > > > > 18 "RAW_ID" integer NOT NULL DEFAULT > > nextval(('public."raw_http_seq"'::text)::regclass), > > > > 19 "Load Run" integer NOT NULL, > > > > 20 "Data File" integer NOT NULL, > > > > 21 "Thread" integer NOT NULL, > > > > 22 "Run" integer NOT NULL, > > > > 23 "Test" bigint NOT NULL, > > > > 24 "Milliseconds Since Epoch" bigint NOT NULL, > > > > 25 "Test Time" integer NOT NULL, > > > > 26 "Errors" integer NOT NULL, > > > > 27 "HTTP Response Code" integer NOT NULL, > > > > 28 "HTTP Response Length" integer NOT NULL, > > > > 29 "HTTP Response Errors" integer NOT NULL, > > > > 30 "Time To Resolve Host" integer NOT NULL, > > > > 31 "Time To Establish Connection" integer NOT NULL, > > > > 32 "Time To First Byte" integer NOT NULL, > > > > 33 CONSTRAINT raw_http_pkey PRIMARY KEY ("RAW_ID") > > > > 34 ) > > > > 35 WITH (OIDS=FALSE); ''') > > > > > > > > In the database log file I get > > > > > > > > ERROR: syntax error at or near "(" at character 904 > > > > LOG: unexpected EOF on client connection > > > > > > > > > > > > When I remove the component > > > > > > > > WITH (OIDS=FALSE); > > > > > > > > It moves on to the next problem statement. > > > > > > > > WITH (OIDS=FALSE); > > > > > > > > So I remove them all of them across Across all the file. > > > > > > > > Finally I get to an PL/PgSQL problem. > > > > > > > > NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index > > "raw_http_pkey" for table "raw_data_http" > > > > NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index > > "upload_http_pkey" for table "http_upload_info" > > > > NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index > > "datafile_http_pkey" for table "http_data_files" > > > > NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index > > "descfile_http_pkey" for table "http_desc_files" > > > > NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index > > "descriptions_http_pkey" for table "descriptions_http" > > > > ERROR: syntax error at or near "$1" at character 46 > > > > QUERY: SELECT sum("Started Threads") INTO STRICT $1 FROM > > thread_start_buckets_http_mv WHERE "Load Run" = $2 AND "Time Bucket" <= > > $3 > > > > CONTEXT: SQL statement in PL/PgSQL function "concurrent_users_http" near > > line 9 > > > > Traceback (innermost last): > > File "/home/devweb/ground_report-1.5/lib/databaseInterface.py", line 455, > > in ? > > File "/home/devweb/ground_report-1.5/lib/databaseInterface.py", line 197, > > in startDBMenu > > File "/home/devweb/ground_report-1.5/lib/databaseFactory.py", line 180, in > > createDB > > Error: ERROR: syntax error at or near "$1" [SQLCode: 0], [SQLState: 42601] > > > > Your help is greatly appreciated. > > > > Gerry. > > > > > > On Mon, Sep 19, 2011 at 12:53 PM, Calum Fitzgerald > > <cal...@gm...> wrote: > >> > >> Hi Gerry, > >> > >> The issue is to do with the fact that the ground report cannot find > >> the Jython installation: > >> > >> Some initial steps to try with respects to troubleshooting: > >> > >> 1. Have you installed Jython onto your machine? > >> 2. Have you updated the configuration files with the location of your > >> Jython installation? 
> >> > >> > >> Cheers > >> Cal > >> > >> > >> On 16 September 2011 15:32, Gerry Brennan <ger...@gm...> wrote: > >> > > >> > Hi Group, > >> > > >> > I am trying to install ground report I have got as far as trying to run > >> > databaseInterface > >> > > >> > ./databaseInterface.bash > >> > Exception in thread "main" java.lang.NoClassDefFoundError: > >> > org/python/util/jython > >> > Caused by: java.lang.ClassNotFoundException: org.python.util.jython > >> > at java.net.URLClassLoader$1.run(URLClassLoader.java:217) > >> > at java.security.AccessController.doPrivileged(Native Method) > >> > at java.net.URLClassLoader.findClass(URLClassLoader.java:205) > >> > at java.lang.ClassLoader.loadClass(ClassLoader.java:319) > >> > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294) > >> > at java.lang.ClassLoader.loadClass(ClassLoader.java:264) > >> > at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:332) > >> > Could not find the main class: org.python.util.jython. Program will > >> > exit. > >> > > >> > > >> > Can anyone help > >> > > >> > > >> > ------------------------------------------------------------------------------ > >> > BlackBerry® DevCon Americas, Oct. 18-20, San Francisco, CA > >> > http://p.sf.net/sfu/rim-devcon-copy2 > >> > _______________________________________________ > >> > Ground-user mailing list > >> > Gro...@li... > >> > https://lists.sourceforge.net/lists/listinfo/ground-user > >> > > >> > > > > > > > > > > > > > > > -- > Gerry Brennan. > "Shalom" > Lipstown > Narraghmore > Co. Kildare > Ireland > > Tel : + 353 86 0200999. > Tel : + 353 87 9783902. > > Personal Mail: ger...@gm... |
From: Gerry B. <ger...@gm...> - 2011-09-21 08:15:25
|
Hi Calum, psql (PostgreSQL) 8.1.23 Installed via YUM ... Just checking again... yum install postgresql postgresql-server Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * base: archive.cs.uu.nl * extras: archive.cs.uu.nl * updates: centos.mirror.transip.nl Setting up Install Process Package postgresql-8.1.23-1.el5_6.1.i386 already installed and latest version Package postgresql-server-8.1.23-1.el5_6.1.i386 already installed and latest version Nothing to do CREATE DATABASE ground_stats ENCODING 'UTF8'; > database command interpreter. createlang plpgsql ground_stats > via unix command console. CentOS release 5.6 (Final) uname -a -> Linux localhost.localdomain 2.6.18-238.9.1.el5 #1 SMP Tue Apr 12 18:10:56 EDT 2011 i686 i686 i386 GNU/Linux Gerry On Tue, Sep 20, 2011 at 9:22 PM, Calum Fitzgerald < cal...@gm...> wrote: > Hi Gerry, > > Looks like an interesting problem....and not one that I have come > across before unfortunately. > > Please can you let me know which version of Postgres you are running > against? > is the PL/PGSQL language installed? > Also which OS are you using? > > Cheers > Cal > > On 19 September 2011 15:49, Gerry Brennan <ger...@gm...> wrote: > > Hi Calum, > > > > I got past this issue. Thank you very much for your response. > > > > I am currently on another one around the database. (in my opinion) > > > > > > When I run in to Databaseinterface.bash and press 1 to install database i > > get the following. > > > > I have manipulated some code in the file databaseFactory.py file to > > highlight the problems ONLY... > > > > When i identify them i can revert back to the original. > > > > DATABASE CONTROL MENU > > > > ----------------------- > > > > > > > > Select action to perform: > > > > > > > > 1 ) install Database > > > > 2 ) delete Database > > > > 3 ) reset Database > > > > 4 ) upload Files > > > > 5 ) refresh Materialised Views > > > > 6 ) drop Data > > > > > > > > Experimental > > > > ------------ > > > > 7 ) cluster Database > > > > 8 ) refresh Extended Stats Materialised Views > > > > > > > > Information > > > > ------------ > > > > 9 ) show loadrun information > > > > 10) show data file information > > > > > > > > Type 'exit' to escape the menu > > > > > > > > Enter action: 1 > > > > > > > > Create Database Selected > > > > > > > > Traceback (innermost last): > > > > File "~/ground_report-1.5/lib/databaseInterface.py", line 455, in ? 
> > > > File "~/ground_report-1.5/lib/databaseInterface.py", line 197, in > > startDBMenu > > > > File "~/ground_report-1.5/lib/databaseFactory.py", line 35, in createDB > > > > Error: ERROR: syntax error at or near "(" [SQLCode: 0], [SQLState: 42601] > > > > > > > > Then when I look at the line of code in lib/databaseFactory.py > > > > > > > > > > > > c.execute('''CREATE TABLE "raw_data_http" > > > > 17 ( > > > > 18 "RAW_ID" integer NOT NULL DEFAULT > > nextval(('public."raw_http_seq"'::text)::regclass), > > > > 19 "Load Run" integer NOT NULL, > > > > 20 "Data File" integer NOT NULL, > > > > 21 "Thread" integer NOT NULL, > > > > 22 "Run" integer NOT NULL, > > > > 23 "Test" bigint NOT NULL, > > > > 24 "Milliseconds Since Epoch" bigint NOT NULL, > > > > 25 "Test Time" integer NOT NULL, > > > > 26 "Errors" integer NOT NULL, > > > > 27 "HTTP Response Code" integer NOT NULL, > > > > 28 "HTTP Response Length" integer NOT NULL, > > > > 29 "HTTP Response Errors" integer NOT NULL, > > > > 30 "Time To Resolve Host" integer NOT NULL, > > > > 31 "Time To Establish Connection" integer NOT NULL, > > > > 32 "Time To First Byte" integer NOT NULL, > > > > 33 CONSTRAINT raw_http_pkey PRIMARY KEY ("RAW_ID") > > > > 34 ) > > > > 35 WITH (OIDS=FALSE); ''') > > > > > > > > In the database log file I get > > > > > > > > ERROR: syntax error at or near "(" at character 904 > > > > LOG: unexpected EOF on client connection > > > > > > > > > > > > When I remove the component > > > > > > > > WITH (OIDS=FALSE); > > > > > > > > It moves on to the next problem statement. > > > > > > > > WITH (OIDS=FALSE); > > > > > > > > So I remove them all of them across Across all the file. > > > > > > > > Finally I get to an PL/PgSQL problem. > > > > > > > > NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index > > "raw_http_pkey" for table "raw_data_http" > > > > NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index > > "upload_http_pkey" for table "http_upload_info" > > > > NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index > > "datafile_http_pkey" for table "http_data_files" > > > > NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index > > "descfile_http_pkey" for table "http_desc_files" > > > > NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index > > "descriptions_http_pkey" for table "descriptions_http" > > > > ERROR: syntax error at or near "$1" at character 46 > > > > QUERY: SELECT sum("Started Threads") INTO STRICT $1 FROM > > thread_start_buckets_http_mv WHERE "Load Run" = $2 AND "Time Bucket" <= > > $3 > > > > CONTEXT: SQL statement in PL/PgSQL function "concurrent_users_http" near > > line 9 > > > > Traceback (innermost last): > > File "/home/devweb/ground_report-1.5/lib/databaseInterface.py", line > 455, > > in ? > > File "/home/devweb/ground_report-1.5/lib/databaseInterface.py", line > 197, > > in startDBMenu > > File "/home/devweb/ground_report-1.5/lib/databaseFactory.py", line 180, > in > > createDB > > Error: ERROR: syntax error at or near "$1" [SQLCode: 0], [SQLState: > 42601] > > > > Your help is greatly appreciated. > > > > Gerry. > > > > > > On Mon, Sep 19, 2011 at 12:53 PM, Calum Fitzgerald > > <cal...@gm...> wrote: > >> > >> Hi Gerry, > >> > >> The issue is to do with the fact that the ground report cannot find > >> the Jython installation: > >> > >> Some initial steps to try with respects to troubleshooting: > >> > >> 1. Have you installed Jython onto your machine? > >> 2. Have you updated the configuration files with the location of your > >> Jython installation? 
> >> > >> > >> Cheers > >> Cal > >> > >> > >> On 16 September 2011 15:32, Gerry Brennan <ger...@gm...> > wrote: > >> > > >> > Hi Group, > >> > > >> > I am trying to install ground report I have got as far as trying to > run > >> > databaseInterface > >> > > >> > ./databaseInterface.bash > >> > Exception in thread "main" java.lang.NoClassDefFoundError: > >> > org/python/util/jython > >> > Caused by: java.lang.ClassNotFoundException: org.python.util.jython > >> > at java.net.URLClassLoader$1.run(URLClassLoader.java:217) > >> > at java.security.AccessController.doPrivileged(Native Method) > >> > at java.net.URLClassLoader.findClass(URLClassLoader.java:205) > >> > at java.lang.ClassLoader.loadClass(ClassLoader.java:319) > >> > at > sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294) > >> > at java.lang.ClassLoader.loadClass(ClassLoader.java:264) > >> > at > java.lang.ClassLoader.loadClassInternal(ClassLoader.java:332) > >> > Could not find the main class: org.python.util.jython. Program will > >> > exit. > >> > > >> > > >> > Can anyone help > >> > > >> > > >> > > ------------------------------------------------------------------------------ > >> > BlackBerry® DevCon Americas, Oct. 18-20, San Francisco, CA > >> > http://p.sf.net/sfu/rim-devcon-copy2 > >> > _______________________________________________ > >> > Ground-user mailing list > >> > Gro...@li... > >> > https://lists.sourceforge.net/lists/listinfo/ground-user > >> > > >> > > > > > > > > > > > > -- Gerry Brennan. "Shalom" Lipstown Narraghmore Co. Kildare Ireland Tel : + 353 86 0200999. Tel : + 353 87 9783902. Personal Mail: ger...@gm... |
From: Calum F. <cal...@gm...> - 2011-09-20 20:22:57
|
Hi Gerry, Looks like an interesting problem....and not one that I have come across before unfortunately. Please can you let me know which version of Postgres you are running against? is the PL/PGSQL language installed? Also which OS are you using? Cheers Cal On 19 September 2011 15:49, Gerry Brennan <ger...@gm...> wrote: > Hi Calum, > > I got past this issue. Thank you very much for your response. > > I am currently on another one around the database. (in my opinion) > > > When I run in to Databaseinterface.bash and press 1 to install database i > get the following. > > I have manipulated some code in the file databaseFactory.py file to > highlight the problems ONLY... > > When i identify them i can revert back to the original. > > DATABASE CONTROL MENU > > ----------------------- > > > > Select action to perform: > > > > 1 ) install Database > > 2 ) delete Database > > 3 ) reset Database > > 4 ) upload Files > > 5 ) refresh Materialised Views > > 6 ) drop Data > > > > Experimental > > ------------ > > 7 ) cluster Database > > 8 ) refresh Extended Stats Materialised Views > > > > Information > > ------------ > > 9 ) show loadrun information > > 10) show data file information > > > > Type 'exit' to escape the menu > > > > Enter action: 1 > > > > Create Database Selected > > > > Traceback (innermost last): > > File "~/ground_report-1.5/lib/databaseInterface.py", line 455, in ? > > File "~/ground_report-1.5/lib/databaseInterface.py", line 197, in > startDBMenu > > File "~/ground_report-1.5/lib/databaseFactory.py", line 35, in createDB > > Error: ERROR: syntax error at or near "(" [SQLCode: 0], [SQLState: 42601] > > > > Then when I look at the line of code in lib/databaseFactory.py > > > > > > c.execute('''CREATE TABLE "raw_data_http" > > 17 ( > > 18 "RAW_ID" integer NOT NULL DEFAULT > nextval(('public."raw_http_seq"'::text)::regclass), > > 19 "Load Run" integer NOT NULL, > > 20 "Data File" integer NOT NULL, > > 21 "Thread" integer NOT NULL, > > 22 "Run" integer NOT NULL, > > 23 "Test" bigint NOT NULL, > > 24 "Milliseconds Since Epoch" bigint NOT NULL, > > 25 "Test Time" integer NOT NULL, > > 26 "Errors" integer NOT NULL, > > 27 "HTTP Response Code" integer NOT NULL, > > 28 "HTTP Response Length" integer NOT NULL, > > 29 "HTTP Response Errors" integer NOT NULL, > > 30 "Time To Resolve Host" integer NOT NULL, > > 31 "Time To Establish Connection" integer NOT NULL, > > 32 "Time To First Byte" integer NOT NULL, > > 33 CONSTRAINT raw_http_pkey PRIMARY KEY ("RAW_ID") > > 34 ) > > 35 WITH (OIDS=FALSE); ''') > > > > In the database log file I get > > > > ERROR: syntax error at or near "(" at character 904 > > LOG: unexpected EOF on client connection > > > > > > When I remove the component > > > > WITH (OIDS=FALSE); > > > > It moves on to the next problem statement. > > > > WITH (OIDS=FALSE); > > > > So I remove them all of them across Across all the file. > > > > Finally I get to an PL/PgSQL problem. 
> > > > NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index > "raw_http_pkey" for table "raw_data_http" > > NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index > "upload_http_pkey" for table "http_upload_info" > > NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index > "datafile_http_pkey" for table "http_data_files" > > NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index > "descfile_http_pkey" for table "http_desc_files" > > NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index > "descriptions_http_pkey" for table "descriptions_http" > > ERROR: syntax error at or near "$1" at character 46 > > QUERY: SELECT sum("Started Threads") INTO STRICT $1 FROM > thread_start_buckets_http_mv WHERE "Load Run" = $2 AND "Time Bucket" <= > $3 > > CONTEXT: SQL statement in PL/PgSQL function "concurrent_users_http" near > line 9 > > Traceback (innermost last): > File "/home/devweb/ground_report-1.5/lib/databaseInterface.py", line 455, > in ? > File "/home/devweb/ground_report-1.5/lib/databaseInterface.py", line 197, > in startDBMenu > File "/home/devweb/ground_report-1.5/lib/databaseFactory.py", line 180, in > createDB > Error: ERROR: syntax error at or near "$1" [SQLCode: 0], [SQLState: 42601] > > Your help is greatly appreciated. > > Gerry. > > > On Mon, Sep 19, 2011 at 12:53 PM, Calum Fitzgerald > <cal...@gm...> wrote: >> >> Hi Gerry, >> >> The issue is to do with the fact that the ground report cannot find >> the Jython installation: >> >> Some initial steps to try with respects to troubleshooting: >> >> 1. Have you installed Jython onto your machine? >> 2. Have you updated the configuration files with the location of your >> Jython installation? >> >> >> Cheers >> Cal >> >> >> On 16 September 2011 15:32, Gerry Brennan <ger...@gm...> wrote: >> > >> > Hi Group, >> > >> > I am trying to install ground report I have got as far as trying to run >> > databaseInterface >> > >> > ./databaseInterface.bash >> > Exception in thread "main" java.lang.NoClassDefFoundError: >> > org/python/util/jython >> > Caused by: java.lang.ClassNotFoundException: org.python.util.jython >> > at java.net.URLClassLoader$1.run(URLClassLoader.java:217) >> > at java.security.AccessController.doPrivileged(Native Method) >> > at java.net.URLClassLoader.findClass(URLClassLoader.java:205) >> > at java.lang.ClassLoader.loadClass(ClassLoader.java:319) >> > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294) >> > at java.lang.ClassLoader.loadClass(ClassLoader.java:264) >> > at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:332) >> > Could not find the main class: org.python.util.jython. Program will >> > exit. >> > >> > >> > Can anyone help >> > >> > >> > ------------------------------------------------------------------------------ >> > BlackBerry® DevCon Americas, Oct. 18-20, San Francisco, CA >> > http://p.sf.net/sfu/rim-devcon-copy2 >> > _______________________________________________ >> > Ground-user mailing list >> > Gro...@li... >> > https://lists.sourceforge.net/lists/listinfo/ground-user >> > >> > > > > > > |
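Calum's two database questions above can be answered mechanically before an install attempt. A small sketch against a DB-API style cursor that reads the server version and checks whether PL/pgSQL is registered in the pg_language catalogue:

def preflight(conn):
    c = conn.cursor()
    c.execute("SHOW server_version")
    version = c.fetchone()[0]           # e.g. '8.1.23'
    c.execute("SELECT count(*) FROM pg_language WHERE lanname = 'plpgsql'")
    has_plpgsql = c.fetchone()[0] > 0   # run 'createlang plpgsql <db>' if false
    return version, has_plpgsql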
From: Gerry B. <ger...@gm...> - 2011-09-19 14:49:55
|
Hi Calum,

I got past this issue. Thank you very much for your response.

I am currently on another one, around the database (in my opinion).

When I run databaseInterface.bash and press 1 to install the database I get the following.

I have manipulated some code in the databaseFactory.py file to highlight the problems ONLY... when I identify them I can revert back to the original.

DATABASE CONTROL MENU
-----------------------

Select action to perform:

1 ) install Database
2 ) delete Database
3 ) reset Database
4 ) upload Files
5 ) refresh Materialised Views
6 ) drop Data

Experimental
------------
7 ) cluster Database
8 ) refresh Extended Stats Materialised Views

Information
------------
9 ) show loadrun information
10) show data file information

Type 'exit' to escape the menu

Enter action: 1

Create Database Selected

Traceback (innermost last):
File "~/ground_report-1.5/lib/databaseInterface.py", line 455, in ?
File "~/ground_report-1.5/lib/databaseInterface.py", line 197, in startDBMenu
File "~/ground_report-1.5/lib/databaseFactory.py", line 35, in createDB
Error: ERROR: syntax error at or near "(" [SQLCode: 0], [SQLState: 42601]

Then when I look at the line of code in lib/databaseFactory.py:

c.execute('''CREATE TABLE "raw_data_http"
17 (
18 "RAW_ID" integer NOT NULL DEFAULT nextval(('public."raw_http_seq"'::text)::regclass),
19 "Load Run" integer NOT NULL,
20 "Data File" integer NOT NULL,
21 "Thread" integer NOT NULL,
22 "Run" integer NOT NULL,
23 "Test" bigint NOT NULL,
24 "Milliseconds Since Epoch" bigint NOT NULL,
25 "Test Time" integer NOT NULL,
26 "Errors" integer NOT NULL,
27 "HTTP Response Code" integer NOT NULL,
28 "HTTP Response Length" integer NOT NULL,
29 "HTTP Response Errors" integer NOT NULL,
30 "Time To Resolve Host" integer NOT NULL,
31 "Time To Establish Connection" integer NOT NULL,
32 "Time To First Byte" integer NOT NULL,
33 CONSTRAINT raw_http_pkey PRIMARY KEY ("RAW_ID")
34 )
35 WITH (OIDS=FALSE); ''')

In the database log file I get:

ERROR: syntax error at or near "(" at character 904
LOG: unexpected EOF on client connection

When I remove the component

WITH (OIDS=FALSE);

it moves on to the next problem statement, so I removed all of them across the whole file.

Finally I get to a PL/PgSQL problem:

NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "raw_http_pkey" for table "raw_data_http"
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "upload_http_pkey" for table "http_upload_info"
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "datafile_http_pkey" for table "http_data_files"
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "descfile_http_pkey" for table "http_desc_files"
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "descriptions_http_pkey" for table "descriptions_http"
ERROR: syntax error at or near "$1" at character 46
QUERY: SELECT sum("Started Threads") INTO STRICT $1 FROM thread_start_buckets_http_mv WHERE "Load Run" = $2 AND "Time Bucket" <= $3
CONTEXT: SQL statement in PL/PgSQL function "concurrent_users_http" near line 9

Traceback (innermost last):
File "/home/devweb/ground_report-1.5/lib/databaseInterface.py", line 455, in ? 
File "/home/devweb/ground_report-1.5/lib/databaseInterface.py", line 197, in startDBMenu File "/home/devweb/ground_report-1.5/lib/databaseFactory.py", line 180, in createDB Error: ERROR: syntax error at or near "$1" [SQLCode: 0], [SQLState: 42601] Your help is greatly appreciated. Gerry. On Mon, Sep 19, 2011 at 12:53 PM, Calum Fitzgerald < cal...@gm...> wrote: > Hi Gerry, > > The issue is to do with the fact that the ground report cannot find > the Jython installation: > > Some initial steps to try with respects to troubleshooting: > > 1. Have you installed Jython onto your machine? > 2. Have you updated the configuration files with the location of your > Jython installation? > > > Cheers > Cal > > > On 16 September 2011 15:32, Gerry Brennan <ger...@gm...> wrote: > > > > Hi Group, > > > > I am trying to install ground report I have got as far as trying to run > > databaseInterface > > > > ./databaseInterface.bash > > Exception in thread "main" java.lang.NoClassDefFoundError: > > org/python/util/jython > > Caused by: java.lang.ClassNotFoundException: org.python.util.jython > > at java.net.URLClassLoader$1.run(URLClassLoader.java:217) > > at java.security.AccessController.doPrivileged(Native Method) > > at java.net.URLClassLoader.findClass(URLClassLoader.java:205) > > at java.lang.ClassLoader.loadClass(ClassLoader.java:319) > > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294) > > at java.lang.ClassLoader.loadClass(ClassLoader.java:264) > > at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:332) > > Could not find the main class: org.python.util.jython. Program will exit. > > > > > > Can anyone help > > > > > ------------------------------------------------------------------------------ > > BlackBerry® DevCon Americas, Oct. 18-20, San Francisco, CA > > http://p.sf.net/sfu/rim-devcon-copy2 > > _______________________________________________ > > Ground-user mailing list > > Gro...@li... > > https://lists.sourceforge.net/lists/listinfo/ground-user > > > > > |
From: Calum F. <cal...@gm...> - 2011-09-19 11:53:10
|
Hi Gerry, The issue is to do with the fact that the ground report cannot find the Jython installation. Some initial steps to try with respect to troubleshooting: 1. Have you installed Jython onto your machine? 2. Have you updated the configuration files with the location of your Jython installation? Cheers Cal On 16 September 2011 15:32, Gerry Brennan <ger...@gm...> wrote: > > Hi Group, > > I am trying to install ground report I have got as far as trying to run > databaseInterface > > ./databaseInterface.bash > Exception in thread "main" java.lang.NoClassDefFoundError: > org/python/util/jython > Caused by: java.lang.ClassNotFoundException: org.python.util.jython > at java.net.URLClassLoader$1.run(URLClassLoader.java:217) > at java.security.AccessController.doPrivileged(Native Method) > at java.net.URLClassLoader.findClass(URLClassLoader.java:205) > at java.lang.ClassLoader.loadClass(ClassLoader.java:319) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294) > at java.lang.ClassLoader.loadClass(ClassLoader.java:264) > at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:332) > Could not find the main class: org.python.util.jython. Program will exit. > > > Can anyone help > > ------------------------------------------------------------------------------ > BlackBerry® DevCon Americas, Oct. 18-20, San Francisco, CA > http://p.sf.net/sfu/rim-devcon-copy2 > _______________________________________________ > Ground-user mailing list > Gro...@li... > https://lists.sourceforge.net/lists/listinfo/ground-user > > |
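In practice both checks come down to etc/environment.properties, which the launch scripts read to build their CLASSPATH (the generateReport.bat listing further down this page shows the mechanism). The paths below are illustrative, not defaults:

JYTHON_HOME=/opt/jython2.2.1
JAVA_HOME=/usr/lib/jvm/java-6-sun
GROUNDREPORT_HOME=/home/auto_test/ground_report-1.5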
From: Gerry B. <ger...@gm...> - 2011-09-16 14:32:16
|
Hi Group,

I am trying to install ground report. I have got as far as trying to run databaseInterface:

./databaseInterface.bash
Exception in thread "main" java.lang.NoClassDefFoundError: org/python/util/jython
Caused by: java.lang.ClassNotFoundException: org.python.util.jython
at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
at java.lang.ClassLoader.loadClass(ClassLoader.java:319)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
at java.lang.ClassLoader.loadClass(ClassLoader.java:264)
at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:332)
Could not find the main class: org.python.util.jython. Program will exit.

Can anyone help? |
From: Calum F. <cal...@gm...> - 2011-05-31 08:03:57
|
Hi Ouray, Did you mean for this email to come to the ground-user mailing list? If you haven't posted this in the grinder-user mailing list then you should as you are more likely to get an answer there. I missed the fact that this came to the ground report mailing list and not the grinder mailing list (emails from both go into the same account) for some reason so apologies for the delay in response. Cheers Cal On 24 May 2011 13:18, Ouray Viney <ou...@vi...> wrote: > Hi All: > I am looking for some advice/suggestions for the following test/business > requirement. > As a test engineer, I need to be able to run Grinder in such a way that does > not require user intervention, in order to allow for unmanned testing. > Example: > I have 3 test runs that I need to execute. I do not want to manually > monitor the tests from the Grinder Console. > Test methodology: > > Clean application logs > Start-up application > Test to ensure application is ready for load, avoid manual timing in scripts > etc. > Run a script (bash) that makes a call to grinder to start-up (headless - > without the Grinder Console). > Once the grinder run completes, harvest the required test artefacts and > start back at bullet 1 to complete n runs. > > Psydo code for wrapper script, in this example a shell script will do: > Description: This script will run 3 tests, and follow my desired test > methodology for each test run. > TEST_NAME=Checkout > TEST_RUNS=1 2 3 > > for i in ${TEST_RUNS}; > do > # call application to startup > # call to validate that the application is ready for testing > # call to start resource monitoring on SUTs > # call grinder agent to start > # once grinder agent complets call > # call to stop application > # call to fetch logs > # call to delete old logs > # call to fetch resource logs on SUTs > done > Other options: > - pass in dynamic grinder properties upon calling grinder agent to allow for > changing the 1) script 2) duration 3) thread for example. > Call to script (see Script: section below): > ./script.sh grinder.useConsole=false grinder.threads = ${THREAD_COUNT} > grinder.duration = ${TEST_DURATION} > Script: > <snippet> > for args in $@; do JAVA_ARGS = "${JAVA_ARGS} -D${args}" > $JAVA_HOME/bin/java -cp $CLASSPATH $JAVA_ARGS net.grinder.Grinder > $GRINDER_PROPERTIES > </snippet> > My guess: > - So far it seems that you can start an agent and tell it to start without > the Grinder Console, which intrinsically tells the agent to run a particular > script (as defined in the grinder.properties). I have tested this with > Grinder Stone, and it works great. > My Question: > - Is there any know reason why this is not a good idea? Has anyone seen/see > issues with this approach? > - Are there any better suggestions? > Any thoughts/comments are welcome. > Kind Rgds, > -- > Ouray Viney > http://www.viney.ca > > ------------------------------------------------------------------------------ > vRanger cuts backup time in half-while increasing security. > With the market-leading solution for virtual backup and recovery, > you get blazing-fast, flexible, and affordable data protection. > Download your free trial now. > http://p.sf.net/sfu/quest-d2dcopy1 > _______________________________________________ > Ground-user mailing list > Gro...@li... > https://lists.sourceforge.net/lists/listinfo/ground-user > > |
From: Ouray V. <ou...@vi...> - 2011-05-24 12:18:49
|
Hi All:

I am looking for some advice/suggestions for the following test/business requirement. As a test engineer, I need to be able to run Grinder in such a way that does not require user intervention, in order to allow for unmanned testing.

Example: I have 3 test runs that I need to execute. I do not want to manually monitor the tests from the Grinder Console.

Test methodology:
1. Clean application logs
2. Start-up application
3. Test to ensure application is ready for load, avoid manual timing in scripts etc.
4. Run a script (bash) that makes a call to grinder to start-up (headless - without the Grinder Console).
5. Once the grinder run completes, harvest the required test artefacts and start back at bullet 1 to complete n runs.

Pseudo code for wrapper script, in this example a shell script will do:
Description: This script will run 3 tests, and follow my desired test methodology for each test run.

TEST_NAME=Checkout
TEST_RUNS="1 2 3"

for i in ${TEST_RUNS};
do
# call application to startup
# call to validate that the application is ready for testing
# call to start resource monitoring on SUTs
# call grinder agent to start
# once the grinder agent completes:
# call to stop application
# call to fetch logs
# call to delete old logs
# call to fetch resource logs on SUTs
done

Other options:
- pass in dynamic grinder properties upon calling the grinder agent to allow for changing the 1) script 2) duration 3) threads for example.

Call to script (see Script: section below):
./script.sh grinder.useConsole=false grinder.threads=${THREAD_COUNT} grinder.duration=${TEST_DURATION}

Script:
<snippet>
for args in "$@"; do JAVA_ARGS="${JAVA_ARGS} -D${args}"; done
$JAVA_HOME/bin/java -cp $CLASSPATH $JAVA_ARGS net.grinder.Grinder $GRINDER_PROPERTIES
</snippet>

My guess:
- So far it seems that you can start an agent and tell it to start without the Grinder Console, which intrinsically tells the agent to run a particular script (as defined in the grinder.properties). I have tested this with Grinder Stone, and it works great.

My Question:
- Is there any known reason why this is not a good idea? Has anyone seen issues with this approach?
- Are there any better suggestions?

Any thoughts/comments are welcome.

Kind Rgds,
--
Ouray Viney
http://www.viney.ca |
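For comparison, here is the same unattended loop sketched in Python rather than bash. The java invocation mirrors Ouray's <snippet>, and the -Dgrinder.* overrides are standard Grinder property handling; the classpath, properties file path and run settings are placeholders.

import subprocess

GRINDER_CP = "/opt/grinder-3/lib/grinder.jar"        # placeholder path
PROPS = "/opt/tests/checkout/grinder.properties"     # placeholder path

def run_grinder(threads, duration_ms):
    cmd = ["java", "-cp", GRINDER_CP,
           "-Dgrinder.useConsole=false",             # headless: no console needed
           "-Dgrinder.threads=%d" % threads,
           "-Dgrinder.duration=%d" % duration_ms,
           "net.grinder.Grinder", PROPS]
    subprocess.check_call(cmd)                       # blocks until the agent exits

for run in range(1, 4):                              # three unattended runs
    # start the application, verify readiness, start resource monitors here
    run_grinder(10, 600000)                          # 10 threads, 10 minutes
    # stop the application and harvest logs/artefacts here before looping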
From: Calum F. <cal...@gm...> - 2011-05-11 05:29:13
|
Hiya, The error.txt file seems to be missing. However, having a look at your descr_http.txt file I can see some errors. Ground report expects every test number that ends with '00' to be categorised as a page, i.e. 'p', in the descriptions file. Please can you change your descr_http.txt file and try again. Cheers Cal On 10 May 2011 16:41, Ouray Viney <ou...@vi...> wrote: > Steps to reproduce my issue: > ======================== > 1) Install latest version 1.5 > 2) Configure as required to get it working in your environment. > 3) Create a descr_http.txt > 4) run the database interface script, upload the description file > 5) adjust the properties file, to point to your http data directory for said > test > 6) run the database interface script and upload your http data files > 7) adjust the groundReport.properties as required (see attached example) > 8) See attached error.txt to see full stack trace from the error. > Result: > ========== > Report fails to generate. > Attachements: > =========== > 1) descr_http.txt > 2) groundReport.properties > > -- > Ouray Viney > http://www.viney.ca > > ------------------------------------------------------------------------------ > Achieve unprecedented app performance and reliability > What every C/C++ and Fortran developer should know. > Learn how Intel has extended the reach of its next-generation tools > to help boost performance applications - inlcuding clusters. > http://p.sf.net/sfu/intel-dev2devmay > _______________________________________________ > Ground-user mailing list > Gro...@li... > https://lists.sourceforge.net/lists/listinfo/ground-user > > |
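The rule Calum describes is easy to lint for before uploading. A small sketch that flags any test number ending in '00' that is not categorised 'p'; only the file name is an assumption.

import csv

def check_descriptions(path):
    reader = csv.reader(open(path))
    reader.next()                       # skip "Test","Description","SorDorP"
    problems = []
    for test, description, category in reader:
        # Test numbers ending in '00' are pages and must be marked 'p'.
        if int(test) % 100 == 0 and category != "p":
            problems.append((test, description, category))
    return problems

for test, desc, cat in check_descriptions("descr_http.txt"):
    print "test %s (%s) should be 'p' but is '%s'" % (test, desc, cat)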
From: Ouray V. <ou...@vi...> - 2011-05-10 15:41:19
|
"Test","Description","SorDorP" 100,"Navigate to CPO Main URL",d 101,"GET personal.jsf",d 102,"GET default.jsf",d 200,"Page 2",d 201,"GET postage.jsf",d 202,"GET default.jsf",d 300,"Navigate to Find a Postal Code URL",d 301,"GET domestic.jsf",d 302,"GET findByCity",d 303,"GET login.jsp",d 400,"Page 4",d 401,"GET p-101351P.jsf",d 500,"Navigate to Find a Postal Code Advanced Form",d 501,"GET p-101351P.jsf",d 502,"GET p-101351P.jsf",d 600,"Page 6",d 601,"GET p-101351P.jsf",d 602,"GET p-101351P.jsf",d 700,"Find a postal code - Advanced Form",d 701,"GET mailing-amp-shipping-supplies.jsf",d 702,"GET findByAdvanced",d 800,"Page 8",d 801,"GET mailing-boxes.jsf",d 900,"Page 9",d 901,"GET bubble-envelopes.jsf",d 902,"GET findByTrackNumber",d 1000,"Add item to cart",d 1001,"GET p-242668.jsf",d 1002,"GET p-111201P.jsf",d 1100,"Page 11",d 1101,"GET p-242668.jsf",d 1102,"GET p-242668.jsf",d 1200,"Selecting Mailing Boxes",d 1201,"GET p-242668.jsf",d 1202,"GET p-242668.jsf",d 1300,"Page 13",d 1301,"GET holiday-2010.jsf",d 1302,"GET findARate",d 1400,"Page 14",d 1401,"GET p-113773X.jsf",d 1500,"Selecting Cushion Envelope - CD",d 1501,"GET p-113773X.jsf",d 1502,"GET p-113773X.jsf",d 1700,"Browse 2010 Winter Games Stamps",d 1701,"GET stamps.jsf",d 1800,"Selecting Canada Strikes Gold! Booklet of 10 Stamps",d 1801,"GET p-413776111.jsf",d 1900,"Adding item to cart",d 1901,"POST p-413776111.jsf",d 1902,"GET p-413776111.jsf",d 2000,"View Shopping Cart",d 2001,"GET basket.jsf",d 2002,"GET basket.jsf",d 2100,"Proceed to Checkout - Address Information",d 2101,"GET checkout.jsf",d 2102,"GET checkout.jsf",d 2200,"Page 22",d 2201,"POST checkout.jsf",d 2202,"GET checkout.jsf",d 2300,"Page 23",d 2301,"POST checkout.jsf",d 2302,"GET checkout.jsf",d 2303,"GET cpoPayment.jsf",d 2400,"Page 24",d 2401,"POST cpoPayment.jsf",d 2402,"GET confirmation.jsf",d 2403,"GET confirmation.jsf",d 2500,"Page 25",d 2501,"POST confirmation.jsf",d 2502,"GET confirmation.jsf",d 2600,"Page 26",d 2601,"GET logout",d 2602,"GET confirmation.jsf",d 2603,"GET login",d 2604,"GET login.jsp",d 2605,"GET signIn",d |
From: Calum F. <cal...@gm...> - 2011-04-12 16:55:29
Hi Ouray,

The generateReport.bat file that I have (from the git repository) does not have the hash character in it. I will need to check the distribution packaged files, thanks for the heads up. The line that you are talking about (containing saxon9.jar) should not be commented out. Here is the content of my generateReport.bat file, which is from version 1.5:

'''
@echo OFF
SETLOCAL

FOR /F "tokens=* skip=15" %%i IN (../etc/environment.properties) DO set %%i

set GROUNDREPORT_BIN_DIR=%GROUNDREPORT_HOME%/bin
set GROUNDREPORT_CONFIG_DIR=%GROUNDREPORT_HOME%/etc
set GROUNDREPORT_LIB_DIR=%GROUNDREPORT_HOME%/lib

set GROUNDREPORT_MAIN_BIN=groundReport.py
set GROUNDREPORT_DB_BIN=databaseInterface.py

set GROUNDREPORT_MAIN_PROPERTIES=groundReport.properties
set GROUNDREPORT_DATA_PROPERTIES=data.properties
set GROUNDREPORT_DATABASE_PROPERTIES=database.properties
set GROUNDREPORT_ENV_PROPERTIES=environment.properties
set GROUNDREPORT_RESOURCE_PROPERTIES=resource.properties

set CLASSPATH=%JYTHON_HOME%/jython.jar
set CLASSPATH=%CLASSPATH%;%GROUNDREPORT_LIB_DIR%/jfreechart-1.0.13.jar
set CLASSPATH=%CLASSPATH%;%GROUNDREPORT_LIB_DIR%/jcommon-1.0.16.jar
set CLASSPATH=%CLASSPATH%;%GROUNDREPORT_LIB_DIR%/postgresql-8.2-508.jdbc3.jar
set CLASSPATH=%CLASSPATH%;%GROUNDREPORT_LIB_DIR%/saxon9.jar
set CLASSPATH=%CLASSPATH%;%GROUNDREPORT_LIB_DIR%/fop.jar
set CLASSPATH=%CLASSPATH%;%GROUNDREPORT_LIB_DIR%/fop-hyph.jar
set CLASSPATH=%CLASSPATH%;%GROUNDREPORT_LIB_DIR%/avalon-framework-4.2.0.jar
set CLASSPATH=%CLASSPATH%;%GROUNDREPORT_LIB_DIR%/batik-all-1.6.jar
set CLASSPATH=%CLASSPATH%;%GROUNDREPORT_LIB_DIR%/commons-io-1.3.1.jar
set CLASSPATH=%CLASSPATH%;%GROUNDREPORT_LIB_DIR%/commons-logging-1.0.4.jar

set JYTHON_CLASS=org.python.util.jython
set JYTHON_JAVA_ARGS=-Dpython.home=%JYTHON_HOME% -Dpython.cachedir=%GROUNDREPORT_HOME%/var/cache
set JYTHON_CMD=%JYTHON_JAVA_ARGS% %JYTHON_CLASS%

set JAVA_BIN_DIR=%JAVA_HOME%/bin
set JAVA_BIN=java.exe
set JAVA_ARGS=-cp
set JAVA_CMD="%JAVA_BIN_DIR%/%JAVA_BIN%" %JAVA_ARGS% %CLASSPATH% %JAVA_MEM_ARGS%

set GROUNDREPORT_CMD=%GROUNDREPORT_LIB_DIR%/%GROUNDREPORT_MAIN_BIN% %GROUNDREPORT_CONFIG_DIR%/%GROUNDREPORT_ENV_PROPERTIES% %GROUNDREPORT_CONFIG_DIR%/%GROUNDREPORT_DATABASE_PROPERTIES% %GROUNDREPORT_CONFIG_DIR%/%GROUNDREPORT_DATA_PROPERTIES% %GROUNDREPORT_CONFIG_DIR%/%GROUNDREPORT_RESOURCE_PROPERTIES% %GROUNDREPORT_CONFIG_DIR%/%GROUNDREPORT_MAIN_PROPERTIES%

call %JAVA_CMD% %JYTHON_CMD% %GROUNDREPORT_CMD%

ENDLOCAL
'''

The main output type of the Ground Report is pdf, so it should be working. Please can you set the output type to 'xml' and send me a copy (you may need to compress it). Also please can you send me a copy of your current groundReport.properties file.

The error is occurring at the point where it is transforming the xml produced into fop to produce the pdf file. Having a look at the xml will show us if it is producing correctly formed xml.
Cheers
Cal

On 12 April 2011 15:47, Ouray Viney <ou...@vi...> wrote:
> Steps:
> 1) load data into db
> 2) generate pdf report -> exception generated, report generation failed
>
> GRAPH CREATION
> ________________
>
> Generating Graphs
> Time Taken = 00 Hours 00 Minutes 02 Seconds
>
> REPORT CREATION
> _________________
>
> Summary Article Information:
>
> Generating Graphs
> Time Taken = 00 Hours 00 Minutes 04 Seconds
> Generating Article
> Time Taken = 00 Hours 00 Minutes 00 Seconds
> Post-Processing Article
> Error on line 1856 of pagesetup.xsl:
> SXCH0003: java.lang.NumberFormatException: For input string: "4:"
> at xsl:apply-templates (file:/E:/Headwall_Software/Projects/Innovapost/development/ground_report-1.5/style/xsl/docbook-xsl-ns-1.75.0/fo/component.xsl#688)
> processing /article
> at xsl:apply-templates (file:///E:/Headwall_Software/Projects/Innovapost/development/ground_report-1.5/style/xsl/docbook-xsl-ns-1.75.0/fo/docbook.xsl#310)
> processing /article
> in built-in template rule
> at xsl:apply-templates (file:///E:/Headwall_Software/Projects/Innovapost/development/ground_report-1.5/style/xsl/docbook-xsl-ns-1.75.0/fo/docbook.xsl#222)
> processing /
> Traceback (most recent call last):
> File "E:/Headwall_Software/Projects/Innovapost/development/ground_report-1.5/lib/groundReport.py", line 351, in <module>
> eval(articleType)()
> File "E:/Headwall_Software/Projects/Innovapost/development/ground_report-1.5/lib/groundReport.py", line 351, in <module>
> eval(articleType)()
> File "E:\Headwall_Software\Projects\Innovapost\development\ground_report-1.5\lib\articleFactory.py", line 462, in createSummaryArticle
> self.postProcessArticle(output, fullChartList, filename)
> File "E:\Headwall_Software\Projects\Innovapost\development\ground_report-1.5\lib\articleFactory.py", line 76, in postProcessArticle
> xt.articleFOPTransform()
> File "E:\Headwall_Software\Projects\Innovapost\development\ground_report-1.5\lib\xmlUtilities.py", line 94, in articleFOPTransform
> transformer.transform(source, result)
> net.sf.saxon.trans.XPathException: java.lang.NumberFormatException: For input string: "4:"
> at net.sf.saxon.event.ContentHandlerProxy.handleSAXException(ContentHandlerProxy.java:521)
> at net.sf.saxon.event.ContentHandlerProxy.startContent(ContentHandlerProxy.java:375)
> at net.sf.saxon.event.NamespaceReducer.startContent(NamespaceReducer.java:197)
> at net.sf.saxon.event.ComplexContentOutputter.startContent(ComplexContentOutputter.java:550)
> at net.sf.saxon.event.ComplexContentOutputter.startElement(ComplexContentOutputter.java:174)
> at net.sf.saxon.instruct.ElementCreator.processLeavingTail(ElementCreator.java:289)
> at net.sf.saxon.instruct.Block.processLeavingTail(Block.java:556)
> at net.sf.saxon.expr.LetExpression.processLeavingTail(LetExpression.java:549)
> at net.sf.saxon.instruct.Block.processLeavingTail(Block.java:556)
> at net.sf.saxon.instruct.Template.applyLeavingTail(Template.java:203)
> at net.sf.saxon.instruct.ApplyTemplates.applyTemplates(ApplyTemplates.java:345)
> at net.sf.saxon.instruct.ApplyTemplates.apply(ApplyTemplates.java:210)
> at net.sf.saxon.instruct.ApplyTemplates.processLeavingTail(ApplyTemplates.java:174)
> at net.sf.saxon.instruct.Block.processLeavingTail(Block.java:556)
> at net.sf.saxon.instruct.Instruction.process(Instruction.java:93)
> at net.sf.saxon.instruct.ElementCreator.processLeavingTail(ElementCreator.java:296)
> at net.sf.saxon.expr.LetExpression.processLeavingTail(LetExpression.java:549)
> at net.sf.saxon.instruct.Template.applyLeavingTail(Template.java:203)
> at net.sf.saxon.instruct.ApplyTemplates.applyTemplates(ApplyTemplates.java:345)
> at net.sf.saxon.instruct.ApplyTemplates.apply(ApplyTemplates.java:210)
> at net.sf.saxon.instruct.ApplyTemplates.processLeavingTail(ApplyTemplates.java:174)
> at net.sf.saxon.instruct.Block.processLeavingTail(Block.java:556)
> at net.sf.saxon.instruct.Instruction.process(Instruction.java:93)
> at net.sf.saxon.instruct.ElementCreator.processLeavingTail(ElementCreator.java:296)
> at net.sf.saxon.instruct.Block.processLeavingTail(Block.java:556)
> at net.sf.saxon.expr.LetExpression.processLeavingTail(LetExpression.java:549)
> at net.sf.saxon.instruct.Template.applyLeavingTail(Template.java:203)
> at net.sf.saxon.instruct.ApplyTemplates.applyTemplates(ApplyTemplates.java:345)
> at net.sf.saxon.instruct.ApplyTemplates.defaultAction(ApplyTemplates.java:378)
> at net.sf.saxon.instruct.ApplyTemplates.applyTemplates(ApplyTemplates.java:333)
> at net.sf.saxon.instruct.ApplyTemplates$ApplyTemplatesPackage.processLeavingTail(ApplyTemplates.java:527)
> at net.sf.saxon.Controller.transformDocument(Controller.java:1812)
> at net.sf.saxon.Controller.transform(Controller.java:1621)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> net.sf.saxon.trans.XPathException: net.sf.saxon.trans.XPathException: java.lang.NumberFormatException: For input string: "4:"
>
> Has anyone been able to successfully generate PDFs?
> With the same settings, I change the output format to 'rff' and things work fine.
>
> Note:
> - there is a typo in the generateReport.bat file (#set CLASSPATH=%CLASSPATH%;%GROUNDREPORT_LIB_DIR%/saxon9.jar), should not use '#' in a bat file.
> - uncomment that line and the error above is seen.
> - comment out the line above and the error below is seen (REM set CLASSPATH=%CLASSPATH%;%GROUNDREPORT_LIB_DIR%/saxon9.jar):
>
> GRAPH CREATION
> ________________
>
> Generating Graphs
> Time Taken = 00 Hours 00 Minutes 02 Seconds
>
> REPORT CREATION
> _________________
>
> Summary Article Information:
>
> Generating Graphs
> Time Taken = 00 Hours 00 Minutes 05 Seconds
> Generating Article
> Time Taken = 00 Hours 00 Minutes 00 Seconds
> Post-Processing Article
> Apr 12, 2011 10:47:13 AM org.apache.fop.fo.FOTreeBuilder$MainFOHandler endElement
> WARNING: Mismatch: root (http://www.w3.org/1999/XSL/Format) vs. page-sequence (http://www.w3.org/1999/XSL/Format)
> Apr 12, 2011 10:47:13 AM org.apache.fop.fo.FOTreeBuilder fatalError
> SEVERE: javax.xml.transform.TransformerException: java.lang.IllegalStateException: endElement() called for fo:root where there is no current element.
> file:///E:/Headwall_Software/Projects/Innovapost/development/ground_report-1.5/style/xsl/docbook-xsl-ns-1.75.0/fo/docbook.xsl; Line #222; Column #59; java.lang.IllegalStateException: endElement() called for fo:root where there is no current element.
> Time Taken = 00 Hours 00 Minutes 05 Seconds
>
> Thanks,
> Ouray
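The typo Ouray flags and its fix, side by side. cmd.exe has no '#' comment syntax (comments use REM or ::), so the shipped line is executed as an unknown command '#set' and saxon9.jar never reaches the CLASSPATH:

'''
REM As shipped in the 1.5 package (fails as an unknown command '#set',
REM so Saxon silently stays off the CLASSPATH):
REM   #set CLASSPATH=%CLASSPATH%;%GROUNDREPORT_LIB_DIR%/saxon9.jar

REM As it should read, per the git copy of generateReport.bat above:
set CLASSPATH=%CLASSPATH%;%GROUNDREPORT_LIB_DIR%/saxon9.jar
'''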
From: Ouray V. <ou...@vi...> - 2011-04-12 14:47:55
Steps:
1) load data into db
2) generate pdf report -> exception generated, report generation failed

GRAPH CREATION
________________

Generating Graphs
Time Taken = 00 Hours 00 Minutes 02 Seconds

REPORT CREATION
_________________

Summary Article Information:

Generating Graphs
Time Taken = 00 Hours 00 Minutes 04 Seconds
Generating Article
Time Taken = 00 Hours 00 Minutes 00 Seconds
Post-Processing Article
Error on line 1856 of pagesetup.xsl:
SXCH0003: java.lang.NumberFormatException: For input string: "4:"
at xsl:apply-templates (file:/E:/Headwall_Software/Projects/Innovapost/development/ground_report-1.5/style/xsl/docbook-xsl-ns-1.75.0/fo/component.xsl#688)
processing /article
at xsl:apply-templates (file:///E:/Headwall_Software/Projects/Innovapost/development/ground_report-1.5/style/xsl/docbook-xsl-ns-1.75.0/fo/docbook.xsl#310)
processing /article
in built-in template rule
at xsl:apply-templates (file:///E:/Headwall_Software/Projects/Innovapost/development/ground_report-1.5/style/xsl/docbook-xsl-ns-1.75.0/fo/docbook.xsl#222)
processing /
Traceback (most recent call last):
File "E:/Headwall_Software/Projects/Innovapost/development/ground_report-1.5/lib/groundReport.py", line 351, in <module>
eval(articleType)()
File "E:/Headwall_Software/Projects/Innovapost/development/ground_report-1.5/lib/groundReport.py", line 351, in <module>
eval(articleType)()
File "E:\Headwall_Software\Projects\Innovapost\development\ground_report-1.5\lib\articleFactory.py", line 462, in createSummaryArticle
self.postProcessArticle(output, fullChartList, filename)
File "E:\Headwall_Software\Projects\Innovapost\development\ground_report-1.5\lib\articleFactory.py", line 76, in postProcessArticle
xt.articleFOPTransform()
File "E:\Headwall_Software\Projects\Innovapost\development\ground_report-1.5\lib\xmlUtilities.py", line 94, in articleFOPTransform
transformer.transform(source, result)
net.sf.saxon.trans.XPathException: java.lang.NumberFormatException: For input string: "4:"
at net.sf.saxon.event.ContentHandlerProxy.handleSAXException(ContentHandlerProxy.java:521)
at net.sf.saxon.event.ContentHandlerProxy.startContent(ContentHandlerProxy.java:375)
at net.sf.saxon.event.NamespaceReducer.startContent(NamespaceReducer.java:197)
at net.sf.saxon.event.ComplexContentOutputter.startContent(ComplexContentOutputter.java:550)
at net.sf.saxon.event.ComplexContentOutputter.startElement(ComplexContentOutputter.java:174)
at net.sf.saxon.instruct.ElementCreator.processLeavingTail(ElementCreator.java:289)
at net.sf.saxon.instruct.Block.processLeavingTail(Block.java:556)
at net.sf.saxon.expr.LetExpression.processLeavingTail(LetExpression.java:549)
at net.sf.saxon.instruct.Block.processLeavingTail(Block.java:556)
at net.sf.saxon.instruct.Template.applyLeavingTail(Template.java:203)
at net.sf.saxon.instruct.ApplyTemplates.applyTemplates(ApplyTemplates.java:345)
at net.sf.saxon.instruct.ApplyTemplates.apply(ApplyTemplates.java:210)
at net.sf.saxon.instruct.ApplyTemplates.processLeavingTail(ApplyTemplates.java:174)
at net.sf.saxon.instruct.Block.processLeavingTail(Block.java:556)
at net.sf.saxon.instruct.Instruction.process(Instruction.java:93)
at net.sf.saxon.instruct.ElementCreator.processLeavingTail(ElementCreator.java:296)
at net.sf.saxon.expr.LetExpression.processLeavingTail(LetExpression.java:549)
at net.sf.saxon.instruct.Template.applyLeavingTail(Template.java:203)
at net.sf.saxon.instruct.ApplyTemplates.applyTemplates(ApplyTemplates.java:345)
at net.sf.saxon.instruct.ApplyTemplates.apply(ApplyTemplates.java:210)
at net.sf.saxon.instruct.ApplyTemplates.processLeavingTail(ApplyTemplates.java:174)
at net.sf.saxon.instruct.Block.processLeavingTail(Block.java:556)
at net.sf.saxon.instruct.Instruction.process(Instruction.java:93)
at net.sf.saxon.instruct.ElementCreator.processLeavingTail(ElementCreator.java:296)
at net.sf.saxon.instruct.Block.processLeavingTail(Block.java:556)
at net.sf.saxon.expr.LetExpression.processLeavingTail(LetExpression.java:549)
at net.sf.saxon.instruct.Template.applyLeavingTail(Template.java:203)
at net.sf.saxon.instruct.ApplyTemplates.applyTemplates(ApplyTemplates.java:345)
at net.sf.saxon.instruct.ApplyTemplates.defaultAction(ApplyTemplates.java:378)
at net.sf.saxon.instruct.ApplyTemplates.applyTemplates(ApplyTemplates.java:333)
at net.sf.saxon.instruct.ApplyTemplates$ApplyTemplatesPackage.processLeavingTail(ApplyTemplates.java:527)
at net.sf.saxon.Controller.transformDocument(Controller.java:1812)
at net.sf.saxon.Controller.transform(Controller.java:1621)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
net.sf.saxon.trans.XPathException: net.sf.saxon.trans.XPathException: java.lang.NumberFormatException: For input string: "4:"

Has anyone been able to successfully generate PDFs?
With the same settings, I change the output format to 'rff' and things work fine.

Note:
- there is a typo in the generateReport.bat file (#set CLASSPATH=%CLASSPATH%;%GROUNDREPORT_LIB_DIR%/saxon9.jar), should not use '#' in a bat file.
- uncomment that line and the error above is seen.
- comment out the line above and the error below is seen (REM set CLASSPATH=%CLASSPATH%;%GROUNDREPORT_LIB_DIR%/saxon9.jar):

GRAPH CREATION
________________

Generating Graphs
Time Taken = 00 Hours 00 Minutes 02 Seconds

REPORT CREATION
_________________

Summary Article Information:

Generating Graphs
Time Taken = 00 Hours 00 Minutes 05 Seconds
Generating Article
Time Taken = 00 Hours 00 Minutes 00 Seconds
Post-Processing Article
Apr 12, 2011 10:47:13 AM org.apache.fop.fo.FOTreeBuilder$MainFOHandler endElement
WARNING: Mismatch: root (http://www.w3.org/1999/XSL/Format) vs. page-sequence (http://www.w3.org/1999/XSL/Format)
Apr 12, 2011 10:47:13 AM org.apache.fop.fo.FOTreeBuilder fatalError
SEVERE: javax.xml.transform.TransformerException: java.lang.IllegalStateException: endElement() called for fo:root where there is no current element.
file:///E:/Headwall_Software/Projects/Innovapost/development/ground_report-1.5/style/xsl/docbook-xsl-ns-1.75.0/fo/docbook.xsl; Line #222; Column #59; java.lang.IllegalStateException: endElement() called for fo:root where there is no current element.
Time Taken = 00 Hours 00 Minutes 05 Seconds

Thanks,
Ouray
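The two failure modes above appear to track whether Saxon is on the CLASSPATH: with saxon9.jar present the transform seems to run under Saxon and dies with the NumberFormatException, and without it the JDK's built-in Xalan copy is used and FOP fails on the malformed fo:root output. A hypothetical one-off check (not part of the distribution) that prints which XSLT engine the JVM will pick up, reusing the variables generateReport.bat defines:

'''
REM Hypothetical diagnostic: run after the variable block of
REM generateReport.bat has executed. Prints the TransformerFactory
REM implementation in use (net.sf.saxon.TransformerFactoryImpl when
REM saxon9.jar is on the CLASSPATH, the JDK's bundled Xalan otherwise).
"%JAVA_BIN_DIR%/%JAVA_BIN%" -cp %CLASSPATH% %JYTHON_JAVA_ARGS% %JYTHON_CLASS% -c "from javax.xml.transform import TransformerFactory; print TransformerFactory.newInstance().getClass().getName()"
'''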
From: Calum F. <cal...@gm...> - 2011-04-11 18:37:38
Hiya,

My first thoughts are: verify your connection details to the db. If these are not correct then I could see how it would throw this error, as conn is defined by creating the db connection. If that fails then the script will exit on the line that you quoted in the error... I may need to tidy that up a bit.

Cheers
Cal

On 11 April 2011 16:30, Ouray Viney <ou...@vi...> wrote:
> Hi Calum:
>
> OK, so it appears I jumped the gun. After trying to generate scripts a second time, I am seeing the following error thrown by groundReport.py.
>
> grinder@hostname:~/grinder_artifacts/ground_report-1.5/bin$ ./generateReport.bash /test_data/PEValidation_1104_R1T1
> Traceback (most recent call last):
> File "/opt/home/grinder/grinder_artifacts/ground_report-1.5/lib/groundReport.py", line 367, in <module>
> conn.close()
> NameError: name 'conn' is not defined
>
> grinder@hostname:~/grinder_artifacts/ground_report-1.5/bin$ grep conn ../lib/groundReport.py
> conn = du.getTxDbConnection(d, u, p, v)
> c = conn.cursor(1)
> sg = StandardGrapher(loadRun, loadRunList, conn, dataType, path, lang, imageFormat, resourceFile)
> mg = MultithreadGrapher(loadRun, loadRunList, conn, dataType, path, lang, imageFormat, resourceFile)
> ac = ArticleCreation(project, loadRun, loadRunList, pagePair, test, conn, dataType, outputFormat, path, lang, booleanDict['archive'], booleanDict['useConcurrentUserComparisonGraphs'], booleanDict['useWatermark'], watermarkImage, imageFormat, resourceFile)
> conn.close()
>
> Any thoughts?
>
> Cheers,
> Ouray
>
> On Mon, Apr 11, 2011 at 8:06 AM, Ouray Viney <ou...@vi...> wrote:
>> Hi Calum:
>> Thank you for your reply. Sorry I am a bit delayed getting back to you.
>> I did as you requested, I tweaked the groundReport.properties as recommended. I retried the report generation script and this time had no errors (sweet!).
>> So, for now, you have solved my issues generating reports. Now I have to review the generated reports.
>> Thank you for your support!
>> Ouray
>>
>> On Fri, Apr 8, 2011 at 2:45 PM, Calum Fitzgerald <cal...@gm...> wrote:
>>> Hi Ouray,
>>>
>>> Please cc the ground-user mailing list on all correspondence so that the mailing list keeps a record. Thanks.
>>>
>>> I've had a look at both the files that you sent.
>>>
>>> The data.properties file is ok.
>>>
>>> The groundReport.properties file has a lot of stuff switched on.
>>>
>>> Regarding the following:
>>>
>>> '''
>>> #################
>>> #
>>> # Select Reports
>>> #
>>> #################
>>> #
>>> #choose report type (boolean y or n)
>>> #
>>> SummaryArticle=y
>>> IndividualTestArticle=y
>>> IndividualPageArticle=y
>>> PageBreakdownArticle=y
>>> LoadRunComparisonArticle=y
>>> '''
>>>
>>> Please can you try it with only the summary report switched on and all the others switched off.
>>> The page reports are incompatible with your data type, as you only have norm data and not http data. Page reports are for http data.
>>> The LoadRunComparisonArticle is for comparing multiple runs, which I don't think you are trying to do in this instance.
>>> Anyway, let's start with one report, get that working and then move onto the others if necessary.
>>>
>>> Regarding the following section:
>>>
>>> '''
>>> #################
>>> #
>>> # Select Graphs
>>> #
>>> #################
>>> #
>>> '''
>>>
>>> You have a number of the graphs switched on as well. For the purposes of getting things running can you have them all switched off as well.
>>>
>>> The reports automatically create graphs which are incorporated into the reports, so you don't need to produce the graphs if you are producing a report... unless there is a specific graph you are looking for.
>>>
>>> Any graph with HTTP in its name is specifically for HTTP type data; if a graph does not have HTTP in its name it should be safe to use with norm type data.
>>>
>>> Long story short, let's try and output only a summary report to see if we can narrow down where the problem might be.
>>>
>>> Cheers
>>> Cal
>>>
>>> On 8 April 2011 14:02, Ouray Viney <ou...@vi...> wrote:
>>> > Hi Calum:
>>> >
>>> > The OSS world is amazing. Thank you for your reply!
>>> >
>>> > OK, first, let me answer your questions.
>>> >
>>> > I am not having any issues with the databaseInterface.bash script at all.
>>> >
>>> > I have loaded "norm" data into the reporting DB (see attached properties files you requested).
>>> >
>>> > Looking forward to solving this with you.
>>> >
>>> > Cheers,
>>> > Ouray
>>> >
>>> > On Thu, Apr 7, 2011 at 11:56 AM, Calum Fitzgerald <cal...@gm...> wrote:
>>> >> Hi,
>>> >>
>>> >> Apologies for the slight delay in response.
>>> >>
>>> >> I'm glad you have managed to load the data into the Postgres DB, are you still having problems with the databaseInterface script?
>>> >>
>>> >> With respect to the report generation I am assuming that you have loaded norm data into the database.
>>> >>
>>> >> The table page_summary_norm_mv does not exist for data of type 'norm', as norm data does not have the pages (composite tests) which are generated from the http plugin.
>>> >>
>>> >> The report should not be trying to access this table if the data type has been set to norm.
>>> >>
>>> >> Please could you let me know the settings in your data.properties and the groundReport.properties file... attaching them to a reply would be good.
>>> >>
>>> >> This may well be a bug.
>>> >>
>>> >> Cheers
>>> >> Cal
>>> >>
>>> >> On 6 April 2011 18:47, Ouray Viney <ou...@vi...> wrote:
>>> >> > Hi All:
>>> >> >
>>> >> > Running with GroundReports 1.5. Have successfully uploaded my Grinder3 data to the PostgreSQL DB. When I attempt to run the ./generateReport.bash script with the desired data dir, I get the following error:
>>> >> >
>>> >> > Traceback (most recent call last):
>>> >> > File "/opt/home/grinder/grinder_artifacts/ground_report-1.5/lib/groundReport.py", line 278, in <module>
>>> >> > c.execute('SELECT e."Load Run", count(e."Test") AS "Count" FROM ONLY "page_summary_%s_mv" e WHERE e."Load Run" = %s AND e."Test" = %s GROUP BY e."Load Run";' % (str(dataType),str(loadRun),str(p)))
>>> >> > zxJDBC.Error: ERROR: relation "page_summary_norm_mv" does not exist [SQLCode: 0], [SQLState: 42P01]
>>> >> > + cd /opt/home/grinder/grinder_artifacts/ground_report-1.5/bin
>>> >> >
>>> >> > When I look at the database, I don't actually see the table. I have used the databaseInterface.bash script several times to recreate the database, but no luck. That particular table is never there.
>>> >> >
>>> >> > Clearly I am missing something.
>>> >> > Any ideas?
>>> >
>>> > --
>>> > Ouray Viney
>>> > http://www.viney.ca
>>
>> --
>> Ouray Viney
>> http://www.viney.ca
>
> --
> Ouray Viney
> http://www.viney.ca
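The tidy-up Cal alludes to could look something like the following, sketched from the grep output above; the try/finally shape is an assumption about the fix, not the shipped code, and the surrounding names come from that grep output.

'''
# Sketch only: guard groundReport.py's cleanup so a failed connection
# attempt does not trigger a NameError at conn.close().
conn = None
try:
    conn = du.getTxDbConnection(d, u, p, v)  # raises if the db details are wrong
    c = conn.cursor(1)
    # ... graph and article creation as in the shipped script ...
finally:
    if conn is not None:  # only close a connection that actually opened
        conn.close()
'''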
From: Ouray V. <ou...@vi...> - 2011-04-11 15:30:21
Hi Calum:

OK, so it appears I jumped the gun. After trying to generate scripts a second time, I am seeing the following error thrown by groundReport.py.

grinder@hostname:~/grinder_artifacts/ground_report-1.5/bin$ ./generateReport.bash /test_data/PEValidation_1104_R1T1
Traceback (most recent call last):
File "/opt/home/grinder/grinder_artifacts/ground_report-1.5/lib/groundReport.py", line 367, in <module>
conn.close()
NameError: name 'conn' is not defined

grinder@hostname:~/grinder_artifacts/ground_report-1.5/bin$ grep conn ../lib/groundReport.py
conn = du.getTxDbConnection(d, u, p, v)
c = conn.cursor(1)
sg = StandardGrapher(loadRun, loadRunList, conn, dataType, path, lang, imageFormat, resourceFile)
mg = MultithreadGrapher(loadRun, loadRunList, conn, dataType, path, lang, imageFormat, resourceFile)
ac = ArticleCreation(project, loadRun, loadRunList, pagePair, test, conn, dataType, outputFormat, path, lang, booleanDict['archive'], booleanDict['useConcurrentUserComparisonGraphs'], booleanDict['useWatermark'], watermarkImage, imageFormat, resourceFile)
conn.close()

Any thoughts?

Cheers,
Ouray

On Mon, Apr 11, 2011 at 8:06 AM, Ouray Viney <ou...@vi...> wrote:
> Hi Calum:
>
> Thank you for your reply. Sorry I am a bit delayed getting back to you.
>
> I did as you requested, I tweaked the groundReport.properties as recommended. I retried the report generation script and this time had no errors (sweet!).
>
> So, for now, you have solved my issues generating reports. Now I have to review the generated reports.
>
> Thank you for your support!
>
> Ouray
>
> On Fri, Apr 8, 2011 at 2:45 PM, Calum Fitzgerald <cal...@gm...> wrote:
>> Hi Ouray,
>>
>> Please cc the ground-user mailing list on all correspondence so that the mailing list keeps a record. Thanks.
>>
>> I've had a look at both the files that you sent.
>>
>> The data.properties file is ok.
>>
>> The groundReport.properties file has a lot of stuff switched on.
>>
>> Regarding the following:
>>
>> '''
>> #################
>> #
>> # Select Reports
>> #
>> #################
>> #
>> #choose report type (boolean y or n)
>> #
>> SummaryArticle=y
>> IndividualTestArticle=y
>> IndividualPageArticle=y
>> PageBreakdownArticle=y
>> LoadRunComparisonArticle=y
>> '''
>>
>> Please can you try it with only the summary report switched on and all the others switched off.
>> The page reports are incompatible with your data type, as you only have norm data and not http data. Page reports are for http data.
>> The LoadRunComparisonArticle is for comparing multiple runs, which I don't think you are trying to do in this instance.
>> Anyway, let's start with one report, get that working and then move onto the others if necessary.
>>
>> Regarding the following section:
>>
>> '''
>> #################
>> #
>> # Select Graphs
>> #
>> #################
>> #
>> '''
>>
>> You have a number of the graphs switched on as well. For the purposes of getting things running can you have them all switched off as well.
>>
>> The reports automatically create graphs which are incorporated into the reports, so you don't need to produce the graphs if you are producing a report... unless there is a specific graph you are looking for.
>>
>> Any graph with HTTP in its name is specifically for HTTP type data; if a graph does not have HTTP in its name it should be safe to use with norm type data.
>>
>> Long story short, let's try and output only a summary report to see if we can narrow down where the problem might be.
>>
>> Cheers
>> Cal
>>
>> On 8 April 2011 14:02, Ouray Viney <ou...@vi...> wrote:
>> > Hi Calum:
>> >
>> > The OSS world is amazing. Thank you for your reply!
>> >
>> > OK, first, let me answer your questions.
>> >
>> > I am not having any issues with the databaseInterface.bash script at all.
>> >
>> > I have loaded "norm" data into the reporting DB (see attached properties files you requested).
>> >
>> > Looking forward to solving this with you.
>> >
>> > Cheers,
>> > Ouray
>> >
>> > On Thu, Apr 7, 2011 at 11:56 AM, Calum Fitzgerald <cal...@gm...> wrote:
>> >> Hi,
>> >>
>> >> Apologies for the slight delay in response.
>> >>
>> >> I'm glad you have managed to load the data into the Postgres DB, are you still having problems with the databaseInterface script?
>> >>
>> >> With respect to the report generation I am assuming that you have loaded norm data into the database.
>> >>
>> >> The table page_summary_norm_mv does not exist for data of type 'norm', as norm data does not have the pages (composite tests) which are generated from the http plugin.
>> >>
>> >> The report should not be trying to access this table if the data type has been set to norm.
>> >>
>> >> Please could you let me know the settings in your data.properties and the groundReport.properties file... attaching them to a reply would be good.
>> >>
>> >> This may well be a bug.
>> >>
>> >> Cheers
>> >> Cal
>> >>
>> >> On 6 April 2011 18:47, Ouray Viney <ou...@vi...> wrote:
>> >> > Hi All:
>> >> >
>> >> > Running with GroundReports 1.5. Have successfully uploaded my Grinder3 data to the PostgreSQL DB. When I attempt to run the ./generateReport.bash script with the desired data dir, I get the following error:
>> >> >
>> >> > Traceback (most recent call last):
>> >> > File "/opt/home/grinder/grinder_artifacts/ground_report-1.5/lib/groundReport.py", line 278, in <module>
>> >> > c.execute('SELECT e."Load Run", count(e."Test") AS "Count" FROM ONLY "page_summary_%s_mv" e WHERE e."Load Run" = %s AND e."Test" = %s GROUP BY e."Load Run";' % (str(dataType),str(loadRun),str(p)))
>> >> > zxJDBC.Error: ERROR: relation "page_summary_norm_mv" does not exist [SQLCode: 0], [SQLState: 42P01]
>> >> > + cd /opt/home/grinder/grinder_artifacts/ground_report-1.5/bin
>> >> >
>> >> > When I look at the database, I don't actually see the table. I have used the databaseInterface.bash script several times to recreate the database, but no luck. That particular table is never there.
>> >> >
>> >> > Clearly I am missing something.
>> >> > Any ideas?
>> >
>> > --
>> > Ouray Viney
>> > http://www.viney.ca
>
> --
> Ouray Viney
> http://www.viney.ca

--
Ouray Viney
http://www.viney.ca
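Pulling Cal's advice from this thread into one place, the report-selection block of groundReport.properties for the summary-only run he suggests would look like this (key names and the 'y'/'n' convention are as quoted above):

'''
#################
#
# Select Reports
#
#################
#
#choose report type (boolean y or n)
#
SummaryArticle=y
IndividualTestArticle=n
IndividualPageArticle=n
PageBreakdownArticle=n
LoadRunComparisonArticle=n
'''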