This list is closed, nobody may subscribe to it.
Messages archived per month:

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| 2010 |     | 19  | 8   | 25  | 16  | 77  | 131 | 76  | 30  | 7   | 3   |     |
| 2011 |     |     |     |     | 2   | 2   | 16  | 3   | 1   |     | 7   | 7   |
| 2012 | 10  | 1   | 8   | 6   | 1   | 3   | 1   |     | 1   |     | 8   | 2   |
| 2013 | 5   | 12  | 2   | 1   | 1   | 1   | 22  | 50  | 31  | 64  | 83  | 28  |
| 2014 | 31  | 18  | 27  | 39  | 45  | 15  | 6   | 27  | 6   | 67  | 70  | 1   |
| 2015 | 3   | 18  | 22  | 121 | 42  | 17  | 8   | 11  | 26  | 15  | 66  | 38  |
| 2016 | 14  | 59  | 28  | 44  | 21  | 12  | 9   | 11  | 4   | 2   | 1   |     |
| 2017 | 20  | 7   | 4   | 18  | 7   | 3   | 13  | 2   | 4   | 9   | 2   | 5   |
| 2018 |     |     |     | 2   |     |     |     |     |     |     |     |     |
| 2019 |     |     | 1   |     |     |     |     |     |     |     |     |     |
From: Bryan T. <br...@sy...> - 2015-05-09 21:56:29
Alex,

I am not sure which shortcuts you are referring to. Are you discussing the PREFIX declarations used in SPARQL? Or the manner in which the namespaces of the different triple or quad store instances are mapped onto URLs?

Thanks,
Bryan

On Saturday, May 9, 2015, Alex Jouravlev <al...@bu...> wrote:
> [quoted in full; see Alex J.'s message of 2015-05-09 21:44:55 below]

----
Bryan Thompson
Chief Scientist & Founder
SYSTAP, LLC
4501 Tower Road
Greensboro, NC 27410
br...@sy...
http://blazegraph.com
http://blog.bigdata.com
http://mapgraph.io
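Bryan's first interpretation — PREFIX declarations — is the simple case: prefixes are part of each query string, so the client supplies them with every request and nothing needs to be stored server-side. A hedged illustration against the endpoint shown later in the thread (the query itself is made up):

```sh
# PREFIX declarations travel inside the query text on every request;
# the server does not remember them between requests.
curl -s http://localhost:9999/bigdata/sparql \
  --data-urlencode 'query=
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?s ?label WHERE { ?s rdfs:label ?label } LIMIT 10'
```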
From: Alex J. <al...@bu...> - 2015-05-09 21:44:55
Hi Brad,

Thank you for that.

I was actually asking about namespace shortcuts in Bigdata terms. Are they defined in the DB or in a client only?

Alex

Alex Jouravlev
Director, Business Abstraction Pty Ltd
Phone: +61-(2)-8003-4830
Mobile: +61-4-0408-3258
Web: http://www.businessabstraction.com
LinkedIn: http://au.linkedin.com/in/alexjouravlev/

On Sat, May 9, 2015 at 11:19 PM, Brad Bebee <be...@sy...> wrote:
> [quoted in full; see Brad B.'s message of 2015-05-09 13:19:42 below]
From: Bryan T. <br...@sy...> - 2015-05-09 20:42:16
The essence of performance optimization is understanding the bottleneck. Is it IO wait? GC overhead? CPU? Serialization through lock contention? Once you understand the bottleneck, it often suggests the approach for reducing it.

Bryan

On Saturday, May 9, 2015, James HK <jam...@gm...> wrote:
> [quoted in full; see James HK's message of 2015-05-09 18:38:35 below]
From: James HK <jam...@gm...> - 2015-05-09 18:38:35
Hi,

> Let me suggest that 256MB is a pretty small heap for the JVM running
> the database. Why don't you give it a 4GB heap?

Changing the option to "-Xmx4g" did not produce any significant reduction [0] in the test execution time (compared to the previously cited runs).

> PerformanceOptimization (General background on performance tuning for
> bigdata)
> http://wiki.blazegraph.com/wiki/index.php/PerformanceOptimization

I read through the pages but I couldn't find any option that would make the test suite run faster in the Travis-CI environment.

[0] https://travis-ci.org/mwjames/SemanticMediaWiki/builds/61901411

Cheers

On 5/9/15, Bryan Thompson <br...@sy...> wrote:
> [quoted in full; see Bryan T.'s message of 2015-05-09 13:49:34 below]
From: Bryan T. <br...@sy...> - 2015-05-09 13:49:34
James, Let me suggest that 256MB is a pretty small heap for the JVM running the database. Why don't you give it a 4GB heap? We provide guidance for performance tuning at the following links: PerformanceOptimization (General background on performance tuning for bigdata) http://wiki.blazegraph.com/wiki/index.php/PerformanceOptimization IOOptimization (How to tune IO performance, SSD, branching factors, etc.) http://wiki.blazegraph.com/wiki/index.php/IOOptimization QueryOptimization (SPARQL specific query performance tricks and tips) http://wiki.blazegraph.com/wiki/index.php/QueryOptimization Thanks, Bryan ---- Bryan Thompson Chief Scientist & Founder SYSTAP, LLC 4501 Tower Road Greensboro, NC 27410 br...@sy... http://blazegraph.com http://blog.bigdata.com http://mapgraph.io Blazegraph™ is our ultra high-performance graph database that supports both RDF/SPARQL and Tinkerpop/Blueprints APIs. MapGraph™ is our disruptive new technology to use GPUs to accelerate data-parallel graph analytics. CONFIDENTIALITY NOTICE: This email and its contents and attachments are for the sole use of the intended recipient(s) and are confidential or proprietary to SYSTAP. Any unauthorized review, use, disclosure, dissemination or copying of this email or its contents or attachments is prohibited. If you have received this communication in error, please notify the sender by reply email and permanently delete all copies of the email and its contents and attachments. On Sat, May 9, 2015 at 12:12 AM, James HK <jam...@gm...> wrote: > Hi, > > I just wanted to give you some feedback on the Blazegraph > 1.5.1/Semantic MediaWiki 2.3/Travis integration. > > After having tried different ways to use tomcat + bigdata on Travis > (which wasn't really fruitful, our Travis/sesame runs on tomcat), I > settled for the more pragmatic [0] approach. Using [0] now allows us > to run our test suite on Travis [1] against Blazegraph 1.5.1. > > I'm not sure why but Blazegraph requires ~6 min to pass all our tests > while the same tests finish on Fuseki [2], Sesame [3], or Virtuoso [4] > in about half the time (~3 min). > > Maybe there are some tweaks available in order to make our tests run > faster before adding the Blazegraph service to our normal Travis > setup. > > [0] java -server -Xmx256m > -Dbigdata.propertyFile=.../build/travis/blazegraph-store.properties > -jar bigdata-bundled.jar > > [1] https://travis-ci.org/mwjames/SemanticMediaWiki/builds/61850318, > https://travis-ci.org/mwjames/SemanticMediaWiki/builds/61849465 > > [2] https://travis-ci.org/SemanticMediaWiki/SemanticMediaWiki/jobs/61848314 > [3] https://travis-ci.org/SemanticMediaWiki/SemanticMediaWiki/jobs/61848316 > [4] https://travis-ci.org/SemanticMediaWiki/SemanticMediaWiki/jobs/61848315 > > Cheers > > On 5/1/15, Bryan Thompson <br...@sy...> wrote: >> Here is the relevant documentation from the code: >> >> to turn this off include the following in your property file: >> >> com.bigdata.rdf.store.AbstractTripleStore.inlineDateTimes=false >> >> /** >> * Set up database to inline date/times directly into the statement >> * indices rather than using the lexicon to map them to term identifiers >> * and back (default {@value #DEFAULT_INLINE_DATE_TIMES}). Date times >> * will be converted to UTC, then stored as milliseconds since the >> * epoch. Thus if you inline date/times you will lose the canonical >> * representation of the date/time. 
This has two consequences: (1) you >> * will not be able to recover the original time zone of the date/time; >> * and (2) greater than millisecond precision will be lost. >> * >> * @see #INLINE_DATE_TIMES_TIMEZONE >> */ >> String INLINE_DATE_TIMES = AbstractTripleStore.class.getName() >> + ".inlineDateTimes"; >> >> String DEFAULT_INLINE_DATE_TIMES = "true"; >> ---- >> Bryan Thompson >> Chief Scientist & Founder >> SYSTAP, LLC >> 4501 Tower Road >> Greensboro, NC 27410 >> br...@sy... >> http://blazegraph.com >> http://blog.bigdata.com >> http://mapgraph.io >> >> Blazegraph™ is our ultra high-performance graph database that supports >> both RDF/SPARQL and Tinkerpop/Blueprints APIs. MapGraph™ is our >> disruptive new technology to use GPUs to accelerate data-parallel >> graph analytics. >> >> CONFIDENTIALITY NOTICE: This email and its contents and attachments >> are for the sole use of the intended recipient(s) and are confidential >> or proprietary to SYSTAP. Any unauthorized review, use, disclosure, >> dissemination or copying of this email or its contents or attachments >> is prohibited. If you have received this communication in error, >> please notify the sender by reply email and permanently delete all >> copies of the email and its contents and attachments. >> >> >> On Thu, Apr 30, 2015 at 12:34 PM, Bryan Thompson <br...@sy...> wrote: >>> I am not at a computer now.. see AbstractTripleStore INLINE_DATE_TIMES. >>> You >>> need to turn off that property when creating the namespace or in the >>> RWStore >>> file in WEB-INF. >>> >>> Thanks, >>> Bryan >>> >>> On Apr 30, 2015 7:30 AM, "James HK" <jam...@gm...> wrote: >>>> >>>> Hi, >>>> >>>> When trying to run our unit test suite [0] against a preliminary/local >>>> Blazegraph 1.5.1 instance the >>>> following error appeared during the test run (using a vanilla >>>> Blazegraph with the standard kb namespace) . >>>> >>>> Our test suite (and the test mentioned) is run/tested against Virtuoso >>>> 6.1/Fuseki 1.1.1/Sesame 2.7.14 [0] on Travis-CI therefore it is >>>> unlikely an issue on our side. 
>>>> >>>> 1) >>>> SMW\Tests\Integration\MediaWiki\Import\TimeDataTypeTest::testImportOfDifferentDateWithAssortmentOfOutputConversion >>>> SMW\SPARQLStore\Exception\BadHttpDatabaseResponseException: A SPARQL >>>> query error has occurred >>>> Query: >>>> PREFIX wiki: <http://example.org/id/> >>>> PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> >>>> PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> >>>> PREFIX owl: <http://www.w3.org/2002/07/owl#> >>>> PREFIX swivt: <http://semantic-mediawiki.org/swivt/1.0#> >>>> PREFIX property: <http://example.org/id/Property-3A> >>>> PREFIX xsd: <http://www.w3.org/2001/XMLSchema#> >>>> DELETE { wiki:TimeDataTypeRegressionTest ?p ?o } WHERE { >>>> wiki:TimeDataTypeRegressionTest ?p ?o } >>>> Error: Query refused >>>> Endpoint: http://192.168.1.104:9999/bigdata/namespace/kb/sparql >>>> HTTP response code: 500 >>>> >>>> This translates into an error on the Blazegraph side with (output from >>>> http://localhost:9999/bigdata/#update): >>>> >>>> ERROR: SPARQL-UPDATE: updateStr=PREFIX wiki: >>>> PREFIX rdf: >>>> PREFIX rdfs: >>>> PREFIX owl: >>>> PREFIX swivt: >>>> PREFIX property: >>>> PREFIX xsd: >>>> DELETE { wiki:TimeDataTypeRegressionTest ?p ?o } WHERE { >>>> wiki:TimeDataTypeRegressionTest ?p ?o } >>>> java.util.concurrent.ExecutionException: >>>> java.util.concurrent.ExecutionException: >>>> org.openrdf.query.UpdateExecutionException: >>>> java.lang.IllegalStateException: Already assigned: >>>> old=LiteralExtensionIV [delegate=XSDLong(6017484188943806464), >>>> datatype=Vocab(-42)], new=LiteralExtensionIV >>>> [delegate=XSDLong(204552172800000), datatype=Vocab(-42)], this: >>>> "8452"^^ >>>> at java.util.concurrent.FutureTask.report(FutureTask.java:122) >>>> at java.util.concurrent.FutureTask.get(FutureTask.java:188) >>>> at >>>> com.bigdata.rdf.sail.webapp.BigdataServlet.submitApiTask(BigdataServlet.java:261) >>>> at >>>> com.bigdata.rdf.sail.webapp.QueryServlet.doSparqlUpdate(QueryServlet.java:359) >>>> at >>>> com.bigdata.rdf.sail.webapp.QueryServlet.doPost(QueryServlet.java:165) >>>> at >>>> com.bigdata.rdf.sail.webapp.RESTServlet.doPost(RESTServlet.java:237) >>>> at >>>> com.bigdata.rdf.sail.webapp.MultiTenancyServlet.doPost(MultiTenancyServlet.java:137) >>>> at javax.servlet.http.HttpServlet.service(HttpServlet.java:707) >>>> at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) >>>> at >>>> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:769) >>>> at >>>> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585) >>>> at >>>> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) >>>> at >>>> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577) >>>> at >>>> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223) >>>> at >>>> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1125) >>>> at >>>> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515) >>>> at >>>> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) >>>> at >>>> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1059) >>>> at >>>> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) >>>> at >>>> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215) >>>> at >>>> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110) >>>> at >>>> 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) >>>> at org.eclipse.jetty.server.Server.handle(Server.java:497) >>>> at >>>> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311) >>>> at >>>> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:248) >>>> at >>>> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) >>>> at >>>> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:610) >>>> at >>>> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:539) >>>> at java.lang.Thread.run(Thread.java:744) >>>> Caused by: java.util.concurrent.ExecutionException: >>>> org.openrdf.query.UpdateExecutionException: >>>> java.lang.IllegalStateException: Already assigned: >>>> old=LiteralExtensionIV [delegate=XSDLong(6017484188943806464), >>>> datatype=Vocab(-42)], new=LiteralExtensionIV >>>> [delegate=XSDLong(204552172800000), datatype=Vocab(-42)], this: >>>> "8452"^^ >>>> at java.util.concurrent.FutureTask.report(FutureTask.java:122) >>>> at java.util.concurrent.FutureTask.get(FutureTask.java:188) >>>> at >>>> com.bigdata.rdf.sail.webapp.QueryServlet$SparqlUpdateTask.call(QueryServlet.java:460) >>>> at >>>> com.bigdata.rdf.sail.webapp.QueryServlet$SparqlUpdateTask.call(QueryServlet.java:371) >>>> at >>>> com.bigdata.rdf.task.ApiTaskForIndexManager.call(ApiTaskForIndexManager.java:68) >>>> at java.util.concurrent.FutureTask.run(FutureTask.java:262) >>>> at >>>> com.bigdata.rdf.task.AbstractApiTask.submitApiTask(AbstractApiTask.java:365) >>>> at >>>> com.bigdata.rdf.sail.webapp.BigdataServlet.submitApiTask(BigdataServlet.java:258) >>>> ... 26 more >>>> Caused by: org.openrdf.query.UpdateExecutionException: >>>> java.lang.IllegalStateException: Already assigned: >>>> old=LiteralExtensionIV [delegate=XSDLong(6017484188943806464), >>>> datatype=Vocab(-42)], new=LiteralExtensionIV >>>> [delegate=XSDLong(204552172800000), datatype=Vocab(-42)], this: >>>> "8452"^^ >>>> at >>>> com.bigdata.rdf.sparql.ast.eval.ASTEvalHelper.executeUpdate(ASTEvalHelper.java:1303) >>>> at >>>> com.bigdata.rdf.sail.BigdataSailUpdate.execute2(BigdataSailUpdate.java:152) >>>> at >>>> com.bigdata.rdf.sail.webapp.BigdataRDFContext$UpdateTask.doQuery(BigdataRDFContext.java:1683) >>>> at >>>> com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.innerCall(BigdataRDFContext.java:1310) >>>> at >>>> com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:1275) >>>> at >>>> com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:517) >>>> at java.util.concurrent.FutureTask.run(FutureTask.java:262) >>>> at >>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) >>>> at >>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) >>>> ... 
1 more >>>> Caused by: java.lang.IllegalStateException: Already assigned: >>>> old=LiteralExtensionIV [delegate=XSDLong(6017484188943806464), >>>> datatype=Vocab(-42)], new=LiteralExtensionIV >>>> [delegate=XSDLong(204552172800000), datatype=Vocab(-42)], this: >>>> "8452"^^ >>>> at >>>> com.bigdata.rdf.model.BigdataValueImpl.setIV(BigdataValueImpl.java:139) >>>> at >>>> com.bigdata.rdf.internal.LexiconConfiguration.createInlineIV(LexiconConfiguration.java:430) >>>> at >>>> com.bigdata.rdf.lexicon.LexiconRelation.getInlineIV(LexiconRelation.java:3150) >>>> at >>>> com.bigdata.rdf.lexicon.LexiconRelation.addTerms(LexiconRelation.java:1719) >>>> at >>>> com.bigdata.rdf.store.AbstractTripleStore.getAccessPath(AbstractTripleStore.java:2928) >>>> at >>>> com.bigdata.rdf.store.AbstractTripleStore.getAccessPath(AbstractTripleStore.java:2874) >>>> at >>>> com.bigdata.rdf.sail.BigdataSail$BigdataSailConnection.removeStatements(BigdataSail.java:2962) >>>> at >>>> com.bigdata.rdf.sail.BigdataSail$BigdataSailConnection.removeStatements(BigdataSail.java:2865) >>>> at >>>> com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate.addOrRemoveStatement(AST2BOpUpdate.java:2054) >>>> at >>>> com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate.convertDeleteInsert(AST2BOpUpdate.java:989) >>>> at >>>> com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate.convertUpdateSwitch(AST2BOpUpdate.java:417) >>>> at >>>> com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate.convertUpdate(AST2BOpUpdate.java:279) >>>> at >>>> com.bigdata.rdf.sparql.ast.eval.ASTEvalHelper.executeUpdate(ASTEvalHelper.java:1295) >>>> ... 9 more >>>> >>>> [0] https://travis-ci.org/SemanticMediaWiki/SemanticMediaWiki >>>> >>>> Cheers >>>> >>>> >>>> ------------------------------------------------------------------------------ >>>> One dashboard for servers and applications across Physical-Virtual-Cloud >>>> Widest out-of-the-box monitoring support with 50+ applications >>>> Performance metrics, stats and reports that give you Actionable Insights >>>> Deep dive visibility with transaction tracing using APM Insight. >>>> http://ad.doubleclick.net/ddm/clk/290420510;117567292;y >>>> _______________________________________________ >>>> Bigdata-developers mailing list >>>> Big...@li... >>>> https://lists.sourceforge.net/lists/listinfo/bigdata-developers >> |
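Bryan's suggestion above amounts to raising the -Xmx flag on the launch command James posted earlier in the thread. A minimal sketch combining the two (the property-file path below is an illustrative placeholder, since the thread elides the real one):

```sh
# Launch the bundled NanoSparqlServer with a 4 GB heap instead of 256 MB.
# Substitute your own property-file path.
java -server -Xmx4g \
  -Dbigdata.propertyFile=/path/to/blazegraph-store.properties \
  -jar bigdata-bundled.jar
```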
From: Brad B. <be...@sy...> - 2015-05-09 13:19:42
Alex,

The namespace abstraction allows you to host multiple knowledge bases, each of which can have different configuration options (triples, RDR, inference, quads, etc.), in a single Blazegraph server instance. Here are some of the configuration options [1]. There is a default namespace ("kb"), which is used if no namespace is specified in the SPARQL endpoint. In the examples below, [A] and [B] are equivalent: [A] uses the default namespace. [C] specifies the "biology" namespace.

[A] http://localhost:9999/bigdata/sparql
[B] http://localhost:9999/bigdata/namespace/kb/sparql
[C] http://localhost:9999/bigdata/namespace/biology/sparql

Thanks, --Brad

[1] http://wiki.blazegraph.com/wiki/index.php/GettingStarted#So_how_do_I_put_the_database_in_triple_store_versus_quad_store_mode.3F

On Sat, May 9, 2015 at 1:20 AM, Alex Jouravlev <al...@bu...> wrote:
> [quoted in full; see Alex J.'s message of 2015-05-09 05:20:45 below]

--
_______________
Brad Bebee
Managing Partner
SYSTAP, LLC
e: be...@sy...
m: 202.642.7961
f: 571.367.5000
w: www.systap.com
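The three endpoint forms above can be exercised directly with any HTTP client. A hedged sketch using curl (the count query is illustrative, and the "biology" namespace must already exist on your server):

```sh
# Default namespace ("kb") — these two requests hit the same knowledge base.
curl -s http://localhost:9999/bigdata/sparql \
  --data-urlencode 'query=SELECT (COUNT(*) AS ?n) WHERE { ?s ?p ?o }'
curl -s http://localhost:9999/bigdata/namespace/kb/sparql \
  --data-urlencode 'query=SELECT (COUNT(*) AS ?n) WHERE { ?s ?p ?o }'

# Explicitly named namespace.
curl -s http://localhost:9999/bigdata/namespace/biology/sparql \
  --data-urlencode 'query=SELECT (COUNT(*) AS ?n) WHERE { ?s ?p ?o }'
```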
From: Brad B. <be...@sy...> - 2015-05-09 13:14:16
|
Alex,

I would recommend customizing a log4j.properties file [1] and passing it as
a command line argument [2]. Jetty has very fine-grained logging options
that you can enable in the log4j properties.

Thanks, --Brad

[1] https://sourceforge.net/p/bigdata/git/ci/master/tree/bigdata-war/src/WEB-INF/classes/log4j.properties
[2] http://wiki.blazegraph.com/wiki/index.php/NanoSparqlServer#Logging

On Sat, May 9, 2015 at 12:45 AM, Alex Jouravlev <al...@bu...> wrote:
> Hi,
>
> I am trying to use what's supposed to be the easiest setup - the standalone
> NSS from the command line. Something isn't going well.
>
> How can I log requests, responses, and evaluation information?
>
> Alex Jouravlev
> Director, Business Abstraction Pty Ltd
> Phone: +61-(2)-8003-4830
> Mobile: +61-4-0408-3258
> Web: http://www.businessabstraction.com
> LinkedIn: http://au.linkedin.com/in/alexjouravlev/

--
_______________
Brad Bebee
Managing Partner
SYSTAP, LLC
e: be...@sy...
m: 202.642.7961
f: 571.367.5000
w: www.systap.com

Blazegraph™ <http://www.blazegraph.com> is our ultra high-performance graph database that supports both RDF/SPARQL and Tinkerpop/Blueprints APIs. MapGraph™ <http://www.systap.com/mapgraph> is our disruptive new technology to use GPUs to accelerate data-parallel graph analytics.
|
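To make that concrete, here is a minimal sketch assuming the bundled jar. The paths are placeholders to substitute with your own, the logger names are com.bigdata packages visible in the server's own stack traces, and the levels are only illustrative starting points (see [1] above for the shipped configuration):

    # Start the standalone server with a custom log4j configuration.
    java -server -Xmx4g \
         -Dlog4j.configuration=file:/path/to/log4j.properties \
         -Dbigdata.propertyFile=/path/to/RWStore.properties \
         -jar bigdata-bundled.jar

And a matching log4j.properties sketch:

    # Console appender with a simple pattern.
    log4j.rootCategory=WARN, dest1
    log4j.appender.dest1=org.apache.log4j.ConsoleAppender
    log4j.appender.dest1.layout=org.apache.log4j.PatternLayout
    log4j.appender.dest1.layout.ConversionPattern=%-5p %d %c - %m%n

    # More detail from the database and the REST/query layer.
    log4j.logger.com.bigdata=INFO
    log4j.logger.com.bigdata.rdf.sail.webapp=DEBUG
|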
From: Alex J. <al...@bu...> - 2015-05-09 05:20:45
|
Hi,

I cannot understand what I can do with namespace prefixes. Should I supply
them with every invocation? Does Bigdata store them? When supplying new
triples/quads, should I convert prefixed names to fully qualified URIs?

Alex Jouravlev
Director, Business Abstraction Pty Ltd
Phone: +61-(2)-8003-4830
Mobile: +61-4-0408-3258
Web: http://www.businessabstraction.com
LinkedIn: http://au.linkedin.com/in/alexjouravlev/
|
From: Alex J. <al...@bu...> - 2015-05-09 05:09:28
|
Hi,

I am trying to use what's supposed to be the easiest setup - the standalone
NSS from the command line. Something isn't going well.

How can I log requests, responses, and evaluation information?

Alex Jouravlev
Director, Business Abstraction Pty Ltd
Phone: +61-(2)-8003-4830
Mobile: +61-4-0408-3258
Web: http://www.businessabstraction.com
LinkedIn: http://au.linkedin.com/in/alexjouravlev/
|
From: James HK <jam...@gm...> - 2015-05-09 04:12:35
|
Hi,

I just wanted to give you some feedback on the Blazegraph 1.5.1 / Semantic
MediaWiki 2.3 / Travis integration. After trying different ways to run
tomcat + bigdata on Travis (which wasn't really fruitful; our Travis/sesame
setup runs on tomcat), I settled for the more pragmatic approach [0].

Using [0] now allows us to run our test suite on Travis [1] against
Blazegraph 1.5.1. I'm not sure why, but Blazegraph requires ~6 min to pass
all our tests while the same tests finish on Fuseki [2], Sesame [3], or
Virtuoso [4] in about half the time (~3 min). Maybe there are some tweaks
available to make our tests run faster before we add the Blazegraph service
to our normal Travis setup.

[0] java -server -Xmx256m -Dbigdata.propertyFile=.../build/travis/blazegraph-store.properties -jar bigdata-bundled.jar
[1] https://travis-ci.org/mwjames/SemanticMediaWiki/builds/61850318, https://travis-ci.org/mwjames/SemanticMediaWiki/builds/61849465
[2] https://travis-ci.org/SemanticMediaWiki/SemanticMediaWiki/jobs/61848314
[3] https://travis-ci.org/SemanticMediaWiki/SemanticMediaWiki/jobs/61848316
[4] https://travis-ci.org/SemanticMediaWiki/SemanticMediaWiki/jobs/61848315

Cheers

On 5/1/15, Bryan Thompson <br...@sy...> wrote:
> Here is the relevant documentation from the code:
>
> To turn this off, include the following in your property file:
>
> com.bigdata.rdf.store.AbstractTripleStore.inlineDateTimes=false
>
>     /**
>      * Set up database to inline date/times directly into the statement
>      * indices rather than using the lexicon to map them to term identifiers
>      * and back (default {@value #DEFAULT_INLINE_DATE_TIMES}). Date times
>      * will be converted to UTC, then stored as milliseconds since the
>      * epoch. Thus if you inline date/times you will lose the canonical
>      * representation of the date/time. This has two consequences: (1) you
>      * will not be able to recover the original time zone of the date/time;
>      * and (2) greater than millisecond precision will be lost.
>      *
>      * @see #INLINE_DATE_TIMES_TIMEZONE
>      */
>     String INLINE_DATE_TIMES = AbstractTripleStore.class.getName()
>             + ".inlineDateTimes";
>
>     String DEFAULT_INLINE_DATE_TIMES = "true";
>
> ----
> Bryan Thompson
> Chief Scientist & Founder
> SYSTAP, LLC
> 4501 Tower Road
> Greensboro, NC 27410
> br...@sy...
> http://blazegraph.com
> http://blog.bigdata.com
> http://mapgraph.io
>
> On Thu, Apr 30, 2015 at 12:34 PM, Bryan Thompson <br...@sy...> wrote:
>> I am not at a computer now.. see AbstractTripleStore INLINE_DATE_TIMES. You
>> need to turn off that property when creating the namespace or in the RWStore
>> file in WEB-INF.
>> >> Thanks, >> Bryan >> >> On Apr 30, 2015 7:30 AM, "James HK" <jam...@gm...> wrote: >>> >>> Hi, >>> >>> When trying to run our unit test suite [0] against a preliminary/local >>> Blazegraph 1.5.1 instance the >>> following error appeared during the test run (using a vanilla >>> Blazegraph with the standard kb namespace) . >>> >>> Our test suite (and the test mentioned) is run/tested against Virtuoso >>> 6.1/Fuseki 1.1.1/Sesame 2.7.14 [0] on Travis-CI therefore it is >>> unlikely an issue on our side. >>> >>> 1) >>> SMW\Tests\Integration\MediaWiki\Import\TimeDataTypeTest::testImportOfDifferentDateWithAssortmentOfOutputConversion >>> SMW\SPARQLStore\Exception\BadHttpDatabaseResponseException: A SPARQL >>> query error has occurred >>> Query: >>> PREFIX wiki: <http://example.org/id/> >>> PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> >>> PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> >>> PREFIX owl: <http://www.w3.org/2002/07/owl#> >>> PREFIX swivt: <http://semantic-mediawiki.org/swivt/1.0#> >>> PREFIX property: <http://example.org/id/Property-3A> >>> PREFIX xsd: <http://www.w3.org/2001/XMLSchema#> >>> DELETE { wiki:TimeDataTypeRegressionTest ?p ?o } WHERE { >>> wiki:TimeDataTypeRegressionTest ?p ?o } >>> Error: Query refused >>> Endpoint: http://192.168.1.104:9999/bigdata/namespace/kb/sparql >>> HTTP response code: 500 >>> >>> This translates into an error on the Blazegraph side with (output from >>> http://localhost:9999/bigdata/#update): >>> >>> ERROR: SPARQL-UPDATE: updateStr=PREFIX wiki: >>> PREFIX rdf: >>> PREFIX rdfs: >>> PREFIX owl: >>> PREFIX swivt: >>> PREFIX property: >>> PREFIX xsd: >>> DELETE { wiki:TimeDataTypeRegressionTest ?p ?o } WHERE { >>> wiki:TimeDataTypeRegressionTest ?p ?o } >>> java.util.concurrent.ExecutionException: >>> java.util.concurrent.ExecutionException: >>> org.openrdf.query.UpdateExecutionException: >>> java.lang.IllegalStateException: Already assigned: >>> old=LiteralExtensionIV [delegate=XSDLong(6017484188943806464), >>> datatype=Vocab(-42)], new=LiteralExtensionIV >>> [delegate=XSDLong(204552172800000), datatype=Vocab(-42)], this: >>> "8452"^^ >>> at java.util.concurrent.FutureTask.report(FutureTask.java:122) >>> at java.util.concurrent.FutureTask.get(FutureTask.java:188) >>> at >>> com.bigdata.rdf.sail.webapp.BigdataServlet.submitApiTask(BigdataServlet.java:261) >>> at >>> com.bigdata.rdf.sail.webapp.QueryServlet.doSparqlUpdate(QueryServlet.java:359) >>> at >>> com.bigdata.rdf.sail.webapp.QueryServlet.doPost(QueryServlet.java:165) >>> at >>> com.bigdata.rdf.sail.webapp.RESTServlet.doPost(RESTServlet.java:237) >>> at >>> com.bigdata.rdf.sail.webapp.MultiTenancyServlet.doPost(MultiTenancyServlet.java:137) >>> at javax.servlet.http.HttpServlet.service(HttpServlet.java:707) >>> at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) >>> at >>> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:769) >>> at >>> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585) >>> at >>> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) >>> at >>> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577) >>> at >>> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223) >>> at >>> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1125) >>> at >>> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515) >>> at >>> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) >>> 
at >>> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1059) >>> at >>> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) >>> at >>> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215) >>> at >>> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110) >>> at >>> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) >>> at org.eclipse.jetty.server.Server.handle(Server.java:497) >>> at >>> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311) >>> at >>> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:248) >>> at >>> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) >>> at >>> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:610) >>> at >>> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:539) >>> at java.lang.Thread.run(Thread.java:744) >>> Caused by: java.util.concurrent.ExecutionException: >>> org.openrdf.query.UpdateExecutionException: >>> java.lang.IllegalStateException: Already assigned: >>> old=LiteralExtensionIV [delegate=XSDLong(6017484188943806464), >>> datatype=Vocab(-42)], new=LiteralExtensionIV >>> [delegate=XSDLong(204552172800000), datatype=Vocab(-42)], this: >>> "8452"^^ >>> at java.util.concurrent.FutureTask.report(FutureTask.java:122) >>> at java.util.concurrent.FutureTask.get(FutureTask.java:188) >>> at >>> com.bigdata.rdf.sail.webapp.QueryServlet$SparqlUpdateTask.call(QueryServlet.java:460) >>> at >>> com.bigdata.rdf.sail.webapp.QueryServlet$SparqlUpdateTask.call(QueryServlet.java:371) >>> at >>> com.bigdata.rdf.task.ApiTaskForIndexManager.call(ApiTaskForIndexManager.java:68) >>> at java.util.concurrent.FutureTask.run(FutureTask.java:262) >>> at >>> com.bigdata.rdf.task.AbstractApiTask.submitApiTask(AbstractApiTask.java:365) >>> at >>> com.bigdata.rdf.sail.webapp.BigdataServlet.submitApiTask(BigdataServlet.java:258) >>> ... 26 more >>> Caused by: org.openrdf.query.UpdateExecutionException: >>> java.lang.IllegalStateException: Already assigned: >>> old=LiteralExtensionIV [delegate=XSDLong(6017484188943806464), >>> datatype=Vocab(-42)], new=LiteralExtensionIV >>> [delegate=XSDLong(204552172800000), datatype=Vocab(-42)], this: >>> "8452"^^ >>> at >>> com.bigdata.rdf.sparql.ast.eval.ASTEvalHelper.executeUpdate(ASTEvalHelper.java:1303) >>> at >>> com.bigdata.rdf.sail.BigdataSailUpdate.execute2(BigdataSailUpdate.java:152) >>> at >>> com.bigdata.rdf.sail.webapp.BigdataRDFContext$UpdateTask.doQuery(BigdataRDFContext.java:1683) >>> at >>> com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.innerCall(BigdataRDFContext.java:1310) >>> at >>> com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:1275) >>> at >>> com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:517) >>> at java.util.concurrent.FutureTask.run(FutureTask.java:262) >>> at >>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) >>> at >>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) >>> ... 
1 more
>>> Caused by: java.lang.IllegalStateException: Already assigned:
>>> old=LiteralExtensionIV [delegate=XSDLong(6017484188943806464), datatype=Vocab(-42)],
>>> new=LiteralExtensionIV [delegate=XSDLong(204552172800000), datatype=Vocab(-42)],
>>> this: "8452"^^
>>> at com.bigdata.rdf.model.BigdataValueImpl.setIV(BigdataValueImpl.java:139)
>>> at com.bigdata.rdf.internal.LexiconConfiguration.createInlineIV(LexiconConfiguration.java:430)
>>> at com.bigdata.rdf.lexicon.LexiconRelation.getInlineIV(LexiconRelation.java:3150)
>>> at com.bigdata.rdf.lexicon.LexiconRelation.addTerms(LexiconRelation.java:1719)
>>> at com.bigdata.rdf.store.AbstractTripleStore.getAccessPath(AbstractTripleStore.java:2928)
>>> at com.bigdata.rdf.store.AbstractTripleStore.getAccessPath(AbstractTripleStore.java:2874)
>>> at com.bigdata.rdf.sail.BigdataSail$BigdataSailConnection.removeStatements(BigdataSail.java:2962)
>>> at com.bigdata.rdf.sail.BigdataSail$BigdataSailConnection.removeStatements(BigdataSail.java:2865)
>>> at com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate.addOrRemoveStatement(AST2BOpUpdate.java:2054)
>>> at com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate.convertDeleteInsert(AST2BOpUpdate.java:989)
>>> at com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate.convertUpdateSwitch(AST2BOpUpdate.java:417)
>>> at com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate.convertUpdate(AST2BOpUpdate.java:279)
>>> at com.bigdata.rdf.sparql.ast.eval.ASTEvalHelper.executeUpdate(ASTEvalHelper.java:1295)
>>> ... 9 more
>>>
>>> [0] https://travis-ci.org/SemanticMediaWiki/SemanticMediaWiki
>>>
>>> Cheers
|
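For anyone who hits the same "Already assigned ... LiteralExtensionIV" failure: per Bryan's replies quoted above, the fix is to set the inlineDateTimes property to false in the property file that is in effect when the namespace is created. A minimal blazegraph-store.properties sketch; the non-dateTime settings are only typical standalone values taken from property listings elsewhere in this thread:

    # Do not inline xsd:dateTime values into the statement indices.
    com.bigdata.rdf.store.AbstractTripleStore.inlineDateTimes=false

    # Typical standalone journal settings.
    com.bigdata.journal.AbstractJournal.bufferMode=DiskRW
    com.bigdata.journal.AbstractJournal.file=bigdata.jnl
    com.bigdata.rdf.store.AbstractTripleStore.quads=false

The server is then started as in [0] above, e.g. java -server -Xmx256m -Dbigdata.propertyFile=blazegraph-store.properties -jar bigdata-bundled.jar. Per Bryan's note, the property applies when the namespace is created, so a namespace created with the default setting would need to be recreated.
|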
From: Brad B. <be...@sy...> - 2015-05-08 17:03:25
|
All,

Trac will be going down at 1300 ET today for planned maintenance. Please
make any updates now, while you are still able to access it.

Thanks, --Brad

--
_______________
Brad Bebee
Managing Partner
SYSTAP, LLC
e: be...@sy...
m: 202.642.7961
f: 571.367.5000
w: www.systap.com

Blazegraph™ <http://www.blazegraph.com> is our ultra high-performance graph database that supports both RDF/SPARQL and Tinkerpop/Blueprints APIs. MapGraph™ <http://www.systap.com/mapgraph> is our disruptive new technology to use GPUs to accelerate data-parallel graph analytics.
|
From: Bryan T. <br...@sy...> - 2015-05-06 16:25:18
|
You need to select a namespace on the namespace tab of the workbench.

Thanks,
Bryan
----
Bryan Thompson
Chief Scientist & Founder
SYSTAP, LLC
4501 Tower Road
Greensboro, NC 27410
br...@sy...
http://blazegraph.com
http://blog.bigdata.com
http://mapgraph.io

Blazegraph™ <http://www.blazegraph.com/> is our ultra high-performance graph database that supports both RDF/SPARQL and Tinkerpop/Blueprints APIs. MapGraph™ <http://www.systap.com/mapgraph> is our disruptive new technology to use GPUs to accelerate data-parallel graph analytics.

On Wed, May 6, 2015 at 11:32 AM, Alex Jouravlev <al...@bu...> wrote:
> What would that mean?
>
> ERROR: SPARQL-QUERY: queryStr=select * where {?s ?p ?o}
> java.util.concurrent.ExecutionException:
> com.bigdata.rdf.sail.webapp.DatasetNotFoundException: Not found:
> namespace=undefined, timestamp=read-committed
>     at java.util.concurrent.FutureTask.report(Unknown Source)
>     at java.util.concurrent.FutureTask.get(Unknown Source)
>     at com.bigdata.rdf.sail.webapp.BigdataServlet.submitApiTask(BigdataServlet.java:261)
>     at com.bigdata.rdf.sail.webapp.QueryServlet.doSparqlQuery(QueryServlet.java:532)
>     at com.bigdata.rdf.sail.webapp.QueryServlet.doPost(QueryServlet.java:189)
>     at com.bigdata.rdf.sail.webapp.RESTServlet.doPost(RESTServlet.java:237)
>     at com.bigdata.rdf.sail.webapp.MultiTenancyServlet.doPost(MultiTenancyServlet.java:137)
>     at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
>     at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>     at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:769)
>     at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>     at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>     at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>     at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>     at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1125)
>     at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>     at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>     at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1059)
>     at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>     at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>     at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>     at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>     at org.eclipse.jetty.server.Server.handle(Server.java:497)
>     at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
>     at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:248)
>     at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>     at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:610)
>     at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:539)
>     at java.lang.Thread.run(Unknown Source)
> Caused by: com.bigdata.rdf.sail.webapp.DatasetNotFoundException: Not found:
> namespace=undefined, timestamp=read-committed
>     at com.bigdata.rdf.task.AbstractApiTask.getQueryConnection(AbstractApiTask.java:238)
>     at com.bigdata.rdf.task.AbstractApiTask.getQueryConnection(AbstractApiTask.java:216)
>     at com.bigdata.rdf.sail.webapp.QueryServlet$SparqlQueryTask.call(QueryServlet.java:582)
>     at com.bigdata.rdf.sail.webapp.QueryServlet$SparqlQueryTask.call(QueryServlet.java:549)
>     at com.bigdata.rdf.task.ApiTaskForIndexManager.call(ApiTaskForIndexManager.java:68)
>     at java.util.concurrent.FutureTask.run(Unknown Source)
>     at com.bigdata.rdf.task.AbstractApiTask.submitApiTask(AbstractApiTask.java:365)
>     at com.bigdata.rdf.sail.webapp.BigdataServlet.submitApiTask(BigdataServlet.java:258)
>     ... 26 more
>
> Alex Jouravlev
> Director, Business Abstraction Pty Ltd
> Phone: +61-(2)-8003-4830
> Mobile: +61-4-0408-3258
> Web: http://www.businessabstraction.com
> LinkedIn: http://au.linkedin.com/in/alexjouravlev/
|
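The namespace=undefined in the trace above means the request reached the server without a namespace in the URL path; the workbench fills that in from whatever is selected on its namespace tab. A quick sanity check from the command line, assuming the default port (a sketch; adjust host and port to your setup):

    # List the namespaces the server knows about (multi-tenancy API).
    curl http://localhost:9999/bigdata/namespace

    # Query a specific namespace via its own SPARQL endpoint.
    curl http://localhost:9999/bigdata/namespace/kb/sparql \
         --data-urlencode 'query=SELECT * WHERE { ?s ?p ?o } LIMIT 10'
|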
From: Alex J. <al...@bu...> - 2015-05-06 15:55:11
|
What would that mean? ERROR: SPARQL-QUERY: queryStr=select * where {?s ?p ?o} java.util.concurrent.ExecutionException: com.bigdata.rdf.sail.webapp.DatasetNotFoundException: Not found: namespace=undefined, timestamp=read-committed at java.util.concurrent.FutureTask.report(Unknown Source) at java.util.concurrent.FutureTask.get(Unknown Source) at com.bigdata.rdf.sail.webapp.BigdataServlet.submitApiTask(BigdataServlet.java:261) at com.bigdata.rdf.sail.webapp.QueryServlet.doSparqlQuery(QueryServlet.java:532) at com.bigdata.rdf.sail.webapp.QueryServlet.doPost(QueryServlet.java:189) at com.bigdata.rdf.sail.webapp.RESTServlet.doPost(RESTServlet.java:237) at com.bigdata.rdf.sail.webapp.MultiTenancyServlet.doPost(MultiTenancyServlet.java:137) at javax.servlet.http.HttpServlet.service(HttpServlet.java:707) at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:769) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1125) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1059) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215) at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) at org.eclipse.jetty.server.Server.handle(Server.java:497) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:248) at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:610) at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:539) at java.lang.Thread.run(Unknown Source) Caused by: com.bigdata.rdf.sail.webapp.DatasetNotFoundException: Not found: namespace=undefined, timestamp=read-committed at com.bigdata.rdf.task.AbstractApiTask.getQueryConnection(AbstractApiTask.java:238) at com.bigdata.rdf.task.AbstractApiTask.getQueryConnection(AbstractApiTask.java:216) at com.bigdata.rdf.sail.webapp.QueryServlet$SparqlQueryTask.call(QueryServlet.java:582) at com.bigdata.rdf.sail.webapp.QueryServlet$SparqlQueryTask.call(QueryServlet.java:549) at com.bigdata.rdf.task.ApiTaskForIndexManager.call(ApiTaskForIndexManager.java:68) at java.util.concurrent.FutureTask.run(Unknown Source) at com.bigdata.rdf.task.AbstractApiTask.submitApiTask(AbstractApiTask.java:365) at com.bigdata.rdf.sail.webapp.BigdataServlet.submitApiTask(BigdataServlet.java:258) ... 26 more Alex Jouravlev Director, Business Abstraction Pty Ltd Phone: +61-(2)-8003-4830 Mobile: +61-4-0408-3258 Web: http://www.businessabstraction.com LinkedIn: http://au.linkedin.com/in/alexjouravlev/ |
From: Jim B. <ba...@ne...> - 2015-05-05 20:50:39
|
Bryan,

I will try to add something there this week.

Best regards,
Jim

> On Apr 30, 2015, at 5:12 PM, Bryan Thompson <br...@sy...> wrote:
>
> Jim,
>
> Could you create a wiki page for this at wiki.blazegraph.com?
>
> Thanks,
> Bryan
> ----
> Bryan Thompson
> Chief Scientist & Founder
> SYSTAP, LLC
> 4501 Tower Road
> Greensboro, NC 27410
> br...@sy...
> http://blazegraph.com
> http://blog.bigdata.com
> http://mapgraph.io
>
> On Tue, Apr 28, 2015 at 9:49 PM, Jim Balhoff <ba...@ne...> wrote:
>> Hi Kaushik,
>>
>> I do have a reasoner integration I developed that I use with Blazegraph. It is called Owlet:
>>
>> https://github.com/phenoscape/owlet
>>
>> It is an API for SPARQL query expansion by means of an in-memory reasoner. I typically use it with ELK, but you can use any OWL API-based reasoner. It is not directly integrated into Blazegraph, but you can incorporate it into a separate application to pre-expand your query before submitting it to Blazegraph. I also have a web service application called Owlery which provides a query expansion service using Owlet.
>>
>> One thing to keep in mind is that you must have all the data used by the reasoner loaded into memory in an instance of OWLOntology. This works well for me because I typically only need Tbox reasoning—we have large anatomical ontologies. We query the ontology using a complex class expression (e.g. muscles attached to bones in the head), get all the inferred subclasses, and then retrieve instances tagged with any of those classes from Blazegraph. We don't actually need to classify our instance data using our ontology, which is what it looks like you're doing. That will work with Owlet; the only issue is that you will need all your instance data loaded into the reasoner. In my experience the available OWL reasoners are much more scalable for Tbox reasoning vs. giving them a large Abox.
>>
>> Best regards,
>> Jim
>>
>>> On Apr 28, 2015, at 8:04 AM, Bryan Thompson <br...@sy...> wrote:
>>>
>>> Blazegraph does not support OWL out of the box. Jim Balhoff (cc) has an ELK reasoner integration that you could use for this.
>>>
>>> @jim: can you provide some pointers to your work?
>>>
>>> Thanks,
>>> Bryan
>>>
>>> On Tuesday, April 28, 2015, Kaushik Chakraborty <kay...@gm...> wrote:
>>> Hi Brad,
>>>
>>> Thanks a lot for the clarification; it worked as I followed your steps.
>>> However, I'm stuck on a simple piece of OWL reasoning as I extend the same instance data.
>>>
>>> - Here's my updated instance data
>>>
>>> @prefix : <http://kg.cts.com/> .
>>> @prefix foaf: <http://xmlns.com/foaf/0.1/> .
>>> @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
>>> @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
>>> @prefix owl: <http://www.w3.org/2002/07/owl#> .
>>>
>>> @prefix org: <http://www.w3.org/ns/org#> .
>>>
>>> :A rdf:type org:Organization .
>>>
>>> :LeadershipRoles a org:Role .
>>> :AdminRoles a org:Role .
>>>
>>> :ROLE1 a :LeadershipRoles .
>>> :ROLE2 a :LeadershipRoles .
>>> :ROLE3 a :AdminRoles .
>>> >>> >>> :A rdf:type org:Organization . >>> >>> :LeadershipRoles a org:Role . >>> :AdminRoles a org:Role . >>> >>> :ROLE1 a :LeadershipRoles . >>> :ROLE2 a :LeadershipRoles . >>> :ROLE3 a :AdminRoles . >>> >>> :CEOType a org:Membership ; >>> owl:equivalentClass [ >>> a owl:Restriction ; >>> owl:onProperty org:role ; >>> owl:someValuesFrom :LeadershipRoles >>> ] . >>> >>> >>> :139137 rdf:type foaf:Agent ; >>> rdfs:label "139137" ; >>> org:memberOf :A ; >>> org:hasMembership [a org:Membership; org:role :ROLE1,:ROLE3] . >>> >>> :139138 rdf:type foaf:Agent ; >>> rdfs:label "139138" ; >>> org:memberOf :A ; >>> org:hasMembership [a org:Membership; org:role :ROLE3] . >>> >>> >>> - As per the above assertions, I should get :139137 to have :CEOType membership if I make this SPARQL query. But I'm not >>> >>> prefix : <http://kg.cts.com/> >>> prefix foaf: <http://xmlns.com/foaf/0.1/> >>> prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> >>> prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> >>> prefix owl: <http://www.w3.org/2002/07/owl#> >>> prefix org: <http://www.w3.org/ns/org#> >>> >>> select distinct ?who >>> where >>> { >>> ?who org:hasMembership :CEOType . >>> } >>> >>> >>> - However, I am getting right result if I search for exact type i.e. >>> >>> prefix : <http://kg.cts.com/> >>> prefix foaf: <http://xmlns.com/foaf/0.1/> >>> prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> >>> prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> >>> prefix owl: <http://www.w3.org/2002/07/owl#> >>> prefix org: <http://www.w3.org/ns/org#> >>> >>> select distinct ?who >>> where >>> { >>> >>> ?who org:hasMembership/org:role/rdf:type :LeadershipRoles. >>> } >>> >>> RESULT: >>> who >>> <http://kg.cts.com/139137> >>> >>> >>> What am I doing wrong ? >>> Thanks again for your help and patience. >>> >>> On Tue, Apr 28, 2015 at 10:14 AM, Brad Bebee <be...@sy...> wrote: >>> Kaushik, >>> >>> I was able to get your example working using the following steps. I suspect the issue may have been that you had not loaded the ontology data prior to loading the instance data. Please give it a try and let us know if that works. >>> >>> Thanks, --Brad >>> >>> 1. Create a properties file with your properties (owl_test/test.properties) and started a blazegraph workbench instance. >>> >>> java -Xmx2g -Dbigdata.propertyFile=owl_test/test.properties -jar bigdata-bundled-1.5.1.jar >>> >>> 2. Loaded the ontology data in the workbench (http://localhost:9999/bigdata/). Under the "Update" tab, I selected "File Path or URL" and pasted in http://www.w3.org/ns/org.n3. >>> >>> 3. Loaded the instance data. Also in the workbench "Update" tab, I selected "RDF Data" with the format "Turtle". >>> >>> @prefix : <http://kg.cts.com/> . >>> @prefix foaf: <http://xmlns.com/foaf/0.1/> . >>> @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . >>> @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> . >>> @prefix owl: <http://www.w3.org/2002/07/owl#> . >>> >>> @prefix org: <http://www.w3.org/ns/org#> . >>> >>> >>> :A rdf:type org:Organization . >>> >>> :139137 rdf:type foaf:Agent ; >>> rdfs:label "139137" ; >>> org:memberOf :A . >>> >>> 4. Issued the SPARQL Query in the workbench. 
>>> >>> prefix : <http://kg.cts.com/> >>> prefix foaf: <http://xmlns.com/foaf/0.1/> >>> prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> >>> prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> >>> prefix owl: <http://www.w3.org/2002/07/owl#> >>> prefix org: <http://www.w3.org/ns/org#> >>> >>> select distinct ?s ?w >>> where >>> { >>> ?s org:hasMember ?w . >>> } >>> >>> with the expected results. >>> >>> <http://kg.cts.com/A> <http://kg.cts.com/139137> >>> >>> >>> >>> On Mon, Apr 27, 2015 at 9:33 PM, Bryan Thompson <br...@sy...> wrote: >>> Ontologies are just data. Use any approach you would use to load the data. >>> >>> Bryan >>> >>> On Apr 27, 2015 11:04 AM, "Kaushik Chakraborty" <kay...@gm...> wrote: >>> Hi, >>> >>> If I put this RDF data in the Updata panel of NanoSparqlServer: >>> >>> @prefix : <http://kg.cts.com/> . >>> @prefix foaf: <http://xmlns.com/foaf/0.1/> . >>> @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . >>> @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> . >>> @prefix owl: <http://www.w3.org/2002/07/owl#> . >>> >>> @prefix org: <http://www.w3.org/ns/org#> . >>> >>> >>> :A rdf:type org:Organization . >>> >>> :139137 rdf:type foaf:Agent ; >>> rdfs:label "139137" ; >>> org:memberOf :A . >>> >>> And then if I make a query like this >>> >>> prefix : <http://kg.cts.com/> >>> prefix foaf: <http://xmlns.com/foaf/0.1/> >>> prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> >>> prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> >>> prefix owl: <http://www.w3.org/2002/07/owl#> >>> prefix org: <http://www.w3.org/ns/org#> >>> >>> select distinct ?s ?w >>> where >>> { >>> ?s org:hasMember ?w . >>> } >>> >>> It should list <http://kg.cts.com/A> and <http://kg.cts.com/139137> respectively as ?s and ?w as per the Organization Ontology (http://www.w3.org/TR/vocab-org/#org:Organization) >>> >>> But that's not happening. Clearly the inferencing is not working for Org Ontology. >>> >>> Here're my namespace properties: >>> >>> com.bigdata.namespace.kb.spo.com.bigdata.btree.BTree.branchingFactor 1024 >>> com.bigdata.relation.container test-import >>> com.bigdata.journal.AbstractJournal.bufferMode DiskRW >>> com.bigdata.journal.AbstractJournal.file bigdata.jnl >>> com.bigdata.journal.AbstractJournal.initialExtent 209715200 >>> com.bigdata.rdf.store.AbstractTripleStore.vocabularyClass com.bigdata.rdf.vocab.DefaultBigdataVocabulary >>> com.bigdata.rdf.store.AbstractTripleStore.textIndex false >>> com.bigdata.btree.BTree.branchingFactor 128 >>> com.bigdata.namespace.kb.lex.com.bigdata.btree.BTree.branchingFactor 400 >>> com.bigdata.rdf.store.AbstractTripleStore.axiomsClass com.bigdata.rdf.axioms.OwlAxioms >>> com.bigdata.service.AbstractTransactionService.minReleaseAge 1 >>> com.bigdata.rdf.sail.truthMaintenance true >>> com.bigdata.journal.AbstractJournal.maximumExtent 209715200 >>> com.bigdata.rdf.sail.namespace test-import >>> com.bigdata.relation.class com.bigdata.rdf.store.LocalTripleStore >>> com.bigdata.rdf.store.AbstractTripleStore.quads false >>> com.bigdata.relation.namespace test-import >>> com.bigdata.btree.writeRetentionQueue.capacity 4000 >>> com.bigdata.rdf.store.AbstractTripleStore.statementIdentifiers false >>> >>> What am I missing here ? 
>>>
>>> --
>>> _______________
>>> Brad Bebee
>>> Managing Partner
>>> SYSTAP, LLC
>>> e: be...@sy...
>>> w: www.systap.com
>>>
>>> --
>>> Thanks,
>>> Kaushik
>>>
>>> --
>>> ----
>>> Bryan Thompson
>>> Chief Scientist & Founder
>>> SYSTAP, LLC
>>> br...@sy...
>>> http://blazegraph.com
|
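For readers who want to try the pre-expansion pattern Jim describes without pulling in Owlet itself, here is a minimal Java sketch using the OWL API with ELK directly. The ontology location and class IRI are hypothetical placeholders, and Owlet's real API (see the GitHub link above) differs from this:

    import java.util.HashSet;
    import java.util.Set;
    import java.util.stream.Collectors;

    import org.semanticweb.elk.owlapi.ElkReasonerFactory;
    import org.semanticweb.owlapi.apibinding.OWLManager;
    import org.semanticweb.owlapi.model.IRI;
    import org.semanticweb.owlapi.model.OWLClass;
    import org.semanticweb.owlapi.model.OWLDataFactory;
    import org.semanticweb.owlapi.model.OWLOntology;
    import org.semanticweb.owlapi.model.OWLOntologyCreationException;
    import org.semanticweb.owlapi.model.OWLOntologyManager;
    import org.semanticweb.owlapi.reasoner.OWLReasoner;

    public class QueryExpansionSketch {

        public static void main(String[] args) throws OWLOntologyCreationException {
            // Load the Tbox (ontology only, no instance data) into memory.
            OWLOntologyManager mgr = OWLManager.createOWLOntologyManager();
            OWLOntology ontology = mgr.loadOntologyFromOntologyDocument(
                    IRI.create("http://example.org/anatomy.owl")); // hypothetical ontology
            OWLDataFactory df = mgr.getOWLDataFactory();

            // Hypothetical class whose inferred subclasses we want.
            OWLClass muscle = df.getOWLClass(IRI.create("http://example.org/Muscle"));

            // Classify with ELK; collect the class plus all inferred subclasses.
            OWLReasoner reasoner = new ElkReasonerFactory().createReasoner(ontology);
            Set<OWLClass> classes = new HashSet<>(
                    reasoner.getSubClasses(muscle, false).getFlattened());
            classes.add(muscle);
            classes.remove(df.getOWLNothing());
            reasoner.dispose();

            // Inject the inferred classes into the query as a VALUES clause.
            String values = classes.stream()
                    .map(c -> "<" + c.getIRI() + ">")
                    .collect(Collectors.joining(" "));
            String sparql = "SELECT ?x WHERE { VALUES ?type { " + values + " } ?x a ?type }";
            System.out.println(sparql);
        }
    }

The expanded query can then be posted to the Blazegraph SPARQL endpoint like any other query; the reasoner only ever sees the Tbox, which matches Jim's point about Abox scalability.
|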
From: Fredah B <fre...@gm...> - 2015-05-05 09:54:26
|
Dear Brad, Bryan,

Thank you so much for the quick response and the great feedback! The
explanation given is very clear, and I'm even more thankful for the
reference to the architecture. Once again, thank you so much, and I'll be
in touch.

Regards,
Fredah

On Tue, May 5, 2015 at 4:51 AM, Brad Bebee <be...@sy...> wrote:
> Fredah,
>
> To add to Bryan's message, we'd also love to hear about your project and
> the successes you have with it. Let us know as you progress and we can
> feature it on our Blog and/or Wiki. In addition to the open source
> support, we also have a few other options; just drop a line if that's of
> interest to you as well.
>
> Regards, --Brad
>
> On Mon, May 4, 2015 at 8:10 PM, Bryan Thompson <br...@sy...> wrote:
>> Fredah,
>>
>> Please see
>> http://www.blazegraph.com/whitepapers/bigdata_architecture_whitepaper.pdf
>> for a good overview of the platform. The data are stored in indices
>> (B+Trees). Prefix compression is used, as well as a variety of other
>> compression techniques. We then encode the index pages into a compressed
>> representation that can be interpreted without decompression of the page.
>> We do not currently compress the index page records using a block
>> compression strategy. This is possible, but there are other compression
>> techniques that I would implement first.
>>
>> Thanks,
>> Bryan
>> ----
>> Bryan Thompson
>> Chief Scientist & Founder
>> SYSTAP, LLC
>> br...@sy...
>> http://blazegraph.com
>>
>> On Mon, May 4, 2015 at 7:04 PM, Fredah Banda <fre...@ma...> wrote:
>> > Dear Team,
>> >
>> > I plan on using your SPARQL engine for my project implementation. I'm
>> > impressed by the tremendous work you have put in to make this engine a
>> > success; however, I did notice that the underlying infrastructure and
>> > compression techniques used are encapsulated. I need to fully
>> > understand how the data is processed from start to finish, especially
>> > with regard to compression. Are there by any chance papers that cover
>> > the compression and decompression used in your engine, or is it
>> > possible to refer me to someone who can explain it to me?
>> >
>> > Also, is compression on by default, or is it turned on and off
>> > depending on the data load of the system? I was also wondering how you
>> > store the data internally. In what format is the data stored? Is it an
>> > internally created representation or one of the standard RDF
>> > representations?
>> >
>> > I would really appreciate your assistance in answering these questions
>> > and look forward to hearing from you soon.
>> >
>> > Best Regards,
>> >
>> > Fredah
>
> --
> _______________
> Brad Bebee
> Managing Partner
> SYSTAP, LLC
> e: be...@sy...
> w: www.systap.com

--
Best Regards,
Fredah Temwani Banda
|
From: Brad B. <be...@sy...> - 2015-05-05 03:51:56
|
Fredah,

To add to Bryan's message, we'd also love to hear about your project and
the successes you have with it. Let us know as you progress and we can
feature it on our Blog and/or Wiki. In addition to the open source support,
we also have a few other options; just drop a line if that's of interest to
you as well.

Regards, --Brad

On Mon, May 4, 2015 at 8:10 PM, Bryan Thompson <br...@sy...> wrote:
> Fredah,
>
> Please see
> http://www.blazegraph.com/whitepapers/bigdata_architecture_whitepaper.pdf
> for a good overview of the platform. The data are stored in indices
> (B+Trees). Prefix compression is used, as well as a variety of other
> compression techniques. We then encode the index pages into a compressed
> representation that can be interpreted without decompression of the page.
> We do not currently compress the index page records using a block
> compression strategy. This is possible, but there are other compression
> techniques that I would implement first.
>
> Thanks,
> Bryan
> ----
> Bryan Thompson
> Chief Scientist & Founder
> SYSTAP, LLC
> br...@sy...
> http://blazegraph.com
>
> On Mon, May 4, 2015 at 7:04 PM, Fredah Banda <fre...@ma...> wrote:
> > Dear Team,
> >
> > I plan on using your SPARQL engine for my project implementation. I'm
> > impressed by the tremendous work you have put in to make this engine a
> > success; however, I did notice that the underlying infrastructure and
> > compression techniques used are encapsulated. I need to fully understand
> > how the data is processed from start to finish, especially with regard
> > to compression. Are there by any chance papers that cover the
> > compression and decompression used in your engine, or is it possible to
> > refer me to someone who can explain it to me?
> >
> > Also, is compression on by default, or is it turned on and off depending
> > on the data load of the system? I was also wondering how you store the
> > data internally. In what format is the data stored? Is it an internally
> > created representation or one of the standard RDF representations?
> >
> > I would really appreciate your assistance in answering these questions
> > and look forward to hearing from you soon.
> >
> > Best Regards,
> >
> > Fredah

--
_______________
Brad Bebee
Managing Partner
SYSTAP, LLC
e: be...@sy...
m: 202.642.7961
f: 571.367.5000
w: www.systap.com

Blazegraph™ <http://www.blazegraph.com> is our ultra high-performance graph database that supports both RDF/SPARQL and Tinkerpop/Blueprints APIs. MapGraph™ <http://www.systap.com/mapgraph> is our disruptive new technology to use GPUs to accelerate data-parallel graph analytics.
|
From: Bryan T. <br...@sy...> - 2015-05-05 00:10:17
|
Fredah,

Please see
http://www.blazegraph.com/whitepapers/bigdata_architecture_whitepaper.pdf
for a good overview of the platform. The data are stored in indices
(B+Trees). Prefix compression is used, as well as a variety of other
compression techniques. We then encode the index pages into a compressed
representation that can be interpreted without decompression of the page.
We do not currently compress the index page records using a block
compression strategy. This is possible, but there are other compression
techniques that I would implement first.

Thanks,
Bryan
----
Bryan Thompson
Chief Scientist & Founder
SYSTAP, LLC
4501 Tower Road
Greensboro, NC 27410
br...@sy...
http://blazegraph.com
http://blog.bigdata.com
http://mapgraph.io

Blazegraph™ is our ultra high-performance graph database that supports both RDF/SPARQL and Tinkerpop/Blueprints APIs. MapGraph™ is our disruptive new technology to use GPUs to accelerate data-parallel graph analytics.

On Mon, May 4, 2015 at 7:04 PM, Fredah Banda <fre...@ma...> wrote:
> Dear Team,
>
> I plan on using your SPARQL engine for my project implementation. I'm
> impressed by the tremendous work you have put in to make this engine a
> success; however, I did notice that the underlying infrastructure and
> compression techniques used are encapsulated. I need to fully understand
> how the data is processed from start to finish, especially with regard to
> compression. Are there by any chance papers that cover the compression and
> decompression used in your engine, or is it possible to refer me to
> someone who can explain it to me?
>
> Also, is compression on by default, or is it turned on and off depending
> on the data load of the system? I was also wondering how you store the
> data internally. In what format is the data stored? Is it an internally
> created representation or one of the standard RDF representations?
>
> I would really appreciate your assistance in answering these questions and
> look forward to hearing from you soon.
>
> Best Regards,
>
> Fredah
|
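To illustrate what prefix compression buys on RDF data: sorted B+Tree keys share long prefixes, so each key can be stored as the length of the prefix it shares with its predecessor plus the remaining suffix. The following front-coding toy is illustrative only; it is not Blazegraph's actual key encoding, which the whitepaper above describes:

    import java.util.Arrays;
    import java.util.List;

    public class FrontCodingSketch {

        // Store each sorted key as (shared-prefix length, remaining suffix).
        public static void main(String[] args) {
            List<String> sortedKeys = Arrays.asList(
                    "http://example.org/id/139137",
                    "http://example.org/id/139138",
                    "http://example.org/id/Property-3A");
            String prev = "";
            for (String key : sortedKeys) {
                int shared = sharedPrefixLength(prev, key);
                // Prints (0, "http://example.org/id/139137"), (27, "8"), (22, "Property-3A")
                System.out.printf("(%d, \"%s\")%n", shared, key.substring(shared));
                prev = key;
            }
        }

        static int sharedPrefixLength(String a, String b) {
            int n = Math.min(a.length(), b.length());
            int i = 0;
            while (i < n && a.charAt(i) == b.charAt(i)) i++;
            return i;
        }
    }
|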
From: Fredah B. <fre...@ma...> - 2015-05-04 23:30:56
|
Dear Team,

I plan on using your SPARQL engine for my project implementation. I'm
impressed by the tremendous work you have put in to make this engine a
success; however, I did notice that the underlying infrastructure and
compression techniques used are encapsulated. I need to fully understand
how the data is processed from start to finish, especially with regard to
compression. Are there by any chance papers that cover the compression and
decompression used in your engine, or is it possible to refer me to someone
who can explain it to me?

Also, is compression on by default, or is it turned on and off depending on
the data load of the system? I was also wondering how you store the data
internally. In what format is the data stored? Is it an internally created
representation or one of the standard RDF representations?

I would really appreciate your assistance in answering these questions and
look forward to hearing from you soon.

Best Regards,

Fredah
|
From: Bryan T. <br...@sy...> - 2015-04-30 21:12:58
|
Jim,

Could you create a wiki page for this at wiki.blazegraph.com?

Thanks,
Bryan
----
Bryan Thompson
Chief Scientist & Founder
SYSTAP, LLC
4501 Tower Road
Greensboro, NC 27410
br...@sy...
http://blazegraph.com
http://blog.bigdata.com
http://mapgraph.io

Blazegraph™ is our ultra high-performance graph database that supports both RDF/SPARQL and Tinkerpop/Blueprints APIs. MapGraph™ is our disruptive new technology to use GPUs to accelerate data-parallel graph analytics.

On Tue, Apr 28, 2015 at 9:49 PM, Jim Balhoff <ba...@ne...> wrote:
> Hi Kaushik,
>
> I do have a reasoner integration I developed that I use with Blazegraph. It is called Owlet:
>
> https://github.com/phenoscape/owlet
>
> It is an API for SPARQL query expansion by means of an in-memory reasoner. I typically use it with ELK, but you can use any OWL API-based reasoner. It is not directly integrated into Blazegraph, but you can incorporate it into a separate application to pre-expand your query before submitting it to Blazegraph. I also have a web service application called Owlery which provides a query expansion service using Owlet.
>
> One thing to keep in mind is that you must have all the data used by the reasoner loaded into memory in an instance of OWLOntology. This works well for me because I typically only need Tbox reasoning—we have large anatomical ontologies. We query the ontology using a complex class expression (e.g. muscles attached to bones in the head), get all the inferred subclasses, and then retrieve instances tagged with any of those classes from Blazegraph. We don't actually need to classify our instance data using our ontology, which is what it looks like you're doing. That will work with Owlet; the only issue is that you will need all your instance data loaded into the reasoner. In my experience the available OWL reasoners are much more scalable for Tbox reasoning vs. giving them a large Abox.
>
> Best regards,
> Jim
>
>> On Apr 28, 2015, at 8:04 AM, Bryan Thompson <br...@sy...> wrote:
>>
>> Blazegraph does not support OWL out of the box. Jim Balhoff (cc) has an ELK reasoner integration that you could use for this.
>>
>> @jim: can you provide some pointers to your work?
>>
>> Thanks,
>> Bryan
>>
>> On Tuesday, April 28, 2015, Kaushik Chakraborty <kay...@gm...> wrote:
>> Hi Brad,
>>
>> Thanks a lot for the clarification; it worked as I followed your steps.
>> However, I'm stuck on a simple piece of OWL reasoning as I extend the same instance data.
>>
>> - Here's my updated instance data
>>
>> @prefix : <http://kg.cts.com/> .
>> @prefix foaf: <http://xmlns.com/foaf/0.1/> .
>> @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
>> @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
>> @prefix owl: <http://www.w3.org/2002/07/owl#> .
>>
>> @prefix org: <http://www.w3.org/ns/org#> .
>>
>> :A rdf:type org:Organization .
>>
>> :LeadershipRoles a org:Role .
>> :AdminRoles a org:Role .
>>
>> :ROLE1 a :LeadershipRoles .
>> :ROLE2 a :LeadershipRoles .
>> :ROLE3 a :AdminRoles .
>>
>> :CEOType a org:Membership ;
>> owl:equivalentClass [
>> a owl:Restriction ;
>> owl:onProperty org:role ;
>> owl:someValuesFrom :LeadershipRoles
>> ] .
>>
>>
>> :139137 rdf:type foaf:Agent ;
>> rdfs:label "139137" ;
>> org:memberOf :A ;
>> org:hasMembership [a org:Membership; org:role :ROLE1,:ROLE3] .
>>
>> :139138 rdf:type foaf:Agent ;
>> rdfs:label "139138" ;
>> org:memberOf :A ;
>> org:hasMembership [a org:Membership; org:role :ROLE3] .
>>
>>
>> - As per the above assertions, I should get :139137 to have the :CEOType membership if I make this SPARQL query, but I'm not:
>>
>> prefix : <http://kg.cts.com/>
>> prefix foaf: <http://xmlns.com/foaf/0.1/>
>> prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
>> prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>
>> prefix owl: <http://www.w3.org/2002/07/owl#>
>> prefix org: <http://www.w3.org/ns/org#>
>>
>> select distinct ?who
>> where
>> {
>> ?who org:hasMembership :CEOType .
>> }
>>
>>
>> - However, I am getting the right result if I search for the exact type, i.e.
>>
>> prefix : <http://kg.cts.com/>
>> prefix foaf: <http://xmlns.com/foaf/0.1/>
>> prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
>> prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>
>> prefix owl: <http://www.w3.org/2002/07/owl#>
>> prefix org: <http://www.w3.org/ns/org#>
>>
>> select distinct ?who
>> where
>> {
>>
>> ?who org:hasMembership/org:role/rdf:type :LeadershipRoles.
>> }
>>
>> RESULT:
>> who
>> <http://kg.cts.com/139137>
>>
>>
>> What am I doing wrong?
>> Thanks again for your help and patience.
>>
>> On Tue, Apr 28, 2015 at 10:14 AM, Brad Bebee <be...@sy...> wrote:
>> Kaushik,
>>
>> I was able to get your example working using the following steps. I suspect the issue may have been that you had not loaded the ontology data prior to loading the instance data. Please give it a try and let us know if that works.
>>
>> Thanks, --Brad
>>
>> 1. Created a properties file with your properties (owl_test/test.properties) and started a Blazegraph workbench instance.
>>
>> java -Xmx2g -Dbigdata.propertyFile=owl_test/test.properties -jar bigdata-bundled-1.5.1.jar
>>
>> 2. Loaded the ontology data in the workbench (http://localhost:9999/bigdata/). Under the "Update" tab, I selected "File Path or URL" and pasted in http://www.w3.org/ns/org.n3.
>>
>> 3. Loaded the instance data. Also in the workbench "Update" tab, I selected "RDF Data" with the format "Turtle".
>>
>> @prefix : <http://kg.cts.com/> .
>> @prefix foaf: <http://xmlns.com/foaf/0.1/> .
>> @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
>> @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
>> @prefix owl: <http://www.w3.org/2002/07/owl#> .
>>
>> @prefix org: <http://www.w3.org/ns/org#> .
>>
>>
>> :A rdf:type org:Organization .
>>
>> :139137 rdf:type foaf:Agent ;
>> rdfs:label "139137" ;
>> org:memberOf :A .
>>
>> 4. Issued the SPARQL query in the workbench.
>>
>> prefix : <http://kg.cts.com/>
>> prefix foaf: <http://xmlns.com/foaf/0.1/>
>> prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
>> prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>
>> prefix owl: <http://www.w3.org/2002/07/owl#>
>> prefix org: <http://www.w3.org/ns/org#>
>>
>> select distinct ?s ?w
>> where
>> {
>> ?s org:hasMember ?w .
>> }
>>
>> with the expected results.
>>
>> <http://kg.cts.com/A> <http://kg.cts.com/139137>
>>
>>
>>
>> On Mon, Apr 27, 2015 at 9:33 PM, Bryan Thompson <br...@sy...> wrote:
>> Ontologies are just data. Use any approach you would use to load the data. 
>>
>> Bryan
>>
>> On Apr 27, 2015 11:04 AM, "Kaushik Chakraborty" <kay...@gm...> wrote:
>> Hi,
>>
>> If I put this RDF data in the Update panel of NanoSparqlServer:
>>
>> @prefix : <http://kg.cts.com/> .
>> @prefix foaf: <http://xmlns.com/foaf/0.1/> .
>> @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
>> @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
>> @prefix owl: <http://www.w3.org/2002/07/owl#> .
>>
>> @prefix org: <http://www.w3.org/ns/org#> .
>>
>>
>> :A rdf:type org:Organization .
>>
>> :139137 rdf:type foaf:Agent ;
>> rdfs:label "139137" ;
>> org:memberOf :A .
>>
>> And then if I make a query like this:
>>
>> prefix : <http://kg.cts.com/>
>> prefix foaf: <http://xmlns.com/foaf/0.1/>
>> prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
>> prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>
>> prefix owl: <http://www.w3.org/2002/07/owl#>
>> prefix org: <http://www.w3.org/ns/org#>
>>
>> select distinct ?s ?w
>> where
>> {
>> ?s org:hasMember ?w .
>> }
>>
>> It should list <http://kg.cts.com/A> and <http://kg.cts.com/139137> respectively as ?s and ?w, as per the Organization Ontology (http://www.w3.org/TR/vocab-org/#org:Organization)
>>
>> But that's not happening. Clearly the inferencing is not working for the Org Ontology.
>>
>> Here are my namespace properties:
>>
>> com.bigdata.namespace.kb.spo.com.bigdata.btree.BTree.branchingFactor 1024
>> com.bigdata.relation.container test-import
>> com.bigdata.journal.AbstractJournal.bufferMode DiskRW
>> com.bigdata.journal.AbstractJournal.file bigdata.jnl
>> com.bigdata.journal.AbstractJournal.initialExtent 209715200
>> com.bigdata.rdf.store.AbstractTripleStore.vocabularyClass com.bigdata.rdf.vocab.DefaultBigdataVocabulary
>> com.bigdata.rdf.store.AbstractTripleStore.textIndex false
>> com.bigdata.btree.BTree.branchingFactor 128
>> com.bigdata.namespace.kb.lex.com.bigdata.btree.BTree.branchingFactor 400
>> com.bigdata.rdf.store.AbstractTripleStore.axiomsClass com.bigdata.rdf.axioms.OwlAxioms
>> com.bigdata.service.AbstractTransactionService.minReleaseAge 1
>> com.bigdata.rdf.sail.truthMaintenance true
>> com.bigdata.journal.AbstractJournal.maximumExtent 209715200
>> com.bigdata.rdf.sail.namespace test-import
>> com.bigdata.relation.class com.bigdata.rdf.store.LocalTripleStore
>> com.bigdata.rdf.store.AbstractTripleStore.quads false
>> com.bigdata.relation.namespace test-import
>> com.bigdata.btree.writeRetentionQueue.capacity 4000
>> com.bigdata.rdf.store.AbstractTripleStore.statementIdentifiers false
>>
>> What am I missing here?
>>
>>
>> ------------------------------------------------------------------------------
>> One dashboard for servers and applications across Physical-Virtual-Cloud
>> Widest out-of-the-box monitoring support with 50+ applications
>> Performance metrics, stats and reports that give you Actionable Insights
>> Deep dive visibility with transaction tracing using APM Insight.
>> http://ad.doubleclick.net/ddm/clk/290420510;117567292;y
>> _______________________________________________
>> Bigdata-developers mailing list
>> Big...@li...
>> https://lists.sourceforge.net/lists/listinfo/bigdata-developers
>>
>>
>> ------------------------------------------------------------------------------
>> One dashboard for servers and applications across Physical-Virtual-Cloud
>> Widest out-of-the-box monitoring support with 50+ applications
>> Performance metrics, stats and reports that give you Actionable Insights
>> Deep dive visibility with transaction tracing using APM Insight. 
>> http://ad.doubleclick.net/ddm/clk/290420510;117567292;y >> _______________________________________________ >> Bigdata-developers mailing list >> Big...@li... >> https://lists.sourceforge.net/lists/listinfo/bigdata-developers >> >> >> >> >> -- >> _______________ >> Brad Bebee >> Managing Partner >> SYSTAP, LLC >> e: be...@sy... >> m: 202.642.7961 >> f: 571.367.5000 >> w: www.systap.com >> >> Blazegraph™ is our ultra high-performance graph database that supports both RDF/SPARQL and Tinkerpop/Blueprints APIs. MapGraph™ is our disruptive new technology to use GPUs to accelerate data-parallel graph analytics. >> >> CONFIDENTIALITY NOTICE: This email and its contents and attachments are for the sole use of the intended recipient(s) and are confidential or proprietary to SYSTAP, LLC. Any unauthorized review, use, disclosure, dissemination or copying of this email or its contents or attachments is prohibited. If you have received this communication in error, please notify the sender by reply email and permanently delete all copies of the email and its contents and attachments. > >> >> >> >> >> -- >> Thanks, >> Kaushik >> >> >> -- >> ---- >> Bryan Thompson >> Chief Scientist & Founder >> SYSTAP, LLC >> 4501 Tower Road >> Greensboro, NC 27410 >> br...@sy... >> http://blazegraph.com >> http://blog.bigdata.com >> http://mapgraph.io >> Blazegraph™ is our ultra high-performance graph database that supports both RDF/SPARQL and Tinkerpop/Blueprints APIs. MapGraph™ is our disruptive new technology to use GPUs to accelerate data-parallel graph analytics. >> >> CONFIDENTIALITY NOTICE: This email and its contents and attachments are for the sole use of the intended recipient(s) and are confidential or proprietary to SYSTAP. Any unauthorized review, use, disclosure, dissemination or copying of this email or its contents or attachments is prohibited. If you have received this communication in error, please notify the sender by reply email and permanently delete all copies of the email and its contents and attachments. > >> >> > |
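For readers who want to reproduce Brad's workbench steps programmatically rather than through the UI, the following is a minimal sketch using the Sesame API bundled with Blazegraph 1.5.x. Brad's hypothesis above is that the ontology must be loaded before the instance data, and the sketch simply encodes that ordering. The property set mirrors the inference-enabled triples configuration quoted in this thread; the file name instance.ttl is a placeholder for your own data, so treat this as a sketch rather than a canonical recipe.

import java.io.File;
import java.net.URL;
import java.util.Properties;

import org.openrdf.repository.RepositoryConnection;
import org.openrdf.rio.RDFFormat;

import com.bigdata.rdf.sail.BigdataSail;
import com.bigdata.rdf.sail.BigdataSailRepository;

public class LoadOntologyThenInstances {
    public static void main(final String[] args) throws Exception {
        final Properties props = new Properties();
        // Inference-enabled triples mode, as in the namespace properties above.
        props.setProperty("com.bigdata.journal.AbstractJournal.file", "bigdata.jnl");
        props.setProperty("com.bigdata.rdf.store.AbstractTripleStore.quads", "false");
        props.setProperty("com.bigdata.rdf.store.AbstractTripleStore.axiomsClass",
                "com.bigdata.rdf.axioms.OwlAxioms");
        props.setProperty("com.bigdata.rdf.sail.truthMaintenance", "true");

        final BigdataSail sail = new BigdataSail(props);
        final BigdataSailRepository repo = new BigdataSailRepository(sail);
        repo.initialize();
        final RepositoryConnection cxn = repo.getConnection();
        try {
            // Load the ontology first, so its axioms are in the store ...
            cxn.add(new URL("http://www.w3.org/ns/org.n3"),
                    "http://www.w3.org/ns/org#", RDFFormat.N3);
            // ... then load the instance data.
            cxn.add(new File("instance.ttl"), "http://kg.cts.com/", RDFFormat.TURTLE);
            cxn.commit();
        } finally {
            cxn.close();
            repo.shutDown();
        }
    }
}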
From: Mike P. <mi...@sy...> - 2015-04-30 18:03:18
|
Be aware that not only is it not order-preserving, it also will not keep duplicates:

s.p = [o1, o2, o2]

->

<s> <p> "o1" .
<s> <p> "o2" .

On Thu, Apr 30, 2015 at 11:43 AM, Jack Park <jac...@gm...> wrote:
> Thanks!
> I asked because I got an IllegalArgumentException at the toLiteral method.
> What I shall do is instrument that section of code more closely to see if
> I was sending in something that should not have been sent.
>
> I do not think, at the moment, that order-preserving is necessary, though
> I can see where it would be useful.
>
> Jack
>
> On Thu, Apr 30, 2015 at 10:06 AM, Mike Personick <mi...@sy...> wrote:
>
>> Jack,
>>
>> Lists get unrolled into individual statements:
>>
>> s.p = [o1, o2]
>>
>> ->
>>
>> <s> <p> "o1" .
>> <s> <p> "o2" .
>>
>> Unfortunately not order-preserving right now, but I am working on a
>> solution for that in the upcoming sprint (next two weeks).
>>
>> -Mike
>>
>>
>> On Wed, Apr 29, 2015 at 6:06 PM, Jack Park <jac...@gm...> wrote:
>>
>>> New question.
>>> Some of my properties are List<String>, not one of the Literal types in
>>> DefaultBlueprintsValueFactory.
>>>
>>> What to do about that?
>>>
>>> Many thanks in advance.
>>> Jack
>>>
>>>
>>> ------------------------------------------------------------------------------
>>> One dashboard for servers and applications across Physical-Virtual-Cloud
>>> Widest out-of-the-box monitoring support with 50+ applications
>>> Performance metrics, stats and reports that give you Actionable Insights
>>> Deep dive visibility with transaction tracing using APM Insight.
>>> http://ad.doubleclick.net/ddm/clk/290420510;117567292;y
>>> _______________________________________________
>>> Bigdata-developers mailing list
>>> Big...@li...
>>> https://lists.sourceforge.net/lists/listinfo/bigdata-developers
>>>
>>>
>>
> |
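For anyone hitting the same IllegalArgumentException from toLiteral with List-valued properties, one client-side workaround consistent with the unrolling semantics Mike describes is to unroll the list yourself into distinct single-valued keys before handing the values to the graph. This is only a sketch against the Blueprints 2.x Vertex API; the zero-padded key-suffix scheme is an illustration of one possible encoding, not something BigdataGraph does for you.

import java.util.List;

import com.tinkerpop.blueprints.Vertex;

public class ListProperties {

    // Stores each element under its own key so that order survives and
    // duplicates stay distinct, unlike the automatic unrolling described above.
    public static void setOrderedList(final Vertex v, final String key,
            final List<String> values) {
        for (int i = 0; i < values.size(); i++) {
            v.setProperty(String.format("%s_%04d", key, i), values.get(i));
        }
    }
}

For example, setOrderedList(v, "alias", Arrays.asList("o1", "o2", "o2")) writes alias_0000, alias_0001 and alias_0002, whereas setting the list directly would collapse to the two unordered statements shown above.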
From: Jack P. <jac...@gm...> - 2015-04-30 17:43:38
|
Thanks! I asked because I got an IllegalArgumentException at the toLiteral method. What I shall do is instrument that section of code more closely to see if I was sending in something that should not have been sent. I do not think, at the moment, that order-preserving is necessary, though I can see where it would be useful. Jack On Thu, Apr 30, 2015 at 10:06 AM, Mike Personick <mi...@sy...> wrote: > Jack, > > Lists get unrolled into individual statements: > > s.p = [o1, o2] > > -> > > <s> <p> "o1" . > <s> <p> "o2" . > > Unfortunately not order-preserving right now, but I am working on a > solution for that in the upcoming sprint (next two weeks). > > -Mike > > > On Wed, Apr 29, 2015 at 6:06 PM, Jack Park <jac...@gm...> wrote: > >> New question. >> Some of my properties are List<String>, not one of the Literal types in >> DefaultBlueprintsValueFactory. >> >> What to do about that? >> >> Many thanks in advance. >> Jack >> >> >> ------------------------------------------------------------------------------ >> One dashboard for servers and applications across Physical-Virtual-Cloud >> Widest out-of-the-box monitoring support with 50+ applications >> Performance metrics, stats and reports that give you Actionable Insights >> Deep dive visibility with transaction tracing using APM Insight. >> http://ad.doubleclick.net/ddm/clk/290420510;117567292;y >> _______________________________________________ >> Bigdata-developers mailing list >> Big...@li... >> https://lists.sourceforge.net/lists/listinfo/bigdata-developers >> >> > |
From: Mike P. <mi...@sy...> - 2015-04-30 17:06:19
|
Jack,

Lists get unrolled into individual statements:

s.p = [o1, o2]

->

<s> <p> "o1" .
<s> <p> "o2" .

Unfortunately not order-preserving right now, but I am working on a solution for that in the upcoming sprint (next two weeks).

-Mike

On Wed, Apr 29, 2015 at 6:06 PM, Jack Park <jac...@gm...> wrote:
> New question.
> Some of my properties are List<String>, not one of the Literal types in
> DefaultBlueprintsValueFactory.
>
> What to do about that?
>
> Many thanks in advance.
> Jack
>
>
> ------------------------------------------------------------------------------
> One dashboard for servers and applications across Physical-Virtual-Cloud
> Widest out-of-the-box monitoring support with 50+ applications
> Performance metrics, stats and reports that give you Actionable Insights
> Deep dive visibility with transaction tracing using APM Insight.
> http://ad.doubleclick.net/ddm/clk/290420510;117567292;y
> _______________________________________________
> Bigdata-developers mailing list
> Big...@li...
> https://lists.sourceforge.net/lists/listinfo/bigdata-developers
>
> |
From: Bryan T. <br...@sy...> - 2015-04-30 16:44:55
|
Here is the relevant documentation from the code. To turn this off, include the following in your property file:

com.bigdata.rdf.store.AbstractTripleStore.inlineDateTimes=false

/**
 * Set up database to inline date/times directly into the statement
 * indices rather than using the lexicon to map them to term identifiers
 * and back (default {@value #DEFAULT_INLINE_DATE_TIMES}). Date times
 * will be converted to UTC, then stored as milliseconds since the
 * epoch. Thus if you inline date/times you will lose the canonical
 * representation of the date/time. This has two consequences: (1) you
 * will not be able to recover the original time zone of the date/time;
 * and (2) greater than millisecond precision will be lost.
 *
 * @see #INLINE_DATE_TIMES_TIMEZONE
 */
String INLINE_DATE_TIMES = AbstractTripleStore.class.getName() + ".inlineDateTimes";

String DEFAULT_INLINE_DATE_TIMES = "true";

---- Bryan Thompson Chief Scientist & Founder SYSTAP, LLC 4501 Tower Road Greensboro, NC 27410 br...@sy... http://blazegraph.com http://blog.bigdata.com http://mapgraph.io Blazegraph™ is our ultra high-performance graph database that supports both RDF/SPARQL and Tinkerpop/Blueprints APIs. MapGraph™ is our disruptive new technology to use GPUs to accelerate data-parallel graph analytics. CONFIDENTIALITY NOTICE: This email and its contents and attachments are for the sole use of the intended recipient(s) and are confidential or proprietary to SYSTAP. Any unauthorized review, use, disclosure, dissemination or copying of this email or its contents or attachments is prohibited. If you have received this communication in error, please notify the sender by reply email and permanently delete all copies of the email and its contents and attachments. On Thu, Apr 30, 2015 at 12:34 PM, Bryan Thompson <br...@sy...> wrote:
> I am not at a computer now.. see AbstractTripleStore INLINE_DATE_TIMES. You
> need to turn off that property when creating the namespace or in the RWStore
> file in WEB-INF.
>
> Thanks,
> Bryan
>
> On Apr 30, 2015 7:30 AM, "James HK" <jam...@gm...> wrote:
>>
>> Hi,
>>
>> When trying to run our unit test suite [0] against a preliminary/local
>> Blazegraph 1.5.1 instance the
>> following error appeared during the test run (using a vanilla
>> Blazegraph with the standard kb namespace) .
>>
>> Our test suite (and the test mentioned) is run/tested against Virtuoso
>> 6.1/Fuseki 1.1.1/Sesame 2.7.14 [0] on Travis-CI therefore it is
>> unlikely an issue on our side. 
>> >> 1) >> SMW\Tests\Integration\MediaWiki\Import\TimeDataTypeTest::testImportOfDifferentDateWithAssortmentOfOutputConversion >> SMW\SPARQLStore\Exception\BadHttpDatabaseResponseException: A SPARQL >> query error has occurred >> Query: >> PREFIX wiki: <http://example.org/id/> >> PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> >> PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> >> PREFIX owl: <http://www.w3.org/2002/07/owl#> >> PREFIX swivt: <http://semantic-mediawiki.org/swivt/1.0#> >> PREFIX property: <http://example.org/id/Property-3A> >> PREFIX xsd: <http://www.w3.org/2001/XMLSchema#> >> DELETE { wiki:TimeDataTypeRegressionTest ?p ?o } WHERE { >> wiki:TimeDataTypeRegressionTest ?p ?o } >> Error: Query refused >> Endpoint: http://192.168.1.104:9999/bigdata/namespace/kb/sparql >> HTTP response code: 500 >> >> This translates into an error on the Blazegraph side with (output from >> http://localhost:9999/bigdata/#update): >> >> ERROR: SPARQL-UPDATE: updateStr=PREFIX wiki: >> PREFIX rdf: >> PREFIX rdfs: >> PREFIX owl: >> PREFIX swivt: >> PREFIX property: >> PREFIX xsd: >> DELETE { wiki:TimeDataTypeRegressionTest ?p ?o } WHERE { >> wiki:TimeDataTypeRegressionTest ?p ?o } >> java.util.concurrent.ExecutionException: >> java.util.concurrent.ExecutionException: >> org.openrdf.query.UpdateExecutionException: >> java.lang.IllegalStateException: Already assigned: >> old=LiteralExtensionIV [delegate=XSDLong(6017484188943806464), >> datatype=Vocab(-42)], new=LiteralExtensionIV >> [delegate=XSDLong(204552172800000), datatype=Vocab(-42)], this: >> "8452"^^ >> at java.util.concurrent.FutureTask.report(FutureTask.java:122) >> at java.util.concurrent.FutureTask.get(FutureTask.java:188) >> at >> com.bigdata.rdf.sail.webapp.BigdataServlet.submitApiTask(BigdataServlet.java:261) >> at >> com.bigdata.rdf.sail.webapp.QueryServlet.doSparqlUpdate(QueryServlet.java:359) >> at >> com.bigdata.rdf.sail.webapp.QueryServlet.doPost(QueryServlet.java:165) >> at >> com.bigdata.rdf.sail.webapp.RESTServlet.doPost(RESTServlet.java:237) >> at >> com.bigdata.rdf.sail.webapp.MultiTenancyServlet.doPost(MultiTenancyServlet.java:137) >> at javax.servlet.http.HttpServlet.service(HttpServlet.java:707) >> at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) >> at >> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:769) >> at >> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585) >> at >> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) >> at >> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577) >> at >> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223) >> at >> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1125) >> at >> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515) >> at >> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) >> at >> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1059) >> at >> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) >> at >> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215) >> at >> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110) >> at >> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) >> at org.eclipse.jetty.server.Server.handle(Server.java:497) >> at >> 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311) >> at >> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:248) >> at >> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) >> at >> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:610) >> at >> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:539) >> at java.lang.Thread.run(Thread.java:744) >> Caused by: java.util.concurrent.ExecutionException: >> org.openrdf.query.UpdateExecutionException: >> java.lang.IllegalStateException: Already assigned: >> old=LiteralExtensionIV [delegate=XSDLong(6017484188943806464), >> datatype=Vocab(-42)], new=LiteralExtensionIV >> [delegate=XSDLong(204552172800000), datatype=Vocab(-42)], this: >> "8452"^^ >> at java.util.concurrent.FutureTask.report(FutureTask.java:122) >> at java.util.concurrent.FutureTask.get(FutureTask.java:188) >> at >> com.bigdata.rdf.sail.webapp.QueryServlet$SparqlUpdateTask.call(QueryServlet.java:460) >> at >> com.bigdata.rdf.sail.webapp.QueryServlet$SparqlUpdateTask.call(QueryServlet.java:371) >> at >> com.bigdata.rdf.task.ApiTaskForIndexManager.call(ApiTaskForIndexManager.java:68) >> at java.util.concurrent.FutureTask.run(FutureTask.java:262) >> at >> com.bigdata.rdf.task.AbstractApiTask.submitApiTask(AbstractApiTask.java:365) >> at >> com.bigdata.rdf.sail.webapp.BigdataServlet.submitApiTask(BigdataServlet.java:258) >> ... 26 more >> Caused by: org.openrdf.query.UpdateExecutionException: >> java.lang.IllegalStateException: Already assigned: >> old=LiteralExtensionIV [delegate=XSDLong(6017484188943806464), >> datatype=Vocab(-42)], new=LiteralExtensionIV >> [delegate=XSDLong(204552172800000), datatype=Vocab(-42)], this: >> "8452"^^ >> at >> com.bigdata.rdf.sparql.ast.eval.ASTEvalHelper.executeUpdate(ASTEvalHelper.java:1303) >> at >> com.bigdata.rdf.sail.BigdataSailUpdate.execute2(BigdataSailUpdate.java:152) >> at >> com.bigdata.rdf.sail.webapp.BigdataRDFContext$UpdateTask.doQuery(BigdataRDFContext.java:1683) >> at >> com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.innerCall(BigdataRDFContext.java:1310) >> at >> com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:1275) >> at >> com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:517) >> at java.util.concurrent.FutureTask.run(FutureTask.java:262) >> at >> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) >> at >> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) >> ... 
1 more >> Caused by: java.lang.IllegalStateException: Already assigned: >> old=LiteralExtensionIV [delegate=XSDLong(6017484188943806464), >> datatype=Vocab(-42)], new=LiteralExtensionIV >> [delegate=XSDLong(204552172800000), datatype=Vocab(-42)], this: >> "8452"^^ >> at >> com.bigdata.rdf.model.BigdataValueImpl.setIV(BigdataValueImpl.java:139) >> at >> com.bigdata.rdf.internal.LexiconConfiguration.createInlineIV(LexiconConfiguration.java:430) >> at >> com.bigdata.rdf.lexicon.LexiconRelation.getInlineIV(LexiconRelation.java:3150) >> at >> com.bigdata.rdf.lexicon.LexiconRelation.addTerms(LexiconRelation.java:1719) >> at >> com.bigdata.rdf.store.AbstractTripleStore.getAccessPath(AbstractTripleStore.java:2928) >> at >> com.bigdata.rdf.store.AbstractTripleStore.getAccessPath(AbstractTripleStore.java:2874) >> at >> com.bigdata.rdf.sail.BigdataSail$BigdataSailConnection.removeStatements(BigdataSail.java:2962) >> at >> com.bigdata.rdf.sail.BigdataSail$BigdataSailConnection.removeStatements(BigdataSail.java:2865) >> at >> com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate.addOrRemoveStatement(AST2BOpUpdate.java:2054) >> at >> com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate.convertDeleteInsert(AST2BOpUpdate.java:989) >> at >> com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate.convertUpdateSwitch(AST2BOpUpdate.java:417) >> at >> com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate.convertUpdate(AST2BOpUpdate.java:279) >> at >> com.bigdata.rdf.sparql.ast.eval.ASTEvalHelper.executeUpdate(ASTEvalHelper.java:1295) >> ... 9 more >> >> [0] https://travis-ci.org/SemanticMediaWiki/SemanticMediaWiki >> >> Cheers >> >> >> ------------------------------------------------------------------------------ >> One dashboard for servers and applications across Physical-Virtual-Cloud >> Widest out-of-the-box monitoring support with 50+ applications >> Performance metrics, stats and reports that give you Actionable Insights >> Deep dive visibility with transaction tracing using APM Insight. >> http://ad.doubleclick.net/ddm/clk/290420510;117567292;y >> _______________________________________________ >> Bigdata-developers mailing list >> Big...@li... >> https://lists.sourceforge.net/lists/listinfo/bigdata-developers |
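Putting Bryan's two messages together: inlining changes how date/time values are physically written into the statement indices, and the property is read when the namespace (or journal) is created, which is why he says to set it at creation time. Below is a minimal sketch of an embedded configuration with inlining disabled; the journal file name is a placeholder, and with the bundled server you would instead add the same line to the RWStore properties file in WEB-INF mentioned above.

import java.util.Properties;

import com.bigdata.rdf.sail.BigdataSail;

public class NoInlineDateTimes {
    public static void main(final String[] args) throws Exception {
        final Properties props = new Properties();
        props.setProperty("com.bigdata.journal.AbstractJournal.file", "bigdata.jnl");
        // Per the documentation quoted above: with inlining disabled, extreme
        // dates such as "2147483647 BC" go through the lexicon rather than
        // being inlined as milliseconds since the epoch.
        props.setProperty(
                "com.bigdata.rdf.store.AbstractTripleStore.inlineDateTimes",
                "false");

        final BigdataSail sail = new BigdataSail(props);
        sail.initialize();
        try {
            // ... load data and run queries as usual ...
        } finally {
            sail.shutDown();
        }
    }
}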
From: Bryan T. <br...@sy...> - 2015-04-30 16:34:26
|
I am not at a computer now.. see AbstractTripleStore INLINE_DATE_TIMES. You need to turn off that property when creating the namespace or in the RWStore file in WEB-INF. Thanks, Bryan On Apr 30, 2015 7:30 AM, "James HK" <jam...@gm...> wrote: > Hi, > > When trying to run our unit test suite [0] against a preliminary/local > Blazegraph 1.5.1 instance the > following error appeared during the test run (using a vanilla > Blazegraph with the standard kb namespace) . > > Our test suite (and the test mentioned) is run/tested against Virtuoso > 6.1/Fuseki 1.1.1/Sesame 2.7.14 [0] on Travis-CI therefore it is > unlikely an issue on our side. > > 1) > SMW\Tests\Integration\MediaWiki\Import\TimeDataTypeTest::testImportOfDifferentDateWithAssortmentOfOutputConversion > SMW\SPARQLStore\Exception\BadHttpDatabaseResponseException: A SPARQL > query error has occurred > Query: > PREFIX wiki: <http://example.org/id/> > PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> > PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> > PREFIX owl: <http://www.w3.org/2002/07/owl#> > PREFIX swivt: <http://semantic-mediawiki.org/swivt/1.0#> > PREFIX property: <http://example.org/id/Property-3A> > PREFIX xsd: <http://www.w3.org/2001/XMLSchema#> > DELETE { wiki:TimeDataTypeRegressionTest ?p ?o } WHERE { > wiki:TimeDataTypeRegressionTest ?p ?o } > Error: Query refused > Endpoint: http://192.168.1.104:9999/bigdata/namespace/kb/sparql > HTTP response code: 500 > > This translates into an error on the Blazegraph side with (output from > http://localhost:9999/bigdata/#update): > > ERROR: SPARQL-UPDATE: updateStr=PREFIX wiki: > PREFIX rdf: > PREFIX rdfs: > PREFIX owl: > PREFIX swivt: > PREFIX property: > PREFIX xsd: > DELETE { wiki:TimeDataTypeRegressionTest ?p ?o } WHERE { > wiki:TimeDataTypeRegressionTest ?p ?o } > java.util.concurrent.ExecutionException: > java.util.concurrent.ExecutionException: > org.openrdf.query.UpdateExecutionException: > java.lang.IllegalStateException: Already assigned: > old=LiteralExtensionIV [delegate=XSDLong(6017484188943806464), > datatype=Vocab(-42)], new=LiteralExtensionIV > [delegate=XSDLong(204552172800000), datatype=Vocab(-42)], this: > "8452"^^ > at java.util.concurrent.FutureTask.report(FutureTask.java:122) > at java.util.concurrent.FutureTask.get(FutureTask.java:188) > at > com.bigdata.rdf.sail.webapp.BigdataServlet.submitApiTask(BigdataServlet.java:261) > at > com.bigdata.rdf.sail.webapp.QueryServlet.doSparqlUpdate(QueryServlet.java:359) > at > com.bigdata.rdf.sail.webapp.QueryServlet.doPost(QueryServlet.java:165) > at > com.bigdata.rdf.sail.webapp.RESTServlet.doPost(RESTServlet.java:237) > at > com.bigdata.rdf.sail.webapp.MultiTenancyServlet.doPost(MultiTenancyServlet.java:137) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:707) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) > at > org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:769) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1125) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1059) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215) > at > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) > at org.eclipse.jetty.server.Server.handle(Server.java:497) > at > org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311) > at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:248) > at > org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) > at > org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:610) > at > org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:539) > at java.lang.Thread.run(Thread.java:744) > Caused by: java.util.concurrent.ExecutionException: > org.openrdf.query.UpdateExecutionException: > java.lang.IllegalStateException: Already assigned: > old=LiteralExtensionIV [delegate=XSDLong(6017484188943806464), > datatype=Vocab(-42)], new=LiteralExtensionIV > [delegate=XSDLong(204552172800000), datatype=Vocab(-42)], this: > "8452"^^ > at java.util.concurrent.FutureTask.report(FutureTask.java:122) > at java.util.concurrent.FutureTask.get(FutureTask.java:188) > at > com.bigdata.rdf.sail.webapp.QueryServlet$SparqlUpdateTask.call(QueryServlet.java:460) > at > com.bigdata.rdf.sail.webapp.QueryServlet$SparqlUpdateTask.call(QueryServlet.java:371) > at > com.bigdata.rdf.task.ApiTaskForIndexManager.call(ApiTaskForIndexManager.java:68) > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > at > com.bigdata.rdf.task.AbstractApiTask.submitApiTask(AbstractApiTask.java:365) > at > com.bigdata.rdf.sail.webapp.BigdataServlet.submitApiTask(BigdataServlet.java:258) > ... 26 more > Caused by: org.openrdf.query.UpdateExecutionException: > java.lang.IllegalStateException: Already assigned: > old=LiteralExtensionIV [delegate=XSDLong(6017484188943806464), > datatype=Vocab(-42)], new=LiteralExtensionIV > [delegate=XSDLong(204552172800000), datatype=Vocab(-42)], this: > "8452"^^ > at > com.bigdata.rdf.sparql.ast.eval.ASTEvalHelper.executeUpdate(ASTEvalHelper.java:1303) > at > com.bigdata.rdf.sail.BigdataSailUpdate.execute2(BigdataSailUpdate.java:152) > at > com.bigdata.rdf.sail.webapp.BigdataRDFContext$UpdateTask.doQuery(BigdataRDFContext.java:1683) > at > com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.innerCall(BigdataRDFContext.java:1310) > at > com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:1275) > at > com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:517) > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > ... 
1 more > Caused by: java.lang.IllegalStateException: Already assigned: > old=LiteralExtensionIV [delegate=XSDLong(6017484188943806464), > datatype=Vocab(-42)], new=LiteralExtensionIV > [delegate=XSDLong(204552172800000), datatype=Vocab(-42)], this: > "8452"^^ > at > com.bigdata.rdf.model.BigdataValueImpl.setIV(BigdataValueImpl.java:139) > at > com.bigdata.rdf.internal.LexiconConfiguration.createInlineIV(LexiconConfiguration.java:430) > at > com.bigdata.rdf.lexicon.LexiconRelation.getInlineIV(LexiconRelation.java:3150) > at > com.bigdata.rdf.lexicon.LexiconRelation.addTerms(LexiconRelation.java:1719) > at > com.bigdata.rdf.store.AbstractTripleStore.getAccessPath(AbstractTripleStore.java:2928) > at > com.bigdata.rdf.store.AbstractTripleStore.getAccessPath(AbstractTripleStore.java:2874) > at > com.bigdata.rdf.sail.BigdataSail$BigdataSailConnection.removeStatements(BigdataSail.java:2962) > at > com.bigdata.rdf.sail.BigdataSail$BigdataSailConnection.removeStatements(BigdataSail.java:2865) > at > com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate.addOrRemoveStatement(AST2BOpUpdate.java:2054) > at > com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate.convertDeleteInsert(AST2BOpUpdate.java:989) > at > com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate.convertUpdateSwitch(AST2BOpUpdate.java:417) > at > com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate.convertUpdate(AST2BOpUpdate.java:279) > at > com.bigdata.rdf.sparql.ast.eval.ASTEvalHelper.executeUpdate(ASTEvalHelper.java:1295) > ... 9 more > > [0] https://travis-ci.org/SemanticMediaWiki/SemanticMediaWiki > > Cheers > > > ------------------------------------------------------------------------------ > One dashboard for servers and applications across Physical-Virtual-Cloud > Widest out-of-the-box monitoring support with 50+ applications > Performance metrics, stats and reports that give you Actionable Insights > Deep dive visibility with transaction tracing using APM Insight. > http://ad.doubleclick.net/ddm/clk/290420510;117567292;y > _______________________________________________ > Bigdata-developers mailing list > Big...@li... > https://lists.sourceforge.net/lists/listinfo/bigdata-developers > |
From: Bryan T. <br...@sy...> - 2015-04-30 16:21:49
|
Well, as a workaround you could turn off date/time inlining. If you have extreme dates, that should fix the problem. Wikimedia is working on support for their own dates, which include significant extremes and error bounds. Bryan On Apr 30, 2015 9:11 AM, "James HK" <jam...@gm...> wrote:
> Hi,
>
> > That is an interesting exception. The UPDATE request runs fine for me
> > locally against a quads mode namespace.
>
> It is a Blazegraph instance with the kb namespace in triple mode.
>
> > Is this error stochastic?
>
> No, it always fails for the same test with the same error.
>
> > you using the same namespace throughout your integration tests? That
>
> Yes, it is defined once and used for the whole suite (same as with the
> Sesame repository).
>
> > is, are there side-effects on the database or is it being cleared down
> > between each test? (Dropping the namespace or creating a new database
> > instance?) Are there other mutation operations that have already run
>
> It is the same Blazegraph instance that is used for each test, but data
> sets are removed after each test.
>
> > instance?) Are there other mutation operations that have already run
> > in this specific test?
>
> Not that I'm aware of. It could be that a previous test with its data
> creates a side effect, but as I mentioned earlier there hasn't been any
> issue before.
>
> > It would be best if you can provide something that replicates this
> > problem so we can get it into our regression test suite. This will
> > also help to root cause the issue.
>
> Before making an effort to set up Travis, I wanted to see how well it
> runs compared with the
> other vendors. My hope was that someone had seen this issue before, or
> something similar that would cause it.
>
> The test contains property value assignments with time values of 1
> January 300 BC and 2147483647 BC, which at least gives Virtuoso trouble
> on the "1 January 300 BC" date.
>
> All I can provide is the SPARQL update sequence [0] for the test until
> it fails when run in isolation and the output [1] for "select ?s ?p ?o
> { <http://example.org/id/TimeDataTypeRegressionTest> ?p ?o . }".
>
> [0] https://gist.github.com/mwjames/218ffbafa0189a5a8f93
> [1] https://gist.github.com/mwjames/ca19fccb3e6808e0c887
>
> Cheers
>
> On 4/30/15, Bryan Thompson <br...@sy...> wrote:
> > That is an interesting exception. The UPDATE request runs fine for me
> > locally against a quads mode namespace. Is this error stochastic? Are
> > you using the same namespace throughout your integration tests? That
> > is, are there side-effects on the database or is it being cleared down
> > between each test? (Dropping the namespace or creating a new database
> > instance?) Are there other mutation operations that have already run
> > in this specific test?
> >
> > I suspect some side-effect.
> >
> > It would be best if you can provide something that replicates this
> > problem so we can get it into our regression test suite. This will
> > also help to root cause the issue.
> >
> > Thanks,
> > Bryan
> > ----
> > Bryan Thompson
> > Chief Scientist & Founder
> > SYSTAP, LLC
> > 4501 Tower Road
> > Greensboro, NC 27410
> > br...@sy...
> > http://blazegraph.com
> > http://blog.bigdata.com
> > http://mapgraph.io
> >
> > Blazegraph™ is our ultra high-performance graph database that supports
> > both RDF/SPARQL and Tinkerpop/Blueprints APIs. MapGraph™ is our
> > disruptive new technology to use GPUs to accelerate data-parallel
> > graph analytics. 
> > > > CONFIDENTIALITY NOTICE: This email and its contents and attachments > > are for the sole use of the intended recipient(s) and are confidential > > or proprietary to SYSTAP. Any unauthorized review, use, disclosure, > > dissemination or copying of this email or its contents or attachments > > is prohibited. If you have received this communication in error, > > please notify the sender by reply email and permanently delete all > > copies of the email and its contents and attachments. > > > > > > On Thu, Apr 30, 2015 at 9:18 AM, James HK <jam...@gm...> > > wrote: > >> Hi, > >> > >> When trying to run our unit test suite [0] against a preliminary/local > >> Blazegraph 1.5.1 instance the > >> following error appeared during the test run (using a vanilla > >> Blazegraph with the standard kb namespace) . > >> > >> Our test suite (and the test mentioned) is run/tested against Virtuoso > >> 6.1/Fuseki 1.1.1/Sesame 2.7.14 [0] on Travis-CI therefore it is > >> unlikely an issue on our side. > >> > >> 1) > >> > SMW\Tests\Integration\MediaWiki\Import\TimeDataTypeTest::testImportOfDifferentDateWithAssortmentOfOutputConversion > >> SMW\SPARQLStore\Exception\BadHttpDatabaseResponseException: A SPARQL > >> query error has occurred > >> Query: > >> PREFIX wiki: <http://example.org/id/> > >> PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> > >> PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> > >> PREFIX owl: <http://www.w3.org/2002/07/owl#> > >> PREFIX swivt: <http://semantic-mediawiki.org/swivt/1.0#> > >> PREFIX property: <http://example.org/id/Property-3A> > >> PREFIX xsd: <http://www.w3.org/2001/XMLSchema#> > >> DELETE { wiki:TimeDataTypeRegressionTest ?p ?o } WHERE { > >> wiki:TimeDataTypeRegressionTest ?p ?o } > >> Error: Query refused > >> Endpoint: http://192.168.1.104:9999/bigdata/namespace/kb/sparql > >> HTTP response code: 500 > >> > >> This translates into an error on the Blazegraph side with (output from > >> http://localhost:9999/bigdata/#update): > >> > >> ERROR: SPARQL-UPDATE: updateStr=PREFIX wiki: > >> PREFIX rdf: > >> PREFIX rdfs: > >> PREFIX owl: > >> PREFIX swivt: > >> PREFIX property: > >> PREFIX xsd: > >> DELETE { wiki:TimeDataTypeRegressionTest ?p ?o } WHERE { > >> wiki:TimeDataTypeRegressionTest ?p ?o } > >> java.util.concurrent.ExecutionException: > >> java.util.concurrent.ExecutionException: > >> org.openrdf.query.UpdateExecutionException: > >> java.lang.IllegalStateException: Already assigned: > >> old=LiteralExtensionIV [delegate=XSDLong(6017484188943806464), > >> datatype=Vocab(-42)], new=LiteralExtensionIV > >> [delegate=XSDLong(204552172800000), datatype=Vocab(-42)], this: > >> "8452"^^ > >> at java.util.concurrent.FutureTask.report(FutureTask.java:122) > >> at java.util.concurrent.FutureTask.get(FutureTask.java:188) > >> at > >> > com.bigdata.rdf.sail.webapp.BigdataServlet.submitApiTask(BigdataServlet.java:261) > >> at > >> > com.bigdata.rdf.sail.webapp.QueryServlet.doSparqlUpdate(QueryServlet.java:359) > >> at > >> com.bigdata.rdf.sail.webapp.QueryServlet.doPost(QueryServlet.java:165) > >> at > >> com.bigdata.rdf.sail.webapp.RESTServlet.doPost(RESTServlet.java:237) > >> at > >> > com.bigdata.rdf.sail.webapp.MultiTenancyServlet.doPost(MultiTenancyServlet.java:137) > >> at javax.servlet.http.HttpServlet.service(HttpServlet.java:707) > >> at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) > >> at > >> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:769) > >> at > >> > 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585) > >> at > >> > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > >> at > >> > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577) > >> at > >> > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223) > >> at > >> > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1125) > >> at > >> > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515) > >> at > >> > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > >> at > >> > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1059) > >> at > >> > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > >> at > >> > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215) > >> at > >> > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110) > >> at > >> > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) > >> at org.eclipse.jetty.server.Server.handle(Server.java:497) > >> at > >> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311) > >> at > >> > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:248) > >> at > >> > org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) > >> at > >> > org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:610) > >> at > >> > org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:539) > >> at java.lang.Thread.run(Thread.java:744) > >> Caused by: java.util.concurrent.ExecutionException: > >> org.openrdf.query.UpdateExecutionException: > >> java.lang.IllegalStateException: Already assigned: > >> old=LiteralExtensionIV [delegate=XSDLong(6017484188943806464), > >> datatype=Vocab(-42)], new=LiteralExtensionIV > >> [delegate=XSDLong(204552172800000), datatype=Vocab(-42)], this: > >> "8452"^^ > >> at java.util.concurrent.FutureTask.report(FutureTask.java:122) > >> at java.util.concurrent.FutureTask.get(FutureTask.java:188) > >> at > >> > com.bigdata.rdf.sail.webapp.QueryServlet$SparqlUpdateTask.call(QueryServlet.java:460) > >> at > >> > com.bigdata.rdf.sail.webapp.QueryServlet$SparqlUpdateTask.call(QueryServlet.java:371) > >> at > >> > com.bigdata.rdf.task.ApiTaskForIndexManager.call(ApiTaskForIndexManager.java:68) > >> at java.util.concurrent.FutureTask.run(FutureTask.java:262) > >> at > >> > com.bigdata.rdf.task.AbstractApiTask.submitApiTask(AbstractApiTask.java:365) > >> at > >> > com.bigdata.rdf.sail.webapp.BigdataServlet.submitApiTask(BigdataServlet.java:258) > >> ... 
26 more > >> Caused by: org.openrdf.query.UpdateExecutionException: > >> java.lang.IllegalStateException: Already assigned: > >> old=LiteralExtensionIV [delegate=XSDLong(6017484188943806464), > >> datatype=Vocab(-42)], new=LiteralExtensionIV > >> [delegate=XSDLong(204552172800000), datatype=Vocab(-42)], this: > >> "8452"^^ > >> at > >> > com.bigdata.rdf.sparql.ast.eval.ASTEvalHelper.executeUpdate(ASTEvalHelper.java:1303) > >> at > >> > com.bigdata.rdf.sail.BigdataSailUpdate.execute2(BigdataSailUpdate.java:152) > >> at > >> > com.bigdata.rdf.sail.webapp.BigdataRDFContext$UpdateTask.doQuery(BigdataRDFContext.java:1683) > >> at > >> > com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.innerCall(BigdataRDFContext.java:1310) > >> at > >> > com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:1275) > >> at > >> > com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:517) > >> at java.util.concurrent.FutureTask.run(FutureTask.java:262) > >> at > >> > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > >> at > >> > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > >> ... 1 more > >> Caused by: java.lang.IllegalStateException: Already assigned: > >> old=LiteralExtensionIV [delegate=XSDLong(6017484188943806464), > >> datatype=Vocab(-42)], new=LiteralExtensionIV > >> [delegate=XSDLong(204552172800000), datatype=Vocab(-42)], this: > >> "8452"^^ > >> at > >> com.bigdata.rdf.model.BigdataValueImpl.setIV(BigdataValueImpl.java:139) > >> at > >> > com.bigdata.rdf.internal.LexiconConfiguration.createInlineIV(LexiconConfiguration.java:430) > >> at > >> > com.bigdata.rdf.lexicon.LexiconRelation.getInlineIV(LexiconRelation.java:3150) > >> at > >> > com.bigdata.rdf.lexicon.LexiconRelation.addTerms(LexiconRelation.java:1719) > >> at > >> > com.bigdata.rdf.store.AbstractTripleStore.getAccessPath(AbstractTripleStore.java:2928) > >> at > >> > com.bigdata.rdf.store.AbstractTripleStore.getAccessPath(AbstractTripleStore.java:2874) > >> at > >> > com.bigdata.rdf.sail.BigdataSail$BigdataSailConnection.removeStatements(BigdataSail.java:2962) > >> at > >> > com.bigdata.rdf.sail.BigdataSail$BigdataSailConnection.removeStatements(BigdataSail.java:2865) > >> at > >> > com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate.addOrRemoveStatement(AST2BOpUpdate.java:2054) > >> at > >> > com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate.convertDeleteInsert(AST2BOpUpdate.java:989) > >> at > >> > com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate.convertUpdateSwitch(AST2BOpUpdate.java:417) > >> at > >> > com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate.convertUpdate(AST2BOpUpdate.java:279) > >> at > >> > com.bigdata.rdf.sparql.ast.eval.ASTEvalHelper.executeUpdate(ASTEvalHelper.java:1295) > >> ... 9 more > >> > >> [0] https://travis-ci.org/SemanticMediaWiki/SemanticMediaWiki > >> > >> Cheers > >> > >> > ------------------------------------------------------------------------------ > >> One dashboard for servers and applications across Physical-Virtual-Cloud > >> Widest out-of-the-box monitoring support with 50+ applications > >> Performance metrics, stats and reports that give you Actionable Insights > >> Deep dive visibility with transaction tracing using APM Insight. > >> http://ad.doubleclick.net/ddm/clk/290420510;117567292;y > >> _______________________________________________ > >> Bigdata-developers mailing list > >> Big...@li... 
> >> https://lists.sourceforge.net/lists/listinfo/bigdata-developers > > > |
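Since the failing setup uses the default kb namespace, another route consistent with Bryan's advice is to create a fresh namespace with inlining disabled and point the test suite at it. The sketch below uses only the JDK's HTTP client; it assumes the POST /bigdata/namespace endpoint of the NanoSparqlServer multi-tenancy API, and the host, port and namespace name are all placeholders.

import java.io.ByteArrayOutputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Properties;

public class CreateNamespaceWithoutInlineDates {
    public static void main(final String[] args) throws Exception {
        final Properties props = new Properties();
        // Name of the new namespace; "smwtest" is an arbitrary example.
        props.setProperty("com.bigdata.rdf.sail.namespace", "smwtest");
        // The workaround from this thread: no date/time inlining.
        props.setProperty(
                "com.bigdata.rdf.store.AbstractTripleStore.inlineDateTimes",
                "false");

        // The create-namespace endpoint accepts a java.util.Properties XML document.
        final ByteArrayOutputStream body = new ByteArrayOutputStream();
        props.storeToXML(body, null);

        final HttpURLConnection con = (HttpURLConnection) new URL(
                "http://localhost:9999/bigdata/namespace").openConnection();
        con.setRequestMethod("POST");
        con.setRequestProperty("Content-Type", "application/xml");
        con.setDoOutput(true);
        final OutputStream out = con.getOutputStream();
        out.write(body.toByteArray());
        out.close();
        System.out.println("HTTP " + con.getResponseCode()); // 201 when created
    }
}

If the call succeeds, the new SPARQL endpoint would then be http://localhost:9999/bigdata/namespace/smwtest/sparql.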