From: Michael S. <ms...@me...> - 2015-09-17 21:37:53
Stefan, the issue has been resolved; see https://jira.blazegraph.com/browse/BLZG-1493. We decided to include it in the upcoming release, which is targeted for next week.

Best,
Michael

> On 17 Sep 2015, at 11:08, Michael Schmidt <ms...@me...> wrote:
>
> Dear Stefan,
>
> we plan to discuss the issue in an internal meeting today.
>
> Actually, we're pretty close to a new release, but maybe there's a chance that a potential fix could still be included. We will keep you informed.
>
> Best,
> Michael
>
>
> On 17.09.2015 at 11:03, Stefan Bischof <ste...@wu...> wrote:
>
>> Hi Bryan,
>>
>> thanks! Is there some kind of developer/beta version I could use once you have a fix?
>>
>> Cheers,
>> Stefan
>>
>> On 15.09.2015 at 23:27, Bryan Thompson wrote:
>>> Stefan,
>>>
>>> Thanks for reporting this issue. We were able to replicate the problem and have found some additional cases where there are issues:
>>>
>>> - https://jira.blazegraph.com/browse/BLZG-1493 (same NPE)
>>> - https://jira.blazegraph.com/browse/BLZG-1495 (wrong answer)
>>>
>>> We will look into these. The 1.5.3 release is being locked down right now, so the fix will not be in 1.5.3. However, we can reach out once we do have a fix and see whether you can validate it on your setup. Feel free to subscribe to the tickets for updates.
>>>
>>> Thanks,
>>> Bryan
>>>
>>> ----
>>> Bryan Thompson
>>> Chief Scientist & Founder
>>> SYSTAP, LLC
>>> 4501 Tower Road
>>> Greensboro, NC 27410
>>> br...@sy...
>>> http://blazegraph.com
>>> http://blog.bigdata.com
>>> http://mapgraph.io
>>> Blazegraph™ is our ultra-high-performance graph database that supports both the RDF/SPARQL and Tinkerpop/Blueprints APIs.
>>> Blazegraph is now available with GPU acceleration, using our disruptive technology to accelerate data-parallel graph analytics and graph query.
>>>
>>> CONFIDENTIALITY NOTICE: This email and its contents and attachments are for the sole use of the intended recipient(s) and are confidential or proprietary to SYSTAP. Any unauthorized review, use, disclosure, dissemination or copying of this email or its contents or attachments is prohibited. If you have received this communication in error, please notify the sender by reply email and permanently delete all copies of the email and its contents and attachments.
>>>
>>>
>>> On Mon, Sep 14, 2015 at 4:26 AM, Stefan Bischof <ste...@wu...> wrote:
>>> Hi all!
>>>
>>> I only started using Blazegraph last week and was impressed by its performance and its support for property path queries. We are currently evaluating different SPARQL engines, in particular by exploiting property path queries for a kind of backward-chaining reasoning (see [1] for more details). The queries contain very long and complicated property paths.
>>>
>>> When evaluating these queries I get a NullPointerException (an example query and the full stack trace are at the end of this message). The data, loaded with the DataLoader, is just the LUBM ontology plus University0_0.owl from LUBM. The query (generated by a query rewriter [2]) returns all instances of lubm:Student under OWL QL semantics and thus encodes all the necessary reasoning in the path expressions.
>>>
>>> Can you see what the actual problem with this query is?
>>> Is the query just too big, with too many joins?
>>> What can I do to fix this?
>>>
>>> Thank you very much!
>>> Stefan Bischof
>>>
>>> [1] S. Bischof, M. Krötzsch, A. Polleres, S. Rudolph: Schema-agnostic query rewriting in SPARQL 1.1. ISWC 2014.
>>> [2] http://citydata.wu.ac.at/SPR/
>>>
>>> Query:
>>>
>>> PREFIX dc: <http://purl.org/dc/elements/1.1/>
>>> PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
>>> PREFIX foaf: <http://xmlns.com/foaf/0.1/>
>>> PREFIX owl: <http://www.w3.org/2002/07/owl#>
>>> PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
>>> PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
>>> PREFIX lubm: <http://swat.cse.lehigh.edu/onto/univ-bench.owl#>
>>>
>>> SELECT *
>>> WHERE
>>> { { ?_v0
>>> (((((rdfs:subClassOf|owl:equivalentClass)|^owl:equivalentClass)|((owl:intersectionOf/(rdf:rest)*)/rdf:first))|((owl:onProperty/((((rdfs:subPropertyOf|owl:equivalentProperty)|^owl:equivalentProperty)|(((owl:inverseOf|^owl:inverseOf)/(((rdfs:subPropertyOf|owl:equivalentProperty)|^owl:equivalentProperty))*)/(owl:inverseOf|^owl:inverseOf))))*)/(^owl:onProperty|rdfs:domain)))|((((owl:onProperty/((((rdfs:subPropertyOf|owl:equivalentProperty)|^owl:equivalentProperty)|(((owl:inverseOf|^owl:inverseOf)/(((rdfs:subPropertyOf|owl:equivalentProperty)|^owl:equivalentProperty))*)/(owl:inverseOf|^owl:inverseOf))))*)/(owl:inverseOf|^owl:inverseOf))/(((rdfs:subPropertyOf|owl:equivalentProperty)|^owl:equivalentProperty))*)/rdfs:range))*
>>> lubm:Student .
>>> { { ?p rdf:type ?_v0}
>>> UNION
>>> { ?_v1
>>> ((((rdfs:subPropertyOf|owl:equivalentProperty)|^owl:equivalentProperty)|(((owl:inverseOf|^owl:inverseOf)/(((rdfs:subPropertyOf|owl:equivalentProperty)|^owl:equivalentProperty))*)/(owl:inverseOf|^owl:inverseOf))))*/(^owl:onProperty|rdfs:domain)
>>> ?_v0 .
>>> ?p ?_v1 _:b0
>>> }
>>> }
>>> UNION
>>> { ?_v1
>>> ((((rdfs:subPropertyOf|owl:equivalentProperty)|^owl:equivalentProperty)|(((owl:inverseOf|^owl:inverseOf)/(((rdfs:subPropertyOf|owl:equivalentProperty)|^owl:equivalentProperty))*)/(owl:inverseOf|^owl:inverseOf))))*/rdfs:range
>>> ?_v0 .
>>> _:b1 ?_v1 ?p
>>> }
>>> }
>>> }
>>>
>>>
>>> Full Stacktrace:
>>>
>>> java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: org.openrdf.query.QueryEvaluationException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=12f11bc0-b22a-48c8-89f7-a3603bae639a,bopId=58,partitionId=-1,sinkId=75,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: cause=java.lang.NullPointerException, state=JVMHashJoinUtility{open=false,joinType=Normal,joinVars=[],outputDistinctJVs=true,size=20,considered(left=26,right=312,joins=312)}
>>>     at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>>>     at java.util.concurrent.FutureTask.get(FutureTask.java:188)
>>>     at com.bigdata.rdf.sail.webapp.BigdataServlet.submitApiTask(BigdataServlet.java:281)
>>>     at com.bigdata.rdf.sail.webapp.QueryServlet.doSparqlQuery(QueryServlet.java:632)
>>>     at com.bigdata.rdf.sail.webapp.QueryServlet.doPost(QueryServlet.java:259)
>>>     at com.bigdata.rdf.sail.webapp.RESTServlet.doPost(RESTServlet.java:248)
>>>     at com.bigdata.rdf.sail.webapp.MultiTenancyServlet.doPost(MultiTenancyServlet.java:138)
>>>     at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
>>>     at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>>>     at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:769)
>>>     at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>>>     at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>>>     at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>>>     at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>>>     at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1125)
>>>     at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>>>     at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>>>     at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1059)
>>>     at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>>>     at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>>>     at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>>>     at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>>>     at org.eclipse.jetty.server.Server.handle(Server.java:497)
>>>     at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
>>>     at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:248)
>>>     at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>>>     at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:610)
>>>     at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:539)
>>>     at java.lang.Thread.run(Thread.java:745)
>>> Caused by: java.util.concurrent.ExecutionException: org.openrdf.query.QueryEvaluationException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=12f11bc0-b22a-48c8-89f7-a3603bae639a,bopId=58,partitionId=-1,sinkId=75,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: cause=java.lang.NullPointerException, state=JVMHashJoinUtility{open=false,joinType=Normal,joinVars=[],outputDistinctJVs=true,size=20,considered(left=26,right=312,joins=312)}
>>>     at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>>>     at java.util.concurrent.FutureTask.get(FutureTask.java:188)
>>>     at com.bigdata.rdf.sail.webapp.QueryServlet$SparqlQueryTask.call(QueryServlet.java:830)
>>>     at com.bigdata.rdf.sail.webapp.QueryServlet$SparqlQueryTask.call(QueryServlet.java:649)
>>>     at com.bigdata.rdf.task.ApiTaskForIndexManager.call(ApiTaskForIndexManager.java:68)
>>>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>     ... 1 more
>>> Caused by: org.openrdf.query.QueryEvaluationException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=12f11bc0-b22a-48c8-89f7-a3603bae639a,bopId=58,partitionId=-1,sinkId=75,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: cause=java.lang.NullPointerException, state=JVMHashJoinUtility{open=false,joinType=Normal,joinVars=[],outputDistinctJVs=true,size=20,considered(left=26,right=312,joins=312)}
>>>     at com.bigdata.rdf.sail.Bigdata2Sesame2BindingSetIterator.hasNext(Bigdata2Sesame2BindingSetIterator.java:188)
>>>     at info.aduna.iteration.IterationWrapper.hasNext(IterationWrapper.java:68)
>>>     at org.openrdf.query.QueryResults.report(QueryResults.java:155)
>>>     at org.openrdf.repository.sail.SailTupleQuery.evaluate(SailTupleQuery.java:76)
>>>     at com.bigdata.rdf.sail.webapp.BigdataRDFContext$TupleQueryTask.doQuery(BigdataRDFContext.java:1705)
>>>     at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.innerCall(BigdataRDFContext.java:1562)
>>>     at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:1527)
>>>     at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:699)
>>>     ... 4 more
>>> Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=12f11bc0-b22a-48c8-89f7-a3603bae639a,bopId=58,partitionId=-1,sinkId=75,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: cause=java.lang.NullPointerException, state=JVMHashJoinUtility{open=false,joinType=Normal,joinVars=[],outputDistinctJVs=true,size=20,considered(left=26,right=312,joins=312)}
>>>     at com.bigdata.relation.accesspath.BlockingBuffer$BlockingIterator.checkFuture(BlockingBuffer.java:1523)
>>>     at com.bigdata.relation.accesspath.BlockingBuffer$BlockingIterator._hasNext(BlockingBuffer.java:1710)
>>>     at com.bigdata.relation.accesspath.BlockingBuffer$BlockingIterator.hasNext(BlockingBuffer.java:1563)
>>>     at com.bigdata.striterator.AbstractChunkedResolverator._hasNext(AbstractChunkedResolverator.java:365)
>>>     at com.bigdata.striterator.AbstractChunkedResolverator.hasNext(AbstractChunkedResolverator.java:341)
>>>     at com.bigdata.rdf.sail.Bigdata2Sesame2BindingSetIterator.hasNext(Bigdata2Sesame2BindingSetIterator.java:134)
>>>     ... 11 more
>>> Caused by: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=12f11bc0-b22a-48c8-89f7-a3603bae639a,bopId=58,partitionId=-1,sinkId=75,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: cause=java.lang.NullPointerException, state=JVMHashJoinUtility{open=false,joinType=Normal,joinVars=[],outputDistinctJVs=true,size=20,considered(left=26,right=312,joins=312)}
>>>     at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>>>     at java.util.concurrent.FutureTask.get(FutureTask.java:188)
>>>     at com.bigdata.relation.accesspath.BlockingBuffer$BlockingIterator.checkFuture(BlockingBuffer.java:1454)
>>>     ... 16 more
>>> Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=12f11bc0-b22a-48c8-89f7-a3603bae639a,bopId=58,partitionId=-1,sinkId=75,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: cause=java.lang.NullPointerException, state=JVMHashJoinUtility{open=false,joinType=Normal,joinVars=[],outputDistinctJVs=true,size=20,considered(left=26,right=312,joins=312)}
>>>     at com.bigdata.rdf.sail.RunningQueryCloseableIterator.checkFuture(RunningQueryCloseableIterator.java:59)
>>>     at com.bigdata.rdf.sail.RunningQueryCloseableIterator.close(RunningQueryCloseableIterator.java:73)
>>>     at com.bigdata.rdf.sail.RunningQueryCloseableIterator.hasNext(RunningQueryCloseableIterator.java:82)
>>>     at com.bigdata.striterator.ChunkedWrappedIterator.hasNext(ChunkedWrappedIterator.java:197)
>>>     at com.bigdata.striterator.AbstractChunkedResolverator$ChunkConsumerTask.call(AbstractChunkedResolverator.java:222)
>>>     at com.bigdata.striterator.AbstractChunkedResolverator$ChunkConsumerTask.call(AbstractChunkedResolverator.java:197)
>>>     ... 4 more
>>> Caused by: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=12f11bc0-b22a-48c8-89f7-a3603bae639a,bopId=58,partitionId=-1,sinkId=75,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: cause=java.lang.NullPointerException, state=JVMHashJoinUtility{open=false,joinType=Normal,joinVars=[],outputDistinctJVs=true,size=20,considered(left=26,right=312,joins=312)}
>>>     at com.bigdata.util.concurrent.Haltable.get(Haltable.java:273)
>>>     at com.bigdata.bop.engine.AbstractRunningQuery.get(AbstractRunningQuery.java:1511)
>>>     at com.bigdata.bop.engine.AbstractRunningQuery.get(AbstractRunningQuery.java:104)
>>>     at com.bigdata.rdf.sail.RunningQueryCloseableIterator.checkFuture(RunningQueryCloseableIterator.java:46)
>>>     ... 9 more
>>> Caused by: java.lang.Exception: task=ChunkTask{query=12f11bc0-b22a-48c8-89f7-a3603bae639a,bopId=58,partitionId=-1,sinkId=75,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: cause=java.lang.NullPointerException, state=JVMHashJoinUtility{open=false,joinType=Normal,joinVars=[],outputDistinctJVs=true,size=20,considered(left=26,right=312,joins=312)}
>>>     at com.bigdata.bop.engine.ChunkedRunningQuery$ChunkTask.call(ChunkedRunningQuery.java:1337)
>>>     at com.bigdata.bop.engine.ChunkedRunningQuery$ChunkTaskWrapper.run(ChunkedRunningQuery.java:896)
>>>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>>>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>>     at com.bigdata.concurrent.FutureTaskMon.run(FutureTaskMon.java:63)
>>>     at com.bigdata.bop.engine.ChunkedRunningQuery$ChunkFutureTask.run(ChunkedRunningQuery.java:791)
>>>     ... 3 more
>>> Caused by: java.util.concurrent.ExecutionException: java.lang.RuntimeException: cause=java.lang.NullPointerException, state=JVMHashJoinUtility{open=false,joinType=Normal,joinVars=[],outputDistinctJVs=true,size=20,considered(left=26,right=312,joins=312)}
>>>     at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>>>     at java.util.concurrent.FutureTask.get(FutureTask.java:188)
>>>     at com.bigdata.bop.engine.ChunkedRunningQuery$ChunkTask.call(ChunkedRunningQuery.java:1317)
>>>     ... 8 more
>>> Caused by: java.lang.RuntimeException: cause=java.lang.NullPointerException, state=JVMHashJoinUtility{open=false,joinType=Normal,joinVars=[],outputDistinctJVs=true,size=20,considered(left=26,right=312,joins=312)}
>>>     at com.bigdata.bop.join.JVMHashJoinUtility.launderThrowable(JVMHashJoinUtility.java:1406)
>>>     at com.bigdata.bop.join.JVMHashJoinUtility.acceptSolutions(JVMHashJoinUtility.java:431)
>>>     at com.bigdata.bop.join.HashIndexOp$ChunkTask.acceptSolutions(HashIndexOp.java:433)
>>>     at com.bigdata.bop.join.HashIndexOp$ChunkTask.call(HashIndexOp.java:338)
>>>     at com.bigdata.bop.join.HashIndexOp$ChunkTask.call(HashIndexOp.java:237)
>>>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>>     at com.bigdata.bop.engine.ChunkedRunningQuery$ChunkTask.call(ChunkedRunningQuery.java:1316)
>>>     ... 8 more
>>> Caused by: java.lang.NullPointerException
>>>     at com.bigdata.bop.join.JVMHashJoinUtility.acceptSolutions(JVMHashJoinUtility.java:410)
>>>     ... 13 more
>>>
>>>
>>> ------------------------------------------------------------------------------
>>> _______________________________________________
>>> Bigdata-developers mailing list
>>> Big...@li...
>>> https://lists.sourceforge.net/lists/listinfo/bigdata-developers
>>>
>>
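The rewriting technique discussed in the thread (Bischof et al. [1]) encodes schema-level reasoning such as rdfs:subClassOf directly into property paths, so an expression like `rdf:type/rdfs:subClassOf*` finds all instances of a class at query time without materializing inferences. At its core this reduces to graph reachability. The toy Python sketch below illustrates that reduction only; the data, names, and the `reachable` helper are hypothetical and have nothing to do with Blazegraph's internal path evaluation:

```python
# Toy illustration of evaluating rdf:type/rdfs:subClassOf* as
# reachability: an individual is a "Student" if one of its asserted
# types reaches "Student" via zero or more subClassOf edges.
# (Hypothetical mini-schema; not Blazegraph code.)

def reachable(edges, start):
    """Return all nodes reachable from `start` via zero or more edges."""
    seen, stack = {start}, [start]
    while stack:
        node = stack.pop()
        for nxt in edges.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

subclass_of = {                       # rdfs:subClassOf edges
    "GraduateStudent": ["Student"],
    "Student": ["Person"],
}
types = {"alice": "GraduateStudent", "bob": "Professor"}  # rdf:type

students = {ind for ind, cls in types.items()
            if "Student" in reachable(subclass_of, cls)}
print(sorted(students))  # ['alice']
```

The query in Stefan's report applies the same idea at a much larger scale, combining subClassOf/subPropertyOf, equivalence, inverse, and restriction axioms into one closure expression, which is why the resulting paths become so long.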