This list is closed, nobody may subscribe to it.
From: Matthew R. <mr...@ca...> - 2016-02-01 18:11:32
I saw this error this morning as well after testing against the 2.0.0 release code. Was running code from around 1.5.0 previously.

Caused by: com.bigdata.util.ChecksumError: offset=18124800,nbytes=4044,expected=0,actual=1696870497
    at com.bigdata.io.writecache.WriteCacheService._readFromLocalDiskIntoNewHeapByteBuffer(WriteCacheService.java:3761) ~[bigdata-core-2.0.0.jar:na]
    at com.bigdata.io.writecache.WriteCacheService._getRecord(WriteCacheService.java:3576) ~[bigdata-core-2.0.0.jar:na]
    at com.bigdata.io.writecache.WriteCacheService.access$2500(WriteCacheService.java:200) ~[bigdata-core-2.0.0.jar:na]
    at com.bigdata.io.writecache.WriteCacheService$1.compute(WriteCacheService.java:3413) ~[bigdata-core-2.0.0.jar:na]
    at com.bigdata.io.writecache.WriteCacheService$1.compute(WriteCacheService.java:3397) ~[bigdata-core-2.0.0.jar:na]
    at com.bigdata.util.concurrent.Memoizer$1.call(Memoizer.java:77) ~[bigdata-core-2.0.0.jar:na]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_60]
    at com.bigdata.util.concurrent.Memoizer.compute(Memoizer.java:92) ~[bigdata-core-2.0.0.jar:na]
    at com.bigdata.io.writecache.WriteCacheService.loadRecord(WriteCacheService.java:3518) ~[bigdata-core-2.0.0.jar:na]
    at com.bigdata.io.writecache.WriteCacheService.read(WriteCacheService.java:3237) ~[bigdata-core-2.0.0.jar:na]
    at com.bigdata.rwstore.RWStore.getData(RWStore.java:2052) ~[bigdata-core-2.0.0.jar:na]
    ... 24 common frames omitted

and after reopening the journal file get:

java.lang.Error: Two allocators at same address
    at com.bigdata.rwstore.FixedAllocator.compareTo(FixedAllocator.java:102)
    at java.util.ComparableTimSort.countRunAndMakeAscending(ComparableTimSort.java:295)
    at java.util.ComparableTimSort.sort(ComparableTimSort.java:157)
    at java.util.ComparableTimSort.sort(ComparableTimSort.java:146)
    at java.util.Arrays.sort(Arrays.java:472)
    at java.util.Collections.sort(Collections.java:155)
    at com.bigdata.rwstore.RWStore.readAllocationBlocks(RWStore.java:1682)
    at com.bigdata.rwstore.RWStore.initfromRootBlock(RWStore.java:1557)
    at com.bigdata.rwstore.RWStore.<init>(RWStore.java:969)
    at com.bigdata.journal.RWStrategy.<init>(RWStrategy.java:137)

Can't tell exactly what was going on query/update wise when the error occurred. Will let you know if I can reproduce the error again.

Matt

------ Original Message ------
From: "Bryan Thompson" <br...@sy...>
To: "Jeremy Carroll" <jj...@gm...>; "Martyn Cutcher" <ma...@sy...>
Cc: "Big...@li..." <Big...@li...>
Sent: 2/1/2016 10:43:04 AM
Subject: Re: [Bigdata-developers] "No WriteCache debug info"

> Typically this indicates an actual disk error. It is attempting to read
> data from the backing file. The checksum that was stored is not matched
> by the data. The only time I have seen this was when there was actually
> a bad disk.
>
> Caused by: com.bigdata.util.ChecksumError: offset=404849739776,nbytes=1156,expected=-402931822,actual=1830389633
>     at com.bigdata.io.writecache.WriteCacheService._readFromLocalDiskIntoNewHeapByteBuffer(WriteCacheService.java:3711)
>     at com.bigdata.io.writecache.WriteCacheService._getRecord(WriteCacheService.java:3526)
>     at com.bigdata.io.writecache.WriteCacheService.access$2500(WriteCacheService.java:200)
>     at com.bigdata.io.writecache.WriteCacheService$1.compute(WriteCacheService.java:3363)
>     at com.bigdata.io.writecache.WriteCacheService$1.compute(WriteCacheService.java:3347)
>
> Bryan
>
> On Mon, Feb 1, 2016 at 10:34 AM, Jeremy Carroll <jj...@gm...> wrote:
>>
>> Also on 1.5.3, with %codes as a solution set with half million URIs
>> binding ?x
>>
>> What does the error message mean?
>>
>> Jeremy
>>
>> [...]
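The second trace above is worth a note: it is thrown while RWStore.readAllocationBlocks() sorts the allocators recovered from the root block, and the sort dies because FixedAllocator.compareTo() refuses to order two allocators that claim the same address. A minimal Java sketch of that fail-fast pattern, with hypothetical names rather than Blazegraph's actual FixedAllocator internals:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    // Sketch of an allocator that refuses to sort past a duplicate address.
    class Allocator implements Comparable<Allocator> {

        final long startAddr; // file offset this allocator manages

        Allocator(long startAddr) { this.startAddr = startAddr; }

        @Override
        public int compareTo(Allocator other) {
            if (startAddr == other.startAddr) {
                // Two allocators covering the same region means the metadata
                // read from the root block is inconsistent: fail fast.
                throw new Error("Two allocators at same address");
            }
            return startAddr < other.startAddr ? -1 : 1;
        }

        public static void main(String[] args) {
            List<Allocator> allocators = new ArrayList<>();
            allocators.add(new Allocator(1024));
            allocators.add(new Allocator(2048));
            allocators.add(new Allocator(1024)); // corrupt: address appears twice
            Collections.sort(allocators);        // TimSort throws java.lang.Error
        }
    }

Presumably the point of throwing from compareTo() is that a duplicate address can only come from inconsistent allocation metadata, so it is surfaced immediately on journal re-open rather than silently producing a mis-sorted allocator list.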
From: Bryan T. <br...@sy...> - 2016-02-01 18:02:40
Interesting. There were a few RWStore changes that were held back to get more experience with them. One of the ones that did go in is:

- https://jira.blazegraph.com/browse/BLZG-1667 (Growth in RWStore.alloc() cumulative time)

I do not see any reason offhand why this might be related. We did hold back some changes to accelerate deferred free processing. This change is mainly of benefit to very large stores and very large commits.

Thanks,
Bryan

----
Bryan Thompson
Chief Scientist & Founder
SYSTAP, LLC
4501 Tower Road
Greensboro, NC 27410
br...@sy...
http://blazegraph.com
http://blog.blazegraph.com

On Mon, Feb 1, 2016 at 12:48 PM, Matthew Roy <mr...@ca...> wrote:
> I saw this error this morning as well after testing against the 2.0.0
> release code. Was running code from around 1.5.0 previously.
>
> Caused by: com.bigdata.util.ChecksumError: offset=18124800,nbytes=4044,expected=0,actual=1696870497
>
> [...]
>
> and after reopening the journal file get:
>
> java.lang.Error: Two allocators at same address
>     at com.bigdata.rwstore.FixedAllocator.compareTo(FixedAllocator.java:102)
>
> [...]
>
> Can't tell exactly what was going on query/update wise when the error
> occurred.
> Will let you know if I can reproduce the error again.
>
> Matt
From: Bryan T. <br...@sy...> - 2016-02-01 15:43:15
|
Typically this indicates an actual disk error. It is attempting to read data from the backing file. The checksum that was stored is not matched by the data. The only time I have see this was when there was actually a bad disk. Caused by: com.bigdata.util.ChecksumError: offset=404849739776,nbytes=1156,expected=-402931822,actual=1830389633 at com.bigdata.io.writecache.WriteCacheService._readFromLocalDiskIntoNewHeapByteBuffer(WriteCacheService.java:3711) at com.bigdata.io.writecache.WriteCacheService._getRecord(WriteCacheService.java:3526) at com.bigdata.io.writecache.WriteCacheService.access$2500(WriteCacheService.java:200) at com.bigdata.io.writecache.WriteCacheService$1.compute(WriteCacheService.java:3363) at com.bigdata.io.writecache.WriteCacheService$1.compute(WriteCacheService.java:3347) Bryan ---- Bryan Thompson Chief Scientist & Founder SYSTAP, LLC 4501 Tower Road Greensboro, NC 27410 br...@sy... http://blazegraph.com http://blog.blazegraph.com Blazegraph™ <http://www.blazegraph.com/> is our ultra high-performance graph database that supports both RDF/SPARQL and Tinkerpop/Blueprints APIs. Blazegraph is now available with GPU acceleration using our disruptive technology to accelerate data-parallel graph analytics and graph query. CONFIDENTIALITY NOTICE: This email and its contents and attachments are for the sole use of the intended recipient(s) and are confidential or proprietary to SYSTAP. Any unauthorized review, use, disclosure, dissemination or copying of this email or its contents or attachments is prohibited. If you have received this communication in error, please notify the sender by reply email and permanently delete all copies of the email and its contents and attachments. On Mon, Feb 1, 2016 at 10:34 AM, Jeremy Carroll <jj...@gm...> wrote: > > Also on 1.5.3, with %codes as a solution set with half million URIs > binding ?x > > What does the error message mean? 
> > Jeremy > > > Feb 01,2016 07:27:13 PST - ERROR: 73992665 qtp1401132667-16779 > com.bigdata.rdf.sail.webapp.BigdataRDFServlet.launderThrowable(BigdataRDFServlet.java:214): > cause=java.util.concurrent.ExecutionException: > java.util.concurrent.ExecutionException: > org.openrdf.query.QueryEvaluationException: java.lang.RuntimeException: > java.util.concurrent.ExecutionException: java.lang.RuntimeException: > java.util.concurrent.ExecutionException: java.lang.Exception: > task=ChunkTask{query=57386638-1f48-4826-966a-84a6b36b5427,bopId=1,partitionId=-1,sinkId=2,altSinkId=null}, > cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: > addr=-403683228 : cause=java.lang.IllegalStateException: Error reading from > WriteCache addr: 404849739776 length: 1152, writeCacheDebug: No WriteCache > debug info, query=SPARQL-QUERY: queryStr=select (count(?x) as $cnt)^M > { INCLUDE %codes^M > } > java.util.concurrent.ExecutionException: > java.util.concurrent.ExecutionException: > org.openrdf.query.QueryEvaluationException: java.lang.RuntimeException: > java.util.concurrent.ExecutionException: java.lang.RuntimeException: > java.util.concurrent.ExecutionException: java.lang.Exception: > task=ChunkTask{query=57386638-1f48-4826-966a-84a6b36b5427,bopId=1,partitionId=-1,sinkId=2,altSinkId=null}, > cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: > addr=-403683228 : cause=java.lang.IllegalStateException: Error reading from > WriteCache addr: 404849739776 length: 1152, writeCacheDebug: No WriteCache > debug info > at java.util.concurrent.FutureTask.report(FutureTask.java:122) > at java.util.concurrent.FutureTask.get(FutureTask.java:192) > at > com.bigdata.rdf.sail.webapp.BigdataServlet.submitApiTask(BigdataServlet.java:281) > at > com.bigdata.rdf.sail.webapp.QueryServlet.doSparqlQuery(QueryServlet.java:636) > at > com.bigdata.rdf.sail.webapp.QueryServlet.doPost(QueryServlet.java:263) > at > com.bigdata.rdf.sail.webapp.RESTServlet.doPost(RESTServlet.java:248) > at > com.bigdata.rdf.sail.webapp.MultiTenancyServlet.doPost(MultiTenancyServlet.java:138) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:707) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) > at > org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:769) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1125) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1059) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215) > at > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) > at org.eclipse.jetty.server.Server.handle(Server.java:497) > at > org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311) > at > 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:248) > at > org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) > at > org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:610) > at > org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:539) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.util.concurrent.ExecutionException: > org.openrdf.query.QueryEvaluationException: java.lang.RuntimeException: > java.util.concurrent.ExecutionException: java.lang.RuntimeException: > java.util.concurrent.ExecutionException: java.lang.Exception: > task=ChunkTask{query=57386638-1f48-4826-966a-84a6b36b5427,bopId=1,partitionId=-1,sinkId=2,altSinkId=null}, > cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: > addr=-403683228 : cause=java.lang.IllegalStateException: Error reading from > WriteCache addr: 404849739776 length: 1152, writeCacheDebug: No WriteCache > debug info > at java.util.concurrent.FutureTask.report(FutureTask.java:122) > at java.util.concurrent.FutureTask.get(FutureTask.java:192) > at > com.bigdata.rdf.sail.webapp.QueryServlet$SparqlQueryTask.call(QueryServlet.java:834) > at > com.bigdata.rdf.sail.webapp.QueryServlet$SparqlQueryTask.call(QueryServlet.java:653) > at > com.bigdata.rdf.task.ApiTaskForIndexManager.call(ApiTaskForIndexManager.java:68) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > ... 1 more > Caused by: org.openrdf.query.QueryEvaluationException: > java.lang.RuntimeException: java.util.concurrent.ExecutionException: > java.lang.RuntimeException: java.util.concurrent.ExecutionException: > java.lang.Exception: > task=ChunkTask{query=57386638-1f48-4826-966a-84a6b36b5427,bopId=1,partitionId=-1,sinkId=2,altSinkId=null}, > cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: > addr=-403683228 : cause=java.lang.IllegalStateException: Error reading from > WriteCache addr: 404849739776 length: 1152, writeCacheDebug: No WriteCache > debug info > at > com.bigdata.rdf.sail.Bigdata2Sesame2BindingSetIterator.hasNext(Bigdata2Sesame2BindingSetIterator.java:188) > at > info.aduna.iteration.IterationWrapper.hasNext(IterationWrapper.java:68) > at org.openrdf.query.QueryResults.report(QueryResults.java:155) > at > org.openrdf.repository.sail.SailTupleQuery.evaluate(SailTupleQuery.java:76) > at > com.bigdata.rdf.sail.webapp.BigdataRDFContext$TupleQueryTask.doQuery(BigdataRDFContext.java:1710) > at > com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.innerCall(BigdataRDFContext.java:1567) > at > com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:1532) > at > com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:704) > ... 
4 more > Caused by: java.lang.RuntimeException: > java.util.concurrent.ExecutionException: java.lang.RuntimeException: > java.util.concurrent.ExecutionException: java.lang.Exception: > task=ChunkTask{query=57386638-1f48-4826-966a-84a6b36b5427,bopId=1,partitionId=-1,sinkId=2,altSinkId=null}, > cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: > addr=-403683228 : cause=java.lang.IllegalStateException: Error reading from > WriteCache addr: 404849739776 length: 1152, writeCacheDebug: No WriteCache > debug info > at > com.bigdata.relation.accesspath.BlockingBuffer$BlockingIterator.checkFuture(BlockingBuffer.java:1523) > at > com.bigdata.relation.accesspath.BlockingBuffer$BlockingIterator._hasNext(BlockingBuffer.java:1710) > at > com.bigdata.relation.accesspath.BlockingBuffer$BlockingIterator.hasNext(BlockingBuffer.java:1563) > at > com.bigdata.striterator.AbstractChunkedResolverator._hasNext(AbstractChunkedResolverator.java:365) > at > com.bigdata.striterator.AbstractChunkedResolverator.hasNext(AbstractChunkedResolverator.java:341) > at > com.bigdata.rdf.sail.Bigdata2Sesame2BindingSetIterator.hasNext(Bigdata2Sesame2BindingSetIterator.java:134) > ... 11 more > Caused by: java.util.concurrent.ExecutionException: > java.lang.RuntimeException: java.util.concurrent.ExecutionException: > java.lang.Exception: > task=ChunkTask{query=57386638-1f48-4826-966a-84a6b36b5427,bopId=1,partitionId=-1,sinkId=2,altSinkId=null}, > cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: > addr=-403683228 : cause=java.lang.IllegalStateException: Error reading from > WriteCache addr: 404849739776 length: 1152, writeCacheDebug: No WriteCache > debug info > at java.util.concurrent.FutureTask.report(FutureTask.java:122) > at java.util.concurrent.FutureTask.get(FutureTask.java:192) > at > com.bigdata.relation.accesspath.BlockingBuffer$BlockingIterator.checkFuture(BlockingBuffer.java:1454) > ... 16 more > Caused by: java.lang.RuntimeException: > java.util.concurrent.ExecutionException: java.lang.Exception: > task=ChunkTask{query=57386638-1f48-4826-966a-84a6b36b5427,bopId=1,partitionId=-1,sinkId=2,altSinkId=null}, > cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: > addr=-403683228 : cause=java.lang.IllegalStateException: Error reading from > WriteCache addr: 404849739776 length: 1152, writeCacheDebug: No WriteCache > debug info > at > com.bigdata.rdf.sail.RunningQueryCloseableIterator.checkFuture(RunningQueryCloseableIterator.java:59) > at > com.bigdata.rdf.sail.RunningQueryCloseableIterator.close(RunningQueryCloseableIterator.java:73) > at > com.bigdata.rdf.sail.RunningQueryCloseableIterator.hasNext(RunningQueryCloseableIterator.java:82) > at > com.bigdata.striterator.ChunkedWrappedIterator.hasNext(ChunkedWrappedIterator.java:197) > at > com.bigdata.striterator.AbstractChunkedResolverator$ChunkConsumerTask.call(AbstractChunkedResolverator.java:222) > at > com.bigdata.striterator.AbstractChunkedResolverator$ChunkConsumerTask.call(AbstractChunkedResolverator.java:197) > ... 
4 more > Caused by: java.util.concurrent.ExecutionException: java.lang.Exception: > task=ChunkTask{query=57386638-1f48-4826-966a-84a6b36b5427,bopId=1,partitionId=-1,sinkId=2,altSinkId=null}, > cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: > addr=-403683228 : cause=java.lang.IllegalStateException: Error reading from > WriteCache addr: 404849739776 length: 1152, writeCacheDebug: No WriteCache > debug info > at com.bigdata.util.concurrent.Haltable.get(Haltable.java:273) > at > com.bigdata.bop.engine.AbstractRunningQuery.get(AbstractRunningQuery.java:1514) > at > com.bigdata.bop.engine.AbstractRunningQuery.get(AbstractRunningQuery.java:104) > at > com.bigdata.rdf.sail.RunningQueryCloseableIterator.checkFuture(RunningQueryCloseableIterator.java:46) > ... 9 more > Caused by: java.lang.Exception: > task=ChunkTask{query=57386638-1f48-4826-966a-84a6b36b5427,bopId=1,partitionId=-1,sinkId=2,altSinkId=null}, > cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: > addr=-403683228 : cause=java.lang.IllegalStateException: Error reading from > WriteCache addr: 404849739776 length: 1152, writeCacheDebug: No WriteCache > debug info > at > com.bigdata.bop.engine.ChunkedRunningQuery$ChunkTask.call(ChunkedRunningQuery.java:1337) > at > com.bigdata.bop.engine.ChunkedRunningQuery$ChunkTaskWrapper.run(ChunkedRunningQuery.java:896) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at com.bigdata.concurrent.FutureTaskMon.run(FutureTaskMon.java:63) > at > com.bigdata.bop.engine.ChunkedRunningQuery$ChunkFutureTask.run(ChunkedRunningQuery.java:791) > ... 3 more > Caused by: java.util.concurrent.ExecutionException: > java.lang.RuntimeException: addr=-403683228 : > cause=java.lang.IllegalStateException: Error reading from WriteCache addr: > 404849739776 length: 1152, writeCacheDebug: No WriteCache debug info > at java.util.concurrent.FutureTask.report(FutureTask.java:122) > at java.util.concurrent.FutureTask.get(FutureTask.java:192) > at > com.bigdata.bop.engine.ChunkedRunningQuery$ChunkTask.call(ChunkedRunningQuery.java:1317) > ... 8 more > Caused by: java.lang.RuntimeException: addr=-403683228 : > cause=java.lang.IllegalStateException: Error reading from WriteCache addr: > 404849739776 length: 1152, writeCacheDebug: No WriteCache debug info > at com.bigdata.rwstore.RWStore.getData(RWStore.java:2190) > at com.bigdata.rwstore.RWStore.getData(RWStore.java:1989) > at com.bigdata.rwstore.PSInputStream.<init>(PSInputStream.java:75) > at com.bigdata.rwstore.RWStore.getInputStream(RWStore.java:6463) > at > com.bigdata.journal.RWStrategy.getInputStream(RWStrategy.java:846) > at > com.bigdata.bop.solutions.SolutionSetStream.get(SolutionSetStream.java:237) > at > com.bigdata.rdf.sparql.ast.ssets.SolutionSetManager.getSolutions(SolutionSetManager.java:556) > at > com.bigdata.bop.NamedSolutionSetRefUtility.getSolutionSet(NamedSolutionSetRefUtility.java:529) > at > com.bigdata.bop.BOpContext.getAlternateSource(BOpContext.java:752) > at > com.bigdata.bop.join.NestedLoopJoinOp$ChunkTask.getRightSolutions(NestedLoopJoinOp.java:263) > at > com.bigdata.bop.join.NestedLoopJoinOp$ChunkTask.call(NestedLoopJoinOp.java:200) > at > com.bigdata.bop.join.NestedLoopJoinOp$ChunkTask.call(NestedLoopJoinOp.java:166) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > com.bigdata.bop.engine.ChunkedRunningQuery$ChunkTask.call(ChunkedRunningQuery.java:1316) > ... 
8 more > Caused by: java.lang.IllegalStateException: Error reading from WriteCache > addr: 404849739776 length: 1152, writeCacheDebug: No WriteCache debug info > at com.bigdata.rwstore.RWStore.getData(RWStore.java:2112) > ... 21 more > Caused by: com.bigdata.util.ChecksumError: > offset=404849739776,nbytes=1156,expected=-402931822,actual=1830389633 > at > com.bigdata.io.writecache.WriteCacheService._readFromLocalDiskIntoNewHeapByteBuffer(WriteCacheService.java:3711) > at > com.bigdata.io.writecache.WriteCacheService._getRecord(WriteCacheService.java:3526) > at > com.bigdata.io.writecache.WriteCacheService.access$2500(WriteCacheService.java:200) > at > com.bigdata.io.writecache.WriteCacheService$1.compute(WriteCacheService.java:3363) > at > com.bigdata.io.writecache.WriteCacheService$1.compute(WriteCacheService.java:3347) > at com.bigdata.util.concurrent.Memoizer$1.call(Memoizer.java:77) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at com.bigdata.util.concurrent.Memoizer.compute(Memoizer.java:92) > at > com.bigdata.io.writecache.WriteCacheService.loadRecord(WriteCacheService.java:3468) > at > com.bigdata.io.writecache.WriteCacheService.read(WriteCacheService.java:3187) > at com.bigdata.rwstore.RWStore.getData(RWStore.java:2106) > ... 21 more |
From: Jeremy C. <jj...@gm...> - 2016-02-01 15:35:00
|
Also on 1.5.3, with %codes as a solution set with half million URIs binding ?x What does the error message mean? Jeremy Feb 01,2016 07:27:13 PST - ERROR: 73992665 qtp1401132667-16779 com.bigdata.rdf.sail.webapp.BigdataRDFServlet.launderThrowable(BigdataRDFServlet.java:214): cause=java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: org.openrdf.query.QueryEvaluationException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=57386638-1f48-4826-966a-84a6b36b5427,bopId=1,partitionId=-1,sinkId=2,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: addr=-403683228 : cause=java.lang.IllegalStateException: Error reading from WriteCache addr: 404849739776 length: 1152, writeCacheDebug: No WriteCache debug info, query=SPARQL-QUERY: queryStr=select (count(?x) as $cnt)^M { INCLUDE %codes^M } java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: org.openrdf.query.QueryEvaluationException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=57386638-1f48-4826-966a-84a6b36b5427,bopId=1,partitionId=-1,sinkId=2,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: addr=-403683228 : cause=java.lang.IllegalStateException: Error reading from WriteCache addr: 404849739776 length: 1152, writeCacheDebug: No WriteCache debug info at java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.util.concurrent.FutureTask.get(FutureTask.java:192) at com.bigdata.rdf.sail.webapp.BigdataServlet.submitApiTask(BigdataServlet.java:281) at com.bigdata.rdf.sail.webapp.QueryServlet.doSparqlQuery(QueryServlet.java:636) at com.bigdata.rdf.sail.webapp.QueryServlet.doPost(QueryServlet.java:263) at com.bigdata.rdf.sail.webapp.RESTServlet.doPost(RESTServlet.java:248) at com.bigdata.rdf.sail.webapp.MultiTenancyServlet.doPost(MultiTenancyServlet.java:138) at javax.servlet.http.HttpServlet.service(HttpServlet.java:707) at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:769) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1125) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1059) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215) at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) at org.eclipse.jetty.server.Server.handle(Server.java:497) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311) at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:248) at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:610) at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:539) at java.lang.Thread.run(Thread.java:745) Caused by: java.util.concurrent.ExecutionException: org.openrdf.query.QueryEvaluationException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=57386638-1f48-4826-966a-84a6b36b5427,bopId=1,partitionId=-1,sinkId=2,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: addr=-403683228 : cause=java.lang.IllegalStateException: Error reading from WriteCache addr: 404849739776 length: 1152, writeCacheDebug: No WriteCache debug info at java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.util.concurrent.FutureTask.get(FutureTask.java:192) at com.bigdata.rdf.sail.webapp.QueryServlet$SparqlQueryTask.call(QueryServlet.java:834) at com.bigdata.rdf.sail.webapp.QueryServlet$SparqlQueryTask.call(QueryServlet.java:653) at com.bigdata.rdf.task.ApiTaskForIndexManager.call(ApiTaskForIndexManager.java:68) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ... 1 more Caused by: org.openrdf.query.QueryEvaluationException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=57386638-1f48-4826-966a-84a6b36b5427,bopId=1,partitionId=-1,sinkId=2,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: addr=-403683228 : cause=java.lang.IllegalStateException: Error reading from WriteCache addr: 404849739776 length: 1152, writeCacheDebug: No WriteCache debug info at com.bigdata.rdf.sail.Bigdata2Sesame2BindingSetIterator.hasNext(Bigdata2Sesame2BindingSetIterator.java:188) at info.aduna.iteration.IterationWrapper.hasNext(IterationWrapper.java:68) at org.openrdf.query.QueryResults.report(QueryResults.java:155) at org.openrdf.repository.sail.SailTupleQuery.evaluate(SailTupleQuery.java:76) at com.bigdata.rdf.sail.webapp.BigdataRDFContext$TupleQueryTask.doQuery(BigdataRDFContext.java:1710) at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.innerCall(BigdataRDFContext.java:1567) at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:1532) at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:704) ... 
4 more Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=57386638-1f48-4826-966a-84a6b36b5427,bopId=1,partitionId=-1,sinkId=2,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: addr=-403683228 : cause=java.lang.IllegalStateException: Error reading from WriteCache addr: 404849739776 length: 1152, writeCacheDebug: No WriteCache debug info at com.bigdata.relation.accesspath.BlockingBuffer$BlockingIterator.checkFuture(BlockingBuffer.java:1523) at com.bigdata.relation.accesspath.BlockingBuffer$BlockingIterator._hasNext(BlockingBuffer.java:1710) at com.bigdata.relation.accesspath.BlockingBuffer$BlockingIterator.hasNext(BlockingBuffer.java:1563) at com.bigdata.striterator.AbstractChunkedResolverator._hasNext(AbstractChunkedResolverator.java:365) at com.bigdata.striterator.AbstractChunkedResolverator.hasNext(AbstractChunkedResolverator.java:341) at com.bigdata.rdf.sail.Bigdata2Sesame2BindingSetIterator.hasNext(Bigdata2Sesame2BindingSetIterator.java:134) ... 11 more Caused by: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=57386638-1f48-4826-966a-84a6b36b5427,bopId=1,partitionId=-1,sinkId=2,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: addr=-403683228 : cause=java.lang.IllegalStateException: Error reading from WriteCache addr: 404849739776 length: 1152, writeCacheDebug: No WriteCache debug info at java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.util.concurrent.FutureTask.get(FutureTask.java:192) at com.bigdata.relation.accesspath.BlockingBuffer$BlockingIterator.checkFuture(BlockingBuffer.java:1454) ... 16 more Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=57386638-1f48-4826-966a-84a6b36b5427,bopId=1,partitionId=-1,sinkId=2,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: addr=-403683228 : cause=java.lang.IllegalStateException: Error reading from WriteCache addr: 404849739776 length: 1152, writeCacheDebug: No WriteCache debug info at com.bigdata.rdf.sail.RunningQueryCloseableIterator.checkFuture(RunningQueryCloseableIterator.java:59) at com.bigdata.rdf.sail.RunningQueryCloseableIterator.close(RunningQueryCloseableIterator.java:73) at com.bigdata.rdf.sail.RunningQueryCloseableIterator.hasNext(RunningQueryCloseableIterator.java:82) at com.bigdata.striterator.ChunkedWrappedIterator.hasNext(ChunkedWrappedIterator.java:197) at com.bigdata.striterator.AbstractChunkedResolverator$ChunkConsumerTask.call(AbstractChunkedResolverator.java:222) at com.bigdata.striterator.AbstractChunkedResolverator$ChunkConsumerTask.call(AbstractChunkedResolverator.java:197) ... 
4 more Caused by: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=57386638-1f48-4826-966a-84a6b36b5427,bopId=1,partitionId=-1,sinkId=2,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: addr=-403683228 : cause=java.lang.IllegalStateException: Error reading from WriteCache addr: 404849739776 length: 1152, writeCacheDebug: No WriteCache debug info at com.bigdata.util.concurrent.Haltable.get(Haltable.java:273) at com.bigdata.bop.engine.AbstractRunningQuery.get(AbstractRunningQuery.java:1514) at com.bigdata.bop.engine.AbstractRunningQuery.get(AbstractRunningQuery.java:104) at com.bigdata.rdf.sail.RunningQueryCloseableIterator.checkFuture(RunningQueryCloseableIterator.java:46) ... 9 more Caused by: java.lang.Exception: task=ChunkTask{query=57386638-1f48-4826-966a-84a6b36b5427,bopId=1,partitionId=-1,sinkId=2,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: addr=-403683228 : cause=java.lang.IllegalStateException: Error reading from WriteCache addr: 404849739776 length: 1152, writeCacheDebug: No WriteCache debug info at com.bigdata.bop.engine.ChunkedRunningQuery$ChunkTask.call(ChunkedRunningQuery.java:1337) at com.bigdata.bop.engine.ChunkedRunningQuery$ChunkTaskWrapper.run(ChunkedRunningQuery.java:896) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at com.bigdata.concurrent.FutureTaskMon.run(FutureTaskMon.java:63) at com.bigdata.bop.engine.ChunkedRunningQuery$ChunkFutureTask.run(ChunkedRunningQuery.java:791) ... 3 more Caused by: java.util.concurrent.ExecutionException: java.lang.RuntimeException: addr=-403683228 : cause=java.lang.IllegalStateException: Error reading from WriteCache addr: 404849739776 length: 1152, writeCacheDebug: No WriteCache debug info at java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.util.concurrent.FutureTask.get(FutureTask.java:192) at com.bigdata.bop.engine.ChunkedRunningQuery$ChunkTask.call(ChunkedRunningQuery.java:1317) ... 8 more Caused by: java.lang.RuntimeException: addr=-403683228 : cause=java.lang.IllegalStateException: Error reading from WriteCache addr: 404849739776 length: 1152, writeCacheDebug: No WriteCache debug info at com.bigdata.rwstore.RWStore.getData(RWStore.java:2190) at com.bigdata.rwstore.RWStore.getData(RWStore.java:1989) at com.bigdata.rwstore.PSInputStream.<init>(PSInputStream.java:75) at com.bigdata.rwstore.RWStore.getInputStream(RWStore.java:6463) at com.bigdata.journal.RWStrategy.getInputStream(RWStrategy.java:846) at com.bigdata.bop.solutions.SolutionSetStream.get(SolutionSetStream.java:237) at com.bigdata.rdf.sparql.ast.ssets.SolutionSetManager.getSolutions(SolutionSetManager.java:556) at com.bigdata.bop.NamedSolutionSetRefUtility.getSolutionSet(NamedSolutionSetRefUtility.java:529) at com.bigdata.bop.BOpContext.getAlternateSource(BOpContext.java:752) at com.bigdata.bop.join.NestedLoopJoinOp$ChunkTask.getRightSolutions(NestedLoopJoinOp.java:263) at com.bigdata.bop.join.NestedLoopJoinOp$ChunkTask.call(NestedLoopJoinOp.java:200) at com.bigdata.bop.join.NestedLoopJoinOp$ChunkTask.call(NestedLoopJoinOp.java:166) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at com.bigdata.bop.engine.ChunkedRunningQuery$ChunkTask.call(ChunkedRunningQuery.java:1316) ... 
8 more Caused by: java.lang.IllegalStateException: Error reading from WriteCache addr: 404849739776 length: 1152, writeCacheDebug: No WriteCache debug info at com.bigdata.rwstore.RWStore.getData(RWStore.java:2112) ... 21 more Caused by: com.bigdata.util.ChecksumError: offset=404849739776,nbytes=1156,expected=-402931822,actual=1830389633 at com.bigdata.io.writecache.WriteCacheService._readFromLocalDiskIntoNewHeapByteBuffer(WriteCacheService.java:3711) at com.bigdata.io.writecache.WriteCacheService._getRecord(WriteCacheService.java:3526) at com.bigdata.io.writecache.WriteCacheService.access$2500(WriteCacheService.java:200) at com.bigdata.io.writecache.WriteCacheService$1.compute(WriteCacheService.java:3363) at com.bigdata.io.writecache.WriteCacheService$1.compute(WriteCacheService.java:3347) at com.bigdata.util.concurrent.Memoizer$1.call(Memoizer.java:77) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at com.bigdata.util.concurrent.Memoizer.compute(Memoizer.java:92) at com.bigdata.io.writecache.WriteCacheService.loadRecord(WriteCacheService.java:3468) at com.bigdata.io.writecache.WriteCacheService.read(WriteCacheService.java:3187) at com.bigdata.rwstore.RWStore.getData(RWStore.java:2106) ... 21 more |
From: Bryan T. <br...@sy...> - 2016-02-01 15:17:02
|
Response at https://jira.blazegraph.com/browse/BLZG-856 ---- Bryan Thompson Chief Scientist & Founder SYSTAP, LLC 4501 Tower Road Greensboro, NC 27410 br...@sy... http://blazegraph.com http://blog.blazegraph.com Blazegraph™ is our ultra high-performance graph database that supports both RDF/SPARQL and Tinkerpop/Blueprints APIs. Blazegraph is now available with GPU acceleration using our disruptive technology to accelerate data-parallel graph analytics and graph query. On Mon, Feb 1, 2016 at 9:25 AM, Jeremy J Carroll <jj...@sy...> wrote: > I read > > https://wiki.blazegraph.com/wiki/index.php/SPARQL_Update#INSERT_INTO_with_ORDER_BY > > and I am trying to follow. (I actually don’t care about the order, only > the splicing) > > However, I find that to finish (my variant of) the example I need to do > something like the following (where %codes is the solution set I created) > > select * > WHERE > { > { select ?s { > INCLUDE %codes > } LIMIT 100 OFFSET 5000 } > > graph </graph/abox> { > ?s sci:rxnormCd ?drug > } > graph </graph/vocabulary/nlm/rxnorm#> { > ?drug </vocabulary/nlm/rxnorm#asGeneric> ?generic > } > graph </graph/vocabulary/nlm/rxnorm_generic#> { > ?generic skos:prefLabel ?genericTerm > } > } > > where I use a subselect to do the splicing. This then does not work, I > think because of BLZG-856 > > Am I missing the intent of the example on the wiki? or does BLZG-856 > actually block it being useful. > > (I was attracted since almost all items in SPARQL are unordered, in > particular intermediate solution sets, so paging through partial solutions > is hard) > > Jeremy |
From: Jeremy J C. <jj...@sy...> - 2016-02-01 14:52:48
|
I read https://wiki.blazegraph.com/wiki/index.php/SPARQL_Update#INSERT_INTO_with_ORDER_BY and I am trying to follow it. (I actually don’t care about the order, only the splicing.) However, I find that to finish (my variant of) the example I need to do something like the following, where %codes is the solution set I created: select * WHERE { { select ?s { INCLUDE %codes } LIMIT 100 OFFSET 5000 } graph </graph/abox> { ?s sci:rxnormCd ?drug } graph </graph/vocabulary/nlm/rxnorm#> { ?drug </vocabulary/nlm/rxnorm#asGeneric> ?generic } graph </graph/vocabulary/nlm/rxnorm_generic#> { ?generic skos:prefLabel ?genericTerm } } where I use a subselect to do the splicing. This does not work, I think because of BLZG-856. Am I missing the intent of the example on the wiki, or does BLZG-856 actually block it from being useful? (I was attracted to this approach since almost all items in SPARQL are unordered, intermediate solution sets in particular, so paging through partial solutions is hard.) Jeremy |
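For context, the wiki page Jeremy cites creates the named solution set with Blazegraph's INSERT INTO update extension before it is spliced into queries. A minimal sketch of that first step, assuming the %codes name from this thread (the WHERE pattern and prefixes are illustrative, not Jeremy's actual data):

    PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
    PREFIX skos: <http://www.w3.org/2004/02/skos/core#>

    INSERT INTO %codes
    SELECT ?s
    WHERE {
      # Any pattern producing the ?s bindings to be spliced would do here.
      ?s rdf:type skos:Concept .
    }
    ORDER BY ?s

The subselect with INCLUDE %codes plus LIMIT/OFFSET is then supposed to page through that stored set; that paging step is what BLZG-856 appears to break.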
From: Jeremy C. <jj...@gm...> - 2016-01-29 20:14:55
|
I got an NPE trying to materialize a transitive closure. The vocab I am using is quite a big one. I believe this code worked on a different instance running same blazegraph version 1.5.3 I will retry … any interpretation. Strong desire not to have to start over, I have a lot of data in the journal (approx 1B triples) Jeremy 2016-01-29 20:05:29,340 [ERROR] syapse.sparql.endpoint: Query did not complete (after 22.6046221256 seconds): prefix owl: <http://www.w3.org/2002/07/owl#> prefix s: </bdm/api/> prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> prefix bds: <http://www.bigdata.com/rdf/search#> prefix xsd: <http://www.w3.org/2001/XMLSchema#> prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> prefix skos: <http://www.w3.org/2004/02/skos/core#> prefix dc: <http://purl.org/dc/elements/1.1/> prefix syapse: </graph/syapse#> INSERT { GRAPH <https://swedish.syapse.com/graph/vocabulary/nlm/rxnorm#> { ?subNew <https://swedish.syapse.com/graph/syapse#broader> ?superNew . } } WHERE { { GRAPH <https://swedish.syapse.com/graph/vocabulary/nlm/rxnorm#> { ?subNew rdf:type <http://www.w3.org/2004/02/skos/core#Concept> . } ?subNew skos:broader * ?superNew . ?superNew rdf:type <http://www.w3.org/2004/02/skos/core#Concept> . hint:Prior hint:runLast true . } UNION { GRAPH <https://swedish.syapse.com/graph/vocabulary/nlm/rxnorm#> { ?superNew rdf:type <http://www.w3.org/2004/02/skos/core#Concept> . } ?subNew skos:broader * ?superNew . ?subNew rdf:type <http://www.w3.org/2004/02/skos/core#Concept> . hint:Prior hint:runLast true . } } java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: org.openrdf.query.UpdateExecutionException: org.openrdf.query.QueryEvaluationException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=1c9bcb20-b416-4e5a-b2cf-2a062936e4f9,bopId=29,partitionId=-1,sinkId=1,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.NullPointerException: leaf=com.bigdata.btree.Leaf@c08fb46#-1396626998477973526(deleted){ isDirty=false, isDeleted=true, addr=-1396626998477973526, parent=N/A, isRoot=false, data=NA} at java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.util.concurrent.FutureTask.get(FutureTask.java:192) at com.bigdata.rdf.sail.webapp.BigdataServlet.submitApiTask(BigdataServlet.java:281) at com.bigdata.rdf.sail.webapp.QueryServlet.doSparqlUpdate(QueryServlet.java:448) at com.bigdata.rdf.sail.webapp.QueryServlet.doPost(QueryServlet.java:233) at com.bigdata.rdf.sail.webapp.RESTServlet.doPost(RESTServlet.java:248) at javax.servlet.http.HttpServlet.service(HttpServlet.java:707) at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:769) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1125) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1059) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215) at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) at org.eclipse.jetty.server.Server.handle(Server.java:497) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:248) at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:610) at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:539) at java.lang.Thread.run(Thread.java:745) Caused by: java.util.concurrent.ExecutionException: org.openrdf.query.UpdateExecutionException: org.openrdf.query.QueryEvaluationException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=1c9bcb20-b416-4e5a-b2cf-2a062936e4f9,bopId=29,partitionId=-1,sinkId=1,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.NullPointerException: leaf=com.bigdata.btree.Leaf@c08fb46#-1396626998477973526(deleted){ isDirty=false, isDeleted=true, addr=-1396626998477973526, parent=N/A, isRoot=false, data=NA} at java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.util.concurrent.FutureTask.get(FutureTask.java:192) at com.bigdata.rdf.sail.webapp.QueryServlet$SparqlUpdateTask.call(QueryServlet.java:552) at com.bigdata.rdf.sail.webapp.QueryServlet$SparqlUpdateTask.call(QueryServlet.java:460) at com.bigdata.rdf.task.ApiTaskForIndexManager.call(ApiTaskForIndexManager.java:68) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ... 1 more Caused by: org.openrdf.query.UpdateExecutionException: org.openrdf.query.QueryEvaluationException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=1c9bcb20-b416-4e5a-b2cf-2a062936e4f9,bopId=29,partitionId=-1,sinkId=1,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.NullPointerException: leaf=com.bigdata.btree.Leaf@c08fb46#-1396626998477973526(deleted){ isDirty=false, isDeleted=true, addr=-1396626998477973526, parent=N/A, isRoot=false, data=NA} at com.bigdata.rdf.sparql.ast.eval.ASTEvalHelper.executeUpdate(ASTEvalHelper.java:1099) at com.bigdata.rdf.sail.BigdataSailUpdate.execute2(BigdataSailUpdate.java:152) at com.bigdata.rdf.sail.webapp.BigdataRDFContext$UpdateTask.doQuery(BigdataRDFContext.java:1964) at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.innerCall(BigdataRDFContext.java:1567) at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:1532) at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:704) ... 
4 more Caused by: org.openrdf.query.QueryEvaluationException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=1c9bcb20-b416-4e5a-b2cf-2a062936e4f9,bopId=29,partitionId=-1,sinkId=1,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.NullPointerException: leaf=com.bigdata.btree.Leaf@c08fb46#-1396626998477973526(deleted){ isDirty=false, isDeleted=true, addr=-1396626998477973526, parent=N/A, isRoot=false, data=NA} at com.bigdata.rdf.sail.Bigdata2Sesame2BindingSetIterator.hasNext(Bigdata2Sesame2BindingSetIterator.java:188) at com.bigdata.rdf.sparql.ast.eval.ASTConstructIterator.hasNext(ASTConstructIterator.java:621) at info.aduna.iteration.IterationWrapper.hasNext(IterationWrapper.java:68) at com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate.convertDeleteInsert(AST2BOpUpdate.java:1015) at com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate.convertUpdateSwitch(AST2BOpUpdate.java:435) at com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate.convertUpdate(AST2BOpUpdate.java:288) at com.bigdata.rdf.sparql.ast.eval.ASTEvalHelper.executeUpdate(ASTEvalHelper.java:1091) ... 9 more Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=1c9bcb20-b416-4e5a-b2cf-2a062936e4f9,bopId=29,partitionId=-1,sinkId=1,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.NullPointerException: leaf=com.bigdata.btree.Leaf@c08fb46#-1396626998477973526(deleted){ isDirty=false, isDeleted=true, addr=-1396626998477973526, parent=N/A, isRoot=false, data=NA} at com.bigdata.relation.accesspath.BlockingBuffer$BlockingIterator.checkFuture(BlockingBuffer.java:1523) at com.bigdata.relation.accesspath.BlockingBuffer$BlockingIterator._hasNext(BlockingBuffer.java:1710) at com.bigdata.relation.accesspath.BlockingBuffer$BlockingIterator.hasNext(BlockingBuffer.java:1563) at com.bigdata.striterator.AbstractChunkedResolverator._hasNext(AbstractChunkedResolverator.java:365) at com.bigdata.striterator.AbstractChunkedResolverator.hasNext(AbstractChunkedResolverator.java:341) at com.bigdata.rdf.sail.Bigdata2Sesame2BindingSetIterator.hasNext(Bigdata2Sesame2BindingSetIterator.java:134) ... 15 more Caused by: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=1c9bcb20-b416-4e5a-b2cf-2a062936e4f9,bopId=29,partitionId=-1,sinkId=1,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.NullPointerException: leaf=com.bigdata.btree.Leaf@c08fb46#-1396626998477973526(deleted){ isDirty=false, isDeleted=true, addr=-1396626998477973526, parent=N/A, isRoot=false, data=NA} at java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.util.concurrent.FutureTask.get(FutureTask.java:192) at com.bigdata.relation.accesspath.BlockingBuffer$BlockingIterator.checkFuture(BlockingBuffer.java:1454) ... 
20 more Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=1c9bcb20-b416-4e5a-b2cf-2a062936e4f9,bopId=29,partitionId=-1,sinkId=1,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.NullPointerException: leaf=com.bigdata.btree.Leaf@c08fb46#-1396626998477973526(deleted){ isDirty=false, isDeleted=true, addr=-1396626998477973526, parent=N/A, isRoot=false, data=NA} at com.bigdata.rdf.sail.RunningQueryCloseableIterator.checkFuture(RunningQueryCloseableIterator.java:59) at com.bigdata.rdf.sail.RunningQueryCloseableIterator.close(RunningQueryCloseableIterator.java:73) at com.bigdata.striterator.ChunkedWrappedIterator.close(ChunkedWrappedIterator.java:180) at com.bigdata.striterator.AbstractChunkedResolverator$ChunkConsumerTask.call(AbstractChunkedResolverator.java:297) at com.bigdata.striterator.AbstractChunkedResolverator$ChunkConsumerTask.call(AbstractChunkedResolverator.java:197) ... 4 more Caused by: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=1c9bcb20-b416-4e5a-b2cf-2a062936e4f9,bopId=29,partitionId=-1,sinkId=1,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.NullPointerException: leaf=com.bigdata.btree.Leaf@c08fb46#-1396626998477973526(deleted){ isDirty=false, isDeleted=true, addr=-1396626998477973526, parent=N/A, isRoot=false, data=NA} at com.bigdata.util.concurrent.Haltable.get(Haltable.java:273) at com.bigdata.bop.engine.AbstractRunningQuery.get(AbstractRunningQuery.java:1514) at com.bigdata.bop.engine.AbstractRunningQuery.get(AbstractRunningQuery.java:104) at com.bigdata.rdf.sail.RunningQueryCloseableIterator.checkFuture(RunningQueryCloseableIterator.java:46) ... 8 more Caused by: java.lang.Exception: task=ChunkTask{query=1c9bcb20-b416-4e5a-b2cf-2a062936e4f9,bopId=29,partitionId=-1,sinkId=1,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.NullPointerException: leaf=com.bigdata.btree.Leaf@c08fb46#-1396626998477973526(deleted){ isDirty=false, isDeleted=true, addr=-1396626998477973526, parent=N/A, isRoot=false, data=NA} at com.bigdata.bop.engine.ChunkedRunningQuery$ChunkTask.call(ChunkedRunningQuery.java:1337) at com.bigdata.bop.engine.ChunkedRunningQuery$ChunkTaskWrapper.run(ChunkedRunningQuery.java:896) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at com.bigdata.concurrent.FutureTaskMon.run(FutureTaskMon.java:63) at com.bigdata.bop.engine.ChunkedRunningQuery$ChunkFutureTask.run(ChunkedRunningQuery.java:791) ... 3 more Caused by: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.NullPointerException: leaf=com.bigdata.btree.Leaf@c08fb46#-1396626998477973526(deleted){ isDirty=false, isDeleted=true, addr=-1396626998477973526, parent=N/A, isRoot=false, data=NA} at java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.util.concurrent.FutureTask.get(FutureTask.java:192) at com.bigdata.bop.engine.ChunkedRunningQuery$ChunkTask.call(ChunkedRunningQuery.java:1317) ... 
8 more Caused by: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.NullPointerException: leaf=com.bigdata.btree.Leaf@c08fb46#-1396626998477973526(deleted){ isDirty=false, isDeleted=true, addr=-1396626998477973526, parent=N/A, isRoot=false, data=NA} at com.bigdata.bop.join.PipelineJoin$JoinTask.call(PipelineJoin.java:643) at com.bigdata.bop.join.PipelineJoin$JoinTask.call(PipelineJoin.java:343) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at com.bigdata.concurrent.FutureTaskMon.run(FutureTaskMon.java:63) at com.bigdata.bop.engine.ChunkedRunningQuery$ChunkTask.call(ChunkedRunningQuery.java:1316) ... 8 more Caused by: java.lang.RuntimeException: java.lang.NullPointerException: leaf=com.bigdata.btree.Leaf@c08fb46#-1396626998477973526(deleted){ isDirty=false, isDeleted=true, addr=-1396626998477973526, parent=N/A, isRoot=false, data=NA} at com.bigdata.bop.join.PipelineJoin$JoinTask$BindingSetConsumerTask.call(PipelineJoin.java:988) at com.bigdata.bop.join.PipelineJoin$JoinTask.consumeSource(PipelineJoin.java:700) at com.bigdata.bop.join.PipelineJoin$JoinTask.call(PipelineJoin.java:584) ... 12 more Caused by: java.lang.NullPointerException: leaf=com.bigdata.btree.Leaf@c08fb46#-1396626998477973526(deleted){ isDirty=false, isDeleted=true, addr=-1396626998477973526, parent=N/A, isRoot=false, data=NA} at com.bigdata.btree.Leaf.getKeys(Leaf.java:180) at com.bigdata.btree.Leaf.indexOf(Leaf.java:873) at com.bigdata.btree.Node.indexOf(Node.java:988) at com.bigdata.btree.Node.indexOf(Node.java:988) at com.bigdata.btree.Node.indexOf(Node.java:988) at com.bigdata.btree.AbstractBTree.rangeCount(AbstractBTree.java:2636) at com.bigdata.btree.UnisolatedReadWriteIndex.rangeCount(UnisolatedReadWriteIndex.java:442) at com.bigdata.relation.accesspath.AccessPath.historicalRangeCount(AccessPath.java:1418) at com.bigdata.relation.accesspath.AccessPath.rangeCount(AccessPath.java:1386) at com.bigdata.bop.join.PipelineJoin$JoinTask$AccessPathTask.call(PipelineJoin.java:1620) at com.bigdata.bop.join.PipelineJoin$JoinTask$BindingSetConsumerTask.executeTasks(PipelineJoin.java:1353) at com.bigdata.bop.join.PipelineJoin$JoinTask$BindingSetConsumerTask.call(PipelineJoin.java:977) ... 14 more Query: base <https://swedish.syapse.com/> prefix owl: <http://www.w3.org/2002/07/owl#> prefix s: </bdm/api/> prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> prefix bds: <http://www.bigdata.com/rdf/search#> prefix xsd: <http://www.w3.org/2001/XMLSchema#> prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> prefix skos: <http://www.w3.org/2004/02/skos/core#> prefix dc: <http://purl.org/dc/elements/1.1/> prefix syapse: </graph/syapse#> INSERT { GRAPH <https://swedish.syapse.com/graph/vocabulary/nlm/rxnorm#> { ?subNew <https://swedish.syapse.com/graph/syapse#broader> ?superNew . } } WHERE { { GRAPH <https://swedish.syapse.com/graph/vocabulary/nlm/rxnorm#> { ?subNew rdf:type <http://www.w3.org/2004/02/skos/core#Concept> . } ?subNew skos:broader * ?superNew . ?superNew rdf:type <http://www.w3.org/2004/02/skos/core#Concept> . hint:Prior hint:runLast true . } UNION { GRAPH <https://swedish.syapse.com/graph/vocabulary/nlm/rxnorm#> { ?superNew rdf:type <http://www.w3.org/2004/02/skos/core#Concept> . } ?subNew skos:broader * ?superNew . ?subNew rdf:type <http://www.w3.org/2004/02/skos/core#Concept> . hint:Prior hint:runLast true . 
} } 2016-01-29 20:05:29,558 [ERROR] syapse.apps.vocab.service: Error during load of vocab from file '/tmp/rxnorm-nlmSsZbQ5-20160128194506'; cleaning up by removing graph 'rxnorm-nlm' Traceback (most recent call last): File "/home/ubuntu/webapps/syapse/src/syapse/python/syapse/apps/vocab/service.py", line 200, in load_from_service load_from_file(local_file) File "/home/ubuntu/webapps/syapse/src/syapse/python/syapse/apps/vocab/service.py", line 148, in load_from_file infer_properties_for_graph(graph_name) File "/home/ubuntu/webapps/syapse/src/syapse/python/syapse/sem/named_graphs/loader.py", line 122, in infer_properties_for_graph STANDARD_PREFIX_MAP.shrink(row.target)) File "/home/ubuntu/webapps/syapse/src/syapse/python/syapse/sem/named_graphs/inferences.py", line 198, in materialize_inferences timeouts=settings.SPARQL_VERY_LONG_TIMEOUTS) File "/home/ubuntu/webapps/syapse/src/syapse/python/syapse/sparql/endpoint.py", line 432, in raw_update prefixed_update, sparql_endpoint=sparql_endpoint) File "/home/ubuntu/webapps/syapse/src/syapse/python/syapse/sparql/endpoint.py", line 213, in __call__ result = self.call(*args, **kwargs) File "/home/ubuntu/webapps/syapse/src/syapse/python/syapse/sparql/endpoint.py", line 319, in call result = super(CallRawUpdate, self).call(*args, **kwargs) File "/home/ubuntu/webapps/syapse/src/syapse/python/syapse/sparql/endpoint.py", line 307, in call rst = method(query, default_graph=named_graphs, named_graph=named_graphs) File "/home/ubuntu/webapps/syapse/src/pymantic/pymantic/sparql.py", line 210, in update return _Update(self, sparql, **kwargs).execute() File "/home/ubuntu/webapps/syapse/src/pymantic/pymantic/sparql.py", line 95, in execute (response.headers, response.content, self.sparql)) SPARQLQueryException: {'transfer-encoding': 'chunked', 'content-type': 'text/plain', 'server': 'Jetty(9.2.3.v20140905)'}: SPARQL-UPDATE: updateStr=base <https://swedish.syapse.com/> prefix owl: <http://www.w3.org/2002/07/owl#> prefix s: </bdm/api/> prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> prefix bds: <http://www.bigdata.com/rdf/search#> prefix xsd: <http://www.w3.org/2001/XMLSchema#> prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> prefix skos: <http://www.w3.org/2004/02/skos/core#> prefix dc: <http://purl.org/dc/elements/1.1/> prefix syapse: </graph/syapse#> INSERT { GRAPH <https://swedish.syapse.com/graph/vocabulary/nlm/rxnorm#> { ?subNew <https://swedish.syapse.com/graph/syapse#broader> ?superNew . } } WHERE { { GRAPH <https://swedish.syapse.com/graph/vocabulary/nlm/rxnorm#> { ?subNew rdf:type <http://www.w3.org/2004/02/skos/core#Concept> . } ?subNew skos:broader * ?superNew . ?superNew rdf:type <http://www.w3.org/2004/02/skos/core#Concept> . hint:Prior hint:runLast true . } UNION { GRAPH <https://swedish.syapse.com/graph/vocabulary/nlm/rxnorm#> { ?superNew rdf:type <http://www.w3.org/2004/02/skos/core#Concept> . } ?subNew skos:broader * ?superNew . ?subNew rdf:type <http://www.w3.org/2004/02/skos/core#Concept> . hint:Prior hint:runLast true . 
} } java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: org.openrdf.query.UpdateExecutionException: org.openrdf.query.QueryEvaluationException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=1c9bcb20-b416-4e5a-b2cf-2a062936e4f9,bopId=29,partitionId=-1,sinkId=1,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.NullPointerException: leaf=com.bigdata.btree.Leaf@c08fb46#-1396626998477973526(deleted){ isDirty=false, isDeleted=true, addr=-1396626998477973526, parent=N/A, isRoot=false, data=NA} at java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.util.concurrent.FutureTask.get(FutureTask.java:192) at com.bigdata.rdf.sail.webapp.BigdataServlet.submitApiTask(BigdataServlet.java:281) at com.bigdata.rdf.sail.webapp.QueryServlet.doSparqlUpdate(QueryServlet.java:448) at com.bigdata.rdf.sail.webapp.QueryServlet.doPost(QueryServlet.java:233) at com.bigdata.rdf.sail.webapp.RESTServlet.doPost(RESTServlet.java:248) at javax.servlet.http.HttpServlet.service(HttpServlet.java:707) at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:769) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1125) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1059) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215) at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) at org.eclipse.jetty.server.Server.handle(Server.java:497) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:248) at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:610) at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:539) at java.lang.Thread.run(Thread.java:745) Caused by: java.util.concurrent.ExecutionException: org.openrdf.query.UpdateExecutionException: org.openrdf.query.QueryEvaluationException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=1c9bcb20-b416-4e5a-b2cf-2a062936e4f9,bopId=29,partitionId=-1,sinkId=1,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.NullPointerException: leaf=com.bigdata.btree.Leaf@c08fb46#-1396626998477973526(deleted){ isDirty=false, isDeleted=true, addr=-1396626998477973526, 
parent=N/A, isRoot=false, data=NA} at java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.util.concurrent.FutureTask.get(FutureTask.java:192) at com.bigdata.rdf.sail.webapp.QueryServlet$SparqlUpdateTask.call(QueryServlet.java:552) at com.bigdata.rdf.sail.webapp.QueryServlet$SparqlUpdateTask.call(QueryServlet.java:460) at com.bigdata.rdf.task.ApiTaskForIndexManager.call(ApiTaskForIndexManager.java:68) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ... 1 more Caused by: org.openrdf.query.UpdateExecutionException: org.openrdf.query.QueryEvaluationException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=1c9bcb20-b416-4e5a-b2cf-2a062936e4f9,bopId=29,partitionId=-1,sinkId=1,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.NullPointerException: leaf=com.bigdata.btree.Leaf@c08fb46#-1396626998477973526(deleted){ isDirty=false, isDeleted=true, addr=-1396626998477973526, parent=N/A, isRoot=false, data=NA} at com.bigdata.rdf.sparql.ast.eval.ASTEvalHelper.executeUpdate(ASTEvalHelper.java:1099) at com.bigdata.rdf.sail.BigdataSailUpdate.execute2(BigdataSailUpdate.java:152) at com.bigdata.rdf.sail.webapp.BigdataRDFContext$UpdateTask.doQuery(BigdataRDFContext.java:1964) at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.innerCall(BigdataRDFContext.java:1567) at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:1532) at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:704) ... 4 more Caused by: org.openrdf.query.QueryEvaluationException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=1c9bcb20-b416-4e5a-b2cf-2a062936e4f9,bopId=29,partitionId=-1,sinkId=1,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.NullPointerException: leaf=com.bigdata.btree.Leaf@c08fb46#-1396626998477973526(deleted){ isDirty=false, isDeleted=true, addr=-1396626998477973526, parent=N/A, isRoot=false, data=NA} at com.bigdata.rdf.sail.Bigdata2Sesame2BindingSetIterator.hasNext(Bigdata2Sesame2BindingSetIterator.java:188) at com.bigdata.rdf.sparql.ast.eval.ASTConstructIterator.hasNext(ASTConstructIterator.java:621) at info.aduna.iteration.IterationWrapper.hasNext(IterationWrapper.java:68) at com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate.convertDeleteInsert(AST2BOpUpdate.java:1015) at com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate.convertUpdateSwitch(AST2BOpUpdate.java:435) at com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate.convertUpdate(AST2BOpUpdate.java:288) at com.bigdata.rdf.sparql.ast.eval.ASTEvalHelper.executeUpdate(ASTEvalHelper.java:1091) ... 
9 more Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=1c9bcb20-b416-4e5a-b2cf-2a062936e4f9,bopId=29,partitionId=-1,sinkId=1,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.NullPointerException: leaf=com.bigdata.btree.Leaf@c08fb46#-1396626998477973526(deleted){ isDirty=false, isDeleted=true, addr=-1396626998477973526, parent=N/A, isRoot=false, data=NA} at com.bigdata.relation.accesspath.BlockingBuffer$BlockingIterator.checkFuture(BlockingBuffer.java:1523) at com.bigdata.relation.accesspath.BlockingBuffer$BlockingIterator._hasNext(BlockingBuffer.java:1710) at com.bigdata.relation.accesspath.BlockingBuffer$BlockingIterator.hasNext(BlockingBuffer.java:1563) at com.bigdata.striterator.AbstractChunkedResolverator._hasNext(AbstractChunkedResolverator.java:365) at com.bigdata.striterator.AbstractChunkedResolverator.hasNext(AbstractChunkedResolverator.java:341) at com.bigdata.rdf.sail.Bigdata2Sesame2BindingSetIterator.hasNext(Bigdata2Sesame2BindingSetIterator.java:134) ... 15 more Caused by: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=1c9bcb20-b416-4e5a-b2cf-2a062936e4f9,bopId=29,partitionId=-1,sinkId=1,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.NullPointerException: leaf=com.bigdata.btree.Leaf@c08fb46#-1396626998477973526(deleted){ isDirty=false, isDeleted=true, addr=-1396626998477973526, parent=N/A, isRoot=false, data=NA} at java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.util.concurrent.FutureTask.get(FutureTask.java:192) at com.bigdata.relation.accesspath.BlockingBuffer$BlockingIterator.checkFuture(BlockingBuffer.java:1454) ... 20 more Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=1c9bcb20-b416-4e5a-b2cf-2a062936e4f9,bopId=29,partitionId=-1,sinkId=1,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.NullPointerException: leaf=com.bigdata.btree.Leaf@c08fb46#-1396626998477973526(deleted){ isDirty=false, isDeleted=true, addr=-1396626998477973526, parent=N/A, isRoot=false, data=NA} at com.bigdata.rdf.sail.RunningQueryCloseableIterator.checkFuture(RunningQueryCloseableIterator.java:59) at com.bigdata.rdf.sail.RunningQueryCloseableIterator.close(RunningQueryCloseableIterator.java:73) at com.bigdata.striterator.ChunkedWrappedIterator.close(ChunkedWrappedIterator.java:180) at com.bigdata.striterator.AbstractChunkedResolverator$ChunkConsumerTask.call(AbstractChunkedResolverator.java:297) at com.bigdata.striterator.AbstractChunkedResolverator$ChunkConsumerTask.call(AbstractChunkedResolverator.java:197) ... 
4 more Caused by: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=1c9bcb20-b416-4e5a-b2cf-2a062936e4f9,bopId=29,partitionId=-1,sinkId=1,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.NullPointerException: leaf=com.bigdata.btree.Leaf@c08fb46#-1396626998477973526(deleted){ isDirty=false, isDeleted=true, addr=-1396626998477973526, parent=N/A, isRoot=false, data=NA} at com.bigdata.util.concurrent.Haltable.get(Haltable.java:273) at com.bigdata.bop.engine.AbstractRunningQuery.get(AbstractRunningQuery.java:1514) at com.bigdata.bop.engine.AbstractRunningQuery.get(AbstractRunningQuery.java:104) at com.bigdata.rdf.sail.RunningQueryCloseableIterator.checkFuture(RunningQueryCloseableIterator.java:46) ... 8 more Caused by: java.lang.Exception: task=ChunkTask{query=1c9bcb20-b416-4e5a-b2cf-2a062936e4f9,bopId=29,partitionId=-1,sinkId=1,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.NullPointerException: leaf=com.bigdata.btree.Leaf@c08fb46#-1396626998477973526(deleted){ isDirty=false, isDeleted=true, addr=-1396626998477973526, parent=N/A, isRoot=false, data=NA} at com.bigdata.bop.engine.ChunkedRunningQuery$ChunkTask.call(ChunkedRunningQuery.java:1337) at com.bigdata.bop.engine.ChunkedRunningQuery$ChunkTaskWrapper.run(ChunkedRunningQuery.java:896) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at com.bigdata.concurrent.FutureTaskMon.run(FutureTaskMon.java:63) at com.bigdata.bop.engine.ChunkedRunningQuery$ChunkFutureTask.run(ChunkedRunningQuery.java:791) ... 3 more Caused by: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.NullPointerException: leaf=com.bigdata.btree.Leaf@c08fb46#-1396626998477973526(deleted){ isDirty=false, isDeleted=true, addr=-1396626998477973526, parent=N/A, isRoot=false, data=NA} at java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.util.concurrent.FutureTask.get(FutureTask.java:192) at com.bigdata.bop.engine.ChunkedRunningQuery$ChunkTask.call(ChunkedRunningQuery.java:1317) ... 8 more Caused by: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.NullPointerException: leaf=com.bigdata.btree.Leaf@c08fb46#-1396626998477973526(deleted){ isDirty=false, isDeleted=true, addr=-1396626998477973526, parent=N/A, isRoot=false, data=NA} at com.bigdata.bop.join.PipelineJoin$JoinTask.call(PipelineJoin.java:643) at com.bigdata.bop.join.PipelineJoin$JoinTask.call(PipelineJoin.java:343) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at com.bigdata.concurrent.FutureTaskMon.run(FutureTaskMon.java:63) at com.bigdata.bop.engine.ChunkedRunningQuery$ChunkTask.call(ChunkedRunningQuery.java:1316) ... 8 more Caused by: java.lang.RuntimeException: java.lang.NullPointerException: leaf=com.bigdata.btree.Leaf@c08fb46#-1396626998477973526(deleted){ isDirty=false, isDeleted=true, addr=-1396626998477973526, parent=N/A, isRoot=false, data=NA} at com.bigdata.bop.join.PipelineJoin$JoinTask$BindingSetConsumerTask.call(PipelineJoin.java:988) at com.bigdata.bop.join.PipelineJoin$JoinTask.consumeSource(PipelineJoin.java:700) at com.bigdata.bop.join.PipelineJoin$JoinTask.call(PipelineJoin.java:584) ... 
12 more Caused by: java.lang.NullPointerException: leaf=com.bigdata.btree.Leaf@c08fb46#-1396626998477973526(deleted){ isDirty=false, isDeleted=true, addr=-1396626998477973526, parent=N/A, isRoot=false, data=NA} at com.bigdata.btree.Leaf.getKeys(Leaf.java:180) at com.bigdata.btree.Leaf.indexOf(Leaf.java:873) at com.bigdata.btree.Node.indexOf(Node.java:988) at com.bigdata.btree.Node.indexOf(Node.java:988) at com.bigdata.btree.Node.indexOf(Node.java:988) at com.bigdata.btree.AbstractBTree.rangeCount(AbstractBTree.java:2636) at com.bigdata.btree.UnisolatedReadWriteIndex.rangeCount(UnisolatedReadWriteIndex.java:442) at com.bigdata.relation.accesspath.AccessPath.historicalRangeCount(AccessPath.java:1418) at com.bigdata.relation.accesspath.AccessPath.rangeCount(AccessPath.java:1386) at com.bigdata.bop.join.PipelineJoin$JoinTask$AccessPathTask.call(PipelineJoin.java:1620) at com.bigdata.bop.join.PipelineJoin$JoinTask$BindingSetConsumerTask.executeTasks(PipelineJoin.java:1353) at com.bigdata.bop.join.PipelineJoin$JoinTask$BindingSetConsumerTask.call(PipelineJoin.java:977) ... 14 more Query: base <https://swedish.syapse.com/> prefix owl: <http://www.w3.org/2002/07/owl#> prefix s: </bdm/api/> prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> prefix bds: <http://www.bigdata.com/rdf/search#> prefix xsd: <http://www.w3.org/2001/XMLSchema#> prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> prefix skos: <http://www.w3.org/2004/02/skos/core#> prefix dc: <http://purl.org/dc/elements/1.1/> prefix syapse: </graph/syapse#> INSERT { GRAPH <https://swedish.syapse.com/graph/vocabulary/nlm/rxnorm#> { ?subNew <https://swedish.syapse.com/graph/syapse#broader> ?superNew . } } WHERE { { GRAPH <https://swedish.syapse.com/graph/vocabulary/nlm/rxnorm#> { ?subNew rdf:type <http://www.w3.org/2004/02/skos/core#Concept> . } ?subNew skos:broader * ?superNew . ?superNew rdf:type <http://www.w3.org/2004/02/skos/core#Concept> . hint:Prior hint:runLast true . } UNION { GRAPH <https://swedish.syapse.com/graph/vocabulary/nlm/rxnorm#> { ?superNew rdf:type <http://www.w3.org/2004/02/skos/core#Concept> . } ?subNew skos:broader * ?superNew . ?subNew rdf:type <http://www.w3.org/2004/02/skos/core#Concept> . hint:Prior hint:runLast true . } } |
From: Brad B. <be...@sy...> - 2016-01-27 05:40:15
|
Blazegraphers, Blazegraph 2.0.0 is now available on Maven Central: https://github.com/blazegraph/database/tree/BLAZEGRAPH_RELEASE_2_0_0#maven-central. The artifacts are also hosted on Sourceforge. We'll put out the blog posts and updates later this week. You may also be interested in the other artifacts now available on Maven Central or via GitHub. o Tinkerpop3 1.0.0 implementation for Blazegraph 2.0.0: https://github.com/blazegraph/tinkerpop3 o Updated Samples with Custom Functions, Rules, etc.: https://github.com/blazegraph/blazegraph-samples o Blazegraph-based Triple Pattern Fragment Server (thanks Olaf!): https://github.com/blazegraph/BlazegraphBasedTPFServer Regards, --Brad -- _______________ Brad Bebee CEO, Managing Partner SYSTAP, LLC e: be...@sy... m: 202.642.7961 f: 571.367.5000 w: www.blazegraph.com Blazegraph™ is our ultra high-performance graph database that supports both RDF/SPARQL and Tinkerpop/Blueprints APIs. Mapgraph™ (http://www.systap.com/mapgraph) is our disruptive new technology to use GPUs to accelerate data-parallel graph analytics. |
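With the artifacts on Maven Central, depending on the release becomes a single pom.xml entry, along these lines (a sketch assuming the com.blazegraph group id and the bigdata-runtime aggregate artifact discussed elsewhere in this thread):

    <dependency>
        <groupId>com.blazegraph</groupId>
        <artifactId>bigdata-runtime</artifactId>
        <version>2.0.0</version>
    </dependency>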
From: Stas M. <sma...@wi...> - 2016-01-20 17:58:51
|
Hi! > I am back in the office. I can't test this example query from Wikidata. > > SELECT ?band ?bandLabel WHERE { > ?band wdt:P31 wd:Q5741069 . > > SERVICE wikibase:label { > bd:serviceParam wikibase:language "en,fr,de,he,el,fi,no,ja" . > ?band rdfs:label ?bandLabel . > } > } limit 100 > > Apparently there is a bug in the Blazegraph 1.5.3 whitelist mechanism that > prohibits me from using this service. That query works for me on query.wikidata.org, which runs 1.5.3. What is the problem you are encountering? -- Stas Malyshev sma...@wi... |
From: Joakim S. <joa...@bl...> - 2016-01-20 17:06:16
|
Hi Brad,

I am back in office. I can't test this example query from wikidata.

SELECT ?band ?bandLabel WHERE {
  ?band wdt:P31 wd:Q5741069 .

  SERVICE wikibase:label {
    bd:serviceParam wikibase:language "en,fr,de,he,el,fi,no,ja" .
    ?band rdfs:label ?bandLabel .
  }
} limit 100

Apparently there is a bug in the Blazegraph 1.5.3 whitelist mechanism that
prohibits me from using this service.

Still waiting to get 2.0.0 on the Maven repo.

/Joakim

> On Jan 5, 2016, at 7:51 PM, Brad Bebee <be...@sy...> wrote:
>
> Joakim,
>
> The RC1 has not yet been pushed to Maven Central. It is available on the
> legacy SYSTAP releases maven repository. With the maven updates for 2.0,
> you can use the com.blazegraph.bigdata-runtime artifact to pull in just
> the Blazegraph-specific classes. Take a look at the pom.xml for
> blazegraph-deb [1]. You'll need to include the repositories listed there
> as well currently. The blazegraph-parent/pom.xml has the properties for
> the required versions, which you can pull into your POM as needed [2].
>
> This should definitely get easier soon, once we make the official
> release. Let us know how it goes.
>
> Thanks, --Brad
>
> [1] https://github.com/blazegraph/database/blob/master/blazegraph-deb/pom.xml
>
> [2] https://github.com/blazegraph/database/blob/master/blazegraph-parent/pom.xml
|
From: Brad B. <be...@sy...> - 2016-01-14 05:16:42
|
Blazegraphers,

We're getting fairly close to the 2.0.0 release. We have found a few minor
issues with the 2.0.0 RC1 release, but it has generally been fairly stable.
If you've tried it and found anything, please let us know on the mailing
list and/or via JIRA (https://jira.blazegraph.com/).

Cheers, --Brad

--
_______________
Brad Bebee
CEO, Managing Partner
SYSTAP, LLC
e: be...@sy...
w: www.blazegraph.com
|
From: Stas M. <sma...@wi...> - 2016-01-07 00:29:46
|
Hi!

> It is going to be more efficient to have the lookup by namespace in a
> map rather than scanning a list, which is why I made this change. At the
> time that I made it I was not thinking about the impact on the WDQS.
> Can you split up your handler into one handler per namespace, or does
> that cause problems?

I think I can split them and override init() so that it won't complain
about a missing vocabulary type; I just wondered if there was a better way.

Thanks,
--
Stas Malyshev
sma...@wi...
|
From: Bryan T. <br...@sy...> - 2016-01-07 00:03:06
|
It is going to be more efficient to have the lookup by namespace in a map
rather than scanning a list, which is why I made this change. At the time
that I made it I was not thinking about the impact on the WDQS. Can you
split up your handler into one handler per namespace, or does that cause
problems?

Thanks,
Bryan

----
Bryan Thompson
Chief Scientist & Founder
SYSTAP, LLC
br...@sy...
http://blazegraph.com

On Wed, Jan 6, 2016 at 6:37 PM, Stas Malyshev <sma...@wi...> wrote:

> Hi!
>
> > Could you register the same handler multiple times, once for each
> > namespace?
>
> No, because it indexes by handler.getNamespace() and this can return
> only one value:
>
>     protected void addHandler(final InlineURIHandler handler) {
>         // this.handlers.add(handler);
>         handlersByNamespace.put(handler.getNamespace(), handler);
>     }
>
> --
> Stas Malyshev
> sma...@wi...
|
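To make Bryan's suggestion concrete, here is a minimal sketch of a factory that registers one handler instance per namespace, so the map lookup keeps working for both URI schemes. InlineUUIDURIHandler stands in for whatever handler logic applies in your case; the assumption, following the pattern of the built-in handlers, is that a handler takes its namespace prefix as a constructor argument.

    import com.bigdata.rdf.internal.InlineURIFactory;
    import com.bigdata.rdf.internal.InlineUUIDURIHandler;

    public class DualSchemeInlineURIFactory extends InlineURIFactory {

        public DualSchemeInlineURIFactory() {
            // One handler instance per namespace prefix. Both instances
            // apply the same inlining logic, so the http:// and https://
            // forms of a URI are inlined alike.
            addHandler(new InlineUUIDURIHandler("http://example.org/id/"));
            addHandler(new InlineUUIDURIHandler("https://example.org/id/"));
        }
    }

Both prefixes would still need to be declared in the vocabulary, which is the init() assumption Stas mentions, and the factory wired in via the usual inline-URI-factory configuration property for the namespace.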
From: Stas M. <sma...@wi...> - 2016-01-06 23:37:21
|
Hi!

> Could you register the same handler multiple times, once for each
> namespace?

No, because it indexes by handler.getNamespace() and this can return only
one value:

    protected void addHandler(final InlineURIHandler handler) {
        // this.handlers.add(handler);
        handlersByNamespace.put(handler.getNamespace(), handler);
    }

--
Stas Malyshev
sma...@wi...
|
From: Bryan T. <br...@sy...> - 2016-01-06 23:18:51
|
Could you register the same handler multiple times, once for each namespace?

On Wednesday, January 6, 2016, Stas Malyshev <sma...@wi...> wrote:

> Hi!
>
> I've noticed that in 2.0 the InlineURIFactory code changed: previously
> the handlers were stored in a List, now they are stored in a map indexed
> by namespace. This change, however, prevents one from having a handler
> that would apply to more than one prefix (such as one applying to both
> http:// and https:// URIs, for example). Also, it kind of assumes in
> init() that the prefix and the vocabulary namespace are the same.
> I could hack around this, but I wonder what is the recommended way of
> handling this?
>
> --
> Stas Malyshev
> sma...@wi...

--
----
Bryan Thompson
Chief Scientist & Founder
SYSTAP, LLC
br...@sy...
http://blazegraph.com
|
From: Stas M. <sma...@wi...> - 2016-01-06 22:47:04
|
Hi!

I've noticed that in 2.0 the InlineURIFactory code changed: previously the
handlers were stored in a List, now they are stored in a map indexed by
namespace. This change, however, prevents one from having a handler that
would apply to more than one prefix (such as one applying to both http://
and https:// URIs, for example). Also, it kind of assumes in init() that
the prefix and the vocabulary namespace are the same. I could hack around
this, but I wonder what is the recommended way of handling this?

--
Stas Malyshev
sma...@wi...
|
From: Brad B. <be...@sy...> - 2016-01-06 06:28:14
|
Joakim,

I pushed a blazegraph-samples branch out on GitHub that has the POM
examples for the 2.0.0-RC1: https://github.com/SYSTAP/blazegraph-samples/tree/2.0.0_maven/.
It's still a work in progress, as there are some example test failures,
but it should give you a good example of including bigdata-runtime and
then the external dependencies.

Thanks, --Brad

On Tue, Jan 5, 2016 at 10:51 PM, Brad Bebee <be...@sy...> wrote:

> Joakim,
>
> The RC1 has not yet been pushed to Maven Central. It is available on the
> legacy SYSTAP releases maven repository. With the maven updates for 2.0,
> you can use the com.blazegraph.bigdata-runtime artifact to pull in just
> the Blazegraph-specific classes. Take a look at the pom.xml for
> blazegraph-deb [1]. You'll need to include the repositories listed there
> as well currently. The blazegraph-parent/pom.xml has the properties for
> the required versions, which you can pull into your POM as needed [2].
>
> This should definitely get easier soon, once we make the official
> release. Let us know how it goes.
>
> Thanks, --Brad
>
> [1] https://github.com/blazegraph/database/blob/master/blazegraph-deb/pom.xml
>
> [2] https://github.com/blazegraph/database/blob/master/blazegraph-parent/pom.xml

--
_______________
Brad Bebee
CEO, Managing Partner
SYSTAP, LLC
e: be...@sy...
w: www.blazegraph.com
|
From: Brad B. <be...@sy...> - 2016-01-06 03:51:42
|
Joakim,

The RC1 has not yet been pushed to Maven Central. It is available on the
legacy SYSTAP releases maven repository. With the maven updates for 2.0,
you can use the com.blazegraph.bigdata-runtime artifact to pull in just
the Blazegraph-specific classes. Take a look at the pom.xml for
blazegraph-deb [1]. You'll need to include the repositories listed there
as well currently. The blazegraph-parent/pom.xml has the properties for
the required versions, which you can pull into your POM as needed [2].

This should definitely get easier soon, once we make the official release.
Let us know how it goes.

Thanks, --Brad

[1] https://github.com/blazegraph/database/blob/master/blazegraph-deb/pom.xml

[2] https://github.com/blazegraph/database/blob/master/blazegraph-parent/pom.xml

On Tue, Jan 5, 2016 at 4:16 PM, Joakim Soderberg <joa...@bl...> wrote:

> Hi,
> Can I get the new Blazegraph 2.0.0 from Maven? I am searching for
> blazegraph and bigdata in the maven repo but can't find it.
>
> Basically, what I would like to know is what to put in my pom.xml file
> to get 2.0.0?
>
> /Joakim

--
_______________
Brad Bebee
CEO, Managing Partner
SYSTAP, LLC
e: be...@sy...
w: www.blazegraph.com
|
From: Joakim S. <joa...@bl...> - 2016-01-05 21:16:11
|
Hi,
Can I get the new Blazegraph 2.0.0 from Maven? I am searching for
blazegraph and bigdata in the maven repo but can't find it.

Basically, what I would like to know is what to put in my pom.xml file to
get 2.0.0?

/Joakim

> On Dec 22, 2015, at 12:59 PM, Brad Bebee <be...@sy...> wrote:
>
> Blazegraphers,
>
> 2.0.0 RC1 is available. See [1] for more details. We wanted to give the
> Blazegraph community a chance to take a peek at 2.0 and give us feedback.
>
> [1] https://blog.blazegraph.com/?p=977
|
From: Jem R. <jem...@ft...> - 2016-01-04 17:47:04
|
Hello,

Is there a mechanism to retrieve entailments which are inferred only by
statements with a particular reification graph? Is it possible to retrieve
inferred statements for statements with RDR reification at all?

For example:

Insert -->

# Transitive inference ->
<http://www.ft.com/ontology/thing/prefLabel> rdf:type owl:DatatypeProperty ;
    rdfs:subPropertyOf thing:label,

# Insert a label with RDR prov:
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix thing: <http://www.ft.com/ontology/thing/> .
@prefix industry: <http://www.ft.com/ontology/industry/> .
@prefix event: <http://www.ft.com/ontology/event/> .

<<<http://api.ft.com/things/123> rdf:type industry:IndustryClassification>>
    event:prov <http://api.ft.com/things/345> .
<<<http://api.ft.com/things/123> thing:prefLabel "Oil & Gas EDIT"@en-US >>
    event:prov <http://api.ft.com/things/345> .
<http://api.ft.com/things/345> rdfs:label "prov one" .

# Insert a label with RDR prov:
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix thing: <http://www.ft.com/ontology/thing/> .
@prefix industry: <http://www.ft.com/ontology/industry/> .
@prefix event: <http://www.ft.com/ontology/event/> .

<<<http://api.ft.com/things/123> rdf:type industry:IndustryClassification>>
    event:prov <http://api.ft.com/things/3456> .
<<<http://api.ft.com/things/123> thing:prefLabel "Oil & Gas EDIT"@en-US >>
    event:prov <http://api.ft.com/things/3456> .
<http://api.ft.com/things/3456> rdfs:label "prov two" .

Query -->

PREFIX thing: <http://www.ft.com/ontology/thing/>
PREFIX event: <http://www.ft.com/ontology/event/>

SELECT ?thing ?label
WHERE {
  <<?thing thing:label ?label>> event:prov <http://api.ft.com/things/345> .
}

Retrieve ?label = "Oil & Gas"

Any thoughts appreciated.

Cheers,
Jem

--
Jem Rayfield
Head of Solution Architecture
Technology
Financial Times
|
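As background for reproducing Jem's setup: RDR statements like those above require a triples-mode namespace with statement identifiers enabled; RDR is not available in quads mode. A minimal sketch of an embedded configuration follows. The property keys are given as I recall the AbstractTripleStore/Journal options and the journal path is a placeholder, so verify both against your Blazegraph version. Whether inferred statements can then be filtered by RDR provenance is exactly the open question here.

    import java.util.Properties;

    import com.bigdata.rdf.sail.BigdataSail;

    public class RdrSetup {
        public static void main(final String[] args) throws Exception {
            final Properties props = new Properties();
            // Placeholder journal file:
            props.setProperty("com.bigdata.journal.AbstractJournal.file",
                    "/tmp/rdr.jnl");
            // RDR requires triples mode with statement identifiers (SIDs):
            props.setProperty(
                    "com.bigdata.rdf.store.AbstractTripleStore.quads", "false");
            props.setProperty(
                    "com.bigdata.rdf.store.AbstractTripleStore.statementIdentifiers",
                    "true");

            final BigdataSail sail = new BigdataSail(props);
            sail.initialize();
            try {
                // Load the Turtle above and run the RDR query here.
            } finally {
                sail.shutDown();
            }
        }
    }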
From: Michael S. <ms...@me...> - 2015-12-29 09:39:22
|
As documented in the ticket, this defect was still around in 2.0, but I
have now fixed the MINUS+UNION combination problem. The remaining problems
should only affect edge cases, i.e., where the left-side and right-side
variables that are bound at runtime are disjoint. In that case MINUS
should *not* have an effect, in contrast to FILTER NOT EXISTS, but it
currently does; see also
http://www.w3.org/TR/sparql11-query/#neg-notexists-minus and my comments
in the ticket.

Jeremy, please let us know if these edge cases are high priority for you
(I'd expect them to occur very rarely in practice).

Best,
Michael

> On 28 Dec 2015, at 19:34, Bryan Thompson <br...@sy...> wrote:
>
> Jeremy,
>
> Have you tested this issue in the 2.0 Candidate Release?
>
> Thanks,
> Bryan
|
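To illustrate the edge case Michael describes with a generic example (not taken from the ticket): when the right side of MINUS shares no variables with the left side, the spec says nothing is removed, whereas FILTER NOT EXISTS still filters.

    # Per the spec, this MINUS removes nothing: ?s/?p/?o and ?x are
    # disjoint, so no solution pairs are compatible on a shared variable.
    SELECT ?s WHERE {
      ?s ?p ?o .
      MINUS { ?x <http://example.org/excluded> true . }
    }

    # FILTER NOT EXISTS, by contrast, removes every solution whenever the
    # inner pattern matches at all:
    SELECT ?s WHERE {
      ?s ?p ?o .
      FILTER NOT EXISTS { ?x <http://example.org/excluded> true . }
    }

The bug Michael notes is that MINUS in this disjoint-variable case behaved like the second query when, per the spec, it should behave like the first.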
From: Bryan T. <br...@sy...> - 2015-12-28 18:34:38
|
Jeremy,

Have you tested this issue in the 2.0 Candidate Release?

Thanks,
Bryan

----
Bryan Thompson
Chief Scientist & Founder
SYSTAP, LLC
br...@sy...
http://blazegraph.com

On Thu, Dec 24, 2015 at 2:42 PM, Jeremy J Carroll <jj...@sy...> wrote:

> I added a comment to this ticket with a possible workaround.
>
> We have continued performance issues with FILTER NOT EXISTS, and initial
> trials seem to be showing better performance with MINUS, but we are
> nervous about the correctness issues.
>
> The workaround ensures there is always at least one shared variable
> between every binding, which, IIRC, avoids the problem case.
>
> Jeremy
|
From: Jeremy J C. <jj...@sy...> - 2015-12-24 19:42:25
|
I added a comment to this ticket with a possible workaround.

We have continued performance issues with FILTER NOT EXISTS, and initial
trials seem to be showing better performance with MINUS, but we are
nervous about the correctness issues.

The workaround ensures there is always at least one shared variable
between every binding, which, IIRC, avoids the problem case.

Jeremy
|
From: Brad B. <be...@sy...> - 2015-12-22 20:59:11
|
Blazegraphers,

2.0.0 RC1 is available. See [1] for more details. We wanted to give the
Blazegraph community a chance to take a peek at 2.0 and give us feedback.
Download it, clone it, have it sent via carrier pigeon (transportation
charges may apply), but definitely take a look and let us know your
thoughts. Please send general questions to the mailing list and report any
bugs in JIRA against the 2.0.0 release.

1. Blazegraph is now on GitHub for open source releases:
https://github.com/blazegraph/database (gi...@gi...:blazegraph/database.git)

2. Blazegraph has been "mavenized". See [2] in particular for setting up
the development environment.

3. The default service URL has changed to /blazegraph/ from /bigdata/.
bigdata.jar and bigdata.war continue to support the /bigdata/ endpoint.

4. There are now Debian and RPM deployers available, among other options.

[1] https://blog.blazegraph.com/?p=977

[2] https://wiki.blazegraph.com/wiki/index.php/MavenNotes#Getting_Started_Developing_with_Eclipse

Cheers, --Brad

--
_______________
Brad Bebee
CEO, Managing Partner
SYSTAP, LLC
e: be...@sy...
w: www.blazegraph.com
|
From: Brad B. <be...@sy...> - 2015-12-22 19:47:51
|
Joakim,

Holiday gift from Blazegraph... ;-)

Cheers, --Brad

On Tue, Dec 22, 2015 at 2:23 PM, Joakim Soderberg <joa...@bl...> wrote:

> Nice, so no vacation for you guys? :-)
>
> On Dec 22, 2015, at 11:22 AM, Brad Bebee <be...@sy...> wrote:
>
> > Joakim,
> >
> > The release candidate 2.0 release should be out very soon.
> >
> > Thanks, --Brad
> >
> > On Tue, Dec 22, 2015 at 2:21 PM, Joakim Soderberg <joa...@bl...> wrote:
> >
> > > There is a ticket for this bug, "Test case for service whitelist":
> > > https://jira.blazegraph.com/browse/BLZG-1609
> > >
> > > Is the update released?
> > >
> > > On Dec 21, 2015, at 11:02 PM, Stas Malyshev <sma...@wi...> wrote:
> > >
> > > > Hi!
> > > >
> > > > > Hi Stas,
> > > > > Right, I am trying http://tinyurl.com/jpr7rk8 on a Wikidata mirror
> > > > > with Blazegraph 1.5.3 in embedded mode.
> > > > >
> > > > > Do you know if there is a bigdata-wikidata.jnl file available for
> > > > > download? Perhaps something is inappropriate in my
> > > > > blazegraph.properties file which results in this problem.
> > > >
> > > > The .jnl file from query.wikidata.org is not available for download,
> > > > and it's 100G+ in size, so I'm not sure why you would want to
> > > > download it. Maybe this issue would be relevant:
> > > > https://jira.blazegraph.com/browse/BLZG-1571
> > > >
> > > > --
> > > > Stas Malyshev
> > > > sma...@wi...

--
_______________
Brad Bebee
CEO, Managing Partner
SYSTAP, LLC
e: be...@sy...
w: www.blazegraph.com
|