From: Jeremy C. <jj...@gm...> - 2016-02-01 19:57:54
My experience is that Blazegraph checks for disk integrity much more than most software, and after an error of this sort from Blazegraph, sooner or later other things stop working too, because the h/w is defective.

Jeremy

> On Feb 1, 2016, at 9:48 AM, Matthew Roy <mr...@ca...> wrote:
>
> I saw this error this morning as well after testing against the 2.0.0 release code. Was running code from around 1.5.0 previously.
>
> Caused by: com.bigdata.util.ChecksumError: offset=18124800,nbytes=4044,expected=0,actual=1696870497
>     at com.bigdata.io.writecache.WriteCacheService._readFromLocalDiskIntoNewHeapByteBuffer(WriteCacheService.java:3761) ~[bigdata-core-2.0.0.jar:na]
>     at com.bigdata.io.writecache.WriteCacheService._getRecord(WriteCacheService.java:3576) ~[bigdata-core-2.0.0.jar:na]
>     at com.bigdata.io.writecache.WriteCacheService.access$2500(WriteCacheService.java:200) ~[bigdata-core-2.0.0.jar:na]
>     at com.bigdata.io.writecache.WriteCacheService$1.compute(WriteCacheService.java:3413) ~[bigdata-core-2.0.0.jar:na]
>     at com.bigdata.io.writecache.WriteCacheService$1.compute(WriteCacheService.java:3397) ~[bigdata-core-2.0.0.jar:na]
>     at com.bigdata.util.concurrent.Memoizer$1.call(Memoizer.java:77) ~[bigdata-core-2.0.0.jar:na]
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_60]
>     at com.bigdata.util.concurrent.Memoizer.compute(Memoizer.java:92) ~[bigdata-core-2.0.0.jar:na]
>     at com.bigdata.io.writecache.WriteCacheService.loadRecord(WriteCacheService.java:3518) ~[bigdata-core-2.0.0.jar:na]
>     at com.bigdata.io.writecache.WriteCacheService.read(WriteCacheService.java:3237) ~[bigdata-core-2.0.0.jar:na]
>     at com.bigdata.rwstore.RWStore.getData(RWStore.java:2052) ~[bigdata-core-2.0.0.jar:na]
>     ... 24 common frames omitted
>
> and after reopening the journal file I get:
>
> java.lang.Error: Two allocators at same address
>     at com.bigdata.rwstore.FixedAllocator.compareTo(FixedAllocator.java:102)
>     at java.util.ComparableTimSort.countRunAndMakeAscending(ComparableTimSort.java:295)
>     at java.util.ComparableTimSort.sort(ComparableTimSort.java:157)
>     at java.util.ComparableTimSort.sort(ComparableTimSort.java:146)
>     at java.util.Arrays.sort(Arrays.java:472)
>     at java.util.Collections.sort(Collections.java:155)
>     at com.bigdata.rwstore.RWStore.readAllocationBlocks(RWStore.java:1682)
>     at com.bigdata.rwstore.RWStore.initfromRootBlock(RWStore.java:1557)
>     at com.bigdata.rwstore.RWStore.<init>(RWStore.java:969)
>     at com.bigdata.journal.RWStrategy.<init>(RWStrategy.java:137)
>
> Can't tell exactly what was going on query/update-wise when the error occurred.
> Will let you know if I can reproduce the error again.
>
> Matt
>
> ------ Original Message ------
> From: "Bryan Thompson" <br...@sy...>
> To: "Jeremy Carroll" <jj...@gm...>; "Martyn Cutcher" <ma...@sy...>
> Cc: "Big...@li..." <Big...@li...>
> Sent: 2/1/2016 10:43:04 AM
> Subject: Re: [Bigdata-developers] "No WriteCache debug info"
>
>> Typically this indicates an actual disk error. It is attempting to read data from the backing file. The checksum that was stored is not matched by the data. The only time I have seen this was when there was actually a bad disk.
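
(A minimal sketch of the kind of check being described: a record is read back from the backing file, a checksum is recomputed over the payload, and it is compared against a stored value. The names here are illustrative, not bigdata's actual API; the 4-byte big-endian Adler32 trailer is an assumption, though it would be consistent with the ChecksumError below reporting nbytes=1156 while the WriteCache message reports length: 1152.)

    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.util.zip.Adler32;

    // Hypothetical reader: payload bytes followed by a 4-byte checksum trailer.
    public class RecordCheckSketch {

        public static byte[] readRecord(final RandomAccessFile file,
                final long offset, final int nbytes) throws IOException {

            final byte[] buf = new byte[nbytes];
            file.seek(offset);
            file.readFully(buf);

            final int dataLen = nbytes - 4; // payload size without the trailer

            // Recompute the checksum over the payload as read from disk.
            final Adler32 chk = new Adler32(); // assumed algorithm
            chk.update(buf, 0, dataLen);
            final int actual = (int) chk.getValue();

            // The expected value is the stored trailer (big-endian int, assumed).
            final int expected = ((buf[dataLen] & 0xff) << 24)
                    | ((buf[dataLen + 1] & 0xff) << 16)
                    | ((buf[dataLen + 2] & 0xff) << 8)
                    | (buf[dataLen + 3] & 0xff);

            if (expected != actual) {
                // A mismatch here is what surfaces as com.bigdata.util.ChecksumError:
                // the bytes on disk no longer match what was written.
                throw new IOException("ChecksumError: offset=" + offset
                        + ",nbytes=" + nbytes + ",expected=" + expected
                        + ",actual=" + actual);
            }
            return buf;
        }
    }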
>>
>> Caused by: com.bigdata.util.ChecksumError: offset=404849739776,nbytes=1156,expected=-402931822,actual=1830389633
>>     at com.bigdata.io.writecache.WriteCacheService._readFromLocalDiskIntoNewHeapByteBuffer(WriteCacheService.java:3711)
>>     at com.bigdata.io.writecache.WriteCacheService._getRecord(WriteCacheService.java:3526)
>>     at com.bigdata.io.writecache.WriteCacheService.access$2500(WriteCacheService.java:200)
>>     at com.bigdata.io.writecache.WriteCacheService$1.compute(WriteCacheService.java:3363)
>>     at com.bigdata.io.writecache.WriteCacheService$1.compute(WriteCacheService.java:3347)
>>
>> Bryan
>>
>> ----
>> Bryan Thompson
>> Chief Scientist & Founder
>> SYSTAP, LLC
>> 4501 Tower Road
>> Greensboro, NC 27410
>> br...@sy...
>> http://blazegraph.com/
>> http://blog.blazegraph.com
>>
>> On Mon, Feb 1, 2016 at 10:34 AM, Jeremy Carroll <jj...@gm...> wrote:
>>
>> Also on 1.5.3, with %codes as a solution set with half a million URIs binding ?x.
>>
>> What does the error message mean?
>>
>> Jeremy
>>
>> Feb 01, 2016 07:27:13 PST - ERROR: 73992665 qtp1401132667-16779 com.bigdata.rdf.sail.webapp.BigdataRDFServlet.launderThrowable(BigdataRDFServlet.java:214): cause=java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: org.openrdf.query.QueryEvaluationException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=57386638-1f48-4826-966a-84a6b36b5427,bopId=1,partitionId=-1,sinkId=2,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: addr=-403683228 : cause=java.lang.IllegalStateException: Error reading from WriteCache addr: 404849739776 length: 1152, writeCacheDebug: No WriteCache debug info, query=SPARQL-QUERY: queryStr=select (count(?x) as $cnt)
>> { INCLUDE %codes
>> }
>> java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: org.openrdf.query.QueryEvaluationException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=57386638-1f48-4826-966a-84a6b36b5427,bopId=1,partitionId=-1,sinkId=2,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: addr=-403683228 : cause=java.lang.IllegalStateException: Error reading from WriteCache addr: 404849739776 length: 1152, writeCacheDebug: No WriteCache debug info
>>     at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>>     at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>>     at com.bigdata.rdf.sail.webapp.BigdataServlet.submitApiTask(BigdataServlet.java:281)
>>     at com.bigdata.rdf.sail.webapp.QueryServlet.doSparqlQuery(QueryServlet.java:636)
>>     at com.bigdata.rdf.sail.webapp.QueryServlet.doPost(QueryServlet.java:263)
>>     at com.bigdata.rdf.sail.webapp.RESTServlet.doPost(RESTServlet.java:248)
>>     at com.bigdata.rdf.sail.webapp.MultiTenancyServlet.doPost(MultiTenancyServlet.java:138)
>>     at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
>>     at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>>     at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:769)
>>     at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>>     at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>>     at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>>     at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>>     at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1125)
>>     at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>>     at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>>     at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1059)
>>     at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>>     at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>>     at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>>     at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>>     at org.eclipse.jetty.server.Server.handle(Server.java:497)
>>     at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
>>     at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:248)
>>     at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>>     at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:610)
>>     at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:539)
>>     at java.lang.Thread.run(Thread.java:745)
>> Caused by: java.util.concurrent.ExecutionException: org.openrdf.query.QueryEvaluationException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=57386638-1f48-4826-966a-84a6b36b5427,bopId=1,partitionId=-1,sinkId=2,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: addr=-403683228 : cause=java.lang.IllegalStateException: Error reading from WriteCache addr: 404849739776 length: 1152, writeCacheDebug: No WriteCache debug info
>>     at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>>     at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>>     at com.bigdata.rdf.sail.webapp.QueryServlet$SparqlQueryTask.call(QueryServlet.java:834)
>>     at com.bigdata.rdf.sail.webapp.QueryServlet$SparqlQueryTask.call(QueryServlet.java:653)
>>     at com.bigdata.rdf.task.ApiTaskForIndexManager.call(ApiTaskForIndexManager.java:68)
>>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>     ... 1 more
>> Caused by: org.openrdf.query.QueryEvaluationException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=57386638-1f48-4826-966a-84a6b36b5427,bopId=1,partitionId=-1,sinkId=2,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: addr=-403683228 : cause=java.lang.IllegalStateException: Error reading from WriteCache addr: 404849739776 length: 1152, writeCacheDebug: No WriteCache debug info
>>     at com.bigdata.rdf.sail.Bigdata2Sesame2BindingSetIterator.hasNext(Bigdata2Sesame2BindingSetIterator.java:188)
>>     at info.aduna.iteration.IterationWrapper.hasNext(IterationWrapper.java:68)
>>     at org.openrdf.query.QueryResults.report(QueryResults.java:155)
>>     at org.openrdf.repository.sail.SailTupleQuery.evaluate(SailTupleQuery.java:76)
>>     at com.bigdata.rdf.sail.webapp.BigdataRDFContext$TupleQueryTask.doQuery(BigdataRDFContext.java:1710)
>>     at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.innerCall(BigdataRDFContext.java:1567)
>>     at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:1532)
>>     at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:704)
>>     ... 4 more
>> Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=57386638-1f48-4826-966a-84a6b36b5427,bopId=1,partitionId=-1,sinkId=2,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: addr=-403683228 : cause=java.lang.IllegalStateException: Error reading from WriteCache addr: 404849739776 length: 1152, writeCacheDebug: No WriteCache debug info
>>     at com.bigdata.relation.accesspath.BlockingBuffer$BlockingIterator.checkFuture(BlockingBuffer.java:1523)
>>     at com.bigdata.relation.accesspath.BlockingBuffer$BlockingIterator._hasNext(BlockingBuffer.java:1710)
>>     at com.bigdata.relation.accesspath.BlockingBuffer$BlockingIterator.hasNext(BlockingBuffer.java:1563)
>>     at com.bigdata.striterator.AbstractChunkedResolverator._hasNext(AbstractChunkedResolverator.java:365)
>>     at com.bigdata.striterator.AbstractChunkedResolverator.hasNext(AbstractChunkedResolverator.java:341)
>>     at com.bigdata.rdf.sail.Bigdata2Sesame2BindingSetIterator.hasNext(Bigdata2Sesame2BindingSetIterator.java:134)
>>     ... 11 more
>> Caused by: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=57386638-1f48-4826-966a-84a6b36b5427,bopId=1,partitionId=-1,sinkId=2,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: addr=-403683228 : cause=java.lang.IllegalStateException: Error reading from WriteCache addr: 404849739776 length: 1152, writeCacheDebug: No WriteCache debug info
>>     at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>>     at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>>     at com.bigdata.relation.accesspath.BlockingBuffer$BlockingIterator.checkFuture(BlockingBuffer.java:1454)
>>     ... 16 more
>> Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=57386638-1f48-4826-966a-84a6b36b5427,bopId=1,partitionId=-1,sinkId=2,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: addr=-403683228 : cause=java.lang.IllegalStateException: Error reading from WriteCache addr: 404849739776 length: 1152, writeCacheDebug: No WriteCache debug info
>>     at com.bigdata.rdf.sail.RunningQueryCloseableIterator.checkFuture(RunningQueryCloseableIterator.java:59)
>>     at com.bigdata.rdf.sail.RunningQueryCloseableIterator.close(RunningQueryCloseableIterator.java:73)
>>     at com.bigdata.rdf.sail.RunningQueryCloseableIterator.hasNext(RunningQueryCloseableIterator.java:82)
>>     at com.bigdata.striterator.ChunkedWrappedIterator.hasNext(ChunkedWrappedIterator.java:197)
>>     at com.bigdata.striterator.AbstractChunkedResolverator$ChunkConsumerTask.call(AbstractChunkedResolverator.java:222)
>>     at com.bigdata.striterator.AbstractChunkedResolverator$ChunkConsumerTask.call(AbstractChunkedResolverator.java:197)
>>     ... 4 more
>> Caused by: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=57386638-1f48-4826-966a-84a6b36b5427,bopId=1,partitionId=-1,sinkId=2,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: addr=-403683228 : cause=java.lang.IllegalStateException: Error reading from WriteCache addr: 404849739776 length: 1152, writeCacheDebug: No WriteCache debug info
>>     at com.bigdata.util.concurrent.Haltable.get(Haltable.java:273)
>>     at com.bigdata.bop.engine.AbstractRunningQuery.get(AbstractRunningQuery.java:1514)
>>     at com.bigdata.bop.engine.AbstractRunningQuery.get(AbstractRunningQuery.java:104)
>>     at com.bigdata.rdf.sail.RunningQueryCloseableIterator.checkFuture(RunningQueryCloseableIterator.java:46)
>>     ... 9 more
>> Caused by: java.lang.Exception: task=ChunkTask{query=57386638-1f48-4826-966a-84a6b36b5427,bopId=1,partitionId=-1,sinkId=2,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: addr=-403683228 : cause=java.lang.IllegalStateException: Error reading from WriteCache addr: 404849739776 length: 1152, writeCacheDebug: No WriteCache debug info
>>     at com.bigdata.bop.engine.ChunkedRunningQuery$ChunkTask.call(ChunkedRunningQuery.java:1337)
>>     at com.bigdata.bop.engine.ChunkedRunningQuery$ChunkTaskWrapper.run(ChunkedRunningQuery.java:896)
>>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>     at com.bigdata.concurrent.FutureTaskMon.run(FutureTaskMon.java:63)
>>     at com.bigdata.bop.engine.ChunkedRunningQuery$ChunkFutureTask.run(ChunkedRunningQuery.java:791)
>>     ... 3 more
>> Caused by: java.util.concurrent.ExecutionException: java.lang.RuntimeException: addr=-403683228 : cause=java.lang.IllegalStateException: Error reading from WriteCache addr: 404849739776 length: 1152, writeCacheDebug: No WriteCache debug info
>>     at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>>     at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>>     at com.bigdata.bop.engine.ChunkedRunningQuery$ChunkTask.call(ChunkedRunningQuery.java:1317)
>>     ... 8 more
>> Caused by: java.lang.RuntimeException: addr=-403683228 : cause=java.lang.IllegalStateException: Error reading from WriteCache addr: 404849739776 length: 1152, writeCacheDebug: No WriteCache debug info
>>     at com.bigdata.rwstore.RWStore.getData(RWStore.java:2190)
>>     at com.bigdata.rwstore.RWStore.getData(RWStore.java:1989)
>>     at com.bigdata.rwstore.PSInputStream.<init>(PSInputStream.java:75)
>>     at com.bigdata.rwstore.RWStore.getInputStream(RWStore.java:6463)
>>     at com.bigdata.journal.RWStrategy.getInputStream(RWStrategy.java:846)
>>     at com.bigdata.bop.solutions.SolutionSetStream.get(SolutionSetStream.java:237)
>>     at com.bigdata.rdf.sparql.ast.ssets.SolutionSetManager.getSolutions(SolutionSetManager.java:556)
>>     at com.bigdata.bop.NamedSolutionSetRefUtility.getSolutionSet(NamedSolutionSetRefUtility.java:529)
>>     at com.bigdata.bop.BOpContext.getAlternateSource(BOpContext.java:752)
>>     at com.bigdata.bop.join.NestedLoopJoinOp$ChunkTask.getRightSolutions(NestedLoopJoinOp.java:263)
>>     at com.bigdata.bop.join.NestedLoopJoinOp$ChunkTask.call(NestedLoopJoinOp.java:200)
>>     at com.bigdata.bop.join.NestedLoopJoinOp$ChunkTask.call(NestedLoopJoinOp.java:166)
>>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>     at com.bigdata.bop.engine.ChunkedRunningQuery$ChunkTask.call(ChunkedRunningQuery.java:1316)
>>     ... 8 more
>> Caused by: java.lang.IllegalStateException: Error reading from WriteCache addr: 404849739776 length: 1152, writeCacheDebug: No WriteCache debug info
>>     at com.bigdata.rwstore.RWStore.getData(RWStore.java:2112)
>>     ... 21 more
>> Caused by: com.bigdata.util.ChecksumError: offset=404849739776,nbytes=1156,expected=-402931822,actual=1830389633
>>     at com.bigdata.io.writecache.WriteCacheService._readFromLocalDiskIntoNewHeapByteBuffer(WriteCacheService.java:3711)
>>     at com.bigdata.io.writecache.WriteCacheService._getRecord(WriteCacheService.java:3526)
>>     at com.bigdata.io.writecache.WriteCacheService.access$2500(WriteCacheService.java:200)
>>     at com.bigdata.io.writecache.WriteCacheService$1.compute(WriteCacheService.java:3363)
>>     at com.bigdata.io.writecache.WriteCacheService$1.compute(WriteCacheService.java:3347)
>>     at com.bigdata.util.concurrent.Memoizer$1.call(Memoizer.java:77)
>>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>     at com.bigdata.util.concurrent.Memoizer.compute(Memoizer.java:92)
>>     at com.bigdata.io.writecache.WriteCacheService.loadRecord(WriteCacheService.java:3468)
>>     at com.bigdata.io.writecache.WriteCacheService.read(WriteCacheService.java:3187)
>>     at com.bigdata.rwstore.RWStore.getData(RWStore.java:2106)
>>     ... 21 more
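
(For context on the failing operation: Jeremy's query reads a pre-materialized named solution set via INCLUDE, and the stack shows that read going through SolutionSetStream and RWStore.getData, which is where the checksum fails. Below is a minimal sketch of submitting a query of the same shape through the Sesame API, assuming a Blazegraph repository in which the %codes solution set already exists; the class and method names here are illustrative, not part of the thread.)

    import org.openrdf.query.BindingSet;
    import org.openrdf.query.QueryLanguage;
    import org.openrdf.query.TupleQuery;
    import org.openrdf.query.TupleQueryResult;
    import org.openrdf.repository.RepositoryConnection;

    public class IncludeCountSketch {

        public static long countCodes(final RepositoryConnection conn) throws Exception {

            // Same shape as the failing query: COUNT over a named solution set.
            // INCLUDE %name is a Blazegraph extension, so the connection must
            // come from a Blazegraph-backed repository.
            final String queryStr =
                    "SELECT (COUNT(?x) AS ?cnt) { INCLUDE %codes }";

            final TupleQuery query =
                    conn.prepareTupleQuery(QueryLanguage.SPARQL, queryStr);

            // Evaluating this forces the solution set to be read back from the
            // journal -- the code path that raised the ChecksumError above.
            final TupleQueryResult result = query.evaluate();
            try {
                final BindingSet bs = result.next();
                return Long.parseLong(bs.getValue("cnt").stringValue());
            } finally {
                result.close();
            }
        }
    }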