From: Bryan T. <br...@sy...> - 2014-08-18 14:10:13
Pretty much. The truth maintenance is the minimal work required to keep the materialized entailments in sync with the add/drop of explicit statements.

Bryan

On Aug 18, 2014 8:54 AM, "Antoni Mylka" <ant...@qu...> wrote:
> Hi,
>
> Thanks for the answer. I'll see what I can do about the test.
>
> What exactly is "truth maintenance" from the user's point of view?
>
> I understand it as follows, please confirm:
>
> There are two kinds of statements in the triple store: explicit and entailed ones.
>
> Truth maintenance means automatic, synchronous recomputation of all entailments after each modification of the explicit statements.
>
> When this feature is turned on, every SPARQL UPDATE or LOAD recomputes the inferred statements synchronously, so that after the modification operation finishes I can count on the inferred statements being there. The modification and the inference form a single transaction; other clients never see the state between the modification and the inference.
>
> When it's turned off, the set of inferred statements is not modified when I modify the explicit statements. New entailments will not appear in the dataset and old entailments will not be removed. Inference must be triggered explicitly, on the whole graph. It's possible to do both in one transaction, but it's also possible to do them in two transactions; in the second case the intermediate, inconsistent state will be visible to anyone for a while, unless that is prevented by other means.
>
> Truth maintenance is a server-side feature. No temporary data is kept in the client-side connection object. It is not affected by whether I use one RepositoryConnection for all updates and queries in the application or create a connection for each query and update and then close it. It is also orthogonal to the client-side autoCommit flag: I can have autocommit or not, with or without truth maintenance. I'll get the same behavior by using curl and POSTing SPARQL queries.
>
> --
> Antoni Myłka
> Software Engineer
>
> Quantinum AG, Birkenweg 61, CH-3013 Bern - Fon +41 31 388 20 40
> http://www.quantinum.com - Bee for Business
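To make the behavior described above concrete, here is a minimal sketch using the Sesame 2 API against an embedded BigdataSailRepository. The truth-maintenance property name is the one quoted later in this thread; the journal-file property value, the example vocabulary (Manager, Employee, alice) and the class name are assumptions made up purely for illustration.

import java.util.Properties;
import org.openrdf.model.URI;
import org.openrdf.model.ValueFactory;
import org.openrdf.model.vocabulary.RDF;
import org.openrdf.model.vocabulary.RDFS;
import org.openrdf.query.QueryLanguage;
import org.openrdf.repository.RepositoryConnection;
import com.bigdata.rdf.sail.BigdataSail;
import com.bigdata.rdf.sail.BigdataSailRepository;

public class TruthMaintenanceSketch {
    public static void main(String[] args) throws Exception {
        final Properties props = new Properties();
        // Journal file location (hypothetical path); in the WAR deployment this lives in RWStore.properties.
        props.setProperty("com.bigdata.journal.AbstractJournal.file", "/tmp/tm-sketch.jnl");
        // Truth maintenance defaults to true; it is set explicitly here only to make the behavior visible.
        props.setProperty("com.bigdata.rdf.sail.truthMaintenance", "true");

        final BigdataSailRepository repo = new BigdataSailRepository(new BigdataSail(props));
        repo.initialize();
        final RepositoryConnection cxn = repo.getConnection();
        try {
            final ValueFactory vf = cxn.getValueFactory();
            final URI manager = vf.createURI("http://example.com/Manager");
            final URI employee = vf.createURI("http://example.com/Employee");
            final URI alice = vf.createURI("http://example.com/alice");

            cxn.setAutoCommit(false);
            cxn.add(manager, RDFS.SUBCLASSOF, employee);
            cxn.add(alice, RDF.TYPE, manager);
            // With truth maintenance on, the entailments are brought up to date as part of this commit.
            cxn.commit();

            // The entailed statement (alice rdf:type Employee) is now visible to any client.
            // With truth maintenance off, this ASK would keep returning false until the closure
            // is recomputed explicitly (e.g. via BigdataSailConnection.computeClosure(), which is
            // discussed further down in this thread).
            final boolean entailed = cxn.prepareBooleanQuery(QueryLanguage.SPARQL,
                    "ASK { <http://example.com/alice> a <http://example.com/Employee> }").evaluate();
            System.out.println("entailed: " + entailed);
        } finally {
            cxn.close();
            repo.shutDown();
        }
    }
}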
>
> ----- Original Message -----
> From: "Bryan Thompson" <br...@sy...>
> To: "Antoni Mylka" <ant...@qu...>
> CC: "Big...@li..." <big...@li...>
> Sent: Monday, 18 August 2014 14:14:08
> Subject: Re: [Bigdata-developers] Getting triples up to 3 hops from a given set of resources
>
> This has been tested without truth maintenance, so I suspect that the truth maintenance is the problem. The GASEngine tunnels some abstractions, and some of those assumptions might be violated in this case.
>
> Can you create a simple test case that fails, attach it to a ticket, and send me a link to the ticket?
>
> Thanks,
> Bryan
>
> ----
> Bryan Thompson
> Chief Scientist & Founder
> SYSTAP, LLC
> 4501 Tower Road
> Greensboro, NC 27410
> br...@sy...
> http://bigdata.com
> http://mapgraph.io
>
> CONFIDENTIALITY NOTICE: This email and its contents and attachments are for the sole use of the intended recipient(s) and are confidential or proprietary to SYSTAP. Any unauthorized review, use, disclosure, dissemination or copying of this email or its contents or attachments is prohibited. If you have received this communication in error, please notify the sender by reply email and permanently delete all copies of the email and its contents and attachments.
>
> On Mon, Aug 18, 2014 at 8:09 AM, Antoni Mylka <ant...@qu...> wrote:
> >
> > Hi,
> >
> > Thanks for your answer.
> >
> > Our RWStore.properties file doesn't contain any entries for com.bigdata.rdf.sail.truthMaintenance or com.bigdata.rdf.sail.isolatableIndices. As far as I understand, this means that the default values are used: true for truth maintenance and false for isolatable indices.
> >
> > I'll make another attempt without truth maintenance. As far as I understand, the inference closure is always computed when the transaction is committed. If I need to write and then read within the same transaction, I need to call BigdataSailConnection.computeClosure. I don't, so in my case this change won't have any influence on the application code. I can just flip the switch in the configuration and the app won't notice.
> >
> > You seem to say that "full read/write transaction support" is only available with the "isolatableIndices" configuration property. We don't use it, but we thought that calling setAutoCommit(false) and then commit() on the RepositoryConnection gives us transaction isolation "out of the box". Doesn't it?
> >
> > --
> > Antoni Myłka
> > Software Engineer
> >
> > Quantinum AG, Birkenweg 61, CH-3013 Bern - Fon +41 31 388 20 40
> > http://www.quantinum.com - Bee for Business
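For illustration only, this is roughly what the corresponding entries in RWStore.properties would look like, using the option names and defaults quoted above; the journal-file line is a hypothetical example of an entry such a file typically also contains.

# Incremental truth maintenance (default: true); set to false to disable it.
com.bigdata.rdf.sail.truthMaintenance=false

# Isolatable indices / full read-write transaction support (default: false).
# Leave this out or set it to false to use the unisolated indices.
com.bigdata.rdf.sail.isolatableIndices=false

# Journal location (hypothetical path).
com.bigdata.journal.AbstractJournal.file=/var/lib/bigdata/bigdata.jnl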
> >
> > ----- Original Message -----
> > From: "Bryan Thompson" <br...@sy...>
> > To: "Antoni Mylka" <ant...@qu...>
> > CC: big...@li...
> > Sent: Monday, 18 August 2014 13:45:41
> > Subject: Re: [Bigdata-developers] Getting triples up to 3 hops from a given set of resources
> >
> > The RWStore.properties is used by the WAR under a variety of deployment models. The GraphStore.properties is used by a single ant target and could probably be replaced by the use of the RWStore.properties file.
> >
> > Are you configuring the kb instance with full read/write transaction support or incremental truth maintenance? Both of those use a temporary store and might account for the exception. Try using unisolated indices (do not specify the bigdata sail option ISOLATABLE_INDICES) and try turning off incremental truth maintenance. Let us know which one is responsible for the issue and then we can file a ticket for this.
> >
> > Bryan
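A rough sketch of the "turn truth maintenance off and trigger inference explicitly" route in application code, assuming the embedded BigdataSailRepository from the earlier sketch. The computeClosure() call is the method named earlier in this thread; the cast to BigdataSailConnection and the surrounding connection/transaction handling are assumptions that may need adjusting for a particular deployment.

import com.bigdata.rdf.sail.BigdataSail.BigdataSailConnection;
import com.bigdata.rdf.sail.BigdataSailRepository;
import com.bigdata.rdf.sail.BigdataSailRepositoryConnection;

public class ExplicitClosureSketch {
    // With com.bigdata.rdf.sail.truthMaintenance=false, entailments are not updated on commit;
    // they have to be recomputed explicitly over the whole graph, e.g. after a batch of updates.
    public static void recomputeClosure(final BigdataSailRepository repo) throws Exception {
        final BigdataSailRepositoryConnection cxn =
                (BigdataSailRepositoryConnection) repo.getConnection();
        try {
            final BigdataSailConnection sailCxn = (BigdataSailConnection) cxn.getSailConnection();
            sailCxn.computeClosure(); // database-at-once closure over the explicit statements
            cxn.commit();             // make the recomputed entailments visible to other clients
        } finally {
            cxn.close();
        }
    }
}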
> >
> > On Aug 18, 2014, at 7:25 AM, Antoni Mylka <ant...@qu...> wrote:
> >
> > Hi,
> >
> > Yet another update. I still get the exceptions. It doesn't seem to depend on the JVM.
> >
> > The query works, but only if I load data, restart the server and then do the query. Then, after the first update, I start getting the exceptions. That's why it started working with curl after a restart of Tomcat with a different JVM. The moment I fired up my app, which started writing stuff to the store and doing queries, the exceptions reappeared.
> >
> > Is it a bug? Do I need to do anything between loading data and doing GAS queries?
> >
> > Best regards,
> >
> > --
> > Antoni Myłka
> > Software Engineer
> >
> > Quantinum AG, Birkenweg 61, CH-3013 Bern - Fon +41 31 388 20 40
> > http://www.quantinum.com - Bee for Business
> >
> > ----- Original Message -----
> > From: "Antoni Myłka" <ant...@qu...>
> > To: big...@li...
> > Sent: Monday, 18 August 2014 11:34:46
> > Subject: Re: [Bigdata-developers] Getting triples up to 3 hops from a given set of resources
> >
> > Hi,
> >
> > The exception doesn't appear when I switch from JRE 8 to JRE 7. The query seems to work.
> >
> > Best Regards,
> >
> > --
> > Antoni Myłka
> > Software Engineer
> >
> > Quantinum AG, Birkenweg 61, CH-3013 Bern - Fon +41 31 388 20 40
> > http://www.quantinum.com - Bee for Business
> >
> > ----- Original Message -----
> > From: "Antoni Myłka" <ant...@qu...>
> > To: big...@li...
> > Sent: Monday, 18 August 2014 10:37:17
> > Subject: Re: [Bigdata-developers] Getting triples up to 3 hops from a given set of resources
> >
> > Hi,
> >
> > I'm trying to experiment with the BFS program. The example "BFS with extracted subgraph" seems most interesting to me. I do this:
> >
> > curl -X POST 'http://localhost:28080/bigdata/namespace/our.namespace/sparql' -o out.xml \
> >   --data-urlencode 'query=PREFIX gas: <http://www.bigdata.com/rdf/gas#>
> > SELECT ?depth ?out ?p ?o
> > WHERE {
> >   SERVICE gas:service {
> >     gas:program gas:gasClass "com.bigdata.rdf.graph.analytics.BFS" .
> >     gas:program gas:in <my-starting-point> .
> >     gas:program gas:out ?out .
> >     gas:program gas:out1 ?depth .
> >     gas:program gas:maxIterations 1 .
> >   }
> >   ?out ?p ?o .
> > }'
> >
> > But the out.xml file contains an exception whose root cause is the following (full stack trace below):
> >
> > Caused by: java.lang.ClassCastException: com.bigdata.journal.TemporaryStore cannot be cast to com.bigdata.journal.Journal
> >     at com.bigdata.rdf.store.LocalTripleStore.<init>(LocalTripleStore.java:163)
> >     ... 15 more
> >
> > It seems like some configuration issue. Is there anything I must or can't put in the configuration files before I can use GAS? I also don't quite understand why there are two configuration files: RWStore.properties and GraphStore.properties. What is the relation between them? Is a "graph store" something separate from a normal RDF store?
> >
> > --
> > Antoni Myłka
> > Software Engineer
> >
> > Quantinum AG, Birkenweg 61, CH-3013 Bern - Fon +41 31 388 20 40
> > http://www.quantinum.com - Bee for Business
> >
> > PREFIX gas: <http://www.bigdata.com/rdf/gas#>
> > SELECT ?depth ?out ?p ?o
> > WHERE {
> >   SERVICE gas:service {
> >     gas:program gas:gasClass "com.bigdata.rdf.graph.analytics.BFS" .
> >     gas:program gas:in <my-starting-point> .
> >     gas:program gas:out ?out .
> >     gas:program gas:out1 ?depth .
> >     gas:program gas:out2 ?predecessor .
> >     gas:program gas:maxIterations 1 .
> >   }
> >   ?out ?p ?o .
> > }
> > java.util.concurrent.ExecutionException: java.lang.Exception: org.openrdf.query.QueryEvaluationException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=d4747392-722c-46ab-80e2-e9fb27dddd7c,bopId=1,partitionId=-1,sinkId=3,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: Could not instantiate relation: java.lang.reflect.InvocationTargetException
> >     at java.util.concurrent.FutureTask.report(Unknown Source)
> >     at java.util.concurrent.FutureTask.get(Unknown Source)
> >     at com.bigdata.rdf.sail.webapp.QueryServlet.doQuery(QueryServlet.java:625)
> >     at com.bigdata.rdf.sail.webapp.QueryServlet.doPost(QueryServlet.java:182)
> >     at com.bigdata.rdf.sail.webapp.RESTServlet.doPost(RESTServlet.java:237)
> >     at com.bigdata.rdf.sail.webapp.MultiTenancyServlet.doPost(MultiTenancyServlet.java:135)
> >     at javax.servlet.http.HttpServlet.service(HttpServlet.java:646)
> >     at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
> >     at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
> >     at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
> >     at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
> >     at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
> >     at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
> >     at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
> >     at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
> >     at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:501)
> >     at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
> >     at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
> >     at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:950)
> >     at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
> >     at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
> >     at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1040)
> >     at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:607)
> >     at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:314)
> >     at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> >     at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> >     at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
> >     at java.lang.Thread.run(Unknown Source)
> > Caused by: java.lang.Exception: org.openrdf.query.QueryEvaluationException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=d4747392-722c-46ab-80e2-e9fb27dddd7c,bopId=1,partitionId=-1,sinkId=3,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: Could not instantiate relation: java.lang.reflect.InvocationTargetException
> >     at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:1191)
> >     at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:500)
> >     at java.util.concurrent.FutureTask.run(Unknown Source)
> >     at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> >     at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> >     ... 1 more
> > Caused by: org.openrdf.query.QueryEvaluationException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=d4747392-722c-46ab-80e2-e9fb27dddd7c,bopId=1,partitionId=-1,sinkId=3,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: Could not instantiate relation: java.lang.reflect.InvocationTargetException
> >     at com.bigdata.rdf.sail.Bigdata2Sesame2BindingSetIterator.hasNext(Bigdata2Sesame2BindingSetIterator.java:188)
> >     at org.openrdf.query.impl.TupleQueryResultImpl.hasNext(TupleQueryResultImpl.java:90)
> >     at org.openrdf.query.QueryResultUtil.report(QueryResultUtil.java:52)
> >     at org.openrdf.repository.sail.SailTupleQuery.evaluate(SailTupleQuery.java:63)
> >     at com.bigdata.rdf.sail.webapp.BigdataRDFContext$TupleQueryTask.doQuery(BigdataRDFContext.java:1310)
> >     at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:1164)
> >     ... 5 more
> > Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=d4747392-722c-46ab-80e2-e9fb27dddd7c,bopId=1,partitionId=-1,sinkId=3,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: Could not instantiate relation: java.lang.reflect.InvocationTargetException
> >     at com.bigdata.relation.accesspath.BlockingBuffer$BlockingIterator.checkFuture(BlockingBuffer.java:1523)
> >     at com.bigdata.relation.accesspath.BlockingBuffer$BlockingIterator._hasNext(BlockingBuffer.java:1710)
> >     at com.bigdata.relation.accesspath.BlockingBuffer$BlockingIterator.hasNext(BlockingBuffer.java:1563)
> >     at com.bigdata.striterator.AbstractChunkedResolverator._hasNext(AbstractChunkedResolverator.java:365)
> >     at com.bigdata.striterator.AbstractChunkedResolverator.hasNext(AbstractChunkedResolverator.java:341)
> >     at com.bigdata.rdf.sail.Bigdata2Sesame2BindingSetIterator.hasNext(Bigdata2Sesame2BindingSetIterator.java:134)
> >     ... 10 more
> > Caused by: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=d4747392-722c-46ab-80e2-e9fb27dddd7c,bopId=1,partitionId=-1,sinkId=3,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: Could not instantiate relation: java.lang.reflect.InvocationTargetException
> >     at java.util.concurrent.FutureTask.report(Unknown Source)
> >     at java.util.concurrent.FutureTask.get(Unknown Source)
> >     at com.bigdata.relation.accesspath.BlockingBuffer$BlockingIterator.checkFuture(BlockingBuffer.java:1454)
> >     ... 15 more
> > Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=d4747392-722c-46ab-80e2-e9fb27dddd7c,bopId=1,partitionId=-1,sinkId=3,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: Could not instantiate relation: java.lang.reflect.InvocationTargetException
> >     at com.bigdata.rdf.sail.RunningQueryCloseableIterator.checkFuture(RunningQueryCloseableIterator.java:59)
> >     at com.bigdata.rdf.sail.RunningQueryCloseableIterator.close(RunningQueryCloseableIterator.java:73)
> >     at com.bigdata.rdf.sail.RunningQueryCloseableIterator.hasNext(RunningQueryCloseableIterator.java:82)
> >     at com.bigdata.striterator.ChunkedWrappedIterator.hasNext(ChunkedWrappedIterator.java:197)
> >     at com.bigdata.striterator.AbstractChunkedResolverator$ChunkConsumerTask.call(AbstractChunkedResolverator.java:222)
> >     at com.bigdata.striterator.AbstractChunkedResolverator$ChunkConsumerTask.call(AbstractChunkedResolverator.java:197)
> >     ... 4 more
> > Caused by: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=d4747392-722c-46ab-80e2-e9fb27dddd7c,bopId=1,partitionId=-1,sinkId=3,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: Could not instantiate relation: java.lang.reflect.InvocationTargetException
> >     at com.bigdata.util.concurrent.Haltable.get(Haltable.java:273)
> >     at com.bigdata.bop.engine.AbstractRunningQuery.get(AbstractRunningQuery.java:1477)
> >     at com.bigdata.bop.engine.AbstractRunningQuery.get(AbstractRunningQuery.java:104)
> >     at com.bigdata.rdf.sail.RunningQueryCloseableIterator.checkFuture(RunningQueryCloseableIterator.java:46)
> >     ... 9 more
> > Caused by: java.lang.Exception: task=ChunkTask{query=d4747392-722c-46ab-80e2-e9fb27dddd7c,bopId=1,partitionId=-1,sinkId=3,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: Could not instantiate relation: java.lang.reflect.InvocationTargetException
> >     at com.bigdata.bop.engine.ChunkedRunningQuery$ChunkTask.call(ChunkedRunningQuery.java:1335)
> >     at com.bigdata.bop.engine.ChunkedRunningQuery$ChunkTaskWrapper.run(ChunkedRunningQuery.java:894)
> >     at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
> >     at java.util.concurrent.FutureTask.run(Unknown Source)
> >     at com.bigdata.concurrent.FutureTaskMon.run(FutureTaskMon.java:63)
> >     at com.bigdata.bop.engine.ChunkedRunningQuery$ChunkFutureTask.run(ChunkedRunningQuery.java:789)
> >     ... 3 more
> > Caused by: java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: Could not instantiate relation: java.lang.reflect.InvocationTargetException
> >     at java.util.concurrent.FutureTask.report(Unknown Source)
> >     at java.util.concurrent.FutureTask.get(Unknown Source)
> >     at com.bigdata.bop.engine.ChunkedRunningQuery$ChunkTask.call(ChunkedRunningQuery.java:1315)
> >     ... 8 more
> > Caused by: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: Could not instantiate relation: java.lang.reflect.InvocationTargetException
> >     at java.util.concurrent.FutureTask.report(Unknown Source)
> >     at java.util.concurrent.FutureTask.get(Unknown Source)
> >     at com.bigdata.bop.controller.ServiceCallJoin$ChunkTask.doServiceCallWithConstant(ServiceCallJoin.java:341)
> >     at com.bigdata.bop.controller.ServiceCallJoin$ChunkTask.call(ServiceCallJoin.java:293)
> >     at com.bigdata.bop.controller.ServiceCallJoin$ChunkTask.call(ServiceCallJoin.java:205)
> >     at java.util.concurrent.FutureTask.run(Unknown Source)
> >     at com.bigdata.bop.engine.ChunkedRunningQuery$ChunkTask.call(ChunkedRunningQuery.java:1314)
> >     ... 8 more
> > Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: Could not instantiate relation: java.lang.reflect.InvocationTargetException
> >     at com.bigdata.bop.controller.ServiceCallJoin$ChunkTask$ServiceCallTask.doServiceCall(ServiceCallJoin.java:743)
> >     at com.bigdata.bop.controller.ServiceCallJoin$ChunkTask$ServiceCallTask.call(ServiceCallJoin.java:606)
> >     at com.bigdata.bop.controller.ServiceCallJoin$ChunkTask$ServiceCallTask.call(ServiceCallJoin.java:542)
> >     at java.util.concurrent.FutureTask.run(Unknown Source)
> >     ... 3 more
> > Caused by: java.util.concurrent.ExecutionException: java.lang.RuntimeException: Could not instantiate relation: java.lang.reflect.InvocationTargetException
> >     at java.util.concurrent.FutureTask.report(Unknown Source)
> >     at java.util.concurrent.FutureTask.get(Unknown Source)
> >     at com.bigdata.rdf.graph.impl.GASEngine$ParallelFrontierStrategy.call(GASEngine.java:329)
> >     at com.bigdata.rdf.graph.impl.GASEngine$ParallelFrontierStrategy.call(GASEngine.java:236)
> >     at com.bigdata.rdf.graph.impl.GASContext.scatterEdges(GASContext.java:604)
> >     at com.bigdata.rdf.graph.impl.GASContext.doRound(GASContext.java:391)
> >     at com.bigdata.rdf.graph.impl.GASContext.call(GASContext.java:240)
> >     at com.bigdata.rdf.graph.impl.bd.GASService$GASServiceCall.call(GASService.java:837)
> >     at com.bigdata.rdf.graph.impl.bd.GASService$GASServiceCall.call(GASService.java:414)
> >     at com.bigdata.bop.controller.ServiceCallJoin$ChunkTask$ServiceCallTask.doBigdataServiceCall(ServiceCallJoin.java:756)
> >     at com.bigdata.bop.controller.ServiceCallJoin$ChunkTask$ServiceCallTask.doServiceCall(ServiceCallJoin.java:697)
> >     ... 6 more
> > Caused by: java.lang.RuntimeException: Could not instantiate relation: java.lang.reflect.InvocationTargetException
> >     at com.bigdata.relation.locator.DefaultResourceLocator.newInstance(DefaultResourceLocator.java:982)
> >     at com.bigdata.relation.locator.DefaultResourceLocator.cacheMiss(DefaultResourceLocator.java:464)
> >     at com.bigdata.relation.locator.DefaultResourceLocator.locate(DefaultResourceLocator.java:333)
> >     at com.bigdata.rdf.graph.impl.bd.BigdataGASEngine$BigdataGraphAccessor.resolveKB(BigdataGASEngine.java:301)
> >     at com.bigdata.rdf.graph.impl.bd.BigdataGASEngine$BigdataGraphAccessor.getKB(BigdataGASEngine.java:272)
> >     at com.bigdata.rdf.graph.impl.bd.BigdataGASEngine$BigdataGraphAccessor.getEdges(BigdataGASEngine.java:687)
> >     at com.bigdata.rdf.graph.impl.GASContext$ScatterTask.call(GASContext.java:744)
> >     at com.bigdata.rdf.graph.impl.GASContext$ScatterTask.call(GASContext.java:700)
> >     ... 4 more
> > Caused by: java.lang.reflect.InvocationTargetException
> >     at sun.reflect.GeneratedConstructorAccessor27.newInstance(Unknown Source)
> >     at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
> >     at java.lang.reflect.Constructor.newInstance(Unknown Source)
> >     at com.bigdata.relation.locator.DefaultResourceLocator.newInstance(DefaultResourceLocator.java:963)
> >     ... 11 more
> > Caused by: java.lang.ClassCastException: com.bigdata.journal.TemporaryStore cannot be cast to com.bigdata.journal.Journal
> >     at com.bigdata.rdf.store.LocalTripleStore.<init>(LocalTripleStore.java:163)
> >     ... 15 more