This list is closed; nobody may subscribe to it.
From: Bryan T. <br...@sy...> - 2014-10-20 07:38:03
Ah. That helps. Thanks Jeremy!

Bryan

----
Bryan Thompson
Chief Scientist & Founder
SYSTAP, LLC
4501 Tower Road
Greensboro, NC 27410
br...@sy...
http://bigdata.com
http://mapgraph.io

CONFIDENTIALITY NOTICE: This email and its contents and attachments are for the sole use of the intended recipient(s) and are confidential or proprietary to SYSTAP. Any unauthorized review, use, disclosure, dissemination or copying of this email or its contents or attachments is prohibited. If you have received this communication in error, please notify the sender by reply email and permanently delete all copies of the email and its contents and attachments.
From: Jean-Marc V. <jea...@gm...> - 2014-10-19 21:23:06
Jeremy,

I actually did the two LOADs in two separate executions. But I have the feeling that the crash comes no matter what the content of the database is.

--
Jean-Marc Vanel
Déductions SARL - Consulting, services, training,
Rule-based programming, Semantic Web
http://deductions-software.com/
+33 (0)6 89 16 29 52
Twitter: @jmvanel , @jmvanel_fr ; chat: irc://irc.freenode.net#eulergui
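[Editor's note: for reference, the ";" discussed in this thread is the standard SPARQL 1.1 Update operation separator: several LOAD operations may be submitted in a single update request when joined with it. A sketch using the two graphs from this thread:]

```sparql
# Two LOAD operations submitted as ONE update request,
# separated by ";" per the SPARQL 1.1 Update grammar.
LOAD <http://jmvanel.free.fr/jmv.rdf#me> INTO GRAPH <http://jmvanel.free.fr/jmv.rdf#me> ;
LOAD <http://danbri.org/foaf.rdf#danbri> INTO GRAPH <http://danbri.org/foaf.rdf#danbri>
```

[Submitting them as two separate requests, as was done here, should also be valid; the ";" is only required when the operations share one request body.]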
From: Jeremy J C. <jj...@sy...> - 2014-10-19 21:22:21
I think I see several issues in this case (this is using 1.3.2 rather than the current code base):

- Jean-Marc should have separated the two LOAD requests with a ";".
- If using the NanoSparqlServer and the web UI, 500 errors do not appear to get reported, so the problem with the missing ";" was not reported properly.
- The implementation of LOAD <url> INTO GRAPH <uri> ignores the graph name part and always loads into the default graph.
- The optimizer crashes on

      GRAPH <uri> { ?s ?p ?o }

  where there are no triples in the graph.

Stack trace for the last part:

WARN : 1952073 qtp433857665-101 org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:361): /bigdata/namespace/ccc/sparql
java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.UnsupportedOperationException
    at com.bigdata.rdf.sail.webapp.BigdataRDFServlet.launderThrowable(BigdataRDFServlet.java:241)
    at com.bigdata.rdf.sail.webapp.QueryServlet.doSparqlQuery(QueryServlet.java:645)
    at com.bigdata.rdf.sail.webapp.QueryServlet.doPost(QueryServlet.java:191)
    at com.bigdata.rdf.sail.webapp.RESTServlet.doPost(RESTServlet.java:237)
    at com.bigdata.rdf.sail.webapp.MultiTenancyServlet.doPost(MultiTenancyServlet.java:144)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
    at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:738)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:551)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:568)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1111)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:478)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:183)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1045)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:199)
    at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:109)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
    at org.eclipse.jetty.server.Server.handle(Server.java:462)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:279)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:232)
    at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:534)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:607)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:536)
    at java.lang.Thread.run(Thread.java:722)
Caused by: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.UnsupportedOperationException
    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
    at java.util.concurrent.FutureTask.get(FutureTask.java:111)
    at com.bigdata.rdf.sail.webapp.QueryServlet.doSparqlQuery(QueryServlet.java:639)
    ... 25 more
Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.UnsupportedOperationException
    at com.bigdata.rdf.sail.webapp.BigdataRDFServlet.launderThrowable(BigdataRDFServlet.java:241)
    at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:1284)
    at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:1)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
    at java.util.concurrent.FutureTask.run(FutureTask.java:166)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    ... 1 more
Caused by: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.UnsupportedOperationException
    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
    at java.util.concurrent.FutureTask.get(FutureTask.java:111)
    at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:1279)
    ... 6 more
Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.UnsupportedOperationException
    at com.bigdata.rdf.sparql.ast.optimizers.ASTRangeCountOptimizer.attachRangeCounts(ASTRangeCountOptimizer.java:140)
    at com.bigdata.rdf.sparql.ast.optimizers.ASTRangeCountOptimizer.optimizeJoinGroup(ASTRangeCountOptimizer.java:77)
    at com.bigdata.rdf.sparql.ast.optimizers.AbstractJoinGroupOptimizer.optimize(AbstractJoinGroupOptimizer.java:157)
    at com.bigdata.rdf.sparql.ast.optimizers.AbstractJoinGroupOptimizer.optimize(AbstractJoinGroupOptimizer.java:97)
    at com.bigdata.rdf.sparql.ast.optimizers.ASTOptimizerList.optimize(ASTOptimizerList.java:104)
    at com.bigdata.rdf.sparql.ast.eval.AST2BOpUtility.convert(AST2BOpUtility.java:219)
    at com.bigdata.rdf.sparql.ast.eval.ASTEvalHelper.evaluateTupleQuery(ASTEvalHelper.java:238)
    at com.bigdata.rdf.sail.BigdataSailTupleQuery.evaluate(BigdataSailTupleQuery.java:93)
    at com.bigdata.rdf.sail.BigdataSailTupleQuery.evaluate(BigdataSailTupleQuery.java:75)
    at org.openrdf.repository.sail.SailTupleQuery.evaluate(SailTupleQuery.java:62)
    at com.bigdata.rdf.sail.webapp.BigdataRDFContext$TupleQueryTask.doQuery(BigdataRDFContext.java:1386)
    at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask$SparqlRestApiTask.call(BigdataRDFContext.java:1221)
    at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask$SparqlRestApiTask.call(BigdataRDFContext.java:1)
    at com.bigdata.rdf.task.ApiTaskForIndexManager.call(ApiTaskForIndexManager.java:67)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
    at java.util.concurrent.FutureTask.run(FutureTask.java:166)
    at com.bigdata.rdf.task.AbstractApiTask.submitApiTask(AbstractApiTask.java:293)
    at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:1277)
    ... 6 more
Caused by: java.util.concurrent.ExecutionException: java.lang.UnsupportedOperationException
    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
    at java.util.concurrent.FutureTask.get(FutureTask.java:111)
    at com.bigdata.rdf.sparql.ast.optimizers.ASTRangeCountOptimizer.attachRangeCounts(ASTRangeCountOptimizer.java:124)
    ... 23 more
Caused by: java.lang.UnsupportedOperationException
    at com.bigdata.rdf.spo.SPORelation.getPredicate(SPORelation.java:1210)
    at com.bigdata.rdf.spo.SPORelation.getAccessPath(SPORelation.java:1121)
    at com.bigdata.rdf.spo.SPORelation.getAccessPath(SPORelation.java:1077)
    at com.bigdata.rdf.store.AbstractTripleStore.getAccessPath(AbstractTripleStore.java:3120)
    at com.bigdata.rdf.sparql.ast.optimizers.ASTRangeCountOptimizer.estimateCardinalities(ASTRangeCountOptimizer.java:199)
    at com.bigdata.rdf.sparql.ast.optimizers.ASTRangeCountOptimizer.estimateCardinality(ASTRangeCountOptimizer.java:191)
    at com.bigdata.rdf.sparql.ast.optimizers.ASTRangeCountOptimizer$RangeCountTask.call(ASTRangeCountOptimizer.java:171)
    at com.bigdata.rdf.sparql.ast.optimizers.ASTRangeCountOptimizer$RangeCountTask.call(ASTRangeCountOptimizer.java:1)
    ... 5 more

Jeremy
From: Bryan T. <br...@sy...> - 2014-10-19 14:22:09
|
Jean-Marc, Please file a ticket with this information and include the long exception trace since it tells us plenty. Thanks, Bryan On Sunday, October 19, 2014, Jean-Marc Vanel <jea...@gm...> wrote: > Hi >> >> I tried this query in latest BigData source code: >> > > >> CONSTRUCT { >> ?thing ?p ?o . >> ?s ?p1 ?thing . >> } >> WHERE { >> { graph <http://jmvanel.free.fr/jmv.rdf#me> >> { ?thing ?p ?o . } >> } UNION { >> graph ?GRAPH >> { ?s ?p1 ?thing . } >> } >> } LIMIT 100 >> >> This may be wrong with respect to the spec. >> But it crashes in BigData, >> with a long exception chain that tells nothing. >> > > Note that in Jena and Virtuoso (dbpedia), this pattern of query is > accepted, > with unexpected results in Jena. > > > The database has been prepared with: > LOAD <http://jmvanel.free.fr/jmv.rdf#me> INTO GRAPH < > http://jmvanel.free.fr/jmv.rdf#me> > LOAD <http://danbri.org/foaf.rdf#danbri> INTO GRAPH < > http://danbri.org/foaf.rdf#danbri> > > >> >> -- >> Jean-Marc Vanel >> Déductions SARL - Consulting, services, training, >> Rule-based programming, Semantic Web >> http://deductions-software.com/ >> +33 (0)6 89 16 29 52 >> Twitter: @jmvanel , @jmvanel_fr ; chat: irc://irc.freenode.net#eulergui >> > > > > -- > Jean-Marc Vanel > Déductions SARL - Consulting, services, training, > Rule-based programming, Semantic Web > http://deductions-software.com/ > +33 (0)6 89 16 29 52 > Twitter: @jmvanel , @jmvanel_fr ; chat: irc://irc.freenode.net#eulergui > -- ---- Bryan Thompson Chief Scientist & Founder SYSTAP, LLC 4501 Tower Road Greensboro, NC 27410 br...@sy... http://bigdata.com http://mapgraph.io CONFIDENTIALITY NOTICE: This email and its contents and attachments are for the sole use of the intended recipient(s) and are confidential or proprietary to SYSTAP. Any unauthorized review, use, disclosure, dissemination or copying of this email or its contents or attachments is prohibited. 
If you have received this communication in error, please notify the sender by reply email and permanently delete all copies of the email and its contents and attachments. |
From: Jean-Marc V. <jea...@gm...> - 2014-10-19 12:52:48
|
> > Hi > > I tried this query in latest BigData source code: > > CONSTRUCT { > ?thing ?p ?o . > ?s ?p1 ?thing . > } > WHERE { > { graph <http://jmvanel.free.fr/jmv.rdf#me> > { ?thing ?p ?o . } > } UNION { > graph ?GRAPH > { ?s ?p1 ?thing . } > } > } LIMIT 100 > > This may be wrong with respect to the spec. > But it crashes in BigData, > with a long exception chain that tells nothing. > Note that in Jena and Virtuoso (dbpedia), this pattern of query is accepted, with unexpected results in Jena. The database has been prepared with: LOAD <http://jmvanel.free.fr/jmv.rdf#me> INTO GRAPH < http://jmvanel.free.fr/jmv.rdf#me> LOAD <http://danbri.org/foaf.rdf#danbri> INTO GRAPH < http://danbri.org/foaf.rdf#danbri> > > -- > Jean-Marc Vanel > Déductions SARL - Consulting, services, training, > Rule-based programming, Semantic Web > http://deductions-software.com/ > +33 (0)6 89 16 29 52 > Twitter: @jmvanel , @jmvanel_fr ; chat: irc://irc.freenode.net#eulergui > -- Jean-Marc Vanel Déductions SARL - Consulting, services, training, Rule-based programming, Semantic Web http://deductions-software.com/ +33 (0)6 89 16 29 52 Twitter: @jmvanel , @jmvanel_fr ; chat: irc://irc.freenode.net#eulergui |
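For reference, SPARQL 1.1 Update allows the two LOAD operations above to be combined into a single update request, with the operations separated by a `;` per the Update grammar; submitted together without the separator, the second LOAD is a syntax error. A minimal sketch using the graph URIs from the message:

```sparql
# One update request containing two operations, separated by ";".
LOAD <http://jmvanel.free.fr/jmv.rdf#me>
  INTO GRAPH <http://jmvanel.free.fr/jmv.rdf#me> ;
LOAD <http://danbri.org/foaf.rdf#danbri>
  INTO GRAPH <http://danbri.org/foaf.rdf#danbri>
```

Running each LOAD as its own request, as Jean-Marc later did, is equally valid; the separator is only needed when several operations share one request body.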
From: Jean-Marc V. <jea...@gm...> - 2014-10-19 12:48:40
|
Hi I tried this query in latest BigData source code: CONSTRUCT { ?thing ?p ?o . ?s ?p1 ?thing . } WHERE { { graph <http://jmvanel.free.fr/jmv.rdf#me> { ?thing ?p ?o . } } UNION { graph ?GRAPH { ?s ?p1 ?thing . } } } LIMIT 100 This may be wrong with respect to the spec. But it crashes in BigData, with a long exception chain that tells nothing. -- Jean-Marc Vanel Déductions SARL - Consulting, services, training, Rule-based programming, Semantic Web http://deductions-software.com/ +33 (0)6 89 16 29 52 Twitter: @jmvanel , @jmvanel_fr ; chat: irc://irc.freenode.net#eulergui |
From: Jennifer <jen...@re...> - 2014-10-18 02:01:32
|
Did I mail to the right list? Or should I mail to some other list? From: "Jennifer"<jen...@re...> Sent: Sat, 18 Oct 2014 00:42:08 To: "big...@li..."<big...@li...>,"br...@sy..."<br...@sy...> Subject: How to report total query execution time taken by BigData Greetings, I loaded DBpedia in bigdata. Now I need to report the performance of bigdata versus my framework for my EDBT 2015 submission, for simple RDF triples of the form <subject> <predicate> <object>. The query evaluation statistics show the following timings: Query Evaluation Statistics: solutions=5, chunks=1, subqueries=0, elapsed=2ms. Another timing, which the bigdata workbench reports in its query history, is: Time: 2014-10-17T18:33:40.960Z; Query: select ?a ?b ?c where {?a ?b ?c}; Execution Time: 51sec, 515ms. I need to report the total query execution time (including query optimization + plan execution + dictionary lookup + any additional time which bigdata takes for query execution) in my publication, but I don't know which one I should report. Any quick help from the BigData team will be deeply appreciated. I am using the bigdata workbench, which I downloaded from http://www.bigdata.com/download, for loading data and for querying it. Get your own FREE website, FREE domain & FREE mobile app with Company email. Know More > |
From: Jennifer <jen...@re...> - 2014-10-17 19:12:21
|
Greetings, I loaded DBpedia in bigdata. Now I need to report the performance of bigdata versus my framework for my EDBT 2015 submission, for simple RDF triples of the form <subject> <predicate> <object>. The query evaluation statistics show the following timings: Query Evaluation Statistics: solutions=5, chunks=1, subqueries=0, elapsed=2ms. Another timing, which the bigdata workbench reports in its query history, is: Time: 2014-10-17T18:33:40.960Z; Query: select ?a ?b ?c where {?a ?b ?c}; Execution Time: 51sec, 515ms. I need to report the total query execution time (including query optimization + plan execution + dictionary lookup + any additional time which bigdata takes for query execution) in my publication, but I don't know which one I should report. Any quick help from the BigData team will be deeply appreciated. I am using the bigdata workbench, which I downloaded from http://www.bigdata.com/download, for loading data and for querying it. |
From: Maria J. <mar...@gm...> - 2014-10-17 15:57:09
|
Dear Bryan, I have created a ticket with ticket number Ticket #1023 <http://trac.bigdata.com/ticket/1023> (new defect) and have assigned it to you. Sorry, I did not understand how to create a unit test for this case, so I did not create one. It contains the same example as provided by another user. The example is a small one; you may use it to load data and then query it using both the old SPARQL syntax and SPARQL*. Cheers, Maria On Fri, Oct 17, 2014 at 6:26 PM, Bryan Thompson <br...@sy...> wrote: > Can you create a ticket at Trac.bigdata.com and assign it to me? Include > the data, query, expected results, and the equivalent SPARQL query. The > best thing is to create a unit test. That will get the quickest turnaround. > > Thanks, > Bryan > On Oct 17, 2014 12:43 PM, "Maria Jackson" <mar...@gm...> > wrote: > >> Sorry, I forgot to mention that I also feel SPARQL* is not giving the correct >> results, although the old SPARQL syntax gives the correct results. >> On Fri, Oct 17, 2014 at 4:12 PM, Maria Jackson < >> mar...@gm...> wrote: >> >>> Dear Bryan, >>> >>> I tried with the exact data and SPARQL and SPARQL* queries which another >>> user Jyoti mentions. >>> >>> The service description shows the following XML content, which I think >>> is in accordance with the wiki. 
Please correct me if I am wrong: >>> >>> <rdf:RDF><rdf:Description rdf:nodeID="service"><rdf:type rdf:resource=" >>> http://www.w3.org/ns/sparql-service-description#Service"/><endpoint >>> rdf:resource="http://12.18.1.5:9999/bigdata/namespace/test/sparql"/><endpoint >>> rdf:resource="http://12.18.1.5:9999/bigdata/LBS/namespace/test/sparql"/><supportedLanguage >>> rdf:resource=" >>> http://www.w3.org/ns/sparql-service-description#SPARQL10Query"/><supportedLanguage >>> rdf:resource=" >>> http://www.w3.org/ns/sparql-service-description#SPARQL11Query"/><supportedLanguage >>> rdf:resource=" >>> http://www.w3.org/ns/sparql-service-description#SPARQL11Update"/><feature >>> rdf:resource=" >>> http://www.w3.org/ns/sparql-service-description#BasicFederatedQuery"/><feature >>> rdf:resource="http://www.bigdata.com/rdf#/features/KB/Mode/Sids"/><feature >>> rdf:resource="http://www.bigdata.com/rdf#/features/KB/TruthMaintenance"/><inputFormat >>> rdf:resource="http://www.w3.org/ns/formats/RDF_XML"/><inputFormat >>> rdf:resource="http://www.w3.org/ns/formats/N-Triples"/><inputFormat >>> rdf:resource="http://www.w3.org/ns/formats/Turtle"/><inputFormat >>> rdf:resource="http://www.w3.org/ns/formats/N3"/><inputFormat >>> rdf:resource="http://www.wiwiss.fu-berlin.de/suhl/bizer/TriG/Spec/"/><inputFormat >>> rdf:resource="http://sw.deri.org/2008/07/n-quads/#n-quads"/><inputFormat >>> rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_XML"/><inputFormat >>> rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_JSON"/><inputFormat >>> rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_CSV"/><inputFormat >>> rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_TSV"/><resultFormat >>> rdf:resource="http://www.w3.org/ns/formats/RDF_XML"/><resultFormat >>> rdf:resource="http://www.w3.org/ns/formats/N-Triples"/><resultFormat >>> rdf:resource="http://www.w3.org/ns/formats/Turtle"/><resultFormat >>> rdf:resource="http://www.w3.org/ns/formats/N3"/><resultFormat >>> 
rdf:resource="http://www.wiwiss.fu-berlin.de/suhl/bizer/TriG/Spec/"/><resultFormat >>> rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_XML"/><resultFormat >>> rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_JSON"/><resultFormat >>> rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_CSV"/><resultFormat >>> rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_TSV"/><defaultDataset >>> rdf:nodeID="defaultDataset"/></rdf:Description><rdf:Description >>> rdf:nodeID="defaultDataset"><rdf:type rdf:resource=" >>> http://www.w3.org/ns/sparql-service-description#Dataset"/><rdf:type >>> rdf:resource="http://rdfs.org/ns/void#Dataset"/><title>test</title><Namespace>test</Namespace><sparqlEndpoint >>> rdf:resource=" >>> http://12.18.1.5:9999/bigdata/namespace/test/sparql/test/sparql"/><sparqlEndpoint >>> rdf:resource=" >>> http://12.18.1.5:9999/bigdata/LBS/namespace/test/sparql/test/sparql"/><uriRegexPattern>^.*</uriRegexPattern><vocabulary >>> rdf:resource="http://12.18.1.5:9999/bigdata/namespace/test/"/><vocabulary >>> rdf:resource="http://www.w3.org/1999/02/22-rdf-syntax-ns"/><defaultGraph >>> rdf:nodeID="defaultGraph"/></rdf:Description><rdf:Description >>> rdf:nodeID="defaultGraph"><rdf:type rdf:resource=" >>> http://www.w3.org/ns/sparql-service-description#Graph"/><triples >>> rdf:datatype="http://www.w3.org/2001/XMLSchema#long">5</triples><entities >>> rdf:datatype="http://www.w3.org/2001/XMLSchema#long">6</entities><properties >>> rdf:datatype="http://www.w3.org/2001/XMLSchema#int">5</properties><classes >>> rdf:datatype="http://www.w3.org/2001/XMLSchema#int">1</classes><propertyPartition >>> rdf:nodeID="node194eq34ovx1"/></rdf:Description><rdf:Description >>> rdf:nodeID="node194eq34ovx1"><property rdf:resource=" >>> http://192.168.1.50:9999/bigdata/namespace/test/said"/><triples >>> rdf:datatype="http://www.w3.org/2001/XMLSchema#long">1</triples></rdf:Description><rdf:Description >>> rdf:nodeID="defaultGraph"><propertyPartition >>> 
rdf:nodeID="node194eq34ovx2"/></rdf:Description><rdf:Description >>> rdf:nodeID="node194eq34ovx2"><property rdf:resource=" >>> http://www.w3.org/1999/02/22-rdf-syntax-ns#object"/><triples >>> rdf:datatype="http://www.w3.org/2001/XMLSchema#long">1</triples></rdf:Description><rdf:Description >>> rdf:nodeID="defaultGraph"><propertyPartition >>> rdf:nodeID="node194eq34ovx3"/></rdf:Description><rdf:Description >>> rdf:nodeID="node194eq34ovx3"><property rdf:resource=" >>> http://www.w3.org/1999/02/22-rdf-syntax-ns#predicate"/><triples >>> rdf:datatype="http://www.w3.org/2001/XMLSchema#long">1</triples></rdf:Description><rdf:Description >>> rdf:nodeID="defaultGraph"><propertyPartition >>> rdf:nodeID="node194eq34ovx4"/></rdf:Description><rdf:Description >>> rdf:nodeID="node194eq34ovx4"><property rdf:resource=" >>> http://www.w3.org/1999/02/22-rdf-syntax-ns#subject"/><triples >>> rdf:datatype="http://www.w3.org/2001/XMLSchema#long">1</triples></rdf:Description><rdf:Description >>> rdf:nodeID="defaultGraph"><propertyPartition >>> rdf:nodeID="node194eq34ovx5"/></rdf:Description><rdf:Description >>> rdf:nodeID="node194eq34ovx5"><property rdf:resource=" >>> http://www.w3.org/1999/02/22-rdf-syntax-ns#type"/><triples >>> rdf:datatype="http://www.w3.org/2001/XMLSchema#long">1</triples></rdf:Description><rdf:Description >>> rdf:nodeID="defaultGraph"><classPartition >>> rdf:nodeID="node194eq34ovx6"/></rdf:Description><rdf:Description >>> rdf:nodeID="node194eq34ovx6"><class rdf:resource=" >>> http://www.w3.org/1999/02/22-rdf-syntax-ns#Statement"/><triples >>> rdf:datatype="http://www.w3.org/2001/XMLSchema#long >>> ">1</triples></rdf:Description></rdf:RDF> >>> >>> On Thu, Oct 16, 2014 at 5:56 AM, Maria Jackson < >>> mar...@gm...> wrote: >>> >>>> Dear All, >>>> >>>> I am trying to load yago2s 18.5GB ( >>>> http://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/yago-naga/yago/downloads/ >>>> 
<https://contactmonkey.com/api/v1/tracker?cm_session=4d54369b-9f5b-4f3b-ae2d-5c05ba2939a0&cm_type=link&cm_link=36eb659b-7a36-459f-95da-e6d711aec4d0&cm_destination=http://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/yago-naga/yago/downloads/>) >>>> in Bigdata. I downloaded bigdata from http://www.bigdata.com/download >>>> <https://contactmonkey.com/api/v1/tracker?cm_session=4d54369b-9f5b-4f3b-ae2d-5c05ba2939a0&cm_type=link&cm_link=0b0bc5d8-f7fe-46b3-b416-31eb502201c4&cm_destination=http://www.bigdata.com/download> and >>>> I am using Bigdata workbench via http://localhost:9999. >>>> >>>> I am loading yago2s in BigData's default namespace "kb". I am loading >>>> yago2s using update by specifying the file path there. While Bigdata is >>>> loading yago I notice that it consumes a significant amount of CPU and RAM >>>> for 4-5 hours, but after that it stops using RAM. But my dilemma is that >>>> BigData workbench still keeps on showing "Running update.." although >>>> BigData does not consume any RAM or CPU for the next 48 hours or so (In >>>> fact it keeps showing "Running update.." until I kill the process). Can you >>>> please suggest as to where am I going wrong as after killing the process >>>> BigData is not able to retrieve any tuples (and shows 0 results even for >>>> the query select ?a?b?c where{?a ?b ?c}) >>>> >>>> >>>> Also I am using BigData on a server with 16 cores and 64 GB RAM? >>>> >>>> Any help in this regard will be deeply appreciated. >>>> >>>> Cheers, >>>> Maria >>>> >>> >>> >> |
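The service description quoted above advertises the KB/Mode/Sids feature, i.e. bigdata's RDR (SPARQL*) mode, in which a reified statement can be written inline with `<< ... >>` instead of being matched through the four rdf:Statement/rdf:subject/rdf:predicate/rdf:object triples visible in the property partitions. A sketch of the two query styles being compared in the ticket; the `ex:` prefix and the shape of the `said` property are assumptions for illustration, not taken from the ticket data:

```sparql
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX ex:  <http://example.org/>

# Plain SPARQL over standard reification: the statement is a resource
# described by four triples. (Run separately from the query below.)
SELECT ?who ?s ?p ?o WHERE {
  ?stmt rdf:type      rdf:Statement ;
        rdf:subject   ?s ;
        rdf:predicate ?p ;
        rdf:object    ?o .
  ?who  ex:said ?stmt .
}

# SPARQL* / RDR form of the same question: the statement is written inline.
SELECT ?who ?s ?p ?o WHERE {
  ?who ex:said << ?s ?p ?o >> .
}
```

Over the same data the two forms should return the same bindings; differing answers are the symptom Maria reports in Ticket #1023.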
From: Bryan T. <br...@sy...> - 2014-10-17 12:57:04
|
Can you create a ticket at Trac.bigdata.com and assign it to me. Include the data, query, expected results, and the equivalent sparql query. The best thing is to create a unit test. That will get the quickest turn around. Thanks, Bryan On Oct 17, 2014 12:43 PM, "Maria Jackson" <mar...@gm...> wrote: > Sorry I forgot to mention I also feel SPARQL* is not giving the correct > results. Although old SPARQL syntax gives the correct results. > On Fri, Oct 17, 2014 at 4:12 PM, Maria Jackson < > mar...@gm...> wrote: > >> Dear Bryan, >> >> I tried with the exact data and SPARQL and SPARQL* queries which another >> user Jyoti mentions. >> >> The service description shows the following XML content, which I think >> are in accordance wiki. Please correct me if I am wrong: >> >> <rdf:RDF><rdf:Description rdf:nodeID="service"><rdf:type rdf:resource=" >> http://www.w3.org/ns/sparql-service-description#Service"/><endpoint >> rdf:resource="http://12.18.1.5:9999/bigdata/namespace/test/sparql"/><endpoint >> rdf:resource="http://12.18.1.5:9999/bigdata/LBS/namespace/test/sparql"/><supportedLanguage >> rdf:resource=" >> http://www.w3.org/ns/sparql-service-description#SPARQL10Query"/><supportedLanguage >> rdf:resource=" >> http://www.w3.org/ns/sparql-service-description#SPARQL11Query"/><supportedLanguage >> rdf:resource=" >> http://www.w3.org/ns/sparql-service-description#SPARQL11Update"/><feature >> rdf:resource=" >> http://www.w3.org/ns/sparql-service-description#BasicFederatedQuery"/><feature >> rdf:resource="http://www.bigdata.com/rdf#/features/KB/Mode/Sids"/><feature >> rdf:resource="http://www.bigdata.com/rdf#/features/KB/TruthMaintenance"/><inputFormat >> rdf:resource="http://www.w3.org/ns/formats/RDF_XML"/><inputFormat >> rdf:resource="http://www.w3.org/ns/formats/N-Triples"/><inputFormat >> rdf:resource="http://www.w3.org/ns/formats/Turtle"/><inputFormat >> rdf:resource="http://www.w3.org/ns/formats/N3"/><inputFormat >> 
rdf:resource="http://www.wiwiss.fu-berlin.de/suhl/bizer/TriG/Spec/"/><inputFormat >> rdf:resource="http://sw.deri.org/2008/07/n-quads/#n-quads"/><inputFormat >> rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_XML"/><inputFormat >> rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_JSON"/><inputFormat >> rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_CSV"/><inputFormat >> rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_TSV"/><resultFormat >> rdf:resource="http://www.w3.org/ns/formats/RDF_XML"/><resultFormat >> rdf:resource="http://www.w3.org/ns/formats/N-Triples"/><resultFormat >> rdf:resource="http://www.w3.org/ns/formats/Turtle"/><resultFormat >> rdf:resource="http://www.w3.org/ns/formats/N3"/><resultFormat >> rdf:resource="http://www.wiwiss.fu-berlin.de/suhl/bizer/TriG/Spec/"/><resultFormat >> rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_XML"/><resultFormat >> rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_JSON"/><resultFormat >> rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_CSV"/><resultFormat >> rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_TSV"/><defaultDataset >> rdf:nodeID="defaultDataset"/></rdf:Description><rdf:Description >> rdf:nodeID="defaultDataset"><rdf:type rdf:resource=" >> http://www.w3.org/ns/sparql-service-description#Dataset"/><rdf:type >> rdf:resource="http://rdfs.org/ns/void#Dataset"/><title>test</title><Namespace>test</Namespace><sparqlEndpoint >> rdf:resource=" >> http://12.18.1.5:9999/bigdata/namespace/test/sparql/test/sparql"/><sparqlEndpoint >> rdf:resource=" >> http://12.18.1.5:9999/bigdata/LBS/namespace/test/sparql/test/sparql"/><uriRegexPattern>^.*</uriRegexPattern><vocabulary >> rdf:resource="http://12.18.1.5:9999/bigdata/namespace/test/"/><vocabulary >> rdf:resource="http://www.w3.org/1999/02/22-rdf-syntax-ns"/><defaultGraph >> rdf:nodeID="defaultGraph"/></rdf:Description><rdf:Description >> rdf:nodeID="defaultGraph"><rdf:type rdf:resource=" >> 
http://www.w3.org/ns/sparql-service-description#Graph"/><triples >> rdf:datatype="http://www.w3.org/2001/XMLSchema#long">5</triples><entities >> rdf:datatype="http://www.w3.org/2001/XMLSchema#long">6</entities><properties >> rdf:datatype="http://www.w3.org/2001/XMLSchema#int">5</properties><classes >> rdf:datatype="http://www.w3.org/2001/XMLSchema#int">1</classes><propertyPartition >> rdf:nodeID="node194eq34ovx1"/></rdf:Description><rdf:Description >> rdf:nodeID="node194eq34ovx1"><property rdf:resource=" >> http://192.168.1.50:9999/bigdata/namespace/test/said"/><triples >> rdf:datatype="http://www.w3.org/2001/XMLSchema#long">1</triples></rdf:Description><rdf:Description >> rdf:nodeID="defaultGraph"><propertyPartition >> rdf:nodeID="node194eq34ovx2"/></rdf:Description><rdf:Description >> rdf:nodeID="node194eq34ovx2"><property rdf:resource=" >> http://www.w3.org/1999/02/22-rdf-syntax-ns#object"/><triples >> rdf:datatype="http://www.w3.org/2001/XMLSchema#long">1</triples></rdf:Description><rdf:Description >> rdf:nodeID="defaultGraph"><propertyPartition >> rdf:nodeID="node194eq34ovx3"/></rdf:Description><rdf:Description >> rdf:nodeID="node194eq34ovx3"><property rdf:resource=" >> http://www.w3.org/1999/02/22-rdf-syntax-ns#predicate"/><triples >> rdf:datatype="http://www.w3.org/2001/XMLSchema#long">1</triples></rdf:Description><rdf:Description >> rdf:nodeID="defaultGraph"><propertyPartition >> rdf:nodeID="node194eq34ovx4"/></rdf:Description><rdf:Description >> rdf:nodeID="node194eq34ovx4"><property rdf:resource=" >> http://www.w3.org/1999/02/22-rdf-syntax-ns#subject"/><triples >> rdf:datatype="http://www.w3.org/2001/XMLSchema#long">1</triples></rdf:Description><rdf:Description >> rdf:nodeID="defaultGraph"><propertyPartition >> rdf:nodeID="node194eq34ovx5"/></rdf:Description><rdf:Description >> rdf:nodeID="node194eq34ovx5"><property rdf:resource=" >> http://www.w3.org/1999/02/22-rdf-syntax-ns#type"/><triples rdf:datatype=" >> 
http://www.w3.org/2001/XMLSchema#long">1</triples></rdf:Description><rdf:Description >> rdf:nodeID="defaultGraph"><classPartition >> rdf:nodeID="node194eq34ovx6"/></rdf:Description><rdf:Description >> rdf:nodeID="node194eq34ovx6"><class rdf:resource=" >> http://www.w3.org/1999/02/22-rdf-syntax-ns#Statement"/><triples >> rdf:datatype="http://www.w3.org/2001/XMLSchema#long >> ">1</triples></rdf:Description></rdf:RDF> >> >> On Thu, Oct 16, 2014 at 5:56 AM, Maria Jackson < >> mar...@gm...> wrote: >> >>> Dear All, >>> >>> I am trying to load yago2s 18.5GB ( >>> http://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/yago-naga/yago/downloads/ >>> <https://contactmonkey.com/api/v1/tracker?cm_session=4d54369b-9f5b-4f3b-ae2d-5c05ba2939a0&cm_type=link&cm_link=36eb659b-7a36-459f-95da-e6d711aec4d0&cm_destination=http://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/yago-naga/yago/downloads/>) >>> in Bigdata. I downloaded bigdata from http://www.bigdata.com/download >>> <https://contactmonkey.com/api/v1/tracker?cm_session=4d54369b-9f5b-4f3b-ae2d-5c05ba2939a0&cm_type=link&cm_link=0b0bc5d8-f7fe-46b3-b416-31eb502201c4&cm_destination=http://www.bigdata.com/download> and >>> I am using Bigdata workbench via http://localhost:9999. >>> >>> I am loading yago2s in BigData's default namespace "kb". I am loading >>> yago2s using update by specifying the file path there. While Bigdata is >>> loading yago I notice that it consumes a significant amount of CPU and RAM >>> for 4-5 hours, but after that it stops using RAM. But my dilemma is that >>> BigData workbench still keeps on showing "Running update.." although >>> BigData does not consume any RAM or CPU for the next 48 hours or so (In >>> fact it keeps showing "Running update.." until I kill the process). 
Can you >>> please suggest as to where am I going wrong as after killing the process >>> BigData is not able to retrieve any tuples (and shows 0 results even for >>> the query select ?a?b?c where{?a ?b ?c}) >>> >>> >>> Also I am using BigData on a server with 16 cores and 64 GB RAM? >>> >>> Any help in this regard will be deeply appreciated. >>> >>> Cheers, >>> Maria >>> >> >> > |
From: Maria J. <mar...@gm...> - 2014-10-17 10:43:30
|
Sorry I forgot to mention I also feel SPARQL* is not giving the correct results. Although old SPARQL syntax gives the correct results. On Fri, Oct 17, 2014 at 4:12 PM, Maria Jackson <mar...@gm...> wrote: > Dear Bryan, > > I tried with the exact data and SPARQL and SPARQL* queries which another > user Jyoti mentions. > > The service description shows the following XML content, which I think are > in accordance wiki. Please correct me if I am wrong: > > <rdf:RDF><rdf:Description rdf:nodeID="service"><rdf:type rdf:resource=" > http://www.w3.org/ns/sparql-service-description#Service"/><endpoint > rdf:resource="http://12.18.1.5:9999/bigdata/namespace/test/sparql"/><endpoint > rdf:resource="http://12.18.1.5:9999/bigdata/LBS/namespace/test/sparql"/><supportedLanguage > rdf:resource=" > http://www.w3.org/ns/sparql-service-description#SPARQL10Query"/><supportedLanguage > rdf:resource=" > http://www.w3.org/ns/sparql-service-description#SPARQL11Query"/><supportedLanguage > rdf:resource=" > http://www.w3.org/ns/sparql-service-description#SPARQL11Update"/><feature > rdf:resource=" > http://www.w3.org/ns/sparql-service-description#BasicFederatedQuery"/><feature > rdf:resource="http://www.bigdata.com/rdf#/features/KB/Mode/Sids"/><feature > rdf:resource="http://www.bigdata.com/rdf#/features/KB/TruthMaintenance"/><inputFormat > rdf:resource="http://www.w3.org/ns/formats/RDF_XML"/><inputFormat > rdf:resource="http://www.w3.org/ns/formats/N-Triples"/><inputFormat > rdf:resource="http://www.w3.org/ns/formats/Turtle"/><inputFormat > rdf:resource="http://www.w3.org/ns/formats/N3"/><inputFormat > rdf:resource="http://www.wiwiss.fu-berlin.de/suhl/bizer/TriG/Spec/"/><inputFormat > rdf:resource="http://sw.deri.org/2008/07/n-quads/#n-quads"/><inputFormat > rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_XML"/><inputFormat > rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_JSON"/><inputFormat > rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_CSV"/><inputFormat > 
rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_TSV"/><resultFormat > rdf:resource="http://www.w3.org/ns/formats/RDF_XML"/><resultFormat > rdf:resource="http://www.w3.org/ns/formats/N-Triples"/><resultFormat > rdf:resource="http://www.w3.org/ns/formats/Turtle"/><resultFormat > rdf:resource="http://www.w3.org/ns/formats/N3"/><resultFormat > rdf:resource="http://www.wiwiss.fu-berlin.de/suhl/bizer/TriG/Spec/"/><resultFormat > rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_XML"/><resultFormat > rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_JSON"/><resultFormat > rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_CSV"/><resultFormat > rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_TSV"/><defaultDataset > rdf:nodeID="defaultDataset"/></rdf:Description><rdf:Description > rdf:nodeID="defaultDataset"><rdf:type rdf:resource=" > http://www.w3.org/ns/sparql-service-description#Dataset"/><rdf:type > rdf:resource="http://rdfs.org/ns/void#Dataset"/><title>test</title><Namespace>test</Namespace><sparqlEndpoint > rdf:resource=" > http://12.18.1.5:9999/bigdata/namespace/test/sparql/test/sparql"/><sparqlEndpoint > rdf:resource=" > http://12.18.1.5:9999/bigdata/LBS/namespace/test/sparql/test/sparql"/><uriRegexPattern>^.*</uriRegexPattern><vocabulary > rdf:resource="http://12.18.1.5:9999/bigdata/namespace/test/"/><vocabulary > rdf:resource="http://www.w3.org/1999/02/22-rdf-syntax-ns"/><defaultGraph > rdf:nodeID="defaultGraph"/></rdf:Description><rdf:Description > rdf:nodeID="defaultGraph"><rdf:type rdf:resource=" > http://www.w3.org/ns/sparql-service-description#Graph"/><triples > rdf:datatype="http://www.w3.org/2001/XMLSchema#long">5</triples><entities > rdf:datatype="http://www.w3.org/2001/XMLSchema#long">6</entities><properties > rdf:datatype="http://www.w3.org/2001/XMLSchema#int">5</properties><classes > rdf:datatype="http://www.w3.org/2001/XMLSchema#int">1</classes><propertyPartition > 
rdf:nodeID="node194eq34ovx1"/></rdf:Description><rdf:Description > rdf:nodeID="node194eq34ovx1"><property rdf:resource=" > http://192.168.1.50:9999/bigdata/namespace/test/said"/><triples > rdf:datatype="http://www.w3.org/2001/XMLSchema#long">1</triples></rdf:Description><rdf:Description > rdf:nodeID="defaultGraph"><propertyPartition > rdf:nodeID="node194eq34ovx2"/></rdf:Description><rdf:Description > rdf:nodeID="node194eq34ovx2"><property rdf:resource=" > http://www.w3.org/1999/02/22-rdf-syntax-ns#object"/><triples > rdf:datatype="http://www.w3.org/2001/XMLSchema#long">1</triples></rdf:Description><rdf:Description > rdf:nodeID="defaultGraph"><propertyPartition > rdf:nodeID="node194eq34ovx3"/></rdf:Description><rdf:Description > rdf:nodeID="node194eq34ovx3"><property rdf:resource=" > http://www.w3.org/1999/02/22-rdf-syntax-ns#predicate"/><triples > rdf:datatype="http://www.w3.org/2001/XMLSchema#long">1</triples></rdf:Description><rdf:Description > rdf:nodeID="defaultGraph"><propertyPartition > rdf:nodeID="node194eq34ovx4"/></rdf:Description><rdf:Description > rdf:nodeID="node194eq34ovx4"><property rdf:resource=" > http://www.w3.org/1999/02/22-rdf-syntax-ns#subject"/><triples > rdf:datatype="http://www.w3.org/2001/XMLSchema#long">1</triples></rdf:Description><rdf:Description > rdf:nodeID="defaultGraph"><propertyPartition > rdf:nodeID="node194eq34ovx5"/></rdf:Description><rdf:Description > rdf:nodeID="node194eq34ovx5"><property rdf:resource=" > http://www.w3.org/1999/02/22-rdf-syntax-ns#type"/><triples rdf:datatype=" > http://www.w3.org/2001/XMLSchema#long">1</triples></rdf:Description><rdf:Description > rdf:nodeID="defaultGraph"><classPartition > rdf:nodeID="node194eq34ovx6"/></rdf:Description><rdf:Description > rdf:nodeID="node194eq34ovx6"><class rdf:resource=" > http://www.w3.org/1999/02/22-rdf-syntax-ns#Statement"/><triples > rdf:datatype="http://www.w3.org/2001/XMLSchema#long > ">1</triples></rdf:Description></rdf:RDF> > > On Thu, Oct 16, 2014 at 5:56 AM, Maria 
Jackson < > mar...@gm...> wrote: > >> Dear All, >> >> I am trying to load yago2s 18.5GB ( >> http://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/yago-naga/yago/downloads/ >> <https://contactmonkey.com/api/v1/tracker?cm_session=4d54369b-9f5b-4f3b-ae2d-5c05ba2939a0&cm_type=link&cm_link=36eb659b-7a36-459f-95da-e6d711aec4d0&cm_destination=http://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/yago-naga/yago/downloads/>) >> in Bigdata. I downloaded bigdata from http://www.bigdata.com/download >> <https://contactmonkey.com/api/v1/tracker?cm_session=4d54369b-9f5b-4f3b-ae2d-5c05ba2939a0&cm_type=link&cm_link=0b0bc5d8-f7fe-46b3-b416-31eb502201c4&cm_destination=http://www.bigdata.com/download> and >> I am using Bigdata workbench via http://localhost:9999. >> >> I am loading yago2s in BigData's default namespace "kb". I am loading >> yago2s using update by specifying the file path there. While Bigdata is >> loading yago I notice that it consumes a significant amount of CPU and RAM >> for 4-5 hours, but after that it stops using RAM. But my dilemma is that >> BigData workbench still keeps on showing "Running update.." although >> BigData does not consume any RAM or CPU for the next 48 hours or so (In >> fact it keeps showing "Running update.." until I kill the process). Can you >> please suggest as to where am I going wrong as after killing the process >> BigData is not able to retrieve any tuples (and shows 0 results even for >> the query select ?a?b?c where{?a ?b ?c}) >> >> >> Also I am using BigData on a server with 16 cores and 64 GB RAM? >> >> Any help in this regard will be deeply appreciated. >> >> Cheers, >> Maria >> > > |
From: Maria J. <mar...@gm...> - 2014-10-17 10:42:18
|
Dear Bryan, I tried with the exact data and SPARQL and SPARQL* queries which another user Jyoti mentions. The service description shows the following XML content, which I think are in accordance wiki. Please correct me if I am wrong: <rdf:RDF><rdf:Description rdf:nodeID="service"><rdf:type rdf:resource=" http://www.w3.org/ns/sparql-service-description#Service"/><endpoint rdf:resource="http://12.18.1.5:9999/bigdata/namespace/test/sparql"/><endpoint rdf:resource="http://12.18.1.5:9999/bigdata/LBS/namespace/test/sparql"/><supportedLanguage rdf:resource="http://www.w3.org/ns/sparql-service-description#SPARQL10Query"/><supportedLanguage rdf:resource="http://www.w3.org/ns/sparql-service-description#SPARQL11Query"/><supportedLanguage rdf:resource="http://www.w3.org/ns/sparql-service-description#SPARQL11Update"/><feature rdf:resource=" http://www.w3.org/ns/sparql-service-description#BasicFederatedQuery"/><feature rdf:resource="http://www.bigdata.com/rdf#/features/KB/Mode/Sids"/><feature rdf:resource="http://www.bigdata.com/rdf#/features/KB/TruthMaintenance"/><inputFormat rdf:resource="http://www.w3.org/ns/formats/RDF_XML"/><inputFormat rdf:resource="http://www.w3.org/ns/formats/N-Triples"/><inputFormat rdf:resource="http://www.w3.org/ns/formats/Turtle"/><inputFormat rdf:resource="http://www.w3.org/ns/formats/N3"/><inputFormat rdf:resource=" http://www.wiwiss.fu-berlin.de/suhl/bizer/TriG/Spec/"/><inputFormat rdf:resource="http://sw.deri.org/2008/07/n-quads/#n-quads"/><inputFormat rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_XML"/><inputFormat rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_JSON"/><inputFormat rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_CSV"/><inputFormat rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_TSV"/><resultFormat rdf:resource="http://www.w3.org/ns/formats/RDF_XML"/><resultFormat rdf:resource="http://www.w3.org/ns/formats/N-Triples"/><resultFormat 
rdf:resource="http://www.w3.org/ns/formats/Turtle"/><resultFormat rdf:resource="http://www.w3.org/ns/formats/N3"/><resultFormat rdf:resource=" http://www.wiwiss.fu-berlin.de/suhl/bizer/TriG/Spec/"/><resultFormat rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_XML"/><resultFormat rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_JSON"/><resultFormat rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_CSV"/><resultFormat rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_TSV"/><defaultDataset rdf:nodeID="defaultDataset"/></rdf:Description><rdf:Description rdf:nodeID="defaultDataset"><rdf:type rdf:resource=" http://www.w3.org/ns/sparql-service-description#Dataset"/><rdf:type rdf:resource="http://rdfs.org/ns/void#Dataset"/><title>test</title><Namespace>test</Namespace><sparqlEndpoint rdf:resource=" http://12.18.1.5:9999/bigdata/namespace/test/sparql/test/sparql"/><sparqlEndpoint rdf:resource=" http://12.18.1.5:9999/bigdata/LBS/namespace/test/sparql/test/sparql"/><uriRegexPattern>^.*</uriRegexPattern><vocabulary rdf:resource="http://12.18.1.5:9999/bigdata/namespace/test/"/><vocabulary rdf:resource="http://www.w3.org/1999/02/22-rdf-syntax-ns"/><defaultGraph rdf:nodeID="defaultGraph"/></rdf:Description><rdf:Description rdf:nodeID="defaultGraph"><rdf:type rdf:resource=" http://www.w3.org/ns/sparql-service-description#Graph"/><triples rdf:datatype="http://www.w3.org/2001/XMLSchema#long">5</triples><entities rdf:datatype="http://www.w3.org/2001/XMLSchema#long">6</entities><properties rdf:datatype="http://www.w3.org/2001/XMLSchema#int">5</properties><classes rdf:datatype="http://www.w3.org/2001/XMLSchema#int">1</classes><propertyPartition rdf:nodeID="node194eq34ovx1"/></rdf:Description><rdf:Description rdf:nodeID="node194eq34ovx1"><property rdf:resource=" http://192.168.1.50:9999/bigdata/namespace/test/said"/><triples rdf:datatype="http://www.w3.org/2001/XMLSchema#long">1</triples></rdf:Description><rdf:Description 
rdf:nodeID="defaultGraph"><propertyPartition rdf:nodeID="node194eq34ovx2"/></rdf:Description><rdf:Description rdf:nodeID="node194eq34ovx2"><property rdf:resource=" http://www.w3.org/1999/02/22-rdf-syntax-ns#object"/><triples rdf:datatype=" http://www.w3.org/2001/XMLSchema#long">1</triples></rdf:Description><rdf:Description rdf:nodeID="defaultGraph"><propertyPartition rdf:nodeID="node194eq34ovx3"/></rdf:Description><rdf:Description rdf:nodeID="node194eq34ovx3"><property rdf:resource=" http://www.w3.org/1999/02/22-rdf-syntax-ns#predicate"/><triples rdf:datatype="http://www.w3.org/2001/XMLSchema#long">1</triples></rdf:Description><rdf:Description rdf:nodeID="defaultGraph"><propertyPartition rdf:nodeID="node194eq34ovx4"/></rdf:Description><rdf:Description rdf:nodeID="node194eq34ovx4"><property rdf:resource=" http://www.w3.org/1999/02/22-rdf-syntax-ns#subject"/><triples rdf:datatype=" http://www.w3.org/2001/XMLSchema#long">1</triples></rdf:Description><rdf:Description rdf:nodeID="defaultGraph"><propertyPartition rdf:nodeID="node194eq34ovx5"/></rdf:Description><rdf:Description rdf:nodeID="node194eq34ovx5"><property rdf:resource=" http://www.w3.org/1999/02/22-rdf-syntax-ns#type"/><triples rdf:datatype=" http://www.w3.org/2001/XMLSchema#long">1</triples></rdf:Description><rdf:Description rdf:nodeID="defaultGraph"><classPartition rdf:nodeID="node194eq34ovx6"/></rdf:Description><rdf:Description rdf:nodeID="node194eq34ovx6"><class rdf:resource=" http://www.w3.org/1999/02/22-rdf-syntax-ns#Statement"/><triples rdf:datatype="http://www.w3.org/2001/XMLSchema#long ">1</triples></rdf:Description></rdf:RDF> On Thu, Oct 16, 2014 at 5:56 AM, Maria Jackson <mar...@gm...> wrote: > Dear All, > > I am trying to load yago2s 18.5GB ( > http://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/yago-naga/yago/downloads/ > 
<https://contactmonkey.com/api/v1/tracker?cm_session=4d54369b-9f5b-4f3b-ae2d-5c05ba2939a0&cm_type=link&cm_link=36eb659b-7a36-459f-95da-e6d711aec4d0&cm_destination=http://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/yago-naga/yago/downloads/>) > in Bigdata. I downloaded bigdata from http://www.bigdata.com/download > <https://contactmonkey.com/api/v1/tracker?cm_session=4d54369b-9f5b-4f3b-ae2d-5c05ba2939a0&cm_type=link&cm_link=0b0bc5d8-f7fe-46b3-b416-31eb502201c4&cm_destination=http://www.bigdata.com/download> and > I am using Bigdata workbench via http://localhost:9999. > > I am loading yago2s in BigData's default namespace "kb". I am loading > yago2s using update by specifying the file path there. While Bigdata is > loading yago I notice that it consumes a significant amount of CPU and RAM > for 4-5 hours, but after that it stops using RAM. But my dilemma is that > BigData workbench still keeps on showing "Running update.." although > BigData does not consume any RAM or CPU for the next 48 hours or so (In > fact it keeps showing "Running update.." until I kill the process). Can you > please suggest as to where am I going wrong as after killing the process > BigData is not able to retrieve any tuples (and shows 0 results even for > the query select ?a?b?c where{?a ?b ?c}) > > > Also I am using BigData on a server with 16 cores and 64 GB RAM? > > Any help in this regard will be deeply appreciated. > > Cheers, > Maria > |
From: Bryan T. <br...@sy...> - 2014-10-17 10:28:08
|
If you look at the service description does it indicate the RDR mode per the wiki page? The mode is a durable configuration property for a bigdata namespace. It can only be set for a new namespace. Thanks, Bryan Bryan On Oct 17, 2014 12:23 PM, "Jyoti Leeka" <jy...@ii...> wrote: > Dear Mr. Thompson, > > Thanks a lot for the help. > > In one of the other discussions which you have answered on the forum > you have advised to use SPARQL* syntax. I tried to use the below > mentioned SPARQL* syntax for querying reified triples I have mentioned > in my example: > select ?tolkien ?rings where { <<?tolkien <wrote> ?rings>> <said> > <Wikipedia>} > > To my surprise BigData does not return any result with SPARQL* syntax. > But it does return the correct result with the SPARQL query: > select ?tolkien ?rings where { ?x > <http://www.w3.org/1999/02/22-rdf-syntax-ns#subject> ?tolkien. ?x > <http://www.w3.org/1999/02/22-rdf-syntax-ns#predicate> <wrote>. ?x > <http://www.w3.org/1999/02/22-rdf-syntax-ns#object> ?rings. ?x <said> > <Wikipedia>. } > > Can you please suggest as to where am I going wrong with SPARQL*? > > > Thanks & Regards, > Jyoti > > > FYI the discussion.... > You MUST go to the "NAMESPACES" tab and choose "Create namespace" and > specify a new namespace name and choose the RDR mode from the list of > topics (it defaults to triples - the choices are triples, rdr, quads). > > I would advise the SPARQL* syntax. > > Please try this on a small sample first to be sure that you have > everything setup correctly. > > Thanks, > Bryan > > On Fri, Oct 17, 2014 at 2:17 PM, Bryan Thompson <br...@sy...> wrote: > > You would need to add timers into the AST2BopUtility.convert() method. 
> > Bryan > > > > > > On Thursday, October 16, 2014, Jyoti Leeka <jy...@ii...> wrote: > >> > >> Dear All, > >> > >> I loaded the following data into BigData in RDR (by creating namespace > >> in mode RDR) using BigData workbench > >> (http://www.bigdata.com/download): > >> > >> @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . > >> <x> rdf:type rdf:Statement . > >> <x> rdf:subject <Tolkien> . > >> <x> rdf:predicate <wrote> . > >> <x> rdf:object <LordOfTheRings> . > >> <x> <said> <Wikipedia> . > >> > >> After which I queried bigdata using: > >> select ?tolkien ?rings where { ?x > >> <http://www.w3.org/1999/02/22-rdf-syntax-ns#subject> ?tolkien. ?x > >> <http://www.w3.org/1999/02/22-rdf-syntax-ns#predicate> <wrote>. ?x > >> <http://www.w3.org/1999/02/22-rdf-syntax-ns#object> ?rings. ?x <said> > >> <Wikipedia>. } > >> > >> Bigdata workbench reported the execution time of this query, but I am > >> also interested in getting the detailed breakdown i.e. query optimizer > >> cost and query runtime cost. Is it possible to get such a detailed > >> breakdown. If yes, how? > >> > >> Thanks & Regards, > >> Jyoti Leeka > >> PhD Student > >> IIIT-Delhi > >> India > >> > >> > >> > ------------------------------------------------------------------------------ > >> Comprehensive Server Monitoring with Site24x7. > >> Monitor 10 servers for $9/Month. > >> Get alerted through email, SMS, voice calls or mobile push > notifications. > >> Take corrective actions from your mobile device. > >> http://p.sf.net/sfu/Zoho > >> _______________________________________________ > >> Bigdata-developers mailing list > >> Big...@li... > >> https://lists.sourceforge.net/lists/listinfo/bigdata-developers > > > > > > > > -- > > ---- > > Bryan Thompson > > Chief Scientist & Founder > > SYSTAP, LLC > > 4501 Tower Road > > Greensboro, NC 27410 > > br...@sy... 
> > http://bigdata.com > > http://mapgraph.io > > > > CONFIDENTIALITY NOTICE: This email and its contents and attachments are > for > > the sole use of the intended recipient(s) and are confidential or > > proprietary to SYSTAP. Any unauthorized review, use, disclosure, > > dissemination or copying of this email or its contents or attachments is > > prohibited. If you have received this communication in error, please > notify > > the sender by reply email and permanently delete all copies of the email > and > > its contents and attachments. > > > > > |
From: Jyoti L. <jy...@ii...> - 2014-10-17 10:24:06
|
Dear Mr. Thompson, Thanks a lot for the help. In one of the other discussions which you have answered on the forum you have advised to use SPARQL* syntax. I tried to use the below mentioned SPARQL* syntax for querying reified triples I have mentioned in my example: select ?tolkien ?rings where { <<?tolkien <wrote> ?rings>> <said> <Wikipedia>} To my surprise BigData does not return any result with SPARQL* syntax. But it does return the correct result with the SPARQL query: select ?tolkien ?rings where { ?x <http://www.w3.org/1999/02/22-rdf-syntax-ns#subject> ?tolkien. ?x <http://www.w3.org/1999/02/22-rdf-syntax-ns#predicate> <wrote>. ?x <http://www.w3.org/1999/02/22-rdf-syntax-ns#object> ?rings. ?x <said> <Wikipedia>. } Can you please suggest as to where am I going wrong with SPARQL*? Thanks & Regards, Jyoti FYI the discussion.... You MUST go to the "NAMESPACES" tab and choose "Create namespace" and specify a new namespace name and choose the RDR mode from the list of topics (it defaults to triples - the choices are triples, rdr, quads). I would advise the SPARQL* syntax. Please try this on a small sample first to be sure that you have everything setup correctly. Thanks, Bryan On Fri, Oct 17, 2014 at 2:17 PM, Bryan Thompson <br...@sy...> wrote: > You would need to add timers into the AST2BopUtility.convert() method. > Bryan > > > On Thursday, October 16, 2014, Jyoti Leeka <jy...@ii...> wrote: >> >> Dear All, >> >> I loaded the following data into BigData in RDR (by creating namespace >> in mode RDR) using BigData workbench >> (http://www.bigdata.com/download): >> >> @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . >> <x> rdf:type rdf:Statement . >> <x> rdf:subject <Tolkien> . >> <x> rdf:predicate <wrote> . >> <x> rdf:object <LordOfTheRings> . >> <x> <said> <Wikipedia> . >> >> After which I queried bigdata using: >> select ?tolkien ?rings where { ?x >> <http://www.w3.org/1999/02/22-rdf-syntax-ns#subject> ?tolkien. 
?x >> <http://www.w3.org/1999/02/22-rdf-syntax-ns#predicate> <wrote>. ?x >> <http://www.w3.org/1999/02/22-rdf-syntax-ns#object> ?rings. ?x <said> >> <Wikipedia>. } >> >> Bigdata workbench reported the execution time of this query, but I am >> also interested in getting the detailed breakdown i.e. query optimizer >> cost and query runtime cost. Is it possible to get such a detailed >> breakdown. If yes, how? >> >> Thanks & Regards, >> Jyoti Leeka >> PhD Student >> IIIT-Delhi >> India >> >> >> ------------------------------------------------------------------------------ >> Comprehensive Server Monitoring with Site24x7. >> Monitor 10 servers for $9/Month. >> Get alerted through email, SMS, voice calls or mobile push notifications. >> Take corrective actions from your mobile device. >> http://p.sf.net/sfu/Zoho >> _______________________________________________ >> Bigdata-developers mailing list >> Big...@li... >> https://lists.sourceforge.net/lists/listinfo/bigdata-developers > > > > -- > ---- > Bryan Thompson > Chief Scientist & Founder > SYSTAP, LLC > 4501 Tower Road > Greensboro, NC 27410 > br...@sy... > http://bigdata.com > http://mapgraph.io > > CONFIDENTIALITY NOTICE: This email and its contents and attachments are for > the sole use of the intended recipient(s) and are confidential or > proprietary to SYSTAP. Any unauthorized review, use, disclosure, > dissemination or copying of this email or its contents or attachments is > prohibited. If you have received this communication in error, please notify > the sender by reply email and permanently delete all copies of the email and > its contents and attachments. > > |
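The two query forms discussed in this message can be reproduced side by side over the NanoSparqlServer REST API. The sketch below is illustrative only: the endpoint URL and the `test` namespace are assumptions, and the SPARQL* form is expected to match only against a namespace that was created in RDR mode.

```python
# Hedged sketch: run the reification-vocabulary query and the SPARQL* query
# against a Bigdata NanoSparqlServer endpoint and compare the results.
# The endpoint URL and the "test" namespace are assumptions for illustration.
import urllib.parse
import urllib.request

ENDPOINT = "http://localhost:9999/bigdata/namespace/test/sparql"  # assumed

# Plain SPARQL over the W3C reification vocabulary (works in any mode).
REIFIED_QUERY = """
SELECT ?tolkien ?rings WHERE {
  ?x <http://www.w3.org/1999/02/22-rdf-syntax-ns#subject>   ?tolkien .
  ?x <http://www.w3.org/1999/02/22-rdf-syntax-ns#predicate> <wrote> .
  ?x <http://www.w3.org/1999/02/22-rdf-syntax-ns#object>    ?rings .
  ?x <said> <Wikipedia> .
}
"""

# SPARQL* form: expected to return rows only in an RDR-mode namespace.
SPARQL_STAR_QUERY = """
SELECT ?tolkien ?rings WHERE {
  <<?tolkien <wrote> ?rings>> <said> <Wikipedia> .
}
"""

def run_query(query: str, endpoint: str = ENDPOINT) -> str:
    """POST a SPARQL query and return the JSON results document as text."""
    data = urllib.parse.urlencode({"query": query}).encode("utf-8")
    req = urllib.request.Request(
        endpoint,
        data=data,
        headers={"Accept": "application/sparql-results+json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")

# Usage (requires a running server):
#   print(run_query(REIFIED_QUERY))
#   print(run_query(SPARQL_STAR_QUERY))
```

If the first query returns a binding and the second returns none, that is consistent with the namespace having been created in plain triples mode rather than RDR.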
From: Bryan T. <br...@sy...> - 2014-10-17 09:00:19
|
You MUST go to the "NAMESPACES" tab and choose "Create namespace" and specify a new namespace name and choose the RDR mode from the list of topics (it defaults to triples - the choices are triples, rdr, quads). I would advise the SPARQL* syntax. Please try this on a small sample first to be sure that you have everything setup correctly. Thanks, Bryan On Thu, Oct 16, 2014 at 6:39 PM, Maria Jackson <mar...@gm...> wrote: > Dear Bryan, > > Just to confirm... > RDR option in update window is disabled when we load a large file (i.e. > specify its path). Is it ok to only specify the namespace as RDR when > loading reified triples? > > Also can we query reified triples in Bigdata using SPARQL* or will the > plain old SPARQL with reification work? > > > Cheers, > Maria > On Fri, Oct 17, 2014 at 4:04 AM, Bryan Thompson <br...@sy...> wrote: > >> The database instance needs to be in the mode that supports RDR. This is >> called the "statement identifiers" mode internally for historical reasons. >> Today the feature is implemented using inlining. However, the details of >> the implementation mechanism should be invisible. That is the whole point >> of the RDR model. >> >> The RDR mode will support efficient reified triples, inference (if >> enabled), and triples. We will be introducing RDR support for quads but >> it is not yet in the released platform. The workbench might label the >> option differently. Choose to create a new namespace. Then choose the >> mode that supports triples with statement level provenance / RDR / >> statement identifiers. The default mode for the NSS in the WAR deployment >> is, I believe, quads. >> >> There are two new papers that might be of interest to you linked from >> http://blog.bigdata.com. They are on RDF*/SPARQL* (a formal treatment >> of RDR) and on a reconciliation of RDF and property graphs based on the RDR >> model. Many thanks to Olaf Hartig for his efforts on these papers. 
>> >> Thanks, >> Bryan >> >> >> On Thursday, October 16, 2014, Maria Jackson <mar...@gm...> >> wrote: >> >>> Dear Bryan, >>> >>> I need to load reified triples. The option of RDR in update window is >>> disabled when we specify the path. By this I mean how do we tell BigData >>> that we are entering reified triples during update. I am entering reified >>> triples in BigData according to W3C standard ( >>> http://www.w3.org/TR/2004/REC-rdf-primer-20040210/#reification). Also >>> do we also need to specify the present namespace to be RDR instead of >>> triples. >>> >>> Another question that I have is how can we query reified triples using >>> SPARQL. Should I use the query reification vocabulary defined here >>> http://www.w3.org/2001/sw/DataAccess/rq23/#queryReification or do you >>> have your own vocabulary for querying RDR data (containing reified triples). >>> >>> >>> Cheers, >>> Maria. >>> >>> On Thu, Oct 16, 2014 at 3:55 PM, Bryan Thompson <br...@sy...> >>> wrote: >>> >>>> There is a distinction between the workbench (JavaScript in the >>>> browser) and the database process (java running inside of a servlet >>>> container in this case). In anomalous conditions the workbench might not >>>> correctly track what is happening on the database side. I suggest that you >>>> check the database log output and see what messages were generated during >>>> that time. I suspect that you might have something like a "GC Overhead >>>> limit exceeded", which is a type of out of memory exception for java where >>>> too much of the total time is spent in garbage collection. Or perhaps some >>>> other root cause that abnormally terminated the update request in a manner >>>> that the workbench was unable to identify. >>>> >>>> If the update failed, then the database will not contain any triples. >>>> If you are trying to load a very large dataset it may make sense to upload >>>> the data in a series of smaller chunks. 
>>>> >>>> There is a "monitor" option that will show you the status of the update >>>> requests as they are being processed. When loading large files it will >>>> echo back on the HTTP connection a summary of the number of statements >>>> loaded over time during the load. This will provide you with better >>>> feedback. >>>> >>>> But I think that you have an error condition on the server that has >>>> halted the load. >>>> >>>> Thansk, >>>> Bryan >>>> >>>> >>>> On Wednesday, October 15, 2014, Maria Jackson < >>>> mar...@gm...> wrote: >>>> >>>>> Dear All, >>>>> >>>>> I am trying to load yago2s 18.5GB ( >>>>> http://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/yago-naga/yago/downloads/ >>>>> <https://contactmonkey.com/api/v1/tracker?cm_session=4d54369b-9f5b-4f3b-ae2d-5c05ba2939a0&cm_type=link&cm_link=36eb659b-7a36-459f-95da-e6d711aec4d0&cm_destination=http://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/yago-naga/yago/downloads/>) >>>>> in Bigdata. I downloaded bigdata from http://www.bigdata.com/download >>>>> <https://contactmonkey.com/api/v1/tracker?cm_session=4d54369b-9f5b-4f3b-ae2d-5c05ba2939a0&cm_type=link&cm_link=0b0bc5d8-f7fe-46b3-b416-31eb502201c4&cm_destination=http://www.bigdata.com/download> and >>>>> I am using Bigdata workbench via http://localhost:9999. >>>>> >>>>> I am loading yago2s in BigData's default namespace "kb". I am loading >>>>> yago2s using update by specifying the file path there. While Bigdata is >>>>> loading yago I notice that it consumes a significant amount of CPU and RAM >>>>> for 4-5 hours, but after that it stops using RAM. But my dilemma is that >>>>> BigData workbench still keeps on showing "Running update.." although >>>>> BigData does not consume any RAM or CPU for the next 48 hours or so (In >>>>> fact it keeps showing "Running update.." until I kill the process). 
Can you >>>>> please suggest as to where am I going wrong as after killing the process >>>>> BigData is not able to retrieve any tuples (and shows 0 results even for >>>>> the query select ?a?b?c where{?a ?b ?c}) >>>>> >>>>> >>>>> Also I am using BigData on a server with 16 cores and 64 GB RAM? >>>>> >>>>> Any help in this regard will be deeply appreciated. >>>>> >>>>> Cheers, >>>>> Maria >>>>> >>>> >>>> >>>> -- >>>> ---- >>>> Bryan Thompson >>>> Chief Scientist & Founder >>>> SYSTAP, LLC >>>> 4501 Tower Road >>>> Greensboro, NC 27410 >>>> br...@sy... >>>> http://bigdata.com >>>> http://mapgraph.io >>>> >>>> CONFIDENTIALITY NOTICE: This email and its contents and attachments >>>> are for the sole use of the intended recipient(s) and are confidential or >>>> proprietary to SYSTAP. Any unauthorized review, use, disclosure, >>>> dissemination or copying of this email or its contents or attachments is >>>> prohibited. If you have received this communication in error, please notify >>>> the sender by reply email and permanently delete all copies of the email >>>> and its contents and attachments. >>>> >>>> >>> >> >> -- >> ---- >> Bryan Thompson >> Chief Scientist & Founder >> SYSTAP, LLC >> 4501 Tower Road >> Greensboro, NC 27410 >> br...@sy... >> http://bigdata.com >> http://mapgraph.io >> >> CONFIDENTIALITY NOTICE: This email and its contents and attachments are >> for the sole use of the intended recipient(s) and are confidential or >> proprietary to SYSTAP. Any unauthorized review, use, disclosure, >> dissemination or copying of this email or its contents or attachments is >> prohibited. If you have received this communication in error, please notify >> the sender by reply email and permanently delete all copies of the email >> and its contents and attachments. >> >> > |
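Bryan's suggestion above, to upload a very large dataset in a series of smaller chunks rather than one multi-hour update, can be scripted against the REST API. This is a sketch under assumptions: the endpoint URL, the `kb` namespace, the `text/plain` (N-Triples) content type, and the chunk size are illustrative; N-Triples is chosen because each line is a self-contained statement, so a file can be split at any line boundary.

```python
# Hedged sketch: load a large N-Triples file into Bigdata in fixed-size
# chunks of complete statements, so a failure costs one chunk, not the load.
import itertools
import urllib.request

ENDPOINT = "http://localhost:9999/bigdata/namespace/kb/sparql"  # assumed

def chunked(lines, size):
    """Yield successive lists of at most `size` items from an iterable."""
    it = iter(lines)
    while True:
        batch = list(itertools.islice(it, size))
        if not batch:
            return
        yield batch

def load_in_chunks(path, size=100_000, endpoint=ENDPOINT):
    """POST each chunk of N-Triples lines to the endpoint and report status."""
    with open(path, encoding="utf-8") as f:
        for i, batch in enumerate(chunked(f, size)):
            req = urllib.request.Request(
                endpoint,
                data="".join(batch).encode("utf-8"),
                headers={"Content-Type": "text/plain"},  # assumed N-Triples MIME
            )
            with urllib.request.urlopen(req) as resp:
                print(f"chunk {i}: HTTP {resp.status}")

# Usage (requires a running server and an N-Triples file):
#   load_in_chunks("yago2s.nt")
```

Printing the HTTP status per chunk also gives the progress feedback the workbench's "Running update.." spinner does not.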
From: Bryan T. <br...@sy...> - 2014-10-17 08:47:32
|
You would need to add timers into the AST2BopUtility.convert() method. Bryan On Thursday, October 16, 2014, Jyoti Leeka <jy...@ii...> wrote: > Dear All, > > I loaded the following data into BigData in RDR (by creating namespace > in mode RDR) using BigData workbench > (http://www.bigdata.com/download): > > @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . > <x> rdf:type rdf:Statement . > <x> rdf:subject <Tolkien> . > <x> rdf:predicate <wrote> . > <x> rdf:object <LordOfTheRings> . > <x> <said> <Wikipedia> . > > After which I queried bigdata using: > select ?tolkien ?rings where { ?x > <http://www.w3.org/1999/02/22-rdf-syntax-ns#subject> ?tolkien. ?x > <http://www.w3.org/1999/02/22-rdf-syntax-ns#predicate> <wrote>. ?x > <http://www.w3.org/1999/02/22-rdf-syntax-ns#object> ?rings. ?x <said> > <Wikipedia>. } > > Bigdata workbench reported the execution time of this query, but I am > also interested in getting the detailed breakdown i.e. query optimizer > cost and query runtime cost. Is it possible to get such a detailed > breakdown. If yes, how? > > Thanks & Regards, > Jyoti Leeka > PhD Student > IIIT-Delhi > India > > > ------------------------------------------------------------------------------ > Comprehensive Server Monitoring with Site24x7. > Monitor 10 servers for $9/Month. > Get alerted through email, SMS, voice calls or mobile push notifications. > Take corrective actions from your mobile device. > http://p.sf.net/sfu/Zoho > _______________________________________________ > Bigdata-developers mailing list > Big...@li... <javascript:;> > https://lists.sourceforge.net/lists/listinfo/bigdata-developers > -- ---- Bryan Thompson Chief Scientist & Founder SYSTAP, LLC 4501 Tower Road Greensboro, NC 27410 br...@sy... http://bigdata.com http://mapgraph.io CONFIDENTIALITY NOTICE: This email and its contents and attachments are for the sole use of the intended recipient(s) and are confidential or proprietary to SYSTAP. 
Any unauthorized review, use, disclosure, dissemination or copying of this email or its contents or attachments is prohibited. If you have received this communication in error, please notify the sender by reply email and permanently delete all copies of the email and its contents and attachments. |
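Short of adding timers inside AST2BopUtility.convert() as Bryan suggests, a coarse client-side breakdown can be had by timing the HTTP round trip and asking the server for its query explanation. This is a sketch: the endpoint URL is illustrative, and the `explain` request parameter is an assumption to be checked against the NanoSparqlServer documentation for your release.

```python
# Hedged sketch: time a query end-to-end and optionally request the server's
# explain view. The "explain" parameter name is an assumption; consult the
# NanoSparqlServer wiki for the exact parameter your version supports.
import time
import urllib.parse
import urllib.request

ENDPOINT = "http://localhost:9999/bigdata/namespace/kb/sparql"  # assumed

def build_params(query: str, explain: bool = False) -> dict:
    """Request parameters for a query, optionally asking for the explain view."""
    params = {"query": query}
    if explain:
        params["explain"] = "true"  # assumed parameter name
    return params

def timed_query(query: str, endpoint: str = ENDPOINT, explain: bool = False):
    """Return (elapsed_seconds, response_body) for one query round trip."""
    data = urllib.parse.urlencode(build_params(query, explain)).encode("utf-8")
    start = time.perf_counter()
    with urllib.request.urlopen(urllib.request.Request(endpoint, data=data)) as resp:
        body = resp.read()
    return time.perf_counter() - start, body

# Usage (requires a running server):
#   elapsed, plan = timed_query("SELECT * WHERE { ?s ?p ?o } LIMIT 1", explain=True)
```

Note that round-trip timing lumps optimizer and runtime cost together; truly separating the two requires server-side instrumentation, as Bryan says.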
From: Jyoti L. <jy...@ii...> - 2014-10-17 01:16:20
|
Dear All, I loaded the following data into BigData in RDR (by creating namespace in mode RDR) using BigData workbench (http://www.bigdata.com/download): @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . <x> rdf:type rdf:Statement . <x> rdf:subject <Tolkien> . <x> rdf:predicate <wrote> . <x> rdf:object <LordOfTheRings> . <x> <said> <Wikipedia> . After which I queried bigdata using: select ?tolkien ?rings where { ?x <http://www.w3.org/1999/02/22-rdf-syntax-ns#subject> ?tolkien. ?x <http://www.w3.org/1999/02/22-rdf-syntax-ns#predicate> <wrote>. ?x <http://www.w3.org/1999/02/22-rdf-syntax-ns#object> ?rings. ?x <said> <Wikipedia>. } Bigdata workbench reported the execution time of this query, but I am also interested in getting the detailed breakdown i.e. query optimizer cost and query runtime cost. Is it possible to get such a detailed breakdown. If yes, how? Thanks & Regards, Jyoti Leeka PhD Student IIIT-Delhi India |
From: Maria J. <mar...@gm...> - 2014-10-16 22:40:01
|
Dear Bryan, Just to confirm... RDR option in update window is disabled when we load a large file (i.e. specify its path). Is it ok to only specify the namespace as RDR when loading reified triples? Also can we query reified triples in Bigdata using SPARQL* or will the plain old SPARQL with reification work? Cheers, Maria On Fri, Oct 17, 2014 at 4:04 AM, Bryan Thompson <br...@sy...> wrote: > The database instance needs to be in the mode that supports RDR. This is > called the "statement identifiers" mode internally for historical reasons. > Today the feature is implemented using inlining. However, the details of > the implementation mechanism should be invisible. That is the whole point > of the RDR model. > > The RDR mode will support efficient reified triples, inference (if > enabled), and triples. We will be introducing RDR support for quads but > it is not yet in the released platform. The workbench might label the > option differently. Choose to create a new namespace. Then choose the > mode that supports triples with statement level provenance / RDR / > statement identifiers. The default mode for the NSS in the WAR deployment > is, I believe, quads. > > There are two new papers that might be of interest to you linked from > http://blog.bigdata.com. They are on RDF*/SPARQL* (a formal treatment of > RDR) and on a reconciliation of RDF and property graphs based on the RDR > model. Many thanks to Olaf Hartig for his efforts on these papers. > > Thanks, > Bryan > > > On Thursday, October 16, 2014, Maria Jackson <mar...@gm...> > wrote: > >> Dear Bryan, >> >> I need to load reified triples. The option of RDR in update window is >> disabled when we specify the path. By this I mean how do we tell BigData >> that we are entering reified triples during update. I am entering reified >> triples in BigData according to W3C standard ( >> http://www.w3.org/TR/2004/REC-rdf-primer-20040210/#reification). 
Also >> do we also need to specify the present namespace to be RDR instead of >> triples. >> >> Another question that I have is how can we query reified triples using >> SPARQL. Should I use the query reification vocabulary defined here >> http://www.w3.org/2001/sw/DataAccess/rq23/#queryReification or do you >> have your own vocabulary for querying RDR data (containing reified triples). >> >> >> Cheers, >> Maria. >> >> On Thu, Oct 16, 2014 at 3:55 PM, Bryan Thompson <br...@sy...> wrote: >> >>> There is a distinction between the workbench (JavaScript in the browser) >>> and the database process (java running inside of a servlet container in >>> this case). In anomalous conditions the workbench might not correctly >>> track what is happening on the database side. I suggest that you check the >>> database log output and see what messages were generated during that time. >>> I suspect that you might have something like a "GC Overhead limit >>> exceeded", which is a type of out of memory exception for java where too >>> much of the total time is spent in garbage collection. Or perhaps some >>> other root cause that abnormally terminated the update request in a manner >>> that the workbench was unable to identify. >>> >>> If the update failed, then the database will not contain any triples. >>> If you are trying to load a very large dataset it may make sense to upload >>> the data in a series of smaller chunks. >>> >>> There is a "monitor" option that will show you the status of the update >>> requests as they are being processed. When loading large files it will >>> echo back on the HTTP connection a summary of the number of statements >>> loaded over time during the load. This will provide you with better >>> feedback. >>> >>> But I think that you have an error condition on the server that has >>> halted the load. 
>>> >>> Thanks, >>> Bryan >>> >>> On Wednesday, October 15, 2014, Maria Jackson < >>> mar...@gm...> wrote: >>> >>>> Dear All, >>>> >>>> I am trying to load yago2s 18.5GB ( >>>> http://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/yago-naga/yago/downloads/) >>>> in Bigdata. I downloaded bigdata from http://www.bigdata.com/download and >>>> I am using Bigdata workbench via http://localhost:9999. >>>> >>>> I am loading yago2s in BigData's default namespace "kb". I am loading >>>> yago2s using update by specifying the file path there. While Bigdata is >>>> loading yago I notice that it consumes a significant amount of CPU and RAM >>>> for 4-5 hours, but after that it stops using RAM. But my dilemma is that >>>> BigData workbench still keeps on showing "Running update.." although >>>> BigData does not consume any RAM or CPU for the next 48 hours or so (In >>>> fact it keeps showing "Running update.." until I kill the process). Can you >>>> please suggest as to where am I going wrong, as after killing the process >>>> BigData is not able to retrieve any tuples (and shows 0 results even for >>>> the query select ?a ?b ?c where { ?a ?b ?c }) >>>> >>>> >>>> Also I am using BigData on a server with 16 cores and 64 GB RAM. >>>> >>>> Any help in this regard will be deeply appreciated. >>>> >>>> Cheers, >>>> Maria >>>> >>> >>> >>> -- >>> ---- >>> Bryan Thompson >>> Chief Scientist & Founder >>> SYSTAP, LLC >>> 4501 Tower Road >>> Greensboro, NC 27410 >>> br...@sy...
>>> http://bigdata.com >>> http://mapgraph.io >>> >>> CONFIDENTIALITY NOTICE: This email and its contents and attachments >>> are for the sole use of the intended recipient(s) and are confidential or >>> proprietary to SYSTAP. Any unauthorized review, use, disclosure, >>> dissemination or copying of this email or its contents or attachments is >>> prohibited. If you have received this communication in error, please notify >>> the sender by reply email and permanently delete all copies of the email >>> and its contents and attachments. >>> >>> >> > > -- > ---- > Bryan Thompson > Chief Scientist & Founder > SYSTAP, LLC > 4501 Tower Road > Greensboro, NC 27410 > br...@sy... > http://bigdata.com > http://mapgraph.io > > CONFIDENTIALITY NOTICE: This email and its contents and attachments are > for the sole use of the intended recipient(s) and are confidential or > proprietary to SYSTAP. Any unauthorized review, use, disclosure, > dissemination or copying of this email or its contents or attachments is > prohibited. If you have received this communication in error, please notify > the sender by reply email and permanently delete all copies of the email > and its contents and attachments. > > |
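The RDR-mode namespace Bryan describes (the internal "statement identifiers" mode) can also be created over the multi-tenancy REST API instead of the workbench's NAMESPACES tab. This is a sketch under assumptions: the `/bigdata/namespace` URL and the exact property names are illustrative and should be verified against the wiki for your release.

```python
# Hedged sketch: create a new namespace in RDR ("statement identifiers") mode
# by POSTing a Java-properties payload to the multi-tenancy API. The URL and
# property names are assumptions; verify them against the wiki for your build.
import urllib.request

CREATE_URL = "http://localhost:9999/bigdata/namespace"  # assumed

def rdr_namespace_properties(name: str) -> str:
    """Java-properties payload selecting triples + statement identifiers (RDR)."""
    return (
        f"com.bigdata.rdf.sail.namespace={name}\n"
        "com.bigdata.rdf.store.AbstractTripleStore.quads=false\n"
        "com.bigdata.rdf.store.AbstractTripleStore.statementIdentifiers=true\n"
    )

def create_rdr_namespace(name: str) -> int:
    """POST the properties file; a 2xx status indicates the namespace exists."""
    req = urllib.request.Request(
        CREATE_URL,
        data=rdr_namespace_properties(name).encode("utf-8"),
        headers={"Content-Type": "text/plain"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Usage (requires a running server):
#   create_rdr_namespace("rdrtest")
```

As Bryan notes, the mode is durable and can only be set when the namespace is created, so an existing triples-mode namespace cannot be switched to RDR after the fact.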
From: Bryan T. <br...@sy...> - 2014-10-16 22:34:36
|
The database instance needs to be in the mode that supports RDR. This is called the "statement identifiers" mode internally for historical reasons. Today the feature is implemented using inlining. However, the details of the implementation mechanism should be invisible. That is the whole point of the RDR model. The RDR mode will support efficient reified triples, inference (if enabled), and triples. We will be introducing RDR support for quads but it is not yet in the released platform. The workbench might label the option differently. Choose to create a new namespace. Then choose the mode that supports triples with statement level provenance / RDR / statement identifiers. The default mode for the NSS in the WAR deployment is, I believe, quads. There are two new papers that might be of interest to you linked from http://blog.bigdata.com. They are on RDF*/SPARQL* (a formal treatment of RDR) and on a reconciliation of RDF and property graphs based on the RDR model. Many thanks to Olaf Hartig for his efforts on these papers. Thanks, Bryan On Thursday, October 16, 2014, Maria Jackson <mar...@gm...> wrote: > Dear Bryan, > > I need to load reified triples. The option of RDR in update window is > disabled when we specify the path. By this I mean how do we tell BigData > that we are entering reified triples during update. I am entering reified > triples in BigData according to W3C standard ( > http://www.w3.org/TR/2004/REC-rdf-primer-20040210/#reification). Also do > we also need to specify the present namespace to be RDR instead of triples. > > Another question that I have is how can we query reified triples using > SPARQL. Should I use the query reification vocabulary defined here > http://www.w3.org/2001/sw/DataAccess/rq23/#queryReification or do you > have your own vocabulary for querying RDR data (containing reified triples). > > > Cheers, > Maria. > > On Thu, Oct 16, 2014 at 3:55 PM, Bryan Thompson <br...@sy... 
> <javascript:_e(%7B%7D,'cvml','br...@sy...');>> wrote: > >> There is a distinction between the workbench (JavaScript in the browser) >> and the database process (java running inside of a servlet container in >> this case). In anomalous conditions the workbench might not correctly >> track what is happening on the database side. I suggest that you check the >> database log output and see what messages were generated during that time. >> I suspect that you might have something like a "GC Overhead limit >> exceeded", which is a type of out of memory exception for java where too >> much of the total time is spent in garbage collection. Or perhaps some >> other root cause that abnormally terminated the update request in a manner >> that the workbench was unable to identify. >> >> If the update failed, then the database will not contain any triples. If >> you are trying to load a very large dataset it may make sense to upload the >> data in a series of smaller chunks. >> >> There is a "monitor" option that will show you the status of the update >> requests as they are being processed. When loading large files it will >> echo back on the HTTP connection a summary of the number of statements >> loaded over time during the load. This will provide you with better >> feedback. >> >> But I think that you have an error condition on the server that has >> halted the load. >> >> Thansk, >> Bryan >> >> >> On Wednesday, October 15, 2014, Maria Jackson < >> mar...@gm... >> <javascript:_e(%7B%7D,'cvml','mar...@gm...');>> wrote: >> >>> Dear All, >>> >>> I am trying to load yago2s 18.5GB ( >>> http://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/yago-naga/yago/downloads/ >>> <https://contactmonkey.com/api/v1/tracker?cm_session=4d54369b-9f5b-4f3b-ae2d-5c05ba2939a0&cm_type=link&cm_link=36eb659b-7a36-459f-95da-e6d711aec4d0&cm_destination=http://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/yago-naga/yago/downloads/>) >>> in Bigdata. 
I downloaded bigdata from http://www.bigdata.com/download >>> <https://contactmonkey.com/api/v1/tracker?cm_session=4d54369b-9f5b-4f3b-ae2d-5c05ba2939a0&cm_type=link&cm_link=0b0bc5d8-f7fe-46b3-b416-31eb502201c4&cm_destination=http://www.bigdata.com/download> and >>> I am using Bigdata workbench via http://localhost:9999. >>> >>> I am loading yago2s in BigData's default namespace "kb". I am loading >>> yago2s using update by specifying the file path there. While Bigdata is >>> loading yago I notice that it consumes a significant amount of CPU and RAM >>> for 4-5 hours, but after that it stops using RAM. But my dilemma is that >>> BigData workbench still keeps on showing "Running update.." although >>> BigData does not consume any RAM or CPU for the next 48 hours or so (In >>> fact it keeps showing "Running update.." until I kill the process). Can you >>> please suggest as to where am I going wrong as after killing the process >>> BigData is not able to retrieve any tuples (and shows 0 results even for >>> the query select ?a?b?c where{?a ?b ?c}) >>> >>> >>> Also I am using BigData on a server with 16 cores and 64 GB RAM? >>> >>> Any help in this regard will be deeply appreciated. >>> >>> Cheers, >>> Maria >>> >> >> >> -- >> ---- >> Bryan Thompson >> Chief Scientist & Founder >> SYSTAP, LLC >> 4501 Tower Road >> Greensboro, NC 27410 >> br...@sy... <javascript:_e(%7B%7D,'cvml','br...@sy...');> >> http://bigdata.com >> http://mapgraph.io >> >> CONFIDENTIALITY NOTICE: This email and its contents and attachments are >> for the sole use of the intended recipient(s) and are confidential or >> proprietary to SYSTAP. Any unauthorized review, use, disclosure, >> dissemination or copying of this email or its contents or attachments is >> prohibited. If you have received this communication in error, please notify >> the sender by reply email and permanently delete all copies of the email >> and its contents and attachments. 
>> >> > -- ---- Bryan Thompson Chief Scientist & Founder SYSTAP, LLC 4501 Tower Road Greensboro, NC 27410 br...@sy... http://bigdata.com http://mapgraph.io CONFIDENTIALITY NOTICE: This email and its contents and attachments are for the sole use of the intended recipient(s) and are confidential or proprietary to SYSTAP. Any unauthorized review, use, disclosure, dissemination or copying of this email or its contents or attachments is prohibited. If you have received this communication in error, please notify the sender by reply email and permanently delete all copies of the email and its contents and attachments. |
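The W3C reification vocabulary discussed in this thread turns one statement into four explicit triples, and querying it back amounts to a three-way join on rdf:subject / rdf:predicate / rdf:object. A minimal sketch of that join in plain Python (node names and the in-memory triple set are invented for illustration; Bigdata's RDR mode stores the same information in a compact inline form rather than as four triples):

```python
# One statement reified per the W3C vocabulary: node :A describes (:B, :C, :D).
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
triples = {
    (":A", RDF + "type", RDF + "Statement"),
    (":A", RDF + "subject", ":B"),
    (":A", RDF + "predicate", ":C"),
    (":A", RDF + "object", ":D"),
}

def reified_statements(triples):
    """Equivalent of: select ?a ?b ?c where { ?d rdf:subject ?a .
    ?d rdf:predicate ?b . ?d rdf:object ?c } -- join the three maps on ?d."""
    subj = {d: v for d, p, v in triples if p == RDF + "subject"}
    pred = {d: v for d, p, v in triples if p == RDF + "predicate"}
    obj = {d: v for d, p, v in triples if p == RDF + "object"}
    return [(subj[d], pred[d], obj[d])
            for d in subj.keys() & pred.keys() & obj.keys()]

print(reified_statements(triples))  # [(':B', ':C', ':D')]
```

A SPARQL engine evaluates exactly this join over its indices, which is part of why explicit reification is so much more expensive than an inline statement-identifier representation.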
From: Maria J. <mar...@gm...> - 2014-10-16 22:14:57
|
Dear Bryan, I need to load reified triples. The option of RDR in the update window is disabled when we specify the path. By this I mean: how do we tell BigData that we are entering reified triples during update? I am entering reified triples in BigData according to the W3C standard ( http://www.w3.org/TR/2004/REC-rdf-primer-20040210/#reification). Also, do we need to specify the present namespace to be RDR instead of triples? Another question that I have is how we can query reified triples using SPARQL. Should I use the query reification vocabulary defined here http://www.w3.org/2001/sw/DataAccess/rq23/#queryReification or do you have your own vocabulary for querying RDR data (containing reified triples)? Cheers, Maria. On Thu, Oct 16, 2014 at 3:55 PM, Bryan Thompson <br...@sy...> wrote: > There is a distinction between the workbench (JavaScript in the browser) > and the database process (java running inside of a servlet container in > this case). In anomalous conditions the workbench might not correctly > track what is happening on the database side. I suggest that you check the > database log output and see what messages were generated during that time. > I suspect that you might have something like a "GC Overhead limit > exceeded", which is a type of out of memory exception for java where too > much of the total time is spent in garbage collection. Or perhaps some > other root cause that abnormally terminated the update request in a manner > that the workbench was unable to identify. > > If the update failed, then the database will not contain any triples. If > you are trying to load a very large dataset it may make sense to upload the > data in a series of smaller chunks. > > There is a "monitor" option that will show you the status of the update > requests as they are being processed. When loading large files it will > echo back on the HTTP connection a summary of the number of statements > loaded over time during the load. 
This will provide you with better > feedback. > > But I think that you have an error condition on the server that has halted > the load. > > Thansk, > Bryan > > > On Wednesday, October 15, 2014, Maria Jackson <mar...@gm...> > wrote: > >> Dear All, >> >> I am trying to load yago2s 18.5GB ( >> http://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/yago-naga/yago/downloads/ >> <https://contactmonkey.com/api/v1/tracker?cm_session=4d54369b-9f5b-4f3b-ae2d-5c05ba2939a0&cm_type=link&cm_link=36eb659b-7a36-459f-95da-e6d711aec4d0&cm_destination=http://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/yago-naga/yago/downloads/>) >> in Bigdata. I downloaded bigdata from http://www.bigdata.com/download >> <https://contactmonkey.com/api/v1/tracker?cm_session=4d54369b-9f5b-4f3b-ae2d-5c05ba2939a0&cm_type=link&cm_link=0b0bc5d8-f7fe-46b3-b416-31eb502201c4&cm_destination=http://www.bigdata.com/download> and >> I am using Bigdata workbench via http://localhost:9999. >> >> I am loading yago2s in BigData's default namespace "kb". I am loading >> yago2s using update by specifying the file path there. While Bigdata is >> loading yago I notice that it consumes a significant amount of CPU and RAM >> for 4-5 hours, but after that it stops using RAM. But my dilemma is that >> BigData workbench still keeps on showing "Running update.." although >> BigData does not consume any RAM or CPU for the next 48 hours or so (In >> fact it keeps showing "Running update.." until I kill the process). Can you >> please suggest as to where am I going wrong as after killing the process >> BigData is not able to retrieve any tuples (and shows 0 results even for >> the query select ?a?b?c where{?a ?b ?c}) >> >> >> Also I am using BigData on a server with 16 cores and 64 GB RAM? >> >> Any help in this regard will be deeply appreciated. 
>> >> Cheers, >> Maria >> > > > -- > ---- > Bryan Thompson > Chief Scientist & Founder > SYSTAP, LLC > 4501 Tower Road > Greensboro, NC 27410 > br...@sy... > http://bigdata.com > http://mapgraph.io > > CONFIDENTIALITY NOTICE: This email and its contents and attachments are > for the sole use of the intended recipient(s) and are confidential or > proprietary to SYSTAP. Any unauthorized review, use, disclosure, > dissemination or copying of this email or its contents or attachments is > prohibited. If you have received this communication in error, please notify > the sender by reply email and permanently delete all copies of the email > and its contents and attachments. > > |
From: Bryan T. <br...@sy...> - 2014-10-16 10:25:22
|
There is a distinction between the workbench (JavaScript in the browser) and the database process (Java running inside of a servlet container in this case). In anomalous conditions the workbench might not correctly track what is happening on the database side. I suggest that you check the database log output and see what messages were generated during that time. I suspect that you might have something like a "GC Overhead limit exceeded", which is a type of out of memory exception for Java where too much of the total time is spent in garbage collection. Or perhaps some other root cause that abnormally terminated the update request in a manner that the workbench was unable to identify. If the update failed, then the database will not contain any triples. If you are trying to load a very large dataset it may make sense to upload the data in a series of smaller chunks. There is a "monitor" option that will show you the status of the update requests as they are being processed. When loading large files it will echo back on the HTTP connection a summary of the number of statements loaded over time during the load. This will provide you with better feedback. But I think that you have an error condition on the server that has halted the load. Thanks, Bryan On Wednesday, October 15, 2014, Maria Jackson <mar...@gm...> wrote: > Dear All, > > I am trying to load yago2s 18.5GB ( > http://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/yago-naga/yago/downloads/) > in Bigdata. I downloaded bigdata from http://www.bigdata.com/download and > I am using the Bigdata workbench via http://localhost:9999. > > I am loading yago2s in BigData's default namespace "kb". I am loading > yago2s using update by specifying the file path there. While Bigdata is > loading yago I notice that it consumes a significant amount of CPU and RAM > for 4-5 hours, but after that it stops using RAM. But my dilemma is that > the BigData workbench still keeps on showing "Running update.." although > BigData does not consume any RAM or CPU for the next 48 hours or so (in > fact it keeps showing "Running update.." until I kill the process). Can you > please suggest where I am going wrong, as after killing the process > BigData is not able to retrieve any tuples (and shows 0 results even for > the query select ?a ?b ?c where { ?a ?b ?c }) > > > Also I am using BigData on a server with 16 cores and 64 GB RAM. > > Any help in this regard will be deeply appreciated. > > Cheers, > Maria > -- ---- Bryan Thompson Chief Scientist & Founder SYSTAP, LLC 4501 Tower Road Greensboro, NC 27410 br...@sy... http://bigdata.com http://mapgraph.io CONFIDENTIALITY NOTICE: This email and its contents and attachments are for the sole use of the intended recipient(s) and are confidential or proprietary to SYSTAP. Any unauthorized review, use, disclosure, dissemination or copying of this email or its contents or attachments is prohibited. If you have received this communication in error, please notify the sender by reply email and permanently delete all copies of the email and its contents and attachments. |
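The advice above to upload a very large dataset in a series of smaller chunks can be scripted when the input is line-oriented N-Triples (one triple per line). A minimal sketch of such a splitter — the function name and chunk-naming scheme are invented here, and this deliberately does not handle general Turtle, where `@prefix` declarations and multi-line syntax make naive line splitting unsafe:

```python
import os

def split_ntriples(path, out_dir, lines_per_chunk=1_000_000):
    """Split a one-triple-per-line N-Triples file into smaller files,
    each small enough to load as a separate update request."""
    os.makedirs(out_dir, exist_ok=True)
    chunk_paths, out, count = [], None, 0
    with open(path, encoding="utf-8") as src:
        for line in src:
            # Start a new chunk file at the beginning and at every boundary.
            if out is None or count >= lines_per_chunk:
                if out is not None:
                    out.close()
                name = os.path.join(out_dir, "chunk%04d.nt" % len(chunk_paths))
                out = open(name, "w", encoding="utf-8")
                chunk_paths.append(name)
                count = 0
            out.write(line)
            count += 1
    if out is not None:
        out.close()
    return chunk_paths
```

Each chunk can then be submitted as its own update request, so a server-side failure costs only one chunk's worth of work instead of the whole multi-hour load.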
From: Maria J. <mar...@gm...> - 2014-10-16 00:26:55
|
Dear All, I am trying to load yago2s 18.5GB ( http://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/yago-naga/yago/downloads/) in Bigdata. I downloaded bigdata from http://www.bigdata.com/download and I am using the Bigdata workbench via http://localhost:9999. I am loading yago2s in BigData's default namespace "kb". I am loading yago2s using update by specifying the file path there. While Bigdata is loading yago I notice that it consumes a significant amount of CPU and RAM for 4-5 hours, but after that it stops using RAM. But my dilemma is that the BigData workbench still keeps on showing "Running update.." although BigData does not consume any RAM or CPU for the next 48 hours or so (in fact it keeps showing "Running update.." until I kill the process). Can you please suggest where I am going wrong, as after killing the process BigData is not able to retrieve any tuples (and shows 0 results even for the query select ?a ?b ?c where { ?a ?b ?c }). Also I am using BigData on a server with 16 cores and 64 GB RAM. Any help in this regard will be deeply appreciated. Cheers, Maria |
From: Rose B. <ros...@gm...> - 2014-10-14 01:06:10
|
Dear Bryan, Just to get an estimate, can you please help me understand how large a dataset needs good hardware. It will be great if you can help me quantify this, as I need to tell my manager the amount of resources I'll need for good performance (like is 10GB large or is 100GB large)? Also I intend to store the unused databases on NAS/SAN. Is there some way by which I may keep only the used databases on disk and keep the unused ones on SAN/NAS? Also I have loaded Uniprot in reified triples form, i.e. <A> rdf:type rdf:Statement. <A> rdf:subject <B>. <A> rdf:predicate <C>. <A> rdf:object <D>. Now I am not sure how I should query it: should I use SPARQL queries of the form: select ?a ?b ?c where { ?d rdf:subject ?a . ?d rdf:predicate ?b . ?d rdf:object ?c } Thanks a lot for your help thus far. Thanks & Regards, Jyoti On Tue, Oct 14, 2014 at 3:31 AM, Bryan Thompson <br...@sy...> wrote: > We do not break out the query optimizer costs vs the query execution costs. > The NSS explain view provides a detailed breakdown of the query runtime > costs, but does not show the optimizer cost component. The optimizer is > quite fast and is an overhead mainly for low latency queries. For complex > or long running queries the cost disappears into the cost of the query > evaluation. > > These are large data sets. You need a machine with sufficient resources. > SSD. 32G RAM+. 8 cores or more. If you do not have enough hardware you > will not get a good result. Fast disk is essential for graph databases. > > Thanks, > Bryan > > > On Monday, October 13, 2014, Rose Beck <ros...@gm...> wrote: >> >> Dear Bryan, >> >> Thanks for the help thus far. I have a small question: I intend to >> load different datasets like dbpedia and uniprot into Bigdata. I don't >> have enough disk space so when I am processing one dataset I intend to >> keep the other dataset on disk e.g. when I am executing queries on >> dbpedia I intend to store the uniprot database in my hard drive so that I >> don't have to load Uniprot again and again into bigdata. Is there some >> way by which I may achieve this? I am using the bigdata workbench >> in my browser using http://localhost:9999. >> >> Also I need to report the following timings to my company: >> 1. Query execution time excluding plan generation time. >> 2. Query execution time + Plan generation time >> 3. Just query execution time excluding dictionary lookup time. >> >> Can you please help me as to where I can get these timings in >> Bigdata. >> >> Thanks & Regards, >> Jyoti >> >> >> On Mon, Oct 6, 2014 at 11:02 PM, Bryan Thompson <br...@sy...> wrote: >> > Rose, >> > >> > The trunk is no longer used for bigdata. >> > >> > You can checkout the 1.3.2 release from: >> > https://svn.code.sf.net/p/bigdata/code/tags/BIGDATA_RELEASE_1_3_2 >> > >> > The 1.3.x maintenance and development branch is: >> > https://svn.code.sf.net/p/bigdata/code/branches/BIGDATA_RELEASE_1_3_0 >> > >> > You can also download bigdata from our web site: >> > http://www.bigdata.com/download. >> > >> > Be sure to give the Java platform sufficient resources if you are >> > loading >> > large files (-server, -Xmx16G or better, etc.). See wiki.bigdata.com >> > for >> > various information on how to obtain, build, configure, and use the >> > bigdata >> > platform. >> > >> > Thanks, >> > Bryan >> > >> > >> > ---- >> > Bryan Thompson >> > Chief Scientist & Founder >> > SYSTAP, LLC >> > 4501 Tower Road >> > Greensboro, NC 27410 >> > br...@sy... >> > http://bigdata.com >> > http://mapgraph.io >> > >> > CONFIDENTIALITY NOTICE: This email and its contents and attachments are >> > for >> > the sole use of the intended recipient(s) and are confidential or >> > proprietary to SYSTAP. Any unauthorized review, use, disclosure, >> > dissemination or copying of this email or its contents or attachments is >> > prohibited. 
If you have received this communication in error, please >> > notify >> > the sender by reply email and permanently delete all copies of the email >> > and >> > its contents and attachments. >> > >> > >> > On Mon, Oct 6, 2014 at 1:09 PM, Rose Beck <ros...@gm...> wrote: >> >> >> >> Dear Bryan, >> >> >> >> I have Uniprot data in turse triple form (ttl) and I want to load it >> >> in bigdata and I want to query it using bigdata. On the web page >> >> http://wiki.bigdata.com/wiki/index.php/NanoSparqlServer it is >> >> mentioned that I should look at the example: NSSEmbeddedExample.java. >> >> However when I check out the code: >> >> svn checkout svn://svn.code.sf.net/p/bigdata/code/trunk bigdata-code >> >> >> >> I am unable to find this example. >> >> >> >> Can you please help me with this. I am a complete novice at Java >> >> (Although I work on C/C++ and Python extensively), therefore I am not >> >> able to understand as to how should I load large datasets into Bigdata >> >> and how should I query them. Perhaps a step-by-step guide for users >> >> like me who are not familiar with Java will be of great help. >> >> >> >> Thanks & Regards, >> >> Jyoti > > > > -- > ---- > Bryan Thompson > Chief Scientist & Founder > SYSTAP, LLC > 4501 Tower Road > Greensboro, NC 27410 > br...@sy... > http://bigdata.com > http://mapgraph.io > > CONFIDENTIALITY NOTICE: This email and its contents and attachments are for > the sole use of the intended recipient(s) and are confidential or > proprietary to SYSTAP. Any unauthorized review, use, disclosure, > dissemination or copying of this email or its contents or attachments is > prohibited. If you have received this communication in error, please notify > the sender by reply email and permanently delete all copies of the email and > its contents and attachments. > > |
From: Bryan T. <br...@sy...> - 2014-10-13 22:01:39
|
We do not break out the query optimizer costs vs the query execution costs. The NSS explain view provides a detailed breakdown of the query runtime costs, but does not show the optimizer cost component. The optimizer is quite fast and is an overhead mainly for low latency queries. For complex or long running queries the cost disappears into the cost of the query evaluation. These are large data sets. You need a machine with sufficient resources. SSD. 32G RAM+. 8 cores or more. If you do not have enough hardware you will not get a good result. Fast disk is essential for graph databases. Thanks, Bryan On Monday, October 13, 2014, Rose Beck <ros...@gm...> wrote: > Dear Bryan, > > Thanks for the help thus far. I have a small question: I intend to > load different datasets like dbpedia and uniprot into Bigdata. I dont > have enough disk space so when I am processing one dataset I intend to > keep the other dataset on disk e.g. when I am executing queries on > dbpedia I intend to store uniprot database in my hard drive so that I > dont have to load Uniprot again and again into bigdata. Is there some > way out by which I may achieve the same? I am using bigdata workbench > in my browser using http://localhost:9999. > > Also I need to report the following timings to my company: > 1. Query execution time excluding plan generation time. > 2. Query execution time + Plan generation time > 3. Just query execution time excluding dictionary lookup time. > > Can you please help me as to from where can I get these two timings in > Bigdata. > > Thanks & Regards, > Jyoti > > > On Mon, Oct 6, 2014 at 11:02 PM, Bryan Thompson <br...@sy... > <javascript:;>> wrote: > > Rose, > > > > The trunk is no longer used for bigdata. 
> > > > You can checkout the 1.3.2 release from: > > https://svn.code.sf.net/p/bigdata/code/tags/BIGDATA_RELEASE_1_3_2 > > > > The 1.3.x maintenance and development branch is: > > https://svn.code.sf.net/p/bigdata/code/branches/BIGDATA_RELEASE_1_3_0 > > > > You can also download bigdata from our web site: > > http://www.bigdata.com/download. > > > > Be sure to given the Java platform sufficient resources if you are > loading > > large files (-server, -Xmx16G or better, etc.). See wiki.bigdata.com > for > > various information on how to obtain, build, configure, and use the > bigdata > > platform. > > > > Thanks, > > Bryan > > > > > > ---- > > Bryan Thompson > > Chief Scientist & Founder > > SYSTAP, LLC > > 4501 Tower Road > > Greensboro, NC 27410 > > br...@sy... <javascript:;> > > http://bigdata.com > > http://mapgraph.io > > > > CONFIDENTIALITY NOTICE: This email and its contents and attachments are > for > > the sole use of the intended recipient(s) and are confidential or > > proprietary to SYSTAP. Any unauthorized review, use, disclosure, > > dissemination or copying of this email or its contents or attachments is > > prohibited. If you have received this communication in error, please > notify > > the sender by reply email and permanently delete all copies of the email > and > > its contents and attachments. > > > > > > On Mon, Oct 6, 2014 at 1:09 PM, Rose Beck <ros...@gm... > <javascript:;>> wrote: > >> > >> Dear Bryan, > >> > >> I have Uniprot data in turse triple form (ttl) and I want to load it > >> in bigdata and I want to query it using bigdata. On the web page > >> http://wiki.bigdata.com/wiki/index.php/NanoSparqlServer it is > >> mentioned that I should look at the example: NSSEmbeddedExample.java. > >> However when I check out the code: > >> svn checkout svn://svn.code.sf.net/p/bigdata/code/trunk bigdata-code > >> > >> I am unable to find this example. > >> > >> Can you please help me with this. 
I am a complete novice at Java > >> (Although I work on C/C++ and Python extensively), therefore I am not > >> able to understand as to how should I load large datasets into Bigdata > >> and how should I query them. Perhaps a step-by-step guide for users > >> like me who are not familiar with Java will be of great help. > >> > >> Thanks & Regards, > >> Jyoti > -- ---- Bryan Thompson Chief Scientist & Founder SYSTAP, LLC 4501 Tower Road Greensboro, NC 27410 br...@sy... http://bigdata.com http://mapgraph.io CONFIDENTIALITY NOTICE: This email and its contents and attachments are for the sole use of the intended recipient(s) and are confidential or proprietary to SYSTAP. Any unauthorized review, use, disclosure, dissemination or copying of this email or its contents or attachments is prohibited. If you have received this communication in error, please notify the sender by reply email and permanently delete all copies of the email and its contents and attachments. |
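Since, as noted above, the optimizer cost is not reported separately from query evaluation, the only number a client can collect on its own is end-to-end wall-clock time per request. A tiny sketch of such a client-side timer (the commented-out endpoint URL and query string are assumptions for illustration, not executed here):

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn once and return (result, elapsed_seconds). Against a SPARQL
    endpoint this measures parse + optimize + evaluate + transfer together;
    the server-side components cannot be separated from the client."""
    t0 = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - t0

# Hypothetical usage against a local SPARQL endpoint (not executed here):
# import urllib.request, urllib.parse
# qs = urllib.parse.urlencode({"query": "select * where { ?s ?p ?o } limit 10"})
# body, secs = timed(urllib.request.urlopen,
#                    "http://localhost:9999/bigdata/sparql?" + qs)

# Demonstration on an ordinary function call:
result, secs = timed(sum, range(1000))
print(result)  # 499500
```

For server-side breakdowns (join times per operator, etc.), the NSS "explain" view Bryan mentions is the place to look; this helper only gives the total a client observes.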
From: Rose B. <ros...@gm...> - 2014-10-13 08:44:33
|
Dear Bryan, Thanks for the help thus far. I have a small question: I intend to load different datasets like dbpedia and uniprot into Bigdata. I don't have enough disk space, so when I am processing one dataset I intend to keep the other dataset on disk, e.g. when I am executing queries on dbpedia I intend to store the uniprot database in my hard drive so that I don't have to load Uniprot again and again into bigdata. Is there some way by which I may achieve this? I am using the bigdata workbench in my browser using http://localhost:9999. Also I need to report the following timings to my company: 1. Query execution time excluding plan generation time. 2. Query execution time + Plan generation time 3. Just query execution time excluding dictionary lookup time. Can you please help me as to where I can get these timings in Bigdata. Thanks & Regards, Jyoti On Mon, Oct 6, 2014 at 11:02 PM, Bryan Thompson <br...@sy...> wrote: > Rose, > > The trunk is no longer used for bigdata. > > You can checkout the 1.3.2 release from: > https://svn.code.sf.net/p/bigdata/code/tags/BIGDATA_RELEASE_1_3_2 > > The 1.3.x maintenance and development branch is: > https://svn.code.sf.net/p/bigdata/code/branches/BIGDATA_RELEASE_1_3_0 > > You can also download bigdata from our web site: > http://www.bigdata.com/download. > > Be sure to give the Java platform sufficient resources if you are loading > large files (-server, -Xmx16G or better, etc.). See wiki.bigdata.com for > various information on how to obtain, build, configure, and use the bigdata > platform. > > Thanks, > Bryan > > > ---- > Bryan Thompson > Chief Scientist & Founder > SYSTAP, LLC > 4501 Tower Road > Greensboro, NC 27410 > br...@sy... > http://bigdata.com > http://mapgraph.io > > CONFIDENTIALITY NOTICE: This email and its contents and attachments are for > the sole use of the intended recipient(s) and are confidential or > proprietary to SYSTAP. 
Any unauthorized review, use, disclosure, > dissemination or copying of this email or its contents or attachments is > prohibited. If you have received this communication in error, please notify > the sender by reply email and permanently delete all copies of the email and > its contents and attachments. > > On Mon, Oct 6, 2014 at 1:09 PM, Rose Beck <ros...@gm...> wrote: >> >> Dear Bryan, >> >> I have Uniprot data in Turtle triple form (ttl) and I want to load it >> in bigdata and I want to query it using bigdata. On the web page >> http://wiki.bigdata.com/wiki/index.php/NanoSparqlServer it is >> mentioned that I should look at the example: NSSEmbeddedExample.java. >> However when I check out the code: >> svn checkout svn://svn.code.sf.net/p/bigdata/code/trunk bigdata-code >> >> I am unable to find this example. >> >> Can you please help me with this. I am a complete novice at Java >> (although I work on C/C++ and Python extensively), therefore I am not >> able to understand how I should load large datasets into Bigdata >> and how I should query them. Perhaps a step-by-step guide for users >> like me who are not familiar with Java will be of great help. >> >> Thanks & Regards, >> Jyoti |