This list is closed; nobody may subscribe to it.
From: James HK <jam...@gm...> - 2015-04-30 16:11:14
Hi,
> That is an interesting exception. The UPDATE request runs fine for me
> locally against a quads mode namespace.
It is a Blazegraph instance with the kb namespace in triples mode.
> Is this error stochastic?
No, it always fails for the same test with the same error.
> you using the same namespace throughout your integration tests? That
Yes, it is defined once and used for the whole suite (same as with the
Sesame repository).
> is are there side-effects on the database or is it being cleared down
> between each test? (Dropping the namespace or creating a new database
> instance?) Are there other mutation operations that have already run
It is the same Blazegraph instance that is used for each test but data
sets are removed after each test.
> instance?) Are there other mutation operations that have already run
> in this specific test?
Not that I'm aware of. It could be that a previous test with its data
creates a side effect, but as I mentioned earlier that hasn't been an
issue before.
> It would be best if you could provide something that replicates this
> problem so we can get it into our regression test suite. This will
> also help to root cause the issue.
Before making an effort to set up Travis, I wanted to see how well it
runs compared with the other vendors. My hope was that someone had
seen this issue before, or something similar that would explain it.
The test contains property value assignments with time values of
1 January 300 BC and 2147483647 BC, which at least gives Virtuoso
trouble on the "1 January 300 BC" date.
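As a side note on those extreme time values: under the assumption that the inline `XSDLong` delegate stores milliseconds since the Unix epoch (an assumption, not confirmed from the Blazegraph sources), the colliding `new` value in the exception decodes exactly to midnight on 1 January 8452, which matches the truncated `"8452"` literal:

```python
# Decode the colliding "new" delegate from the exception, assuming it
# is milliseconds since the Unix epoch (an assumption about how
# Blazegraph inlines date/time values; not confirmed from source).
from datetime import datetime, timedelta, timezone

NEW_DELEGATE_MS = 204552172800000  # from XSDLong(204552172800000)

epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
decoded = epoch + timedelta(milliseconds=NEW_DELEGATE_MS)

print(decoded.isoformat())  # → 8452-01-01T00:00:00+00:00
```

The other delegate, `XSDLong(6017484188943806464)`, is far too large to be a plain millisecond count, which suggests the two values come from different encodings of the same literal.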
All I can provide is the SPARQL update sequence [0] for the test, up
to the point where it fails when run in isolation, and the output [1]
for "select ?s ?p ?o { <http://example.org/id/TimeDataTypeRegressionTest> ?p ?o . }".
[0] https://gist.github.com/mwjames/218ffbafa0189a5a8f93
[1] https://gist.github.com/mwjames/ca19fccb3e6808e0c887
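For anyone wanting to replay the failing update in isolation, a minimal standard-library Python sketch (the endpoint URL is taken from the report above and may need adjusting; this is not part of the original test suite):

```python
# Hedged sketch: POST a SPARQL UPDATE to a Blazegraph endpoint using
# only the standard library. Endpoint and namespace are assumptions
# taken from the report; adjust for your local instance.
import urllib.parse
import urllib.request

ENDPOINT = "http://localhost:9999/bigdata/namespace/kb/sparql"  # assumed

update = """\
PREFIX wiki: <http://example.org/id/>
DELETE { wiki:TimeDataTypeRegressionTest ?p ?o }
WHERE  { wiki:TimeDataTypeRegressionTest ?p ?o }
"""

def run_update(endpoint: str, update_str: str) -> int:
    """Submit the update as an application/x-www-form-urlencoded POST."""
    data = urllib.parse.urlencode({"update": update_str}).encode("ascii")
    req = urllib.request.Request(endpoint, data=data, method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.status  # 200 on success; the failure reported here was a 500

# run_update(ENDPOINT, update)  # requires a running Blazegraph instance
```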
Cheers
On 4/30/15, Bryan Thompson <br...@sy...> wrote:
> That is an interesting exception. The UPDATE request runs fine for me
> locally against a quads mode namespace. Is this error stochastic? Are
> you using the same namespace throughout your integration tests? That
> is are there side-effects on the database or is it being cleared down
> between each test? (Dropping the namespace or creating a new database
> instance?) Are there other mutation operations that have already run
> in this specific test?
>
> I suspect some side-effect.
>
> It would be best if you could provide something that replicates this
> problem so we can get it into our regression test suite. This will
> also help to root cause the issue.
>
> Thanks,
> Bryan
> ----
> Bryan Thompson
> Chief Scientist & Founder
> SYSTAP, LLC
> 4501 Tower Road
> Greensboro, NC 27410
> br...@sy...
> http://blazegraph.com
> http://blog.bigdata.com
> http://mapgraph.io
>
> Blazegraph™ is our ultra high-performance graph database that supports
> both RDF/SPARQL and Tinkerpop/Blueprints APIs. MapGraph™ is our
> disruptive new technology to use GPUs to accelerate data-parallel
> graph analytics.
>
> CONFIDENTIALITY NOTICE: This email and its contents and attachments
> are for the sole use of the intended recipient(s) and are confidential
> or proprietary to SYSTAP. Any unauthorized review, use, disclosure,
> dissemination or copying of this email or its contents or attachments
> is prohibited. If you have received this communication in error,
> please notify the sender by reply email and permanently delete all
> copies of the email and its contents and attachments.
>
>
> On Thu, Apr 30, 2015 at 9:18 AM, James HK <jam...@gm...>
> wrote:
>> Hi,
>>
>> When trying to run our unit test suite [0] against a preliminary/local
>> Blazegraph 1.5.1 instance the
>> following error appeared during the test run (using a vanilla
>> Blazegraph with the standard kb namespace).
>>
>> Our test suite (and the test mentioned) is run/tested against Virtuoso
>> 6.1/Fuseki 1.1.1/Sesame 2.7.14 [0] on Travis-CI; it is therefore
>> unlikely to be an issue on our side.
>>
>> 1)
>> SMW\Tests\Integration\MediaWiki\Import\TimeDataTypeTest::testImportOfDifferentDateWithAssortmentOfOutputConversion
>> SMW\SPARQLStore\Exception\BadHttpDatabaseResponseException: A SPARQL
>> query error has occurred
>> Query:
>> PREFIX wiki: <http://example.org/id/>
>> PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
>> PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
>> PREFIX owl: <http://www.w3.org/2002/07/owl#>
>> PREFIX swivt: <http://semantic-mediawiki.org/swivt/1.0#>
>> PREFIX property: <http://example.org/id/Property-3A>
>> PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
>> DELETE { wiki:TimeDataTypeRegressionTest ?p ?o } WHERE {
>> wiki:TimeDataTypeRegressionTest ?p ?o }
>> Error: Query refused
>> Endpoint: http://192.168.1.104:9999/bigdata/namespace/kb/sparql
>> HTTP response code: 500
>>
>> This translates into the following error on the Blazegraph side
>> (output from http://localhost:9999/bigdata/#update):
>>
>> ERROR: SPARQL-UPDATE: updateStr=PREFIX wiki:
>> PREFIX rdf:
>> PREFIX rdfs:
>> PREFIX owl:
>> PREFIX swivt:
>> PREFIX property:
>> PREFIX xsd:
>> DELETE { wiki:TimeDataTypeRegressionTest ?p ?o } WHERE {
>> wiki:TimeDataTypeRegressionTest ?p ?o }
>> java.util.concurrent.ExecutionException:
>> java.util.concurrent.ExecutionException:
>> org.openrdf.query.UpdateExecutionException:
>> java.lang.IllegalStateException: Already assigned:
>> old=LiteralExtensionIV [delegate=XSDLong(6017484188943806464),
>> datatype=Vocab(-42)], new=LiteralExtensionIV
>> [delegate=XSDLong(204552172800000), datatype=Vocab(-42)], this:
>> "8452"^^
>> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>> at java.util.concurrent.FutureTask.get(FutureTask.java:188)
>> at com.bigdata.rdf.sail.webapp.BigdataServlet.submitApiTask(BigdataServlet.java:261)
>> at com.bigdata.rdf.sail.webapp.QueryServlet.doSparqlUpdate(QueryServlet.java:359)
>> at com.bigdata.rdf.sail.webapp.QueryServlet.doPost(QueryServlet.java:165)
>> at com.bigdata.rdf.sail.webapp.RESTServlet.doPost(RESTServlet.java:237)
>> at com.bigdata.rdf.sail.webapp.MultiTenancyServlet.doPost(MultiTenancyServlet.java:137)
>> at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
>> at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>> at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:769)
>> at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>> at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>> at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>> at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>> at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1125)
>> at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>> at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>> at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1059)
>> at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>> at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>> at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>> at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>> at org.eclipse.jetty.server.Server.handle(Server.java:497)
>> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
>> at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:248)
>> at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>> at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:610)
>> at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:539)
>> at java.lang.Thread.run(Thread.java:744)
>> Caused by: java.util.concurrent.ExecutionException:
>> org.openrdf.query.UpdateExecutionException:
>> java.lang.IllegalStateException: Already assigned:
>> old=LiteralExtensionIV [delegate=XSDLong(6017484188943806464),
>> datatype=Vocab(-42)], new=LiteralExtensionIV
>> [delegate=XSDLong(204552172800000), datatype=Vocab(-42)], this:
>> "8452"^^
>> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>> at java.util.concurrent.FutureTask.get(FutureTask.java:188)
>> at com.bigdata.rdf.sail.webapp.QueryServlet$SparqlUpdateTask.call(QueryServlet.java:460)
>> at com.bigdata.rdf.sail.webapp.QueryServlet$SparqlUpdateTask.call(QueryServlet.java:371)
>> at com.bigdata.rdf.task.ApiTaskForIndexManager.call(ApiTaskForIndexManager.java:68)
>> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>> at com.bigdata.rdf.task.AbstractApiTask.submitApiTask(AbstractApiTask.java:365)
>> at com.bigdata.rdf.sail.webapp.BigdataServlet.submitApiTask(BigdataServlet.java:258)
>> ... 26 more
>> Caused by: org.openrdf.query.UpdateExecutionException:
>> java.lang.IllegalStateException: Already assigned:
>> old=LiteralExtensionIV [delegate=XSDLong(6017484188943806464),
>> datatype=Vocab(-42)], new=LiteralExtensionIV
>> [delegate=XSDLong(204552172800000), datatype=Vocab(-42)], this:
>> "8452"^^
>> at com.bigdata.rdf.sparql.ast.eval.ASTEvalHelper.executeUpdate(ASTEvalHelper.java:1303)
>> at com.bigdata.rdf.sail.BigdataSailUpdate.execute2(BigdataSailUpdate.java:152)
>> at com.bigdata.rdf.sail.webapp.BigdataRDFContext$UpdateTask.doQuery(BigdataRDFContext.java:1683)
>> at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.innerCall(BigdataRDFContext.java:1310)
>> at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:1275)
>> at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:517)
>> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> ... 1 more
>> Caused by: java.lang.IllegalStateException: Already assigned:
>> old=LiteralExtensionIV [delegate=XSDLong(6017484188943806464),
>> datatype=Vocab(-42)], new=LiteralExtensionIV
>> [delegate=XSDLong(204552172800000), datatype=Vocab(-42)], this:
>> "8452"^^
>> at com.bigdata.rdf.model.BigdataValueImpl.setIV(BigdataValueImpl.java:139)
>> at com.bigdata.rdf.internal.LexiconConfiguration.createInlineIV(LexiconConfiguration.java:430)
>> at com.bigdata.rdf.lexicon.LexiconRelation.getInlineIV(LexiconRelation.java:3150)
>> at com.bigdata.rdf.lexicon.LexiconRelation.addTerms(LexiconRelation.java:1719)
>> at com.bigdata.rdf.store.AbstractTripleStore.getAccessPath(AbstractTripleStore.java:2928)
>> at com.bigdata.rdf.store.AbstractTripleStore.getAccessPath(AbstractTripleStore.java:2874)
>> at com.bigdata.rdf.sail.BigdataSail$BigdataSailConnection.removeStatements(BigdataSail.java:2962)
>> at com.bigdata.rdf.sail.BigdataSail$BigdataSailConnection.removeStatements(BigdataSail.java:2865)
>> at com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate.addOrRemoveStatement(AST2BOpUpdate.java:2054)
>> at com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate.convertDeleteInsert(AST2BOpUpdate.java:989)
>> at com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate.convertUpdateSwitch(AST2BOpUpdate.java:417)
>> at com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate.convertUpdate(AST2BOpUpdate.java:279)
>> at com.bigdata.rdf.sparql.ast.eval.ASTEvalHelper.executeUpdate(ASTEvalHelper.java:1295)
>> ... 9 more
>>
>> [0] https://travis-ci.org/SemanticMediaWiki/SemanticMediaWiki
>>
>> Cheers
>>
>> ------------------------------------------------------------------------------
>> One dashboard for servers and applications across Physical-Virtual-Cloud
>> Widest out-of-the-box monitoring support with 50+ applications
>> Performance metrics, stats and reports that give you Actionable Insights
>> Deep dive visibility with transaction tracing using APM Insight.
>> http://ad.doubleclick.net/ddm/clk/290420510;117567292;y
>> _______________________________________________
>> Bigdata-developers mailing list
>> Big...@li...
>> https://lists.sourceforge.net/lists/listinfo/bigdata-developers
>
From: Bryan T. <br...@sy...> - 2015-04-30 14:56:18
From: Michael S. <ms...@me...> - 2015-04-30 13:57:27

Jeremy,

thanks for reporting this issue. Indeed, the problem you reported was
orthogonal to the analytic query mode problem sketched in bug #490. I
just provided a fix for your new example in branch mocked_term; see
http://trac.bigdata.com/ticket/490#comment:7 for more details.

Concerning the remaining analytics mode problem: as Bryan says, we
have identified the problem and have this issue on our roadmap (see
also the link above for an example query). Please let us know if this
is a blocker for you, so we can consider it when prioritizing issues.

Best,
Michael

> On 30 Apr 2015, at 00:18, Bryan Thompson <br...@sy...> wrote:
>
> Ok. It is likely to be a different issue. We will take a look at it.
> The TermId.hashCode() and equals() issue is (I believe) specific to
> the analytic query mode. However, maybe there is a basic problem with
> the query that is being hit in the non-analytic mode and then running
> into #490 in the analytic mode and getting a different wrong answer.
> ----
> Bryan Thompson
> Chief Scientist & Founder
> SYSTAP, LLC
>
> On Wed, Apr 29, 2015 at 6:12 PM, Jeremy J Carroll <jj...@sy...> wrote:
>> I added a test case: it gives different answers in analytic and
>> non-analytic mode; neither is correct.
>>
>> It was harder to find than I remember …
>>
>> Jeremy J Carroll
>>
>>> On Apr 29, 2015, at 2:03 PM, Bryan Thompson <br...@sy...> wrote:
>>>
>>> Ok. Sounds good.
>>> Bryan
>>>
>>> On Wed, Apr 29, 2015 at 4:56 PM, Jeremy J Carroll <jj...@sy...> wrote:
>>>> Thanks, I will try to generate a failing query from that and add
>>>> it to the ticket.
>>>>
>>>>> On Apr 29, 2015, at 1:51 PM, Bryan Thompson <br...@sy...> wrote:
>>>>>
>>>>> Jeremy,
>>>>>
>>>>> We did fix several issues related to this, but the specific ticket
>>>>> that you are thinking of is
>>>>>
>>>>> http://trac.bigdata.com/ticket/490 (Mock IV / TermId
>>>>> hashCode()/equals() problems)
>>>>>
>>>>> We did add logic to the query plans to do dictionary materialization
>>>>> in order to resolve the IV for a constructed RDF Value that is in the
>>>>> dictionary. This was done as part of
>>>>> http://trac.bigdata.com/ticket/1118. Michael (Cc) might recall this
>>>>> more clearly. However, I believe the specific issue for #490 still
>>>>> remains in some circumstances. We have this tagged in our backlog and
>>>>> plan to address it shortly.
>>>>>
>>>>> Let me check with Michael and get back to you on the specific issues
>>>>> that remain and when we think they will be resolved.
>>>>>
>>>>> Thanks,
>>>>> Bryan
>>>>>
>>>>> On Wed, Apr 29, 2015 at 4:23 PM, Jeremy J Carroll <jj...@sy...> wrote:
>>>>>>
>>>>>> A long time ago, maybe two years even, analytic query mode had
>>>>>> correctness issues to do with literals that were in the query but
>>>>>> not in the data, or something like that.
>>>>>>
>>>>>> Are these issues believed resolved?
>>>>>>
>>>>>> IIRC the sort of problem case was e.g.:
>>>>>>
>>>>>> select *
>>>>>> {
>>>>>>   BIND (CONCAT("Plain","Literal") as ?foo)
>>>>>>   graph ?g
>>>>>>   { ?s ?p ?foo }
>>>>>> } LIMIT 20
>>>>>>
>>>>>> where the data includes the graph
>>>>>> <http://www.w3.org/1999/02/22-rdf-syntax-ns>,
>>>>>> which includes the triple
>>>>>>
>>>>>> rdf:PlainLiteral rdfs:label "PlainLiteral"
>>>>>>
>>>>>> This now does work both in Analytic mode and not.
>>>>>>
>>>>>> thanks
>>>>>>
>>>>>> Jeremy
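For readers unfamiliar with the TermId hashCode()/equals() bug class mentioned above, here is a purely illustrative sketch (the class and field names are hypothetical, not Blazegraph's actual code) of how a hash/equality contract violation makes hash-based lookups silently miss matches:

```python
# Illustrative only: a hashCode()/equals() contract violation of the
# kind described for TermId/mock IVs, sketched in Python. Names and
# fields are hypothetical, not Blazegraph's implementation.

class BrokenTermId:
    """A term identifier whose hash disagrees with its equality."""

    def __init__(self, term_id, cached_value=None):
        self.term_id = term_id            # the stable identifier
        self.cached_value = cached_value  # mutable materialization cache

    def __eq__(self, other):
        # Equality is (correctly) based on the identifier alone...
        return isinstance(other, BrokenTermId) and self.term_id == other.term_id

    def __hash__(self):
        # ...but the hash also mixes in the mutable cache -- a contract
        # violation: equal objects can land in different hash buckets.
        return hash((self.term_id, self.cached_value))


# A hash-join-style lookup then silently misses:
seen = {BrokenTermId(42, cached_value=0): "subject"}
probe = BrokenTermId(42, cached_value=1)  # equal id, different cache state

print(probe == next(iter(seen)))  # True: the two terms compare equal
print(probe in seen)              # False: the hash lookup misses the match
```

This is consistent with the symptom Jeremy describes: the same query returning different (wrong) answers depending on which internal term representation happens to be in play.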
From: James HK <jam...@gm...> - 2015-04-30 13:18:37
|
Hi,

When trying to run our unit test suite [0] against a preliminary/local Blazegraph 1.5.1 instance, the following error appeared during the test run (using a vanilla Blazegraph with the standard kb namespace). Our test suite (and the test mentioned) is run/tested against Virtuoso 6.1 / Fuseki 1.1.1 / Sesame 2.7.14 [0] on Travis-CI, therefore it is unlikely to be an issue on our side.

1) SMW\Tests\Integration\MediaWiki\Import\TimeDataTypeTest::testImportOfDifferentDateWithAssortmentOfOutputConversion
SMW\SPARQLStore\Exception\BadHttpDatabaseResponseException: A SPARQL query error has occurred

Query:
PREFIX wiki: <http://example.org/id/>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX owl: <http://www.w3.org/2002/07/owl#>
PREFIX swivt: <http://semantic-mediawiki.org/swivt/1.0#>
PREFIX property: <http://example.org/id/Property-3A>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
DELETE { wiki:TimeDataTypeRegressionTest ?p ?o } WHERE { wiki:TimeDataTypeRegressionTest ?p ?o }

Error: Query refused
Endpoint: http://192.168.1.104:9999/bigdata/namespace/kb/sparql
HTTP response code: 500

This translates into an error on the Blazegraph side (output from http://localhost:9999/bigdata/#update):

ERROR: SPARQL-UPDATE: updateStr=PREFIX wiki: <http://example.org/id/> PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> PREFIX owl: <http://www.w3.org/2002/07/owl#> PREFIX swivt: <http://semantic-mediawiki.org/swivt/1.0#> PREFIX property: <http://example.org/id/Property-3A> PREFIX xsd: <http://www.w3.org/2001/XMLSchema#> DELETE { wiki:TimeDataTypeRegressionTest ?p ?o } WHERE { wiki:TimeDataTypeRegressionTest ?p ?o }

java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: org.openrdf.query.UpdateExecutionException: java.lang.IllegalStateException: Already assigned: old=LiteralExtensionIV [delegate=XSDLong(6017484188943806464), datatype=Vocab(-42)], new=LiteralExtensionIV [delegate=XSDLong(204552172800000), datatype=Vocab(-42)], this: "8452"^^
	at java.util.concurrent.FutureTask.report(FutureTask.java:122)
	at java.util.concurrent.FutureTask.get(FutureTask.java:188)
	at com.bigdata.rdf.sail.webapp.BigdataServlet.submitApiTask(BigdataServlet.java:261)
	at com.bigdata.rdf.sail.webapp.QueryServlet.doSparqlUpdate(QueryServlet.java:359)
	at com.bigdata.rdf.sail.webapp.QueryServlet.doPost(QueryServlet.java:165)
	at com.bigdata.rdf.sail.webapp.RESTServlet.doPost(RESTServlet.java:237)
	at com.bigdata.rdf.sail.webapp.MultiTenancyServlet.doPost(MultiTenancyServlet.java:137)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
	at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:769)
	at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
	at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
	at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
	at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1125)
	at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
	at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
	at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1059)
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
	at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
	at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
	at org.eclipse.jetty.server.Server.handle(Server.java:497)
	at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
	at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:248)
	at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:610)
	at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:539)
	at java.lang.Thread.run(Thread.java:744)
Caused by: java.util.concurrent.ExecutionException: org.openrdf.query.UpdateExecutionException: java.lang.IllegalStateException: Already assigned: old=LiteralExtensionIV [delegate=XSDLong(6017484188943806464), datatype=Vocab(-42)], new=LiteralExtensionIV [delegate=XSDLong(204552172800000), datatype=Vocab(-42)], this: "8452"^^
	at java.util.concurrent.FutureTask.report(FutureTask.java:122)
	at java.util.concurrent.FutureTask.get(FutureTask.java:188)
	at com.bigdata.rdf.sail.webapp.QueryServlet$SparqlUpdateTask.call(QueryServlet.java:460)
	at com.bigdata.rdf.sail.webapp.QueryServlet$SparqlUpdateTask.call(QueryServlet.java:371)
	at com.bigdata.rdf.task.ApiTaskForIndexManager.call(ApiTaskForIndexManager.java:68)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at com.bigdata.rdf.task.AbstractApiTask.submitApiTask(AbstractApiTask.java:365)
	at com.bigdata.rdf.sail.webapp.BigdataServlet.submitApiTask(BigdataServlet.java:258)
	... 26 more
Caused by: org.openrdf.query.UpdateExecutionException: java.lang.IllegalStateException: Already assigned: old=LiteralExtensionIV [delegate=XSDLong(6017484188943806464), datatype=Vocab(-42)], new=LiteralExtensionIV [delegate=XSDLong(204552172800000), datatype=Vocab(-42)], this: "8452"^^
	at com.bigdata.rdf.sparql.ast.eval.ASTEvalHelper.executeUpdate(ASTEvalHelper.java:1303)
	at com.bigdata.rdf.sail.BigdataSailUpdate.execute2(BigdataSailUpdate.java:152)
	at com.bigdata.rdf.sail.webapp.BigdataRDFContext$UpdateTask.doQuery(BigdataRDFContext.java:1683)
	at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.innerCall(BigdataRDFContext.java:1310)
	at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:1275)
	at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:517)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	... 1 more
Caused by: java.lang.IllegalStateException: Already assigned: old=LiteralExtensionIV [delegate=XSDLong(6017484188943806464), datatype=Vocab(-42)], new=LiteralExtensionIV [delegate=XSDLong(204552172800000), datatype=Vocab(-42)], this: "8452"^^
	at com.bigdata.rdf.model.BigdataValueImpl.setIV(BigdataValueImpl.java:139)
	at com.bigdata.rdf.internal.LexiconConfiguration.createInlineIV(LexiconConfiguration.java:430)
	at com.bigdata.rdf.lexicon.LexiconRelation.getInlineIV(LexiconRelation.java:3150)
	at com.bigdata.rdf.lexicon.LexiconRelation.addTerms(LexiconRelation.java:1719)
	at com.bigdata.rdf.store.AbstractTripleStore.getAccessPath(AbstractTripleStore.java:2928)
	at com.bigdata.rdf.store.AbstractTripleStore.getAccessPath(AbstractTripleStore.java:2874)
	at com.bigdata.rdf.sail.BigdataSail$BigdataSailConnection.removeStatements(BigdataSail.java:2962)
	at com.bigdata.rdf.sail.BigdataSail$BigdataSailConnection.removeStatements(BigdataSail.java:2865)
	at com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate.addOrRemoveStatement(AST2BOpUpdate.java:2054)
	at com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate.convertDeleteInsert(AST2BOpUpdate.java:989)
	at com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate.convertUpdateSwitch(AST2BOpUpdate.java:417)
	at com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate.convertUpdate(AST2BOpUpdate.java:279)
	at com.bigdata.rdf.sparql.ast.eval.ASTEvalHelper.executeUpdate(ASTEvalHelper.java:1295)
	... 9 more

[0] https://travis-ci.org/SemanticMediaWiki/SemanticMediaWiki

Cheers |
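For anyone trying to reproduce the failure above outside the SMW test suite, the UPDATE can be replayed directly against the endpoint with a plain HTTP client. Below is a minimal Java sketch (the endpoint URL and the trimmed-down prefix/query text come from the report above; per the SPARQL 1.1 Protocol an update is POSTed with Content-Type application/sparql-update — the class and method names here are illustrative, not SMW or Blazegraph code):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ReplayUpdate {

    // Endpoint from the report above; adjust host and namespace for your setup.
    static final String ENDPOINT =
            "http://192.168.1.104:9999/bigdata/namespace/kb/sparql";

    // The failing DELETE/WHERE, trimmed to the one prefix it actually uses.
    static final String UPDATE =
            "PREFIX wiki: <http://example.org/id/>\n"
          + "DELETE { wiki:TimeDataTypeRegressionTest ?p ?o }\n"
          + "WHERE { wiki:TimeDataTypeRegressionTest ?p ?o }";

    // SPARQL 1.1 Protocol: the update text is the POST body.
    static HttpRequest buildRequest(String endpoint, String update) {
        return HttpRequest.newBuilder(URI.create(endpoint))
                .header("Content-Type", "application/sparql-update")
                .POST(HttpRequest.BodyPublishers.ofString(update))
                .build();
    }

    public static void main(String[] args) throws Exception {
        HttpRequest req = buildRequest(ENDPOINT, UPDATE);
        if (args.length > 0) { // pass any argument to actually send the request
            HttpResponse<String> res = HttpClient.newHttpClient()
                    .send(req, HttpResponse.BodyHandlers.ofString());
            System.out.println(res.statusCode() + "\n" + res.body());
        } else {
            System.out.println(req.method() + " " + req.uri());
        }
    }
}
```

With the report's data loaded, sending this should reproduce the HTTP 500 and the "Already assigned" stack trace above; against an empty namespace the update is a no-op.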
|
From: Jack P. <jac...@gm...> - 2015-04-30 00:06:27
|
New question: some of my properties are List<String>, which is not one of the Literal types in DefaultBlueprintsValueFactory. What should I do about that? Many thanks in advance. Jack |
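Blueprints element properties have to be values the configured value factory can turn into literals, so one common workaround — sketched below with my own names, not a built-in Blazegraph/Blueprints feature — is to encode the List<String> into a single String before storing it and decode it on read:

```java
import java.util.ArrayList;
import java.util.List;

public class ListPropertyCodec {

    // The ASCII "unit separator" is unlikely to occur in normal strings.
    // This delimiter choice is an assumption; use a JSON encoding instead
    // if your values may contain arbitrary control characters.
    private static final String SEP = "\u001F";

    // Flatten the list into one storable String.
    static String encode(List<String> values) {
        return String.join(SEP, values);
    }

    // Rebuild the list from the stored String.
    // Note: an empty list and a list of one empty string both encode to "";
    // if that distinction matters, switch to JSON.
    static List<String> decode(String stored) {
        List<String> out = new ArrayList<>();
        if (stored.isEmpty()) return out;
        for (String s : stored.split(SEP, -1)) out.add(s);
        return out;
    }
}
```

On write you would then call something like `element.setProperty("roles", ListPropertyCodec.encode(roles))` (Element.setProperty is the standard Blueprints call) and decode the stored string on read.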
|
From: Jeremy J C. <jj...@sy...> - 2015-04-29 22:35:27
|
I added a test case: it gives different answers in analytic and non-analytic mode: neither is correct. It was harder to find than I remember … Jeremy J Carroll > On Apr 29, 2015, at 2:03 PM, Bryan Thompson <br...@sy...> wrote: > > Ok. Sounds good. > Bryan > ---- > Bryan Thompson > Chief Scientist & Founder > SYSTAP, LLC > 4501 Tower Road > Greensboro, NC 27410 > br...@sy... > http://blazegraph.com > http://blog.bigdata.com > http://mapgraph.io > > Blazegraph™ is our ultra high-performance graph database that supports > both RDF/SPARQL and Tinkerpop/Blueprints APIs. MapGraph™ is our > disruptive new technology to use GPUs to accelerate data-parallel > graph analytics. > > CONFIDENTIALITY NOTICE: This email and its contents and attachments > are for the sole use of the intended recipient(s) and are confidential > or proprietary to SYSTAP. Any unauthorized review, use, disclosure, > dissemination or copying of this email or its contents or attachments > is prohibited. If you have received this communication in error, > please notify the sender by reply email and permanently delete all > copies of the email and its contents and attachments. > > > On Wed, Apr 29, 2015 at 4:56 PM, Jeremy J Carroll <jj...@sy...> wrote: >> Thanks, I will try and generate a failing query from that and add it tot he ticket >> >> >> >> >>> On Apr 29, 2015, at 1:51 PM, Bryan Thompson <br...@sy...> wrote: >>> >>> Jeremy, >>> >>> We did fix several issues related to this, but the specific ticket >>> that you are thinking of is >>> >>> http://trac.bigdata.com/ticket/490 (Mock IV / TermId >>> hashCode()/equals() problems) >>> >>> We did add logic to the query plans to do dictionary materialization >>> in order to resolve the IV for a constructed RDF Value that is in the >>> dictionary. This was done as part of >>> http://trac.bigdata.com/ticket/1118. Michael (Cc) might recall this >>> more clearly. However, I believe the specific issue for #490 still >>> remains in some circumstances. 
We have this tagged in our backlog and >>> plan to address it shortly. >>> >>> Let me check with Michael and get back to you on the specific issues >>> that remain and when we think they will be resolved. >>> >>> Thanks, >>> Bryan >>> ---- >>> Bryan Thompson >>> Chief Scientist & Founder >>> SYSTAP, LLC >>> 4501 Tower Road >>> Greensboro, NC 27410 >>> br...@sy... >>> http://blazegraph.com >>> http://blog.bigdata.com >>> http://mapgraph.io >>> >>> Blazegraph™ is our ultra high-performance graph database that supports >>> both RDF/SPARQL and Tinkerpop/Blueprints APIs. MapGraph™ is our >>> disruptive new technology to use GPUs to accelerate data-parallel >>> graph analytics. >>> >>> CONFIDENTIALITY NOTICE: This email and its contents and attachments >>> are for the sole use of the intended recipient(s) and are confidential >>> or proprietary to SYSTAP. Any unauthorized review, use, disclosure, >>> dissemination or copying of this email or its contents or attachments >>> is prohibited. If you have received this communication in error, >>> please notify the sender by reply email and permanently delete all >>> copies of the email and its contents and attachments. >>> >>> >>> On Wed, Apr 29, 2015 at 4:23 PM, Jeremy J Carroll <jj...@sy...> wrote: >>>> >>>> A long time ago, maybe two years even, analytic query mode had correctness issues to do with literals that were in the query but not in the data or something like that …. >>>> >>>> Are these issues believed resolved? >>>> >>>> >>>> IIRC the sort of problem case was e.g: >>>> >>>> >>>> select * >>>> { >>>> >>>> BIND (Concat("Plain","Literal") as ?foo) >>>> graph ?g >>>> { ?s ?p ?foo } >>>> } LIMIT 20 >>>> >>>> >>>> where the data includes the graph <http://www.w3.org/1999/02/22-rdf-syntax-ns> >>>> >>>> which includes the triple >>>> >>>> rdf:PlainLiteral rdfs:label “PlainLiteral” >>>> >>>> This now does work both in Analytic mode and not. 
>>>> >>>> thanks >>>> >>>> Jeremy >>>> ------------------------------------------------------------------------------ >>>> One dashboard for servers and applications across Physical-Virtual-Cloud >>>> Widest out-of-the-box monitoring support with 50+ applications >>>> Performance metrics, stats and reports that give you Actionable Insights >>>> Deep dive visibility with transaction tracing using APM Insight. >>>> http://ad.doubleclick.net/ddm/clk/290420510;117567292;y >>>> _______________________________________________ >>>> Bigdata-developers mailing list >>>> Big...@li... >>>> https://lists.sourceforge.net/lists/listinfo/bigdata-developers >> |
|
From: Bryan T. <br...@sy...> - 2015-04-29 22:18:44
|
Ok. It is likely to be a different issue. We will take a look at it. The TermId.hashCode() and equals() issue is (I believe) specific to the analytic query mode. However, maybe there is a basic problem with the query that is being hit in the non-analytic mode and then running into #490 in the analytic mode and getting a different wrong answer. ---- Bryan Thompson Chief Scientist & Founder SYSTAP, LLC 4501 Tower Road Greensboro, NC 27410 br...@sy... http://blazegraph.com http://blog.bigdata.com http://mapgraph.io Blazegraph™ is our ultra high-performance graph database that supports both RDF/SPARQL and Tinkerpop/Blueprints APIs. MapGraph™ is our disruptive new technology to use GPUs to accelerate data-parallel graph analytics. CONFIDENTIALITY NOTICE: This email and its contents and attachments are for the sole use of the intended recipient(s) and are confidential or proprietary to SYSTAP. Any unauthorized review, use, disclosure, dissemination or copying of this email or its contents or attachments is prohibited. If you have received this communication in error, please notify the sender by reply email and permanently delete all copies of the email and its contents and attachments. On Wed, Apr 29, 2015 at 6:12 PM, Jeremy J Carroll <jj...@sy...> wrote: > I added a test case: it gives different answers in analytic and non-analytic mode: neither is correct. > > It was harder to find than I remember … > > Jeremy J Carroll > > >> On Apr 29, 2015, at 2:03 PM, Bryan Thompson <br...@sy...> wrote: >> >> Ok. Sounds good. >> Bryan >> ---- >> Bryan Thompson >> Chief Scientist & Founder >> SYSTAP, LLC >> 4501 Tower Road >> Greensboro, NC 27410 >> br...@sy... >> http://blazegraph.com >> http://blog.bigdata.com >> http://mapgraph.io >> >> Blazegraph™ is our ultra high-performance graph database that supports >> both RDF/SPARQL and Tinkerpop/Blueprints APIs. MapGraph™ is our >> disruptive new technology to use GPUs to accelerate data-parallel >> graph analytics. 
>> >> CONFIDENTIALITY NOTICE: This email and its contents and attachments >> are for the sole use of the intended recipient(s) and are confidential >> or proprietary to SYSTAP. Any unauthorized review, use, disclosure, >> dissemination or copying of this email or its contents or attachments >> is prohibited. If you have received this communication in error, >> please notify the sender by reply email and permanently delete all >> copies of the email and its contents and attachments. >> >> >> On Wed, Apr 29, 2015 at 4:56 PM, Jeremy J Carroll <jj...@sy...> wrote: >>> Thanks, I will try and generate a failing query from that and add it tot he ticket >>> >>> >>> >>> >>>> On Apr 29, 2015, at 1:51 PM, Bryan Thompson <br...@sy...> wrote: >>>> >>>> Jeremy, >>>> >>>> We did fix several issues related to this, but the specific ticket >>>> that you are thinking of is >>>> >>>> http://trac.bigdata.com/ticket/490 (Mock IV / TermId >>>> hashCode()/equals() problems) >>>> >>>> We did add logic to the query plans to do dictionary materialization >>>> in order to resolve the IV for a constructed RDF Value that is in the >>>> dictionary. This was done as part of >>>> http://trac.bigdata.com/ticket/1118. Michael (Cc) might recall this >>>> more clearly. However, I believe the specific issue for #490 still >>>> remains in some circumstances. We have this tagged in our backlog and >>>> plan to address it shortly. >>>> >>>> Let me check with Michael and get back to you on the specific issues >>>> that remain and when we think they will be resolved. >>>> >>>> Thanks, >>>> Bryan >>>> ---- >>>> Bryan Thompson >>>> Chief Scientist & Founder >>>> SYSTAP, LLC >>>> 4501 Tower Road >>>> Greensboro, NC 27410 >>>> br...@sy... >>>> http://blazegraph.com >>>> http://blog.bigdata.com >>>> http://mapgraph.io >>>> >>>> Blazegraph™ is our ultra high-performance graph database that supports >>>> both RDF/SPARQL and Tinkerpop/Blueprints APIs. 
MapGraph™ is our >>>> disruptive new technology to use GPUs to accelerate data-parallel >>>> graph analytics. >>>> >>>> CONFIDENTIALITY NOTICE: This email and its contents and attachments >>>> are for the sole use of the intended recipient(s) and are confidential >>>> or proprietary to SYSTAP. Any unauthorized review, use, disclosure, >>>> dissemination or copying of this email or its contents or attachments >>>> is prohibited. If you have received this communication in error, >>>> please notify the sender by reply email and permanently delete all >>>> copies of the email and its contents and attachments. >>>> >>>> >>>> On Wed, Apr 29, 2015 at 4:23 PM, Jeremy J Carroll <jj...@sy...> wrote: >>>>> >>>>> A long time ago, maybe two years even, analytic query mode had correctness issues to do with literals that were in the query but not in the data or something like that …. >>>>> >>>>> Are these issues believed resolved? >>>>> >>>>> >>>>> IIRC the sort of problem case was e.g: >>>>> >>>>> >>>>> select * >>>>> { >>>>> >>>>> BIND (Concat("Plain","Literal") as ?foo) >>>>> graph ?g >>>>> { ?s ?p ?foo } >>>>> } LIMIT 20 >>>>> >>>>> >>>>> where the data includes the graph <http://www.w3.org/1999/02/22-rdf-syntax-ns> >>>>> >>>>> which includes the triple >>>>> >>>>> rdf:PlainLiteral rdfs:label “PlainLiteral” >>>>> >>>>> This now does work both in Analytic mode and not. >>>>> >>>>> thanks >>>>> >>>>> Jeremy >>>>> ------------------------------------------------------------------------------ >>>>> One dashboard for servers and applications across Physical-Virtual-Cloud >>>>> Widest out-of-the-box monitoring support with 50+ applications >>>>> Performance metrics, stats and reports that give you Actionable Insights >>>>> Deep dive visibility with transaction tracing using APM Insight. >>>>> http://ad.doubleclick.net/ddm/clk/290420510;117567292;y >>>>> _______________________________________________ >>>>> Bigdata-developers mailing list >>>>> Big...@li... 
>>>>> https://lists.sourceforge.net/lists/listinfo/bigdata-developers >>> > |
|
From: Bryan T. <br...@sy...> - 2015-04-29 21:03:57
|
Ok. Sounds good. Bryan ---- Bryan Thompson Chief Scientist & Founder SYSTAP, LLC 4501 Tower Road Greensboro, NC 27410 br...@sy... http://blazegraph.com http://blog.bigdata.com http://mapgraph.io Blazegraph™ is our ultra high-performance graph database that supports both RDF/SPARQL and Tinkerpop/Blueprints APIs. MapGraph™ is our disruptive new technology to use GPUs to accelerate data-parallel graph analytics. CONFIDENTIALITY NOTICE: This email and its contents and attachments are for the sole use of the intended recipient(s) and are confidential or proprietary to SYSTAP. Any unauthorized review, use, disclosure, dissemination or copying of this email or its contents or attachments is prohibited. If you have received this communication in error, please notify the sender by reply email and permanently delete all copies of the email and its contents and attachments. On Wed, Apr 29, 2015 at 4:56 PM, Jeremy J Carroll <jj...@sy...> wrote: > Thanks, I will try and generate a failing query from that and add it tot he ticket > > > > >> On Apr 29, 2015, at 1:51 PM, Bryan Thompson <br...@sy...> wrote: >> >> Jeremy, >> >> We did fix several issues related to this, but the specific ticket >> that you are thinking of is >> >> http://trac.bigdata.com/ticket/490 (Mock IV / TermId >> hashCode()/equals() problems) >> >> We did add logic to the query plans to do dictionary materialization >> in order to resolve the IV for a constructed RDF Value that is in the >> dictionary. This was done as part of >> http://trac.bigdata.com/ticket/1118. Michael (Cc) might recall this >> more clearly. However, I believe the specific issue for #490 still >> remains in some circumstances. We have this tagged in our backlog and >> plan to address it shortly. >> >> Let me check with Michael and get back to you on the specific issues >> that remain and when we think they will be resolved. 
>> >> Thanks, >> Bryan >> ---- >> Bryan Thompson >> Chief Scientist & Founder >> SYSTAP, LLC >> 4501 Tower Road >> Greensboro, NC 27410 >> br...@sy... >> http://blazegraph.com >> http://blog.bigdata.com >> http://mapgraph.io >> >> Blazegraph™ is our ultra high-performance graph database that supports >> both RDF/SPARQL and Tinkerpop/Blueprints APIs. MapGraph™ is our >> disruptive new technology to use GPUs to accelerate data-parallel >> graph analytics. >> >> CONFIDENTIALITY NOTICE: This email and its contents and attachments >> are for the sole use of the intended recipient(s) and are confidential >> or proprietary to SYSTAP. Any unauthorized review, use, disclosure, >> dissemination or copying of this email or its contents or attachments >> is prohibited. If you have received this communication in error, >> please notify the sender by reply email and permanently delete all >> copies of the email and its contents and attachments. >> >> >> On Wed, Apr 29, 2015 at 4:23 PM, Jeremy J Carroll <jj...@sy...> wrote: >>> >>> A long time ago, maybe two years even, analytic query mode had correctness issues to do with literals that were in the query but not in the data or something like that …. >>> >>> Are these issues believed resolved? >>> >>> >>> IIRC the sort of problem case was e.g: >>> >>> >>> select * >>> { >>> >>> BIND (Concat("Plain","Literal") as ?foo) >>> graph ?g >>> { ?s ?p ?foo } >>> } LIMIT 20 >>> >>> >>> where the data includes the graph <http://www.w3.org/1999/02/22-rdf-syntax-ns> >>> >>> which includes the triple >>> >>> rdf:PlainLiteral rdfs:label “PlainLiteral” >>> >>> This now does work both in Analytic mode and not. 
>>> >>> thanks >>> >>> Jeremy >>> ------------------------------------------------------------------------------ >>> One dashboard for servers and applications across Physical-Virtual-Cloud >>> Widest out-of-the-box monitoring support with 50+ applications >>> Performance metrics, stats and reports that give you Actionable Insights >>> Deep dive visibility with transaction tracing using APM Insight. >>> http://ad.doubleclick.net/ddm/clk/290420510;117567292;y >>> _______________________________________________ >>> Bigdata-developers mailing list >>> Big...@li... >>> https://lists.sourceforge.net/lists/listinfo/bigdata-developers > |
|
From: Jeremy J C. <jj...@sy...> - 2015-04-29 20:56:25
|
Thanks, I will try and generate a failing query from that and add it to the ticket.

> On Apr 29, 2015, at 1:51 PM, Bryan Thompson <br...@sy...> wrote:
>
> Jeremy,
>
> We did fix several issues related to this, but the specific ticket
> that you are thinking of is
>
> http://trac.bigdata.com/ticket/490 (Mock IV / TermId
> hashCode()/equals() problems)
>
> We did add logic to the query plans to do dictionary materialization
> in order to resolve the IV for a constructed RDF Value that is in the
> dictionary. This was done as part of
> http://trac.bigdata.com/ticket/1118. Michael (Cc) might recall this
> more clearly. However, I believe the specific issue for #490 still
> remains in some circumstances. We have this tagged in our backlog and
> plan to address it shortly.
>
> Let me check with Michael and get back to you on the specific issues
> that remain and when we think they will be resolved.
>
> Thanks,
> Bryan
> ----
> Bryan Thompson
> Chief Scientist & Founder
> SYSTAP, LLC
> 4501 Tower Road
> Greensboro, NC 27410
> br...@sy...
> http://blazegraph.com
> http://blog.bigdata.com
> http://mapgraph.io
>
> Blazegraph™ is our ultra high-performance graph database that supports
> both RDF/SPARQL and Tinkerpop/Blueprints APIs. MapGraph™ is our
> disruptive new technology to use GPUs to accelerate data-parallel
> graph analytics.
>
> CONFIDENTIALITY NOTICE: This email and its contents and attachments
> are for the sole use of the intended recipient(s) and are confidential
> or proprietary to SYSTAP. Any unauthorized review, use, disclosure,
> dissemination or copying of this email or its contents or attachments
> is prohibited. If you have received this communication in error,
> please notify the sender by reply email and permanently delete all
> copies of the email and its contents and attachments.
>
>
> On Wed, Apr 29, 2015 at 4:23 PM, Jeremy J Carroll <jj...@sy...> wrote:
>>
>> A long time ago, maybe two years even, analytic query mode had correctness issues to do with literals that were in the query but not in the data or something like that ….
>>
>> Are these issues believed resolved?
>>
>>
>> IIRC the sort of problem case was e.g:
>>
>>
>> select *
>> {
>>
>> BIND (Concat("Plain","Literal") as ?foo)
>> graph ?g
>> { ?s ?p ?foo }
>> } LIMIT 20
>>
>>
>> where the data includes the graph <http://www.w3.org/1999/02/22-rdf-syntax-ns>
>>
>> which includes the triple
>>
>> rdf:PlainLiteral rdfs:label “PlainLiteral”
>>
>> This now does work both in Analytic mode and not.
>>
>> thanks
>>
>> Jeremy
>> ------------------------------------------------------------------------------
>> One dashboard for servers and applications across Physical-Virtual-Cloud
>> Widest out-of-the-box monitoring support with 50+ applications
>> Performance metrics, stats and reports that give you Actionable Insights
>> Deep dive visibility with transaction tracing using APM Insight.
>> http://ad.doubleclick.net/ddm/clk/290420510;117567292;y
>> _______________________________________________
>> Bigdata-developers mailing list
>> Big...@li...
>> https://lists.sourceforge.net/lists/listinfo/bigdata-developers |
|
From: Bryan T. <br...@sy...> - 2015-04-29 20:52:02
|
Jeremy, We did fix several issues related to this, but the specific ticket that you are thinking of is http://trac.bigdata.com/ticket/490 (Mock IV / TermId hashCode()/equals() problems) We did add logic to the query plans to do dictionary materialization in order to resolve the IV for a constructed RDF Value that is in the dictionary. This was done as part of http://trac.bigdata.com/ticket/1118. Michael (Cc) might recall this more clearly. However, I believe the specific issue for #490 still remains in some circumstances. We have this tagged in our backlog and plan to address it shortly. Let me check with Michael and get back to you on the specific issues that remain and when we think they will be resolved. Thanks, Bryan ---- Bryan Thompson Chief Scientist & Founder SYSTAP, LLC 4501 Tower Road Greensboro, NC 27410 br...@sy... http://blazegraph.com http://blog.bigdata.com http://mapgraph.io Blazegraph™ is our ultra high-performance graph database that supports both RDF/SPARQL and Tinkerpop/Blueprints APIs. MapGraph™ is our disruptive new technology to use GPUs to accelerate data-parallel graph analytics. CONFIDENTIALITY NOTICE: This email and its contents and attachments are for the sole use of the intended recipient(s) and are confidential or proprietary to SYSTAP. Any unauthorized review, use, disclosure, dissemination or copying of this email or its contents or attachments is prohibited. If you have received this communication in error, please notify the sender by reply email and permanently delete all copies of the email and its contents and attachments. On Wed, Apr 29, 2015 at 4:23 PM, Jeremy J Carroll <jj...@sy...> wrote: > > A long time ago, maybe two years even, analytic query mode had correctness issues to do with literals that were in the query but not in the data or something like that …. > > Are these issues believed resolved? 
> > > IIRC the sort of problem case was e.g: > > > select * > { > > BIND (Concat("Plain","Literal") as ?foo) > graph ?g > { ?s ?p ?foo } > } LIMIT 20 > > > where the data includes the graph <http://www.w3.org/1999/02/22-rdf-syntax-ns> > > which includes the triple > > rdf:PlainLiteral rdfs:label “PlainLiteral” > > This now does work both in Analytic mode and not. > > thanks > > Jeremy > ------------------------------------------------------------------------------ > One dashboard for servers and applications across Physical-Virtual-Cloud > Widest out-of-the-box monitoring support with 50+ applications > Performance metrics, stats and reports that give you Actionable Insights > Deep dive visibility with transaction tracing using APM Insight. > http://ad.doubleclick.net/ddm/clk/290420510;117567292;y > _______________________________________________ > Bigdata-developers mailing list > Big...@li... > https://lists.sourceforge.net/lists/listinfo/bigdata-developers |
|
From: Jeremy J C. <jj...@sy...> - 2015-04-29 20:23:25
|
A long time ago, maybe two years even, analytic query mode had correctness issues to do with literals that were in the query but not in the data or something like that ….
Are these issues believed resolved?
IIRC the sort of problem case was e.g:
select *
{
BIND (Concat("Plain","Literal") as ?foo)
graph ?g
{ ?s ?p ?foo }
} LIMIT 20
where the data includes the graph <http://www.w3.org/1999/02/22-rdf-syntax-ns>
which includes the triple
rdf:PlainLiteral rdfs:label "PlainLiteral"
This particular example now works both in Analytic mode and in non-Analytic mode.
thanks
Jeremy
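To make the ticket-490 failure mode concrete for readers joining the thread: a literal constructed inside the query (by the Concat above) gets a placeholder "mock" internal identifier because it was never resolved against the dictionary, and if hashing and equality are then based on that identifier rather than the lexical value, a hash join silently misses the identical literal coming from the data. The classes below are a schematic illustration of that mismatch written for this post — they are not Blazegraph internals:

```java
import java.util.HashMap;
import java.util.Map;

public class MockIvDemo {

    // Stand-in for a term with a lexical form and an internal value id (IV).
    static final class Term {
        final String lexical; // e.g. "PlainLiteral"
        final long iv;        // dictionary id, or a mock id for query-built terms

        Term(String lexical, long iv) { this.lexical = lexical; this.iv = iv; }
    }

    // Buggy behaviour: key the hash join on the internal id.
    static boolean joinsById(Term fromData, Term fromQuery) {
        Map<Long, Term> index = new HashMap<>();
        index.put(fromData.iv, fromData);
        return index.containsKey(fromQuery.iv);
    }

    // Correct behaviour: materialize and key on the lexical value.
    static boolean joinsByValue(Term fromData, Term fromQuery) {
        Map<String, Term> index = new HashMap<>();
        index.put(fromData.lexical, fromData);
        return index.containsKey(fromQuery.lexical);
    }

    public static void main(String[] args) {
        Term data  = new Term("PlainLiteral", 42L); // resolved against the dictionary
        Term query = new Term("PlainLiteral", -1L); // mock id: never in the dictionary
        System.out.println(joinsById(data, query));    // false -- the missed join
        System.out.println(joinsByValue(data, query)); // true
    }
}
```

The dictionary-materialization work mentioned for ticket #1118 corresponds to moving from the first strategy to the second.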
|
|
From: Jim B. <ba...@ne...> - 2015-04-29 01:50:13
|
Hi Kaushik, I do have a reasoner integration I developed that I use with Blazegraph. It is called Owlet: https://github.com/phenoscape/owlet It is an API for SPARQL query expansion by means of an in-memory reasoner. I typically use it with ELK but you can use any OWL API-based reasoner. It is not directly integrated into Blazegraph, but you can incorporate it into a separate application to pre-expand your query before submitting to Blazegraph. I also have a web service application called Owlery which provides a query expansion service using Owlet. One thing to keep in mind is that you must have all the data used by the reasoner loaded into memory in an instance of OWLOntology. This works well for me because I typically only need Tbox reasoning—we have large anatomical ontologies. We query the ontology using a complex class expression (e.g. muscles attached to bones in the head), get all the inferred subclasses, and the retrieve instances tagged with any of those classes from Blazegraph. We don’t actually need to classify our instance data using our ontology, which is what it looks like you’re doing. That will work with Owlet, the only issue is that you will need all your instance data loaded into the reasoner. In my experience the available OWL reasoners are much more scalable for Tbox reasoning vs. giving them a large Abox. Best regards, Jim > On Apr 28, 2015, at 8:04 AM, Bryan Thompson <br...@sy...> wrote: > > Blazegraph does not support owl out of the box. Jim Balhoff (cc) has an ELK reasoner integration that you could use for this. > > @jim: can you provide some pointers to your work? > > Thanks, > Bryan > > On Tuesday, April 28, 2015, Kaushik Chakraborty <kay...@gm...> wrote: > Hi Brad, > > Thanks a lot for the clarification, it worked as I followed your steps. > However, I'm stuck at a simple OWL reasoning as I extend the same instance data. > > - Here's my updated instance data > > @prefix : <http://kg.cts.com/> . 
|
From: Bryan T. <br...@sy...> - 2015-04-28 17:51:32
|
Trac is up again, still with OpenID, which seems to be working.

Thanks,
Bryan

----
Bryan Thompson
Chief Scientist & Founder
SYSTAP, LLC
4501 Tower Road
Greensboro, NC 27410
br...@sy...
http://blazegraph.com
http://blog.bigdata.com
http://mapgraph.io

Blazegraph™ is our ultra high-performance graph database that supports both RDF/SPARQL and Tinkerpop/Blueprints APIs. MapGraph™ is our disruptive new technology to use GPUs to accelerate data-parallel graph analytics.

CONFIDENTIALITY NOTICE: This email and its contents and attachments are for the sole use of the intended recipient(s) and are confidential or proprietary to SYSTAP. Any unauthorized review, use, disclosure, dissemination or copying of this email or its contents or attachments is prohibited. If you have received this communication in error, please notify the sender by reply email and permanently delete all copies of the email and its contents and attachments.

On Mon, Apr 27, 2015 at 8:15 PM, Bryan Thompson <br...@sy...> wrote:
> All,
>
> Trac is down at the moment. We hope to have it back online shortly. If
> you have a support subscription, please contact us directly with your issue.
>
> Thank you for your patience.
>
> Bryan |
|
From: Jack P. <jac...@gm...> - 2015-04-28 14:51:31
|
Thanks Mike and Bryan. Both very useful answers. I am just now beginning to migrate the "hypermembrane" component of OpenSherlock to Blazegraph. Test cases are working fine.

Cheers
Jack

On Mon, Apr 27, 2015 at 7:07 PM, Mike Personick <mi...@sy...> wrote:
> Jack,
>
> Check out the BlueprintsValueFactory type hierarchy for details on how IDs
> get round-tripped to URIs. It is a pluggable API so you can define your
> own URI factory for Blueprints IDs. I am planning on doing a detailed
> write-up of this stuff later this week.
>
> Thanks,
> Mike
>
> On Mon, Apr 27, 2015 at 3:24 PM, Jack Park <jac...@gm...> wrote:
>> Some fetches in a blueprints graph require queries on multiple key/value
>> pairs: BigdataGraphClient appears to only support fetching vertices against
>> a single key/value pair. That seems to indicate a need to bring out SPARQL.
>>
>> I figured out how to get a RepositoryConnection, but have no clue what to
>> expect when mapping my identifiers against URIs.
>>
>> Any ideas? Perhaps a better way?
>>
>> Many thanks in advance.
>> Jack |
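Mike's point about IDs being "round-tripped" to URIs comes down to a reversible encoding. A minimal Python sketch of such a pluggable ID-to-URI factory follows — the namespace and function names are invented for illustration and are not Blazegraph's actual BlueprintsValueFactory API, which is a Java type hierarchy with its own default scheme.

```python
# Toy sketch of the ID <-> URI round-tripping a Blueprints/RDF bridge
# performs. Namespace and function names are hypothetical; Blazegraph's
# real BlueprintsValueFactory is a pluggable Java API.
from urllib.parse import quote, unquote

NS = "http://example.org/vertex/"  # hypothetical namespace, not Blazegraph's

def id_to_uri(vertex_id):
    """Embed a Blueprints element ID in a URI, percent-encoding unsafe chars."""
    return NS + quote(str(vertex_id), safe="")

def uri_to_id(uri):
    """Recover the original element ID from a URI minted by id_to_uri."""
    if not uri.startswith(NS):
        raise ValueError("URI was not minted by this factory")
    return unquote(uri[len(NS):])
```

Round-tripping (`uri_to_id(id_to_uri(x)) == x`) is the property any custom URI factory has to preserve, whatever scheme you choose.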
|
From: Bryan T. <br...@sy...> - 2015-04-28 12:04:16
|
Blazegraph does not support OWL reasoning out of the box. Jim Balhoff (cc'd) has an ELK reasoner integration that you could use for this.

@jim: can you provide some pointers to your work?

Thanks,
Bryan

On Tuesday, April 28, 2015, Kaushik Chakraborty <kay...@gm...> wrote:
> Hi Brad,
>
> Thanks a lot for the clarification, it worked as I followed your steps.
> However, I'm stuck at a simple OWL reasoning as I extend the same instance
> data. |
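The inference Kaushik expected is exactly the OWL step an external reasoner such as ELK would materialize: an individual whose org:role value falls in :LeadershipRoles gets classified into the owl:someValuesFrom restriction class. A toy Python sketch of that single rule, with identifiers abbreviated from the thread's example — this is illustration only, not Blazegraph or ELK code.

```python
# Illustration only of owl:someValuesFrom classification: an individual is
# an instance of the restriction class when at least one value of the
# restricted property belongs to the filler class.

triples = {
    ("ROLE1", "type", "LeadershipRoles"),
    ("ROLE3", "type", "AdminRoles"),
    ("m1", "role", "ROLE1"),   # :139137's membership node
    ("m1", "role", "ROLE3"),
    ("m2", "role", "ROLE3"),   # :139138's membership node
}

# :CEOType owl:equivalentClass [ owl:onProperty org:role ;
#                                owl:someValuesFrom :LeadershipRoles ]
RESTRICTION = ("CEOType", "role", "LeadershipRoles")

def classify(facts, restriction):
    """Return the rdf:type triples entailed by one someValuesFrom restriction."""
    cls, prop, filler = restriction
    types = {(s, o) for s, p, o in facts if p == "type"}
    return {(s, "type", cls)
            for s, p, o in facts
            if p == prop and (o, filler) in types}

# m1 has a role (ROLE1) typed :LeadershipRoles, so m1 is inferred a :CEOType;
# m2 only has ROLE3 (an :AdminRoles) and is not.
inferred = classify(triples, RESTRICTION)
```

Even with that triple materialized, note the query would need to be `?who org:hasMembership ?m . ?m rdf:type :CEOType` — the pattern `?who org:hasMembership :CEOType` matches the individual :CEOType itself, not nodes typed as :CEOType.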
|
From: Kaushik C. <kay...@gm...> - 2015-04-28 09:17:20
|
Hi Brad,

Thanks a lot for the clarification, it worked as I followed your steps. However, I'm stuck at a simple OWL reasoning as I extend the same instance data.

- Here's my updated instance data

@prefix : <http://kg.cts.com/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .

@prefix org: <http://www.w3.org/ns/org#> .


:A rdf:type org:Organization .

:LeadershipRoles a org:Role .
:AdminRoles a org:Role .

:ROLE1 a :LeadershipRoles .
:ROLE2 a :LeadershipRoles .
:ROLE3 a :AdminRoles .

:CEOType a org:Membership ;
    owl:equivalentClass [
        a owl:Restriction ;
        owl:onProperty org:role ;
        owl:someValuesFrom :LeadershipRoles
    ] .


:139137 rdf:type foaf:Agent ;
    rdfs:label "139137" ;
    org:memberOf :A ;
    org:hasMembership [a org:Membership; org:role :ROLE1,:ROLE3] .

:139138 rdf:type foaf:Agent ;
    rdfs:label "139138" ;
    org:memberOf :A ;
    org:hasMembership [a org:Membership; org:role :ROLE3] .


- As per the above assertions, I should get :139137 to have :CEOType membership if I make this SPARQL query. But I'm not.

prefix : <http://kg.cts.com/>
prefix foaf: <http://xmlns.com/foaf/0.1/>
prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>
prefix owl: <http://www.w3.org/2002/07/owl#>
prefix org: <http://www.w3.org/ns/org#>

select distinct ?who
where
{
  ?who org:hasMembership :CEOType .
}

- However, I am getting the right result if I search for the exact type, i.e.

prefix : <http://kg.cts.com/>
prefix foaf: <http://xmlns.com/foaf/0.1/>
prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>
prefix owl: <http://www.w3.org/2002/07/owl#>
prefix org: <http://www.w3.org/ns/org#>

select distinct ?who
where
{
  ?who org:hasMembership/org:role/rdf:type :LeadershipRoles .
}

RESULT:
who
<http://kg.cts.com/139137>

What am I doing wrong?
Thanks again for your help and patience.

On Tue, Apr 28, 2015 at 10:14 AM, Brad Bebee <be...@sy...> wrote:
> Kaushik,
>
> I was able to get your example working using the following steps. I
> suspect the issue may have been that you had not loaded the ontology data
> prior to loading the instance data. Please give it a try and let us know
> if that works.
>
> Thanks, --Brad

--
Thanks,
Kaushik |
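Kaushik's second query succeeds because a property path like `org:hasMembership/org:role/rdf:type` is evaluated as a chain of joins, with no inference involved. A small Python toy evaluator over the thread's example data makes this concrete — identifiers are abbreviated and this is illustrative only, not Blazegraph's query engine.

```python
# Toy evaluator for a fixed property path:
#   ?who org:hasMembership/org:role/rdf:type :LeadershipRoles

edges = [
    ("139137", "hasMembership", "m1"),
    ("139138", "hasMembership", "m2"),
    ("m1", "role", "ROLE1"),
    ("m1", "role", "ROLE3"),
    ("m2", "role", "ROLE3"),
    ("ROLE1", "type", "LeadershipRoles"),
    ("ROLE3", "type", "AdminRoles"),
]

def hop(frontier, predicate):
    """Follow one path step from every node in the frontier."""
    return {o for s, p, o in edges if p == predicate and s in frontier}

def path_query(path, target):
    """All subjects from which following `path` reaches `target`."""
    results = set()
    for who in {s for s, _, _ in edges}:
        frontier = {who}
        for predicate in path:
            frontier = hop(frontier, predicate)
        if target in frontier:
            results.add(who)
    return results

# Only :139137 reaches :LeadershipRoles along the path, matching the RESULT
# in the message above.
```

No triple has to be materialized for this to work, which is why the path query returns a result while the restriction-class query does not.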
|
From: Brad B. <be...@sy...> - 2015-04-28 04:44:50
|
Kaushik, I was able to get your example working using the following steps. I suspect the issue may have been that you had not loaded the ontology data prior to loading the instance data. Please give it a try and let us know if that works. Thanks, --Brad 1. Create a properties file with your properties (owl_test/test.properties) and started a blazegraph workbench instance. java -Xmx2g -Dbigdata.propertyFile=owl_test/test.properties -jar bigdata-bundled-1.5.1.jar 2. Loaded the ontology data in the workbench ( http://localhost:9999/bigdata/). Under the "Update" tab, I selected "File Path or URL" and pasted in http://www.w3.org/ns/org.n3. 3. Loaded the instance data. Also in the workbench "Update" tab, I selected "RDF Data" with the format "Turtle". @prefix : <http://kg.cts.com/> . @prefix foaf: <http://xmlns.com/foaf/0.1/> . @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> . @prefix owl: <http://www.w3.org/2002/07/owl#> . @prefix org: <http://www.w3.org/ns/org#> . :A rdf:type org:Organization . :139137 rdf:type foaf:Agent ; rdfs:label "139137" ; org:memberOf :A . 4. Issued the SPARQL Query in the workbench. prefix : <http://kg.cts.com/> prefix foaf: <http://xmlns.com/foaf/0.1/> prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> prefix owl: <http://www.w3.org/2002/07/owl#> prefix org: <http://www.w3.org/ns/org#> select distinct ?s ?w where { ?s org:hasMember ?w . } with the expected results. <http://kg.cts.com/A> <http://localhost:9999/bigdata/#explore:kb:<http://kg.cts.com/A>> <http://kg.cts.com/139137> <http://localhost:9999/bigdata/#explore:kb:<http://kg.cts.com/139137>> On Mon, Apr 27, 2015 at 9:33 PM, Bryan Thompson <br...@sy...> wrote: > Ontologies are just data. Use any approach you would use to load the data. 
> > Bryan -- _______________ Brad Bebee Managing Partner SYSTAP, LLC e: be...@sy... m: 202.642.7961 f: 571.367.5000 w: www.systap.com Blazegraph™ <http://www.blazegraph.com> is our ultra high-performance graph database that supports both RDF/SPARQL and Tinkerpop/Blueprints APIs. MapGraph™ <http://www.systap.com/mapgraph> is our disruptive new technology to use GPUs to accelerate data-parallel graph analytics. |
|
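For readers following Brad's steps, the instance data in step 3 can equivalently be sent as a SPARQL UPDATE request body. Below is a minimal sketch of composing that INSERT DATA string; the prefixes and triples mirror Brad's example, and how the string is actually POSTed to the server is deliberately left out.

```java
// Sketch: composing a SPARQL UPDATE equivalent of step 3 above.
// Only string composition is shown; transport to the server is elided.
public class InsertDataExample {

    static String buildInsertData(String... triples) {
        StringBuilder sb = new StringBuilder();
        sb.append("PREFIX : <http://kg.cts.com/>\n");
        sb.append("PREFIX foaf: <http://xmlns.com/foaf/0.1/>\n");
        sb.append("PREFIX org: <http://www.w3.org/ns/org#>\n");
        sb.append("INSERT DATA {\n");
        for (String t : triples) {
            sb.append("  ").append(t).append(" .\n");
        }
        sb.append("}\n");
        return sb.toString();
    }

    public static void main(String[] args) {
        String update = buildInsertData(
                ":A a org:Organization",
                ":139137 a foaf:Agent",
                ":139137 org:memberOf :A");
        System.out.println(update);
    }
}
```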
From: Mike P. <mi...@sy...> - 2015-04-28 02:07:41
|
Jack, Check out the BlueprintsValueFactory type hierarchy for details on how IDs get round-tripped to URIs. It is a pluggable API so you can define your own URI factory for Blueprints IDs. I am planning on doing a detailed write-up of this stuff later this week. Thanks, Mike |
|
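Pending Mike's write-up, one way to approach Jack's multi key/value lookup is to compose the SPARQL yourself. The sketch below only builds the query string; the predicate namespace is a placeholder assumption, since the real key-to-URI mapping is defined by the configured BlueprintsValueFactory, and evaluating the result over a RepositoryConnection is elided.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: compose a SPARQL query selecting vertices that match several
// key/value pairs at once. PROP_NS is a hypothetical namespace; the
// actual URI scheme comes from the configured BlueprintsValueFactory.
public class MultiKeyQuery {

    static final String PROP_NS = "http://example.org/prop/"; // assumption

    static String buildQuery(Map<String, String> keyValues) {
        StringBuilder sb = new StringBuilder("SELECT ?v WHERE {\n");
        for (Map.Entry<String, String> e : keyValues.entrySet()) {
            // One triple pattern per key/value constraint; all must match.
            sb.append("  ?v <").append(PROP_NS).append(e.getKey())
              .append("> \"").append(e.getValue()).append("\" .\n");
        }
        sb.append("}");
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> kv = new LinkedHashMap<>();
        kv.put("color", "red");
        kv.put("size", "large");
        System.out.println(buildQuery(kv));
    }
}
```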
From: Bryan T. <br...@sy...> - 2015-04-28 01:33:39
|
Ontologies are just data. Use any approach you would use to load the data. Bryan |
|
From: Kaushik C. <kay...@gm...> - 2015-04-28 01:27:01
|
Pardon my naïveté, but how does one load an ontology into a kb while Blazegraph is deployed as NanoSparqlServer? The actual Org Ontology is at this URL - http://www.w3.org/ns/org.n3 - and it has the required inverseOf relationship that you mentioned. What's the best way to tell Blazegraph to fetch it from this URL and use it for inferencing as I'm adding my document in the Update panel? Much thanks in advance. On Tue, Apr 28, 2015 at 12:22 AM Mike Personick <mi...@sy...> wrote: > Did you load that organization ontology into the knowledge base? > Somewhere in the kb you need the statement: > > org:memberOf owl:inverseOf org:hasMember > |
|
From: Bryan T. <br...@sy...> - 2015-04-28 00:15:50
|
All, Trac is down at the moment. We hope to have it back online shortly. If you have a support subscription, please contact us directly with your issue. Thank you for your patience. Bryan |
|
From: Bryan T. <br...@sy...> - 2015-04-28 00:10:20
|
Jack, Per an email thread with MikeP: addVertex(id) creates the triples: <id> rdf:type <Vertex> . addEdge(eid, from, to, type) creates the triples: <eid> rdf:type <Edge> . <eid> rdf:type <type> . <from> <eid> <to> . <from> <type> <to> . Thanks, Bryan ---- Bryan Thompson Chief Scientist & Founder SYSTAP, LLC 4501 Tower Road Greensboro, NC 27410 br...@sy... http://blazegraph.com http://blog.bigdata.com http://mapgraph.io |
|
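Bryan's addEdge mapping can be written down as a small reference function. The triples are emitted as plain strings for illustration only; the actual URIs produced for IDs and types are determined by the BlueprintsValueFactory, not by this sketch.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the addEdge(eid, from, to, type) -> triples mapping Bryan
// describes above. Triples are rendered as "<s> <p> <o>" strings here;
// real URI minting is done by the BlueprintsValueFactory.
public class EdgeMapping {

    static List<String> addEdgeTriples(String eid, String from, String to, String type) {
        List<String> triples = new ArrayList<>();
        triples.add("<" + eid + "> rdf:type <Edge>");
        triples.add("<" + eid + "> rdf:type <" + type + ">");
        triples.add("<" + from + "> <" + eid + "> <" + to + ">");
        triples.add("<" + from + "> <" + type + "> <" + to + ">");
        return triples;
    }

    public static void main(String[] args) {
        for (String t : addEdgeTriples("e1", "a", "b", "knows")) {
            System.out.println(t);
        }
    }
}
```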
From: Jack P. <jac...@gm...> - 2015-04-27 21:24:34
|
Some fetches in a blueprints graph require queries on multiple key/value pairs: BigdataGraphClient appears to only support fetching vertices against a single key/value pair. That seems to indicate a need to bring out SPARQL. I figured out how to get a RepositoryConnection, but have no clue what to expect when mapping my identifiers against URIs. Any ideas? Perhaps a better way? Many thanks in advance. Jack |
|
From: Kaushik C. <kay...@gm...> - 2015-04-27 18:04:43
|
Hi, If I put this RDF data in the Update panel of NanoSparqlServer: @prefix : <http://kg.cts.com/> . @prefix foaf: <http://xmlns.com/foaf/0.1/> . @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> . @prefix owl: <http://www.w3.org/2002/07/owl#> . @prefix org: <http://www.w3.org/ns/org#> . :A rdf:type org:Organization . :139137 rdf:type foaf:Agent ; rdfs:label "139137" ; org:memberOf :A . And then if I make a query like this prefix : <http://kg.cts.com/> prefix foaf: <http://xmlns.com/foaf/0.1/> prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> prefix owl: <http://www.w3.org/2002/07/owl#> prefix org: <http://www.w3.org/ns/org#> select distinct ?s ?w where { ?s org:hasMember ?w . } It should list <http://kg.cts.com/A> and <http://kg.cts.com/139137> respectively as ?s and ?w as per the Organization Ontology ( http://www.w3.org/TR/vocab-org/#org:Organization ). But that's not happening. Clearly the inferencing is not working for the Org Ontology. 
Here are my namespace properties: com.bigdata.namespace.kb.spo.com.bigdata.btree.BTree.branchingFactor=1024 com.bigdata.relation.container=test-import com.bigdata.journal.AbstractJournal.bufferMode=DiskRW com.bigdata.journal.AbstractJournal.file=bigdata.jnl com.bigdata.journal.AbstractJournal.initialExtent=209715200 com.bigdata.rdf.store.AbstractTripleStore.vocabularyClass=com.bigdata.rdf.vocab.DefaultBigdataVocabulary com.bigdata.rdf.store.AbstractTripleStore.textIndex=false com.bigdata.btree.BTree.branchingFactor=128 com.bigdata.namespace.kb.lex.com.bigdata.btree.BTree.branchingFactor=400 com.bigdata.rdf.store.AbstractTripleStore.axiomsClass=com.bigdata.rdf.axioms.OwlAxioms com.bigdata.service.AbstractTransactionService.minReleaseAge=1 com.bigdata.rdf.sail.truthMaintenance=true com.bigdata.journal.AbstractJournal.maximumExtent=209715200 com.bigdata.rdf.sail.namespace=test-import com.bigdata.relation.class=com.bigdata.rdf.store.LocalTripleStore com.bigdata.rdf.store.AbstractTripleStore.quads=false com.bigdata.relation.namespace=test-import com.bigdata.btree.writeRetentionQueue.capacity=4000 com.bigdata.rdf.store.AbstractTripleStore.statementIdentifiers=false What am I missing here? |
|
From: Bryan T. <br...@sy...> - 2015-04-27 16:53:15
|
Lee, There are quite a few. Please see the performance optimization section of the wiki. The main thing to look at is IO wait, which can be compensated for in part by the write cache buffers. Statement buffer size and branching factors also affect throughput over time. In other news, we have a fix for the pre-order traversal with annotations issue (#1210) and it fixes the wildcard rewrite problem. I've attached the modified classes so you can test locally. The fix will be in the next release. Let me suggest that we schedule a call for next week to discuss some of the questions you have had on the list and your goals with the platform, and how we can work with you to achieve them. Brad Bebee (Cc) is the best point of contact to set this up. Thanks, Bryan ---- Bryan Thompson Chief Scientist & Founder SYSTAP, LLC 4501 Tower Road Greensboro, NC 27410 br...@sy... http://blazegraph.com http://blog.bigdata.com http://mapgraph.io On Mon, Apr 27, 2015 at 11:19 AM, Lee Kitching <le...@sw...> wrote: > Hi, > > We are trying to perform a bulk import into a new blazegraph journal. 
The > import process writes quads to an in-process BigdataSailRepository with the > following configuration based on the 'fastload' settings in the > bigdata-sails samples directory: > > com.bigdata.rdf.store.AbstractTripleStore.quadsMode=true > > com.bigdata.rdf.store.AbstractTripleStore.statementIdentifiers=false > > > com.bigdata.rdf.store.AbstractTripleStore.axiomsClass=com.bigdata.rdf.axioms.NoAxioms > > com.bigdata.rdf.sail.truthMaintenance=false > > com.bigdata.rdf.store.AbstractTripleStore.justify=false > > com.bigdata.journal.AbstractJournal.initialExtent=209715200 > > com.bigdata.journal.AbstractJournal.maximumExtent=209715200 > > com.bigdata.rdf.store.AbstractTripleStore.textIndex=false > > com.bigdata.journal.AbstractJournal.bufferMode=DiskRW > > com.bigdata.sail.isolatableIndices=true > > > com.bigdata.rdf.store.AbstractTripleStore.vocabularyClass=com.bigdata.rdf.vocab.NoVocabulary > > com.bigdata.journal.AbstractJournal.file=bigdata_conf.jnl > > com.bigdata.journal.AbstractJournal.writeCacheBufferCount=2000 > > com.bigdata.btree.writeRetentionQueue.capacity=8000 > > > When run against a native sesame repository, the import takes around 50 > hours. 
When run against the blazegraph repository the import slows down > significantly after 2-3 hours and begins logging warnings of the form: > > [2015-04-27 07:34:30,238][WARN][com.bigdata.btree.AbstractBTree] wrote: > name=kb.spo.OCSP, 8 records (#nodes=3, #leaves=5) in 5493ms : > addrRoot=-244779124025982418 > [2015-04-27 07:47:48,342][WARN][com.bigdata.btree.AbstractBTree] wrote: > name=kb.spo.SOPC, 1 records (#nodes=1, #leaves=0) in 40841ms : > addrRoot=-246059333517835846 > [2015-04-27 07:47:48,858][WARN][com.bigdata.btree.AbstractBTree] wrote: > name=kb.spo.SPOC, 7 records (#nodes=4, #leaves=3) in 42109ms : > addrRoot=-246099989678259484 > [2015-04-27 07:54:47,743][WARN][com.bigdata.btree.AbstractBTree] wrote: > name=kb.spo.SOPC, 1 records (#nodes=1, #leaves=0) in 43231ms : > addrRoot=-245678000551493109 > [2015-04-27 07:54:52,251][WARN][com.bigdata.btree.AbstractBTree] wrote: > name=kb.spo.SPOC, 1 records (#nodes=1, #leaves=0) in 44875ms : > addrRoot=-245097441232158259 > [2015-04-27 07:54:52,251][WARN][com.bigdata.btree.AbstractBTree] wrote: > name=kb.spo.CSPO, 1 records (#nodes=1, #leaves=0) in 34808ms : > addrRoot=-245097501361700476 > [2015-04-27 07:54:52,251][WARN][com.bigdata.btree.AbstractBTree] wrote: > name=kb.spo.POCS, 1 records (#nodes=1, #leaves=0) in 44875ms : > addrRoot=-245097342447910551 > > Are there any settings we should change or add to the journal > configuration to prevent this slowdown? > > Thanks |
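One mitigation often used for long bulk loads, orthogonal to the journal settings Bryan points to, is to commit in fixed-size batches rather than in a single huge transaction. This sketch shows only the generic batching logic; the repository add/commit calls indicated in the comments are assumptions, not taken from this thread.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Sketch: split a stream of statements into fixed-size batches so a
// bulk load can commit incrementally instead of in one transaction.
public class BatchLoader {

    static <T> List<List<T>> batches(Iterator<T> statements, int batchSize) {
        List<List<T>> out = new ArrayList<>();
        List<T> current = new ArrayList<>(batchSize);
        while (statements.hasNext()) {
            current.add(statements.next());
            if (current.size() == batchSize) {
                out.add(current);            // here: conn.add(current); conn.commit();
                current = new ArrayList<>(batchSize);
            }
        }
        if (!current.isEmpty()) {
            out.add(current);                // final partial batch
        }
        return out;
    }
}
```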