From: Stas M. <sma...@wi...> - 2017-04-21 18:30:27
|
Hi!

> You agree with me that this query:
> `select distinct ?g { graph ?g {?s ?p ?o} }`
> seems to be a valid SPARQL query, but throws an error in the WDQS service [1].

It is a valid SPARQL query, but what you are essentially asking for is to download the whole 1.8 billion triples in a single query. There is no way we can deliver that within one minute, which is the current constraint on queries, nor do we want to enable queries like this: they are very resource-intensive and serve little purpose. If you want to work with huge data sets or import the whole data, you can either download the dump and process it with offline tools like Wikidata Toolkit [2], or use our LDF server [1], which is lightweight and lets you work with the data much more efficiently.

[1] https://www.mediawiki.org/wiki/Wikidata_query_service/User_Manual#Linked_Data_Fragments_endpoint
[2] https://www.mediawiki.org/wiki/Wikidata_Toolkit

-- Stas Malyshev sma...@wi... |
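To illustrate the LDF option: a Triple Pattern Fragments endpoint serves one paged pattern at a time over plain HTTP, so the client performs the joins and the server stays cheap. Everything concrete below is an assumption; the endpoint URL and the subject/predicate/object parameter names follow the usual TPF convention and should be checked against the user manual linked above.

    # Hypothetical TPF request: one page of ?s rdf:type ?o matches.
    # Endpoint path and parameter names are assumed, not confirmed.
    curl -H 'Accept: text/turtle' \
      'https://query.wikidata.org/bigdata/ldf?subject=&predicate=http%3A%2F%2Fwww.w3.org%2F1999%2F02%2F22-rdf-syntax-ns%23type&object='

Each response page links to the next one, so a client walks the pages instead of asking the server to materialize billions of results in a single query.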
From: Ghislain A. <ghi...@gm...> - 2017-04-21 14:58:51
|
Thanks Stas! It's clear that a user might ask any type of query of an endpoint according to her needs. My concern is rather whether such queries are valid against the underlying RDF graph (in this case BLZ) with respect to the SPARQL 1.1 spec. You agree with me that this query: `select distinct ?g { graph ?g {?s ?p ?o} }` seems to be a valid SPARQL query, but throws an error in the WDQS service [1]. Any thoughts on this?

Best, Ghislain

[1] https://goo.gl/hdJwUk

On Thu, Apr 20, 2017 at 21:54, Stas Malyshev (<sma...@wi...>) wrote:
> Hi!
>
> > I wanted to test the use of the queries below to list named graphs (if
> > any) in the Wikidata service [a]. I've tried them without success:
>
> We do not use any subgraphs in WDQS, so I don't think that query would
> produce much of a useful result.
>
> --
> Stas Malyshev
> sma...@wi...

-- ------- "Love all, trust a few, do wrong to none" (W. Shakespeare) Web: http://atemezing.org |
From: Mikel E. A. <mik...@gm...> - 2017-04-21 10:09:04
|
2017-04-20 18:40 GMT+02:00 Stas Malyshev <sma...@wi...>:
> Hi!
>
> > is there an easy way to configure blazegraph so that INSERT and DELETE
> > queries cannot be executed from an "outside" connection? we are
> > deploying Blazegraph as a war in an internal Tomcat server
>
> The way we set it up, we just put a proxy in front of it that takes care
> of filtering the requests. The simplest one would just pass GET queries from
> "outside" (which may be a problem if you need to serve SELECT queries
> via POST; see below). Of course, you could have the proxy use more complex
> rules and firewall according to your definitions of inside/outside.

I think we will implement something like this. Thanks.

> In newer versions of Blazegraph, there is also the X-BIGDATA-READ-ONLY
> header: if the proxy sets it to "yes", write requests will be rejected.
> This allows POST requests through while still keeping the endpoint read-only.
>
> --
> Stas Malyshev
> sma...@wi...

-- Mikel Egaña Aranguren, Ph.D. https://mikel-egana-aranguren.github.io |
From: Mikel E. A. <mik...@gm...> - 2017-04-21 10:08:27
|
Thanks, I think we will go for the proxy solution explained below. Is there any estimated date for the 2.1.5 release?

Regards

2017-04-20 15:10 GMT+02:00 Bryan Thompson <br...@bl...>:
> There is no simple single switch. You can either make the entire endpoint
> read-only (edit web.xml) or you can control the forwarding of the request to
> require authentication for POST w/o query=.
>
> Thanks,
> Bryan
>
> On Thu, Apr 20, 2017 at 5:52 AM, Mikel Egaña Aranguren <mik...@gm...> wrote:
>
>> hi:
>>
>> is there an easy way to configure blazegraph so that INSERT and DELETE
>> queries cannot be executed from an "outside" connection? we are deploying
>> Blazegraph as a war in an internal Tomcat server
>>
>> thanks
>>
>> regards
>>
>> --
>> Mikel Egaña Aranguren, Ph.D.
>> https://mikel-egana-aranguren.github.io

-- Mikel Egaña Aranguren, Ph.D. https://mikel-egana-aranguren.github.io |
From: Stas M. <sma...@wi...> - 2017-04-20 19:55:00
|
Hi!

> I wanted to test the use of the queries below to list named graphs (if
> any) in the Wikidata service [a]. I've tried them without success:

We do not use any subgraphs in WDQS, so I don't think that query would produce much of a useful result.

-- Stas Malyshev sma...@wi... |
From: Ghislain A. <ghi...@gm...> - 2017-04-20 17:59:40
|
Hi, I wanted to test the use of the queries below to list named graphs (if any) in the Wikidata service [a]. I've tried them without success:

1- select distinct ?g { graph ?g {} }
2- select distinct ?g { graph ?g {?s ?p ?o} }
3- select (count(distinct ?g) as ?count) { graph ?g {} }

Are those queries not standard, or are they just taking too much time because of the underlying dataset and timeout settings? In general, is there a way to reduce the execution time of such "useful" queries on public endpoints with billions of triples? TIA

Best, Ghislain

PS: I note that there are errors when trying to use the DBpedia endpoint for queries #1 and #3. The result of query #2 is a bit strange.

[a] https://query.wikidata.org/

-- ------- "Love all, trust a few, do wrong to none" (W. Shakespeare) Web: http://atemezing.org |
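On endpoints that do expose named graphs, the usual way to keep such an enumeration inside a timeout is to bound it explicitly. A minimal sketch in standard SPARQL 1.1, with nothing endpoint-specific assumed:

    # List at most 100 named graphs instead of forcing a full scan.
    SELECT DISTINCT ?g
    WHERE { GRAPH ?g { } }
    LIMIT 100

On WDQS even the bounded form returns nothing, since, as the replies above note, the data is kept in the default graph rather than in named graphs.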
From: Stas M. <sma...@wi...> - 2017-04-20 17:34:22
|
Hi!

> In newer versions of Blazegraph, there is also the X-BIGDATA-READ-ONLY
> header: if the proxy sets it to "yes", write requests will be rejected.
> This allows POST requests through while still keeping the endpoint read-only.

This one seems to be only in 2.1.5RC, so if you want to use it, you'd have to use the RC or wait for the 2.1.5 release :)

-- Stas Malyshev sma...@wi... |
From: Stas M. <sma...@wi...> - 2017-04-20 17:12:20
|
Hi!

> is there an easy way to configure blazegraph so that INSERT and DELETE
> queries cannot be executed from an "outside" connection? we are
> deploying Blazegraph as a war in an internal Tomcat server

The way we set it up, we just put a proxy in front of it that takes care of filtering the requests. The simplest one would just pass GET queries from "outside" (which may be a problem if you need to serve SELECT queries via POST; see below). Of course, you could have the proxy use more complex rules and firewall according to your definitions of inside/outside.

In newer versions of Blazegraph, there is also the X-BIGDATA-READ-ONLY header: if the proxy sets it to "yes", write requests will be rejected. This allows POST requests through while still keeping the endpoint read-only.

-- Stas Malyshev sma...@wi... |
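The proxy setup Stas describes can be sketched concretely. nginx is used here purely as an example reverse proxy (the thread does not prescribe one), and the port and path are assumptions; the X-BIGDATA-READ-ONLY header is the one described above and requires a newer Blazegraph:

    # Force every proxied request to be read-only: the backend rejects writes
    # when X-BIGDATA-READ-ONLY is "yes" (newer Blazegraph versions only).
    server {
        listen 80;

        location /blazegraph/ {
            proxy_set_header X-BIGDATA-READ-ONLY "yes";
            proxy_pass http://127.0.0.1:9999;   # assumed internal Tomcat/Jetty port
        }
    }

Dropping the header, or setting it only for requests arriving from untrusted networks, restores write access for "inside" clients.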
From: Bryan T. <br...@bl...> - 2017-04-20 13:35:34
|
There is no simple single switch. You can either make the entire endpoint read-only (edit web.xml) or you can control the forwarding of the request to require authentication for POST w/o query=.

Thanks,
Bryan

On Thu, Apr 20, 2017 at 5:52 AM, Mikel Egaña Aranguren <mik...@gm...> wrote:
> hi:
>
> is there an easy way to configure blazegraph so that INSERT and DELETE
> queries cannot be executed from an "outside" connection? we are deploying
> Blazegraph as a war in an internal Tomcat server
>
> thanks
>
> regards
>
> --
> Mikel Egaña Aranguren, Ph.D.
> https://mikel-egana-aranguren.github.io |
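For the first of Bryan's two options, Blazegraph's web.xml exposes a readOnly context parameter. The snippet below is only a sketch; the parameter name matches the shipped web.xml as far as I know, but it should be verified against the web.xml of the version actually deployed:

    <!-- Sketch: make the whole SPARQL endpoint read-only.
         Verify the parameter name against the web.xml shipped
         with your Blazegraph version before relying on it. -->
    <context-param>
      <param-name>readOnly</param-name>
      <param-value>true</param-value>
    </context-param>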
From: Mikel E. A. <mik...@gm...> - 2017-04-20 12:52:40
|
hi:

is there an easy way to configure blazegraph so that INSERT and DELETE queries cannot be executed from an "outside" connection? we are deploying Blazegraph as a war in an internal Tomcat server

thanks

regards

-- Mikel Egaña Aranguren, Ph.D. https://mikel-egana-aranguren.github.io |
From: Mikel E. A. <mik...@gm...> - 2017-04-20 11:35:39
|
Fixed. Running maven install on bigdata-war-html first did the trick. Thanks

2017-04-20 10:29 GMT+02:00 Mikel Egaña Aranguren <mik...@gm...>:
> Hi;
>
> I have an issue when generating the blazegraph war to deploy in our
> server. The process I'm following is:
>
> 1) Clone the Blazegraph_release_2_1_4 branch from GitHub, in Eclipse.
> 2) Run maven install in /blazegraph-war/pom.xml
> 3) Copy the resulting war (blazegraph-war-2.1.4.war) in tomcat webapps
>
> That works fine. However, I would like to generate a war that includes a
> RWStore.properties file edited by me. I have tried changing the
> RWStore.properties file in the /bigdata-war-html/src/main/resources
> directory, since it is referenced by the pom at /blazegraph-war, but the
> generated war does not include the changes (com.bigdata.journal.AbstractJournal.file=/root/blazegraph-db/blazegraph.jnl).
>
> What is the origin of the RWStore.properties file when generating from
> /blazegraph-war?
>
> Any clues to solve this?
>
> Thanks
>
> --
> Mikel Egaña Aranguren, Ph.D.
> https://mikel-egana-aranguren.github.io

-- Mikel Egaña Aranguren, Ph.D. https://mikel-egana-aranguren.github.io |
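The fix above works because blazegraph-war packages the RWStore.properties that was installed into the local Maven repository by the bigdata-war-html module, so that module has to be rebuilt first. A sketch of the resulting build order, with module paths as in the thread:

    # Rebuild the module that actually carries RWStore.properties first,
    # then package the WAR so it picks up the freshly installed artifact.
    cd bigdata-war-html
    mvn install
    cd ../blazegraph-war
    mvn install    # produces blazegraph-war-2.1.4.war with the edited properties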
From: Mikel E. A. <mik...@gm...> - 2017-04-20 08:29:57
|
Hi;

I have an issue when generating the blazegraph war to deploy in our server. The process I'm following is:

1) Clone the Blazegraph_release_2_1_4 branch from GitHub, in Eclipse.
2) Run maven install in /blazegraph-war/pom.xml
3) Copy the resulting war (blazegraph-war-2.1.4.war) in tomcat webapps

That works fine. However, I would like to generate a war that includes a RWStore.properties file edited by me. I have tried changing the RWStore.properties file in the /bigdata-war-html/src/main/resources directory, since it is referenced by the pom at /blazegraph-war, but the generated war does not include the changes (com.bigdata.journal.AbstractJournal.file=/root/blazegraph-db/blazegraph.jnl).

What is the origin of the RWStore.properties file when generating from /blazegraph-war?

Any clues to solve this?

Thanks

-- Mikel Egaña Aranguren, Ph.D. https://mikel-egana-aranguren.github.io |
From: Jeremy J C. <jj...@sy...> - 2017-04-11 19:23:24
|
2147483616 = 0x7FFFFFE0, i.e. 2^31 - 32: the failing offset sits just below the 2 GiB boundary.

> On Apr 11, 2017, at 11:50 AM, Jeremy J Carroll <jj...@sy...> wrote:
>
> Hi
>
> We have an issue with one of our larger instances. The journal is 1.5T big.
> The blazegraph process has been running since January and has now stopped accepting updates.
> [...] |
From: Jeremy J C. <jj...@sy...> - 2017-04-11 19:21:53
|
Hi

We have an issue with one of our larger instances. The journal is 1.5T big. The blazegraph process has been running since January and has now stopped accepting updates. The errors we see are:

java.util.concurrent.ExecutionException: java.lang.RuntimeException: Could not commit index: name=.spo.CSPO

Caused by ….

Caused by: java.lang.AssertionError: Record exists for offset in cache: offset=2147483616
    at com.bigdata.io.writecache.WriteCache.write(WriteCache.java:977)

This also occurs for the other indexes .spo.CSPO, .spo.OCSP, .spo.PCSO, .spo.POCS, .spo.SOPC and .spo.SPOC. The full stack trace is below. Every occurrence of the error always has the same offset 2147483616 for the record in cache.

The system is still running 2.0.1, and we have not tried restarting it yet. (BLZG-2086 is not too positive about that.) I have checked that:

com.bigdata.service.AbstractTransactionService.minReleaseAge=1

We see in the logs three physical address errors (one in January, one in March, and one a week ago). The January one was preceded, by a few days, by a query time-out during an update.

The journal file is on an AWS EBS volume, and has been in use for a couple of years. It was copied from one volume to another in November, when we changed our encryption strategy.

We have taken a copy of the journal to try various strategies for recovery; does anyone have suggestions?

Are there maintenance procedures that may help clean the journal up, e.g. DumpJournal dumpPages?

Thanks

Jeremy Carroll
Syapse, Inc.

Full stack trace:

Apr 09,2017 05:00:24 PDT - ERROR: 7457721498 com.bigdata.rdf.sail.webapp.BigdataRDFContext.queryService4 com.bigdata.journal.Name2Addr.handleCommit(Name2Addr.java:787): l.name: java.util.concurrent.ExecutionException: java.lang.RuntimeException: Could not commit index: name=.spo.CSPO
java.util.concurrent.ExecutionException: java.lang.RuntimeException: Could not commit index: name=.spo.CSPO
    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.util.concurrent.FutureTask.get(FutureTask.java:192)
    at com.bigdata.journal.Name2Addr.handleCommit(Name2Addr.java:749)
    at com.bigdata.journal.AbstractJournal.notifyCommitters(AbstractJournal.java:2716)
    at com.bigdata.journal.AbstractJournal.access$1700(AbstractJournal.java:255)
    at com.bigdata.journal.AbstractJournal$CommitState.notifyCommitters(AbstractJournal.java:3422)
    at com.bigdata.journal.AbstractJournal$CommitState.access$2600(AbstractJournal.java:3298)
    at com.bigdata.journal.AbstractJournal.commitNow(AbstractJournal.java:4092)
    at com.bigdata.journal.AbstractJournal.commit(AbstractJournal.java:3129)
    at com.bigdata.rdf.store.LocalTripleStore.commit(LocalTripleStore.java:98)
    at com.bigdata.rdf.sail.BigdataSail$BigdataSailConnection.commit2(BigdataSail.java:3695)
    at com.bigdata.rdf.sail.BigdataSailRepositoryConnection.commit2(BigdataSailRepositoryConnection.java:330)
    at com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate.convertCommit(AST2BOpUpdate.java:375)
    at com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate.convertUpdate(AST2BOpUpdate.java:321)
    at com.bigdata.rdf.sparql.ast.eval.ASTEvalHelper.executeUpdate(ASTEvalHelper.java:1072)
    at com.bigdata.rdf.sail.BigdataSailUpdate.execute2(BigdataSailUpdate.java:152)
    at com.bigdata.rdf.sail.webapp.BigdataRDFContext$UpdateTask.doQuery(BigdataRDFContext.java:1966)
    at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.innerCall(BigdataRDFContext.java:1568)
    at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:1533)
    at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:705)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: Could not commit index: name=.spo.CSPO
    at com.bigdata.journal.Name2Addr$CommitIndexTask.call(Name2Addr.java:578)
    at com.bigdata.journal.Name2Addr$CommitIndexTask.call(Name2Addr.java:513)
    ... 4 more
Caused by: java.lang.AssertionError: Record exists for offset in cache: offset=2147483616
    at com.bigdata.io.writecache.WriteCache.write(WriteCache.java:977)
    at com.bigdata.io.writecache.WriteCacheService.write_timed(WriteCacheService.java:2487)
    at com.bigdata.io.writecache.WriteCacheService.write(WriteCacheService.java:2421)
    at com.bigdata.rwstore.RWStore.alloc(RWStore.java:3020)
    at com.bigdata.rwstore.PSOutputStream.save(PSOutputStream.java:359)
    at com.bigdata.rwstore.RWStore.alloc(RWStore.java:2991)
    at com.bigdata.journal.RWStrategy.write(RWStrategy.java:239)
    at com.bigdata.journal.RWStrategy.write(RWStrategy.java:199)
    at com.bigdata.journal.AbstractJournal.write(AbstractJournal.java:4313)
    at com.bigdata.btree.AbstractBTree.writeNodeOrLeaf(AbstractBTree.java:3948)
    at com.bigdata.btree.AbstractBTree.writeNodeRecursive(AbstractBTree.java:3720)
    at com.bigdata.btree.BTree.flush(BTree.java:756)
    at com.bigdata.btree.BTree._writeCheckpoint2(BTree.java:961)
    at com.bigdata.btree.BTree.writeCheckpoint2(BTree.java:922)
    at com.bigdata.btree.BTree.handleCommit(BTree.java:1323)
    at com.bigdata.journal.Name2Addr$CommitIndexTask.call(Name2Addr.java:570)
    ... 5 more |
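On the DumpJournal question above: the utility can be pointed at the copied journal offline. The invocation below is a sketch; the class name and the -pages option come from Blazegraph's DumpJournal utility as I understand it, but the exact flags and classpath should be verified against the 2.0.1 jar actually deployed:

    # Inspect page-level structure of the journal (read-only analysis;
    # run it against the copy, not the live file). Flags are assumed and
    # should be double-checked against the deployed version's usage text.
    java -cp blazegraph.jar com.bigdata.journal.DumpJournal -pages /backup/blazegraph.jnl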
From: Stas M. <sma...@wi...> - 2017-04-07 17:05:07
|
Hi!

> But there are no statements defining owl:Class t5545 in the file I am
> loading. Where do those tnnnn classes come from?

Those are probably bnodes. They do not have identity as such, so the tools usually generate local IDs, in the form you see, to differentiate between them.

-- Stas Malyshev sma...@wi... |
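Since the tnnnn entries are bnodes, standard SPARQL can exclude them when only IRI-identified classes are wanted. A minimal sketch based on the query in the original message (next entry below):

    # Keep only IRIs; blank-node classes (the tnnnn entries) are filtered out.
    PREFIX owl: <http://www.w3.org/2002/07/owl#>
    SELECT ?class
    WHERE {
      ?class a owl:Class .
      FILTER(!isBlank(?class))
    }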
From: Joakim S. <joa...@bl...> - 2017-04-07 01:01:42
|
Hi,

I am loading a small ttl file containing some class declarations using the Blazegraph dashboard in triple mode. When executing the following SPARQL query:

SELECT ?class WHERE { ?class a owl:Class }

I get the following result:

<http://blippar.com/ns/0.1/ontology/ProgrammingStyle>
<http://blippar.com/ns/0.1/ontology/Proverb>
<http://blippar.com/ns/0.1/ontology/Relationship>
<http://blippar.com/ns/0.1/ontology/Saying>
<http://blippar.com/ns/0.1/ontology/Shooting>
<http://blippar.com/ns/0.1/ontology/SIDerivedUnit>
<http://blippar.com/ns/0.1/ontology/Symphony>
<http://blippar.com/ns/0.1/ontology/Terminology>
<http://blippar.com/ns/0.1/ontology/VoiceType>
t5545 t5549 t5552 t5557 t5564 t5568 t5570 t5572 t5578 t5579 t5580 t5583 t5584 t5589 t5592 t5595

But there are no statements defining owl:Class t5545 in the file I am loading. Where do those tnnnn classes come from?

Best
Joakim |
From: Brad B. <be...@sy...> - 2017-03-29 13:52:47
|
Mikel,

Thanks for reaching out. I'll drop you a note off list and we can discuss the options.

Thanks,
Brad

On Mar 29, 2017 3:26 AM, "Mikel Egaña Aranguren" <mik...@gm...> wrote:
> Hi;
>
> I have a question about licensing, but I haven't found a suitable email
> address to send it to on the web. Can someone point me to the correct
> address for this sort of question?
>
> Thanks
>
> Regards
>
> --
> Mikel Egaña Aranguren, Ph.D.
> https://mikel-egana-aranguren.github.io |
From: Mikel E. A. <mik...@gm...> - 2017-03-29 10:26:45
|
Hi;

I have a question about licensing, but I haven't found a suitable email address to send it to on the web. Can someone point me to the correct address for this sort of question?

Thanks

Regards

-- Mikel Egaña Aranguren, Ph.D. https://mikel-egana-aranguren.github.io |
From: Bryan T. <br...@bl...> - 2017-03-20 13:56:43
|
Your message is not complete. Probably you were unable to set ...? (The email does not say.) Please provide more detail about your setup.

Thanks,
Bryan

On Mon, Mar 20, 2017 at 05:22 Ghislain ATEMEZING <ghi...@gm...> wrote:
> Hi again,
>
> I am trying to use this framework [1] to make a small benchmark on a
> dataset from the publication office [a]. I really don't understand why I am
> getting some very long times for results when they are fast in the
> workbench. So, I think I am doing something wrong. Probably I was not able
> to set the
>
> 1- The loading time was extremely long (almost 3 days to load a 727 Mio dataset)
> 2- During each warm up, the results are taking more and more time to execute.
> 3- Based on the indication here [2] to get in touch with the team, I
> really need to understand what I am missing.
>
> Best,
> Ghislain
>
> [a] http://publications.europa.eu/
> [1] https://sourceforge.net/p/sparql-query-bm/wiki/CLI/
> [2] https://wiki.blazegraph.com/wiki/index.php/SPARQL_Benchmarks
>
> --
> -------
> "Love all, trust a few, do wrong to none" (W. Shakespeare)
> Web: http://atemezing.org |
From: Ghislain A. <ghi...@gm...> - 2017-03-20 12:22:19
|
Hi again,

I am trying to use this framework [1] to make a small benchmark on a dataset from the publication office [a]. I really don't understand why I am getting some very long times for results when they are fast in the workbench. So, I think I am doing something wrong. Probably I was not able to set the

1- The loading time was extremely long (almost 3 days to load a 727 Mio dataset)
2- During each warm up, the results are taking more and more time to execute.
3- Based on the indication here [2] to get in touch with the team, I really need to understand what I am missing.

Best,
Ghislain

[a] http://publications.europa.eu/
[1] https://sourceforge.net/p/sparql-query-bm/wiki/CLI/
[2] https://wiki.blazegraph.com/wiki/index.php/SPARQL_Benchmarks

-- ------- "Love all, trust a few, do wrong to none" (W. Shakespeare) Web: http://atemezing.org |
From: Ghislain A. <ghi...@gm...> - 2017-02-25 11:48:16
|
SOLVED! I just added the line queryTimeout=60000 to <myFile>.properties (because I need 60s). Seems to be on again.

On Sat, Feb 25, 2017 at 12:24, Ghislain ATEMEZING (<ghi...@gm...>) wrote:
> Ouch... now I get an HTTP 503 error. I can't access the workbench anymore.
> [...]

-- ------- "Love all, trust a few, do wrong to none" (W. Shakespeare) Web: http://atemezing.org |
From: Ghislain A. <ghi...@gm...> - 2017-02-25 11:25:15
|
Ouch... now I get an HTTP 503 error. I can't access the workbench anymore. Here is the command:

$ java -server -XX:+UseG1GC -Xmx6g -jar -Djetty.overrideWebXml=/path/to/weboverride.xml -Dfile.encoding=UTF-8 -Dsun.jnu.encoding=UTF-8 -Dlog4j.configuration=file:<myFile>.properties ../*blazegraph*.jar

Any help?

Thx.

On Sat, Feb 25, 2017 at 12:12, Ghislain ATEMEZING (<ghi...@gm...>) wrote:
> I think I saw it: https://github.com/blazegraph/database/blob/master/bigdata-war-html/src/main/webapp/WEB-INF/web.xml. I will use this command for that: -Djetty.overrideWebXml=/path/to/override.xml.
> [...]

-- ------- "Love all, trust a few, do wrong to none" (W. Shakespeare) Web: http://atemezing.org |
From: Ghislain A. <ghi...@gm...> - 2017-02-25 11:12:26
|
I think I saw it: https://github.com/blazegraph/database/blob/master/bigdata-war-html/src/main/webapp/WEB-INF/web.xml. I will use this command for that: -Djetty.overrideWebXml=/path/to/override.xml.

On Sat, Feb 25, 2017 at 11:19, Ghislain ATEMEZING (<ghi...@gm...>) wrote:
> Thanks Bryan! Please, what do you mean by setting it in "web.xml"? I just have a blazegraph.jar file to start Blazegraph.
> [...]

-- ------- "Love all, trust a few, do wrong to none" (W. Shakespeare) Web: http://atemezing.org |
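The override file that -Djetty.overrideWebXml points at only needs to contain the parameters being changed. The sketch below is hedged: the queryTimeout parameter name appears in the web.xml linked above, but its units and default should be verified there before use, and a malformed override file can prevent the webapp from starting, which would surface as an HTTP 503 like the one reported earlier in this thread.

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Sketch of a Jetty override web.xml: supply only the params to change.
         Parameter name and units are assumptions; verify against the shipped web.xml. -->
    <web-app xmlns="http://java.sun.com/xml/ns/javaee" version="3.0">
      <context-param>
        <param-name>queryTimeout</param-name>
        <param-value>60000</param-value> <!-- assumed milliseconds, per this thread -->
      </context-param>
    </web-app>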
From: Ghislain A. <ghi...@gm...> - 2017-02-25 10:20:16
|
Thanks Bryan! Please, what do you mean by setting it in "web.xml"? I just have a blazegraph.jar file to start Blazegraph. I'll see if I can set the timeout directly from the Workbench. I thought there could be a way to pass a parameter to the .properties file for that.

Best,
Ghislain

On Fri, Feb 24, 2017 at 16:26, Bryan Thompson (<br...@bl...>) wrote:
> The difference is seconds vs milliseconds of resolution in the timeout. The "timeout" parameter is defined by openrdf; maxQueryTimeMillis is defined by Blazegraph.
> [...]

-- ------- "Love all, trust a few, do wrong to none" (W. Shakespeare) Web: http://atemezing.org |
From: Bryan T. <br...@bl...> - 2017-02-24 15:26:47
|
The difference is seconds vs milliseconds of resolution in the timeout. The "timeout" parameter is defined by openrdf; maxQueryTimeMillis is defined by Blazegraph.

timeout: Specifies a maximum query execution time, in whole seconds. The value should be an integer. A setting of 0 or a negative number indicates unlimited query time (the default).

maxQueryTimeMillis: Specifies the maximum time that a query is allowed to run, measured in milliseconds. May also be specified by using the HTTP header X-BIGDATA-MAX-QUERY-MILLIS.

You can also impose a timeout in web.xml.

Thanks,
Bryan

On Fri, Feb 24, 2017 at 4:10 AM, Ghislain ATEMEZING <ghi...@gm...> wrote:
> Hi,
> I am struggling to find an easy way to set the time out limit for a given
> namespace using a configuration file or similar.
> I've only seen this https://wiki.blazegraph.com/wiki/index.php/REST_API and
> I don't understand the difference between the timeout and
> maxQueryTimeMillis.
>
> TIA.
> Best,
> Ghislain
>
> --
> -------
> "Love all, trust a few, do wrong to none" (W. Shakespeare)
> Web: http://atemezing.org |
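Both knobs can be exercised per request against the SPARQL endpoint. A sketch with curl; the endpoint path below is the stock NanoSparqlServer default and is an assumption for any given deployment:

    # Cap this one query at 60 s using the Blazegraph request parameter...
    curl -G 'http://localhost:9999/blazegraph/namespace/kb/sparql' \
      --data-urlencode 'query=SELECT * WHERE { ?s ?p ?o } LIMIT 10' \
      --data-urlencode 'maxQueryTimeMillis=60000'

    # ...or equivalently via the HTTP header mentioned above.
    curl -G 'http://localhost:9999/blazegraph/namespace/kb/sparql' \
      -H 'X-BIGDATA-MAX-QUERY-MILLIS: 60000' \
      --data-urlencode 'query=SELECT * WHERE { ?s ?p ?o } LIMIT 10'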