This list is closed; nobody may subscribe to it.
| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| 2010 |     | 19  | 8   | 25  | 16  | 77  | 131 | 76  | 30  | 7   | 3   |     |
| 2011 |     |     |     |     | 2   | 2   | 16  | 3   | 1   |     | 7   | 7   |
| 2012 | 10  | 1   | 8   | 6   | 1   | 3   | 1   |     | 1   |     | 8   | 2   |
| 2013 | 5   | 12  | 2   | 1   | 1   | 1   | 22  | 50  | 31  | 64  | 83  | 28  |
| 2014 | 31  | 18  | 27  | 39  | 45  | 15  | 6   | 27  | 6   | 67  | 70  | 1   |
| 2015 | 3   | 18  | 22  | 121 | 42  | 17  | 8   | 11  | 26  | 15  | 66  | 38  |
| 2016 | 14  | 59  | 28  | 44  | 21  | 12  | 9   | 11  | 4   | 2   | 1   |     |
| 2017 | 20  | 7   | 4   | 18  | 7   | 3   | 13  | 2   | 4   | 9   | 2   | 5   |
| 2018 |     |     |     | 2   |     |     |     |     |     |     |     |     |
| 2019 |     |     | 1   |     |     |     |     |     |     |     |     |     |
From: Bryan T. <br...@bl...> - 2019-03-29 16:38:35
|
Andy Seaborne <an...@se...> wrote: SPARQL 1.2 Community Group starts up: http://www.w3.org/community/sparql-12/ It will document features found as extensions and capture common needs from the user community. Thanks, Bryan |
From: Bob D. <bo...@sn...> - 2018-04-21 21:30:42
|
Following the directions on https://github.com/blazegraph/tinkerpop3, I executed the ":install com.blazegraph blazegraph-gremlin 1.0.0" command and waited a few minutes, and eventually got this: ==>Error grabbing Grapes -- [download failed: org.slf4j#slf4j-api;1.7.12!slf4j-api.jar, download failed: org.slf4j#jcl-over-slf4j;1.7.12!jcl-over-slf4j.jar, download failed: commons-logging#commons-logging;1.1.1!commons-logging.jar] I had skipped over the two mvn steps, hoping that the :install command would pull down everything I need. I also thought that, since there was nothing on the web page about how the :install command would find what had been built with the mvn command(s), the build results might not be necessary for the install, but of course I might be misunderstanding. And the error was about difficulty getting slf4j anyway. Does anyone have any suggestions for what I should try next? Thanks, Bob |
From: David F. A. <dav...@cs...> - 2018-04-09 09:11:45
|
Hello: I'm interested in how the Blazegraph full text search ranking algorithm works. https://blazegraph.github.io/database/apidocs/com/bigdata/rdf/store/BDS.html#RANK only says that it assigns a number between 1 and N (where N is the number of results); how is this number calculated? Thanks. ___________________________________________________________________ David Fernández Aldana, Portals and Repositories Technician, Àrea de Càlcul i Aplicacions (TIC), Consorci de Serveis Universitaris de Catalunya (CSUC), Gran Capità, 2 (Edifici Nexus) - 08034 Barcelona, T. 93 551 6210 - dav...@cs... Subscribe to the newsletter (www.csuc.cat/butlleti) |
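[Editor's note: as far as the BDS javadoc goes, bds:rank appears to be just the 1-based position of a hit once the result set is ordered by its bds:relevance score (the javadoc describes relevance as a cosine over token-frequency vectors), rather than an independently computed quantity. A minimal query sketch showing both magic predicates; the search term "blazegraph" is an arbitrary example, not from the original message:]

```sparql
PREFIX bds: <http://www.bigdata.com/rdf/search#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?subj ?label ?relevance ?rank
WHERE {
  ?label bds:search "blazegraph" .   # free-text index lookup
  ?label bds:relevance ?relevance .  # score in [0.0, 1.0]
  ?label bds:rank ?rank .            # 1..N position in relevance order
  ?subj rdfs:label ?label .
}
ORDER BY ?rank
```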
From: Kevin F. <kf...@ar...> - 2017-12-13 19:08:52
|
What's in the tmp directory? Look, too, for a growing log file. Those two seem like logical places to start, so apologies if you've already covered these locales. Yours, Kevin On 12/12/17 02:33, Rune Stilling wrote: > If someone takes a look this at some point it now seems that the import process has stalled. Since yesterday the invisible disk usage has gone up, but the datafile hasn’t been touched. > > Yesterday: > -rw-r--r-- 1 root root 186G 11 dec 11:46 blazegraph.jnl > /dev/xvda1 2,0T 1,7T 344G 83% / > > Today: > -rw-r--r-- 1 root root 186G 11 dec 11:46 blazegraph.jnl > /dev/xvda1 2,0T 1,9T 73G 97% / > > Soon the importer will probably stop with an out of disk space exception. > > /Rune > >> Den 11. dec. 2017 kl. 10.24 skrev Rune Stilling <rs...@un...>: >> >> Hi list >> >> An update on my question. It seems that the blazegraph importer makes use of disk space besides the datafile. Currently I’m running the import on a server with 2 TB of disk space. Even though the blazegraph.jnl file is only around 186 GB (and still running the import) the disk reports a use of 1.7 TB. I can’t find any files that make up for this use though. >> >> If the import doesn’t finish soon the import will end with the same exception again. >> >> Is this normal behavior and what (hidden) files is the blazegraph importer creating during the import? >> >> Reagrds, >> Rune >> >> ~]$ ls -la ~/blazegraph --block-size=G >> totalt 171G >> drwxrwxr-x 2 ec2-user ec2-user 1G 4 dec 12:25 . >> drwx------ 6 ec2-user ec2-user 1G 29 nov 10:52 .. >> -rw-r--r-- 1 root root 186G 11 dec 09:21 blazegraph.jnl >> >> [ ~]$ df -h >> Filsystem Størr Brugt Tilb Brug% Monteret på >> devtmpfs 15G 60K 15G 1% /dev >> tmpfs 15G 0 15G 0% /dev/shm >> /dev/xvda1 2,0T 1,7T 363G 83% / >> >>> Den 4. dec. 2017 kl. 07.45 skrev Rune Stilling <rs...@un...>: >>> >>> Hi list >>> >>> I have setup an AWS instance using the blazegraph-ami-2.1.5. I have done the setup using a one disc configuration with 1 terabyte of space. 
After this I use the biodata.jar to import the full Wikidata dataset: >>> >>>> sudo java -cp bigdata.jar com.bigdata.rdf.store.DataLoader -namespace wikidata -defaultGraph http://www.wikidata.org fullfeat >>>> ure.properties latest-all.ttl >>> >>> >>> After running 4 days the import terminates with an exception saying “no more space left on device”. There’s plenty of space left so what can I do to make this work? >>> >>>> [ec2-user@ip-10-10-0-189 ~]$ df -h >>>> Filsystem Størr Brugt Tilb Brug% Monteret på >>>> devtmpfs 15G 60K 15G 1% /dev >>>> tmpfs 15G 0 15G 0% /dev/shm >>>> /dev/xvda1 1008G 305G 703G 31% / >>> >>> >>>> [ec2-user@ip-10-10-0-189 ~]$ ls -la ~/blazegraph --block-size=GB >>>> totalt 119GB >>>> drwxrwxr-x 2 ec2-user ec2-user 1GB 29 nov 10:53 . >>>> drwx------ 6 ec2-user ec2-user 1GB 29 nov 10:52 .. >>>> -rw-r--r-- 1 root root 124GB 2 dec 10:14 blazegraph.jnl >>> >>> >>> /Rune Stilling >>> >> > > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > Bigdata-developers mailing list > Big...@li... > https://lists.sourceforge.net/lists/listinfo/bigdata-developers > |
From: Rune S. <rs...@un...> - 2017-12-12 08:56:58
|
If someone takes a look this at some point it now seems that the import process has stalled. Since yesterday the invisible disk usage has gone up, but the datafile hasn’t been touched. Yesterday: -rw-r--r-- 1 root root 186G 11 dec 11:46 blazegraph.jnl /dev/xvda1 2,0T 1,7T 344G 83% / Today: -rw-r--r-- 1 root root 186G 11 dec 11:46 blazegraph.jnl /dev/xvda1 2,0T 1,9T 73G 97% / Soon the importer will probably stop with an out of disk space exception. /Rune > Den 11. dec. 2017 kl. 10.24 skrev Rune Stilling <rs...@un...>: > > Hi list > > An update on my question. It seems that the blazegraph importer makes use of disk space besides the datafile. Currently I’m running the import on a server with 2 TB of disk space. Even though the blazegraph.jnl file is only around 186 GB (and still running the import) the disk reports a use of 1.7 TB. I can’t find any files that make up for this use though. > > If the import doesn’t finish soon the import will end with the same exception again. > > Is this normal behavior and what (hidden) files is the blazegraph importer creating during the import? > > Reagrds, > Rune > > ~]$ ls -la ~/blazegraph --block-size=G > totalt 171G > drwxrwxr-x 2 ec2-user ec2-user 1G 4 dec 12:25 . > drwx------ 6 ec2-user ec2-user 1G 29 nov 10:52 .. > -rw-r--r-- 1 root root 186G 11 dec 09:21 blazegraph.jnl > > [ ~]$ df -h > Filsystem Størr Brugt Tilb Brug% Monteret på > devtmpfs 15G 60K 15G 1% /dev > tmpfs 15G 0 15G 0% /dev/shm > /dev/xvda1 2,0T 1,7T 363G 83% / > >> Den 4. dec. 2017 kl. 07.45 skrev Rune Stilling <rs...@un...>: >> >> Hi list >> >> I have setup an AWS instance using the blazegraph-ami-2.1.5. I have done the setup using a one disc configuration with 1 terabyte of space. 
After this I use the biodata.jar to import the full Wikidata dataset: >> >>> sudo java -cp bigdata.jar com.bigdata.rdf.store.DataLoader -namespace wikidata -defaultGraph http://www.wikidata.org fullfeat >>> ure.properties latest-all.ttl >> >> >> After running 4 days the import terminates with an exception saying “no more space left on device”. There’s plenty of space left so what can I do to make this work? >> >>> [ec2-user@ip-10-10-0-189 ~]$ df -h >>> Filsystem Størr Brugt Tilb Brug% Monteret på >>> devtmpfs 15G 60K 15G 1% /dev >>> tmpfs 15G 0 15G 0% /dev/shm >>> /dev/xvda1 1008G 305G 703G 31% / >> >> >>> [ec2-user@ip-10-10-0-189 ~]$ ls -la ~/blazegraph --block-size=GB >>> totalt 119GB >>> drwxrwxr-x 2 ec2-user ec2-user 1GB 29 nov 10:53 . >>> drwx------ 6 ec2-user ec2-user 1GB 29 nov 10:52 .. >>> -rw-r--r-- 1 root root 124GB 2 dec 10:14 blazegraph.jnl >> >> >> /Rune Stilling >> > |
From: Rune S. <rs...@un...> - 2017-12-11 09:24:49
|
Hi list An update on my question. It seems that the blazegraph importer makes use of disk space besides the datafile. Currently I’m running the import on a server with 2 TB of disk space. Even though the blazegraph.jnl file is only around 186 GB (and still running the import) the disk reports a use of 1.7 TB. I can’t find any files that make up for this use though. If the import doesn’t finish soon the import will end with the same exception again. Is this normal behavior, and what (hidden) files is the blazegraph importer creating during the import? Regards, Rune ~]$ ls -la ~/blazegraph --block-size=G totalt 171G drwxrwxr-x 2 ec2-user ec2-user 1G 4 dec 12:25 . drwx------ 6 ec2-user ec2-user 1G 29 nov 10:52 .. -rw-r--r-- 1 root root 186G 11 dec 09:21 blazegraph.jnl [ ~]$ df -h Filsystem Størr Brugt Tilb Brug% Monteret på devtmpfs 15G 60K 15G 1% /dev tmpfs 15G 0 15G 0% /dev/shm /dev/xvda1 2,0T 1,7T 363G 83% / > Den 4. dec. 2017 kl. 07.45 skrev Rune Stilling <rs...@un...>: > > Hi list > > I have setup an AWS instance using the blazegraph-ami-2.1.5. I have done the setup using a one disc configuration with 1 terabyte of space. After this I use the bigdata.jar to import the full Wikidata dataset: > >> sudo java -cp bigdata.jar com.bigdata.rdf.store.DataLoader -namespace wikidata -defaultGraph http://www.wikidata.org fullfeature.properties latest-all.ttl > > > After running 4 days the import terminates with an exception saying “no more space left on device”. There’s plenty of space left so what can I do to make this work? > >> [ec2-user@ip-10-10-0-189 ~]$ df -h >> Filsystem Størr Brugt Tilb Brug% Monteret på >> devtmpfs 15G 60K 15G 1% /dev >> tmpfs 15G 0 15G 0% /dev/shm >> /dev/xvda1 1008G 305G 703G 31% / > > >> [ec2-user@ip-10-10-0-189 ~]$ ls -la ~/blazegraph --block-size=GB >> totalt 119GB >> drwxrwxr-x 2 ec2-user ec2-user 1GB 29 nov 10:53 . >> drwx------ 6 ec2-user ec2-user 1GB 29 nov 10:52 .. >> -rw-r--r-- 1 root root 124GB 2 dec 10:14 blazegraph.jnl > > > /Rune Stilling > |
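[Editor's note: the stack trace elsewhere in this thread points at com.bigdata.journal.TemporaryStore{file=/tmp/bigdata....tmp}, i.e. the DataLoader spills intermediate index data into java.io.tmpdir, which is the likely home of the "invisible" usage. A hedged diagnostic sketch; paths are the defaults seen in this thread and may need adjusting:]

```shell
# How much is the system temp dir holding? (Blazegraph's DataLoader writes
# TemporaryStore spill files such as /tmp/bigdata*.tmp there.)
du -sh /tmp 2>/dev/null || true

# List any Blazegraph temporary stores directly:
ls -lh /tmp/bigdata*.tmp 2>/dev/null || true

# Deleted-but-still-open files also consume space without showing up in ls:
command -v lsof >/dev/null && lsof +L1 2>/dev/null | grep -i deleted || true

# Workaround sketch (illustrative, mirroring the DataLoader invocation quoted
# in this thread): point java.io.tmpdir at the large volume when loading:
#   sudo java -Djava.io.tmpdir=/data/tmp -cp bigdata.jar \
#       com.bigdata.rdf.store.DataLoader ...
```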
From: Rune S. <rs...@un...> - 2017-12-04 06:46:01
|
Hi list I have setup an AWS instance using the blazegraph-ami-2.1.5. I have done the setup using a one disc configuration with 1 terabyte of space. After this I use the bigdata.jar to import the full Wikidata dataset: > sudo java -cp bigdata.jar com.bigdata.rdf.store.DataLoader -namespace wikidata -defaultGraph http://www.wikidata.org fullfeature.properties latest-all.ttl After running 4 days the import terminates with an exception saying “no more space left on device”. There’s plenty of space left so what can I do to make this work? > [ec2-user@ip-10-10-0-189 ~]$ df -h > Filsystem Størr Brugt Tilb Brug% Monteret på > devtmpfs 15G 60K 15G 1% /dev > tmpfs 15G 0 15G 0% /dev/shm > /dev/xvda1 1008G 305G 703G 31% / > [ec2-user@ip-10-10-0-189 ~]$ ls -la ~/blazegraph --block-size=GB > totalt 119GB > drwxrwxr-x 2 ec2-user ec2-user 1GB 29 nov 10:53 . > drwx------ 6 ec2-user ec2-user 1GB 29 nov 10:52 .. > -rw-r--r-- 1 root root 124GB 2 dec 10:14 blazegraph.jnl /Rune Stilling > Reading properties: fullfeature.properties > Will load from: latest-all.ttl > Journal file: /home/ec2-user/blazegraph/blazegraph.jnl > ERROR: LexiconConfiguration.java:728: "0000-01-01T00:00:00Z" is not a valid representation of an XML Gregorian Calendar value.: value=0000-01-01T00:00:00Z > ERROR: LexiconConfiguration.java:728: "0000-01-01T00:00:00Z" is not a valid representation of an XML Gregorian Calendar value.: value=0000-01-01T00:00:00Z > ERROR: LexiconConfiguration.java:728: "0000-01-01T00:00:00Z" is not a valid representation of an XML Gregorian Calendar value.: value=0000-01-01T00:00:00Z > ERROR: LexiconConfiguration.java:728: "0000-01-01T00:00:00Z" is not a valid representation of an XML Gregorian Calendar value.: value=0000-01-01T00:00:00Z > ERROR: LexiconConfiguration.java:728: "0000-01-01T00:00:00Z" is not a valid representation of an XML Gregorian Calendar value.: value=0000-01-01T00:00:00Z > ERROR: LexiconConfiguration.java:728: "0000-01-01T00:00:00Z" is not a valid representation of
an XML Gregorian Calendar value.: value=0000-01-01T00:00:00Z > ERROR: LexiconConfiguration.java:728: "0000-01-01T00:00:00Z" is not a valid representation of an XML Gregorian Calendar value.: value=0000-01-01T00:00:00Z > ERROR: LexiconConfiguration.java:728: "0000-12-14T00:00:00Z" is not a valid representation of an XML Gregorian Calendar value.: value=0000-12-14T00:00:00Z > ERROR: LexiconConfiguration.java:728: "0000-01-01T00:00:00Z" is not a valid representation of an XML Gregorian Calendar value.: value=0000-01-01T00:00:00Z > ERROR: LexiconConfiguration.java:728: "0000-01-01T00:00:00Z" is not a valid representation of an XML Gregorian Calendar value.: value=0000-01-01T00:00:00Z > ERROR: LexiconConfiguration.java:728: "0000-01-01T00:00:00Z" is not a valid representation of an XML Gregorian Calendar value.: value=0000-01-01T00:00:00Z > ERROR: SPORelation.java:2303: java.util.concurrent.ExecutionException: com.bigdata.btree.EvictionError: java.lang.RuntimeException: java.io.IOException: Ikke mere > plads på enheden > java.util.concurrent.ExecutionException: com.bigdata.btree.EvictionError: java.lang.RuntimeException: java.io.IOException: Ikke mere plads på enheden > at java.util.concurrent.FutureTask.report(FutureTask.java:122) > at java.util.concurrent.FutureTask.get(FutureTask.java:188) > at com.bigdata.rdf.spo.SPORelation.logFuture(SPORelation.java:2298) > at com.bigdata.rdf.spo.SPORelation.insert(SPORelation.java:2253) > at com.bigdata.rdf.store.AbstractTripleStore.addStatements(AbstractTripleStore.java:4405) > at com.bigdata.rdf.rio.StatementBuffer$Batch.writeSPOs(StatementBuffer.java:2178) > at com.bigdata.rdf.rio.StatementBuffer$Batch.addStatements(StatementBuffer.java:2027) > at com.bigdata.rdf.rio.StatementBuffer$Batch.writeNow(StatementBuffer.java:1912) > at com.bigdata.rdf.rio.StatementBuffer$Batch.access$1000(StatementBuffer.java:1645) > at com.bigdata.rdf.rio.StatementBuffer.incrementalWrite(StatementBuffer.java:1362) > at 
com.bigdata.rdf.rio.StatementBuffer.add(StatementBuffer.java:2240) > at com.bigdata.rdf.rio.StatementBuffer.add(StatementBuffer.java:2219) > at com.bigdata.rdf.rio.PresortRioLoader.handleStatement(PresortRioLoader.java:162) > at org.openrdf.rio.turtle.TurtleParser.reportStatement(TurtleParser.java:1155) > at org.openrdf.rio.turtle.TurtleParser.parseObject(TurtleParser.java:505) > at org.openrdf.rio.turtle.TurtleParser.parseObjectList(TurtleParser.java:428) > at org.openrdf.rio.turtle.TurtleParser.parsePredicateObjectList(TurtleParser.java:400) > at org.openrdf.rio.turtle.TurtleParser.parseTriples(TurtleParser.java:385) > at org.openrdf.rio.turtle.TurtleParser.parseStatement(TurtleParser.java:261) > at org.openrdf.rio.turtle.TurtleParser.parse(TurtleParser.java:216) > at com.bigdata.rdf.rio.BasicRioLoader.loadRdf2(BasicRioLoader.java:236) > at com.bigdata.rdf.rio.BasicRioLoader.loadRdf(BasicRioLoader.java:176) > at com.bigdata.rdf.store.DataLoader.loadData4_ParserErrors_Not_Trapped(DataLoader.java:1595) > at com.bigdata.rdf.store.DataLoader.loadFiles(DataLoader.java:1359) > at com.bigdata.rdf.store.DataLoader.main(DataLoader.java:2085) > Caused by: com.bigdata.btree.EvictionError: java.lang.RuntimeException: java.io.IOException: Ikke mere plads på enheden > Caused by: com.bigdata.btree.EvictionError: java.lang.RuntimeException: java.io.IOException: Ikke mere plads på enheden > at com.bigdata.btree.DefaultEvictionListener.doEviction(DefaultEvictionListener.java:198) > at com.bigdata.btree.DefaultEvictionListener.evicted(DefaultEvictionListener.java:75) > at com.bigdata.btree.DefaultEvictionListener.evicted(DefaultEvictionListener.java:37) > at com.bigdata.cache.HardReferenceQueue.evict(HardReferenceQueue.java:226) > at com.bigdata.cache.HardReferenceQueue.beforeOffer(HardReferenceQueue.java:199) > at com.bigdata.cache.RingBuffer.add(RingBuffer.java:159) > at com.bigdata.cache.HardReferenceQueue.add(HardReferenceQueue.java:176) > at 
com.bigdata.btree.AbstractBTree.doTouch(AbstractBTree.java:3766) > at com.bigdata.btree.AbstractBTree.doSyncTouch(AbstractBTree.java:3722) > at com.bigdata.btree.AbstractBTree.touch(AbstractBTree.java:3685) > at com.bigdata.btree.Node.insert(Node.java:916) > at com.bigdata.btree.Node.insert(Node.java:922) > at com.bigdata.btree.Node.insert(Node.java:922) > at com.bigdata.btree.AbstractBTree.insert(AbstractBTree.java:2184) > at com.bigdata.btree.AbstractBTree.insert(AbstractBTree.java:2108) > at com.bigdata.rdf.spo.SPOIndexWriteProc.applyOnce(SPOIndexWriteProc.java:246) > at com.bigdata.btree.proc.AbstractKeyArrayIndexProcedure.apply(AbstractKeyArrayIndexProcedure.java:381) > at com.bigdata.btree.UnisolatedReadWriteIndex.submit(UnisolatedReadWriteIndex.java:723) > at com.bigdata.rdf.spo.SPOIndexWriter.call(SPOIndexWriter.java:363) > at com.bigdata.rdf.spo.SPOIndexWriter.call(SPOIndexWriter.java:68) > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152) > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.lang.RuntimeException: java.io.IOException: Ikke mere plads på enheden > at com.bigdata.journal.DiskOnlyStrategy.writeOnDisk(DiskOnlyStrategy.java:2292) > at com.bigdata.journal.DiskOnlyStrategy.access$100(DiskOnlyStrategy.java:163) > at com.bigdata.journal.DiskOnlyStrategy$WriteCache.flush(DiskOnlyStrategy.java:386) > at com.bigdata.journal.DiskOnlyStrategy.flushWriteCache(DiskOnlyStrategy.java:506) > at com.bigdata.journal.DiskOnlyStrategy.write(DiskOnlyStrategy.java:2093) > at com.bigdata.journal.TemporaryRawStore.write(TemporaryRawStore.java:587) > at com.bigdata.btree.AbstractBTree.writeNodeOrLeaf(AbstractBTree.java:4416) > at com.bigdata.btree.AbstractBTree.writeNodeRecursiveConcurrent(AbstractBTree.java:4131) > at 
com.bigdata.btree.AbstractBTree.writeNodeRecursive(AbstractBTree.java:3844) > at com.bigdata.btree.DefaultEvictionListener.doEviction(DefaultEvictionListener.java:139) > ... 23 more > Caused by: java.io.IOException: Ikke mere plads på enheden > at sun.nio.ch.FileDispatcherImpl.pwrite0(Native Method) > at sun.nio.ch.FileDispatcherImpl.pwrite(FileDispatcherImpl.java:66) > at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:89) > at sun.nio.ch.IOUtil.write(IOUtil.java:51) > at sun.nio.ch.FileChannelImpl.writeInternal(FileChannelImpl.java:771) > at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:756) > at com.bigdata.io.FileChannelUtility.writeAll(FileChannelUtility.java:610) > at com.bigdata.io.FileChannelUtility.writeAll(FileChannelUtility.java:509) > at com.bigdata.journal.DiskOnlyStrategy.writeOnDisk(DiskOnlyStrategy.java:2287) > ... 32 more > ERROR: Banner.java:160: Uncaught exception in thread > com.bigdata.btree.IndexInconsistentError: Index is in error state: 0e4d0aa7-6117-40fd-b85f-3932085187b3kb.spo.POS, store: com.bigdata.journal.TemporaryStore{file= > /tmp/bigdata5787148345791213009.tmp} > at com.bigdata.btree.AbstractBTree.assertNotReadOnly(AbstractBTree.java:1429) > at com.bigdata.btree.BTree.setDirtyListener(BTree.java:709) > at com.bigdata.journal.Name2Addr.dropIndex(Name2Addr.java:1239) > at com.bigdata.journal.TemporaryStore.dropIndex(TemporaryStore.java:333) > at com.bigdata.rdf.spo.SPORelation.destroy(SPORelation.java:524) > at com.bigdata.rdf.store.AbstractTripleStore.destroy(AbstractTripleStore.java:2052) > at com.bigdata.rdf.store.TempTripleStore.close(TempTripleStore.java:155) > at com.bigdata.rdf.store.DataLoader.loadData4_ParserErrors_Not_Trapped(DataLoader.java:1715) > at com.bigdata.rdf.store.DataLoader.loadFiles(DataLoader.java:1359) > at com.bigdata.rdf.store.DataLoader.main(DataLoader.java:2085) > Caused by: java.lang.RuntimeException: java.io.IOException: Ikke mere plads på enheden > at 
com.bigdata.journal.DiskOnlyStrategy.writeOnDisk(DiskOnlyStrategy.java:2292) > at com.bigdata.journal.DiskOnlyStrategy.access$100(DiskOnlyStrategy.java:163) > at com.bigdata.journal.DiskOnlyStrategy$WriteCache.flush(DiskOnlyStrategy.java:386) > at com.bigdata.journal.DiskOnlyStrategy.flushWriteCache(DiskOnlyStrategy.java:506) > at com.bigdata.journal.DiskOnlyStrategy.write(DiskOnlyStrategy.java:2093) > at com.bigdata.journal.TemporaryRawStore.write(TemporaryRawStore.java:587) > at com.bigdata.btree.AbstractBTree.writeNodeOrLeaf(AbstractBTree.java:4416) > at com.bigdata.btree.AbstractBTree.writeNodeRecursiveConcurrent(AbstractBTree.java:4131) > at com.bigdata.btree.AbstractBTree.writeNodeRecursive(AbstractBTree.java:3844) > at com.bigdata.btree.DefaultEvictionListener.doEviction(DefaultEvictionListener.java:139) > at com.bigdata.btree.DefaultEvictionListener.evicted(DefaultEvictionListener.java:75) > at com.bigdata.btree.DefaultEvictionListener.evicted(DefaultEvictionListener.java:37) > at com.bigdata.cache.HardReferenceQueue.evict(HardReferenceQueue.java:226) > at com.bigdata.cache.HardReferenceQueue.beforeOffer(HardReferenceQueue.java:199) > at com.bigdata.cache.RingBuffer.add(RingBuffer.java:159) > at com.bigdata.cache.HardReferenceQueue.add(HardReferenceQueue.java:176) > at com.bigdata.btree.AbstractBTree.doTouch(AbstractBTree.java:3766) > at com.bigdata.btree.AbstractBTree.doSyncTouch(AbstractBTree.java:3722) > at com.bigdata.btree.AbstractBTree.touch(AbstractBTree.java:3685) > at com.bigdata.btree.Node.insert(Node.java:916) > at com.bigdata.btree.Node.insert(Node.java:922) > at com.bigdata.btree.Node.insert(Node.java:922) > at com.bigdata.btree.AbstractBTree.insert(AbstractBTree.java:2184) > at com.bigdata.btree.AbstractBTree.insert(AbstractBTree.java:2108) > at com.bigdata.rdf.spo.SPOIndexWriteProc.applyOnce(SPOIndexWriteProc.java:246) > at com.bigdata.btree.proc.AbstractKeyArrayIndexProcedure.apply(AbstractKeyArrayIndexProcedure.java:381) > at 
com.bigdata.btree.UnisolatedReadWriteIndex.submit(UnisolatedReadWriteIndex.java:723) > at com.bigdata.rdf.spo.SPOIndexWriter.call(SPOIndexWriter.java:363) > at com.bigdata.rdf.spo.SPOIndexWriter.call(SPOIndexWriter.java:68) > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152) > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.io.IOException: Ikke mere plads på enheden > at sun.nio.ch.FileDispatcherImpl.pwrite0(Native Method) > at sun.nio.ch.FileDispatcherImpl.pwrite(FileDispatcherImpl.java:66) > at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:89) > at sun.nio.ch.IOUtil.write(IOUtil.java:51) > at sun.nio.ch.FileChannelImpl.writeInternal(FileChannelImpl.java:771) > at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:756) > at com.bigdata.io.FileChannelUtility.writeAll(FileChannelUtility.java:610) > at com.bigdata.io.FileChannelUtility.writeAll(FileChannelUtility.java:509) > at com.bigdata.journal.DiskOnlyStrategy.writeOnDisk(DiskOnlyStrategy.java:2287) > ... 32 more |
From: Rune S. <rs...@un...> - 2017-12-01 09:34:41
|
Hi list It seems the problem was somehow related to a two drive configuration on my machine. The primary drive where the blazegraph service was located had very limited space. The data drive had plenty, but I guess some data is also written to the primary drive. Running in a single drive configuration seems to have resolved the problem. Regards, Rune > Den 28. nov. 2017 kl. 14.01 skrev Rune Stilling <rs...@un...>: > > Hi list > > I’m trying to import wikidata into a graphdb EC2-instance. But the import stops with an exception > >> Caused by: java.io.IOException: No space left on device > > > There seems to be plenty of space left though: > >> $ df -h >> Filesystem Size Used Avail Use% Mounted on >> devtmpfs 7.9G 68K 7.9G 1% /dev >> tmpfs 7.9G 0 7.9G 0% /dev/shm >> /dev/xvda1 493G 23G 470G 5% / >> /dev/xvdb1 1008G 382G 585G 40% /data > > > The blazegraph.jnl file is located in the /data folder. > > What could be the problem? > > Regards, > Rune Stilling > at com.bigdata.btree.AbstractBTree.writeNodeRecursiveConcurrent(AbstractBTree.java:4131) > at com.bigdata.btree.AbstractBTree.writeNodeRecursive(AbstractBTree.java:3844) > at com.bigdata.btree.DefaultEvictionListener.doEviction(DefaultEvictionListener.java:139) > at com.bigdata.btree.DefaultEvictionListener.evicted(DefaultEvictionListener.java:75) > at com.bigdata.btree.DefaultEvictionListener.evicted(DefaultEvictionListener.java:37) > at com.bigdata.cache.HardReferenceQueue.evict(HardReferenceQueue.java:226) > at com.bigdata.cache.HardReferenceQueue.beforeOffer(HardReferenceQueue.java:199) > at com.bigdata.cache.RingBuffer.add(RingBuffer.java:159) > at com.bigdata.cache.HardReferenceQueue.add(HardReferenceQueue.java:176) > at com.bigdata.btree.AbstractBTree.doTouch(AbstractBTree.java:3766) > at com.bigdata.btree.AbstractBTree.doSyncTouch(AbstractBTree.java:3722) > at com.bigdata.btree.AbstractBTree.touch(AbstractBTree.java:3685) > at com.bigdata.btree.AbstractNode.<init>(AbstractNode.java:298) > at
com.bigdata.btree.Leaf.<init>(Leaf.java:348) > at com.bigdata.btree.BTree$NodeFactory.allocLeaf(BTree.java:1936) > at com.bigdata.btree.NodeSerializer.wrap(NodeSerializer.java:411) > at com.bigdata.btree.AbstractBTree.readNodeOrLeaf(AbstractBTree.java:4584) > at com.bigdata.btree.Node._getChild(Node.java:2746) > at com.bigdata.btree.AbstractBTree$1.compute(AbstractBTree.java:377) > at com.bigdata.btree.AbstractBTree$1.compute(AbstractBTree.java:360) > at com.bigdata.util.concurrent.Memoizer$1.call(Memoizer.java:77) > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > at com.bigdata.util.concurrent.Memoizer.compute(Memoizer.java:92) > at com.bigdata.btree.AbstractBTree.loadChild(AbstractBTree.java:546) > at com.bigdata.btree.Node.getChild(Node.java:2644) > at com.bigdata.btree.Node.lookup(Node.java:938) > at com.bigdata.btree.Node.lookup(Node.java:940) > at com.bigdata.btree.Node.lookup(Node.java:940) > at com.bigdata.btree.Node.lookup(Node.java:940) > at com.bigdata.btree.Node.lookup(Node.java:940) > at com.bigdata.btree.Node.lookup(Node.java:940) > at com.bigdata.btree.Node.lookup(Node.java:940) > at com.bigdata.btree.AbstractBTree.lookup(AbstractBTree.java:2455) > at com.bigdata.btree.AbstractBTree.lookup(AbstractBTree.java:2383) > at com.bigdata.rdf.spo.SPOIndexWriteProc.applyOnce(SPOIndexWriteProc.java:232) > at com.bigdata.btree.proc.AbstractKeyArrayIndexProcedure.apply(AbstractKeyArrayIndexProcedure.java:381) > at com.bigdata.btree.UnisolatedReadWriteIndex.submit(UnisolatedReadWriteIndex.java:723) > at com.bigdata.rdf.spo.SPOIndexWriter.call(SPOIndexWriter.java:363) > at com.bigdata.rdf.spo.SPOIndexWriter.call(SPOIndexWriter.java:68) > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152) > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.io.IOException: No space 
left on device > at sun.nio.ch.FileDispatcherImpl.pwrite0(Native Method) > at sun.nio.ch.FileDispatcherImpl.pwrite(FileDispatcherImpl.java:66) > at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:89) > at sun.nio.ch.IOUtil.write(IOUtil.java:51) > at sun.nio.ch.FileChannelImpl.writeInternal(FileChannelImpl.java:771) > at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:756) > at com.bigdata.io.FileChannelUtility.writeAll(FileChannelUtility.java:610) > at com.bigdata.io.FileChannelUtility.writeAll(FileChannelUtility.java:509) > at com.bigdata.journal.DiskOnlyStrategy.writeOnDisk(DiskOnlyStrategy.java:2287) |
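[Editor's note: consistent with the resolution described above: if java.io.tmpdir resolves to the small root drive, the temporary store can fill it even though blazegraph.jnl lives on the big data volume. Two quick checks; a sketch using standard JVM and coreutils flags:]

```shell
# Which filesystem backs the system temp directory? With no override,
# java.io.tmpdir is /tmp, which typically sits on the root volume.
df -h /tmp

# Ask the JVM directly which tmpdir it will use (prints java.io.tmpdir):
command -v java >/dev/null && \
  java -XshowSettings:properties -version 2>&1 | grep -i tmpdir || true
```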
From: Rune S. <rs...@un...> - 2017-11-28 13:31:49
|
Hi list

I’m trying to import wikidata into a graph database EC2 instance, but the import stops with an exception:

> Caused by: java.io.IOException: No space left on device

There seems to be plenty of space left though:

> $ df -h
> Filesystem      Size  Used Avail Use% Mounted on
> devtmpfs        7.9G   68K  7.9G   1% /dev
> tmpfs           7.9G     0  7.9G   0% /dev/shm
> /dev/xvda1      493G   23G  470G   5% /
> /dev/xvdb1     1008G  382G  585G  40% /data

The blazegraph.jnl file is located in the /data folder. What could be the problem?

Regards,
Rune Stilling

    at com.bigdata.btree.AbstractBTree.writeNodeRecursiveConcurrent(AbstractBTree.java:4131)
    at com.bigdata.btree.AbstractBTree.writeNodeRecursive(AbstractBTree.java:3844)
    at com.bigdata.btree.DefaultEvictionListener.doEviction(DefaultEvictionListener.java:139)
    at com.bigdata.btree.DefaultEvictionListener.evicted(DefaultEvictionListener.java:75)
    at com.bigdata.btree.DefaultEvictionListener.evicted(DefaultEvictionListener.java:37)
    at com.bigdata.cache.HardReferenceQueue.evict(HardReferenceQueue.java:226)
    at com.bigdata.cache.HardReferenceQueue.beforeOffer(HardReferenceQueue.java:199)
    at com.bigdata.cache.RingBuffer.add(RingBuffer.java:159)
    at com.bigdata.cache.HardReferenceQueue.add(HardReferenceQueue.java:176)
    at com.bigdata.btree.AbstractBTree.doTouch(AbstractBTree.java:3766)
    at com.bigdata.btree.AbstractBTree.doSyncTouch(AbstractBTree.java:3722)
    at com.bigdata.btree.AbstractBTree.touch(AbstractBTree.java:3685)
    at com.bigdata.btree.AbstractNode.<init>(AbstractNode.java:298)
    at com.bigdata.btree.Leaf.<init>(Leaf.java:348)
    at com.bigdata.btree.BTree$NodeFactory.allocLeaf(BTree.java:1936)
    at com.bigdata.btree.NodeSerializer.wrap(NodeSerializer.java:411)
    at com.bigdata.btree.AbstractBTree.readNodeOrLeaf(AbstractBTree.java:4584)
    at com.bigdata.btree.Node._getChild(Node.java:2746)
    at com.bigdata.btree.AbstractBTree$1.compute(AbstractBTree.java:377)
    at com.bigdata.btree.AbstractBTree$1.compute(AbstractBTree.java:360)
    at com.bigdata.util.concurrent.Memoizer$1.call(Memoizer.java:77)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at com.bigdata.util.concurrent.Memoizer.compute(Memoizer.java:92)
    at com.bigdata.btree.AbstractBTree.loadChild(AbstractBTree.java:546)
    at com.bigdata.btree.Node.getChild(Node.java:2644)
    at com.bigdata.btree.Node.lookup(Node.java:938)
    at com.bigdata.btree.Node.lookup(Node.java:940)
    at com.bigdata.btree.Node.lookup(Node.java:940)
    at com.bigdata.btree.Node.lookup(Node.java:940)
    at com.bigdata.btree.Node.lookup(Node.java:940)
    at com.bigdata.btree.Node.lookup(Node.java:940)
    at com.bigdata.btree.Node.lookup(Node.java:940)
    at com.bigdata.btree.AbstractBTree.lookup(AbstractBTree.java:2455)
    at com.bigdata.btree.AbstractBTree.lookup(AbstractBTree.java:2383)
    at com.bigdata.rdf.spo.SPOIndexWriteProc.applyOnce(SPOIndexWriteProc.java:232)
    at com.bigdata.btree.proc.AbstractKeyArrayIndexProcedure.apply(AbstractKeyArrayIndexProcedure.java:381)
    at com.bigdata.btree.UnisolatedReadWriteIndex.submit(UnisolatedReadWriteIndex.java:723)
    at com.bigdata.rdf.spo.SPOIndexWriter.call(SPOIndexWriter.java:363)
    at com.bigdata.rdf.spo.SPOIndexWriter.call(SPOIndexWriter.java:68)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: No space left on device
    at sun.nio.ch.FileDispatcherImpl.pwrite0(Native Method)
    at sun.nio.ch.FileDispatcherImpl.pwrite(FileDispatcherImpl.java:66)
    at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:89)
    at sun.nio.ch.IOUtil.write(IOUtil.java:51)
    at sun.nio.ch.FileChannelImpl.writeInternal(FileChannelImpl.java:771)
    at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:756)
    at com.bigdata.io.FileChannelUtility.writeAll(FileChannelUtility.java:610)
    at com.bigdata.io.FileChannelUtility.writeAll(FileChannelUtility.java:509)
    at com.bigdata.journal.DiskOnlyStrategy.writeOnDisk(DiskOnlyStrategy.java:2287)
|
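A note for readers hitting the same symptom: "No space left on device" with apparently free space often means either that a *different* filesystem filled up (e.g. /tmp, if temporary files land there rather than on /data) or that the filesystem ran out of inodes, which `df -h` does not show. A quick check, sketched in Python (the paths are examples, not a diagnosis of this particular machine):

```python
import os
import shutil

def fs_status(path):
    """Report free bytes and free inodes for the filesystem holding `path`."""
    total, used, free = shutil.disk_usage(path)
    st = os.statvfs(path)  # POSIX-only; f_favail = inodes available to non-root
    return {"free_bytes": free, "free_inodes": st.f_favail}

# Check both the data volume and the default temp dir, since the journal
# and any temporary files may live on different filesystems.
for p in ("/", "/tmp"):
    print(p, fs_status(p))
```

If inodes are exhausted, `free_inodes` will be 0 even while `free_bytes` is large; `df -i` shows the same information from the shell.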
From: Lu, J. <Jim...@mo...> - 2017-11-17 17:24:29
|
Hi, I created a backup of the journal using /blazegraph/backup, but when I try to use it, it cannot be updated. Any update operation results in this error: ERROR: SPARQL-UPDATE: updateStr=PREFIX : INSERT data { :s :p :o } java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: org.openrdf.query.UpdateExecutionException: java.lang.RuntimeException: Problem with entry at -150774113961508442: lastRootBlock=rootBlock{ rootBlock=0, challisField=58292, version=3, nextOffset=1291651319809379, localTime=1510803554238 [Wednesday, November 15, 2017 10:39:14 PM EST], firstCommitTime=1510348818598 [Friday, November 10, 2017 4:20:18 PM EST], lastCommitTime=1510803554119 [Wednesday, November 15, 2017 10:39:14 PM EST], commitCounter=58292, commitRecordAddr={off=NATIVE:-35104836,len=422}, commitRecordIndexAddr={off=NATIVE:-34906522,len=220}, blockSequence=1, quorumToken=-1, metaBitsAddr=1291382849274056, metaStartAddr=329628, storeType=RW, uuid=121caa82-349a-44c6-8bfb-21400fb23494, offsetBits=42, checksum=-719835888, createTime=1510348816644 [Friday, November 10, 2017 4:20:16 PM EST], closeTime=0} at java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.util.concurrent.FutureTask.get(FutureTask.java:192) at com.bigdata.rdf.sail.webapp.BigdataServlet.submitApiTask(BigdataServlet.java:281) at com.bigdata.rdf.sail.webapp.QueryServlet.doSparqlUpdate(QueryServlet.java:458) at com.bigdata.rdf.sail.webapp.QueryServlet.doPost(QueryServlet.java:239) at com.bigdata.rdf.sail.webapp.RESTServlet.doPost(RESTServlet.java:269) at com.bigdata.rdf.sail.webapp.MultiTenancyServlet.doPost(MultiTenancyServlet.java:193) at javax.servlet.http.HttpServlet.service(HttpServlet.java:707) at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:808) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:587) at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215) at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) at org.eclipse.jetty.server.Server.handle(Server.java:497) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257) at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635) at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555) at java.lang.Thread.run(Thread.java:748) Caused by: java.util.concurrent.ExecutionException: org.openrdf.query.UpdateExecutionException: java.lang.RuntimeException: Problem with entry at -150774113961508442: lastRootBlock=rootBlock{ rootBlock=0, challisField=58292, version=3, nextOffset=1291651319809379, localTime=1510803554238 [Wednesday, November 15, 2017 10:39:14 PM EST], firstCommitTime=1510348818598 [Friday, November 10, 2017 4:20:18 PM EST], lastCommitTime=1510803554119 [Wednesday, November 15, 2017 10:39:14 PM EST], commitCounter=58292, commitRecordAddr={off=NATIVE:-35104836,len=422}, 
commitRecordIndexAddr={off=NATIVE:-34906522,len=220}, blockSequence=1, quorumToken=-1, metaBitsAddr=1291382849274056, metaStartAddr=329628, storeType=RW, uuid=121caa82-349a-44c6-8bfb-21400fb23494, offsetBits=42, checksum=-719835888, createTime=1510348816644 [Friday, November 10, 2017 4:20:16 PM EST], closeTime=0} at java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.util.concurrent.FutureTask.get(FutureTask.java:192) at com.bigdata.rdf.sail.webapp.QueryServlet$SparqlUpdateTask.call(QueryServlet.java:569) at com.bigdata.rdf.sail.webapp.QueryServlet$SparqlUpdateTask.call(QueryServlet.java:470) at com.bigdata.rdf.task.ApiTaskForIndexManager.call(ApiTaskForIndexManager.java:68) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ... 1 more Caused by: org.openrdf.query.UpdateExecutionException: java.lang.RuntimeException: Problem with entry at -150774113961508442: lastRootBlock=rootBlock{ rootBlock=0, challisField=58292, version=3, nextOffset=1291651319809379, localTime=1510803554238 [Wednesday, November 15, 2017 10:39:14 PM EST], firstCommitTime=1510348818598 [Friday, November 10, 2017 4:20:18 PM EST], lastCommitTime=1510803554119 [Wednesday, November 15, 2017 10:39:14 PM EST], commitCounter=58292, commitRecordAddr={off=NATIVE:-35104836,len=422}, commitRecordIndexAddr={off=NATIVE:-34906522,len=220}, blockSequence=1, quorumToken=-1, metaBitsAddr=1291382849274056, metaStartAddr=329628, storeType=RW, uuid=121caa82-349a-44c6-8bfb-21400fb23494, offsetBits=42, checksum=-719835888, createTime=1510348816644 [Friday, November 10, 2017 4:20:16 PM EST], closeTime=0} at com.bigdata.rdf.sparql.ast.eval.ASTEvalHelper.executeUpdate(ASTEvalHelper.java:1080) at com.bigdata.rdf.sail.BigdataSailUpdate.execute2(BigdataSailUpdate.java:152) at 
com.bigdata.rdf.sail.webapp.BigdataRDFContext$UpdateTask.doQuery(BigdataRDFContext.java:1968) at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.innerCall(BigdataRDFContext.java:1569) at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:1534) at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:747) ... 4 more Caused by: java.lang.RuntimeException: Problem with entry at -150774113961508442: lastRootBlock=rootBlock{ rootBlock=0, challisField=58292, version=3, nextOffset=1291651319809379, localTime=1510803554238 [Wednesday, November 15, 2017 10:39:14 PM EST], firstCommitTime=1510348818598 [Friday, November 10, 2017 4:20:18 PM EST], lastCommitTime=1510803554119 [Wednesday, November 15, 2017 10:39:14 PM EST], commitCounter=58292, commitRecordAddr={off=NATIVE:-35104836,len=422}, commitRecordIndexAddr={off=NATIVE:-34906522,len=220}, blockSequence=1, quorumToken=-1, metaBitsAddr=1291382849274056, metaStartAddr=329628, storeType=RW, uuid=121caa82-349a-44c6-8bfb-21400fb23494, offsetBits=42, checksum=-719835888, createTime=1510348816644 [Friday, November 10, 2017 4:20:16 PM EST], closeTime=0} at com.bigdata.journal.AbstractJournal.commit(AbstractJournal.java:3134) at com.bigdata.rdf.store.LocalTripleStore.commit(LocalTripleStore.java:99) at com.bigdata.rdf.sail.BigdataSail$BigdataSailConnection.commit2(BigdataSail.java:3933) at com.bigdata.rdf.sail.BigdataSailRepositoryConnection.commit2(BigdataSailRepositoryConnection.java:330) at com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate.convertCommit(AST2BOpUpdate.java:375) at com.bigdata.rdf.sparql.ast.eval.AST2BOpUpdate.convertUpdate(AST2BOpUpdate.java:321) at com.bigdata.rdf.sparql.ast.eval.ASTEvalHelper.executeUpdate(ASTEvalHelper.java:1072) ... 
9 more Caused by: java.lang.RuntimeException: Problem with entry at -150774113961508442 at com.bigdata.rwstore.RWStore.freeDeferrals(RWStore.java:5413) at com.bigdata.rwstore.RWStore.checkDeferredFrees(RWStore.java:3896) at com.bigdata.journal.RWStrategy.checkDeferredFrees(RWStrategy.java:780) at com.bigdata.journal.AbstractJournal$CommitState.writeCommitRecord(AbstractJournal.java:3499) at com.bigdata.journal.AbstractJournal$CommitState.access$2800(AbstractJournal.java:3301) at com.bigdata.journal.AbstractJournal.commitNow(AbstractJournal.java:4111) at com.bigdata.journal.AbstractJournal.commit(AbstractJournal.java:3132) ... 15 more Caused by: java.lang.RuntimeException: addr=-35096590 : cause=java.lang.IllegalStateException: Error reading from WriteCache addr: 19706807680 length: 124, writeCacheDebug: No WriteCache debug info at com.bigdata.rwstore.RWStore.getData(RWStore.java:2399) at com.bigdata.rwstore.RWStore.getData(RWStore.java:2120) at com.bigdata.rwstore.RWStore.freeDeferrals(RWStore.java:5303) at com.bigdata.rwstore.RWStore.freeDeferrals(RWStore.java:5399) ... 21 more Caused by: java.lang.IllegalStateException: Error reading from WriteCache addr: 19706807680 length: 124, writeCacheDebug: No WriteCache debug info at com.bigdata.rwstore.RWStore.getData(RWStore.java:2317) ... 
24 more
Caused by: com.bigdata.util.ChecksumError: offset=19706807680,nbytes=128,expected=-1423444224,actual=-1811077022
    at com.bigdata.io.writecache.WriteCacheService._readFromLocalDiskIntoNewHeapByteBuffer(WriteCacheService.java:3783)
    at com.bigdata.io.writecache.WriteCacheService._getRecord(WriteCacheService.java:3598)
    at com.bigdata.io.writecache.WriteCacheService.access$2500(WriteCacheService.java:200)
    at com.bigdata.io.writecache.WriteCacheService$1.compute(WriteCacheService.java:3435)
    at com.bigdata.io.writecache.WriteCacheService$1.compute(WriteCacheService.java:3419)
    at com.bigdata.util.concurrent.Memoizer$1.call(Memoizer.java:77)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at com.bigdata.util.concurrent.Memoizer.compute(Memoizer.java:92)
    at com.bigdata.io.writecache.WriteCacheService.loadRecord(WriteCacheService.java:3540)
    at com.bigdata.io.writecache.WriteCacheService.read(WriteCacheService.java:3259)
    at com.bigdata.rwstore.RWStore.getData(RWStore.java:2315)
    ... 24 more

Note that queries work fine; only updates trigger the ChecksumError. Does anyone know what the problem is?

Regards,
Jimmy Lu

________________________________
NOTICE: Morgan Stanley is not acting as a municipal advisor and the opinions or views contained herein are not intended to be, and do not constitute, advice within the meaning of Section 975 of the Dodd-Frank Wall Street Reform and Consumer Protection Act. If you have received this communication in error, please destroy all electronic and paper copies and notify the sender immediately. Mistransmission is not intended to waive confidentiality or privilege. Morgan Stanley reserves the right, to the extent permitted under applicable law, to monitor electronic communications. This message is subject to terms available at the following link: http://www.morganstanley.com/disclaimers If you cannot access these links, please notify us by reply message and we will send the contents to you. By communicating with Morgan Stanley you consent to the foregoing and to the voice recording of conversations with personnel of Morgan Stanley.
|
From: Tarmo J. <tar...@ze...> - 2017-10-30 14:19:14
|
Hi,

I'm using Jena to query Blazegraph, and I need math functions like asin and sqrt. As I understand it, when using Jena I have to invoke the functions inside the SPARQL query itself; with Virtuoso, for example, I could write "<bif:asin>(...)". Is there a way to do that with Blazegraph? I googled, but the only thing that came up was this question posted back in May, with no answer as of now: https://stackoverflow.com/questions/44168486/using-functions-in-blazegraph.

Tarmo |
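A workaround for readers in the same situation, while the question above is open: if the endpoint does not expose a suitable built-in function, fetch the raw numeric values over SPARQL and apply the math client-side. A minimal sketch (the `bindings` list below is a made-up sample in the standard SPARQL 1.1 JSON results shape, not output from a real endpoint):

```python
import math

# Hypothetical result bindings, shaped like SPARQL 1.1 JSON results rows.
bindings = [
    {"x": {"type": "literal", "value": "0.5",
           "datatype": "http://www.w3.org/2001/XMLSchema#double"}},
    {"x": {"type": "literal", "value": "0.25",
           "datatype": "http://www.w3.org/2001/XMLSchema#double"}},
]

# Apply the math the endpoint could not: asin and sqrt per row.
results = [
    {"x": float(b["x"]["value"]),
     "asin_x": math.asin(float(b["x"]["value"])),
     "sqrt_x": math.sqrt(float(b["x"]["value"]))}
    for b in bindings
]
```

This trades extra data transfer for portability: the same post-processing works against any SPARQL 1.1 endpoint regardless of its function extensions.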
From: Ian D. <ian...@gm...> - 2017-10-24 14:28:50
|
Hello,

Does bigdata/blazegraph allow the use of a named graph in a SERVICE clause that points at an arbitrary SPARQL endpoint? I have a blazegraph instance running in triples mode, but I want to run a query against both it and a remote non-blazegraph SPARQL endpoint where the data is in a specific graph. This results in the stack trace:

java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: com.bigdata.rdf.sparql.ast.QuadsOperationInTriplesModeException: Use of WITH and GRAPH constructs in query body is not supported in triples mode.
    at com.bigdata.rdf.sparql.ast.eval.ASTDeferredIVResolution.fillInIV(ASTDeferredIVResolution.java:1015)
    at com.bigdata.rdf.sparql.ast.eval.ASTDeferredIVResolution.fillInIV(ASTDeferredIVResolution.java:878)
    at com.bigdata.rdf.sparql.ast.eval.ASTDeferredIVResolution.fillInIV(ASTDeferredIVResolution.java:878)
    at com.bigdata.rdf.sparql.ast.eval.ASTDeferredIVResolution.fillInIV(ASTDeferredIVResolution.java:1028)
    at com.bigdata.rdf.sparql.ast.eval.ASTDeferredIVResolution.fillInIV(ASTDeferredIVResolution.java:878)
    at com.bigdata.rdf.sparql.ast.eval.ASTDeferredIVResolution.prepare(ASTDeferredIVResolution.java:662)
    at com.bigdata.rdf.sparql.ast.eval.ASTDeferredIVResolution.resolve(ASTDeferredIVResolution.java:447)
    at com.bigdata.rdf.sparql.ast.eval.ASTDeferredIVResolution.resolveQuery(ASTDeferredIVResolution.java:270)
    at com.bigdata.rdf.sparql.ast.eval.ASTEvalHelper.optimizeQuery(ASTEvalHelper.java:408)

This query replicates the issue:

SELECT DISTINCT ?a ?b
WHERE {
  SERVICE <http://another-endpoint/sparql> {
    GRAPH <http://a-graph> {
      ?a a ?b .
    }
  }
}

It seems that bigdata parses the query, sees the GRAPH clause, and rejects it because the local SPARQL endpoint is running in triples mode. However, the remote endpoint can be running in any mode it wants. It should be up to the external endpoint to complain about the query it receives, not the originator.

Now I understand that you can't have graphs in a blazegraph triples-mode store, but you could in an external one that you want to federate your query with. It would be great if there were an easy way around this, but I haven't managed to find anything in the docs.

Cheers,
Ian |
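One client-side workaround for the situation above (a sketch, not an official Blazegraph feature; the endpoint URL, helper names, and result handling are assumptions): run the GRAPH portion directly against the remote endpoint from application code and join the rows with local Blazegraph results yourself, so the query Blazegraph parses never contains a GRAPH clause.

```python
import json
import urllib.parse
import urllib.request

def build_remote_query(graph_iri: str, pattern: str) -> str:
    """Wrap a basic graph pattern in a GRAPH clause for the remote endpoint."""
    return "SELECT DISTINCT ?a ?b WHERE { GRAPH <%s> { %s } }" % (graph_iri, pattern)

def run_select(endpoint: str, query: str) -> list:
    """POST a SELECT query to a SPARQL endpoint and return the JSON bindings."""
    data = urllib.parse.urlencode({"query": query}).encode()
    req = urllib.request.Request(
        endpoint, data=data,
        headers={"Accept": "application/sparql-results+json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"]["bindings"]

# Query the remote endpoint directly (no SERVICE, no GRAPH seen by
# Blazegraph), then join with local results in application code:
q = build_remote_query("http://a-graph", "?a a ?b .")
# remote_rows = run_select("http://another-endpoint/sparql", q)
```

The cost is that the join moves from the query engine into the client, so this only pays off when the remote result set is small enough to ship over.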
From: Rune S. <rs...@un...> - 2017-10-24 11:22:05
|
Hi Joakim

The import process had stopped, though I don’t know if it did so with an error. But it probably hadn’t finished.

/Rune

> Den 23. okt. 2017 kl. 17.38 skrev Joakim Soderberg <joa...@bl...>:
>
> It can still be loading. My jnl-file is 145G, and that’s without links and with only labels in English.
>
>> On Oct 23, 2017, at 5:37 AM, Jim McCusker <mcc...@gm...> wrote:
>>
>> Make sure you have the right namespace selected in the UI. The default one is kb. Go to the namespaces tab to change it.
>>
>> Jim
>>
>> On Mon, Oct 23, 2017 at 8:17 AM Rune Stilling <rs...@un...> wrote:
>> Hi all
>>
>> I have set up blazegraph from the Amazon AMI on AWS. I have apparently succeeded in loading the wikidata data set by running the following bulk loader command:
>>
>> > java -cp bigdata.jar com.bigdata.rdf.store.DataLoader -namespace wikidata fullfeature.properties latest-truthy.nt
>>
>> The loader took several days to complete. The result was a blazegraph.jnl file of 51GB.
>>
>> My problem is that I can’t see the data on the query tab of the blazegraph web frontend or on the sparql endpoint. So it seems that the data is there, but how do I find out “where”?
>>
>> Kind regards,
>> Rune
>>
>> ------------------------------------------------------------------------------
>> Check out the vibrant tech community on one of the world's most
>> engaging tech sites, Slashdot.org! http://sdm.link/slashdot
>> _______________________________________________
>> Bigdata-developers mailing list
>> Big...@li...
>> https://lists.sourceforge.net/lists/listinfo/bigdata-developers
>>
>> --
>> James P. McCusker III, Ph.D.
>> http://tw.rpi.edu/web/person/JamesMcCusker
>
|
From: Joakim S. <joa...@bl...> - 2017-10-23 16:09:02
|
It can still be loading. My jnl-file is 145G and that’s without links and only labels in english. > On Oct 23, 2017, at 5:37 AM, Jim McCusker <mcc...@gm...> wrote: > > Make sure you have the right namespace selected in the UI. The default one is kb. Go to the namespaces tab to change it. > > Jim > On Mon, Oct 23, 2017 at 8:17 AM Rune Stilling <rs...@un... <mailto:rs...@un...>> wrote: > Hi all > > I have set up blazegraph from the Amazon AMI on AWS. I have apparently succeeded with loading the wikidata data set by running the following bulk loader command: > > > java -cp bigdata.jar com.bigdata.rdf.store.DataLoader -namespace wikidata fullfeature.properties latest-truthy.nt > > The loader took several days to complete. The result was a blazegraph.jnl file of 51GB. > > My problem is that I can’t see the data on the query tab of the blazegraph web frontend or on the sparql endpoint. So it seems that the data is there, but how to find out “where”? > > Kind regards, > Rune > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot <http://sdm.link/slashdot> > _______________________________________________ > Bigdata-developers mailing list > Big...@li... <mailto:Big...@li...> > https://lists.sourceforge.net/lists/listinfo/bigdata-developers <https://lists.sourceforge.net/lists/listinfo/bigdata-developers> > -- > James P. McCusker III, Ph.D. > http://tw.rpi.edu/web/person/JamesMcCusker <http://tw.rpi.edu/web/person/JamesMcCusker>------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot_______________________________________________ > Bigdata-developers mailing list > Big...@li... > https://lists.sourceforge.net/lists/listinfo/bigdata-developers |
From: Jim M. <mcc...@gm...> - 2017-10-23 12:37:32
|
Make sure you have the right namespace selected in the UI. The default one is kb. Go to the namespaces tab to change it. Jim On Mon, Oct 23, 2017 at 8:17 AM Rune Stilling <rs...@un...> wrote: > Hi all > > I have set up blazegraph from the Amazon AMI on AWS. I have apparently > succeeded with loading the wikidata data set by running the following bulk > loader command: > > > java -cp bigdata.jar com.bigdata.rdf.store.DataLoader -namespace > wikidata fullfeature.properties latest-truthy.nt > > The loader took several days to complete. The result was a blazegraph.jnl > file of 51GB. > > My problem is that I can’t see the data on the query tab of the blazegraph > web frontend or on the sparql endpoint. So it seems that the data is there, > but how to find out “where”? > > Kind regards, > Rune > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > Bigdata-developers mailing list > Big...@li... > https://lists.sourceforge.net/lists/listinfo/bigdata-developers > -- James P. McCusker III, Ph.D. http://tw.rpi.edu/web/person/JamesMcCusker |
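For readers following this thread programmatically: data loaded into a non-default namespace (here wikidata rather than kb) is also reachable at that namespace's own SPARQL endpoint via Blazegraph's REST API. A sketch of building the URL (host and port are assumptions based on a default NanoSparqlServer setup, and the context path may be /bigdata instead of /blazegraph in older builds):

```python
def namespace_endpoint(host: str, namespace: str, port: int = 9999) -> str:
    """Build the per-namespace SPARQL endpoint URL for a Blazegraph server."""
    return "http://%s:%d/blazegraph/namespace/%s/sparql" % (host, port, namespace)

# The default "kb" namespace and a bulk-loaded "wikidata" namespace are
# separate stores with separate endpoints; querying the wrong one returns
# no data even though the load succeeded.
url = namespace_endpoint("localhost", "wikidata")
```

Pointing a client (or the workbench, via the namespaces tab) at the right namespace endpoint is equivalent to the fix described above.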
From: Rune S. <rs...@un...> - 2017-10-23 12:17:22
|
Hi all I have set up blazegraph from the Amazon AMI on AWS. I have apparently succeeded with loading the wikidata data set by running the following bulk loader command: > java -cp bigdata.jar com.bigdata.rdf.store.DataLoader -namespace wikidata fullfeature.properties latest-truthy.nt The loader took several days to complete. The result was a blazegraph.jnl file of 51GB. My problem is that I can’t see the data on the query tab of the blazegraph web frontend or on the sparql endpoint. So it seems that the data is there, but how to find out “where”? Kind regards, Rune |
From: Kevin F. <kf...@ar...> - 2017-10-13 18:23:18
|
Thanks, Jim, for the info. That's great to hear! Cheers, Kevin On 10/6/17 13:05, Jim McCusker wrote: > I have created nanopublication-based resources with millions of graphs, > and it works great. > > Jim > > On Fri, Oct 6, 2017 at 1:47 PM Kevin Ford <kf...@ar... > <mailto:kf...@ar...>> wrote: > > Dear All, > > How many graphs is too many? > > I'm wondering if one were to create hundreds of thousands or possibly > millions of graphs, how blazegraph would manage from a performance > perspective. Does the number make no difference to how Blazegraph would > perform or would performance suffer (for whatever reason, such as an > internal indexing strategy)? > > All the best, > Kevin > > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > Bigdata-developers mailing list > Big...@li... > <mailto:Big...@li...> > https://lists.sourceforge.net/lists/listinfo/bigdata-developers > > -- > James P. McCusker III, Ph.D. > http://tw.rpi.edu/web/person/JamesMcCusker |
From: Jim M. <mcc...@gm...> - 2017-10-06 18:05:52
|
I have created nanopublication-based resources with millions of graphs, and it works great. Jim On Fri, Oct 6, 2017 at 1:47 PM Kevin Ford <kf...@ar...> wrote: > Dear All, > > How many graphs is too many? > > I'm wondering if one were to create hundreds of thousands or possibly > millions of graphs, how blazegraph would manage from a performance > perspective. Does the number make no difference to how Blazegraph would > perform or would performance suffer (for whatever reason, such as an > internal indexing strategy)? > > All the best, > Kevin > > > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > Bigdata-developers mailing list > Big...@li... > https://lists.sourceforge.net/lists/listinfo/bigdata-developers > -- James P. McCusker III, Ph.D. http://tw.rpi.edu/web/person/JamesMcCusker |
From: Kevin F. <kf...@ar...> - 2017-10-06 17:47:21
|
Dear All, How many graphs is too many? I'm wondering if one were to create hundreds of thousands or possibly millions of graphs, how blazegraph would manage from a performance perspective. Does the number make no difference to how Blazegraph would perform or would performance suffer (for whatever reason, such as an internal indexing strategy)? All the best, Kevin |
From: Jim M. <mcc...@gm...> - 2017-09-29 18:15:02
|
When was this announced? By supported do you mean that you've removed extant features, or are you not helping people configure it? Thanks, Jim On Fri, Sep 29, 2017 at 12:59 PM Brad Bebee <be...@sy...> wrote: > Jimmy, > > Thank you. Scale-out and HA are no longer supported in the open source > version. Feel free to contact us separately about other options that are > available. > > Thanks, --Brad > > On Fri, Sep 29, 2017 at 7:48 AM, Lu, Jimmy <Jim...@mo...> > wrote: > >> Hi, >> >> I am trying to set up a blazegraph cluster but the documentation on Wiki >> is outdated. There is no install task in build.xml and thus >> >> ant install >> >> does not work. Is there any other script I can run to make a cluster >> deployment? >> >> Thanks, >> >> >> Jimmy Lu >> >> >> >> >> ------------------------------ >> >> NOTICE: Morgan Stanley is not acting as a municipal advisor and the >> opinions or views contained herein are not intended to be, and do not >> constitute, advice within the meaning of Section 975 of the Dodd-Frank Wall >> Street Reform and Consumer Protection Act. If you have received this >> communication in error, please destroy all electronic and paper copies and >> notify the sender immediately. Mistransmission is not intended to waive >> confidentiality or privilege. Morgan Stanley reserves the right, to the >> extent permitted under applicable law, to monitor electronic >> communications. This message is subject to terms available at the following >> link: http://www.morganstanley.com/disclaimers If you cannot access >> these links, please notify us by reply message and we will send the >> contents to you. By communicating with Morgan Stanley you consent to the >> foregoing and to the voice recording of conversations with personnel of >> Morgan Stanley. >> >> >> >> ------------------------------------------------------------------------------ >> Check out the vibrant tech community on one of the world's most >> engaging tech sites, Slashdot.org! 
http://sdm.link/slashdot >> _______________________________________________ >> Bigdata-developers mailing list >> Big...@li... >> https://lists.sourceforge.net/lists/listinfo/bigdata-developers >> >> > > > -- > _______________ > Brad Bebee > CEO, Managing Partner > SYSTAP, LLC > e: be...@sy... > m: 202.642.7961 <(202)%20642-7961> > f: 571.367.5000 <(571)%20367-5000> > > CONFIDENTIALITY NOTICE: This email and its contents and attachments are > for the sole use of the intended recipient(s) and are confidential or > proprietary to SYSTAP, LLC. Any unauthorized review, use, disclosure, > dissemination or copying of this email or its contents or attachments is > prohibited. If you have received this communication in error, please notify > the sender by reply email and permanently delete all copies of the email > and its contents and attachments. > > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > Bigdata-developers mailing list > Big...@li... > https://lists.sourceforge.net/lists/listinfo/bigdata-developers > -- James P. McCusker III, Ph.D. http://tw.rpi.edu/web/person/JamesMcCusker |
From: Brad B. <be...@sy...> - 2017-09-29 16:58:33
|
Jimmy, Thank you. Scale-out and HA are no longer supported in the open source version. Feel free to contact us separately about other options that are available. Thanks, --Brad On Fri, Sep 29, 2017 at 7:48 AM, Lu, Jimmy <Jim...@mo...> wrote: > Hi, > > I am trying to set up a blazegraph cluster but the documentation on Wiki > is outdated. There is no install task in build.xml and thus > > ant install > > does not work. Is there any other script I can run to make a cluster > deployment? > > Thanks, > > > Jimmy Lu > > > > > ------------------------------ > > NOTICE: Morgan Stanley is not acting as a municipal advisor and the > opinions or views contained herein are not intended to be, and do not > constitute, advice within the meaning of Section 975 of the Dodd-Frank Wall > Street Reform and Consumer Protection Act. If you have received this > communication in error, please destroy all electronic and paper copies and > notify the sender immediately. Mistransmission is not intended to waive > confidentiality or privilege. Morgan Stanley reserves the right, to the > extent permitted under applicable law, to monitor electronic > communications. This message is subject to terms available at the following > link: http://www.morganstanley.com/disclaimers If you cannot access > these links, please notify us by reply message and we will send the > contents to you. By communicating with Morgan Stanley you consent to the > foregoing and to the voice recording of conversations with personnel of > Morgan Stanley. > > > ------------------------------------------------------------ > ------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > Bigdata-developers mailing list > Big...@li... > https://lists.sourceforge.net/lists/listinfo/bigdata-developers > > -- _______________ Brad Bebee CEO, Managing Partner SYSTAP, LLC e: be...@sy... 
m: 202.642.7961 f: 571.367.5000 CONFIDENTIALITY NOTICE: This email and its contents and attachments are for the sole use of the intended recipient(s) and are confidential or proprietary to SYSTAP, LLC. Any unauthorized review, use, disclosure, dissemination or copying of this email or its contents or attachments is prohibited. If you have received this communication in error, please notify the sender by reply email and permanently delete all copies of the email and its contents and attachments. |
From: Lu, J. <Jim...@mo...> - 2017-09-29 14:48:18
|
Hi,

I am trying to set up a blazegraph cluster but the documentation on Wiki is outdated. There is no install task in build.xml and thus

ant install

does not work. Is there any other script I can run to make a cluster deployment?

Thanks,

Jimmy Lu |
From: Jim M. <mcc...@gm...> - 2017-09-15 18:01:57
|
I'm using bigdata as the SPARQL endpoint for a nanopublication-based knowledge graph framework, but part of what we are doing with it involves many inserts and deletes as the knowledge is updated. For instance, a URL can be mapped to a DOI, triggering a linked data import (as a nanopublication, qualifying where the data came from, etc.), which then might trigger an autonomous agent to find the PDF and extract its text, which would trigger another agent to perform entity extraction, and so on. This generates a large number of concurrent inserts as each agent adds its bit of knowledge, and that seems to sometimes cause blazegraph to wedge. Are there some example namespace configurations I can use to try to optimize for frequent inserts, deletes, and while maintaining reasonable read performance? Thanks, Jim -- James P. McCusker III, Ph.D. http://tw.rpi.edu/web/person/JamesMcCusker |
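A note on the configuration question above: for write-heavy workloads, the usual starting points in a namespace's properties are disabling inference/truth maintenance and the free-text index when they are not needed, since both add work on every insert and delete. A sketch of such a configuration follows; the property names are the ones that appear in Blazegraph's commonly distributed sample properties files, but verify them against your version before relying on them.

```
# Namespace properties sketch for frequent inserts/deletes (assumed names).
# Disable truth maintenance and inference -- a large win for deletes:
com.bigdata.rdf.sail.truthMaintenance=false
com.bigdata.rdf.store.AbstractTripleStore.axiomsClass=com.bigdata.rdf.axioms.NoAxioms
# Skip the free-text index unless full-text search is required:
com.bigdata.rdf.store.AbstractTripleStore.textIndex=false
# Quads mode for nanopublication-style named graphs:
com.bigdata.rdf.store.AbstractTripleStore.quads=true
```

The trade-off is that any entailments previously materialized by inference must instead be computed in queries or by the agents themselves.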
From: Joakim S. <joa...@bl...> - 2017-08-11 20:44:45
|
Hi, Blazegraph has been loading a DBpedia dump for a week and then crashed. Below is the log. Is there a way to resume from the generated journal (.jnl) file, or do I have to start again? (2017-08-06 02:49:45,122) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 7 records (#nodes=1, #leaves=6) in 5800ms : addrRoot=-473064121936378609 (2017-08-06 02:52:34,062) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 4 records (#nodes=1, #leaves=3) in 6506ms : addrRoot=-165614776452380163 (2017-08-06 06:10:51,856) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 10 records (#nodes=1, #leaves=9) in 5432ms : addrRoot=-472548042961058590 (2017-08-06 06:21:40,198) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 5 records (#nodes=1, #leaves=4) in 7653ms : addrRoot=-181920881784650115 (2017-08-06 06:27:15,534) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 5 records (#nodes=1, #leaves=4) in 7017ms : addrRoot=-449670607601137751 (2017-08-06 07:13:22,205) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 9 records (#nodes=1, #leaves=8) in 18562ms : addrRoot=-480637274560657468 (2017-08-06 07:13:34,728) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 7 records (#nodes=1, #leaves=6) in 12496ms : addrRoot=-480641135736256967 (2017-08-06 07:22:15,295) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 6 records (#nodes=1, #leaves=5) in 6922ms : addrRoot=-477409546508171647 (2017-08-06 07:22:24,443) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 8 records (#nodes=1, #leaves=7) in 5596ms : addrRoot=-315397783760664442 (2017-08-06 07:24:10,770) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 3 records (#nodes=1, #leaves=2) in 5964ms : addrRoot=-445355273570220260 (2017-08-06 07:24:56,770) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 8 records (#nodes=1, #leaves=7) in 6129ms : 
addrRoot=-373735671707204031 (2017-08-06 07:26:20,230) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 6 records (#nodes=1, #leaves=5) in 6027ms : addrRoot=-41220798299502714 (2017-08-06 07:31:18,674) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 5 records (#nodes=1, #leaves=4) in 5586ms : addrRoot=-465462180146313494 (2017-08-06 07:34:57,286) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 6 records (#nodes=1, #leaves=5) in 5988ms : addrRoot=-240257837723286486 (2017-08-06 07:40:01,334) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 10 records (#nodes=1, #leaves=9) in 5295ms : addrRoot=-451954469345688115 (2017-08-06 08:15:26,651) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 9 records (#nodes=1, #leaves=8) in 5348ms : addrRoot=-421643205805602695 (2017-08-06 08:21:40,586) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 4 records (#nodes=1, #leaves=3) in 7097ms : addrRoot=-477382037242641071 (2017-08-06 08:22:15,022) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 3 records (#nodes=1, #leaves=2) in 5971ms : addrRoot=-248774787936025141 (2017-08-06 08:23:19,158) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 3 records (#nodes=1, #leaves=2) in 5638ms : addrRoot=-385602357108407291 (2017-08-06 08:24:39,846) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 4 records (#nodes=1, #leaves=3) in 5647ms : addrRoot=-477868029972051680 (2017-08-06 08:26:39,862) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 5 records (#nodes=1, #leaves=4) in 6711ms : addrRoot=-470382683659172823 (2017-08-06 08:27:23,576) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 5 records (#nodes=1, #leaves=4) in 5207ms : addrRoot=-465394624605714811 (2017-08-06 08:33:01,610) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 10 records (#nodes=1, #leaves=9) in 
6012ms : addrRoot=-467366164918434365 (2017-08-06 08:57:17,809) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.spo.POS, 6 records (#nodes=1, #leaves=5) in 5406ms : addrRoot=-398968480017152282 (2017-08-06 09:21:24,729) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 9 records (#nodes=1, #leaves=8) in 5772ms : addrRoot=-467691165093722943 (2017-08-06 09:22:09,626) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 6 records (#nodes=1, #leaves=5) in 5405ms : addrRoot=-479202390411573670 (2017-08-06 09:24:33,418) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 9 records (#nodes=1, #leaves=8) in 5789ms : addrRoot=-468223109678234593 (2017-08-06 10:18:56,330) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 7 records (#nodes=1, #leaves=6) in 5756ms : addrRoot=-320730084442896695 (2017-08-06 10:22:04,562) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 6 records (#nodes=1, #leaves=5) in 5956ms : addrRoot=-469187484455009050 (2017-08-06 10:22:29,066) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 6 records (#nodes=1, #leaves=5) in 5620ms : addrRoot=-446988001092827865 (2017-08-06 10:23:26,606) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 1 records (#nodes=1, #leaves=0) in 5643ms : addrRoot=-484862513062607784 (2017-08-06 10:25:09,166) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 5 records (#nodes=1, #leaves=4) in 5032ms : addrRoot=-322367063983127396 (2017-08-06 10:29:47,434) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 8 records (#nodes=1, #leaves=7) in 5644ms : addrRoot=-157299625278044806 (2017-08-06 10:30:02,378) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 8 records (#nodes=1, #leaves=7) in 5324ms : addrRoot=-477423900288875372 (2017-08-06 10:30:43,478) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 3 records (#nodes=1, #leaves=2) 
in 5084ms : addrRoot=-471912984801704102 (2017-08-06 10:37:35,786) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 10 records (#nodes=1, #leaves=9) in 5425ms : addrRoot=-484499429412305698 (2017-08-06 10:38:32,454) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 7 records (#nodes=1, #leaves=6) in 5190ms : addrRoot=-483755579731344428 (2017-08-06 10:39:02,754) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 9 records (#nodes=1, #leaves=8) in 5068ms : addrRoot=-479980513931557508 (2017-08-06 11:16:07,362) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 4 records (#nodes=1, #leaves=3) in 5123ms : addrRoot=-470074867648035382 (2017-08-06 11:16:37,402) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 8 records (#nodes=1, #leaves=7) in 5707ms : addrRoot=-216317900468778880 (2017-08-06 11:18:25,094) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 9 records (#nodes=1, #leaves=8) in 6517ms : addrRoot=-240262201410059165 (2017-08-06 11:19:14,944) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 9 records (#nodes=1, #leaves=8) in 6839ms : addrRoot=-447453657152091941 (2017-08-06 11:24:05,530) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 8 records (#nodes=1, #leaves=7) in 7016ms : addrRoot=-468221490475563240 (2017-08-06 12:11:37,558) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 7 records (#nodes=1, #leaves=6) in 9428ms : addrRoot=-480367353045972208 (2017-08-06 12:12:04,796) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 9 records (#nodes=1, #leaves=8) in 6050ms : addrRoot=-223437951223200275 (2017-08-06 12:13:48,790) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 10 records (#nodes=1, #leaves=9) in 5192ms : addrRoot=-466270578890766330 (2017-08-06 12:16:29,302) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 7 records (#nodes=1, 
#leaves=6) in 6905ms : addrRoot=-468178824270445329 (2017-08-06 12:17:17,142) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 6 records (#nodes=1, #leaves=5) in 7349ms : addrRoot=-472158957578746043 (2017-08-06 12:21:34,666) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 4 records (#nodes=1, #leaves=3) in 6904ms : addrRoot=-466738979434134057 (2017-08-06 12:24:40,022) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 7 records (#nodes=1, #leaves=6) in 5417ms : addrRoot=-472675466050796023 (2017-08-06 12:27:05,694) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 6 records (#nodes=1, #leaves=5) in 5466ms : addrRoot=-465807233523906567 (2017-08-06 12:32:44,126) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 8 records (#nodes=1, #leaves=7) in 5045ms : addrRoot=-486962167954799636 (2017-08-06 13:11:10,102) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 10 records (#nodes=1, #leaves=9) in 6124ms : addrRoot=-344438372636293322 (2017-08-06 13:11:27,515) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 10 records (#nodes=1, #leaves=9) in 5625ms : addrRoot=-185966242761275980 (2017-08-06 13:12:05,313) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 6 records (#nodes=1, #leaves=5) in 6257ms : addrRoot=-480677449684745155 (2017-08-06 13:12:35,202) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 7 records (#nodes=1, #leaves=6) in 6111ms : addrRoot=-331965499845901291 (2017-08-06 13:13:46,102) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 4 records (#nodes=1, #leaves=3) in 8761ms : addrRoot=-476491097816693499 (2017-08-06 13:14:22,874) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 3 records (#nodes=1, #leaves=2) in 17890ms : addrRoot=-479070449016240249 (2017-08-06 13:14:57,811) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 6 records 
(#nodes=1, #leaves=5) in 9050ms : addrRoot=-191252982430431624 (2017-08-06 13:15:50,470) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 3 records (#nodes=1, #leaves=2) in 6009ms : addrRoot=-484807030675077502 (2017-08-06 13:19:23,587) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 7 records (#nodes=2, #leaves=5) in 26414ms : addrRoot=-474756046993226073 (2017-08-06 13:19:49,358) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 10 records (#nodes=1, #leaves=9) in 12992ms : addrRoot=-235502935264459432 (2017-08-06 13:20:05,254) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 5 records (#nodes=1, #leaves=4) in 15871ms : addrRoot=-235519230370380580 (2017-08-06 13:20:17,867) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 3 records (#nodes=1, #leaves=2) in 12187ms : addrRoot=-182973956226021161 (2017-08-06 13:21:35,823) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 2 records (#nodes=1, #leaves=1) in 9108ms : addrRoot=-487277908180597427 (2017-08-06 13:26:34,794) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 8 records (#nodes=1, #leaves=7) in 6966ms : addrRoot=-433455216273127884 (2017-08-06 13:27:29,178) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 9 records (#nodes=1, #leaves=8) in 6023ms : addrRoot=-320718488031197897 (2017-08-06 17:02:02,782) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 6 records (#nodes=1, #leaves=5) in 6198ms : addrRoot=-218534421421095591 (2017-08-07 02:31:49,378) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.spo.POS, 9 records (#nodes=1, #leaves=8) in 5517ms : addrRoot=-54575990516480614 (2017-08-07 16:39:18,028) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 7 records (#nodes=1, #leaves=6) in 5701ms : addrRoot=-492143904263502327 (2017-08-07 23:35:37,353) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.spo.SPO, 3 
records (#nodes=2, #leaves=1) in 6511ms : addrRoot=-500841423491300433 (2017-08-08 00:01:23,782) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 7 records (#nodes=1, #leaves=6) in 7704ms : addrRoot=-292008637407165309 (2017-08-08 00:02:04,260) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 8 records (#nodes=1, #leaves=7) in 11365ms : addrRoot=-328203748444731276 (2017-08-08 00:04:02,602) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 3 records (#nodes=1, #leaves=2) in 5821ms : addrRoot=-497618063420553348 (2017-08-08 03:57:52,868) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 6 records (#nodes=1, #leaves=5) in 7667ms : addrRoot=-502335646908545043 (2017-08-08 06:34:33,672) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 5 records (#nodes=1, #leaves=4) in 5625ms : addrRoot=-191247433332685635 (2017-08-08 06:35:17,659) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 9 records (#nodes=1, #leaves=8) in 8255ms : addrRoot=-499226206550358828 (2017-08-08 06:36:35,142) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 8 records (#nodes=1, #leaves=7) in 5007ms : addrRoot=-476471761873926550 (2017-08-08 06:37:20,345) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 5 records (#nodes=1, #leaves=4) in 8198ms : addrRoot=-218559289281739081 (2017-08-08 06:39:41,697) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 6 records (#nodes=1, #leaves=5) in 7034ms : addrRoot=-467323331209591081 (2017-08-08 06:40:20,278) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 5 records (#nodes=1, #leaves=4) in 9535ms : addrRoot=-208206442413423270 (2017-08-08 06:40:37,873) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 9 records (#nodes=1, #leaves=8) in 8598ms : addrRoot=-495329627535833504 (2017-08-08 06:41:21,338) WARN : AbstractBTree.java:3758: wrote: 
name=dbpediaFull.lex.search, 7 records (#nodes=1, #leaves=6) in 11696ms : addrRoot=-202360708455725826 (2017-08-08 06:41:58,140) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 7 records (#nodes=1, #leaves=6) in 13327ms : addrRoot=-472124301487635236 (2017-08-08 06:44:40,297) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 5 records (#nodes=1, #leaves=4) in 6272ms : addrRoot=-298386169920485905 (2017-08-08 06:44:55,167) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 9 records (#nodes=1, #leaves=8) in 8759ms : addrRoot=-101002521106971440 (2017-08-08 06:45:01,676) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 10 records (#nodes=1, #leaves=9) in 5636ms : addrRoot=-240103949045070367 (2017-08-08 06:45:08,166) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 7 records (#nodes=1, #leaves=6) in 5993ms : addrRoot=-240649564510484658 (2017-08-08 06:46:23,074) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 6 records (#nodes=1, #leaves=5) in 10409ms : addrRoot=-248767434952014504 (2017-08-08 06:46:35,089) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 10 records (#nodes=1, #leaves=9) in 6669ms : addrRoot=-240629494128310718 (2017-08-08 06:47:20,356) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 9 records (#nodes=1, #leaves=8) in 12621ms : addrRoot=-206724579912055290 (2017-08-08 06:51:51,578) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 8 records (#nodes=1, #leaves=7) in 6666ms : addrRoot=-477387483261171995 (2017-08-08 06:53:41,649) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 10 records (#nodes=1, #leaves=9) in 5096ms : addrRoot=-410044307320338863 (2017-08-08 06:54:40,021) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 4 records (#nodes=1, #leaves=3) in 6004ms : addrRoot=-266958798998993931 (2017-08-08 06:56:19,472) WARN : 
AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 3 records (#nodes=1, #leaves=2) in 6071ms : addrRoot=-220652725061286064 (2017-08-08 06:59:01,519) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 9 records (#nodes=1, #leaves=8) in 8130ms : addrRoot=-223429799375271992 (2017-08-08 06:59:36,264) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 5 records (#nodes=1, #leaves=4) in 9099ms : addrRoot=-470072956387588525 (2017-08-08 07:00:51,034) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 4 records (#nodes=1, #leaves=3) in 6912ms : addrRoot=-488506968906922189 (2017-08-08 07:02:49,826) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 6 records (#nodes=1, #leaves=5) in 12203ms : addrRoot=-166140798276991134 (2017-08-08 07:03:19,127) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 7 records (#nodes=1, #leaves=6) in 12804ms : addrRoot=-468189213796334247 (2017-08-08 07:03:57,826) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 4 records (#nodes=1, #leaves=3) in 14045ms : addrRoot=-465479643483339063 (2017-08-08 07:05:20,394) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 10 records (#nodes=1, #leaves=9) in 5672ms : addrRoot=-468603995967977702 (2017-08-08 07:05:40,793) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 5 records (#nodes=1, #leaves=4) in 11554ms : addrRoot=-473089861675383376 (2017-08-08 07:05:46,492) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 7 records (#nodes=1, #leaves=6) in 5673ms : addrRoot=-488978517661318052 (2017-08-08 07:06:22,709) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 9 records (#nodes=1, #leaves=8) in 6632ms : addrRoot=-480815296660110029 (2017-08-08 07:08:53,584) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 2 records (#nodes=1, #leaves=1) in 5270ms : addrRoot=-472696180678064583 (2017-08-08 
07:09:27,072) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 9 records (#nodes=1, #leaves=8) in 6722ms : addrRoot=-465474098680560303 (2017-08-08 07:09:37,923) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 10 records (#nodes=1, #leaves=9) in 10594ms : addrRoot=-480796038026754304 (2017-08-08 07:10:53,491) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 4 records (#nodes=1, #leaves=3) in 8516ms : addrRoot=-501599382139828426 (2017-08-08 07:11:13,564) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 4 records (#nodes=1, #leaves=3) in 8444ms : addrRoot=-475721611475942222 (2017-08-08 07:12:40,948) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 5 records (#nodes=1, #leaves=4) in 6848ms : addrRoot=-478331585792309952 (2017-08-08 07:21:54,576) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 7 records (#nodes=1, #leaves=6) in 5048ms : addrRoot=-503563204396317028 (2017-08-08 07:22:36,608) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 1 records (#nodes=1, #leaves=0) in 7802ms : addrRoot=-503150518168713924 (2017-08-08 07:22:54,876) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 1 records (#nodes=1, #leaves=0) in 7230ms : addrRoot=-165623155933574848 (2017-08-08 07:23:14,836) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 7 records (#nodes=1, #leaves=6) in 7239ms : addrRoot=-165641392364713895 (2017-08-08 07:23:22,131) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 3 records (#nodes=1, #leaves=2) in 5593ms : addrRoot=-491284799135152729 (2017-08-08 07:23:34,367) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 4 records (#nodes=1, #leaves=3) in 5008ms : addrRoot=-501217550957279732 (2017-08-08 07:23:45,635) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 3 records (#nodes=1, #leaves=2) in 5328ms : addrRoot=-479889860056841070 
(2017-08-08 07:23:55,570) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 8 records (#nodes=1, #leaves=7) in 9090ms : addrRoot=-475686852305615324 (2017-08-08 07:24:05,968) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 1 records (#nodes=1, #leaves=0) in 9349ms : addrRoot=-502349880430164298 (2017-08-08 07:24:23,354) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 3 records (#nodes=1, #leaves=2) in 8993ms : addrRoot=-503596331479071261 (2017-08-08 07:24:40,060) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 3 records (#nodes=1, #leaves=2) in 10312ms : addrRoot=-489365245106583230 (2017-08-08 07:25:15,164) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 5 records (#nodes=1, #leaves=4) in 5138ms : addrRoot=-480342768653170271 (2017-08-08 07:25:24,844) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 3 records (#nodes=1, #leaves=2) in 6432ms : addrRoot=-480344503819957834 (2017-08-08 07:25:32,517) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 7 records (#nodes=1, #leaves=6) in 6025ms : addrRoot=-503618695373781375 (2017-08-08 08:25:56,468) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 10 records (#nodes=1, #leaves=9) in 5264ms : addrRoot=-417618799974414700 (2017-08-08 11:02:09,951) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.spo.POS, 10 records (#nodes=1, #leaves=9) in 12837ms : addrRoot=-493369413051874318 (2017-08-08 12:58:43,590) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 8 records (#nodes=1, #leaves=7) in 11997ms : addrRoot=-504167987331201920 (2017-08-08 14:20:19,026) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 3 records (#nodes=1, #leaves=2) in 5518ms : addrRoot=-478234098624625428 (2017-08-08 14:54:19,232) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 10 records (#nodes=1, #leaves=9) in 5304ms : 
addrRoot=-487646961835441083 (2017-08-08 14:55:40,093) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 9 records (#nodes=1, #leaves=8) in 6248ms : addrRoot=-210078816226244855 (2017-08-08 14:55:57,958) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 5 records (#nodes=1, #leaves=4) in 17230ms : addrRoot=-429840485897468659 (2017-08-08 16:48:11,614) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 7 records (#nodes=1, #leaves=6) in 8048ms : addrRoot=-509172285656005179 (2017-08-08 17:55:22,324) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 4 records (#nodes=1, #leaves=3) in 5646ms : addrRoot=-502340912538450722 (2017-08-08 19:33:59,003) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 4 records (#nodes=1, #leaves=3) in 8025ms : addrRoot=-129249056428915346 (2017-08-08 19:34:47,225) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 7 records (#nodes=1, #leaves=6) in 6955ms : addrRoot=-490535799788403626 (2017-08-08 19:35:16,505) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 4 records (#nodes=2, #leaves=2) in 5946ms : addrRoot=-510555312369956162 (2017-08-08 19:35:51,408) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 8 records (#nodes=1, #leaves=7) in 12315ms : addrRoot=-471234989854292511 (2017-08-08 19:36:31,049) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 10 records (#nodes=1, #leaves=9) in 5778ms : addrRoot=-486552153196853469 (2017-08-08 19:36:58,348) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 4 records (#nodes=1, #leaves=3) in 5543ms : addrRoot=-507272784829740737 (2017-08-08 19:39:59,884) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 6 records (#nodes=1, #leaves=5) in 7975ms : addrRoot=-470713469155407762 (2017-08-08 19:43:13,313) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.spo.SPO, 3 records (#nodes=1, #leaves=2) in 
5412ms : addrRoot=-511037972204747120 (2017-08-08 19:43:53,539) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.spo.SPO, 9 records (#nodes=1, #leaves=8) in 5174ms : addrRoot=-508843686298121297 (2017-08-08 21:21:40,167) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 2 records (#nodes=1, #leaves=1) in 7373ms : addrRoot=-242886495147391757 (2017-08-08 21:22:09,710) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 1 records (#nodes=1, #leaves=0) in 5726ms : addrRoot=-509244213473311624 (2017-08-08 21:22:19,553) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 6 records (#nodes=1, #leaves=5) in 9374ms : addrRoot=-485587559376747209 (2017-08-08 21:26:04,940) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 8 records (#nodes=1, #leaves=7) in 8474ms : addrRoot=-487327935959660647 (2017-08-08 21:26:42,081) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 2 records (#nodes=1, #leaves=1) in 8469ms : addrRoot=-511903975050573775 (2017-08-09 00:20:19,026) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 9 records (#nodes=1, #leaves=8) in 6793ms : addrRoot=-507612791620761874 (2017-08-09 00:21:49,268) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 5 records (#nodes=1, #leaves=4) in 10039ms : addrRoot=-11180938037820454 (2017-08-09 00:42:48,195) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 6 records (#nodes=1, #leaves=5) in 6269ms : addrRoot=-500745284943346262 (2017-08-09 05:52:14,796) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.spo.OSP, 10 records (#nodes=1, #leaves=9) in 5398ms : addrRoot=-463486460830415496 (2017-08-09 10:14:52,182) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 8 records (#nodes=1, #leaves=7) in 10013ms : addrRoot=-466443833576520237 (2017-08-09 10:16:19,888) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 9 records (#nodes=1, #leaves=8) in 
9752ms : addrRoot=-481932645287066282 (2017-08-09 15:12:57,553) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.spo.SPO, 1 records (#nodes=1, #leaves=0) in 5045ms : addrRoot=-220702417832900584 (2017-08-09 15:12:57,553) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.spo.OSP, 7 records (#nodes=1, #leaves=6) in 5046ms : addrRoot=-498536731155364764 (2017-08-11 02:51:48,003) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 6 records (#nodes=1, #leaves=5) in 7640ms : addrRoot=-507578839904287712 (2017-08-11 02:52:11,481) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 9 records (#nodes=1, #leaves=8) in 23024ms : addrRoot=-197892490768940061 (2017-08-11 05:15:33,037) WARN : AbstractBTree.java:3758: wrote: name=dbpediaFull.lex.search, 8 records (#nodes=1, #leaves=7) in 12701ms : addrRoot=-331983392679656360 (2017-08-11 07:29:12,249) ERROR: BigdataRDFServlet.java:214: cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: java.io.UTFDataFormatException: encoded string too long: 73257 bytes, query=DATALOADER-SERVLET: dbpediaFull java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: java.io.UTFDataFormatException: encoded string too long: 73257 bytes at java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.util.concurrent.FutureTask.get(FutureTask.java:192) at com.bigdata.rdf.sail.webapp.BigdataServlet.submitApiTask(BigdataServlet.java:281) at com.bigdata.rdf.sail.webapp.DataLoaderServlet.doBulkLoad(DataLoaderServlet.java:310) at com.bigdata.rdf.sail.webapp.DataLoaderServlet.doPost(DataLoaderServlet.java:108) at javax.servlet.http.HttpServlet.service(HttpServlet.java:707) at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:769) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585) at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1125) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1059) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215) at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) at org.eclipse.jetty.server.Server.handle(Server.java:497) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:248) at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:610) at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:539) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.RuntimeException: java.lang.RuntimeException: java.io.UTFDataFormatException: encoded string too long: 73257 bytes at com.bigdata.rdf.lexicon.LexiconRelation.addBlobs(LexiconRelation.java:1968) at com.bigdata.rdf.lexicon.LexiconRelation.addTerms(LexiconRelation.java:1877) at com.bigdata.rdf.rio.StatementBuffer$Batch.addTerms(StatementBuffer.java:1951) at com.bigdata.rdf.rio.StatementBuffer$Batch.writeNow(StatementBuffer.java:1881) at com.bigdata.rdf.rio.StatementBuffer$Batch.access$1000(StatementBuffer.java:1645) at 
com.bigdata.rdf.rio.StatementBuffer.incrementalWrite(StatementBuffer.java:1362) at com.bigdata.rdf.rio.StatementBuffer.add(StatementBuffer.java:2240) at com.bigdata.rdf.rio.StatementBuffer.add(StatementBuffer.java:2219) at com.bigdata.rdf.rio.PresortRioLoader.handleStatement(PresortRioLoader.java:162) at org.openrdf.rio.turtle.TurtleParser.reportStatement(TurtleParser.java:1155) at org.openrdf.rio.turtle.TurtleParser.parseObject(TurtleParser.java:505) at org.openrdf.rio.turtle.TurtleParser.parseObjectList(TurtleParser.java:428) at org.openrdf.rio.turtle.TurtleParser.parsePredicateObjectList(TurtleParser.java:400) at org.openrdf.rio.turtle.TurtleParser.parseTriples(TurtleParser.java:385) at org.openrdf.rio.turtle.TurtleParser.parseStatement(TurtleParser.java:261) at org.openrdf.rio.turtle.TurtleParser.parse(TurtleParser.java:216) at com.bigdata.rdf.rio.BasicRioLoader.loadRdf2(BasicRioLoader.java:236) at com.bigdata.rdf.rio.BasicRioLoader.loadRdf(BasicRioLoader.java:176) at com.bigdata.rdf.store.DataLoader.loadData4_ParserErrors_Not_Trapped(DataLoader.java:1595) at com.bigdata.rdf.store.DataLoader.loadFiles(DataLoader.java:1359) at com.bigdata.rdf.store.DataLoader.loadFiles(DataLoader.java:1301) at com.bigdata.rdf.sail.webapp.DataLoaderServlet$DataLoaderTask.call(DataLoaderServlet.java:473) at com.bigdata.rdf.sail.webapp.DataLoaderServlet$DataLoaderTask.call(DataLoaderServlet.java:334) at com.bigdata.rdf.task.ApiTaskForIndexManager.call(ApiTaskForIndexManager.java:68) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ... 
1 more Caused by: java.lang.RuntimeException: java.io.UTFDataFormatException: encoded string too long: 73257 bytes at com.bigdata.rdf.model.BigdataValueSerializer.serialize2(BigdataValueSerializer.java:302) at com.bigdata.rdf.model.BigdataValueSerializer.serialize(BigdataValueSerializer.java:250) at com.bigdata.rdf.lexicon.BlobsIndexHelper.generateKVOs(BlobsIndexHelper.java:178) at com.bigdata.rdf.lexicon.BlobsWriteTask.call(BlobsWriteTask.java:124) at com.bigdata.rdf.lexicon.LexiconRelation.addBlobs(LexiconRelation.java:1965) ... 27 more Caused by: java.io.UTFDataFormatException: encoded string too long: 73257 bytes at java.io.DataOutputStream.writeUTF(DataOutputStream.java:364) at java.io.DataOutputStream.writeUTF(DataOutputStream.java:323) at com.bigdata.io.DataOutputBuffer.writeUTF(DataOutputBuffer.java:334) at com.bigdata.rdf.model.BigdataValueSerializer.serializeVersion0(BigdataValueSerializer.java:441) at com.bigdata.rdf.model.BigdataValueSerializer.serialize2(BigdataValueSerializer.java:282) |
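The root cause at the bottom of this stack trace is not Blazegraph-specific: `java.io.DataOutputStream.writeUTF` stores the encoded length in an unsigned 16-bit prefix, so any string whose modified-UTF-8 encoding exceeds 65535 bytes (here a 73257-byte literal in the DBpedia dump) throws `UTFDataFormatException`. A minimal sketch reproducing the limit:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UTFDataFormatException;
import java.util.Arrays;

public class Utf8LimitDemo {

    static String repeat(char c, int n) {
        char[] buf = new char[n];
        Arrays.fill(buf, c);
        return new String(buf);
    }

    public static void main(String[] args) throws IOException {
        DataOutputStream out =
                new DataOutputStream(new ByteArrayOutputStream());

        // 65535 bytes is the largest encoding writeUTF can record
        // in its two-byte length prefix; this call succeeds.
        out.writeUTF(repeat('a', 65535));
        System.out.println("65535 bytes: ok");

        try {
            // Same size as the literal in the log above; this throws.
            out.writeUTF(repeat('a', 73257));
        } catch (UTFDataFormatException e) {
            System.out.println("73257 bytes: " + e.getMessage());
        }
    }
}
```

Given that, one workaround (an assumption, not a documented fix) is to pre-filter or truncate literals whose UTF-8 encoding exceeds 64 KB before bulk loading.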
From: Jim B. <ba...@gm...> - 2017-08-08 02:06:22
|
Hi,

I've been experimenting with named subqueries and finding they can provide some major performance improvements. The Blazegraph wiki page on query hints lists "runOnce", saying "The sub-select should be lifted into a named subquery such that it is evaluated exactly once. See NamedSubquery." I thought this would let me use named subqueries while staying within legal SPARQL syntax (easier to construct queries programmatically). However, I find that using the query hint is much slower (3.5 seconds) than writing out the named subquery (0.3 seconds).

Here is an example with a named subquery:

SELECT DISTINCT ?taxon ?taxon_label ?desc
WITH {
  SELECT ?phenotype
  WHERE { ?phenotype rdfs:subClassOf/ps:phenotype_of_some pectoral_fin: . }
} AS %namedSet1
WHERE {
  ?taxon rdfs:label ?taxon_label .
  ?state dc:description ?desc .
  ?taxon ps:exhibits_state ?state .
  ?state ps:describes_phenotype ?phenotype .
  INCLUDE %namedSet1
}
ORDER BY ?taxon_label
LIMIT 50

Here is the "same" query written as a regular subquery with the query hint:

SELECT DISTINCT ?taxon ?taxon_label ?desc
WHERE {
  ?taxon rdfs:label ?taxon_label .
  ?state dc:description ?desc .
  ?taxon ps:exhibits_state ?state .
  ?state ps:describes_phenotype ?phenotype .
  {
    SELECT ?phenotype
    WHERE {
      hint:SubQuery hint:runOnce true .
      ?phenotype rdfs:subClassOf/ps:phenotype_of_some pectoral_fin: .
    }
  }
}
ORDER BY ?taxon_label
LIMIT 50

Is the query hint supposed to do the same thing as the named subquery? The query hint is definitely doing something, because without it the query never completes in the time I have waited.

Thank you,
Jim |