This list is closed, nobody may subscribe to it.
Archived messages per month:

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| 2010 |     | 19  | 8   | 25  | 16  | 77  | 131 | 76  | 30  | 7   | 3   |     |
| 2011 |     |     |     |     | 2   | 2   | 16  | 3   | 1   |     | 7   | 7   |
| 2012 | 10  | 1   | 8   | 6   | 1   | 3   | 1   |     | 1   |     | 8   | 2   |
| 2013 | 5   | 12  | 2   | 1   | 1   | 1   | 22  | 50  | 31  | 64  | 83  | 28  |
| 2014 | 31  | 18  | 27  | 39  | 45  | 15  | 6   | 27  | 6   | 67  | 70  | 1   |
| 2015 | 3   | 18  | 22  | 121 | 42  | 17  | 8   | 11  | 26  | 15  | 66  | 38  |
| 2016 | 14  | 59  | 28  | 44  | 21  | 12  | 9   | 11  | 4   | 2   | 1   |     |
| 2017 | 20  | 7   | 4   | 18  | 7   | 3   | 13  | 2   | 4   | 9   | 2   | 5   |
| 2018 |     |     |     | 2   |     |     |     |     |     |     |     |     |
| 2019 |     |     | 1   |     |     |     |     |     |     |     |     |     |
From: Bryan T. <br...@sy...> - 2015-10-29 18:46:35
Copying the journal while it is under update is not supported and could yield this outcome. You could try returning to the previous commit point; see com.bigdata.journal.Options for an option to open the previous root block. However, a copy taken under update could easily be bad in a fashion that is not recoverable (in the copy). The original journal should, of course, be fine.

Thanks,
Bryan

----
Bryan Thompson
Chief Scientist & Founder
SYSTAP, LLC
4501 Tower Road
Greensboro, NC 27410
br...@sy...
http://blazegraph.com
http://blog.blazegraph.com

Blazegraph™ <http://www.blazegraph.com/> is our ultra high-performance graph database that supports both RDF/SPARQL and Tinkerpop/Blueprints APIs. Blazegraph is now available with GPU acceleration, using our disruptive technology to accelerate data-parallel graph analytics and graph query.

CONFIDENTIALITY NOTICE: This email and its contents and attachments are for the sole use of the intended recipient(s) and are confidential or proprietary to SYSTAP. Any unauthorized review, use, disclosure, dissemination or copying of this email or its contents or attachments is prohibited. If you have received this communication in error, please notify the sender by reply email and permanently delete all copies of the email and its contents and attachments.

On Thu, Oct 29, 2015 at 2:18 PM, Jeremy J Carroll <jj...@sy...> wrote:

> Does this error message mean the journal file is corrupt?
>
> One theory we have is that the journal file may have been copied while
> updates were in progress (which I do not believe is supported)
>
> thanks
>
> Jeremy
>
> [...]
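For readers hitting the same error, a minimal sketch of the recovery step Bryan describes, as a journal properties file. The alternateRootBlock key below is an assumption; verify the exact name against com.bigdata.journal.Options in your installed version before relying on it.

    # Journal properties for attempting to open the store at the previous
    # commit point. NOTE: the alternateRootBlock key is an assumption --
    # check com.bigdata.journal.Options for the exact option name.
    com.bigdata.journal.AbstractJournal.file=bigdata.jnl
    com.bigdata.journal.AbstractJournal.bufferMode=DiskRW
    # Open against the previous (alternate) root block instead of the current one.
    com.bigdata.journal.AbstractJournal.alternateRootBlock=true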
From: Jeremy J C. <jj...@sy...> - 2015-10-29 18:44:05
Does this error message mean the journal file is corrupt?

One theory we have is that the journal file may have been copied while updates were in progress (which I do not believe is supported).

thanks

Jeremy

WARN : 1 main org.eclipse.jetty.util.component.AbstractLifeCycle.setFailed(AbstractLifeCycle.java:212): FAILED o.e.j.w.WebAppContext@887af79{/bigdata,file:/usr/share/blazegraph/var/jetty/,STARTING}{/usr/share/blazegraph/var/jetty}: java.lang.Error: Two allocators at same address
java.lang.Error: Two allocators at same address
    at com.bigdata.rwstore.FixedAllocator.compareTo(FixedAllocator.java:102)
    at java.util.ComparableTimSort.mergeLo(ComparableTimSort.java:684)
    at java.util.ComparableTimSort.mergeAt(ComparableTimSort.java:481)
    at java.util.ComparableTimSort.mergeCollapse(ComparableTimSort.java:406)
    at java.util.ComparableTimSort.sort(ComparableTimSort.java:213)
    at java.util.Arrays.sort(Arrays.java:1312)
    at java.util.Arrays.sort(Arrays.java:1506)
    at java.util.ArrayList.sort(ArrayList.java:1454)
    at java.util.Collections.sort(Collections.java:141)
    at com.bigdata.rwstore.RWStore.readAllocationBlocks(RWStore.java:1683)
    at com.bigdata.rwstore.RWStore.initfromRootBlock(RWStore.java:1558)
    at com.bigdata.rwstore.RWStore.<init>(RWStore.java:970)
    at com.bigdata.journal.RWStrategy.<init>(RWStrategy.java:137)
    at com.bigdata.journal.AbstractJournal.<init>(AbstractJournal.java:1265)
    at com.bigdata.journal.Journal.<init>(Journal.java:275)
    at com.bigdata.journal.Journal.<init>(Journal.java:268)
    at com.bigdata.rdf.sail.webapp.BigdataRDFServletContextListener.openIndexManager(BigdataRDFServletContextListener.java:798)
    at com.bigdata.rdf.sail.webapp.BigdataRDFServletContextListener.contextInitialized(BigdataRDFServletContextListener.java:276)
    at org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:798)
    at org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:444)
    at org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:789)
    at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:294)
    at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1341)
    at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1334)
    at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:741)
    at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:497)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)
    at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
    at org.eclipse.jetty.server.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:163)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)
    at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)
    at org.eclipse.jetty.server.Server.start(Server.java:387)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)
    at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
    at org.eclipse.jetty.server.Server.doStart(Server.java:354)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at com.bigdata.rdf.sail.webapp.NanoSparqlServer.awaitServerStart(NanoSparqlServer.java:485)
    at com.bigdata.rdf.sail.webapp.NanoSparqlServer.main(NanoSparqlServer.java:449)
From: Mikel E. A. <mik...@gm...> - 2015-10-29 10:21:59
Find attached a screenshot of the log. Let me know if this suffices.

Thanks

2015-10-29 2:04 GMT+01:00 Brad Bebee <be...@sy...>:

> Mikel,
>
> Do you get any javascript output when accessing the workbench when it
> works initially?
>
> Thanks, --Brad
>
> [...]

--
Mikel Egaña Aranguren, Ph.D.
http://mikeleganaaranguren.com
From: Brad B. <be...@sy...> - 2015-10-29 01:04:30
Mikel,

Do you get any javascript output when accessing the workbench when it works initially?

Thanks, --Brad

On Wed, Oct 28, 2015 at 11:39 AM, Mikel Egaña Aranguren <mik...@gm...> wrote:

> Hi;
>
> I'm still getting the same error. I can access
> http://127.0.0.1:9999/bigdata/#splash but then if I access the
> container's IP (obtained with inspect, 172.17.0.4) the connection is
> refused:
>
> [...]

--
_______________
Brad Bebee
CEO, Managing Partner
SYSTAP, LLC
e: be...@sy...
m: 202.642.7961
f: 571.367.5000
w: www.blazegraph.com

Blazegraph™ <http://www.blazegraph.com> is our ultra high-performance graph database that supports both RDF/SPARQL and Tinkerpop/Blueprints APIs. Mapgraph™ <http://www.systap.com/mapgraph> is our disruptive new technology to use GPUs to accelerate data-parallel graph analytics.

CONFIDENTIALITY NOTICE: This email and its contents and attachments are for the sole use of the intended recipient(s) and are confidential or proprietary to SYSTAP, LLC. Any unauthorized review, use, disclosure, dissemination or copying of this email or its contents or attachments is prohibited. If you have received this communication in error, please notify the sender by reply email and permanently delete all copies of the email and its contents and attachments.
From: Brad B. <be...@sy...> - 2015-10-28 18:07:40
Joel,

Thank you. The default executable jar uses an internal jetty configuration that does not restrict access to the endpoints. You can control endpoint access by deploying it with a customized jetty configuration: either create a custom jetty.xml and pass it via -DjettyXml=/path/to/your/file, or deploy it within another jetty instance or a tomcat web app container.

Thanks, --Brad

On Wed, Oct 28, 2015 at 1:25 PM, Sachs, Joel <Joe...@ag...> wrote:

> Hi,
>
> I haven't found anything in the documentation about controlling access to
> the database via user accounts. Can Update queries be restricted to
> particular users? More generally, if I've run Blazegraph from the command
> line (java -server -Xmx4g -jar bigdata-bundled.jar), am I able to control
> access to the endpoint?
>
> Many thanks,
> Joel.

--
_______________
Brad Bebee
CEO, Managing Partner
SYSTAP, LLC
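A minimal sketch of the first option Brad mentions, reusing the command line from Joel's message; the jetty.xml path is an illustrative assumption.

    # Start the bundled server with a custom jetty configuration that can
    # restrict access to the endpoints (the path is an assumption).
    java -server -Xmx4g -DjettyXml=/etc/blazegraph/jetty.xml -jar bigdata-bundled.jar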
From: Sachs, J. <Joe...@AG...> - 2015-10-28 17:45:11
Hi,

I haven't found anything in the documentation about controlling access to the database via user accounts. Can Update queries be restricted to particular users? More generally, if I've run Blazegraph from the command line (java -server -Xmx4g -jar bigdata-bundled.jar), am I able to control access to the endpoint?

Many thanks,
Joel.
From: Mikel E. A. <mik...@gm...> - 2015-10-28 15:39:13
Hi;

I'm still getting the same error. I can access http://127.0.0.1:9999/bigdata/#splash but then if I access the container's IP (obtained with inspect, 172.17.0.4) the connection is refused:

mikel@durruti:~$ wget http://172.17.0.4:9999/bigdata/sparql?query=select%20*%20where%20{%20?s%20?p%20?o%20}%20limit%201
Connecting to 172.17.0.4:9999... failed: Connection refused

Thanks

2015-10-28 16:25 GMT+01:00 Brad Bebee <be...@sy...>:

> Mikel,
>
> You'll need to make sure to use the IP of the docker container to access
> the Blazegraph instance. Outside of the container, it will not be running
> on 127.0.0.1, but rather on the IP assigned to the docker container.
>
> You'll need to run something like: docker inspect --format '{{ .NetworkSettings.IPAddress }}' "CONTAINER ID"
>
> Let us know if that works.
>
> Thanks, --Brad
>
> [...]

--
Mikel Egaña Aranguren, Ph.D.
http://mikeleganaaranguren.com
From: Brad B. <be...@sy...> - 2015-10-28 15:25:56
Mikel,

You'll need to make sure to use the IP of the docker container to access the Blazegraph instance. Outside of the container, it will not be running on 127.0.0.1, but rather on the IP assigned to the docker container.

You'll need to run something like: docker inspect --format '{{ .NetworkSettings.IPAddress }}' "CONTAINER ID"

Let us know if that works.

Thanks, --Brad

On Wed, Oct 28, 2015 at 11:11 AM, Mikel Egaña Aranguren <mik...@gm...> wrote:

> Hi;
>
> I'm preparing a Docker image that includes Blazegraph, but Blazegraph is
> not working. It must be a problem with the Docker setup, but perhaps
> someone can give a pointer. (I have also submitted the question to Stack
> Overflow [1].)
>
> [...]

--
_______________
Brad Bebee
CEO, Managing Partner
SYSTAP, LLC
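A short sketch of the check Brad describes; the inspect format string is taken from his message, while the container name and the test query are illustrative assumptions.

    # Resolve the container's internal IP, then probe the SPARQL endpoint on it.
    # "linked-data-server" is a hypothetical container name.
    IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' linked-data-server)
    # Ask for a single triple to confirm the endpoint answers.
    wget -qO- "http://${IP}:9999/bigdata/sparql?query=select%20%2A%20where%20%7B%20%3Fs%20%3Fp%20%3Fo%20%7D%20limit%201"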
From: Mikel E. A. <mik...@gm...> - 2015-10-28 15:11:36
Hi;

I'm preparing a Docker image that includes Blazegraph, but Blazegraph is not working. It must be a problem with the Docker setup, but perhaps someone can give a pointer. (I have also submitted the question to Stack Overflow [1].)

The Dockerfile [2] simply copies a working installation of blazegraph and executes the nano sparql server:

COPY blazegraph /LinkedDataServer/blazegraph
CMD java -server -jar blazegraph/bigdata-bundled.jar

When I run the container (also including pubby and refine):

docker run -d -p 9999:9999 -p 3333:3333 -p 8080:8080 mikeleganaaranguren/linked-data-server:0.0.1

It runs fine, i.e. I can access Blazegraph at http://127.0.0.1:9999. However, when I try to do a query or update data I get "ERROR: Could not contact server". Also, I can't access the server programmatically (Connection reset by peer):

wget http://127.0.0.1:9999/bigdata/sparql?query=select%20*%20where%20{%20?s%20?p%20?o%20}%20limit%201

However, when I enter the image and run blazegraph manually I can do the wget above, both to localhost and the "internal" IP of the image. Obviously there is something wrong with the Docker setup, but any clues will be welcome. Also, is there any blazegraph log I can access when running blazegraph like this?

Thanks

[1] http://stackoverflow.com/questions/33368741/docker-container-with-blazegraph-triple-store-not-working-possibly-due-to-networ
[2]

FROM ubuntu:14.04
MAINTAINER Mikel Egaña Aranguren <my.email@x.com>

RUN apt-get update && apt-get install -y openjdk-7-jre wget curl

RUN mkdir /LinkedDataServer

COPY google-refine-2.5 /LinkedDataServer/google-refine-2.5
COPY blazegraph /LinkedDataServer/blazegraph
COPY jetty /LinkedDataServer/jetty

EXPOSE 9999
EXPOSE 3333
EXPOSE 8080

WORKDIR /LinkedDataServer
CMD java -server -jar blazegraph/bigdata-bundled.jar
CMD google-refine-2.5/refine -i 0.0.0.0

WORKDIR /LinkedDataServer/jetty
CMD java -jar start.jar jetty.port=8080

--
Mikel Egaña Aranguren, Ph.D.
http://mikeleganaaranguren.com
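One detail worth flagging in the Dockerfile above: Docker executes only the final CMD of an image, so the Blazegraph and Refine commands are silently dropped in favor of the jetty one. A hedged sketch of a single-entrypoint variant; the start-all.sh wrapper is an assumption, not the poster's actual fix.

    # Only the final CMD takes effect, so delegate to one wrapper script
    # (hypothetical start-all.sh) that launches all three services.
    FROM ubuntu:14.04
    RUN apt-get update && apt-get install -y openjdk-7-jre wget curl
    COPY google-refine-2.5 /LinkedDataServer/google-refine-2.5
    COPY blazegraph /LinkedDataServer/blazegraph
    COPY jetty /LinkedDataServer/jetty
    COPY start-all.sh /LinkedDataServer/start-all.sh
    EXPOSE 9999 3333 8080
    WORKDIR /LinkedDataServer
    # start-all.sh would background refine and jetty, then run Blazegraph in
    # the foreground so the container stays alive.
    CMD ["/bin/bash", "/LinkedDataServer/start-all.sh"]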
From: Bryan T. <br...@sy...> - 2015-10-26 12:24:44
It depends. One (offline) procedure is outlined here:

- https://wiki.blazegraph.com/wiki/index.php/DataMigration

Different procedures might be appropriate for exporting a named graph from a quads namespace, or for exporting while the database is online.

Thanks,
Bryan

----
Bryan Thompson
Chief Scientist & Founder
SYSTAP, LLC

On Fri, Oct 23, 2015 at 2:06 PM, Joakim Soderberg <joa...@bl...> wrote:

> Hi,
> What is the recommended (fastest) way to export data from a KB? I am
> currently using Blazegraph 1.5.2.
>
> thanks
> joakim
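A hedged sketch of the offline route: the DataMigration page describes an export utility, but the class name and arguments below are assumptions to be verified against that page and the installed version.

    # Offline export of a KB to RDF files (class name and arguments are
    # assumptions -- confirm them on the DataMigration wiki page).
    java -cp bigdata-bundled.jar com.bigdata.rdf.sail.ExportKB \
        RWStore.properties /tmp/export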
From: Joakim S. <joa...@bl...> - 2015-10-23 18:35:26
Hi,
What is the recommended (fastest) way to export data from a KB? I am currently using Blazegraph 1.5.2.

thanks
joakim
From: Brad B. <be...@sy...> - 2015-10-21 13:39:12
Jerven,

Thank you. We've created https://jira.blazegraph.com/browse/BLZG-1585 to track this one.

Thanks, --Brad

On Wed, Oct 21, 2015 at 9:12 AM, Jerven Tjalling Bolleman <Jer...@is...> wrote:

> Hi BlazeGraph developers,
>
> The SPARQL that BlazeGraph generates to send to other endpoints in response
> to a SERVICE call uses the non-standard BINDINGS keyword instead of the
> finally accepted VALUES clause.
>
> I believe this can be fixed with the patch below.
>
> [...]

--
_______________
Brad Bebee
CEO, Managing Partner
SYSTAP, LLC
From: Jerven T. B. <Jer...@is...> - 2015-10-21 13:12:34
Hi BlazeGraph developers,

The SPARQL that BlazeGraph generates to send to other endpoints in response to a SERVICE call uses the non-standard BINDINGS keyword instead of the finally accepted VALUES clause.

I believe this can be fixed with the patch below.

This was raised by users of the sparql.uniprot.org endpoint who want to use data in their blazegraph in combination with our virtuoso instance guarded by a sesame layer.

Regards,
Jerven

index c44a626..8000b3d 100644
--- a/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/service/RemoteSparql11QueryBuilder.java
+++ b/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/service/RemoteSparql11QueryBuilder.java
@@ -293,7 +293,7 @@ public class RemoteSparql11QueryBuilder implements IRemoteSparqlQueryBuilder {
         // Variables in a known stable order.
         final LinkedHashSet<String> vars = getDistinctVars(bindingSets);

-        sb.append("BINDINGS");
+        sb.append("VALUES");

         // Variable declarations.
         {

--
Jerven Tjalling Bolleman
SIB | Swiss Institute of Bioinformatics
CMU - 1, rue Michel Servet - 1211 Geneva 4
t: +41 22 379 58 85 - f: +41 22 379 58 58
Jer...@is... - http://www.isb-sib.ch
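For context, a minimal example of the standardized SPARQL 1.1 VALUES form that the patch switches to; BINDINGS was the equivalent keyword in earlier Federated Query drafts. Variable names and IRIs are illustrative.

    SELECT ?s ?p ?o
    WHERE { ?s ?p ?o }
    VALUES (?s) {
      (<http://example.org/a>)
      (<http://example.org/b>)
    }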
From: Brad B. <be...@sy...> - 2015-10-13 14:29:06
Felix,

You will definitely want to adjust the properties a bit. Check out [1] and then generate a Vocabulary per the JIRA ticket [2]. That should improve your performance substantially. I've also copied the blazegraph developers list to see if anyone else has recommendations.

Thanks, --Brad

[1] https://sourceforge.net/p/bigdata/git/ci/master/tree/bigdata-sails/src/samples/com/bigdata/samples/fastload.properties
[2] https://jira.blazegraph.com/browse/BLZG-1509

On Tue, Oct 13, 2015 at 10:19 AM, <mai...@st...> wrote:

> Hey,
>
> thanks for the fast answer.
> The journal file looks like this:
>
> com.bigdata.journal.AbstractJournal.bufferMode=DiskRW
> com.bigdata.journal.AbstractJournal.file=bigdata.jnl
>
> Thanks
> Felix
>
> Quoting Brad Bebee <be...@sy...>:
>
>> Felix,
>>
>> Thank you. Can you send us some details of your journal configuration?
>> It may be beneficial to use a Vocabulary to inline the DBpedia values for
>> improved load performance.
>>
>> Thanks, --Brad
>>
>> On Tue, Oct 13, 2015 at 10:10 AM, Blazegraph Web Site Contact <bla...@bl...> wrote:
>>
>>> From: Felix Conrads <mai...@st...>
>>> Subject: Benchmarking Blazegraph
>>>
>>> Message Body:
>>>
>>> Name: Felix Conrads
>>> Email: mai...@st...
>>> Message: Hello,
>>>
>>> I'm Felix Conrads from the AKSW Research Group at the University of Leipzig.
>>> I am currently evaluating a benchmark execution framework with several
>>> triplestores, including Blazegraph. While Blazegraph works fine, it needs
>>> too much time to upload some of our datasets into it. These are the DBpedia
>>> datasets with ~217,000,000 triples and ~434,000,000 triples.
>>> I tried to upload the first one with the method described here:
>>> https://wiki.blazegraph.com/wiki/index.php/Bulk_Data_Load
>>> If you could tell me if there is any possibility to upload them faster
>>> than >15 hours (I aborted it there), I would be very thankful.
>>>
>>> Thanks in advance and best regards
>>> Felix Conrads
>>>
>>> --
>>> This e-mail was sent from a contact form on (http://www.blazegraph.com)

--
_______________
Brad Bebee
CEO, Managing Partner
SYSTAP, LLC
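A sketch of the bulk-load invocation under discussion, assuming the DataLoader utility from the Bulk_Data_Load wiki page; the heap size, namespace, properties file, and data directory are illustrative assumptions.

    # Bulk load the DBpedia files with the DataLoader (all paths illustrative;
    # pair it with the fastload.properties sample referenced above).
    java -server -Xmx8g -cp bigdata-bundled.jar \
        com.bigdata.rdf.store.DataLoader -namespace kb \
        fastload.properties /data/dbpedia/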
From: Brad B. <be...@sy...> - 2015-09-25 17:59:12
Blazegraphers,

We’re very pleased to announce that Blazegraph 1.5.3 is now available for download: https://www.blazegraph.com/download/

1.5.3 is a minor release of Blazegraph with some important bug fixes. If you're upgrading from a release prior to 1.5.2, be sure to check out the updated documentation on SPARQL from Dr. Michael Schmidt's excellent blog posts:

https://wiki.blazegraph.com/wiki/index.php/SPARQL_Order_Matters
https://wiki.blazegraph.com/wiki/index.php/SPARQL_Bottom_Up_Semantics

We have some exciting things planned for our 1.6 release coming later this year, including GPU acceleration, deployment on Maven Central, and continued improvements in query optimization and performance, so stay tuned and keep up to date with our blog and mailing list.

Do you have a great success story with Blazegraph? Want to find out more? We’d love to hear from you.

Regards, --Brad

https://blog.blazegraph.com/?p=970

--
_______________
Brad Bebee
CEO, Managing Partner
SYSTAP, LLC
From: Stefan B. <ste...@wu...> - 2015-09-18 12:23:47
Hi Michael,

thanks very much! Looking forward to trying my queries next week again.

Cheers,
Stefan

On 2015-09-17 at 23:37, Michael Schmidt wrote:

> Stefan,
>
> the issue has been resolved, see https://jira.blazegraph.com/browse/BLZG-1493. We decided to include it into the upcoming release, which is targeted for next week.
>
> Best,
> Michael
>
> On 17 Sep 2015, at 11:08, Michael Schmidt <ms...@me...> wrote:
>
> Dear Stefan,
>
> we plan to discuss the issue in an internal meeting today.
>
> Actually, we're pretty close to a new release, but maybe there's a chance that a potential fix could still be included. We will keep you informed.
>
> Best,
> Michael
>
> On 17.09.2015 at 11:03, Stefan Bischof <ste...@wu...> wrote:
>
> Hi Bryan,
>
> thanks! Is there some kind of developer/beta version I could use once you have a fix?
>
> Cheers,
> Stefan
>
> On 15.09.2015 at 23:27, Bryan Thompson wrote:
>
> Stefan,
>
> Thanks for reporting this issue. We were able to replicate the problem and have found some additional cases where there are issues:
>
> - https://jira.blazegraph.com/browse/BLZG-1493 (same NPE)
> - https://jira.blazegraph.com/browse/BLZG-1495 (wrong answer)
>
> We will look into these. The 1.5.3 release is being locked down right now, so the fix will not be in 1.5.3. However, we can reach out once we do have a fix and see if you can validate it on your setup. Feel free to subscribe to the tickets for updates.
>
> Thanks,
> Bryan
>
> ----
> Bryan Thompson
> Chief Scientist & Founder
> SYSTAP, LLC
> [...]
>
> On Mon, Sep 14, 2015 at 4:26 AM, Stefan Bischof <ste...@wu...> wrote:
>
> Hi all!
>
> Only last week I started using Blazegraph and was impressed by the
> performance and support for property path queries. Currently we are
> evaluating different SPARQL engines and especially exploiting property
> path queries for a kind of backward-chaining reasoning (see [1] for more
> details). The queries contain very long and complicated property paths.
>
> When evaluating these queries (example query and full stack trace at the
> end of the message) I get a NullPointerException. The data I loaded with
> the DataLoader is just the LUBM ontology and just University0_0.owl from
> LUBM. The query (generated by a query rewriter [2]) returns all
> instances of lubm:Student under OWL QL semantics and thus encodes all
> necessary reasoning in the path expressions.
>
> Can you see what the actual problem with this query is?
> Is the query just too big, too many joins?
> What can I do to fix this?
>
> Thank you very much!
> Stefan Bischof
>
> [1] S Bischof, M Krötzsch, A Polleres, S Rudolph: Schema-agnostic query
> rewriting in SPARQL 1.1. ISWC 2014
> [2] http://citydata.wu.ac.at/SPR/
>
> Query:
>
> PREFIX dc: <http://purl.org/dc/elements/1.1/>
> PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
> PREFIX foaf: <http://xmlns.com/foaf/0.1/>
> PREFIX owl: <http://www.w3.org/2002/07/owl#>
> PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
> PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
> PREFIX lubm: <http://swat.cse.lehigh.edu/onto/univ-bench.owl#>
>
> SELECT *
> WHERE
> { { ?_v0
> (((((rdfs:subClassOf|owl:equivalentClass)|^owl:equivalentClass)|((owl:intersectionOf/(rdf:rest)*)/rdf:first))|((owl:onProperty/((((rdfs:subPropertyOf|owl:equivalentProperty)|^owl:equivalentProperty)|(((owl:inverseOf|^owl:inverseOf)/(((rdfs:subPropertyOf|owl:equivalentProperty)|^owl:equivalentProperty))*)/(owl:inverseOf|^owl:inverseOf))))*)/(^owl:onProperty|rdfs:domain)))|((((owl:onProperty/((((rdfs:subPropertyOf|owl:equivalentProperty)|^owl:equivalentProperty)|(((owl:inverseOf|^owl:inverseOf)/(((rdfs:subPropertyOf|owl:equivalentProperty)|^owl:equivalentProperty))*)/(owl:inverseOf|^owl:inverseOf))))*)/(owl:inverseOf|^owl:inverseOf))/(((rdfs:subPropertyOf|owl:equivalentProperty)|^owl:equivalentProperty))*)/rdfs:range))*
> lubm:Student .
> { { ?p rdf:type ?_v0 }
> UNION
> { ?_v1
> ((((rdfs:subPropertyOf|owl:equivalentProperty)|^owl:equivalentProperty)|(((owl:inverseOf|^owl:inverseOf)/(((rdfs:subPropertyOf|owl:equivalentProperty)|^owl:equivalentProperty))*)/(owl:inverseOf|^owl:inverseOf))))*/(^owl:onProperty|rdfs:domain)
> ?_v0 .
> ?p ?_v1 _:b0
> }
> }
> UNION
> { ?_v1
> ((((rdfs:subPropertyOf|owl:equivalentProperty)|^owl:equivalentProperty)|(((owl:inverseOf|^owl:inverseOf)/(((rdfs:subPropertyOf|owl:equivalentProperty)|^owl:equivalentProperty))*)/(owl:inverseOf|^owl:inverseOf))))*/rdfs:range
> ?_v0 .
> _:b1 ?_v1 ?p
> }
> }
> }
>
> Full Stacktrace:
>
> java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: org.openrdf.query.QueryEvaluationException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=12f11bc0-b22a-48c8-89f7-a3603bae639a,bopId=58,partitionId=-1,sinkId=75,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: cause=java.lang.NullPointerException, state=JVMHashJoinUtility{open=false,joinType=Normal,joinVars=[],outputDistinctJVs=true,size=20,considered(left=26,right=312,joins=312)}
>     at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>     at java.util.concurrent.FutureTask.get(FutureTask.java:188)
>     at com.bigdata.rdf.sail.webapp.BigdataServlet.submitApiTask(BigdataServlet.java:281)
>     at com.bigdata.rdf.sail.webapp.QueryServlet.doSparqlQuery(QueryServlet.java:632)
>     at com.bigdata.rdf.sail.webapp.QueryServlet.doPost(QueryServlet.java:259)
>     at com.bigdata.rdf.sail.webapp.RESTServlet.doPost(RESTServlet.java:248)
>     at com.bigdata.rdf.sail.webapp.MultiTenancyServlet.doPost(MultiTenancyServlet.java:138)
>     at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
>     at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>     at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:769)
>     at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>     at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>     at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>     at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>     at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1125)
>     at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>     at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>     at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1059)
>     at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>     at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>     at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>     at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>     at org.eclipse.jetty.server.Server.handle(Server.java:497)
>     at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
>     at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:248)
>     at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>     at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:610)
>     at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:539)
>     at java.lang.Thread.run(Thread.java:745)
> Caused by: java.util.concurrent.ExecutionException: org.openrdf.query.QueryEvaluationException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=12f11bc0-b22a-48c8-89f7-a3603bae639a,bopId=58,partitionId=-1,sinkId=75,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: cause=java.lang.NullPointerException, state=JVMHashJoinUtility{open=false,joinType=Normal,joinVars=[],outputDistinctJVs=true,size=20,considered(left=26,right=312,joins=312)}
>     at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>     at java.util.concurrent.FutureTask.get(FutureTask.java:188)
>     at com.bigdata.rdf.sail.webapp.QueryServlet$SparqlQueryTask.call(QueryServlet.java:830)
>     at com.bigdata.rdf.sail.webapp.QueryServlet$SparqlQueryTask.call(QueryServlet.java:649)
>     at com.bigdata.rdf.task.ApiTaskForIndexManager.call(ApiTaskForIndexManager.java:68)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     ... 1 more
> Caused by: org.openrdf.query.QueryEvaluationException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=12f11bc0-b22a-48c8-89f7-a3603bae639a,bopId=58,partitionId=-1,sinkId=75,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: cause=java.lang.NullPointerException, state=JVMHashJoinUtility{open=false,joinType=Normal,joinVars=[],outputDistinctJVs=true,size=20,considered(left=26,right=312,joins=312)}
>     at com.bigdata.rdf.sail.Bigdata2Sesame2BindingSetIterator.hasNext(Bigdata2Sesame2BindingSetIterator.java:188)
>     at info.aduna.iteration.IterationWrapper.hasNext(IterationWrapper.java:68)
>     at org.openrdf.query.QueryResults.report(QueryResults.java:155)
>     at org.openrdf.repository.sail.SailTupleQuery.evaluate(SailTupleQuery.java:76)
>     at com.bigdata.rdf.sail.webapp.BigdataRDFContext$TupleQueryTask.doQuery(BigdataRDFContext.java:1705)
>     at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.innerCall(BigdataRDFContext.java:1562)
>     at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:1527)
>     at com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:699)
>     ... 4 more
> Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=12f11bc0-b22a-48c8-89f7-a3603bae639a,bopId=58,partitionId=-1,sinkId=75,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: cause=java.lang.NullPointerException, state=JVMHashJoinUtility{open=false,joinType=Normal,joinVars=[],outputDistinctJVs=true,size=20,considered(left=26,right=312,joins=312)}
>     at com.bigdata.relation.accesspath.BlockingBuffer$BlockingIterator.checkFuture(BlockingBuffer.java:1523)
>     at com.bigdata.relation.accesspath.BlockingBuffer$BlockingIterator._hasNext(BlockingBuffer.java:1710)
>     at com.bigdata.relation.accesspath.BlockingBuffer$BlockingIterator.hasNext(BlockingBuffer.java:1563)
>     at com.bigdata.striterator.AbstractChunkedResolverator._hasNext(AbstractChunkedResolverator.java:365)
>     at com.bigdata.striterator.AbstractChunkedResolverator.hasNext(AbstractChunkedResolverator.java:341)
>     at com.bigdata.rdf.sail.Bigdata2Sesame2BindingSetIterator.hasNext(Bigdata2Sesame2BindingSetIterator.java:134)
>     ... 11 more
> Caused by: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=12f11bc0-b22a-48c8-89f7-a3603bae639a,bopId=58,partitionId=-1,sinkId=75,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: cause=java.lang.NullPointerException, state=JVMHashJoinUtility{open=false,joinType=Normal,joinVars=[],outputDistinctJVs=true,size=20,considered(left=26,right=312,joins=312)}
>     at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>     at java.util.concurrent.FutureTask.get(FutureTask.java:188)
>     at com.bigdata.relation.accesspath.BlockingBuffer$BlockingIterator.checkFuture(BlockingBuffer.java:1454)
>     ... 16 more
> Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=12f11bc0-b22a-48c8-89f7-a3603bae639a,bopId=58,partitionId=-1,sinkId=75,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: cause=java.lang.NullPointerException, state=JVMHashJoinUtility{open=false,joinType=Normal,joinVars=[],outputDistinctJVs=true,size=20,considered(left=26,right=312,joins=312)}
>     at com.bigdata.rdf.sail.RunningQueryCloseableIterator.checkFuture(RunningQueryCloseableIterator.java:59)
>     at com.bigdata.rdf.sail.RunningQueryCloseableIterator.close(RunningQueryCloseableIterator.java:73)
>     at com.bigdata.rdf.sail.RunningQueryCloseableIterator.hasNext(RunningQueryCloseableIterator.java:82)
>     at com.bigdata.striterator.ChunkedWrappedIterator.hasNext(ChunkedWrappedIterator.java:197)
>     at com.bigdata.striterator.AbstractChunkedResolverator$ChunkConsumerTask.call(AbstractChunkedResolverator.java:222)
>     at com.bigdata.striterator.AbstractChunkedResolverator$ChunkConsumerTask.call(AbstractChunkedResolverator.java:197)
>     ... 4 more
> Caused by: java.util.concurrent.ExecutionException: java.lang.Exception: task=ChunkTask{query=12f11bc0-b22a-48c8-89f7-a3603bae639a,bopId=58,partitionId=-1,sinkId=75,altSinkId=null}, cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException: cause=java.lang.NullPointerException, state=JVMHashJoinUtility{open=false,joinType=Normal,joinVars=[],outputDistinctJVs=true,size=20,considered(left=26,right=312,joins=312)}
>     at com.bigdata.util.concurrent.Haltable.get(Haltable.java:273)
>     at com.bigdata.bop.engine.AbstractRunningQuery.get(AbstractRunningQuery.java:1511)
>     at com.bigdata.bop.engine.AbstractRunningQuery.get(AbstractRunningQuery.java:104)
>     at com.bigdata.rdf.sail.RunningQueryCloseableIterator.checkFuture(RunningQueryCloseableIterator.java:46)
>     ...
9 more > Caused by: java.lang.Exception: > task=ChunkTask{query=12f11bc0-b22a-48c8-89f7-a3603bae639a,bopId=58,partitionId=-1,sinkId=75,altSinkId=null}, > cause=java.util.concurrent.ExecutionException: > java.lang.RuntimeException: cause=java.lang.NullPointerException, > state=JVMHashJoinUtility{open=false,joinType=Normal,joinVars=[],outputDistinctJVs=true,size=20,considered(left=26,right=312,joins=312)} > at > com.bigdata.bop.engine.ChunkedRunningQuery$ChunkTask.call(ChunkedRunningQuery.java:1337) > at > com.bigdata.bop.engine.ChunkedRunningQuery$ChunkTaskWrapper.run(ChunkedRunningQuery.java:896) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > at > com.bigdata.concurrent.FutureTaskMon.run(FutureTaskMon.java:63) > at > com.bigdata.bop.engine.ChunkedRunningQuery$ChunkFutureTask.run(ChunkedRunningQuery.java:791) > ... 3 more > Caused by: java.util.concurrent.ExecutionException: > java.lang.RuntimeException: cause=java.lang.NullPointerException, > state=JVMHashJoinUtility{open=false,joinType=Normal,joinVars=[],outputDistinctJVs=true,size=20,considered(left=26,right=312,joins=312)} > at java.util.concurrent.FutureTask.report(FutureTask.java:122) > at java.util.concurrent.FutureTask.get(FutureTask.java:188) > at > com.bigdata.bop.engine.ChunkedRunningQuery$ChunkTask.call(ChunkedRunningQuery.java:1317) > ... 8 more > Caused by: java.lang.RuntimeException: > cause=java.lang.NullPointerException, > state=JVMHashJoinUtility{open=false,joinType=Normal,joinVars=[],outputDistinctJVs=true,size=20,considered(left=26,right=312,joins=312)} > at > com.bigdata.bop.join.JVMHashJoinUtility.launderThrowable(JVMHashJoinUtility.java:1406) > at > com.bigdata.bop.join.JVMHashJoinUtility.acceptSolutions(JVMHashJoinUtility.java:431) > at > com.bigdata.bop.join.HashIndexOp$ChunkTask.acceptSolutions(HashIndexOp.java:433) > at > com.bigdata.bop.join.HashIndexOp$ChunkTask.call(HashIndexOp.java:338) > at > com.bigdata.bop.join.HashIndexOp$ChunkTask.call(HashIndexOp.java:237) > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > at > com.bigdata.bop.engine.ChunkedRunningQuery$ChunkTask.call(ChunkedRunningQuery.java:1316) > ... 8 more > Caused by: java.lang.NullPointerException > at > com.bigdata.bop.join.JVMHashJoinUtility.acceptSolutions(JVMHashJoinUtility.java:410) > ... 13 more > > ------------------------------------------------------------------------------ > _______________________________________________ > Bigdata-developers mailing list > Big...@li... > https://lists.sourceforge.net/lists/listinfo/bigdata-developers [15] Links: ------ [1] https://jira.blazegraph.com/browse/BLZG-1493 [2] https://jira.blazegraph.com/browse/BLZG-1495 [3] http://blazegraph.com/ [4] http://bigdata.com/ [5] http://mapgraph.io/ [6] http://www.blazegraph.com/ [7] http://citydata.wu.ac.at/SPR/ [8] http://purl.org/dc/elements/1.1/ [9] http://www.w3.org/2000/01/rdf-schema# [10] http://xmlns.com/foaf/0.1/ [11] http://www.w3.org/2002/07/owl# [12] http://www.w3.org/2001/XMLSchema# [13] http://www.w3.org/1999/02/22-rdf-syntax-ns# [14] http://swat.cse.lehigh.edu/onto/univ-bench.owl# [15] https://lists.sourceforge.net/lists/listinfo/bigdata-developers |
From: Michael S. <ms...@me...> - 2015-09-17 21:37:53
|
Stefan, the issue has been resolved, see https://jira.blazegraph.com/browse/BLZG-1493. We decided to include the fix in the upcoming release, which is targeted for next week. Best, Michael > On 17 Sep 2015, at 11:08, Michael Schmidt <ms...@me...> wrote: > [...] |
From: Michael S. <ms...@me...> - 2015-09-17 09:09:06
|
Dear Stefan, we plan to discuss the issue in an internal meeting today. Actually, we're pretty close to a new release, but maybe there's a chance that a potential fix could still be included. We will keep you informed. Best, Michael > On 17.09.2015 at 11:03, Stefan Bischof <ste...@wu...> wrote: > [...] |
From: Stefan B. <ste...@wu...> - 2015-09-17 09:03:15
|
Hi Bryan, thanks! Is there some kind of developer/beta version I could use once you have a fix? Cheers, Stefan On 15.09.2015 at 23:27, Bryan Thompson wrote: > Stefan, > > Thanks for reporting this issue. We were able to replicate the problem and have found some additional cases where there are issues: > > - https://jira.blazegraph.com/browse/BLZG-1493 (same NPE) > - https://jira.blazegraph.com/browse/BLZG-1495 (wrong answer) > > We will look into these. The 1.5.3 release is being locked down right now, so the fix will not be in 1.5.3. However, we can reach out once we do have a fix and see if you can validate it on your setup. Feel free to subscribe to the tickets for updates. > > Thanks, > Bryan > > [...] |
From: Jeremy J C. <jj...@sy...> - 2015-09-15 22:37:57
|
Ah, I believe I have seen this issue with somewhat shorter but still complex property paths, but in a non-repeatable manner. Jeremy > On Sep 15, 2015, at 2:27 PM, Bryan Thompson <br...@sy...> wrote: > > Stefan, > > Thanks for reporting this issue. We were able to replicate the problem and have found some additional cases where there are issues: > > - https://jira.blazegraph.com/browse/BLZG-1493 (same NPE) > - https://jira.blazegraph.com/browse/BLZG-1495 (wrong answer) > > We will look into these. The 1.5.3 release is being locked down right now, so the fix will not be in 1.5.3. However, we can reach out once we do have a fix and see if you can validate it on your setup. Feel free to subscribe to the tickets for updates. > > Thanks, > Bryan > > [snip: signature and Stefan's original report trimmed; Bryan's reply, including Stefan's query and full stack trace, appears below in this archive] ------------------------------------------------------------------------------ _______________________________________________ Bigdata-developers mailing list Big...@li... https://lists.sourceforge.net/lists/listinfo/bigdata-developers |
From: Bryan T. <br...@sy...> - 2015-09-15 21:28:07
|
Stefan, Thanks for reporting this issue. We were able to replicate the problem and have found some additional cases where there are issues: - https://jira.blazegraph.com/browse/BLZG-1493 (same NPE) - https://jira.blazegraph.com/browse/BLZG-1495 (wrong answer) We will look into these. The 1.5.3 release is being locked down right now, so the fix will not be in 1.5.3. However, we can reach out once we do have a fix and see if you can validate it on your setup. Feel free to subscribe to the tickets for updates. Thanks, Bryan ---- Bryan Thompson Chief Scientist & Founder SYSTAP, LLC 4501 Tower Road Greensboro, NC 27410 br...@sy... http://blazegraph.com http://blog.bigdata.com http://mapgraph.io Blazegraph™ <http://www.blazegraph.com/> is our ultra high-performance graph database that supports both RDF/SPARQL and Tinkerpop/Blueprints APIs. Blazegraph is now available with GPU acceleration using our disruptive technology to accelerate data-parallel graph analytics and graph query. CONFIDENTIALITY NOTICE: This email and its contents and attachments are for the sole use of the intended recipient(s) and are confidential or proprietary to SYSTAP. Any unauthorized review, use, disclosure, dissemination or copying of this email or its contents or attachments is prohibited. If you have received this communication in error, please notify the sender by reply email and permanently delete all copies of the email and its contents and attachments. On Mon, Sep 14, 2015 at 4:26 AM, Stefan Bischof <ste...@wu...> wrote: > Hi all! > > Only last week I started using Blazegraph and was impressed by the > performance and support for property path queries. Currently we are > evaluating different SPARQL engines and especially exploiting property > path queries for a kind of backward-chaining reasoning (see [1] for more > details). The queries contain very long and complicated property paths. > > When evaluating these queries (example query and full stack trace at the > end of the message) I get a NullPointerException. The data I loaded with > the DataLoader is just the LUBM ontology and University0_0.owl from > LUBM. The query (generated by a query rewriter [2]) returns all > instances of lubm:Student under OWL QL semantics and thus encodes all > necessary reasoning in the path expressions. > > Can you see what the actual problem with this query is? > Is the query just too big, too many joins? > What can I do to fix this? > > Thank you very much! > Stefan Bischof > > [1] S Bischof, M Krötzsch, A Polleres, S Rudolph: Schema-agnostic query > rewriting in SPARQL 1.1. 
ISWC 2014 > [2] http://citydata.wu.ac.at/SPR/ > > Query: > > PREFIX dc: <http://purl.org/dc/elements/1.1/> > PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> > PREFIX foaf: <http://xmlns.com/foaf/0.1/> > PREFIX owl: <http://www.w3.org/2002/07/owl#> > PREFIX xsd: <http://www.w3.org/2001/XMLSchema#> > PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> > PREFIX lubm: <http://swat.cse.lehigh.edu/onto/univ-bench.owl#> > > SELECT * > WHERE > { { ?_v0 > > (((((rdfs:subClassOf|owl:equivalentClass)|^owl:equivalentClass)|((owl:intersectionOf/(rdf:rest)*)/rdf:first))|((owl:onProperty/((((rdfs:subPropertyOf|owl:equivalentProperty)|^owl:equivalentProperty)|(((owl:inverseOf|^owl:inverseOf)/(((rdfs:subPropertyOf|owl:equivalentProperty)|^owl:equivalentProperty))*)/(owl:inverseOf|^owl:inverseOf))))*)/(^owl:onProperty|rdfs:domain)))|((((owl:onProperty/((((rdfs:subPropertyOf|owl:equivalentProperty)|^owl:equivalentProperty)|(((owl:inverseOf|^owl:inverseOf)/(((rdfs:subPropertyOf|owl:equivalentProperty)|^owl:equivalentProperty))*)/(owl:inverseOf|^owl:inverseOf))))*)/(owl:inverseOf|^owl:inverseOf))/(((rdfs:subPropertyOf|owl:equivalentProperty)|^owl:equivalentProperty))*)/rdfs:range))* > lubm:Student . > { { ?p rdf:type ?_v0} > UNION > { ?_v1 > > ((((rdfs:subPropertyOf|owl:equivalentProperty)|^owl:equivalentProperty)|(((owl:inverseOf|^owl:inverseOf)/(((rdfs:subPropertyOf|owl:equivalentProperty)|^owl:equivalentProperty))*)/(owl:inverseOf|^owl:inverseOf))))*/(^owl:onProperty|rdfs:domain) > ?_v0 . > ?p ?_v1 _:b0 > } > } > UNION > { ?_v1 > > ((((rdfs:subPropertyOf|owl:equivalentProperty)|^owl:equivalentProperty)|(((owl:inverseOf|^owl:inverseOf)/(((rdfs:subPropertyOf|owl:equivalentProperty)|^owl:equivalentProperty))*)/(owl:inverseOf|^owl:inverseOf))))*/rdfs:range > ?_v0 . 
> _:b1 ?_v1 ?p > } > } > } > > > > > Full Stacktrace: > > java.util.concurrent.ExecutionException: > java.util.concurrent.ExecutionException: > org.openrdf.query.QueryEvaluationException: java.lang.RuntimeException: > java.util.concurrent.ExecutionException: java.lang.RuntimeException: > java.util.concurrent.ExecutionException: java.lang.Exception: > > task=ChunkTask{query=12f11bc0-b22a-48c8-89f7-a3603bae639a,bopId=58,partitionId=-1,sinkId=75,altSinkId=null}, > cause=java.util.concurrent.ExecutionException: > java.lang.RuntimeException: cause=java.lang.NullPointerException, > > state=JVMHashJoinUtility{open=false,joinType=Normal,joinVars=[],outputDistinctJVs=true,size=20,considered(left=26,right=312,joins=312)} > at java.util.concurrent.FutureTask.report(FutureTask.java:122) > at java.util.concurrent.FutureTask.get(FutureTask.java:188) > at > > com.bigdata.rdf.sail.webapp.BigdataServlet.submitApiTask(BigdataServlet.java:281) > at > > com.bigdata.rdf.sail.webapp.QueryServlet.doSparqlQuery(QueryServlet.java:632) > at > com.bigdata.rdf.sail.webapp.QueryServlet.doPost(QueryServlet.java:259) > at > com.bigdata.rdf.sail.webapp.RESTServlet.doPost(RESTServlet.java:248) > at > > com.bigdata.rdf.sail.webapp.MultiTenancyServlet.doPost(MultiTenancyServlet.java:138) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:707) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) > at > org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:769) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585) > at > > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577) > at > > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223) > at > > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1125) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515) > at > > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1059) > at > > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at > > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215) > at > > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110) > at > > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) > at org.eclipse.jetty.server.Server.handle(Server.java:497) > at > org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311) > at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:248) > at > org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) > at > > org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:610) > at > > org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:539) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.util.concurrent.ExecutionException: > org.openrdf.query.QueryEvaluationException: java.lang.RuntimeException: > java.util.concurrent.ExecutionException: java.lang.RuntimeException: > java.util.concurrent.ExecutionException: java.lang.Exception: > > task=ChunkTask{query=12f11bc0-b22a-48c8-89f7-a3603bae639a,bopId=58,partitionId=-1,sinkId=75,altSinkId=null}, > cause=java.util.concurrent.ExecutionException: > java.lang.RuntimeException: 
cause=java.lang.NullPointerException, > > state=JVMHashJoinUtility{open=false,joinType=Normal,joinVars=[],outputDistinctJVs=true,size=20,considered(left=26,right=312,joins=312)} > at java.util.concurrent.FutureTask.report(FutureTask.java:122) > at java.util.concurrent.FutureTask.get(FutureTask.java:188) > at > > com.bigdata.rdf.sail.webapp.QueryServlet$SparqlQueryTask.call(QueryServlet.java:830) > at > > com.bigdata.rdf.sail.webapp.QueryServlet$SparqlQueryTask.call(QueryServlet.java:649) > at > > com.bigdata.rdf.task.ApiTaskForIndexManager.call(ApiTaskForIndexManager.java:68) > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > at > > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > ... 1 more > Caused by: org.openrdf.query.QueryEvaluationException: > java.lang.RuntimeException: java.util.concurrent.ExecutionException: > java.lang.RuntimeException: java.util.concurrent.ExecutionException: > java.lang.Exception: > > task=ChunkTask{query=12f11bc0-b22a-48c8-89f7-a3603bae639a,bopId=58,partitionId=-1,sinkId=75,altSinkId=null}, > cause=java.util.concurrent.ExecutionException: > java.lang.RuntimeException: cause=java.lang.NullPointerException, > > state=JVMHashJoinUtility{open=false,joinType=Normal,joinVars=[],outputDistinctJVs=true,size=20,considered(left=26,right=312,joins=312)} > at > > com.bigdata.rdf.sail.Bigdata2Sesame2BindingSetIterator.hasNext(Bigdata2Sesame2BindingSetIterator.java:188) > at > info.aduna.iteration.IterationWrapper.hasNext(IterationWrapper.java:68) > at org.openrdf.query.QueryResults.report(QueryResults.java:155) > at > org.openrdf.repository.sail.SailTupleQuery.evaluate(SailTupleQuery.java:76) > at > > com.bigdata.rdf.sail.webapp.BigdataRDFContext$TupleQueryTask.doQuery(BigdataRDFContext.java:1705) > at > > com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.innerCall(BigdataRDFContext.java:1562) > at > > com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:1527) > at > > com.bigdata.rdf.sail.webapp.BigdataRDFContext$AbstractQueryTask.call(BigdataRDFContext.java:699) > ... 4 more > Caused by: java.lang.RuntimeException: > java.util.concurrent.ExecutionException: java.lang.RuntimeException: > java.util.concurrent.ExecutionException: java.lang.Exception: > > task=ChunkTask{query=12f11bc0-b22a-48c8-89f7-a3603bae639a,bopId=58,partitionId=-1,sinkId=75,altSinkId=null}, > cause=java.util.concurrent.ExecutionException: > java.lang.RuntimeException: cause=java.lang.NullPointerException, > > state=JVMHashJoinUtility{open=false,joinType=Normal,joinVars=[],outputDistinctJVs=true,size=20,considered(left=26,right=312,joins=312)} > at > > com.bigdata.relation.accesspath.BlockingBuffer$BlockingIterator.checkFuture(BlockingBuffer.java:1523) > at > > com.bigdata.relation.accesspath.BlockingBuffer$BlockingIterator._hasNext(BlockingBuffer.java:1710) > at > > com.bigdata.relation.accesspath.BlockingBuffer$BlockingIterator.hasNext(BlockingBuffer.java:1563) > at > > com.bigdata.striterator.AbstractChunkedResolverator._hasNext(AbstractChunkedResolverator.java:365) > at > > com.bigdata.striterator.AbstractChunkedResolverator.hasNext(AbstractChunkedResolverator.java:341) > at > > com.bigdata.rdf.sail.Bigdata2Sesame2BindingSetIterator.hasNext(Bigdata2Sesame2BindingSetIterator.java:134) > ... 
11 more > Caused by: java.util.concurrent.ExecutionException: > java.lang.RuntimeException: java.util.concurrent.ExecutionException: > java.lang.Exception: > > task=ChunkTask{query=12f11bc0-b22a-48c8-89f7-a3603bae639a,bopId=58,partitionId=-1,sinkId=75,altSinkId=null}, > cause=java.util.concurrent.ExecutionException: > java.lang.RuntimeException: cause=java.lang.NullPointerException, > > state=JVMHashJoinUtility{open=false,joinType=Normal,joinVars=[],outputDistinctJVs=true,size=20,considered(left=26,right=312,joins=312)} > at java.util.concurrent.FutureTask.report(FutureTask.java:122) > at java.util.concurrent.FutureTask.get(FutureTask.java:188) > at > > com.bigdata.relation.accesspath.BlockingBuffer$BlockingIterator.checkFuture(BlockingBuffer.java:1454) > ... 16 more > Caused by: java.lang.RuntimeException: > java.util.concurrent.ExecutionException: java.lang.Exception: > > task=ChunkTask{query=12f11bc0-b22a-48c8-89f7-a3603bae639a,bopId=58,partitionId=-1,sinkId=75,altSinkId=null}, > cause=java.util.concurrent.ExecutionException: > java.lang.RuntimeException: cause=java.lang.NullPointerException, > > state=JVMHashJoinUtility{open=false,joinType=Normal,joinVars=[],outputDistinctJVs=true,size=20,considered(left=26,right=312,joins=312)} > at > > com.bigdata.rdf.sail.RunningQueryCloseableIterator.checkFuture(RunningQueryCloseableIterator.java:59) > at > > com.bigdata.rdf.sail.RunningQueryCloseableIterator.close(RunningQueryCloseableIterator.java:73) > at > > com.bigdata.rdf.sail.RunningQueryCloseableIterator.hasNext(RunningQueryCloseableIterator.java:82) > at > > com.bigdata.striterator.ChunkedWrappedIterator.hasNext(ChunkedWrappedIterator.java:197) > at > > com.bigdata.striterator.AbstractChunkedResolverator$ChunkConsumerTask.call(AbstractChunkedResolverator.java:222) > at > > com.bigdata.striterator.AbstractChunkedResolverator$ChunkConsumerTask.call(AbstractChunkedResolverator.java:197) > ... 4 more > Caused by: java.util.concurrent.ExecutionException: java.lang.Exception: > > task=ChunkTask{query=12f11bc0-b22a-48c8-89f7-a3603bae639a,bopId=58,partitionId=-1,sinkId=75,altSinkId=null}, > cause=java.util.concurrent.ExecutionException: > java.lang.RuntimeException: cause=java.lang.NullPointerException, > > state=JVMHashJoinUtility{open=false,joinType=Normal,joinVars=[],outputDistinctJVs=true,size=20,considered(left=26,right=312,joins=312)} > at com.bigdata.util.concurrent.Haltable.get(Haltable.java:273) > at > > com.bigdata.bop.engine.AbstractRunningQuery.get(AbstractRunningQuery.java:1511) > at > > com.bigdata.bop.engine.AbstractRunningQuery.get(AbstractRunningQuery.java:104) > at > > com.bigdata.rdf.sail.RunningQueryCloseableIterator.checkFuture(RunningQueryCloseableIterator.java:46) > ... 
9 more > Caused by: java.lang.Exception: > > task=ChunkTask{query=12f11bc0-b22a-48c8-89f7-a3603bae639a,bopId=58,partitionId=-1,sinkId=75,altSinkId=null}, > cause=java.util.concurrent.ExecutionException: > java.lang.RuntimeException: cause=java.lang.NullPointerException, > > state=JVMHashJoinUtility{open=false,joinType=Normal,joinVars=[],outputDistinctJVs=true,size=20,considered(left=26,right=312,joins=312)} > at > > com.bigdata.bop.engine.ChunkedRunningQuery$ChunkTask.call(ChunkedRunningQuery.java:1337) > at > > com.bigdata.bop.engine.ChunkedRunningQuery$ChunkTaskWrapper.run(ChunkedRunningQuery.java:896) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > at > com.bigdata.concurrent.FutureTaskMon.run(FutureTaskMon.java:63) > at > > com.bigdata.bop.engine.ChunkedRunningQuery$ChunkFutureTask.run(ChunkedRunningQuery.java:791) > ... 3 more > Caused by: java.util.concurrent.ExecutionException: > java.lang.RuntimeException: cause=java.lang.NullPointerException, > > state=JVMHashJoinUtility{open=false,joinType=Normal,joinVars=[],outputDistinctJVs=true,size=20,considered(left=26,right=312,joins=312)} > at java.util.concurrent.FutureTask.report(FutureTask.java:122) > at java.util.concurrent.FutureTask.get(FutureTask.java:188) > at > > com.bigdata.bop.engine.ChunkedRunningQuery$ChunkTask.call(ChunkedRunningQuery.java:1317) > ... 8 more > Caused by: java.lang.RuntimeException: > cause=java.lang.NullPointerException, > > state=JVMHashJoinUtility{open=false,joinType=Normal,joinVars=[],outputDistinctJVs=true,size=20,considered(left=26,right=312,joins=312)} > at > > com.bigdata.bop.join.JVMHashJoinUtility.launderThrowable(JVMHashJoinUtility.java:1406) > at > > com.bigdata.bop.join.JVMHashJoinUtility.acceptSolutions(JVMHashJoinUtility.java:431) > at > > com.bigdata.bop.join.HashIndexOp$ChunkTask.acceptSolutions(HashIndexOp.java:433) > at > com.bigdata.bop.join.HashIndexOp$ChunkTask.call(HashIndexOp.java:338) > at > com.bigdata.bop.join.HashIndexOp$ChunkTask.call(HashIndexOp.java:237) > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > at > > com.bigdata.bop.engine.ChunkedRunningQuery$ChunkTask.call(ChunkedRunningQuery.java:1316) > ... 8 more > Caused by: java.lang.NullPointerException > at > > com.bigdata.bop.join.JVMHashJoinUtility.acceptSolutions(JVMHashJoinUtility.java:410) > ... 13 more > > > > > > ------------------------------------------------------------------------------ > _______________________________________________ > Bigdata-developers mailing list > Big...@li... > https://lists.sourceforge.net/lists/listinfo/bigdata-developers > |
From: Mike P. <mi...@sy...> - 2015-09-14 14:07:19
|
>> After running a sample application which calls: FunctionRegistry.Factory factory = new MyFunctionFactory(); URI myFunctionURI = new URIImpl("https://www.ebi.ac.uk/chembl/chox#validate"); FunctionRegistry.add(myFunctionURI, factory); Are you calling that from client code? It needs to be called on the server, usually when the server is initialized (see BigdataRDFServletContextListener::contextInitialized). On Mon, Sep 14, 2015 at 1:30 AM, andreas81_81 <and...@o2...> wrote: > [snip: full message and stack trace trimmed; andreas' post is quoted in full in Bryan Thompson's reply below] ------------------------------------------------------------------------------ _______________________________________________ Bigdata-developers mailing list Big...@li... https://lists.sourceforge.net/lists/listinfo/bigdata-developers |
From: Brad B. <be...@sy...> - 2015-09-14 13:12:12
|
We've updated https://jira.blazegraph.com/browse/BLZG-1485 with a workaround and will add a new example. For now you can either try the workaround in the ticket for the NanoSparqlServer, or run your same code with Blazegraph embedded, such as with https://github.com/SYSTAP/blazegraph-samples/tree/master/sample-sesame-embedded (a minimal sketch of the embedded approach is appended after this message). Let us know how it works out. Thanks, --Brad On Mon, Sep 14, 2015 at 9:00 AM, Bryan Thompson <br...@sy...> wrote: > [snip: quoted message trimmed; Bryan's reply, including andreas' original post and full stack trace, appears below in this archive] -- _______________ Brad Bebee CEO, Managing Partner SYSTAP, LLC e: be...@sy... m: 202.642.7961 f: 571.367.5000 w: www.blazegraph.com Blazegraph™ <http://www.blazegraph.com> is our ultra high-performance graph database that supports both RDF/SPARQL and Tinkerpop/Blueprints APIs. Mapgraph™ <http://www.systap.com/mapgraph> is our disruptive new technology to use GPUs to accelerate data-parallel graph analytics. CONFIDENTIALITY NOTICE: This email and its contents and attachments are for the sole use of the intended recipient(s) and are confidential or proprietary to SYSTAP, LLC. Any unauthorized review, use, disclosure, dissemination or copying of this email or its contents or attachments is prohibited. If you have received this communication in error, please notify the sender by reply email and permanently delete all copies of the email and its contents and attachments. |
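[A minimal sketch of the embedded route Brad mentions, not from the original thread. In embedded mode the client and the query engine share one JVM, so registering the function before opening the repository is enough. MyFunctionFactory and the function URI are the hypothetical ones from andreas' post; the journal file name and the query are illustrative only, and the FunctionRegistry import path is assumed from the Blazegraph sources.]

import java.util.Properties;

import org.openrdf.model.URI;
import org.openrdf.model.impl.URIImpl;
import org.openrdf.query.QueryLanguage;
import org.openrdf.query.TupleQueryResult;
import org.openrdf.repository.RepositoryConnection;

import com.bigdata.rdf.sail.BigdataSail;
import com.bigdata.rdf.sail.BigdataSailRepository;
import com.bigdata.rdf.sparql.ast.FunctionRegistry;

public class EmbeddedCustomFunction {

    public static void main(final String[] args) throws Exception {

        // Register the custom function in this JVM *before* running any
        // queries. Embedded mode means client and server share the JVM,
        // so the registration is visible to query evaluation.
        final URI myFunctionURI =
                new URIImpl("https://www.ebi.ac.uk/chembl/chox#validate");
        FunctionRegistry.add(myFunctionURI, new MyFunctionFactory());

        // Open (or create) a journal-backed triple store. The file name
        // is illustrative.
        final Properties props = new Properties();
        props.setProperty(BigdataSail.Options.FILE, "embedded.jnl");

        final BigdataSailRepository repo =
                new BigdataSailRepository(new BigdataSail(props));
        repo.initialize();

        final RepositoryConnection cxn = repo.getConnection();
        try {
            // A query referencing the function URI should now resolve it
            // instead of failing with "unknown function".
            final TupleQueryResult result = cxn.prepareTupleQuery(
                    QueryLanguage.SPARQL,
                    "SELECT * WHERE { ?s ?p ?o . " +
                    "FILTER(<https://www.ebi.ac.uk/chembl/chox#validate>(?o)) }")
                    .evaluate();
            try {
                while (result.hasNext()) {
                    System.out.println(result.next());
                }
            } finally {
                result.close();
            }
        } finally {
            cxn.close();
            repo.shutDown();
        }
    }
}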
From: Bryan T. <br...@sy...> - 2015-09-14 13:00:32
|
Andreas, The basic issue here is that the custom function registrations are not sticky. They need to be made within the server each time the server starts. For applications that embed blazegraph, you just do this from your code when you start the server and before submitting any queries. For the server version, you do this by extending the BigdataRDFServletContextListener and specifying your version of that class (which must extend ours) in web.xml. Your version just overrides contextInitialized() (a minimal sketch is appended after this message). Igor is going to update the documentation to further clarify this as it has led to some confusion recently. Thanks, Bryan ---- Bryan Thompson Chief Scientist & Founder SYSTAP, LLC 4501 Tower Road Greensboro, NC 27410 br...@sy... http://blazegraph.com http://blog.bigdata.com http://mapgraph.io Blazegraph™ <http://www.blazegraph.com/> is our ultra high-performance graph database that supports both RDF/SPARQL and Tinkerpop/Blueprints APIs. Blazegraph is now available with GPU acceleration using our disruptive technology to accelerate data-parallel graph analytics and graph query. CONFIDENTIALITY NOTICE: This email and its contents and attachments are for the sole use of the intended recipient(s) and are confidential or proprietary to SYSTAP. Any unauthorized review, use, disclosure, dissemination or copying of this email or its contents or attachments is prohibited. If you have received this communication in error, please notify the sender by reply email and permanently delete all copies of the email and its contents and attachments. On Mon, Sep 14, 2015 at 3:30 AM, andreas81_81 <and...@o2...> wrote: > Hi all, > > I have blazegraph 1.5.2 running on my machine (Windows, bundled version) > and I've written a sample Java application (based on the information on > the Blazegraph wiki page) but it does not work. My custom function is exactly > the same as provided on the page (security filter), the function is > registered, the query seems to be correct ... but Blazegraph is unable to > find the registered function. I'm running Blazegraph with the following > command: > java -classpath > d:\dev\Blazegraph\org.lhasalimited.blazegraph\target\classes\ -server > -Xmx4g -jar bigdata-bundled.jar > > (the directory contains the class with my custom function). 
> After running a sample application which calls:
>
>     FunctionRegistry.Factory factory = new MyFunctionFactory();
>     URI myFunctionURI = new URIImpl("https://www.ebi.ac.uk/chembl/chox#validate");
>     FunctionRegistry.add(myFunctionURI, factory);
>
> the application throws:
>
> java.util.concurrent.ExecutionException:
> java.util.concurrent.ExecutionException:
> org.openrdf.query.QueryEvaluationException: java.lang.RuntimeException:
> java.util.concurrent.ExecutionException: java.lang.RuntimeException:
> java.util.concurrent.ExecutionException: java.lang.Exception:
> task=ChunkTask{query=2125bb5c-9355-4482-8b52-c3036fe60744,bopId=1,partitionId=-1,sinkId=3,altSinkId=null},
> cause=java.util.concurrent.ExecutionException: java.lang.RuntimeException:
> java.lang.UnsupportedOperationException: unknown function:
> https://www.ebi.ac.uk/chembl/chox#validate
>         at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>         at java.util.concurrent.FutureTask.get(FutureTask.java:188)
>         at com.bigdata.rdf.sail.webapp.BigdataServlet.submitApiTask(BigdataServlet.java:261)
>         at com.bigdata.rdf.sail.webapp.QueryServlet.doSparqlQuery(QueryServlet.java:532)
>         at com.bigdata.rdf.sail.webapp.QueryServlet.doPost(QueryServlet.java:189)
>         at com.bigdata.rdf.sail.webapp.RESTServlet.doPost(RESTServlet.java:237)
>         at com.bigdata.rdf.sail.webapp.MultiTenancyServlet.doPost(MultiTenancyServlet.java:137)
>         at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
>         at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>         at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:769)
>         at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>         at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>         at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:595)
>         at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>         at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1125)
>         at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>         at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>         at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1059)
>         at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>         at org.eclipse.jetty.server.Dispatcher.forward(Dispatcher.java:191)
>         at org.eclipse.jetty.server.Dispatcher.forward(Dispatcher.java:72)
>         at com.bigdata.rdf.sail.webapp.HALoadBalancerServlet.forwardToLocalService(HALoadBalancerServlet.java:938)
>         at com.bigdata.rdf.sail.webapp.HALoadBalancerServlet.service(HALoadBalancerServlet.java:816)
>         at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>         at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:769)
>         at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>         at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>         at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>         at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>         at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1125)
>         at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>         at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>         at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1059)
>         at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>         at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>         at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>         at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>         at org.eclipse.jetty.server.Server.handle(Server.java:497)
>         at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
>         at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:248)
>         at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>         at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:610)
>         at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:539)
>         at java.lang.Thread.run(Thread.java:745)
>
> Do you have any idea what I'm doing wrong?
>
> _______________________________________________
> Bigdata-developers mailing list
> Big...@li...
> https://lists.sourceforge.net/lists/listinfo/bigdata-developers
>
|
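[Editor's note] Below is a minimal sketch of the listener pattern Bryan
describes, reusing the function URI and factory class from Andreas's
example. It is an illustration under assumptions, not code from this thread:
the package and class names are hypothetical, and the FunctionRegistry
import reflects where that class lives in the 1.5.x codebase
(com.bigdata.rdf.sparql.ast); verify both against your release. Note also
that java ignores -classpath when -jar is given, so the quoted launch
command would not put the custom class on the server's classpath in any
case; the class must be packaged where the server webapp can load it.

    package com.example.blazegraph; // hypothetical package

    import javax.servlet.ServletContextEvent;

    import org.openrdf.model.URI;
    import org.openrdf.model.impl.URIImpl;

    import com.bigdata.rdf.sail.webapp.BigdataRDFServletContextListener;
    import com.bigdata.rdf.sparql.ast.FunctionRegistry;

    // Re-registers the custom function on every server start. The registry
    // is in-memory only, so registrations made in another JVM (or during a
    // previous run) are never visible to the running server.
    public class MyContextListener extends BigdataRDFServletContextListener {

        @Override
        public void contextInitialized(final ServletContextEvent e) {
            // Register before the superclass finishes startup so the
            // function is visible to the very first query.
            final URI fnURI = new URIImpl("https://www.ebi.ac.uk/chembl/chox#validate");
            FunctionRegistry.add(fnURI, new MyFunctionFactory()); // your FunctionRegistry.Factory
            super.contextInitialized(e);
        }
    }

The subclass is then named in web.xml in place of the stock listener:

    <listener>
      <listener-class>com.example.blazegraph.MyContextListener</listener-class>
    </listener>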
From: Bryan T. <br...@sy...> - 2015-09-14 11:06:34
|
I am not sure what level you want to work at here. Internally, we process a
default graph access path in a variety of ways depending on the cardinality
of the named graphs in the default graph versus the set of all named graphs.
Except for the cases where it reduces to a single graph or no graphs, we
always wind up imposing a filter on the iterator to strip off the context
position of the statements and then impose a DISTINCT SPO filter on top of
that. This is in AST2BOpJoins, but that is a pretty internal API and is
likely to change.

But no; there is no more efficient way. You would write

    SELECT * { ?a ?b ?c }

or

    CONSTRUCT ... { ?a ?b ?c }

to obtain the default graph for a quad store (the RDF merge of all graphs).
If you are using the GRAPH keyword then you are using named graph access
patterns (the name of the named graph is extracted and the RDF merge is not
applied), not default graph access patterns (the name of the named graph is
not available and an RDF merge is applied).

Bryan

----
Bryan Thompson
Chief Scientist & Founder
SYSTAP, LLC
4501 Tower Road
Greensboro, NC 27410
br...@sy...
http://blazegraph.com
http://blog.bigdata.com <http://bigdata.com>
http://mapgraph.io

Blazegraph™ <http://www.blazegraph.com/> is our ultra high-performance graph
database that supports both RDF/SPARQL and Tinkerpop/Blueprints APIs.
Blazegraph is now available with GPU acceleration using our disruptive
technology to accelerate data-parallel graph analytics and graph query.

CONFIDENTIALITY NOTICE: This email and its contents and attachments are for
the sole use of the intended recipient(s) and are confidential or
proprietary to SYSTAP. Any unauthorized review, use, disclosure,
dissemination or copying of this email or its contents or attachments is
prohibited. If you have received this communication in error, please notify
the sender by reply email and permanently delete all copies of the email
and its contents and attachments.

On Mon, Sep 14, 2015 at 6:11 AM, Jean-Marc Vanel <jea...@gm...> wrote:

> Fine, and thanks;
>
> this answer is for a SPARQL query, but what about the embedded API?
> Can one access this union graph by API? I guess one accesses this union
> graph simply by running this SPARQL query via the API:
>
>     CONSTRUCT {
>       ?S ?P ?O
>     } WHERE {
>       GRAPH ?G {
>         ?S ?P ?O
>       }
>     }
>
> or is there a more efficient way?
>
> FYI, my use case is inferring input forms from ontologies and data in a
> number of named graphs:
> https://github.com/jmvanel/semantic_forms/blob/master/scala/forms_play/README.md
>
> Thanks to Banana-RDF, I should be able to support Jena TDB and Blazegraph
> with the same code.
>
> 2015-09-14 11:56 GMT+02:00 Bryan Thompson <br...@sy...>:
>
>> Any default graph query in Blazegraph has these semantics unless you use
>> FROM to restrict the set of considered graphs.
>>
>> ----
>> Bryan Thompson
>> Chief Scientist & Founder
>> SYSTAP, LLC
>> 4501 Tower Road
>> Greensboro, NC 27410
>> br...@sy...
>> http://blazegraph.com
>> http://blog.bigdata.com <http://bigdata.com>
>> http://mapgraph.io
>>
>> On Mon, Sep 14, 2015 at 5:54 AM, Jean-Marc Vanel <jea...@gm...> wrote:
>>
>>> In Jena TDB there is the special graph urn:x-arq:UnionGraph
>>> https://jena.apache.org/documentation/tdb/datasets.html#special-graph-names
>>> that is the RDF merge of all the named graphs in the SPARQL dataset.
>>>
>>> It is a convenient feature, although not standard. Is there an
>>> equivalent feature in Blazegraph?
>>>
>>> --
>>> Jean-Marc Vanel
>>> Déductions SARL - Consulting, services, training
>>> Rule-based programming, Semantic Web
>>> http://deductions-software.com/
>>> +33 (0)6 89 16 29 52
>>> Twitter: @jmvanel, @jmvanel_fr; chat: irc://irc.freenode.net#eulergui
>>>
>>> _______________________________________________
>>> Bigdata-developers mailing list
>>> Big...@li...
>>> https://lists.sourceforge.net/lists/listinfo/bigdata-developers
>
> --
> Jean-Marc Vanel
> Déductions SARL - Consulting, services, training
> Rule-based programming, Semantic Web
> http://deductions-software.com/
> +33 (0)6 89 16 29 52
> Twitter: @jmvanel, @jmvanel_fr; chat: irc://irc.freenode.net#eulergui
|
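[Editor's note] For the API side of Jean-Marc's question, here is a minimal
sketch of a default-graph query through the embedded openrdf/Sesame API. It
illustrates Bryan's point that a plain triple pattern with no GRAPH keyword
is already evaluated against the RDF merge of all named graphs, so no
special union-graph URI is needed. The journal path is made up, and the two
property strings are assumptions to check against your own
blazegraph.properties.

    import java.util.Properties;

    import org.openrdf.query.BindingSet;
    import org.openrdf.query.QueryLanguage;
    import org.openrdf.query.TupleQueryResult;
    import org.openrdf.repository.RepositoryConnection;

    import com.bigdata.rdf.sail.BigdataSail;
    import com.bigdata.rdf.sail.BigdataSailRepository;

    public class UnionGraphQuery {
        public static void main(final String[] args) throws Exception {
            final Properties props = new Properties();
            // Assumed quads-mode configuration; adjust to your deployment.
            props.setProperty("com.bigdata.journal.AbstractJournal.file", "/tmp/example.jnl");
            props.setProperty("com.bigdata.rdf.store.AbstractTripleStore.quads", "true");

            final BigdataSailRepository repo =
                    new BigdataSailRepository(new BigdataSail(props));
            repo.initialize();
            final RepositoryConnection cxn = repo.getConnection();
            try {
                // No GRAPH keyword and no FROM clause: evaluated against the
                // default graph, i.e. the RDF merge of all named graphs.
                final TupleQueryResult result = cxn.prepareTupleQuery(
                        QueryLanguage.SPARQL, "SELECT * WHERE { ?s ?p ?o }").evaluate();
                try {
                    while (result.hasNext()) {
                        final BindingSet bs = result.next();
                        System.out.println(bs.getValue("s") + " "
                                + bs.getValue("p") + " " + bs.getValue("o"));
                    }
                } finally {
                    result.close();
                }
            } finally {
                cxn.close();
                repo.shutDown();
            }
        }
    }

Conversely, per Bryan's earlier reply in the thread, a FROM clause restricts
the merge to the listed graphs, e.g. (hypothetical graph IRI):

    SELECT * FROM <http://example.org/g1> WHERE { ?s ?p ?o }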