From: Alex M. <ale...@gm...> - 2015-11-08 18:49:09
Hi,

Using the REST API, how do I export the same data file that I uploaded? I'm unclear, with the Blazegraph REST API, on the method to associate a named graph on upload and then export that same named graph.

With the following:

curl -X POST -H 'Content-Type: application/xml' --data-binary @data.rdf http://62.59.40.122:9999/bigdata/sparql?c=http://abc.com/id/graph/xyz

curl -X POST http://62.59.40.122:9999/bigdata/sparql --data-urlencode 'query=named-graph-uri http://abc.com/id/graph/xyz' -H 'Accept: application/rdf+xml' | gzip > data.rdf.gz

I get data exported, but not the same large file that I inserted.

Regards
Alex
www.tilogeo.com
From: Bryan T. <br...@sy...> - 2015-11-09 13:47:03
Alex,

I believe that you should be using the parameters defined at [1] for SPARQL UPDATE. Notably, replace ?c=... with using-named-graph-uri, which specifies zero or more named graphs for the update request (a protocol option with the same semantics as USING NAMED).

This is per the SPARQL UPDATE specification.

Thanks,
Bryan

[1] https://wiki.blazegraph.com/wiki/index.php/REST_API#UPDATE_.28SPARQL_1.1_UPDATE.29

----
Bryan Thompson
Chief Scientist & Founder
SYSTAP, LLC
4501 Tower Road
Greensboro, NC 27410
br...@sy...
http://blazegraph.com
http://blog.blazegraph.com
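For a concrete round trip with curl, here is a minimal sketch using the endpoint and graph URI from Alex's messages. It assumes the context-uri request parameter described in the REST API wiki's "INSERT RDF (POST with Body)" section for the load, and it uses a plain SPARQL CONSTRUCT over the named graph for the export, so nothing below depends on the UPDATE parameters mentioned above; check the parameter name against your Blazegraph version.

# Load data.rdf into the named graph (context-uri per the REST API "INSERT RDF" section; the value is percent-encoded)
curl -X POST -H 'Content-Type: application/rdf+xml' --data-binary @data.rdf \
  'http://62.59.40.122:9999/bigdata/sparql?context-uri=http%3A%2F%2Fabc.com%2Fid%2Fgraph%2Fxyz'

# Export the same graph; --data-urlencode percent-encodes the query, Accept selects RDF/XML
curl -X POST http://62.59.40.122:9999/bigdata/sparql \
  -H 'Accept: application/rdf+xml' \
  --data-urlencode 'query=CONSTRUCT { ?s ?p ?o } WHERE { GRAPH <http://abc.com/id/graph/xyz> { ?s ?p ?o } }' \
  | gzip > data.rdf.gz

Note that the exported file will generally not be byte-identical to the uploaded one: serialization order, prefixes and blank-node labels differ, but it should contain the same triples.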
From: Alex M. <ale...@gm...> - 2015-11-09 18:11:22
Hi Bryan,

I've tried that and a number of methods. On export, though, I get data that I guess is a description of the service.

Can Blazegraph provide some specific examples that show how to accomplish this using curl? The task is to load an RDF/XML file and then export the same file using a named graph.

I'm evaluating the system for a large client and have completed this task for other systems, but I'm not clear on how to do this with the given documentation.

[exec] <rdf:RDF
[exec]   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
[exec]
[exec] <rdf:Description rdf:nodeID="service">
[exec]   <rdf:type rdf:resource="http://www.w3.org/ns/sparql-service-description#Service"/>
[exec]   <endpoint xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://52.89.40.122:9999/bigdata/sparql"/>
[exec]   <endpoint xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://52.89.40.122:9999/bigdata/LBS/sparql"/>
[exec]   <supportedLanguage xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/sparql-service-description#SPARQL10Query"/>
[exec]   <supportedLanguage xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/sparql-service-description#SPARQL11Query"/>
[exec]   <supportedLanguage xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/sparql-service-description#SPARQL11Update"/>
[exec]   <feature xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/sparql-service-description#BasicFederatedQuery"/>
[exec]   <feature xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/sparql-service-description#UnionDefaultGraph"/>
[exec]   <feature xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.bigdata.com/rdf#/features/KB/Mode/Quads"/>
[exec]   <inputFormat xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/formats/RDF_XML"/>
[exec]   <inputFormat xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/formats/N-Triples"/>
[exec]   <inputFormat xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/formats/Turtle"/>
[exec]   <inputFormat xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/formats/N3"/>
[exec]   <inputFormat xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.wiwiss.fu-berlin.de/suhl/bizer/TriG/Spec/"/>
[exec]   <inputFormat xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://sw.deri.org/2008/07/n-quads/#n-quads"/>
[exec]   <inputFormat xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_XML"/>
[exec]   <inputFormat xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_JSON"/>
[exec]   <inputFormat xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_CSV"/>
[exec]   <inputFormat xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_TSV"/>
[exec]   <resultFormat xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/formats/RDF_XML"/>
[exec]   <resultFormat xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/formats/N-Triples"/>
[exec]   <resultFormat xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/formats/Turtle"/>
[exec]   <resultFormat xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/formats/N3"/>
[exec]   <resultFormat xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.wiwiss.fu-berlin.de/suhl/bizer/TriG/Spec/"/>
[exec]   <resultFormat xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_XML"/>
[exec]   <resultFormat xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_JSON"/>
[exec]   <resultFormat xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_CSV"/>
[exec]   <resultFormat xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_TSV"/>
[exec]   <entailmentRegime xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/entailment/Simple"/>
[exec]   <defaultDataset xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:nodeID="defaultDataset"/>
[exec] </rdf:Description>

Regards
Alex
www.tilogeo.com
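The output above is the endpoint's SPARQL service description, which the endpoint returns when no query parameter reaches it, so it is worth checking that the request really carries a query. A quick way to confirm that the load reached the intended graph, before worrying about the export, is a count over that graph. A sketch, reusing the endpoint and graph URI from the earlier messages (this curl is illustrative, not from the thread):

# How many statements landed in the named graph?
curl -X POST http://62.59.40.122:9999/bigdata/sparql \
  -H 'Accept: application/sparql-results+json' \
  --data-urlencode 'query=SELECT (COUNT(*) AS ?n) WHERE { GRAPH <http://abc.com/id/graph/xyz> { ?s ?p ?o } }'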
From: Martynas J. <mar...@gr...> - 2015-11-09 18:58:49
Are your query parameters percent-encoded?

https://en.wikipedia.org/wiki/Percent-encoding
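curl can do the percent-encoding itself. A sketch, using the endpoint and graph URI from the thread; named-graph-uri here is the standard SPARQL Protocol dataset parameter, not a Blazegraph-specific option. With -G, each --data-urlencode pair is percent-encoded and appended to the request URI, so only the component values are encoded, never the whole URI.

# GET form: curl percent-encodes each value and appends ?query=...&named-graph-uri=... to the URI
curl -G http://62.59.40.122:9999/bigdata/sparql \
  -H 'Accept: application/rdf+xml' \
  --data-urlencode 'query=CONSTRUCT { ?s ?p ?o } WHERE { GRAPH ?g { ?s ?p ?o } }' \
  --data-urlencode 'named-graph-uri=http://abc.com/id/graph/xyz'

Assuming the store honors the protocol dataset parameters, this should return just the contents of that named graph.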
From: Joakim S. <joa...@bl...> - 2015-11-09 20:09:06
Hi,

Has anyone tried to export a named graph using ExportKB? After digging on the web I came up with this:

String namespace = ...; // the sub-graph that I want to export

AbstractTripleStore tripleStore = (AbstractTripleStore) bd.getQueryEngine()
        .getIndexManager().getResourceLocator().locate(namespace, ITx.UNISOLATED);

ExportKB export = new ExportKB(tripleStore, outFile, RDFFormat.NTRIPLES, false);

But I can't get it to work.
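If the goal is simply to get the contents of a namespace out over HTTP rather than through the internal ExportKB API, one alternative is a CONSTRUCT against that namespace's own SPARQL endpoint. This is only a sketch: the namespace name "mykb" and the localhost endpoint are hypothetical, and the multi-tenancy URL pattern should be checked against the REST API documentation for your version.

# Dump everything in the (hypothetical) "mykb" namespace as N-Triples
curl -X POST http://localhost:9999/bigdata/namespace/mykb/sparql \
  -H 'Accept: text/plain' \
  --data-urlencode 'query=CONSTRUCT { ?s ?p ?o } WHERE { ?s ?p ?o }' \
  > export.nt
# 'Accept: text/plain' is the classic N-Triples media type; use application/n-triples if your version prefers it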
From: Alex M. <ale...@gm...> - 2015-11-09 21:31:56
I get the same result.

Regards
Alex
www.tilogeo.com
From: Alex M. <ale...@gm...> - 2015-11-09 21:34:33
Hi Martynas,

Sorry, I sent that last one by accident.

I get the same result with the following, encoding the URL:

URLENCODE=$(cat $1?named-graph-uri=$2 | xxd -plain | tr -d '\n' | sed 's/\(..\)/%\1/g')
curl -H "Accept: application/rdf+xml" $URLENCODED -o $3/$4.rdf

Regards
Alex
www.tilogeo.com
From: Alex M. <ale...@gm...> - 2015-11-09 21:38:02
Oops... actually, sorry, that was not true. I had a bug in that. :)

Regards
Alex
www.tilogeo.com
From: Martynas J. <mar...@gr...> - 2015-11-09 22:04:56
Not sure what the command line does, better if you send the full request URI.

Not that you only have to encode the components, such as querystring params/values, not the whole URI.
From: Martynas J. <mar...@gr...> - 2015-11-09 21:44:53
|
*Note that..

On Mon, Nov 9, 2015 at 10:39 PM, Martynas Jusevičius <mar...@gr...> wrote:
> Not sure what the command line does, better if you send the full request URI.
>
> Not that you only have to encode the components, such as querystring params/values, not the whole URI.
>
> On Mon, Nov 9, 2015 at 10:37 PM, Alex Muir <ale...@gm...> wrote:
>> oops.. actually sorry that was not true... had a bug in that.. :)
>>
>> On Mon, Nov 9, 2015 at 9:34 PM, Alex Muir <ale...@gm...> wrote:
>>> Hi martynas,
>>>
>>> Sorry sent that last one by accident..
>>>
>>> I get the same result with the following, encoding the url.
>>>
>>> URLENCODE=$(cat $1?named-graph-uri=$2 | xxd -plain | tr -d '\n' | sed 's/\(..\)/%\1/g')
>>> curl -H "Accept: application/rdf+xml" $URLENCODED -o $3/$4.rdf
>>>
>>> On Mon, Nov 9, 2015 at 9:31 PM, Alex Muir <ale...@gm...> wrote:
>>>> I get the same result
>>>>
>>>> On Mon, Nov 9, 2015 at 6:28 PM, Martynas Jusevičius <mar...@gr...> wrote:
>>>>> Are your query parameters percent-encoded?
>>>>> https://en.wikipedia.org/wiki/Percent-encoding |
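A side note on Martynas's point about encoding: curl can percent-encode the individual query-string values itself, so a hand-rolled xxd/sed pipeline like the one quoted above is not needed. A minimal sketch in shell, using the endpoint and graph URI from earlier in the thread; the -G flag and the out.rdf filename are illustrative here, not taken from the thread:

    # -G turns the --data-urlencode pairs into ?name=value query parameters,
    # percent-encoding only the values (the "components" Martynas refers to).
    curl -G 'http://62.59.40.122:9999/bigdata/sparql' \
         -H 'Accept: application/rdf+xml' \
         --data-urlencode 'named-graph-uri=http://abc.com/id/graph/xyz' \
         -o out.rdf
    # The value goes over the wire as:
    #   named-graph-uri=http%3A%2F%2Fabc.com%2Fid%2Fgraph%2Fxyz
    # Note: with no query= parameter this request still returns the service
    # description, which is the symptom reported above; the missing piece is
    # the query itself, as the later replies work out.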
From: Alex M. <ale...@gm...> - 2015-11-09 22:39:36
|
I've tried various means of encoding the components, but I always get the same service-description results, or notifications about encoding and expected data. I can't seem to get the right combination.

Hope someone out there can give an example of curl that imports RDF and exports the same RDF using a context (named graph)...

Is there any difference between these? They seem to be different names for the same concept in the REST API. Is that correct?

named-graph-uri=
c= (the Context, aka Named Graph)
context-uri=

Thanks

Regards
Alex
www.tilogeo.com |
From: Brad B. <be...@sy...> - 2015-11-10 04:04:01
|
Alex,

What should be working is something like:

curl -X POST -H 'Content-Type:application/xml' --data-binary @data.rdf 'http://62.59.40.122:9999/bigdata/sparql?context-uri=http://abc.com/id/graph/xyz'

curl -X POST http://62.59.40.122:9999/bigdata/sparql --data-urlencode 'query=construct where {?s ?p ?o}' -H 'Accept: application/rdf+xml' --data-urlencode 'named-graph-uri=http://abc.com/id/graph/xyz'

Let us know if that works on your end.

Thanks, --Brad

--
Brad Bebee
CEO, Managing Partner
SYSTAP, LLC
www.blazegraph.com |
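Putting Brad's two commands together as a round trip: a minimal sketch in shell, assuming the endpoint, graph URI, and data.rdf file used throughout the thread. The GRAPH and ENDPOINT variable names are only for readability, and the gzip step is the one from the opening message:

    GRAPH='http://abc.com/id/graph/xyz'
    ENDPOINT='http://62.59.40.122:9999/bigdata/sparql'

    # 1. Load data.rdf into the named graph; context-uri= names the graph
    #    that the inserted statements are written to (the graph URI is
    #    passed literally here, exactly as in Brad's command).
    curl -X POST -H 'Content-Type:application/xml' \
         --data-binary @data.rdf \
         "$ENDPOINT?context-uri=$GRAPH"

    # 2. Export that graph again: a CONSTRUCT over the graph selected with
    #    named-graph-uri, returned as RDF/XML and compressed as in the
    #    original post.
    curl -X POST "$ENDPOINT" \
         -H 'Accept: application/rdf+xml' \
         --data-urlencode 'query=construct where {?s ?p ?o}' \
         --data-urlencode "named-graph-uri=$GRAPH" \
         | gzip > data.rdf.gz

The load uses context-uri= while the export uses the SPARQL-protocol named-graph-uri=, which seems to be the distinction behind the question above about the three parameter names.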
From: Alex M. <ale...@gm...> - 2015-11-10 13:28:29
|
Thanks Brad, that worked out great.

Regards
Alex
www.tilogeo.com |
From: Brad B. <be...@sy...> - 2015-11-10 14:16:25
|
Alex, Good news. Let us know if you hit any other items. Thanks, Brad _______________ Brad Bebee CEO, Managing Partner SYSTAP, LLC e: be...@sy... m: 202.642.7961 f: 571.367.5000 w: www.systap.com Blazegraph™ is our ultra high-performance graph database that supports both RDF/SPARQL and Tinkerpop/Blueprints APIs. MapGraph™ is our disruptive new technology to use GPUs to accelerate data-parallel graph analytics. CONFIDENTIALITY NOTICE: This email and its contents and attachments are for the sole use of the intended recipient(s) and are confidential or proprietary to SYSTAP, LLC. Any unauthorized review, use, disclosure, dissemination or copying of this email or its contents or attachments is prohibited. If you have received this communication in error, please notify the sender by reply email and permanently delete all copies of the email and its contents and attachments. On Nov 10, 2015 8:28 AM, "Alex Muir" <ale...@gm...> wrote: > Thanks Brad,, > > That did work out great.. Thanks > > > > Regards > Alex > www.tilogeo.com > > On Tue, Nov 10, 2015 at 4:03 AM, Brad Bebee <be...@sy...> wrote: > >> Alex, >> >> What should be working is something like: >> >> curl -X POST -H 'Content-Type:application/xml' --data-binary @data.rdf >> http://62.59.40.122:9999/bigdata/sparql?context-uri=http://abc.com/id/graph/xyz <http://62.59.40.122:9999/bigdata/sparql?c=http://abc.com/id/graph/xyz> >> >> curl -X POST http://62.59.40.122:9999/bigdata/sparql >> --data-urlencode 'query=construct where {?s ?p ?o}' -H 'Accept: application/rdf+xml' --data-urlencode 'named-graph-uri=http://abc.com/id/graph/xyz <http://62.59.40.122:9999/bigdata/sparql?c=http://abc.com/id/graph/xyz>' >> >> >> Let us know how if that works on your end. >> >> Thanks, --Brad >> >> >> >> >> >> On Mon, Nov 9, 2015 at 5:39 PM, Alex Muir <ale...@gm...> wrote: >> >>> I've tried various means of encoding the components but I always get the >>> same service description results or notifications about encoding and >>> expected data. Can't seem to get the right combination. >>> >>> Hope someone out there can give an example of curl that imports rdf and >>> exports the same ref using context... >>> >>> Is there any difference between these? It seems different names for the >>> same concept in the rest api. Is that correct? >>> >>> named-graph-uri= >>> The Context (aka Named Graph) c= >>> context-uri= >>> >>> Thanks >>> >>> >>> >>> >>> >>> Regards >>> Alex >>> www.tilogeo.com >>> >>> On Mon, Nov 9, 2015 at 9:44 PM, Martynas Jusevičius < >>> mar...@gr...> wrote: >>> >>>> *Note that.. >>>> >>>> On Mon, Nov 9, 2015 at 10:39 PM, Martynas Jusevičius < >>>> mar...@gr...> wrote: >>>> >>>>> Not sure what the command line does, better if you send the full >>>>> request URI. >>>>> >>>>> Not that you only have to encode the components, such as querystring >>>>> params/values, not the whole URI. >>>>> >>>>> On Mon, Nov 9, 2015 at 10:37 PM, Alex Muir <ale...@gm...> >>>>> wrote: >>>>> >>>>>> oops.. actually sorry that was not true... had a bug in that.. :) >>>>>> >>>>>> >>>>>> Regards >>>>>> Alex >>>>>> www.tilogeo.com >>>>>> >>>>>> On Mon, Nov 9, 2015 at 9:34 PM, Alex Muir <ale...@gm...> >>>>>> wrote: >>>>>> >>>>>>> Hi martynas, >>>>>>> >>>>>>> Sorry sent that last one by accident.. >>>>>>> >>>>>>> I get the same result with the following, encoding the url. 
>>>>>>> >>>>>>> URLENCODE=$(cat $1?named-graph-uri=$2 | xxd -plain | tr -d '\n' | >>>>>>> sed 's/\(..\)/%\1/g') >>>>>>> curl -H "Accept: application/rdf+xml" $URLENCODED -o $3/$4.rdf >>>>>>> >>>>>>> Regards >>>>>>> Alex >>>>>>> www.tilogeo.com >>>>>>> >>>>>>> On Mon, Nov 9, 2015 at 9:31 PM, Alex Muir <ale...@gm...> >>>>>>> wrote: >>>>>>>> I get the same result >>>>>>>> >>>>>>>> Regards >>>>>>>> Alex >>>>>>>> www.tilogeo.com >>>>>>>> >>>>>>>> On Mon, Nov 9, 2015 at 6:28 PM, Martynas Jusevičius < >>>>>>>> mar...@gr...> wrote: >>>>>>>>> Are your query parameters percent-encoded? >>>>>>>>> https://en.wikipedia.org/wiki/Percent-encoding |
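The round trip the thread is after can be done with two plain curl calls: name the target graph when the file is inserted, then pull the graph back out with a CONSTRUCT query. The sketch below is only illustrative — localhost, the graph URI and the file names are placeholders, and the context-uri insert parameter should be checked against the REST API wiki for your Blazegraph version (percent-encode the graph URI if it contains characters such as & or #).

    # load data.rdf into a named graph (quads mode)
    curl -X POST -H 'Content-Type: application/rdf+xml' --data-binary @data.rdf \
        'http://localhost:9999/bigdata/sparql?context-uri=http://abc.com/id/graph/xyz'

    # export that graph again as RDF/XML
    curl -X POST 'http://localhost:9999/bigdata/sparql' \
        -H 'Accept: application/rdf+xml' \
        --data-urlencode 'query=CONSTRUCT { ?s ?p ?o } WHERE { GRAPH <http://abc.com/id/graph/xyz> { ?s ?p ?o } }' \
        | gzip > data.rdf.gz

Even when this works, the exported file is not byte-for-byte the uploaded one: duplicate triples are dropped, blank nodes are relabelled and the serialization order differs, so the two files should be compared triple by triple rather than by size.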
From: Brad B. <be...@sy...> - 2015-11-10 16:53:29
|
Joakim, It looks like this is a bug in the ExportKB [1]. As a work-around, you can export the KBs via the REST API. [1] https://jira.blazegraph.com/browse/BLZG-1603 Thanks, --Brad On Mon, Nov 9, 2015 at 3:08 PM, Joakim Soderberg < joa...@bl...> wrote: > Hi > Has anyone tried to export a named graph using ExportKB? > > After digging on the web I came up with this: > > String namespace = the sub graph that I want to export > > tripleStore = (AbstractTripleStore) bd.getQueryEngine().getIndexManager().getResourceLocator().locate( > namespace, ITx.UNISOLATED); > > export = new ExportKB( tripleStore, outFile , RDFFormat.NTRIPLES, false); > > But I can’t get it to work -- _______________ Brad Bebee CEO, Managing Partner SYSTAP, LLC e: be...@sy... m: 202.642.7961 f: 571.367.5000 w: www.blazegraph.com |
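The REST-API workaround can stay entirely in curl; a rough sketch, assuming the default multi-tenancy URLs and a namespace named kb (both assumptions, not taken from the thread):

    # list the namespaces (KBs) the server knows about
    curl -H 'Accept: application/rdf+xml' http://localhost:9999/bigdata/namespace

    # dump one namespace as N-Triples through its own SPARQL endpoint
    curl -H 'Accept: text/plain' \
        --data-urlencode 'query=CONSTRUCT { ?s ?p ?o } WHERE { ?s ?p ?o }' \
        http://localhost:9999/bigdata/namespace/kb/sparql > kb.nt

In a quads-mode namespace the WHERE clause may need GRAPH ?g { ?s ?p ?o } so that every named graph is included in the dump.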
From: Joakim S. <joa...@bl...> - 2015-11-10 16:58:06
|
Thanks Brad, I am operating in embedded mode, so I will use SPARQLResultsJSONWriter() until it’s fixed. /J > On Nov 10, 2015, at 8:53 AM, Brad Bebee <be...@sy...> wrote: > > Joakim, > > It looks like this is a bug in the ExportKB [1]. As a work-around, you can export the KBs via the REST API. > > [1] https://jira.blazegraph.com/browse/BLZG-1603 > > Thanks, --Brad |
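In embedded mode that plan looks roughly like the sketch below. It is written against the Sesame 2.x / bigdata class names; the journal path, graph URI and output file are invented placeholders.

    import java.io.FileOutputStream;
    import java.util.Properties;
    import org.openrdf.query.QueryLanguage;
    import org.openrdf.query.TupleQuery;
    import org.openrdf.query.resultio.sparqljson.SPARQLResultsJSONWriter;
    import org.openrdf.repository.RepositoryConnection;
    import com.bigdata.rdf.sail.BigdataSail;
    import com.bigdata.rdf.sail.BigdataSailRepository;

    public class DumpNamedGraph {
        public static void main(String[] args) throws Exception {
            final Properties props = new Properties();
            props.setProperty(BigdataSail.Options.FILE, "blazegraph.jnl"); // placeholder journal

            final BigdataSailRepository repo = new BigdataSailRepository(new BigdataSail(props));
            repo.initialize();
            final RepositoryConnection cxn = repo.getConnection();
            try (FileOutputStream out = new FileOutputStream("graph.srj")) {
                final TupleQuery q = cxn.prepareTupleQuery(QueryLanguage.SPARQL,
                        "SELECT ?s ?p ?o WHERE { GRAPH <http://example.org/graph/xyz> { ?s ?p ?o } }");
                // stream the bindings to disk as SPARQL results JSON
                q.evaluate(new SPARQLResultsJSONWriter(out));
            } finally {
                cxn.close();
                repo.shutDown();
            }
        }
    }

If actual triples are wanted rather than result bindings, the same shape works with prepareGraphQuery, a CONSTRUCT query and one of the RDFWriter implementations.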
From: Brad B. <be...@sy...> - 2015-11-10 17:08:41
|
Great -- thanks. I've commented on the ticket with the workaround. Thanks, --Brad On Tue, Nov 10, 2015 at 11:57 AM, Joakim Soderberg < joa...@bl...> wrote: > Thanks Brad, > I am operating in embedded mode, so I will use SPARQLResultsJSONWriter() until > it’s fixed. > > /J -- _______________ Brad Bebee CEO, Managing Partner SYSTAP, LLC e: be...@sy... m: 202.642.7961 f: 571.367.5000 w: www.blazegraph.com |
From: Joakim S. <joa...@bl...> - 2015-11-13 19:26:21
|
Hi, What did most likely go wrong if I get the following error: exception javax.servlet.ServletException: Servlet.init() for servlet com.blippar.servlet.SparqlServlet threw exception org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:502) org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79) org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:617) org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:518) org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1091) org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:668) org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1527) org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1484) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) java.lang.Thread.run(Thread.java:745) root cause java.lang.RuntimeException: FileLock Overlap com.bigdata.journal.FileMetadata.reopenChannel(FileMetadata.java:1245) com.bigdata.journal.FileMetadata.access$000(FileMetadata.java:58) com.bigdata.journal.FileMetadata$1.reopenChannel(FileMetadata.java:1163) com.bigdata.journal.FileMetadata$1.reopenChannel(FileMetadata.java:1153) com.bigdata.journal.FileMetadata.<init>(FileMetadata.java:946) com.bigdata.journal.FileMetadata.createInstance(FileMetadata.java:1470) com.bigdata.journal.AbstractJournal.<init>(AbstractJournal.java:1156) com.bigdata.journal.Journal.<init>(Journal.java:275) com.bigdata.journal.Journal.<init>(Journal.java:268) com.bigdata.rdf.sail.BigdataSail.createLTS(BigdataSail.java:710) com.bigdata.rdf.sail.BigdataSail.<init>(BigdataSail.java:689) com.blippar.servlet.DataManager.initialize(DataManager.java:74) com.blippar.servlet.SparqlServlet.init(SparqlServlet.java:42) javax.servlet.GenericServlet.init(GenericServlet.java:158) org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:502) org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79) org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:617) org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:518) org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1091) org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:668) org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1527) org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1484) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) java.lang.Thread.run(Thread.java:745) root cause java.nio.channels.OverlappingFileLockException sun.nio.ch.SharedFileLockTable.checkList(FileLockTable.java:255) sun.nio.ch.SharedFileLockTable.add(FileLockTable.java:152) sun.nio.ch.FileChannelImpl.tryLock(FileChannelImpl.java:1075) com.bigdata.journal.FileMetadata.reopenChannel(FileMetadata.java:1210) com.bigdata.journal.FileMetadata.access$000(FileMetadata.java:58) com.bigdata.journal.FileMetadata$1.reopenChannel(FileMetadata.java:1163) 
com.bigdata.journal.FileMetadata$1.reopenChannel(FileMetadata.java:1153) com.bigdata.journal.FileMetadata.<init>(FileMetadata.java:946) com.bigdata.journal.FileMetadata.createInstance(FileMetadata.java:1470) com.bigdata.journal.AbstractJournal.<init>(AbstractJournal.java:1156) com.bigdata.journal.Journal.<init>(Journal.java:275) com.bigdata.journal.Journal.<init>(Journal.java:268) com.bigdata.rdf.sail.BigdataSail.createLTS(BigdataSail.java:710) com.bigdata.rdf.sail.BigdataSail.<init>(BigdataSail.java:689) com.blippar.servlet.DataManager.initialize(DataManager.java:74) com.blippar.servlet.SparqlServlet.init(SparqlServlet.java:42) javax.servlet.GenericServlet.init(GenericServlet.java:158) org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:502) org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79) org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:617) org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:518) org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1091) org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:668) org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1527) org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1484) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) java.lang.Thread.run(Thread.java:745) |
From: Bryan T. <br...@sy...> - 2015-11-13 19:41:19
|
What OS? Are you trying to open the journal in two separate processes (this is the most common cause). ---- Bryan Thompson Chief Scientist & Founder SYSTAP, LLC 4501 Tower Road Greensboro, NC 27410 br...@sy... http://blazegraph.com http://blog.blazegraph.com On Fri, Nov 13, 2015 at 2:26 PM, Joakim Soderberg < joa...@bl...> wrote: > Hi, > What did most likely go wrong if I get the following error: > > java.lang.RuntimeException: FileLock Overlap > java.nio.channels.OverlappingFileLockException |
From: Joakim S. <joa...@bl...> - 2015-11-13 22:10:11
|
The OS is CentOS Linux release 7.1.1503 (Core). I am batch loading triples and it is possible that there are several processes running. > On Nov 13, 2015, at 11:41 AM, Bryan Thompson <br...@sy...> wrote: > > What OS? Are you trying to open the journal in two separate processes (this is the most common cause). |
From: Bryan T. <br...@sy...> - 2015-11-13 22:17:24
|
Only a single process may have the Journal open at a time. A file lock overlap exception is expected if multiple processes are attempting to open the journal. The journal is thread safe, but it must be opened by only a single process; thread safety requires an awareness of both readers and writers with concurrent access to the journal. Thanks, Bryan ---- Bryan Thompson Chief Scientist & Founder SYSTAP, LLC 4501 Tower Road Greensboro, NC 27410 br...@sy... http://blazegraph.com http://blog.blazegraph.com On Fri, Nov 13, 2015 at 5:10 PM, Joakim Soderberg < joa...@bl...> wrote: > The OS is CentOS Linux release 7.1.1503 (Core) > I am batch loading triples and it is possible that there are several > processes running > > On Nov 13, 2015, at 11:41 AM, Bryan Thompson <br...@sy...> wrote: > > What OS? Are you trying to open the journal in two separate processes > (this is the most common cause).
|
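Inside one servlet container this usually means opening the journal once per JVM and handing the same repository to every servlet and loader thread, instead of letting each component construct its own BigdataSail. A minimal sketch of that pattern — the class name and journal path are invented for illustration:

    import java.util.Properties;
    import com.bigdata.rdf.sail.BigdataSail;
    import com.bigdata.rdf.sail.BigdataSailRepository;

    public final class RepositoryHolder {
        private static BigdataSailRepository repo;

        // lazily open (and lock) the journal exactly once per JVM
        public static synchronized BigdataSailRepository get() throws Exception {
            if (repo == null) {
                final Properties props = new Properties();
                props.setProperty(BigdataSail.Options.FILE, "/var/data/blazegraph.jnl"); // placeholder
                repo = new BigdataSailRepository(new BigdataSail(props));
                repo.initialize();
            }
            return repo;
        }
    }

A batch loader that runs as a separate OS process still cannot open the same .jnl file while Tomcat holds it; it has to talk to the running server instead, for example through the REST API.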
From: Joakim S. <joa...@bl...> - 2015-11-14 01:13:18
|
Is this warning something I should take seriously and act upon? WARN : AbstractBTree.java:3716: wrote: name=kb.spo.OCSP, 10 records (#nodes=1, #leaves=9) in 5283ms : addrRoot=-207112492768283427 |
From: Bryan T. <br...@sy...> - 2015-11-14 02:03:16
|
Just indicates a full GC pause during IO. ---- Bryan Thompson Chief Scientist & Founder SYSTAP, LLC 4501 Tower Road Greensboro, NC 27410 br...@sy... http://blazegraph.com http://blog.blazegraph.com On Fri, Nov 13, 2015 at 8:13 PM, Joakim Soderberg < joa...@bl...> wrote: > Is this warning something I should take seriously and act upon? > > WARN : AbstractBTree.java:3716: wrote: name=kb.spo.OCSP, 10 records > (#nodes=1, #leaves=9) in 5283ms : addrRoot=-207112492768283427 |
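One way to confirm that reading, assuming a Java 7/8 JVM: enable GC logging and check whether the slow B+Tree write times line up with full collections (the log path is a placeholder).

    JAVA_OPTS="$JAVA_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/blazegraph/gc.log"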