This list is closed; nobody may subscribe to it.
From: Alex M. <ale...@gm...> - 2015-11-09 21:38:02

Oops... actually, sorry, that was not true; I had a bug in that. :)

Regards
Alex
www.tilogeo.com
From: Alex M. <ale...@gm...> - 2015-11-09 21:34:33

Hi Martynas,

Sorry, sent that last one by accident.

I get the same result with the following, encoding the URL:

    URLENCODE=$(cat $1?named-graph-uri=$2 | xxd -plain | tr -d '\n' | sed 's/\(..\)/%\1/g')
    curl -H "Accept: application/rdf+xml" $URLENCODED -o $3/$4.rdf

Regards
Alex
www.tilogeo.com
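The script above never percent-encodes the graph URI itself: cat tries to open "$1?named-graph-uri=$2" as a local file, the hex-encoded file contents land in URLENCODE, and the curl line then reads a different, unset variable ($URLENCODED). A minimal sketch of encoding just the graph URI argument, reusing the positional parameters from the script (endpoint in $1, graph URI in $2, output path in $3/$4); note that the request still needs a query parameter before the endpoint will return graph data, as the rest of the thread discusses:

    # Hex-encode only the graph URI ($2) and splice it in as a percent-encoded parameter value.
    GRAPH_ENC=$(printf '%s' "$2" | xxd -plain | tr -d '\n' | sed 's/\(..\)/%\1/g')
    curl -H 'Accept: application/rdf+xml' "$1?named-graph-uri=$GRAPH_ENC" -o "$3/$4.rdf"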
From: Alex M. <ale...@gm...> - 2015-11-09 21:31:56

I get the same result.

Regards
Alex
www.tilogeo.com

On Mon, Nov 9, 2015 at 6:28 PM, Martynas Jusevičius <mar...@gr...> wrote:

> Are your query parameters percent-encoded?
> https://en.wikipedia.org/wiki/Percent-encoding
From: Joakim S. <joa...@bl...> - 2015-11-09 20:09:06

Hi,

Has anyone tried to export a named graph using ExportKB? After digging on the web I came up with this:

    String namespace = the sub graph that I want to export
    tripleStore = (AbstractTripleStore) bd.getQueryEngine().getIndexManager()
            .getResourceLocator().locate(namespace, ITx.UNISOLATED);
    export = new ExportKB(tripleStore, outFile, RDFFormat.NTRIPLES, false);

But I can't get it to work.
From: Martynas J. <mar...@gr...> - 2015-11-09 18:58:49

Are your query parameters percent-encoded?
https://en.wikipedia.org/wiki/Percent-encoding
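To make the point concrete: the graph URI contains ':' and '/' characters, so embedding it raw in a query string is easy to misparse. A small sketch using the c parameter from the original post and a hypothetical local endpoint; only the encoding of the parameter value changes:

    # Raw value (ambiguous inside a query string):
    #   ?c=http://abc.com/id/graph/xyz
    # Percent-encoded value:
    #   ?c=http%3A%2F%2Fabc.com%2Fid%2Fgraph%2Fxyz
    curl -X POST -H 'Content-Type: application/rdf+xml' --data-binary @data.rdf \
         'http://localhost:9999/bigdata/sparql?c=http%3A%2F%2Fabc.com%2Fid%2Fgraph%2Fxyz'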
From: Alex M. <ale...@gm...> - 2015-11-09 18:11:22

Hi Bryan,

I've tried that and a number of methods. On export, though, I get data that I guess is a description of the service.

Can Blazegraph provide some specific examples showing how to accomplish this using curl? The task is to load an RDF/XML file and then export the same file using a named graph.

I'm evaluating the system for a large client and have completed this task for other systems, but I'm not clear on how to do this with the given documentation.

    [exec] <rdf:RDF
    [exec]     xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
    [exec]
    [exec] <rdf:Description rdf:nodeID="service">
    [exec]   <rdf:type rdf:resource="http://www.w3.org/ns/sparql-service-description#Service"/>
    [exec]   <endpoint xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://52.89.40.122:9999/bigdata/sparql"/>
    [exec]   <endpoint xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://52.89.40.122:9999/bigdata/LBS/sparql"/>
    [exec]   <supportedLanguage xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/sparql-service-description#SPARQL10Query"/>
    [exec]   <supportedLanguage xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/sparql-service-description#SPARQL11Query"/>
    [exec]   <supportedLanguage xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/sparql-service-description#SPARQL11Update"/>
    [exec]   <feature xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/sparql-service-description#BasicFederatedQuery"/>
    [exec]   <feature xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/sparql-service-description#UnionDefaultGraph"/>
    [exec]   <feature xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.bigdata.com/rdf#/features/KB/Mode/Quads"/>
    [exec]   <inputFormat xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/formats/RDF_XML"/>
    [exec]   <inputFormat xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/formats/N-Triples"/>
    [exec]   <inputFormat xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/formats/Turtle"/>
    [exec]   <inputFormat xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/formats/N3"/>
    [exec]   <inputFormat xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.wiwiss.fu-berlin.de/suhl/bizer/TriG/Spec/"/>
    [exec]   <inputFormat xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://sw.deri.org/2008/07/n-quads/#n-quads"/>
    [exec]   <inputFormat xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_XML"/>
    [exec]   <inputFormat xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_JSON"/>
    [exec]   <inputFormat xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_CSV"/>
    [exec]   <inputFormat xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_TSV"/>
    [exec]   <resultFormat xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/formats/RDF_XML"/>
    [exec]   <resultFormat xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/formats/N-Triples"/>
    [exec]   <resultFormat xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/formats/Turtle"/>
    [exec]   <resultFormat xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/formats/N3"/>
    [exec]   <resultFormat xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.wiwiss.fu-berlin.de/suhl/bizer/TriG/Spec/"/>
    [exec]   <resultFormat xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_XML"/>
    [exec]   <resultFormat xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_JSON"/>
    [exec]   <resultFormat xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_CSV"/>
    [exec]   <resultFormat xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/formats/SPARQL_Results_TSV"/>
    [exec]   <entailmentRegime xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:resource="http://www.w3.org/ns/entailment/Simple"/>
    [exec]   <defaultDataset xmlns="http://www.w3.org/ns/sparql-service-description#" rdf:nodeID="defaultDataset"/>
    [exec] </rdf:Description>

Regards
Alex
www.tilogeo.com
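The XML above is the endpoint's SPARQL service description, which a SPARQL endpoint typically returns when a request reaches it without a recognizable query parameter; that is consistent with the query never being sent in a form the server accepts. A quick check, assuming a hypothetical local endpoint (a working load/export pair is sketched after the original Nov 8 post further down):

    # With no query parameter at all, the endpoint simply describes itself.
    curl -H 'Accept: application/rdf+xml' http://localhost:9999/bigdata/sparql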
From: Bryan T. <br...@sy...> - 2015-11-09 13:47:03

Alex,

I believe that you should be using the parameters defined at [1] for SPARQL UPDATE. Notably, replace ?c=... with using-named-graph-uri, which specifies zero or more named graphs for the update request (a protocol option with the same semantics as USING NAMED).

This is per the SPARQL UPDATE specification.

Thanks,
Bryan

[1] https://wiki.blazegraph.com/wiki/index.php/REST_API#UPDATE_.28SPARQL_1.1_UPDATE.29

----
Bryan Thompson
Chief Scientist & Founder
SYSTAP, LLC
br...@sy...
http://blazegraph.com
http://blog.blazegraph.com
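For read requests, the SPARQL 1.1 protocol has the analogous query parameters default-graph-uri and named-graph-uri, which travel alongside query= rather than inside the query text. A sketch against a hypothetical local endpoint, assuming it honors the standard protocol dataset parameters; curl does the percent-encoding:

    # Make the named graph the query's default graph via the protocol parameter;
    # a plain triple pattern then returns that graph's contents.
    curl -G -H 'Accept: application/rdf+xml' \
         --data-urlencode 'query=CONSTRUCT { ?s ?p ?o } WHERE { ?s ?p ?o }' \
         --data-urlencode 'default-graph-uri=http://abc.com/id/graph/xyz' \
         http://localhost:9999/bigdata/sparql -o export.rdf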
From: Alex M. <ale...@gm...> - 2015-11-08 18:49:09

Hi,

Using the REST API, how do I export the same data file that I uploaded?

I'm unclear, from the Blazegraph REST API documentation, on the method to associate a named graph on upload and then export that same named graph.

With the following:

    curl -X POST -H 'Content-Type:application/xml' --data-binary @data.rdf \
         http://62.59.40.122:9999/bigdata/sparql?c=http://abc.com/id/graph/xyz

    curl -X POST http://62.59.40.122:9999/bigdata/sparql \
         --data-urlencode 'query=named-graph-uri http://abc.com/id/graph/xyz' \
         -H 'Accept: application/rdf+xml" | gzip > data.rdf.gz

I get data exported, but not the same large file that I inserted.

Regards
Alex
www.tilogeo.com
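Pulling the thread's answers together, a minimal sketch of the load/export round trip; the endpoint and graph URI are placeholders, application/rdf+xml is the usual MIME type for RDF/XML, and the exact upload parameter name should be checked against the Blazegraph REST API wiki page cited above:

    ENDPOINT='http://localhost:9999/bigdata/sparql'    # hypothetical endpoint
    GRAPH='http://abc.com/id/graph/xyz'

    # Load data.rdf into the named graph (the c parameter from the post above, percent-encoded).
    curl -X POST -H 'Content-Type: application/rdf+xml' --data-binary @data.rdf \
         "$ENDPOINT?c=http%3A%2F%2Fabc.com%2Fid%2Fgraph%2Fxyz"

    # Export the same graph with a CONSTRUCT query; --data-urlencode handles the encoding.
    curl -G -H 'Accept: application/rdf+xml' \
         --data-urlencode "query=CONSTRUCT { ?s ?p ?o } WHERE { GRAPH <$GRAPH> { ?s ?p ?o } }" \
         "$ENDPOINT" | gzip > data.rdf.gz

Even when this works, the export is a re-serialization of the stored triples, so it will not be byte-identical to the uploaded file; comparing triple counts or parsing both files is a better equivalence check.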
From: Bryan T. <br...@sy...> - 2015-11-04 17:41:18

Inlining is a means of directly expressing a URI or Literal such that it can be inserted into the statement indices without having to be encoded (and decoded) using a dictionary. We automatically inline numeric, boolean, xsd:dateTime, etc. If you have URIs that are generated using integer or UUID suffixes, then inlining those can be a very large performance gain. There is a section of the wiki on inlining.

Bryan

----
Bryan Thompson
Chief Scientist & Founder
SYSTAP, LLC
br...@sy...
http://blazegraph.com
http://blog.blazegraph.com

On Wed, Nov 4, 2015 at 12:27 PM, Joakim Soderberg <joa...@bl...> wrote:

> I'm curious, what do you refer to by "inlining and vocabulary optimization"? TBOX stuff?
>
> On Nov 4, 2015, at 5:27 AM, Brad Bebee <be...@sy...> wrote:
>
>> Alex,
>>
>> Great -- as Bryan mentioned, we've been doing a lot of work on load performance. The right inlining and vocabulary optimizations can have a significant impact on load performance (40-50% increase in speed depending on your data). We'll have some blog posts related to the release.
>>
>> Thanks, --Brad
>>
>> On Wed, Nov 4, 2015 at 8:25 AM, Alex Muir <ale...@gm...> wrote:
>>
>>> Okay, thanks Brad and Bryan, will use the REST API and look into optimizations.
>>>
>>> Regards
>>> Alex
>>> www.tilogeo.com
>>>
>>> On Wed, Nov 4, 2015 at 1:19 PM, Brad Bebee <be...@sy...> wrote:
>>>
>>>> Alex,
>>>>
>>>> Adding to Bryan's comments, if you have a Blazegraph instance running remotely and you want to insert data into it, you can use the REST API to post URIs of the files to load. Assuming you can make those URIs resolvable on the remote server, it will resolve the URIs and load them.
>>>>
>>>> https://wiki.blazegraph.com/wiki/index.php/REST_API#INSERT_RDF_.28POST_with_URLs.29
>>>>
>>>> Thanks, --Brad
>>>>
>>>> On Wed, Nov 4, 2015 at 8:15 AM, Bryan Thompson <br...@sy...> wrote:
>>>>
>>>>> Alex,
>>>>>
>>>>> If you are referring to the DataLoader, it is an embedded utility class. It is not designed to operate with a remote database instance.
>>>>>
>>>>> You can mimic many of the advantages of the DataLoader by increasing BigdataSail.Options.BUFFER_CAPACITY to 100,000.
>>>>>
>>>>> You should also follow the guidelines on the wiki for performance optimization if you are interested in bulk data load. See the section entitled "Optimizations and benchmarking" (https://wiki.blazegraph.com/wiki/index.php/NanoSparqlServer#p-Optimizations_and_benchmarking), e.g. https://wiki.blazegraph.com/wiki/index.php/IOOptimization.
>>>>>
>>>>> Some of the more important optimizations for write throughput are:
>>>>>
>>>>> - Write cache service native buffer pool size.
>>>>> - Use of URI inlining techniques if you have URIs that have numeric or UUID patterns embedded into them.
>>>>> - Fast disk.
>>>>>
>>>>> We have a number of improvements in the development branch that improve load speed, including code to overlap the parser with the index writers. Those will be in the 2.0 release.
>>>>>
>>>>> Thanks,
>>>>> Bryan
>>>>>
>>>>> On Wed, Nov 4, 2015 at 7:48 AM, Alex Muir <ale...@gm...> wrote:
>>>>>
>>>>>> Well, I downloaded the Blazegraph git examples and extracted the unique bigdata properties. I don't think any of them are related to specifying a remote server.
>>>>>>
>>>>>> Perhaps there is another way to specify uploading to a remote server with the bulk loader?
>>>>>>
>>>>>> com.bigdata.btree.BTree.branchingFactor
>>>>>> com.bigdata.btree.keys.KeyBuilder.collator
>>>>>> com.bigdata.btree.writeRetentionQueue.capacity
>>>>>> com.bigdata.journal.AbstractJournal.bufferMode
>>>>>> com.bigdata.journal.AbstractJournal.file
>>>>>> com.bigdata.journal.AbstractJournal.initialExtent
>>>>>> com.bigdata.journal.AbstractJournal.maximumExtent
>>>>>> com.bigdata.journal.AbstractJournal.writeCacheBufferCount
>>>>>> com.bigdata.namespace.BSBM_284826.lex.BLOBS.com.bigdata.btree.BTree.branchingFactor
>>>>>> com.bigdata.namespace.BSBM_284826.lex.ID2TERM.com.bigdata.btree.BTree.branchingFactor
>>>>>> com.bigdata.namespace.BSBM_284826.lex.TERM2ID.com.bigdata.btree.BTree.branchingFactor
>>>>>> com.bigdata.namespace.BSBM_284826.spo.OSP.com.bigdata.btree.BTree.branchingFactor
>>>>>> com.bigdata.namespace.BSBM_284826.spo.POS.com.bigdata.btree.BTree.branchingFactor
>>>>>> com.bigdata.namespace.BSBM_284826.spo.SPO.com.bigdata.btree.BTree.branchingFactor
>>>>>> com.bigdata.namespace.BSBM_566496.lex.BLOBS.com.bigdata.btree.BTree.branchingFactor
>>>>>> com.bigdata.namespace.BSBM_566496.lex.ID2TERM.com.bigdata.btree.BTree.branchingFactor
>>>>>> com.bigdata.namespace.BSBM_566496.lex.TERM2ID.com.bigdata.btree.BTree.branchingFactor
>>>>>> com.bigdata.namespace.BSBM_566496.spo.OSP.com.bigdata.btree.BTree.branchingFactor
>>>>>> com.bigdata.namespace.BSBM_566496.spo.POS.com.bigdata.btree.BTree.branchingFactor
>>>>>> com.bigdata.namespace.BSBM_566496.spo.SPO.com.bigdata.btree.BTree.branchingFactor
>>>>>> com.bigdata.namespace.chem2bio2rdf.lex.BLOBS.com.bigdata.btree.BTree.branchingFactor
>>>>>> com.bigdata.namespace.chem2bio2rdf.lex.ID2TERM.com.bigdata.btree.BTree.branchingFactor
>>>>>> com.bigdata.namespace.chem2bio2rdf.lex.TERM2ID.com.bigdata.btree.BTree.branchingFactor
>>>>>> com.bigdata.namespace.chem2bio2rdf.spo.CSPO.com.bigdata.btree.BTree.branchingFactor
>>>>>> com.bigdata.namespace.chem2bio2rdf.spo.OCSP.com.bigdata.btree.BTree.branchingFactor
>>>>>> com.bigdata.namespace.chem2bio2rdf.spo.PCSO.com.bigdata.btree.BTree.branchingFactor
>>>>>> com.bigdata.namespace.chem2bio2rdf.spo.POCS.com.bigdata.btree.BTree.branchingFactor
>>>>>> com.bigdata.namespace.chem2bio2rdf.spo.SOPC.com.bigdata.btree.BTree.branchingFactor
>>>>>> com.bigdata.namespace.chem2bio2rdf.spo.SPOC.com.bigdata.btree.BTree.branchingFactor
>>>>>> com.bigdata.namespace.dbpedia.lex.BLOBS.com.bigdata.btree.BTree.branchingFactor
>>>>>> com.bigdata.namespace.dbpedia.lex.ID2TERM.com.bigdata.btree.BTree.branchingFactor
>>>>>> com.bigdata.namespace.dbpedia.lex.TERM2ID.com.bigdata.btree.BTree.branchingFactor
>>>>>> com.bigdata.namespace.dbpedia.spo.OSP.com.bigdata.btree.BTree.branchingFactor
>>>>>> com.bigdata.namespace.dbpedia.spo.POS.com.bigdata.btree.BTree.branchingFactor
>>>>>> com.bigdata.namespace.dbpedia.spo.SPO.com.bigdata.btree.BTree.branchingFactor
>>>>>> com.bigdata.namespace.kb.lex.BLOBS.com.bigdata.btree.BTree.branchingFactor
>>>>>> com.bigdata.namespace.kb.lex.com.bigdata.btree.BTree.branchingFactor
>>>>>> com.bigdata.namespace.kb.lex.ID2TERM.com.bigdata.btree.BTree.branchingFactor
>>>>>> com.bigdata.namespace.kb.lex.TERM2ID.com.bigdata.btree.BTree.branchingFactor
>>>>>> com.bigdata.namespace.kb.spo.com.bigdata.btree.BTree.branchingFactor
>>>>>> com.bigdata.rdf.rio.RDFParserOptions.stopAtFirstError
>>>>>> com.bigdata.rdf.sail.BigdataSail.bufferCapacity
>>>>>> com.bigdata.rdf.sail.BigdataSail.truthMaintenance
>>>>>> com.bigdata.rdf.sail.bufferCapacity
>>>>>> com.bigdata.rdf.sail.newEvalStrategy
>>>>>> com.bigdata.rdf.sail.queryTimeExpander
>>>>>> com.bigdata.rdf.sail.truthMaintenance
>>>>>> com.bigdata.rdf.store.AbstractTripleStore.axiomsClass
>>>>>> com.bigdata.rdf.store.AbstractTripleStore.bloomFilter
>>>>>> com.bigdata.rdf.store.AbstractTripleStore.extensionFactoryClass
>>>>>> com.bigdata.rdf.store.AbstractTripleStore.justify
>>>>>> com.bigdata.rdf.store.AbstractTripleStore.quads
>>>>>> com.bigdata.rdf.store.AbstractTripleStore.statementIdentifiers
>>>>>> com.bigdata.rdf.store.AbstractTripleStore.textIndex
>>>>>> com.bigdata.rdf.store.AbstractTripleStore.vocabularyClass
>>>>>> com.bigdata.resource.OverflowManager.overflowEnabled
>>>>>> com.bigdata.service.AbstractTransactionService.minReleaseAge
>>>>>> com.bigdata.service.EmbeddedFederation.dataDir
>>>>>> com.bigdata.service.IBigdataClient.collectPlatformStatistics
>>>>>>
>>>>>> Regards
>>>>>> Alex
>>>>>> www.tilogeo.com
>>>>>>
>>>>>> On Wed, Nov 4, 2015 at 11:12 AM, Alex Muir <ale...@gm...> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> I'm interested to bulk upload onto a remote server:
>>>>>>> https://wiki.blazegraph.com/wiki/index.php/Bulk_Data_Load
>>>>>>>
>>>>>>> I assume that I can specify a remote server in the properties file; however, I'm thus far unable to find more information on what goes in a property file from the website.
>>>>>>>
>>>>>>> Is there a page defining all the properties?
>>>>>>>
>>>>>>> Regards
>>>>>>> Alex
>>>>>>> www.tilogeo.com
From: Joakim S. <joa...@bl...> - 2015-11-04 17:28:06
|
I’m curious, what do you refer to by "inlining and vocabulary optimization"? TBOX stuff? > On Nov 4, 2015, at 5:27 AM, Brad Bebee <be...@sy...> wrote: > > Alex, > > Great -- as Bryan mentioned, we've been doing a lot of work on load performance. The right inlining and vocabulary optimizations can have a significant impact on load performance (40-50% increase in speed depending on your data). We'll have some blog posts related to the release. > > Thanks, --Brad > > On Wed, Nov 4, 2015 at 8:25 AM, Alex Muir <ale...@gm... <mailto:ale...@gm...>> wrote: > Okay thanks Brad and Bryan, will use the REST API and look into optimizations > > > Regards > Alex > www.tilogeo.com <http://www.tilogeo.com/> > > On Wed, Nov 4, 2015 at 1:19 PM, Brad Bebee <be...@sy... <mailto:be...@sy...>> wrote: > Alex, > > Adding to Bryan's comments, if you have a Blazegraph instance running remotely and you want to insert data into it, you can use the REST API to post URIs of the files to load. Assuming you can make those URIs resolvable on the remote server, it will resolve the URIs and load them. > > https://wiki.blazegraph.com/wiki/index.php/REST_API#INSERT_RDF_.28POST_with_URLs.29 <https://wiki.blazegraph.com/wiki/index.php/REST_API#INSERT_RDF_.28POST_with_URLs.29> > > Thanks, --Brad > > On Wed, Nov 4, 2015 at 8:15 AM, Bryan Thompson <br...@sy... <mailto:br...@sy...>> wrote: > Alex, > > If you are referring to the DataLoader, it is an embedded utility class. It is not designed to operate with a remote database instance. > > You can mimic many of the advantages of the DataLoader by increasing BigdataSail.Options.BUFFER_CAPACITY to 100,000. > > You should also follow the guidelines on the wiki for performance optimization if you are interested in bulk data load. See the section entitled Optimizations and benchmarking <https://wiki.blazegraph.com/wiki/index.php/NanoSparqlServer#p-Optimizations_and_benchmarking>. E.g., https://wiki.blazegraph.com/wiki/index.php/IOOptimization <https://wiki.blazegraph.com/wiki/index.php/IOOptimization>. > > Some of the more important optimizations for write throughput are: > > - Write cache service native buffer pool size. > - Use of URI inlining techniques if you have URIs that have numeric or UUID patterns embedded into them. > - Fast disk. > > We have a number of improvements in the development branch that improve load speed, including code to overlap the parser with the index writers. Those will be in the 2.0 release. > > Thanks, > Bryan > > > ---- > Bryan Thompson > Chief Scientist & Founder > SYSTAP, LLC > 4501 Tower Road > Greensboro, NC 27410 > br...@sy... <mailto:br...@sy...> > http://blazegraph.com <http://blazegraph.com/> > http://blog.blazegraph.com <http://blog.blazegraph.com/> > > Blazegraph™ <http://www.blazegraph.com/> is our ultra high-performance graph database that supports both RDF/SPARQL and Tinkerpop/Blueprints APIs. Blazegraph is now available with GPU acceleration using our disruptive technology to accelerate data-parallel graph analytics and graph query. > CONFIDENTIALITY NOTICE: This email and its contents and attachments are for the sole use of the intended recipient(s) and are confidential or proprietary to SYSTAP. Any unauthorized review, use, disclosure, dissemination or copying of this email or its contents or attachments is prohibited. If you have received this communication in error, please notify the sender by reply email and permanently delete all copies of the email and its contents and attachments. 
> > > > On Wed, Nov 4, 2015 at 7:48 AM, Alex Muir <ale...@gm... <mailto:ale...@gm...>> wrote: > Well I downloaded the blazegraph git examples and extracted the unique bigdata properties. I don't think any of them are related to specifying a remote server. > > Perhaps there is another way to specify to upload to a remote server with the bulk loader? > > com.bigdata.btree.BTree.branchingFactor > com.bigdata.btree.keys.KeyBuilder.collator > com.bigdata.btree.writeRetentionQueue.capacity > com.bigdata.journal.AbstractJournal.bufferMode > com.bigdata.journal.AbstractJournal.file > com.bigdata.journal.AbstractJournal.initialExtent > com.bigdata.journal.AbstractJournal.maximumExtent > com.bigdata.journal.AbstractJournal.writeCacheBufferCount > com.bigdata.namespace.BSBM_284826.lex.BLOBS.com.bigdata.btree.BTree.branchingFactor > com.bigdata.namespace.BSBM_284826.lex.ID2TERM.com.bigdata.btree.BTree.branchingFactor > com.bigdata.namespace.BSBM_284826.lex.TERM2ID.com.bigdata.btree.BTree.branchingFactor > com.bigdata.namespace.BSBM_284826.spo.OSP.com.bigdata.btree.BTree.branchingFactor > com.bigdata.namespace.BSBM_284826.spo.POS.com.bigdata.btree.BTree.branchingFactor > com.bigdata.namespace.BSBM_284826.spo.SPO.com.bigdata.btree.BTree.branchingFactor > com.bigdata.namespace.BSBM_566496.lex.BLOBS.com.bigdata.btree.BTree.branchingFactor > com.bigdata.namespace.BSBM_566496.lex.ID2TERM.com.bigdata.btree.BTree.branchingFactor > com.bigdata.namespace.BSBM_566496.lex.TERM2ID.com.bigdata.btree.BTree.branchingFactor > com.bigdata.namespace.BSBM_566496.spo.OSP.com.bigdata.btree.BTree.branchingFactor > com.bigdata.namespace.BSBM_566496.spo.POS.com.bigdata.btree.BTree.branchingFactor > com.bigdata.namespace.BSBM_566496.spo.SPO.com.bigdata.btree.BTree.branchingFactor > com.bigdata.namespace.chem2bio2rdf.lex.BLOBS.com.bigdata.btree.BTree.branchingFactor > com.bigdata.namespace.chem2bio2rdf.lex.ID2TERM.com.bigdata.btree.BTree.branchingFactor > com.bigdata.namespace.chem2bio2rdf.lex.TERM2ID.com.bigdata.btree.BTree.branchingFactor > com.bigdata.namespace.chem2bio2rdf.spo.CSPO.com.bigdata.btree.BTree.branchingFactor > com.bigdata.namespace.chem2bio2rdf.spo.OCSP.com.bigdata.btree.BTree.branchingFactor > com.bigdata.namespace.chem2bio2rdf.spo.PCSO.com.bigdata.btree.BTree.branchingFactor > com.bigdata.namespace.chem2bio2rdf.spo.POCS.com.bigdata.btree.BTree.branchingFactor > com.bigdata.namespace.chem2bio2rdf.spo.SOPC.com.bigdata.btree.BTree.branchingFactor > com.bigdata.namespace.chem2bio2rdf.spo.SPOC.com.bigdata.btree.BTree.branchingFactor > com.bigdata.namespace.dbpedia.lex.BLOBS.com.bigdata.btree.BTree.branchingFactor > com.bigdata.namespace.dbpedia.lex.ID2TERM.com.bigdata.btree.BTree.branchingFactor > com.bigdata.namespace.dbpedia.lex.TERM2ID.com.bigdata.btree.BTree.branchingFactor > com.bigdata.namespace.dbpedia.spo.OSP.com.bigdata.btree.BTree.branchingFactor > com.bigdata.namespace.dbpedia.spo.POS.com.bigdata.btree.BTree.branchingFactor > com.bigdata.namespace.dbpedia.spo.SPO.com.bigdata.btree.BTree.branchingFactor > com.bigdata.namespace.kb.lex.BLOBS.com.bigdata.btree.BTree.branchingFactor > com.bigdata.namespace.kb.lex.com.bigdata.btree.BTree.branchingFactor > com.bigdata.namespace.kb.lex.ID2TERM.com.bigdata.btree.BTree.branchingFactor > com.bigdata.namespace.kb.lex.TERM2ID.com.bigdata.btree.BTree.branchingFactor > com.bigdata.namespace.kb.spo.com.bigdata.btree.BTree.branchingFactor > com.bigdata.rdf.rio.RDFParserOptions.stopAtFirstError > com.bigdata.rdf.sail.BigdataSail.bufferCapacity > 
com.bigdata.rdf.sail.BigdataSail.truthMaintenance > com.bigdata.rdf.sail.bufferCapacity > com.bigdata.rdf.sail.newEvalStrategy > com.bigdata.rdf.sail.queryTimeExpander > com.bigdata.rdf.sail.truthMaintenance > com.bigdata.rdf.store.AbstractTripleStore.axiomsClass > com.bigdata.rdf.store.AbstractTripleStore.bloomFilter > com.bigdata.rdf.store.AbstractTripleStore.extensionFactoryClass > com.bigdata.rdf.store.AbstractTripleStore.justify > com.bigdata.rdf.store.AbstractTripleStore.quads > com.bigdata.rdf.store.AbstractTripleStore.statementIdentifiers > com.bigdata.rdf.store.AbstractTripleStore.textIndex > com.bigdata.rdf.store.AbstractTripleStore.vocabularyClass > com.bigdata.resource.OverflowManager.overflowEnabled > com.bigdata.service.AbstractTransactionService.minReleaseAge > com.bigdata.service.EmbeddedFederation.dataDir > com.bigdata.service.IBigdataClient.collectPlatformStatistics > > > > Regards > Alex > www.tilogeo.com <http://www.tilogeo.com/> > On Wed, Nov 4, 2015 at 11:12 AM, Alex Muir <ale...@gm... <mailto:ale...@gm...>> wrote: > Hi, > > I'm interested to bulk upload onto a remote server > > https://wiki.blazegraph.com/wiki/index.php/Bulk_Data_Load <https://wiki.blazegraph.com/wiki/index.php/Bulk_Data_Load> > > I assume that I can specify a remote server in the properties file however I'm thus far unable to find more information on what goes in a property file from the website. > > Is there a page defining all the properties? > > Regards > Alex > www.tilogeo.com <http://www.tilogeo.com/> > > ------------------------------------------------------------------------------ > > _______________________________________________ > Bigdata-developers mailing list > Big...@li... <mailto:Big...@li...> > https://lists.sourceforge.net/lists/listinfo/bigdata-developers <https://lists.sourceforge.net/lists/listinfo/bigdata-developers> > > > > ------------------------------------------------------------------------------ > > _______________________________________________ > Bigdata-developers mailing list > Big...@li... <mailto:Big...@li...> > https://lists.sourceforge.net/lists/listinfo/bigdata-developers <https://lists.sourceforge.net/lists/listinfo/bigdata-developers> > > > > > -- > _______________ > Brad Bebee > CEO, Managing Partner > SYSTAP, LLC > e: be...@sy... <mailto:be...@sy...> > m: 202.642.7961 <tel:202.642.7961> > f: 571.367.5000 <tel:571.367.5000> > w: www.blazegraph.com <http://www.blazegraph.com/> > > Blazegraph™ <http://www.blazegraph.com/> is our ultra high-performance graph database that supports both RDF/SPARQL and Tinkerpop/Blueprints APIs. Mapgraph™ <http://www.systap.com/mapgraph> is our disruptive new technology to use GPUs to accelerate data-parallel graph analytics. > > CONFIDENTIALITY NOTICE: This email and its contents and attachments are for the sole use of the intended recipient(s) and are confidential or proprietary to SYSTAP, LLC. Any unauthorized review, use, disclosure, dissemination or copying of this email or its contents or attachments is prohibited. If you have received this communication in error, please notify the sender by reply email and permanently delete all copies of the email and its contents and attachments. > > > > > > > -- > _______________ > Brad Bebee > CEO, Managing Partner > SYSTAP, LLC > e: be...@sy... 
<mailto:be...@sy...> > m: 202.642.7961 > f: 571.367.5000 > w: www.blazegraph.com <http://www.blazegraph.com/> > > Blazegraph™ <http://www.blazegraph.com/> is our ultra high-performance graph database that supports both RDF/SPARQL and Tinkerpop/Blueprints APIs. Mapgraph™ <http://www.systap.com/mapgraph> is our disruptive new technology to use GPUs to accelerate data-parallel graph analytics. > > CONFIDENTIALITY NOTICE: This email and its contents and attachments are for the sole use of the intended recipient(s) and are confidential or proprietary to SYSTAP, LLC. Any unauthorized review, use, disclosure, dissemination or copying of this email or its contents or attachments is prohibited. If you have received this communication in error, please notify the sender by reply email and permanently delete all copies of the email and its contents and attachments. > > > ------------------------------------------------------------------------------ > _______________________________________________ > Bigdata-developers mailing list > Big...@li... > https://lists.sourceforge.net/lists/listinfo/bigdata-developers |
From: Brad B. <be...@sy...> - 2015-11-04 13:27:23
|
Alex, Great -- as Bryan mentioned, we've been doing a lot of work on load performance. The right inlining and vocabulary optimizations can have a significant impact on load performance (40-50% increase in speed depending on your data). We'll have some blog posts related to the release. Thanks, --Brad On Wed, Nov 4, 2015 at 8:25 AM, Alex Muir <ale...@gm...> wrote: > Okay thanks Brad and Bryan, will use the REST API and look into > optimizations > > > Regards > Alex > www.tilogeo.com > > On Wed, Nov 4, 2015 at 1:19 PM, Brad Bebee <be...@sy...> wrote: > >> Alex, >> >> Adding to Bryan's comments, if you have a Blazegraph instance running >> remotely and you want to insert data into it, you can use the REST API to >> post URIs of the files to load. Assuming you can make those URIs >> resolvable on the remote server, it will resolve the URIs and load them. >> >> >> https://wiki.blazegraph.com/wiki/index.php/REST_API#INSERT_RDF_.28POST_with_URLs.29 >> >> Thanks, --Brad >> >> On Wed, Nov 4, 2015 at 8:15 AM, Bryan Thompson <br...@sy...> wrote: >> >>> Alex, >>> >>> If you are referring to the DataLoader, it is an embedded utility >>> class. It is not designed to operate with a remote database instance. >>> >>> You can mimic many of the advantages of the DataLoader by increasing >>> BigdataSail.Options.BUFFER_CAPACITY to 100,000. >>> >>> You should also follow the guidelines on the wiki for performance >>> optimization if you are interested in bulk data load. See the section >>> entitled Optimizations and benchmarking >>> <https://wiki.blazegraph.com/wiki/index.php/NanoSparqlServer#p-Optimizations_and_benchmarking>. >>> E.g., https://wiki.blazegraph.com/wiki/index.php/IOOptimization. >>> >>> Some of the more important optimizations for write throughput are: >>> >>> - Write cache service native buffer pool size. >>> - Use of URI inlining techniques if you have URIs that have numeric or >>> UUID patterns embedded into them. >>> - Fast disk. >>> >>> We have a number of improvements in the development branch that improve >>> load speed, including code to overlap the parser with the index writers. >>> Those will be in the 2.0 release. >>> >>> Thanks, >>> Bryan >>> >>> >>> ---- >>> Bryan Thompson >>> Chief Scientist & Founder >>> SYSTAP, LLC >>> 4501 Tower Road >>> Greensboro, NC 27410 >>> br...@sy... >>> http://blazegraph.com >>> http://blog.blazegraph.com >>> >>> Blazegraph™ <http://www.blazegraph.com/> is our ultra high-performance >>> graph database that supports both RDF/SPARQL and Tinkerpop/Blueprints >>> APIs. Blazegraph is now available with GPU acceleration using our disruptive >>> technology to accelerate data-parallel graph analytics and graph query. >>> >>> CONFIDENTIALITY NOTICE: This email and its contents and attachments >>> are for the sole use of the intended recipient(s) and are confidential or >>> proprietary to SYSTAP. Any unauthorized review, use, disclosure, >>> dissemination or copying of this email or its contents or attachments is >>> prohibited. If you have received this communication in error, please notify >>> the sender by reply email and permanently delete all copies of the email >>> and its contents and attachments. >>> >>> On Wed, Nov 4, 2015 at 7:48 AM, Alex Muir <ale...@gm...> wrote: >>> >>>> Well I downloaded the blazegraph git examples and extracted the unique >>>> bigdata properties. I don't think any of them are related to specifying a >>>> remote server. >>>> >>>> Perhaps there is another way to specify to upload to a remote server >>>> with the bulk loader? 
>>>> >>>> com.bigdata.btree.BTree.branchingFactor >>>> com.bigdata.btree.keys.KeyBuilder.collator >>>> com.bigdata.btree.writeRetentionQueue.capacity >>>> com.bigdata.journal.AbstractJournal.bufferMode >>>> com.bigdata.journal.AbstractJournal.file >>>> com.bigdata.journal.AbstractJournal.initialExtent >>>> com.bigdata.journal.AbstractJournal.maximumExtent >>>> com.bigdata.journal.AbstractJournal.writeCacheBufferCount >>>> >>>> com.bigdata.namespace.BSBM_284826.lex.BLOBS.com.bigdata.btree.BTree.branchingFactor >>>> >>>> com.bigdata.namespace.BSBM_284826.lex.ID2TERM.com.bigdata.btree.BTree.branchingFactor >>>> >>>> com.bigdata.namespace.BSBM_284826.lex.TERM2ID.com.bigdata.btree.BTree.branchingFactor >>>> >>>> com.bigdata.namespace.BSBM_284826.spo.OSP.com.bigdata.btree.BTree.branchingFactor >>>> >>>> com.bigdata.namespace.BSBM_284826.spo.POS.com.bigdata.btree.BTree.branchingFactor >>>> >>>> com.bigdata.namespace.BSBM_284826.spo.SPO.com.bigdata.btree.BTree.branchingFactor >>>> >>>> com.bigdata.namespace.BSBM_566496.lex.BLOBS.com.bigdata.btree.BTree.branchingFactor >>>> >>>> com.bigdata.namespace.BSBM_566496.lex.ID2TERM.com.bigdata.btree.BTree.branchingFactor >>>> >>>> com.bigdata.namespace.BSBM_566496.lex.TERM2ID.com.bigdata.btree.BTree.branchingFactor >>>> >>>> com.bigdata.namespace.BSBM_566496.spo.OSP.com.bigdata.btree.BTree.branchingFactor >>>> >>>> com.bigdata.namespace.BSBM_566496.spo.POS.com.bigdata.btree.BTree.branchingFactor >>>> >>>> com.bigdata.namespace.BSBM_566496.spo.SPO.com.bigdata.btree.BTree.branchingFactor >>>> >>>> com.bigdata.namespace.chem2bio2rdf.lex.BLOBS.com.bigdata.btree.BTree.branchingFactor >>>> >>>> com.bigdata.namespace.chem2bio2rdf.lex.ID2TERM.com.bigdata.btree.BTree.branchingFactor >>>> >>>> com.bigdata.namespace.chem2bio2rdf.lex.TERM2ID.com.bigdata.btree.BTree.branchingFactor >>>> >>>> com.bigdata.namespace.chem2bio2rdf.spo.CSPO.com.bigdata.btree.BTree.branchingFactor >>>> >>>> com.bigdata.namespace.chem2bio2rdf.spo.OCSP.com.bigdata.btree.BTree.branchingFactor >>>> >>>> com.bigdata.namespace.chem2bio2rdf.spo.PCSO.com.bigdata.btree.BTree.branchingFactor >>>> >>>> com.bigdata.namespace.chem2bio2rdf.spo.POCS.com.bigdata.btree.BTree.branchingFactor >>>> >>>> com.bigdata.namespace.chem2bio2rdf.spo.SOPC.com.bigdata.btree.BTree.branchingFactor >>>> >>>> com.bigdata.namespace.chem2bio2rdf.spo.SPOC.com.bigdata.btree.BTree.branchingFactor >>>> >>>> com.bigdata.namespace.dbpedia.lex.BLOBS.com.bigdata.btree.BTree.branchingFactor >>>> >>>> com.bigdata.namespace.dbpedia.lex.ID2TERM.com.bigdata.btree.BTree.branchingFactor >>>> >>>> com.bigdata.namespace.dbpedia.lex.TERM2ID.com.bigdata.btree.BTree.branchingFactor >>>> >>>> com.bigdata.namespace.dbpedia.spo.OSP.com.bigdata.btree.BTree.branchingFactor >>>> >>>> com.bigdata.namespace.dbpedia.spo.POS.com.bigdata.btree.BTree.branchingFactor >>>> >>>> com.bigdata.namespace.dbpedia.spo.SPO.com.bigdata.btree.BTree.branchingFactor >>>> >>>> com.bigdata.namespace.kb.lex.BLOBS.com.bigdata.btree.BTree.branchingFactor >>>> com.bigdata.namespace.kb.lex.com.bigdata.btree.BTree.branchingFactor >>>> >>>> com.bigdata.namespace.kb.lex.ID2TERM.com.bigdata.btree.BTree.branchingFactor >>>> >>>> com.bigdata.namespace.kb.lex.TERM2ID.com.bigdata.btree.BTree.branchingFactor >>>> com.bigdata.namespace.kb.spo.com.bigdata.btree.BTree.branchingFactor >>>> com.bigdata.rdf.rio.RDFParserOptions.stopAtFirstError >>>> com.bigdata.rdf.sail.BigdataSail.bufferCapacity >>>> com.bigdata.rdf.sail.BigdataSail.truthMaintenance >>>> com.bigdata.rdf.sail.bufferCapacity 
>>>> com.bigdata.rdf.sail.newEvalStrategy >>>> com.bigdata.rdf.sail.queryTimeExpander >>>> com.bigdata.rdf.sail.truthMaintenance >>>> com.bigdata.rdf.store.AbstractTripleStore.axiomsClass >>>> com.bigdata.rdf.store.AbstractTripleStore.bloomFilter >>>> com.bigdata.rdf.store.AbstractTripleStore.extensionFactoryClass >>>> com.bigdata.rdf.store.AbstractTripleStore.justify >>>> com.bigdata.rdf.store.AbstractTripleStore.quads >>>> com.bigdata.rdf.store.AbstractTripleStore.statementIdentifiers >>>> com.bigdata.rdf.store.AbstractTripleStore.textIndex >>>> com.bigdata.rdf.store.AbstractTripleStore.vocabularyClass >>>> com.bigdata.resource.OverflowManager.overflowEnabled >>>> com.bigdata.service.AbstractTransactionService.minReleaseAge >>>> com.bigdata.service.EmbeddedFederation.dataDir >>>> com.bigdata.service.IBigdataClient.collectPlatformStatistics >>>> >>>> >>>> >>>> Regards >>>> Alex >>>> www.tilogeo.com >>>> >>>> On Wed, Nov 4, 2015 at 11:12 AM, Alex Muir <ale...@gm...> >>>> wrote: >>>> >>>>> Hi, >>>>> >>>>> I'm interested to bulk upload onto a remote server >>>>> >>>>> https://wiki.blazegraph.com/wiki/index.php/Bulk_Data_Load >>>>> >>>>> I assume that I can specify a remote server in the properties file >>>>> however I'm thus far unable to find more information on what goes in a >>>>> property file from the website. >>>>> >>>>> Is there a page defining all the properties? >>>>> >>>>> Regards >>>>> Alex >>>>> www.tilogeo.com >>>>> >>>> >>>> >>>> >>>> ------------------------------------------------------------------------------ >>>> >>>> _______________________________________________ >>>> Bigdata-developers mailing list >>>> Big...@li... >>>> https://lists.sourceforge.net/lists/listinfo/bigdata-developers >>>> >>>> >>> >>> >>> ------------------------------------------------------------------------------ >>> >>> _______________________________________________ >>> Bigdata-developers mailing list >>> Big...@li... >>> https://lists.sourceforge.net/lists/listinfo/bigdata-developers >>> >>> >> >> >> -- >> _______________ >> Brad Bebee >> CEO, Managing Partner >> SYSTAP, LLC >> e: be...@sy... >> m: 202.642.7961 >> f: 571.367.5000 >> w: www.blazegraph.com >> >> Blazegraph™ <http://www.blazegraph.com> is our ultra high-performance >> graph database that supports both RDF/SPARQL and Tinkerpop/Blueprints >> APIs. Mapgraph™ <http://www.systap.com/mapgraph> is our disruptive new >> technology to use GPUs to accelerate data-parallel graph analytics. >> >> CONFIDENTIALITY NOTICE: This email and its contents and attachments are >> for the sole use of the intended recipient(s) and are confidential or >> proprietary to SYSTAP, LLC. Any unauthorized review, use, disclosure, >> dissemination or copying of this email or its contents or attachments is >> prohibited. If you have received this communication in error, please notify >> the sender by reply email and permanently delete all copies of the email >> and its contents and attachments. >> > > -- _______________ Brad Bebee CEO, Managing Partner SYSTAP, LLC e: be...@sy... m: 202.642.7961 f: 571.367.5000 w: www.blazegraph.com Blazegraph™ <http://www.blazegraph.com> is our ultra high-performance graph database that supports both RDF/SPARQL and Tinkerpop/Blueprints APIs. Mapgraph™ <http://www.systap.com/mapgraph> is our disruptive new technology to use GPUs to accelerate data-parallel graph analytics. 
CONFIDENTIALITY NOTICE: This email and its contents and attachments are for the sole use of the intended recipient(s) and are confidential or proprietary to SYSTAP, LLC. Any unauthorized review, use, disclosure, dissemination or copying of this email or its contents or attachments is prohibited. If you have received this communication in error, please notify the sender by reply email and permanently delete all copies of the email and its contents and attachments. |
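For orientation, the "vocabulary optimization" mentioned above is driven by properties such as the vocabularyClass key that appears in the property dump quoted elsewhere in this thread. The sketch below only shows where such a setting goes; the class value is a hypothetical placeholder, and the concrete vocabulary and URI-inlining classes for a given release should be taken from the Blazegraph wiki or release notes rather than from this example.

import java.util.Properties;

public class VocabularySettingSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Key taken from the property list quoted in this thread; the value is a
        // made-up class name. Substitute the vocabulary class documented for
        // your Blazegraph release.
        props.setProperty("com.bigdata.rdf.store.AbstractTripleStore.vocabularyClass",
                "org.example.MyDatasetVocabulary");
        // These properties are typically supplied to the Sail or DataLoader when
        // the namespace is created.
        props.list(System.out);
    }
}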
From: Alex M. <ale...@gm...> - 2015-11-04 13:25:26
|
Okay thanks Brad and Bryan, will use the REST API and look into optimizations Regards Alex www.tilogeo.com On Wed, Nov 4, 2015 at 1:19 PM, Brad Bebee <be...@sy...> wrote: > Alex, > > Adding to Bryan's comments, if you have a Blazegraph instance running > remotely and you want to insert data into it, you can use the REST API to > post URIs of the files to load. Assuming you can make those URIs > resolvable on the remote server, it will resolve the URIs and load them. > > > https://wiki.blazegraph.com/wiki/index.php/REST_API#INSERT_RDF_.28POST_with_URLs.29 > > Thanks, --Brad > > On Wed, Nov 4, 2015 at 8:15 AM, Bryan Thompson <br...@sy...> wrote: > >> Alex, >> >> If you are referring to the DataLoader, it is an embedded utility class. >> It is not designed to operate with a remote database instance. >> >> You can mimic many of the advantages of the DataLoader by increasing >> BigdataSail.Options.BUFFER_CAPACITY to 100,000. >> >> You should also follow the guidelines on the wiki for performance >> optimization if you are interested in bulk data load. See the section >> entitled Optimizations and benchmarking >> <https://wiki.blazegraph.com/wiki/index.php/NanoSparqlServer#p-Optimizations_and_benchmarking>. >> E.g., https://wiki.blazegraph.com/wiki/index.php/IOOptimization. >> >> Some of the more important optimizations for write throughput are: >> >> - Write cache service native buffer pool size. >> - Use of URI inlining techniques if you have URIs that have numeric or >> UUID patterns embedded into them. >> - Fast disk. >> >> We have a number of improvements in the development branch that improve >> load speed, including code to overlap the parser with the index writers. >> Those will be in the 2.0 release. >> >> Thanks, >> Bryan >> >> >> ---- >> Bryan Thompson >> Chief Scientist & Founder >> SYSTAP, LLC >> 4501 Tower Road >> Greensboro, NC 27410 >> br...@sy... >> http://blazegraph.com >> http://blog.blazegraph.com >> >> Blazegraph™ <http://www.blazegraph.com/> is our ultra high-performance >> graph database that supports both RDF/SPARQL and Tinkerpop/Blueprints >> APIs. Blazegraph is now available with GPU acceleration using our disruptive >> technology to accelerate data-parallel graph analytics and graph query. >> >> CONFIDENTIALITY NOTICE: This email and its contents and attachments are >> for the sole use of the intended recipient(s) and are confidential or >> proprietary to SYSTAP. Any unauthorized review, use, disclosure, >> dissemination or copying of this email or its contents or attachments is >> prohibited. If you have received this communication in error, please notify >> the sender by reply email and permanently delete all copies of the email >> and its contents and attachments. >> >> On Wed, Nov 4, 2015 at 7:48 AM, Alex Muir <ale...@gm...> wrote: >> >>> Well I downloaded the blazegraph git examples and extracted the unique >>> bigdata properties. I don't think any of them are related to specifying a >>> remote server. >>> >>> Perhaps there is another way to specify to upload to a remote server >>> with the bulk loader? 
>>> >>> com.bigdata.btree.BTree.branchingFactor >>> com.bigdata.btree.keys.KeyBuilder.collator >>> com.bigdata.btree.writeRetentionQueue.capacity >>> com.bigdata.journal.AbstractJournal.bufferMode >>> com.bigdata.journal.AbstractJournal.file >>> com.bigdata.journal.AbstractJournal.initialExtent >>> com.bigdata.journal.AbstractJournal.maximumExtent >>> com.bigdata.journal.AbstractJournal.writeCacheBufferCount >>> >>> com.bigdata.namespace.BSBM_284826.lex.BLOBS.com.bigdata.btree.BTree.branchingFactor >>> >>> com.bigdata.namespace.BSBM_284826.lex.ID2TERM.com.bigdata.btree.BTree.branchingFactor >>> >>> com.bigdata.namespace.BSBM_284826.lex.TERM2ID.com.bigdata.btree.BTree.branchingFactor >>> >>> com.bigdata.namespace.BSBM_284826.spo.OSP.com.bigdata.btree.BTree.branchingFactor >>> >>> com.bigdata.namespace.BSBM_284826.spo.POS.com.bigdata.btree.BTree.branchingFactor >>> >>> com.bigdata.namespace.BSBM_284826.spo.SPO.com.bigdata.btree.BTree.branchingFactor >>> >>> com.bigdata.namespace.BSBM_566496.lex.BLOBS.com.bigdata.btree.BTree.branchingFactor >>> >>> com.bigdata.namespace.BSBM_566496.lex.ID2TERM.com.bigdata.btree.BTree.branchingFactor >>> >>> com.bigdata.namespace.BSBM_566496.lex.TERM2ID.com.bigdata.btree.BTree.branchingFactor >>> >>> com.bigdata.namespace.BSBM_566496.spo.OSP.com.bigdata.btree.BTree.branchingFactor >>> >>> com.bigdata.namespace.BSBM_566496.spo.POS.com.bigdata.btree.BTree.branchingFactor >>> >>> com.bigdata.namespace.BSBM_566496.spo.SPO.com.bigdata.btree.BTree.branchingFactor >>> >>> com.bigdata.namespace.chem2bio2rdf.lex.BLOBS.com.bigdata.btree.BTree.branchingFactor >>> >>> com.bigdata.namespace.chem2bio2rdf.lex.ID2TERM.com.bigdata.btree.BTree.branchingFactor >>> >>> com.bigdata.namespace.chem2bio2rdf.lex.TERM2ID.com.bigdata.btree.BTree.branchingFactor >>> >>> com.bigdata.namespace.chem2bio2rdf.spo.CSPO.com.bigdata.btree.BTree.branchingFactor >>> >>> com.bigdata.namespace.chem2bio2rdf.spo.OCSP.com.bigdata.btree.BTree.branchingFactor >>> >>> com.bigdata.namespace.chem2bio2rdf.spo.PCSO.com.bigdata.btree.BTree.branchingFactor >>> >>> com.bigdata.namespace.chem2bio2rdf.spo.POCS.com.bigdata.btree.BTree.branchingFactor >>> >>> com.bigdata.namespace.chem2bio2rdf.spo.SOPC.com.bigdata.btree.BTree.branchingFactor >>> >>> com.bigdata.namespace.chem2bio2rdf.spo.SPOC.com.bigdata.btree.BTree.branchingFactor >>> >>> com.bigdata.namespace.dbpedia.lex.BLOBS.com.bigdata.btree.BTree.branchingFactor >>> >>> com.bigdata.namespace.dbpedia.lex.ID2TERM.com.bigdata.btree.BTree.branchingFactor >>> >>> com.bigdata.namespace.dbpedia.lex.TERM2ID.com.bigdata.btree.BTree.branchingFactor >>> >>> com.bigdata.namespace.dbpedia.spo.OSP.com.bigdata.btree.BTree.branchingFactor >>> >>> com.bigdata.namespace.dbpedia.spo.POS.com.bigdata.btree.BTree.branchingFactor >>> >>> com.bigdata.namespace.dbpedia.spo.SPO.com.bigdata.btree.BTree.branchingFactor >>> >>> com.bigdata.namespace.kb.lex.BLOBS.com.bigdata.btree.BTree.branchingFactor >>> com.bigdata.namespace.kb.lex.com.bigdata.btree.BTree.branchingFactor >>> >>> com.bigdata.namespace.kb.lex.ID2TERM.com.bigdata.btree.BTree.branchingFactor >>> >>> com.bigdata.namespace.kb.lex.TERM2ID.com.bigdata.btree.BTree.branchingFactor >>> com.bigdata.namespace.kb.spo.com.bigdata.btree.BTree.branchingFactor >>> com.bigdata.rdf.rio.RDFParserOptions.stopAtFirstError >>> com.bigdata.rdf.sail.BigdataSail.bufferCapacity >>> com.bigdata.rdf.sail.BigdataSail.truthMaintenance >>> com.bigdata.rdf.sail.bufferCapacity >>> com.bigdata.rdf.sail.newEvalStrategy >>> 
com.bigdata.rdf.sail.queryTimeExpander >>> com.bigdata.rdf.sail.truthMaintenance >>> com.bigdata.rdf.store.AbstractTripleStore.axiomsClass >>> com.bigdata.rdf.store.AbstractTripleStore.bloomFilter >>> com.bigdata.rdf.store.AbstractTripleStore.extensionFactoryClass >>> com.bigdata.rdf.store.AbstractTripleStore.justify >>> com.bigdata.rdf.store.AbstractTripleStore.quads >>> com.bigdata.rdf.store.AbstractTripleStore.statementIdentifiers >>> com.bigdata.rdf.store.AbstractTripleStore.textIndex >>> com.bigdata.rdf.store.AbstractTripleStore.vocabularyClass >>> com.bigdata.resource.OverflowManager.overflowEnabled >>> com.bigdata.service.AbstractTransactionService.minReleaseAge >>> com.bigdata.service.EmbeddedFederation.dataDir >>> com.bigdata.service.IBigdataClient.collectPlatformStatistics >>> >>> >>> >>> Regards >>> Alex >>> www.tilogeo.com >>> >>> On Wed, Nov 4, 2015 at 11:12 AM, Alex Muir <ale...@gm...> >>> wrote: >>> >>>> Hi, >>>> >>>> I'm interested to bulk upload onto a remote server >>>> >>>> https://wiki.blazegraph.com/wiki/index.php/Bulk_Data_Load >>>> >>>> I assume that I can specify a remote server in the properties file >>>> however I'm thus far unable to find more information on what goes in a >>>> property file from the website. >>>> >>>> Is there a page defining all the properties? >>>> >>>> Regards >>>> Alex >>>> www.tilogeo.com >>>> >>> >>> >>> >>> ------------------------------------------------------------------------------ >>> >>> _______________________________________________ >>> Bigdata-developers mailing list >>> Big...@li... >>> https://lists.sourceforge.net/lists/listinfo/bigdata-developers >>> >>> >> >> >> ------------------------------------------------------------------------------ >> >> _______________________________________________ >> Bigdata-developers mailing list >> Big...@li... >> https://lists.sourceforge.net/lists/listinfo/bigdata-developers >> >> > > > -- > _______________ > Brad Bebee > CEO, Managing Partner > SYSTAP, LLC > e: be...@sy... > m: 202.642.7961 > f: 571.367.5000 > w: www.blazegraph.com > > Blazegraph™ <http://www.blazegraph.com> is our ultra high-performance > graph database that supports both RDF/SPARQL and Tinkerpop/Blueprints > APIs. Mapgraph™ <http://www.systap.com/mapgraph> is our disruptive new > technology to use GPUs to accelerate data-parallel graph analytics. > > CONFIDENTIALITY NOTICE: This email and its contents and attachments are > for the sole use of the intended recipient(s) and are confidential or > proprietary to SYSTAP, LLC. Any unauthorized review, use, disclosure, > dissemination or copying of this email or its contents or attachments is > prohibited. If you have received this communication in error, please notify > the sender by reply email and permanently delete all copies of the email > and its contents and attachments. > |
From: Brad B. <be...@sy...> - 2015-11-04 13:19:55
|
Alex, Adding to Bryan's comments, if you have a Blazegraph instance running remotely and you want to insert data into it, you can use the REST API to post URIs of the files to load. Assuming you can make those URIs resolvable on the remote server, it will resolve the URIs and load them. https://wiki.blazegraph.com/wiki/index.php/REST_API#INSERT_RDF_.28POST_with_URLs.29 Thanks, --Brad On Wed, Nov 4, 2015 at 8:15 AM, Bryan Thompson <br...@sy...> wrote: > Alex, > > If you are referring to the DataLoader, it is an embedded utility class. > It is not designed to operate with a remote database instance. > > You can mimic many of the advantages of the DataLoader by increasing > BigdataSail.Options.BUFFER_CAPACITY to 100,000. > > You should also follow the guidelines on the wiki for performance > optimization if you are interested in bulk data load. See the section > entitled Optimizations and benchmarking > <https://wiki.blazegraph.com/wiki/index.php/NanoSparqlServer#p-Optimizations_and_benchmarking>. > E.g., https://wiki.blazegraph.com/wiki/index.php/IOOptimization. > > Some of the more important optimizations for write throughput are: > > - Write cache service native buffer pool size. > - Use of URI inlining techniques if you have URIs that have numeric or > UUID patterns embedded into them. > - Fast disk. > > We have a number of improvements in the development branch that improve > load speed, including code to overlap the parser with the index writers. > Those will be in the 2.0 release. > > Thanks, > Bryan > > > ---- > Bryan Thompson > Chief Scientist & Founder > SYSTAP, LLC > 4501 Tower Road > Greensboro, NC 27410 > br...@sy... > http://blazegraph.com > http://blog.blazegraph.com > > Blazegraph™ <http://www.blazegraph.com/> is our ultra high-performance > graph database that supports both RDF/SPARQL and Tinkerpop/Blueprints > APIs. Blazegraph is now available with GPU acceleration using our disruptive > technology to accelerate data-parallel graph analytics and graph query. > > CONFIDENTIALITY NOTICE: This email and its contents and attachments are > for the sole use of the intended recipient(s) and are confidential or > proprietary to SYSTAP. Any unauthorized review, use, disclosure, > dissemination or copying of this email or its contents or attachments is > prohibited. If you have received this communication in error, please notify > the sender by reply email and permanently delete all copies of the email > and its contents and attachments. > > On Wed, Nov 4, 2015 at 7:48 AM, Alex Muir <ale...@gm...> wrote: > >> Well I downloaded the blazegraph git examples and extracted the unique >> bigdata properties. I don't think any of them are related to specifying a >> remote server. >> >> Perhaps there is another way to specify to upload to a remote server with >> the bulk loader? 
>> >> com.bigdata.btree.BTree.branchingFactor >> com.bigdata.btree.keys.KeyBuilder.collator >> com.bigdata.btree.writeRetentionQueue.capacity >> com.bigdata.journal.AbstractJournal.bufferMode >> com.bigdata.journal.AbstractJournal.file >> com.bigdata.journal.AbstractJournal.initialExtent >> com.bigdata.journal.AbstractJournal.maximumExtent >> com.bigdata.journal.AbstractJournal.writeCacheBufferCount >> >> com.bigdata.namespace.BSBM_284826.lex.BLOBS.com.bigdata.btree.BTree.branchingFactor >> >> com.bigdata.namespace.BSBM_284826.lex.ID2TERM.com.bigdata.btree.BTree.branchingFactor >> >> com.bigdata.namespace.BSBM_284826.lex.TERM2ID.com.bigdata.btree.BTree.branchingFactor >> >> com.bigdata.namespace.BSBM_284826.spo.OSP.com.bigdata.btree.BTree.branchingFactor >> >> com.bigdata.namespace.BSBM_284826.spo.POS.com.bigdata.btree.BTree.branchingFactor >> >> com.bigdata.namespace.BSBM_284826.spo.SPO.com.bigdata.btree.BTree.branchingFactor >> >> com.bigdata.namespace.BSBM_566496.lex.BLOBS.com.bigdata.btree.BTree.branchingFactor >> >> com.bigdata.namespace.BSBM_566496.lex.ID2TERM.com.bigdata.btree.BTree.branchingFactor >> >> com.bigdata.namespace.BSBM_566496.lex.TERM2ID.com.bigdata.btree.BTree.branchingFactor >> >> com.bigdata.namespace.BSBM_566496.spo.OSP.com.bigdata.btree.BTree.branchingFactor >> >> com.bigdata.namespace.BSBM_566496.spo.POS.com.bigdata.btree.BTree.branchingFactor >> >> com.bigdata.namespace.BSBM_566496.spo.SPO.com.bigdata.btree.BTree.branchingFactor >> >> com.bigdata.namespace.chem2bio2rdf.lex.BLOBS.com.bigdata.btree.BTree.branchingFactor >> >> com.bigdata.namespace.chem2bio2rdf.lex.ID2TERM.com.bigdata.btree.BTree.branchingFactor >> >> com.bigdata.namespace.chem2bio2rdf.lex.TERM2ID.com.bigdata.btree.BTree.branchingFactor >> >> com.bigdata.namespace.chem2bio2rdf.spo.CSPO.com.bigdata.btree.BTree.branchingFactor >> >> com.bigdata.namespace.chem2bio2rdf.spo.OCSP.com.bigdata.btree.BTree.branchingFactor >> >> com.bigdata.namespace.chem2bio2rdf.spo.PCSO.com.bigdata.btree.BTree.branchingFactor >> >> com.bigdata.namespace.chem2bio2rdf.spo.POCS.com.bigdata.btree.BTree.branchingFactor >> >> com.bigdata.namespace.chem2bio2rdf.spo.SOPC.com.bigdata.btree.BTree.branchingFactor >> >> com.bigdata.namespace.chem2bio2rdf.spo.SPOC.com.bigdata.btree.BTree.branchingFactor >> >> com.bigdata.namespace.dbpedia.lex.BLOBS.com.bigdata.btree.BTree.branchingFactor >> >> com.bigdata.namespace.dbpedia.lex.ID2TERM.com.bigdata.btree.BTree.branchingFactor >> >> com.bigdata.namespace.dbpedia.lex.TERM2ID.com.bigdata.btree.BTree.branchingFactor >> >> com.bigdata.namespace.dbpedia.spo.OSP.com.bigdata.btree.BTree.branchingFactor >> >> com.bigdata.namespace.dbpedia.spo.POS.com.bigdata.btree.BTree.branchingFactor >> >> com.bigdata.namespace.dbpedia.spo.SPO.com.bigdata.btree.BTree.branchingFactor >> com.bigdata.namespace.kb.lex.BLOBS.com.bigdata.btree.BTree.branchingFactor >> com.bigdata.namespace.kb.lex.com.bigdata.btree.BTree.branchingFactor >> >> com.bigdata.namespace.kb.lex.ID2TERM.com.bigdata.btree.BTree.branchingFactor >> >> com.bigdata.namespace.kb.lex.TERM2ID.com.bigdata.btree.BTree.branchingFactor >> com.bigdata.namespace.kb.spo.com.bigdata.btree.BTree.branchingFactor >> com.bigdata.rdf.rio.RDFParserOptions.stopAtFirstError >> com.bigdata.rdf.sail.BigdataSail.bufferCapacity >> com.bigdata.rdf.sail.BigdataSail.truthMaintenance >> com.bigdata.rdf.sail.bufferCapacity >> com.bigdata.rdf.sail.newEvalStrategy >> com.bigdata.rdf.sail.queryTimeExpander >> com.bigdata.rdf.sail.truthMaintenance >> 
com.bigdata.rdf.store.AbstractTripleStore.axiomsClass >> com.bigdata.rdf.store.AbstractTripleStore.bloomFilter >> com.bigdata.rdf.store.AbstractTripleStore.extensionFactoryClass >> com.bigdata.rdf.store.AbstractTripleStore.justify >> com.bigdata.rdf.store.AbstractTripleStore.quads >> com.bigdata.rdf.store.AbstractTripleStore.statementIdentifiers >> com.bigdata.rdf.store.AbstractTripleStore.textIndex >> com.bigdata.rdf.store.AbstractTripleStore.vocabularyClass >> com.bigdata.resource.OverflowManager.overflowEnabled >> com.bigdata.service.AbstractTransactionService.minReleaseAge >> com.bigdata.service.EmbeddedFederation.dataDir >> com.bigdata.service.IBigdataClient.collectPlatformStatistics >> >> >> >> Regards >> Alex >> www.tilogeo.com >> >> On Wed, Nov 4, 2015 at 11:12 AM, Alex Muir <ale...@gm...> wrote: >> >>> Hi, >>> >>> I'm interested to bulk upload onto a remote server >>> >>> https://wiki.blazegraph.com/wiki/index.php/Bulk_Data_Load >>> >>> I assume that I can specify a remote server in the properties file >>> however I'm thus far unable to find more information on what goes in a >>> property file from the website. >>> >>> Is there a page defining all the properties? >>> >>> Regards >>> Alex >>> www.tilogeo.com >>> >> >> >> >> ------------------------------------------------------------------------------ >> >> _______________________________________________ >> Bigdata-developers mailing list >> Big...@li... >> https://lists.sourceforge.net/lists/listinfo/bigdata-developers >> >> > > > ------------------------------------------------------------------------------ > > _______________________________________________ > Bigdata-developers mailing list > Big...@li... > https://lists.sourceforge.net/lists/listinfo/bigdata-developers > > -- _______________ Brad Bebee CEO, Managing Partner SYSTAP, LLC e: be...@sy... m: 202.642.7961 f: 571.367.5000 w: www.blazegraph.com Blazegraph™ <http://www.blazegraph.com> is our ultra high-performance graph database that supports both RDF/SPARQL and Tinkerpop/Blueprints APIs. Mapgraph™ <http://www.systap.com/mapgraph> is our disruptive new technology to use GPUs to accelerate data-parallel graph analytics. CONFIDENTIALITY NOTICE: This email and its contents and attachments are for the sole use of the intended recipient(s) and are confidential or proprietary to SYSTAP, LLC. Any unauthorized review, use, disclosure, dissemination or copying of this email or its contents or attachments is prohibited. If you have received this communication in error, please notify the sender by reply email and permanently delete all copies of the email and its contents and attachments. |
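A minimal Java sketch of the "POST with URLs" call described above, for reference. The endpoint and file location are placeholders, and the query-parameter name "uri" is an assumption taken from the title of the wiki section linked above; both should be verified against the REST API page before use.

import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class PostUriLoad {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint and file location: substitute your own values.
        String endpoint = "http://localhost:9999/bigdata/sparql";
        String fileUri  = "http://example.org/data/dataset.rdf"; // must be resolvable by the server itself

        // "INSERT RDF (POST with URLs)": the file location is passed as a query
        // parameter; the parameter name "uri" is an assumption, see lead-in note.
        URL url = new URL(endpoint + "?uri=" + URLEncoder.encode(fileUri, "UTF-8"));

        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setRequestMethod("POST");
        System.out.println("HTTP " + con.getResponseCode()); // 200 expected on success
    }
}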
From: Bryan T. <br...@sy...> - 2015-11-04 13:16:07
|
Alex, If you are referring to the DataLoader, it is an embedded utility class. It is not designed to operate with a remote database instance. You can mimic many of the advantages of the DataLoader by increasing BigdataSail.Options.BUFFER_CAPACITY to 100,000. You should also follow the guidelines on the wiki for performance optimization if you are interested in bulk data load. See the section entitled Optimizations and benchmarking <https://wiki.blazegraph.com/wiki/index.php/NanoSparqlServer#p-Optimizations_and_benchmarking>. E.g., https://wiki.blazegraph.com/wiki/index.php/IOOptimization. Some of the more important optimizations for write throughput are: - Write cache service native buffer pool size. - Use of URI inlining techniques if you have URIs that have numeric or UUID patterns embedded into them. - Fast disk. We have a number of improvements in the development branch that improve load speed, including code to overlap the parser with the index writers. Those will be in the 2.0 release. Thanks, Bryan ---- Bryan Thompson Chief Scientist & Founder SYSTAP, LLC 4501 Tower Road Greensboro, NC 27410 br...@sy... http://blazegraph.com http://blog.blazegraph.com Blazegraph™ <http://www.blazegraph.com/> is our ultra high-performance graph database that supports both RDF/SPARQL and Tinkerpop/Blueprints APIs. Blazegraph is now available with GPU acceleration using our disruptive technology to accelerate data-parallel graph analytics and graph query. CONFIDENTIALITY NOTICE: This email and its contents and attachments are for the sole use of the intended recipient(s) and are confidential or proprietary to SYSTAP. Any unauthorized review, use, disclosure, dissemination or copying of this email or its contents or attachments is prohibited. If you have received this communication in error, please notify the sender by reply email and permanently delete all copies of the email and its contents and attachments. On Wed, Nov 4, 2015 at 7:48 AM, Alex Muir <ale...@gm...> wrote: > Well I downloaded the blazegraph git examples and extracted the unique > bigdata properties. I don't think any of them are related to specifying a > remote server. > > Perhaps there is another way to specify to upload to a remote server with > the bulk loader? 
> > com.bigdata.btree.BTree.branchingFactor > com.bigdata.btree.keys.KeyBuilder.collator > com.bigdata.btree.writeRetentionQueue.capacity > com.bigdata.journal.AbstractJournal.bufferMode > com.bigdata.journal.AbstractJournal.file > com.bigdata.journal.AbstractJournal.initialExtent > com.bigdata.journal.AbstractJournal.maximumExtent > com.bigdata.journal.AbstractJournal.writeCacheBufferCount > > com.bigdata.namespace.BSBM_284826.lex.BLOBS.com.bigdata.btree.BTree.branchingFactor > > com.bigdata.namespace.BSBM_284826.lex.ID2TERM.com.bigdata.btree.BTree.branchingFactor > > com.bigdata.namespace.BSBM_284826.lex.TERM2ID.com.bigdata.btree.BTree.branchingFactor > > com.bigdata.namespace.BSBM_284826.spo.OSP.com.bigdata.btree.BTree.branchingFactor > > com.bigdata.namespace.BSBM_284826.spo.POS.com.bigdata.btree.BTree.branchingFactor > > com.bigdata.namespace.BSBM_284826.spo.SPO.com.bigdata.btree.BTree.branchingFactor > > com.bigdata.namespace.BSBM_566496.lex.BLOBS.com.bigdata.btree.BTree.branchingFactor > > com.bigdata.namespace.BSBM_566496.lex.ID2TERM.com.bigdata.btree.BTree.branchingFactor > > com.bigdata.namespace.BSBM_566496.lex.TERM2ID.com.bigdata.btree.BTree.branchingFactor > > com.bigdata.namespace.BSBM_566496.spo.OSP.com.bigdata.btree.BTree.branchingFactor > > com.bigdata.namespace.BSBM_566496.spo.POS.com.bigdata.btree.BTree.branchingFactor > > com.bigdata.namespace.BSBM_566496.spo.SPO.com.bigdata.btree.BTree.branchingFactor > > com.bigdata.namespace.chem2bio2rdf.lex.BLOBS.com.bigdata.btree.BTree.branchingFactor > > com.bigdata.namespace.chem2bio2rdf.lex.ID2TERM.com.bigdata.btree.BTree.branchingFactor > > com.bigdata.namespace.chem2bio2rdf.lex.TERM2ID.com.bigdata.btree.BTree.branchingFactor > > com.bigdata.namespace.chem2bio2rdf.spo.CSPO.com.bigdata.btree.BTree.branchingFactor > > com.bigdata.namespace.chem2bio2rdf.spo.OCSP.com.bigdata.btree.BTree.branchingFactor > > com.bigdata.namespace.chem2bio2rdf.spo.PCSO.com.bigdata.btree.BTree.branchingFactor > > com.bigdata.namespace.chem2bio2rdf.spo.POCS.com.bigdata.btree.BTree.branchingFactor > > com.bigdata.namespace.chem2bio2rdf.spo.SOPC.com.bigdata.btree.BTree.branchingFactor > > com.bigdata.namespace.chem2bio2rdf.spo.SPOC.com.bigdata.btree.BTree.branchingFactor > > com.bigdata.namespace.dbpedia.lex.BLOBS.com.bigdata.btree.BTree.branchingFactor > > com.bigdata.namespace.dbpedia.lex.ID2TERM.com.bigdata.btree.BTree.branchingFactor > > com.bigdata.namespace.dbpedia.lex.TERM2ID.com.bigdata.btree.BTree.branchingFactor > > com.bigdata.namespace.dbpedia.spo.OSP.com.bigdata.btree.BTree.branchingFactor > > com.bigdata.namespace.dbpedia.spo.POS.com.bigdata.btree.BTree.branchingFactor > > com.bigdata.namespace.dbpedia.spo.SPO.com.bigdata.btree.BTree.branchingFactor > com.bigdata.namespace.kb.lex.BLOBS.com.bigdata.btree.BTree.branchingFactor > com.bigdata.namespace.kb.lex.com.bigdata.btree.BTree.branchingFactor > > com.bigdata.namespace.kb.lex.ID2TERM.com.bigdata.btree.BTree.branchingFactor > > com.bigdata.namespace.kb.lex.TERM2ID.com.bigdata.btree.BTree.branchingFactor > com.bigdata.namespace.kb.spo.com.bigdata.btree.BTree.branchingFactor > com.bigdata.rdf.rio.RDFParserOptions.stopAtFirstError > com.bigdata.rdf.sail.BigdataSail.bufferCapacity > com.bigdata.rdf.sail.BigdataSail.truthMaintenance > com.bigdata.rdf.sail.bufferCapacity > com.bigdata.rdf.sail.newEvalStrategy > com.bigdata.rdf.sail.queryTimeExpander > com.bigdata.rdf.sail.truthMaintenance > com.bigdata.rdf.store.AbstractTripleStore.axiomsClass > 
com.bigdata.rdf.store.AbstractTripleStore.bloomFilter > com.bigdata.rdf.store.AbstractTripleStore.extensionFactoryClass > com.bigdata.rdf.store.AbstractTripleStore.justify > com.bigdata.rdf.store.AbstractTripleStore.quads > com.bigdata.rdf.store.AbstractTripleStore.statementIdentifiers > com.bigdata.rdf.store.AbstractTripleStore.textIndex > com.bigdata.rdf.store.AbstractTripleStore.vocabularyClass > com.bigdata.resource.OverflowManager.overflowEnabled > com.bigdata.service.AbstractTransactionService.minReleaseAge > com.bigdata.service.EmbeddedFederation.dataDir > com.bigdata.service.IBigdataClient.collectPlatformStatistics > > > > Regards > Alex > www.tilogeo.com > > On Wed, Nov 4, 2015 at 11:12 AM, Alex Muir <ale...@gm...> wrote: > >> Hi, >> >> I'm interested to bulk upload onto a remote server >> >> https://wiki.blazegraph.com/wiki/index.php/Bulk_Data_Load >> >> I assume that I can specify a remote server in the properties file >> however I'm thus far unable to find more information on what goes in a >> property file from the website. >> >> Is there a page defining all the properties? >> >> Regards >> Alex >> www.tilogeo.com >> > > > > ------------------------------------------------------------------------------ > > _______________________________________________ > Bigdata-developers mailing list > Big...@li... > https://lists.sourceforge.net/lists/listinfo/bigdata-developers > > |
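For the embedded route described above, a minimal sketch of raising BigdataSail.Options.BUFFER_CAPACITY when opening the Sail. The journal path is a placeholder, the surrounding load logic is elided, and the embedded Sail writes to a local store rather than a remote server.

import java.util.Properties;
import com.bigdata.rdf.sail.BigdataSail;

public class EmbeddedLoadSetup {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Local journal file (placeholder path); this key appears in the property
        // list quoted in this thread.
        props.setProperty("com.bigdata.journal.AbstractJournal.file", "/data/blazegraph.jnl");
        // Larger statement buffer, per the advice above, to approximate the
        // batching behaviour of the DataLoader.
        props.setProperty(BigdataSail.Options.BUFFER_CAPACITY, "100000");

        BigdataSail sail = new BigdataSail(props);
        sail.initialize();
        try {
            // ... open a connection and load files here ...
        } finally {
            sail.shutDown();
        }
    }
}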
From: Alex M. <ale...@gm...> - 2015-11-04 12:48:21
|
Well I downloaded the blazegraph git examples and extracted the unique bigdata properties. I don't think any of them are related to specifying a remote server. Perhaps there is another way to specify to upload to a remote server with the bulk loader? com.bigdata.btree.BTree.branchingFactor com.bigdata.btree.keys.KeyBuilder.collator com.bigdata.btree.writeRetentionQueue.capacity com.bigdata.journal.AbstractJournal.bufferMode com.bigdata.journal.AbstractJournal.file com.bigdata.journal.AbstractJournal.initialExtent com.bigdata.journal.AbstractJournal.maximumExtent com.bigdata.journal.AbstractJournal.writeCacheBufferCount com.bigdata.namespace.BSBM_284826.lex.BLOBS.com.bigdata.btree.BTree.branchingFactor com.bigdata.namespace.BSBM_284826.lex.ID2TERM.com.bigdata.btree.BTree.branchingFactor com.bigdata.namespace.BSBM_284826.lex.TERM2ID.com.bigdata.btree.BTree.branchingFactor com.bigdata.namespace.BSBM_284826.spo.OSP.com.bigdata.btree.BTree.branchingFactor com.bigdata.namespace.BSBM_284826.spo.POS.com.bigdata.btree.BTree.branchingFactor com.bigdata.namespace.BSBM_284826.spo.SPO.com.bigdata.btree.BTree.branchingFactor com.bigdata.namespace.BSBM_566496.lex.BLOBS.com.bigdata.btree.BTree.branchingFactor com.bigdata.namespace.BSBM_566496.lex.ID2TERM.com.bigdata.btree.BTree.branchingFactor com.bigdata.namespace.BSBM_566496.lex.TERM2ID.com.bigdata.btree.BTree.branchingFactor com.bigdata.namespace.BSBM_566496.spo.OSP.com.bigdata.btree.BTree.branchingFactor com.bigdata.namespace.BSBM_566496.spo.POS.com.bigdata.btree.BTree.branchingFactor com.bigdata.namespace.BSBM_566496.spo.SPO.com.bigdata.btree.BTree.branchingFactor com.bigdata.namespace.chem2bio2rdf.lex.BLOBS.com.bigdata.btree.BTree.branchingFactor com.bigdata.namespace.chem2bio2rdf.lex.ID2TERM.com.bigdata.btree.BTree.branchingFactor com.bigdata.namespace.chem2bio2rdf.lex.TERM2ID.com.bigdata.btree.BTree.branchingFactor com.bigdata.namespace.chem2bio2rdf.spo.CSPO.com.bigdata.btree.BTree.branchingFactor com.bigdata.namespace.chem2bio2rdf.spo.OCSP.com.bigdata.btree.BTree.branchingFactor com.bigdata.namespace.chem2bio2rdf.spo.PCSO.com.bigdata.btree.BTree.branchingFactor com.bigdata.namespace.chem2bio2rdf.spo.POCS.com.bigdata.btree.BTree.branchingFactor com.bigdata.namespace.chem2bio2rdf.spo.SOPC.com.bigdata.btree.BTree.branchingFactor com.bigdata.namespace.chem2bio2rdf.spo.SPOC.com.bigdata.btree.BTree.branchingFactor com.bigdata.namespace.dbpedia.lex.BLOBS.com.bigdata.btree.BTree.branchingFactor com.bigdata.namespace.dbpedia.lex.ID2TERM.com.bigdata.btree.BTree.branchingFactor com.bigdata.namespace.dbpedia.lex.TERM2ID.com.bigdata.btree.BTree.branchingFactor com.bigdata.namespace.dbpedia.spo.OSP.com.bigdata.btree.BTree.branchingFactor com.bigdata.namespace.dbpedia.spo.POS.com.bigdata.btree.BTree.branchingFactor com.bigdata.namespace.dbpedia.spo.SPO.com.bigdata.btree.BTree.branchingFactor com.bigdata.namespace.kb.lex.BLOBS.com.bigdata.btree.BTree.branchingFactor com.bigdata.namespace.kb.lex.com.bigdata.btree.BTree.branchingFactor com.bigdata.namespace.kb.lex.ID2TERM.com.bigdata.btree.BTree.branchingFactor com.bigdata.namespace.kb.lex.TERM2ID.com.bigdata.btree.BTree.branchingFactor com.bigdata.namespace.kb.spo.com.bigdata.btree.BTree.branchingFactor com.bigdata.rdf.rio.RDFParserOptions.stopAtFirstError com.bigdata.rdf.sail.BigdataSail.bufferCapacity com.bigdata.rdf.sail.BigdataSail.truthMaintenance com.bigdata.rdf.sail.bufferCapacity com.bigdata.rdf.sail.newEvalStrategy com.bigdata.rdf.sail.queryTimeExpander com.bigdata.rdf.sail.truthMaintenance 
com.bigdata.rdf.store.AbstractTripleStore.axiomsClass com.bigdata.rdf.store.AbstractTripleStore.bloomFilter com.bigdata.rdf.store.AbstractTripleStore.extensionFactoryClass com.bigdata.rdf.store.AbstractTripleStore.justify com.bigdata.rdf.store.AbstractTripleStore.quads com.bigdata.rdf.store.AbstractTripleStore.statementIdentifiers com.bigdata.rdf.store.AbstractTripleStore.textIndex com.bigdata.rdf.store.AbstractTripleStore.vocabularyClass com.bigdata.resource.OverflowManager.overflowEnabled com.bigdata.service.AbstractTransactionService.minReleaseAge com.bigdata.service.EmbeddedFederation.dataDir com.bigdata.service.IBigdataClient.collectPlatformStatistics Regards Alex www.tilogeo.com On Wed, Nov 4, 2015 at 11:12 AM, Alex Muir <ale...@gm...> wrote: > Hi, > > I'm interested to bulk upload onto a remote server > > https://wiki.blazegraph.com/wiki/index.php/Bulk_Data_Load > > I assume that I can specify a remote server in the properties file however > I'm thus far unable to find more information on what goes in a property > file from the website. > > Is there a page defining all the properties? > > Regards > Alex > www.tilogeo.com > |
From: Alex M. <ale...@gm...> - 2015-11-04 11:12:57
|
Hi, I'm interested to bulk upload onto a remote server https://wiki.blazegraph.com/wiki/index.php/Bulk_Data_Load I assume that I can specify a remote server in the properties file however I'm thus far unable to find more information on what goes in a property file from the website. Is there a page defining all the properties? Regards Alex www.tilogeo.com |
From: Joakim S. <joa...@bl...> - 2015-11-04 03:17:32
|
Done! Is this the correct way to export a named graph?

    BigdataSail bd = dm.getBigdataSail();
    ExportKB export = null;
    AbstractTripleStore tripleStore = (AbstractTripleStore) bd.getQueryEngine()
            .getIndexManager().getResourceLocator().locate(namespace, ITx.UNISOLATED);

For some reason tripleStore is null. However, I'm not sure how to set the last argument in locate(, ITx.UNISOLATED).

> On Nov 3, 2015, at 3:16 PM, Bryan Thompson <br...@sy...> wrote:
>
> Ok. Please file a bug report for that. This is a case of overly aggressive checking. It should be allowed and just silently strip off the named graph. Please provide the full stack trace in the ticket.
>
> Thanks,
> Bryan
|
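A note on the snippet above: the second argument to locate() is a timestamp, and ITx.UNISOLATED selects the live (mutable) view of the namespace, while a commit time would select a historical read-only view; a null return usually means nothing is registered under that namespace string. Below is a minimal, offline sketch along those lines. The default namespace "kb", the properties path, the output location, the availability of RDFFormat.NQUADS in the bundled openrdf release, and the final export call are all assumptions to verify against the ExportKB javadocs; opening the journal directly is only safe while the server is stopped.

    import java.io.File;
    import java.io.FileInputStream;
    import java.util.Properties;

    import org.openrdf.rio.RDFFormat;

    import com.bigdata.journal.ITx;
    import com.bigdata.journal.Journal;
    import com.bigdata.rdf.sail.ExportKB;
    import com.bigdata.rdf.store.AbstractTripleStore;

    public class ExportSketch {
        public static void main(String[] args) throws Exception {
            // Load the same properties file the server uses (path is hypothetical).
            final Properties props = new Properties();
            try (FileInputStream in = new FileInputStream("blazegraph.properties")) {
                props.load(in);
            }
            // Open the journal directly; only do this while the server is stopped.
            final Journal journal = new Journal(props);
            try {
                final String namespace = "kb"; // assumption: the default namespace
                final AbstractTripleStore kb = (AbstractTripleStore) journal
                        .getResourceLocator().locate(namespace, ITx.UNISOLATED);
                if (kb == null) {
                    // locate() returns null when no KB exists at that namespace.
                    throw new IllegalStateException("No KB at namespace: " + namespace);
                }
                // A quads-capable format preserves the named-graph (context) position.
                final ExportKB export = new ExportKB(kb, new File("/tmp/export"),
                        RDFFormat.NQUADS, false /* includeInferred */);
                export.export(); // assumption: check the ExportKB javadocs for the actual entry point
            } finally {
                journal.close();
            }
        }
    }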
From: Bryan T. <br...@sy...> - 2015-11-03 23:16:38
|
Ok. Please file a bug report for that. This is a case of overly aggressive checking. It should be allowed and just silently strip off the named graph. Please provide the full stack trace in the ticket.

Thanks,
Bryan

On Tuesday, November 3, 2015, Joakim Soderberg <joa...@bl...> wrote:
> Yes I tried that. It gives the following error:
>
> Nov 03, 2015 3:07:17 PM org.apache.catalina.core.StandardWrapperValve invoke
> SEVERE: Servlet.service() for servlet [com.blippar.servlet.MinervaServlet] in context with path [/blipparwise] threw exception
> java.lang.IllegalArgumentException: RDFFormat does not support quads: N-Triples (mimeTypes=text/plain; ext=nt)
>         at com.bigdata.rdf.sail.ExportKB.<init>(ExportKB.java:142)
|
From: Joakim S. <joa...@bl...> - 2015-11-03 23:08:44
|
Yes I tried that. It gives the following error:

    Nov 03, 2015 3:07:17 PM org.apache.catalina.core.StandardWrapperValve invoke
    SEVERE: Servlet.service() for servlet [com.blippar.servlet.MinervaServlet] in context with path [/blipparwise] threw exception
    java.lang.IllegalArgumentException: RDFFormat does not support quads: N-Triples (mimeTypes=text/plain; ext=nt)
            at com.bigdata.rdf.sail.ExportKB.<init>(ExportKB.java:142)

> On Nov 3, 2015, at 9:20 AM, Bryan Thompson <br...@sy...> wrote:
>
> Have you tried using a triples only format then? It will probably work and strip off the named graph.
>
> Bryan
|
From: Bryan T. <br...@sy...> - 2015-11-03 22:08:34
|
Also, if you do not want to preserve the named graphs, a CONSTRUCT query will let you return the triples to the client. There are guidelines on the wiki for using CONSTRUCT against very large stores; see the query hints page.

Thanks,
Bryan

On Tue, Nov 3, 2015 at 12:20 PM, Bryan Thompson <br...@sy...> wrote:
> Have you tried using a triples only format then? It will probably work and strip off the named graph.
>
> Bryan
|
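To make the CONSTRUCT route above concrete, the sketch below POSTs a query to the NanoSparqlServer SPARQL endpoint and asks for N-Triples back (text/plain is the N-Triples MIME type reported in the error earlier in the thread), which drops the graph position on the way out. The endpoint URL, graph IRI, and output file name are placeholder assumptions; for very large stores, apply the query-hints guidance mentioned above.

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.net.URLEncoder;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;

    public class ConstructExportSketch {
        public static void main(String[] args) throws Exception {
            final String endpoint = "http://localhost:9999/bigdata/sparql"; // assumption
            final String graph = "http://example.org/graph1";               // assumption
            // CONSTRUCT flattens the quads of the chosen graph into plain triples.
            final String query =
                    "CONSTRUCT { ?s ?p ?o } WHERE { GRAPH <" + graph + "> { ?s ?p ?o } }";

            final byte[] body = ("query=" + URLEncoder.encode(query, "UTF-8")).getBytes("UTF-8");
            final HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
            conn.setRequestProperty("Accept", "text/plain"); // N-Triples
            try (OutputStream out = conn.getOutputStream()) {
                out.write(body);
            }
            // Stream the N-Triples response to a local file.
            try (InputStream in = conn.getInputStream()) {
                Files.copy(in, Paths.get("graph1.nt"), StandardCopyOption.REPLACE_EXISTING);
            }
        }
    }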
From: Bryan T. <br...@sy...> - 2015-11-03 17:21:01
|
Have you tried using a triples-only format then? It will probably work and strip off the named graph.

Bryan

On Tue, Nov 3, 2015 at 12:18 PM, Joakim Soderberg <joa...@bl...> wrote:
> Right, but I want to have quads in my KB and export them as triples.
|
From: Joakim S. <joa...@bl...> - 2015-11-03 17:18:31
|
Right, but I want to have quads in my KB and export them as triples.

> On Nov 2, 2015, at 3:52 PM, Bryan Thompson <br...@sy...> wrote:
>
> The ExportKB class works for both triples and quads.
>
> Just specify an RDFFormat that supports quads.
>
> Bryan
|
From: Bryan T. <br...@sy...> - 2015-11-02 23:52:56
|
The ExportKB class works for both triples and quads.

Just specify an RDFFormat that supports quads.

Bryan

On Mon, Nov 2, 2015 at 5:45 PM, Joakim Soderberg <joa...@bl...> wrote:
> Hi,
> My KB uses named graphs, and therefore blazegraph.properties has the setting:
>
>     com.bigdata.rdf.store.AbstractTripleStore.quads=true
>
> which entails that I can't export the KB as triples:
>
>     ExportKB export = new ExportKB(bd.getDatabase(), outFile, RDFFormat.NTRIPLES, false);
>
> since they are quads.
>
> Is it possible to get around this, i.e. have a quads KB export as triples?
>
> /J
|
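In other words, keeping the constructor from the code quoted in the message above but swapping in a quads-capable format, for example (assuming the bundled openrdf RDFFormat exposes NQUADS in this release, and reusing the bd and outFile variables from that quoted code):

    // Same call as in the quoted message, but with a format that can carry the graph position.
    ExportKB export = new ExportKB(bd.getDatabase(), outFile, RDFFormat.NQUADS, false);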
From: Joakim S. <joa...@bl...> - 2015-11-02 22:45:16
|
Hi,
My KB uses named graphs, and therefore blazegraph.properties has the setting:

    com.bigdata.rdf.store.AbstractTripleStore.quads=true

which entails that I can't export the KB as triples:

    ExportKB export = new ExportKB(bd.getDatabase(), outFile, RDFFormat.NTRIPLES, false);

since they are quads.

Is it possible to get around this, i.e. have a quads KB export as triples?

/J

> On Oct 26, 2015, at 4:57 AM, Bryan Thompson <br...@sy...> wrote:
>
> It depends. One (offline) procedure is outlined here:
>
> - https://wiki.blazegraph.com/wiki/index.php/DataMigration
>
> Different procedures might be appropriate for export of a named graph from a quads namespace, or for export while the database is online.
>
> Thanks,
> Bryan
>
> On Fri, Oct 23, 2015 at 2:06 PM, Joakim Soderberg <joa...@bl...> wrote:
> Hi,
> What is the recommended (fastest) way to export data from a KB? I am currently using Blazegraph 1.5.2.
>
> thanks
> joakim
|
From: Jeremy J C. <jj...@sy...> - 2015-10-29 21:32:32
|
Yes, that makes sense. We moved forward by stopping the system more carefully and taking another copy.

Jeremy

> On Oct 29, 2015, at 11:46 AM, Bryan Thompson <br...@sy...> wrote:
>
> Copying the journal while it is under update is not supported and could yield this outcome.
>
> You could try returning to the previous commit point. See com.bigdata.journal.Options for an option to open the previous root block. However, a copy taken under update could easily be bad in a fashion that is not recoverable (in the copy). The original journal should be fine, of course.
>
> Thanks,
> Bryan
>
> On Thu, Oct 29, 2015 at 2:18 PM, Jeremy J Carroll <jj...@sy...> wrote:
>> Does this error message mean the journal file is corrupt?
>>
>> One theory we have is that the journal file may have been copied while updates were in progress (which I do not believe is supported).
>>
>> thanks
>> Jeremy
>>
>> WARN : 1 main org.eclipse.jetty.util.component.AbstractLifeCycle.setFailed(AbstractLifeCycle.java:212): FAILED o.e.j.w.WebAppContext@887af79{/bigdata,file:/usr/share/blazegraph/var/jetty/,STARTING}{/usr/share/blazegraph/var/jetty}: java.lang.Error: Two allocators at same address
>> java.lang.Error: Two allocators at same address
>>     at com.bigdata.rwstore.FixedAllocator.compareTo(FixedAllocator.java:102)
>>     at java.util.ComparableTimSort.mergeLo(ComparableTimSort.java:684)
>>     at java.util.ComparableTimSort.mergeAt(ComparableTimSort.java:481)
>>     at java.util.ComparableTimSort.mergeCollapse(ComparableTimSort.java:406)
>>     at java.util.ComparableTimSort.sort(ComparableTimSort.java:213)
>>     at java.util.Arrays.sort(Arrays.java:1312)
>>     at java.util.Arrays.sort(Arrays.java:1506)
>>     at java.util.ArrayList.sort(ArrayList.java:1454)
>>     at java.util.Collections.sort(Collections.java:141)
>>     at com.bigdata.rwstore.RWStore.readAllocationBlocks(RWStore.java:1683)
>>     at com.bigdata.rwstore.RWStore.initfromRootBlock(RWStore.java:1558)
>>     at com.bigdata.rwstore.RWStore.<init>(RWStore.java:970)
>>     at com.bigdata.journal.RWStrategy.<init>(RWStrategy.java:137)
>>     at com.bigdata.journal.AbstractJournal.<init>(AbstractJournal.java:1265)
>>     at com.bigdata.journal.Journal.<init>(Journal.java:275)
>>     at com.bigdata.journal.Journal.<init>(Journal.java:268)
>>     at com.bigdata.rdf.sail.webapp.BigdataRDFServletContextListener.openIndexManager(BigdataRDFServletContextListener.java:798)
>>     at com.bigdata.rdf.sail.webapp.BigdataRDFServletContextListener.contextInitialized(BigdataRDFServletContextListener.java:276)
>>     at org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:798)
>>     at org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:444)
>>     at org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:789)
>>     at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:294)
>>     at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1341)
>>     at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1334)
>>     at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:741)
>>     at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:497)
>>     at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>>     at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)
>>     at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)
>>     at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
>>     at org.eclipse.jetty.server.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:163)
>>     at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>>     at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)
>>     at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)
>>     at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
>>     at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>>     at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)
>>     at org.eclipse.jetty.server.Server.start(Server.java:387)
>>     at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)
>>     at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
>>     at org.eclipse.jetty.server.Server.doStart(Server.java:354)
>>     at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>>     at com.bigdata.rdf.sail.webapp.NanoSparqlServer.awaitServerStart(NanoSparqlServer.java:485)
>>     at com.bigdata.rdf.sail.webapp.NanoSparqlServer.main(NanoSparqlServer.java:449)
|
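On the suggestion in the quoted reply to return to the previous commit point: com.bigdata.journal.Options carries the journal configuration property names, and the sketch below shows roughly what opening a copy of the file against the prior root block might look like. The ALTERNATE_ROOT_BLOCK constant and its exact semantics are assumptions here; verify them against the Options javadocs for your release, and only ever try this on a copy of the journal.

    import java.util.Properties;

    import com.bigdata.journal.Journal;
    import com.bigdata.journal.Options;

    public class OpenPriorCommitPointSketch {
        public static void main(String[] args) {
            final Properties props = new Properties();
            props.setProperty(Options.FILE, "/path/to/copy-of-journal.jnl"); // hypothetical path
            // Assumption: the Options constant for opening against the prior root block.
            props.setProperty(Options.ALTERNATE_ROOT_BLOCK, "true");
            final Journal journal = new Journal(props);
            try {
                // If the journal opens cleanly, data as of the prior commit point
                // can be read (and exported) from this view.
                System.out.println("Last commit time: " + journal.getLastCommitTime());
            } finally {
                journal.close();
            }
        }
    }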