This list is closed, nobody may subscribe to it.
Archive (messages per month):

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2010 | | 19 | 8 | 25 | 16 | 77 | 131 | 76 | 30 | 7 | 3 | |
| 2011 | | | | | 2 | 2 | 16 | 3 | 1 | | 7 | 7 |
| 2012 | 10 | 1 | 8 | 6 | 1 | 3 | 1 | | 1 | | 8 | 2 |
| 2013 | 5 | 12 | 2 | 1 | 1 | 1 | 22 | 50 | 31 | 64 | 83 | 28 |
| 2014 | 31 | 18 | 27 | 39 | 45 | 15 | 6 | 27 | 6 | 67 | 70 | 1 |
| 2015 | 3 | 18 | 22 | 121 | 42 | 17 | 8 | 11 | 26 | 15 | 66 | 38 |
| 2016 | 14 | 59 | 28 | 44 | 21 | 12 | 9 | 11 | 4 | 2 | 1 | |
| 2017 | 20 | 7 | 4 | 18 | 7 | 3 | 13 | 2 | 4 | 9 | 2 | 5 |
| 2018 | | | | 2 | | | | | | | | |
| 2019 | | | 1 | | | | | | | | | |
|
From: Bryan T. <br...@sy...> - 2011-08-03 16:52:19
|
This is a bigdata(R) release. This release is capable of loading 1B triples in under one hour on a 15-node cluster. JDK 1.6 is required.

Bigdata(R) is a horizontally scaled open source architecture for indexed data with an emphasis on semantic web data architectures. Bigdata operates in both a single machine mode (Journal) and a cluster mode (Federation). The Journal provides fast scalable ACID indexed storage for very large data sets. The Federation provides fast scalable shard-wise parallel indexed storage using dynamic sharding and shard-wise ACID updates. Both platforms support fully concurrent readers with snapshot isolation.

Distributed processing offers greater throughput but does not reduce query or update latency. Choose the Journal when the anticipated scale and throughput requirements permit. Choose the Federation when the administrative and machine overhead associated with operating a cluster is an acceptable tradeoff for essentially unlimited data scaling and throughput.

See [1,2,8] for instructions on installing bigdata(R), [4] for the javadoc, and [3,5,6] for news, questions, and the latest developments. For more information about SYSTAP, LLC and bigdata, see [7].

Starting with this release, we offer a WAR artifact [8] for easy installation of the Journal mode database. For custom development and cluster installations we recommend checking out the code from SVN using the tag for this release. The code will build automatically under Eclipse. You can also build the code using the ant script; the cluster installer requires the use of the ant script. You can check out this release from the following URL:

https://bigdata.svn.sourceforge.net/svnroot/bigdata/tags/BIGDATA_RELEASE_1_0_1

Bug fixes:

- https://sourceforge.net/apps/trac/bigdata/ticket/107 (Unicode clean schema names in the sparse row store)
- https://sourceforge.net/apps/trac/bigdata/ticket/124 (TermIdEncoder should use more bits for scale-out)
- https://sourceforge.net/apps/trac/bigdata/ticket/225 (OSX requires specialized performance counter collection classes)
- https://sourceforge.net/apps/trac/bigdata/ticket/348 (BigdataValueFactory.asValue() must return new instance when DummyIV is used)
- https://sourceforge.net/apps/trac/bigdata/ticket/349 (TermIdEncoder limits Journal to 2B distinct RDF Values per triple/quad store instance)
- https://sourceforge.net/apps/trac/bigdata/ticket/351 (SPO not Serializable exception in SIDS mode (scale-out))
- https://sourceforge.net/apps/trac/bigdata/ticket/352 (ClassCastException when querying with binding-values that are not known to the database)
- https://sourceforge.net/apps/trac/bigdata/ticket/353 (UnsupportedOperatorException for some SPARQL queries)
- https://sourceforge.net/apps/trac/bigdata/ticket/355 (Query failure when comparing with non-materialized value)
- https://sourceforge.net/apps/trac/bigdata/ticket/357 (RWStore reports "FixedAllocator returning null address, with freeBits")
- https://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists)
- https://sourceforge.net/apps/trac/bigdata/ticket/362 (log4j - slf4j bridge)

Note: Some of these bug fixes require data migration. For details, see https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=DataMigration

New features:

- Single machine data storage to ~50B triples/quads (RWStore);
- Simple embedded and/or webapp deployment (NanoSparqlServer);
- 100% native SPARQL 1.0 evaluation with lots of query optimizations.

Feature summary:

- Triples, quads, or triples with provenance (SIDs);
- Fast RDFS+ inference and truth maintenance;
- Clustered data storage is essentially unlimited;
- Fast statement level provenance mode (SIDs).

The road map [3] for the next releases includes:

- High-volume analytic query and SPARQL 1.1 query, including aggregations;
- Simplified deployment, configuration, and administration for clusters; and
- High availability for the journal and the cluster.

For more information, please see the following links:

[1] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Main_Page
[2] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=GettingStarted
[3] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Roadmap
[4] http://www.bigdata.com/bigdata/docs/api/
[5] http://sourceforge.net/projects/bigdata/
[6] http://www.bigdata.com/blog
[7] http://www.systap.com/bigdata.htm
[8] https://sourceforge.net/projects/bigdata/files/bigdata/

About bigdata: Bigdata(R) is a horizontally-scaled, general purpose storage and computing fabric for ordered data (B+Trees), designed to operate on either a single server or a cluster of commodity hardware. Bigdata(R) uses dynamically partitioned key-range shards in order to remove any realistic scaling limits: in principle, bigdata(R) may be deployed on 10s, 100s, or even thousands of machines, and new capacity may be added incrementally without requiring a full reload of all data. The bigdata(R) RDF database supports RDFS and OWL Lite reasoning, high-level query (SPARQL), and datum-level provenance.
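As a companion to the Journal description above, here is a minimal sketch of opening a single-machine (Journal mode) store through the Sesame API. It is an illustration only, not taken from the release notes: the BigdataSail, BigdataSailRepository, and Options.FILE names follow the patterns documented in the javadoc [4], but the exact signatures and option names should be verified against this release.

```java
import java.util.Properties;
import org.openrdf.repository.Repository;
import org.openrdf.repository.RepositoryConnection;
import com.bigdata.rdf.sail.BigdataSail;
import com.bigdata.rdf.sail.BigdataSailRepository;

public class JournalExample {
    public static void main(String[] args) throws Exception {
        // Back the Journal with a single file on local disk.
        // Options.FILE is an assumption modeled on the journal options; see [4].
        Properties props = new Properties();
        props.setProperty(BigdataSail.Options.FILE, "bigdata.jnl");

        // The Sail wraps the Journal; the Repository exposes the Sesame API.
        BigdataSail sail = new BigdataSail(props);
        Repository repo = new BigdataSailRepository(sail);
        repo.initialize();
        try {
            RepositoryConnection cxn = repo.getConnection();
            try {
                // Load data and evaluate SPARQL through the standard Sesame API here.
            } finally {
                cxn.close();
            }
        } finally {
            repo.shutDown();
        }
    }
}
```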
|
From: Bryan T. <br...@sy...> - 2011-07-26 15:31:51
|
All,

We are ready to issue a bug fix release (1.0.1). The draft release notes are inline below. Pending feedback, I will cut the release over the next day or two.

Work has been proceeding in parallel towards a release supporting scalable SPARQL 1.1 aggregation. This will be our 1.1.0 release and should be ready in a few months.

Thanks,
Bryan

This is a bigdata(R) release. This release is capable of loading 1B triples in under one hour on a 15-node cluster. JDK 1.6 is required.

Bigdata(R) is a horizontally scaled open source architecture for indexed data with an emphasis on semantic web data architectures. Bigdata operates in both a single machine mode (Journal) and a cluster mode (Federation). The Journal provides fast scalable ACID indexed storage for very large data sets. The Federation provides fast scalable shard-wise parallel indexed storage using dynamic sharding and shard-wise ACID updates. Both platforms support fully concurrent readers with snapshot isolation.

Distributed processing offers greater throughput but does not reduce query or update latency. Choose the Journal when the anticipated scale and throughput requirements permit. Choose the Federation when the administrative and machine overhead associated with operating a cluster is an acceptable tradeoff for essentially unlimited data scaling and throughput.

See [1,2,8] for instructions on installing bigdata(R), [4] for the javadoc, and [3,5,6] for news, questions, and the latest developments. For more information about SYSTAP, LLC and bigdata, see [7].

Starting with this release, we offer a WAR artifact [8] for easy installation of the Journal mode database. For custom development and cluster installations we recommend checking out the code from SVN using the tag for this release. The code will build automatically under Eclipse. You can also build the code using the ant script; the cluster installer requires the use of the ant script. You can check out this release from the following URL:

https://bigdata.svn.sourceforge.net/svnroot/bigdata/tags/BIGDATA_RELEASE_1_0_1

Bug fixes:

- https://sourceforge.net/apps/trac/bigdata/ticket/107 (Unicode clean schema names in the sparse row store)
- https://sourceforge.net/apps/trac/bigdata/ticket/124 (TermIdEncoder should use more bits for scale-out)
- https://sourceforge.net/apps/trac/bigdata/ticket/225 (OSX requires specialized performance counter collection classes)
- https://sourceforge.net/apps/trac/bigdata/ticket/348 (BigdataValueFactory.asValue() must return new instance when DummyIV is used)
- https://sourceforge.net/apps/trac/bigdata/ticket/349 (TermIdEncoder limits Journal to 2B distinct RDF Values per triple/quad store instance)
- https://sourceforge.net/apps/trac/bigdata/ticket/351 (SPO not Serializable exception in SIDS mode (scale-out))
- https://sourceforge.net/apps/trac/bigdata/ticket/352 (ClassCastException when querying with binding-values that are not known to the database)
- https://sourceforge.net/apps/trac/bigdata/ticket/353 (UnsupportedOperatorException for some SPARQL queries)
- https://sourceforge.net/apps/trac/bigdata/ticket/355 (Query failure when comparing with non-materialized value)

Note: Some of these bug fixes require data migration. For details, see https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=DataMigration

New features:

- Single machine data storage to ~50B triples/quads (RWStore);
- Simple embedded and/or webapp deployment (NanoSparqlServer);
- 100% native SPARQL 1.0 evaluation with lots of query optimizations.

Feature summary:

- Triples, quads, or triples with provenance (SIDs);
- Fast RDFS+ inference and truth maintenance;
- Clustered data storage is essentially unlimited;
- Fast statement level provenance mode (SIDs).

The road map [3] for the next releases includes:

- High-volume analytic query and SPARQL 1.1 query, including aggregations;
- Simplified deployment, configuration, and administration for clusters; and
- High availability for the journal and the cluster.

For more information, please see the following links:

[1] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Main_Page
[2] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=GettingStarted
[3] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Roadmap
[4] http://www.bigdata.com/bigdata/docs/api/
[5] http://sourceforge.net/projects/bigdata/
[6] http://www.bigdata.com/blog
[7] http://www.systap.com/bigdata.htm
[8] https://sourceforge.net/projects/bigdata/files/bigdata/

About bigdata: Bigdata(R) is a horizontally-scaled, general purpose storage and computing fabric for ordered data (B+Trees), designed to operate on either a single server or a cluster of commodity hardware. Bigdata(R) uses dynamically partitioned key-range shards in order to remove any realistic scaling limits: in principle, bigdata(R) may be deployed on 10s, 100s, or even thousands of machines, and new capacity may be added incrementally without requiring a full reload of all data. The bigdata(R) RDF database supports RDFS and OWL Lite reasoning, high-level query (SPARQL), and datum-level provenance.
|
From: Bryan T. <br...@sy...> - 2011-07-13 15:48:48
|
Israel,

I've added a section to the NanoSparqlServer page on the wiki which explains how to edit the configuration file in order to create a scale-out KB instance [1]. There is one additional thing that you will want to do which was not covered by the email below: change the value of the "namespace" property in the "lubm" component block to the namespace of the KB instance that you are trying to create. (You have to specify it both in the configuration file and to the NanoSparqlServer, since the configuration file is parametrized to override various indices based on the namespace.)

Thanks,
Bryan

[1] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=NanoSparqlServer#Scale-out_.28cluster_.2F_federation.29
|
From: Bryan T. <br...@sy...> - 2011-07-13 14:54:04
|
Israel,
Here is another way to accomplish the same purpose. Modify line #605 of bigdataCluster16.config file:
old:
// properties = new NV[] {};
new:
properties = lubm.properties;
This will direct the NanoSparqlServer to use the configuration for the KB instance described by the "lubm" component in the file. You can then modify the "lubm" component to reflect your use case, e.g., triples versus quads, etc.
To set up for quads, change the following lines in the "lubm" configuration block:
old:
//new NV(BigdataSail.Options.AXIOMS_CLASS, "com.bigdata.rdf.axioms.RdfsAxioms"),
new:
new NV(BigdataSail.Options.AXIOMS_CLASS,"com.bigdata.rdf.axioms.NoAxioms"),
new:
new NV(BigdataSail.Options.QUADS_MODE,"true"),
old:
new NV(BigdataSail.Options.FORWARD_CHAIN_OWL_INVERSE_OF, "true"),
new NV(BigdataSail.Options.FORWARD_CHAIN_OWL_TRANSITIVE_PROPERTY, "true"),
new:
// new NV(BigdataSail.Options.FORWARD_CHAIN_OWL_INVERSE_OF, "true"),
// new NV(BigdataSail.Options.FORWARD_CHAIN_OWL_TRANSITIVE_PROPERTY, "true"),
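For an embedded Journal, outside the cluster configuration file edited above, the same quads-mode switches can be expressed as plain Java properties. A minimal sketch, reusing only the option constants named in the block above; the accepted string values should be verified against this release:

```java
import java.util.Properties;
import com.bigdata.rdf.sail.BigdataSail;

public class QuadsModeProperties {
    public static Properties quadsMode() {
        Properties props = new Properties();
        // Same switches toggled in the "lubm" block above: no axioms,
        // quads on, OWL forward-chaining rules left disabled (unset).
        props.setProperty(BigdataSail.Options.AXIOMS_CLASS,
                "com.bigdata.rdf.axioms.NoAxioms");
        props.setProperty(BigdataSail.Options.QUADS_MODE, "true");
        return props;
    }
}
```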
When you start the NanoSparqlServer you are giving it three arguments:
NanoSparqlServer port namespace configFile
The namespace will be created if it does not exist and it will use the properties from the "com.bigdata.rdf.sail.webapp.NanoSparqlServer" component block in the configuration file. Since you probably already have a KB instance named "kb", specify a different namespace this time around. E.g., "kb2".
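A hypothetical launcher corresponding to the three-argument usage just described; it assumes NanoSparqlServer exposes a standard main() taking exactly these arguments, and the port, namespace, and file name below are placeholders for your own deployment:

```java
public class StartNanoSparqlServer {
    public static void main(String[] args) throws Exception {
        // port, namespace, and cluster configuration file, in that order,
        // per the argument list described above.
        com.bigdata.rdf.sail.webapp.NanoSparqlServer.main(new String[] {
                "8080", "kb2", "bigdataCluster16.config"
        });
    }
}
```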
Thanks,
Bryan
________________________________________
From: Bryan Thompson [br...@sy...]
Sent: Wednesday, July 13, 2011 9:53 AM
To: Isr...@th...; big...@li...
Subject: Re: [Bigdata-developers] using nanoSparqlServer with federation
Israel,
Here is what I think may be going on. I think that you are not explicitly creating a scale-out kb instance after you start the federation and before you start the NanoSparqlServer. If you look at the top of the log when you *first* started the NanoSparqlServer on the cluster, I think that you may see a line similar to the one below.
WARN : 4210 main com.bigdata.rdf.sail.webapp.BigdataRDFServletContextListener.contextInitialized(BigdataRDFServletContextListener.java:165): Creating KB instance: namespace=kb
This indicates that a KB instance was created for you using the properties configured for the BigdataClient for the NanoSparqlServer. However, those properties are almost certain to be incorrect in scale-out as they will be nearly entirely defaults (e.g., basically not specified) out of the box. (The situation is a bit different in the standalone WAR.) The default configuration will create a KB instance which uses some features that are not supported in scale-out, including backchainers for things like (s rdf:type rdfs:Resource). I suspect that there is something about the default configuration which is leading you to this class cast exception, but it would be helpful to have the full stack trace on that.
Normally people will bulk load some data into a cluster as a starting point. However, you are trying to get started through the NanoSparqlServer. So, you are going to need to create an appropriate KB instance and then specify the namespace of that KB instance when you start the NanoSparqlServer (along with the port and the configuration file used to connect to the cluster).
The bigdataCluster16.config (and the other sample cluster configuration files) all contain a section labeled "lubm" which defines a set of initialization properties which are used in conjunction with the MappedRDFDataLoadMaster to create a scale-out KB instance when the bulk loader is executed. The "lubm" section starts at ~ line 1291 of the bigdataCluster16.config file. The setup for the bulk loader follows in the next component configuration block. You can look at the bulk loader code to see how it goes about creating a scale-out instance. The relevant code is ~ RDFMappedDataLoadMaster#952 (createTripleStore()). That method assumes that the named KB instance does not exist. Conditional logic for this can be found at line 892 (openTripleStore()). It should be pretty easy to create a simple utility class which will create the desired KB instance under program control or using properties specified in a Configuration file.
In general, you will need to edit the configuration blocks in order to specify appropriate configuration properties for the scale-out KB instance that you are trying to create. For example, are you intending to use triples or quads mode? All of that needs to get configured and an appropriate KB instance created which you can then connect to using the NanoSparqlServer and run your updates/queries against. I would recommend starting from the configuration block in the bigdataCluster16.config file since it "tweaks" several properties for a clustered KB instance.
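A sketch of the simple utility class suggested above, under loudly stated assumptions: the JiniClient/JiniFederation connection calls and the ScaleOutTripleStore constructor below are modeled on the createTripleStore()/openTripleStore() logic cited in RDFMappedDataLoadMaster, not copied from it, so every class name and signature here should be checked against the release source before use.

```java
import java.util.Properties;
import com.bigdata.journal.ITx;
import com.bigdata.rdf.store.ScaleOutTripleStore;
import com.bigdata.service.jini.JiniClient;
import com.bigdata.service.jini.JiniFederation;

public class CreateScaleOutKB {
    public static void main(String[] args) throws Exception {
        // args[0] = cluster configuration file, args[1] = KB namespace.
        JiniClient client = JiniClient.newInstance(new String[] { args[0] });
        JiniFederation fed = client.connect();
        try {
            // The KB properties (triples vs quads, axioms, etc.) would
            // normally be read from the configuration file, as in
            // RDFMappedDataLoadMaster#createTripleStore(); getProperties()
            // here is an assumed stand-in for that step.
            Properties properties = client.getProperties();
            ScaleOutTripleStore kb = new ScaleOutTripleStore(
                    fed, args[1], ITx.UNISOLATED, properties);
            kb.create(); // assumes the namespace does not already exist
        } finally {
            client.disconnect(true /* immediateShutdown */);
        }
    }
}
```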
I'll follow up off list with my contact information if you want to talk about this directly.
Thanks,
Bryan
________________________________________
From: Isr...@th... [Isr...@th...]
Sent: Wednesday, July 13, 2011 7:49 AM
To: Bryan Thompson
Subject: RE: using nanoSparqlServer with federation
Hi Bryan,
In which log should I expect to see the stack trace? I have changed the
root logger level to DEBUG in the log4j.properties file, but still all
the log files, except for the state.log are empty.
I did not use any program, I just used the poster plug-in in Firefox to
POST a URI.
Thanks,
Israel
-----Original Message-----
From: Bryan Thompson [mailto:br...@sy...]
Sent: Wednesday, July 13, 2011 2:26 PM
To: Klein, Israel; big...@li...
Subject: RE: using nanoSparqlServer with federation
Israel,
If you are having difficulties getting a full stack trace, can you send
a small program which exhibits the exception and I will look into it
this morning. I've tracked down the code change that I was thinking of,
which was to TimestampChooser, and it appears that the change there was
applied to both the release and the development branch so this is
something different. The BTree is the node-local mutable b+tree object.
The IClientIndex is a remote view of a sharded B+Tree. I'm not sure
where the class cast is coming from based on what is inline below, but
if you have the rest of the stack trace or can provide a small example I
will track this down.
Thanks,
bryan
________________________________________
From: Isr...@th... [Isr...@th...]
Sent: Wednesday, July 13, 2011 6:42 AM
To: big...@li...
Subject: [Bigdata-developers] FW: using nanoSparqlServer with federation
From: Klein, Israel
Sent: Wednesday, July 13, 2011 1:39 PM
To: 'big...@li...'
Subject: using nanoSparqlServer with federation
Hi,
I have installed a federation on 9 machines using the
bigdataCluster16.config example which seems to work fine (used the
latest BIGDATA_RELEASE_1_0_0 branch). Then I have started a
nanoSparqlServer.sh and I can access its /status and /counters URLs.
However, when I try to POST a new URI I get the following exception:
<html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=ISO-8859-1"/>
<title>Error 500 java.lang.RuntimeException: java.lang.RuntimeException:
java.lang.RuntimeException: java.util.concurrent.ExecutionException:
java.util.concurrent.ExecutionException: java.lang.ClassCastException:
com.bigdata.btree.BTree cannot be cast to
com.bigdata.service.ndx.IClientIndex</title>
</head>
<body>
<h2>HTTP ERROR: 500</h2>
<p>Problem accessing /sparql. Reason:
<pre> java.lang.RuntimeException: java.lang.RuntimeException:
java.lang.RuntimeException: java.util.concurrent.ExecutionException:
java.util.concurrent.ExecutionException: java.lang.ClassCastException:
com.bigdata.btree.BTree cannot be cast to
com.bigdata.service.ndx.IClientIndex</pre></p>
<hr /><i><small>Powered by Jetty://</small></i>
I use http://[host]:[port]/[namespace]/sparql?uri=[a URL which returns RDF in N3 format], which works fine when using it against a NanoSparqlServer WAR on a separate Tomcat.
Thanks,
Israel
|
|
From: Bryan T. <br...@sy...> - 2011-07-13 14:21:17
|
Israel,

Here is what I think may be going on. I think that you are not explicitly creating a scale-out KB instance after you start the federation and before you start the NanoSparqlServer. If you look at the top of the log when you *first* started the NanoSparqlServer on the cluster, I think that you may see a line similar to the one below.

WARN : 4210 main com.bigdata.rdf.sail.webapp.BigdataRDFServletContextListener.contextInitialized(BigdataRDFServletContextListener.java:165): Creating KB instance: namespace=kb

This indicates that a KB instance was created for you using the properties configured for the BigdataClient for the NanoSparqlServer. However, those properties are almost certain to be incorrect in scale-out as they will be nearly entirely defaults (e.g., basically not specified) out of the box. (The situation is a bit different in the standalone WAR.) The default configuration will create a KB instance which uses some features that are not supported in scale-out, including backchainers for things like (s rdf:type rdfs:Resource). I suspect that there is something about the default configuration which is leading you to this class cast exception, but it would be helpful to have the full stack trace on that.

Normally people will bulk load some data into a cluster as a starting point. However, you are trying to get started through the NanoSparqlServer. So, you are going to need to create an appropriate KB instance and then specify the namespace of that KB instance when you start the NanoSparqlServer (along with the port and the configuration file used to connect to the cluster).

The bigdataCluster16.config (and the other sample cluster configuration files) all contain a section labeled "lubm" which defines a set of initialization properties which are used in conjunction with the MappedRDFDataLoadMaster to create a scale-out KB instance when the bulk loader is executed. The "lubm" section starts at ~ line 1291 of the bigdataCluster16.config file. The setup for the bulk loader follows in the next component configuration block. You can look at the bulk loader code to see how it goes about creating a scale-out instance. The relevant code is ~ RDFMappedDataLoadMaster#952 (createTripleStore()). That method assumes that the named KB instance does not exist. Conditional logic for this can be found at line 892 (openTripleStore()). It should be pretty easy to create a simple utility class which will create the desired KB instance under program control or using properties specified in a Configuration file.

In general, you will need to edit the configuration blocks in order to specify appropriate configuration properties for the scale-out KB instance that you are trying to create. For example, are you intending to use triples or quads mode? All of that needs to get configured and an appropriate KB instance created which you can then connect to using the NanoSparqlServer and run your updates/queries against. I would recommend starting from the configuration block in the bigdataCluster16.config file since it "tweaks" several properties for a clustered KB instance.

I'll follow up off list with my contact information if you want to talk about this directly.

Thanks,
Bryan
|
From: Bryan T. <br...@sy...> - 2011-07-13 12:21:42
|
Israel,

Do NOT change the root level logger to DEBUG for the cluster. Bigdata has a LOT of conditional logging. If you go to root level DEBUG it will produce a huge amount of log information, especially in the com.bigdata.btree package. The problem you are reporting should be logged out at an ERROR level by the BigdataRDFServlet (see my previous email).

The default cluster configuration is designed to aggregate log messages to a node running a log4j SocketAppender. This is handled in the bigdata start/stop script and in src/resources/config/log4j.properties and log4jServer.properties. The log4jServer.properties file should be putting everything onto a single log file whose name is configured in log4jServer.properties. Generally, you want to tail that log file to observe errors.

I would also suggest viewing the page source for the error page that was served up. There might be more information which is not being rendered properly.

Thanks,
Bryan
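As a footnote to the advice above, the targeted alternative to a DEBUG root logger can also be expressed programmatically with the log4j 1.2 API. A minimal sketch; the servlet class name is taken from the neighboring messages, and the chosen levels are illustrative:

```java
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class TargetedLogging {
    public static void main(String[] args) {
        // Keep the root logger quiet to avoid the flood of conditional
        // logging (especially from com.bigdata.btree)...
        Logger.getRootLogger().setLevel(Level.ERROR);
        // ...and raise verbosity only on the servlet that reports the
        // launderThrowable() errors discussed in this thread.
        Logger.getLogger("com.bigdata.rdf.sail.webapp.BigdataRDFServlet")
                .setLevel(Level.INFO);
    }
}
```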
|
From: Bryan T. <br...@sy...> - 2011-07-13 12:14:03
|
Should be com.bigdata.rdf.sail.webapp.BigdataRDFServlet, coming out of the launderThrowable() method in that class.

Thanks,
bryan
|
From: Bryan T. <br...@sy...> - 2011-07-13 11:57:03
|
Israel,

If you are having difficulties getting a full stack trace, can you send a small program which exhibits the exception and I will look into it this morning. I've tracked down the code change that I was thinking of, which was to TimestampChooser, and it appears that the change there was applied to both the release and the development branch, so this is something different.

The BTree is the node-local mutable B+Tree object. The IClientIndex is a remote view of a sharded B+Tree. I'm not sure where the class cast is coming from based on what is inline below, but if you have the rest of the stack trace or can provide a small example I will track this down.

Thanks,
bryan
|
From: Bryan T. <br...@sy...> - 2011-07-13 11:01:36
|
Hello,

This is something which is fixed in the development branch and I had thought was fixed in the release as well. Do you have the rest of the stack trace? I believe that it goes through one of the key-value store (com.bigdata.sparse) atomic update classes.

Thanks,
Bryan
|
From: <Isr...@th...> - 2011-07-13 10:55:10
|
From: Klein, Israel
Sent: Wednesday, July 13, 2011 1:39 PM
To: 'big...@li...'
Subject: using nanoSparqlServer with federation

Hi,

I have installed a federation on 9 machines using the bigdataCluster16.config example, which seems to work fine (used the latest BIGDATA_RELEASE_1_0_0 branch). Then I have started a nanoSparqlServer.sh and I can access its /status and /counters URLs. However, when I try to POST a new URI I get the following exception:

<html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=ISO-8859-1"/>
<title>Error 500 java.lang.RuntimeException: java.lang.RuntimeException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: java.lang.ClassCastException: com.bigdata.btree.BTree cannot be cast to com.bigdata.service.ndx.IClientIndex</title>
</head>
<body>
<h2>HTTP ERROR: 500</h2>
<p>Problem accessing /sparql. Reason:
<pre> java.lang.RuntimeException: java.lang.RuntimeException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: java.lang.ClassCastException: com.bigdata.btree.BTree cannot be cast to com.bigdata.service.ndx.IClientIndex</pre></p>
<hr /><i><small>Powered by Jetty://</small></i>

I use http://[host]:[port]/[namespace]/sparql?uri=[a URL which returns RDF in N3 format], which works fine when using it against a NanoSparqlServer WAR on a separate Tomcat.

Thanks,
Israel
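The POST described in this message can be reproduced without a browser plug-in. A minimal sketch using only the JDK; the host, port, namespace, and document URL are placeholders, and the uri-parameter semantics are as described in this thread:

```java
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class PostUriExample {
    public static void main(String[] args) throws Exception {
        // Placeholders: adjust host, port, namespace, and the RDF document URL.
        String rdfUrl = URLEncoder.encode("http://example.org/data.n3", "UTF-8");
        URL endpoint = new URL("http://localhost:8080/kb/sparql?uri=" + rdfUrl);

        HttpURLConnection conn = (HttpURLConnection) endpoint.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.getOutputStream().close(); // empty body; the uri parameter drives the load

        int code = conn.getResponseCode();
        System.out.println("HTTP " + code);
        InputStream in = code < 400 ? conn.getInputStream() : conn.getErrorStream();
        if (in != null) {
            in.close();
        }
        conn.disconnect();
    }
}
```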
|
From: Jack P. <jac...@gm...> - 2011-07-05 17:43:54
|
I suppose if you are speaking of those iomega 4tb boxes as consumer level, then yes. I maintain a connection with it, but do notice that it sometimes takes a while to respond.

Jack

On Tue, Jul 5, 2011 at 10:36 AM, Bryan Thompson <br...@sy...> wrote:
> We have people who deploy against NAS. Or are you speaking about a consumer level device? Those often do not offer a full time connection. Bryan
>
>> -----Original Message-----
>> From: Jack Park [mailto:jac...@gm...]
>> Sent: Tuesday, July 05, 2011 1:32 PM
>> To: big...@li...
>> Subject: Re: [Bigdata-developers] bigdata 1.0.0 release announcement
>>
>> When one chooses the path to the journal, has anyone had any success choosing a path to one of those RAID networked terabyte stores?
>>
>> Jack
>>
>> [...] |
|
From: Bryan T. <br...@sy...> - 2011-07-05 17:36:44
|
We have people who deploy against NAS. Or are you speaking about a consumer level device? Those often do not offer a full time connection.

Bryan

> -----Original Message-----
> From: Jack Park [mailto:jac...@gm...]
> Sent: Tuesday, July 05, 2011 1:32 PM
> To: big...@li...
> Subject: Re: [Bigdata-developers] bigdata 1.0.0 release announcement
>
> When one chooses the path to the journal, has anyone had any success choosing a path to one of those RAID networked terabyte stores?
>
> Jack
>
> [...] |
|
From: Jack P. <jac...@gm...> - 2011-07-05 17:31:48
|
When one chooses the path to the journal, has anyone had any success choosing a path to one of those RAID networked terabyte stores?

Jack

On Tue, Jul 5, 2011 at 10:26 AM, Bryan Thompson <br...@sy...> wrote:
> This is a bigdata (R) release. This release is capable of loading 1B triples in under one hour on a 15 node cluster. JDK 1.6 is required.
>
> [...] |
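On the configuration side of this question: the Journal lives in a single file whose location is given by a configuration property, so pointing it at a NAS mount is just a matter of the path, with commit latency tracking the backing store. A minimal sketch, assuming the property name com.bigdata.journal.AbstractJournal.file (check the javadoc for your release for the authoritative option names) and a hypothetical mount point.

    import java.util.Properties;
    import com.bigdata.rdf.sail.BigdataSail;

    public class JournalOnNas {

        public static void main(final String[] args) throws Exception {

            final Properties props = new Properties();

            // Hypothetical NAS mount point -- the journal is a single file, so
            // any path the JVM can open for read/write should work; commit
            // latency will track the latency of the backing store.
            props.setProperty("com.bigdata.journal.AbstractJournal.file",
                    "/mnt/nas/bigdata.jnl");

            final BigdataSail sail = new BigdataSail(props);
            sail.initialize();
            try {
                // ... use the sail (or wrap it in a repository) ...
            } finally {
                sail.shutDown();
            }
        }
    }

As noted in the reply above, a consumer device that does not offer a full-time connection is a poor fit, since the journal expects the file to stay reachable.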
|
From: Bryan T. <br...@sy...> - 2011-07-05 17:26:16
|
This is a bigdata (R) release. This release is capable of loading 1B triples in under one hour on a 15 node cluster. JDK 1.6 is required.

Bigdata(R) is a horizontally scaled open source architecture for indexed data with an emphasis on semantic web data architectures. Bigdata operates in both a single machine mode (Journal) and a cluster mode (Federation). The Journal provides fast scalable ACID indexed storage for very large data sets. The federation provides fast scalable shard-wise parallel indexed storage using dynamic sharding and shard-wise ACID updates. Both platforms support fully concurrent readers with snapshot isolation.

Distributed processing offers greater throughput but does not reduce query or update latency. Choose the Journal when the anticipated scale and throughput requirements permit. Choose the Federation when the administrative and machine overhead associated with operating a cluster is an acceptable tradeoff to have essentially unlimited data scaling and throughput.

See [1,2,8] for instructions on installing bigdata(R), [4] for the javadoc, and [3,5,6] for news, questions, and the latest developments. For more information about SYSTAP, LLC and bigdata, see [7].

Starting with this release, we offer a WAR artifact [8] for easy installation of the Journal mode database. For custom development and cluster installations we recommend checking out the code from SVN using the tag for this release. The code will build automatically under eclipse. You can also build the code using the ant script. The cluster installer requires the use of the ant script. You can check out this release from the following URL:

https://bigdata.svn.sourceforge.net/svnroot/bigdata/branches/BIGDATA_RELEASE_1_0_0

New features:

- Single machine data storage to ~50B triples/quads (RWStore);
- Simple embedded and/or webapp deployment (NanoSparqlServer);
- Fast 100% native SPARQL 1.0 evaluation with lots of query optimizations.

Feature summary:

- Triples, quads, or triples with provenance (SIDs);
- Fast RDFS+ inference and truth maintenance;
- Clustered data storage is essentially unlimited;
- Fast statement level provenance mode (SIDs).

The road map [3] for the next releases includes:

- High-volume analytic query and SPARQL 1.1 query, including aggregations;
- Simplified deployment, configuration, and administration for clusters; and
- High availability for the journal and the cluster.

For more information, please see the following links:

[1] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Main_Page
[2] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=GettingStarted
[3] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Roadmap
[4] http://www.bigdata.com/bigdata/docs/api/
[5] http://sourceforge.net/projects/bigdata/
[6] http://www.bigdata.com/blog
[7] http://www.systap.com/bigdata.htm
[8] https://sourceforge.net/projects/bigdata/files/bigdata/

About bigdata:

Bigdata(r) is a horizontally-scaled, general purpose storage and computing fabric for ordered data (B+Trees), designed to operate on either a single server or a cluster of commodity hardware. Bigdata(r) uses dynamically partitioned key-range shards in order to remove any realistic scaling limits - in principle, bigdata(r) may be deployed on 10s, 100s, or even thousands of machines and new capacity may be added incrementally without requiring the full reload of all data. The bigdata(r) RDF database supports RDFS and OWL Lite reasoning, high-level query (SPARQL), and datum level provenance. |
|
From: Jack P. <jac...@gm...> - 2011-07-05 16:45:11
|
<just joined the list>

On Tue, Jul 5, 2011 at 9:41 AM, Jack Park <jac...@gm...> wrote:
> Seriously cool.
> Monday the 11th is the only hole in my schedule for next week; the following week appears friendly to lots of ideas.
>
> Cheers
> Jack
>
> On Tue, Jul 5, 2011 at 8:34 AM, Bryan Thompson <br...@sy...> wrote:
>> All,
>>
>> There appears to be quite a bit of interest in supporting higher level reasoning against bigdata.
>>
>> [...] |
|
From: Jack P. <jac...@gm...> - 2011-07-05 16:41:17
|
Seriously cool.
Monday the 11th is the only hole in my schedule for next week; the following week appears friendly to lots of ideas.

Cheers
Jack

On Tue, Jul 5, 2011 at 8:34 AM, Bryan Thompson <br...@sy...> wrote:
> All,
>
> There appears to be quite a bit of interest in supporting higher level reasoning against bigdata.
>
> [...] |
|
From: Bryan T. <br...@sy...> - 2011-07-05 15:33:59
|
All,

There appears to be quite a bit of interest in supporting higher level reasoning against bigdata.

I'd like to organize a conference call (Skype) to:

- Make introductions;
- Identify core requirements;
- Identify an approach for which there is some level of consensus *and* on which people would be interested in collaborating.

I see the following as possible goals for such an effort:

- API hooks for other languages (Prolog, Lisp). This would allow people to provide their own integrations.

- Embedded support for Prolog in SPARQL extension operators. The Prolog program would see solutions as they streamed through the query. The Prolog program could issue subqueries against bigdata when filtering or expanding solutions.

- Query rewriting to support rule-oriented model theories, such as the OWL2 RL profile or application-defined rules.

- Support for incremental materialization of entailments and view maintenance in a manner which is aware of bigdata's concurrency control architecture.

Please reply either on list or directly with your level of interest and availability for a 1 hour call within the next 1-2 weeks.

Thanks,
Bryan |
|
From: Bryan T. <br...@sy...> - 2011-06-24 17:43:36
|
We are preparing to merge the QUADS branch down to the trunk. Please be sure that you are fully committed and then change over to the TERMS branch (TERMS_REFACTOR_BRANCH) for new development. We'll do a release from the quads branch shortly.

The quads branch provided:

- Single machine scaling to ~50B triples/quads (RWStore).
- Lots of query optimizations (36,000 QMpH for the BSBM3 reduced query mix).
- 100% native SPARQL 1.0 query evaluation.
- New REST API for SPARQL query and update.
- Simple webapp installer (non-Sesame).
- Simple embedded webapp (using jetty).

We continue to support the Sesame API, but the Sesame web application install process is too cumbersome, so we've introduced the NanoSparqlServer as an easily installed alternative with a REST API. We will handle analytic query (scalable support for SPARQL 1.1) in the TERMS branch.

Thanks,
Bryan |
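For those who have not tried the REST API mentioned above: a SPARQL query is a plain HTTP GET with a query parameter, so any HTTP client can drive the server. A minimal Java sketch, assuming a hypothetical endpoint at http://localhost:8080/sparql; the actual path depends on how the server was deployed.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.net.URLEncoder;

    public class QueryNanoSparqlServer {

        public static void main(final String[] args) throws Exception {

            // Hypothetical endpoint -- adjust host, port, and path for your install.
            final String endpoint = "http://localhost:8080/sparql";
            final String query = "SELECT * WHERE { ?s ?p ?o } LIMIT 10";

            final URL url = new URL(endpoint + "?query="
                    + URLEncoder.encode(query, "UTF-8"));
            final HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestProperty("Accept", "application/sparql-results+xml");

            // Print the raw SPARQL results document.
            final BufferedReader reader = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), "UTF-8"));
            try {
                for (String line; (line = reader.readLine()) != null;) {
                    System.out.println(line);
                }
            } finally {
                reader.close();
            }
        }
    }

Updates go through the same endpoint via POST (see the uri-parameter example earlier in this archive), which is what makes the NanoSparqlServer usable without the Sesame web application install.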
|
From: b. <no...@so...> - 2011-06-23 14:38:51
|
#72: Testing - Make changes to allow all tests of TestServiceStarter to pass
when run using both ant & eclipse
---------------------------------+------------------------------------------
Reporter: btmurphy | Owner: btmurphy
Type: defect | Status: closed
Priority: major | Milestone:
Component: Bigdata Federation | Version:
Resolution: fixed | Keywords:
---------------------------------+------------------------------------------
Changes (by thompsonbry):
* status: new => closed
* resolution: => fixed
Comment:
This has been resolved in [1].
[1] https://sourceforge.net/apps/trac/bigdata/ticket/297#comment:17
--
Ticket URL: <http://sourceforge.net/apps/trac/bigdata/ticket/72#comment:11>
bigdata® <http://www.bigdata.com/blog>
bigdata® is a scale-out storage and computing fabric supporting optional transactions, very high concurrency, and very high aggregate IO rates.
|
|
From: Bryan T. <br...@sy...> - 2011-05-16 17:09:00
|
Release 0.84.0 is done. We are planning another release shortly from the quads branch.

Thanks,
Bryan |
|
From: Bryan T. <br...@sy...> - 2011-05-16 13:06:44
|
I am preparing release 0.84.0 from the trunk. The release notes are inline below.

This release has several important bug fixes and includes a new feature for fast reverse provenance lookup. We will be doing another release very soon off the quads branch which will include the RWStore (10B+ triples on a single machine) and full native query evaluation. After that, our attention will turn to SPARQL 1.1 and analytic query support.

I'll send out another message once this release has been posted.

Thanks,
Bryan

This is a bigdata (R) release. This release is capable of loading 1B triples in under one hour on a 15 node cluster. JDK 1.6 is required.

See [1,2] for instructions on installing bigdata(R), [4] for the javadoc, and [3,5,6] for news, questions, and the latest developments. For more information about SYSTAP, LLC and bigdata, see [7].

Please note that we recommend checking out the code from SVN using the tag for this release. The code will build automatically under eclipse. You can also build the code using the ant script. The cluster installer requires the use of the ant script. You can check out this release from the following URL:

https://bigdata.svn.sourceforge.net/svnroot/bigdata/branches/BIGDATA_RELEASE_0_84_0

New features:

- Inlining provenance metadata into the statement indices and fast reverse lookup of provenance metadata using statement identifiers (SIDs).

Significant bug fixes:

- The journal size could double in some cases following a restart due to a typo in the WORMStrategy constructor.
  See https://sourceforge.net/apps/trac/bigdata/ticket/236

- Fixed a concurrency hole in the commit protocol for the Journal which could result in a concurrent modification to the B+Tree during the commit protocol.

- Fixed a problem in the abort protocol for the BigdataSail.

- Fixed a problem where the BigdataSail would permit the same thread to obtain more than one UNISOLATED connection.

See https://sourceforge.net/apps/trac/bigdata/ticket/278
See https://sourceforge.net/apps/trac/bigdata/ticket/284
See https://sourceforge.net/apps/trac/bigdata/ticket/288
See https://sourceforge.net/apps/trac/bigdata/ticket/289

The road map [3] for the next releases includes:

- Single machine data storage to 10B+ triples;
- 100% native SPARQL evaluation with lots of query optimizations;
- High-volume analytic query workloads and SPARQL 1.1 query, including aggregations;
- Simplified deployment, configuration, and administration for clusters; and
- High availability for the journal and the cluster.

For more information, please see the following links:

[1] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Main_Page
[2] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=GettingStarted
[3] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Roadmap
[4] http://www.bigdata.com/bigdata/docs/api/
[5] http://sourceforge.net/projects/bigdata/
[6] http://www.bigdata.com/blog
[7] http://www.systap.com/bigdata.htm

About bigdata:

Bigdata(r) is a horizontally-scaled, general purpose storage and computing fabric for ordered data (B+Trees), designed to operate on either a single server or a cluster of commodity hardware. Bigdata(r) uses dynamically partitioned key-range shards in order to remove any realistic scaling limits - in principle, bigdata(r) may be deployed on 10s, 100s, or even thousands of machines and new capacity may be added incrementally without requiring the full reload of all data. The bigdata(r) RDF database supports RDFS and OWL Lite reasoning, high-level query (SPARQL), and datum level provenance. |
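To make the UNISOLATED fix above concrete: bigdata has a single unisolated (mutable) view of the indices, so writers must serialize on it. The sketch below is illustrative only; it assumes the Sesame Sail API that BigdataSail implements, a hypothetical journal path, and that getConnection() hands back the unisolated writable view. The bug fixed in this release was that one thread could end up holding two such connections at once.

    import java.util.Properties;
    import org.openrdf.sail.SailConnection;
    import com.bigdata.rdf.sail.BigdataSail;

    public class SingleWriterSketch {

        public static void main(final String[] args) throws Exception {

            final Properties props = new Properties();
            // Hypothetical journal location.
            props.setProperty("com.bigdata.journal.AbstractJournal.file",
                    "/tmp/bigdata-0.84.jnl");

            final BigdataSail sail = new BigdataSail(props);
            sail.initialize();
            try {
                // Assumption: this is the single unisolated (writable) view, so
                // a thread must close its connection before asking for another.
                final SailConnection conn = sail.getConnection();
                try {
                    // ... add/remove statements ...
                    conn.commit();
                } finally {
                    conn.close(); // releases the unisolated view for other writers
                }
            } finally {
                sail.shutDown();
            }
        }
    }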
|
From: hudson <no...@no...> - 2010-11-22 19:58:24
|
See <http://localhost/job/BigData/167/changes> |
|
From: hudson <no...@no...> - 2010-11-19 19:52:02
|
See <http://localhost/job/BigData/changes> |
|
From: hudson <no...@no...> - 2010-11-02 13:28:40
|
See <http://localhost/job/BigData/changes> |