This list is closed, nobody may subscribe to it.
Messages per month:

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2010 | | 19 | 8 | 25 | 16 | 77 | 131 | 76 | 30 | 7 | 3 | |
| 2011 | | | | | 2 | 2 | 16 | 3 | 1 | | 7 | 7 |
| 2012 | 10 | 1 | 8 | 6 | 1 | 3 | 1 | | 1 | | 8 | 2 |
| 2013 | 5 | 12 | 2 | 1 | 1 | 1 | 22 | 50 | 31 | 64 | 83 | 28 |
| 2014 | 31 | 18 | 27 | 39 | 45 | 15 | 6 | 27 | 6 | 67 | 70 | 1 |
| 2015 | 3 | 18 | 22 | 121 | 42 | 17 | 8 | 11 | 26 | 15 | 66 | 38 |
| 2016 | 14 | 59 | 28 | 44 | 21 | 12 | 9 | 11 | 4 | 2 | 1 | |
| 2017 | 20 | 7 | 4 | 18 | 7 | 3 | 13 | 2 | 4 | 9 | 2 | 5 |
| 2018 | | | | 2 | | | | | | | | |
| 2019 | | | 1 | | | | | | | | | |
From: Bryan T. <br...@sy...> - 2012-11-20 13:00:00
There is something wrong with your zookeeper setup or how you describe that setup to bigdata:

> Zookeeper not connected: startup sequence aborted.

Thanks,
Bryan

From: WanCai Wen <wen...@gm...>
Date: Tuesday, November 20, 2012 4:40 AM
To: big...@li...
Subject: Re: [Bigdata-developers] start bigdata has a problem

> Zookeeper not connected: startup sequence aborted.
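Since "Zookeeper not connected" means the client never reached a live ensemble, a quick sanity check is to probe the same connect string the bigdata configuration uses, independently of bigdata. The sketch below is not part of bigdata; it assumes the ZooKeeper Java client (org.apache.zookeeper) is on the classpath, and the localhost:2181 default is a placeholder for your actual host list.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

// Minimal connectivity probe: pass the same connect string that your
// bigdata configuration hands to zookeeper (host:port[,host:port...]).
public class ZkProbe {

    public static void main(final String[] args) throws Exception {
        final String hosts = args.length > 0 ? args[0] : "localhost:2181"; // placeholder default
        final CountDownLatch connected = new CountDownLatch(1);
        final ZooKeeper zk = new ZooKeeper(hosts, 10000/* sessionTimeout(ms) */,
                new Watcher() {
                    public void process(final WatchedEvent event) {
                        // Fires once the client reaches the SyncConnected state.
                        if (event.getState() == Event.KeeperState.SyncConnected) {
                            connected.countDown();
                        }
                    }
                });
        try {
            if (connected.await(10, TimeUnit.SECONDS)) {
                System.out.println("Connected: " + zk.getState());
            } else {
                // Same symptom the services manager reports: no session, so startup aborts.
                System.out.println("NOT connected: " + zk.getState());
            }
        } finally {
            zk.close();
        }
    }
}
```

If this probe never reaches SyncConnected, the services manager will fail in exactly the way quoted above, so fix the ensemble address, DNS, or firewall first.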
From: WanCai W. <wen...@gm...> - 2012-11-20 09:40:15
2012/11/20 WanCai Wen <wen...@gm...>:

> Hello everyone, can anyone help me, please?
> How do I set up a cluster? Must bigdata be installed and started on each instance?
> I installed bigdata on one instance and set it as a manager server.
>
> *After changing status=start in the state file:*
>
> FATAL: 180886 main com.bigdata.service.jini.AbstractServer.fatal(AbstractServer.java:419): Could not start service: com.bigdata.jini.start.ServicesManagerServer{serviceName=com...@ip...ernal#1151716620, hostname=ip-10-161-81-110.us-west-1.compute.internal, serviceUUID=f7f2d6b0-fa71-4a29-8639-a55575eb1441}
> java.lang.RuntimeException: java.lang.Exception: Zookeeper not connected: startup sequence aborted.
>     at com.bigdata.jini.start.AbstractServicesManagerService.start(AbstractServicesManagerService.java:244)
>     at com.bigdata.jini.start.ServicesManagerServer$AdministrableServicesManagerService.start(ServicesManagerServer.java:564)
>     at com.bigdata.jini.start.ServicesManagerServer$AdministrableServicesManagerService.start(ServicesManagerServer.java:404)
>     at com.bigdata.service.jini.AbstractServer.<init>(AbstractServer.java:841)
>     at com.bigdata.jini.start.ServicesManagerServer.<init>(ServicesManagerServer.java:354)
>     at com.bigdata.jini.start.ServicesManagerServer.main(ServicesManagerServer.java:372)
> Caused by: java.lang.Exception: Zookeeper not connected: startup sequence aborted.
>     at com.bigdata.jini.start.ServicesManagerStartupTask.doStartup(ServicesManagerStartupTask.java:178)
>     at com.bigdata.jini.start.ServicesManagerStartupTask.call(ServicesManagerStartupTask.java:105)
>     at com.bigdata.jini.start.AbstractServicesManagerService.setup(AbstractServicesManagerService.java:306)
>     at com.bigdata.jini.start.AbstractServicesManagerService.start(AbstractServicesManagerService.java:240)
>     ... 5 more
>
> *Output of listServices.sh:*
>
> Waiting 5000ms for service discovery.
> Zookeeper is not running.
> Discovered 0 jini service registrars.
> Discovered 0 services
> Discovered 0 stale bigdata services.
> Discovered 0 live bigdata services.
> Discovered 0 other services.
> Bigdata services by serviceIface:
> Bigdata services by hostname:
From: WanCai W. <wen...@gm...> - 2012-11-20 03:50:39
Hi, I am new to bigdata RDF. I recently decided to build a scale-out deployment, but I don't know how to start. Can anyone help me, please?
From: Bryan T. <br...@sy...> - 2012-09-16 15:52:31
This is a critical maintenance release of bigdata(R). Users of version 1.2.1 are strongly encouraged to upgrade to this release.

Bigdata is a horizontally-scaled, open-source architecture for indexed data with an emphasis on RDF, capable of loading 1B triples in under one hour on a 15-node cluster. Bigdata operates in both a single-machine mode (Journal) and a cluster mode (Federation). The Journal provides fast, scalable ACID indexed storage for very large data sets, up to 50 billion triples/quads. The Federation provides fast, scalable, shard-wise parallel indexed storage using dynamic sharding and shard-wise ACID updates, with incremental cluster size growth. Both platforms support fully concurrent readers with snapshot isolation. Distributed processing offers greater throughput but does not reduce query or update latency. Choose the Journal when the anticipated scale and throughput requirements permit. Choose the Federation when the administrative and machine overhead associated with operating a cluster is an acceptable tradeoff for essentially unlimited data scaling and throughput.

See [1,2,8] for instructions on installing bigdata(R), [4] for the javadoc, and [3,5,6] for news, questions, and the latest developments. For more information about SYSTAP, LLC and bigdata, see [7].

Starting with the 1.0.0 release, we offer a WAR artifact [8] for easy installation of the single-machine RDF database. For custom development and cluster installations we recommend checking out the code from SVN using the tag for this release. The code will build automatically under Eclipse. You can also build the code using the ant script. The cluster installer requires the use of the ant script.

You can download the WAR from: http://sourceforge.net/projects/bigdata/

You can check out this release from: https://bigdata.svn.sourceforge.net/svnroot/bigdata/tags/BIGDATA_RELEASE_1_2_2

New features:

- SPARQL 1.1 UPDATE
- SPARQL 1.1 Service Description
- SPARQL 1.1 Basic Federated Query
- New integration point for custom services (ServiceRegistry)
- Remote Java client for NanoSparqlServer
- Sesame 2.6.3
- Ganglia integration (cluster)
- Performance improvements (cluster)
- MemoryManager mode for the Journal (native memory Journal)

Feature summary:

- Single-machine data storage to ~50B triples/quads (RWStore);
- Clustered data storage is essentially unlimited;
- Simple embedded and/or webapp deployment (NanoSparqlServer);
- Triples, quads, or triples with provenance (SIDs);
- Fast RDFS+ inference and truth maintenance;
- Fast 100% native SPARQL 1.1 evaluation;
- Integrated "analytic" query package;
- 100% Java memory manager leverages the JVM native heap (no GC).

Road map [3]:

- SPARQL 1.1 property paths (last missing feature for SPARQL 1.1);
- Runtime Query Optimizer for Analytic Query mode;
- Simplified deployment, configuration, and administration for clusters; and
- High availability for the journal and the cluster.

Change log:

Note: Versions with (*) MAY require data migration. For details, see [9].

1.2.2:

- http://sourceforge.net/apps/trac/bigdata/ticket/586 (RWStore immedateFree() not removing Checkpoint addresses from the historical index cache.)
- http://sourceforge.net/apps/trac/bigdata/ticket/602 (RWStore does not discard logged deletes on reset())
- http://sourceforge.net/apps/trac/bigdata/ticket/603 (Prepare critical maintenance release as branch of 1.2.1)

1.2.1:

- http://sourceforge.net/apps/trac/bigdata/ticket/533 (Review materialization for inline IVs)
- http://sourceforge.net/apps/trac/bigdata/ticket/539 (NotMaterializedException with REGEX and Vocab)
- http://sourceforge.net/apps/trac/bigdata/ticket/540 (SPARQL UPDATE using NSS via index.html)
- http://sourceforge.net/apps/trac/bigdata/ticket/541 (MemoryManaged backed Journal mode)
- http://sourceforge.net/apps/trac/bigdata/ticket/546 (Index cache for Journal)
- http://sourceforge.net/apps/trac/bigdata/ticket/549 (BTree can not be cast to Name2Addr (MemStore recycler))
- http://sourceforge.net/apps/trac/bigdata/ticket/550 (NPE in Leaf.getKey() : root cause was user error)
- http://sourceforge.net/apps/trac/bigdata/ticket/558 (SPARQL INSERT not working in same request after INSERT DATA)
- http://sourceforge.net/apps/trac/bigdata/ticket/562 (Sub-select in INSERT cause NPE in UpdateExprBuilder)
- http://sourceforge.net/apps/trac/bigdata/ticket/563 (DISTINCT ORDER BY)
- http://sourceforge.net/apps/trac/bigdata/ticket/567 (Failure to set cached value on IV results in incorrect behavior for complex UPDATE operation)
- http://sourceforge.net/apps/trac/bigdata/ticket/568 (DELETE WHERE fails with Java AssertionError)
- http://sourceforge.net/apps/trac/bigdata/ticket/569 (LOAD-CREATE-LOAD using virgin journal fails with "Graph exists" exception)
- http://sourceforge.net/apps/trac/bigdata/ticket/571 (DELETE/INSERT WHERE handling of blank nodes)
- http://sourceforge.net/apps/trac/bigdata/ticket/573 (NullPointerException when attempting to INSERT DATA containing a blank node)

1.2.0: (*)

- http://sourceforge.net/apps/trac/bigdata/ticket/92 (Monitoring webapp)
- http://sourceforge.net/apps/trac/bigdata/ticket/267 (Support evaluation of 3rd party operators)
- http://sourceforge.net/apps/trac/bigdata/ticket/337 (Compact and efficient movement of binding sets between nodes.)
- http://sourceforge.net/apps/trac/bigdata/ticket/433 (Cluster leaks threads under read-only index operations: DGC thread leak)
- http://sourceforge.net/apps/trac/bigdata/ticket/437 (Thread-local cache combined with unbounded thread pools causes effective memory leak: termCache memory leak & thread-local buffers)
- http://sourceforge.net/apps/trac/bigdata/ticket/438 (KeyBeforePartitionException on cluster)
- http://sourceforge.net/apps/trac/bigdata/ticket/439 (Class loader problem)
- http://sourceforge.net/apps/trac/bigdata/ticket/441 (Ganglia integration)
- http://sourceforge.net/apps/trac/bigdata/ticket/443 (Logger for RWStore transaction service and recycler)
- http://sourceforge.net/apps/trac/bigdata/ticket/444 (SPARQL query can fail to notice when IRunningQuery.isDone() on cluster)
- http://sourceforge.net/apps/trac/bigdata/ticket/445 (RWStore does not track tx release correctly)
- http://sourceforge.net/apps/trac/bigdata/ticket/446 (HTTP Repostory broken with bigdata 1.1.0)
- http://sourceforge.net/apps/trac/bigdata/ticket/448 (SPARQL 1.1 UPDATE)
- http://sourceforge.net/apps/trac/bigdata/ticket/449 (SPARQL 1.1 Federation extension)
- http://sourceforge.net/apps/trac/bigdata/ticket/451 (Serialization error in SIDs mode on cluster)
- http://sourceforge.net/apps/trac/bigdata/ticket/454 (Global Row Store Read on Cluster uses Tx)
- http://sourceforge.net/apps/trac/bigdata/ticket/456 (IExtension implementations do point lookups on lexicon)
- http://sourceforge.net/apps/trac/bigdata/ticket/457 ("No such index" on cluster under concurrent query workload)
- http://sourceforge.net/apps/trac/bigdata/ticket/458 (Java level deadlock in DS)
- http://sourceforge.net/apps/trac/bigdata/ticket/460 (Uncaught interrupt resolving RDF terms)
- http://sourceforge.net/apps/trac/bigdata/ticket/461 (KeyAfterPartitionException / KeyBeforePartitionException on cluster)
- http://sourceforge.net/apps/trac/bigdata/ticket/463 (NoSuchVocabularyItem with LUBMVocabulary for DerivedNumericsExtension)
- http://sourceforge.net/apps/trac/bigdata/ticket/464 (Query statistics do not update correctly on cluster)
- http://sourceforge.net/apps/trac/bigdata/ticket/465 (Too many GRS reads on cluster)
- http://sourceforge.net/apps/trac/bigdata/ticket/469 (Sail does not flush assertion buffers before query)
- http://sourceforge.net/apps/trac/bigdata/ticket/472 (acceptTaskService pool size on cluster)
- http://sourceforge.net/apps/trac/bigdata/ticket/475 (Optimize serialization for query messages on cluster)
- http://sourceforge.net/apps/trac/bigdata/ticket/476 (Test suite for writeCheckpoint() and recycling for BTree/HTree)
- http://sourceforge.net/apps/trac/bigdata/ticket/478 (Cluster does not map input solution(s) across shards)
- http://sourceforge.net/apps/trac/bigdata/ticket/480 (Error releasing deferred frees using 1.0.6 against a 1.0.4 journal)
- http://sourceforge.net/apps/trac/bigdata/ticket/481 (PhysicalAddressResolutionException against 1.0.6)
- http://sourceforge.net/apps/trac/bigdata/ticket/482 (RWStore reset() should be thread-safe for concurrent readers)
- http://sourceforge.net/apps/trac/bigdata/ticket/484 (Java API for NanoSparqlServer REST API)
- http://sourceforge.net/apps/trac/bigdata/ticket/491 (AbstractTripleStore.destroy() does not clear the locator cache)
- http://sourceforge.net/apps/trac/bigdata/ticket/492 (Empty chunk in ThickChunkMessage (cluster))
- http://sourceforge.net/apps/trac/bigdata/ticket/493 (Virtual Graphs)
- http://sourceforge.net/apps/trac/bigdata/ticket/496 (Sesame 2.6.3)
- http://sourceforge.net/apps/trac/bigdata/ticket/497 (Implement STRBEFORE, STRAFTER, and REPLACE)
- http://sourceforge.net/apps/trac/bigdata/ticket/498 (Bring bigdata RDF/XML parser up to openrdf 2.6.3.)
- http://sourceforge.net/apps/trac/bigdata/ticket/500 (SPARQL 1.1 Service Description)
- http://www.openrdf.org/issues/browse/SES-884 (Aggregation with an solution set as input should produce an empty solution as output)
- http://www.openrdf.org/issues/browse/SES-862 (Incorrect error handling for SPARQL aggregation; fix in 2.6.1)
- http://www.openrdf.org/issues/browse/SES-873 (Order the same Blank Nodes together in ORDER BY)
- http://sourceforge.net/apps/trac/bigdata/ticket/501 (SPARQL 1.1 BINDINGS are ignored)
- http://sourceforge.net/apps/trac/bigdata/ticket/503 (Bigdata2Sesame2BindingSetIterator throws QueryEvaluationException were it should throw NoSuchElementException)
- http://sourceforge.net/apps/trac/bigdata/ticket/504 (UNION with Empty Group Pattern)
- http://sourceforge.net/apps/trac/bigdata/ticket/505 (Exception when using SPARQL sort & statement identifiers)
- http://sourceforge.net/apps/trac/bigdata/ticket/506 (Load, closure and query performance in 1.1.x versus 1.0.x)
- http://sourceforge.net/apps/trac/bigdata/ticket/508 (LIMIT causes hash join utility to log errors)
- http://sourceforge.net/apps/trac/bigdata/ticket/513 (Expose the LexiconConfiguration to Function BOPs)
- http://sourceforge.net/apps/trac/bigdata/ticket/515 (Query with two "FILTER NOT EXISTS" expressions returns no results)
- http://sourceforge.net/apps/trac/bigdata/ticket/516 (REGEXBOp should cache the Pattern when it is a constant)
- http://sourceforge.net/apps/trac/bigdata/ticket/517 (Java 7 Compiler Compatibility)
- http://sourceforge.net/apps/trac/bigdata/ticket/518 (Review function bop subclass hierarchy, optimize datatype bop, etc.)
- http://sourceforge.net/apps/trac/bigdata/ticket/520 (CONSTRUCT WHERE shortcut)
- http://sourceforge.net/apps/trac/bigdata/ticket/521 (Incremental materialization of Tuple and Graph query results)
- http://sourceforge.net/apps/trac/bigdata/ticket/525 (Modify the IChangeLog interface to support multiple agents)
- http://sourceforge.net/apps/trac/bigdata/ticket/527 (Expose timestamp of LexiconRelation to function bops)
- http://sourceforge.net/apps/trac/bigdata/ticket/532 (ClassCastException during hash join (can not be cast to TermId))
- http://sourceforge.net/apps/trac/bigdata/ticket/533 (Review materialization for inline IVs)
- http://sourceforge.net/apps/trac/bigdata/ticket/534 (BSBM BI Q5 error using MERGE JOIN)

1.1.0: (*)

- http://sourceforge.net/apps/trac/bigdata/ticket/23 (Lexicon joins)
- http://sourceforge.net/apps/trac/bigdata/ticket/109 (Store large literals as "blobs")
- http://sourceforge.net/apps/trac/bigdata/ticket/181 (Scale-out LUBM "how to" in wiki and build.xml are out of date.)
- http://sourceforge.net/apps/trac/bigdata/ticket/203 (Implement an persistence capable hash table to support analytic query)
- http://sourceforge.net/apps/trac/bigdata/ticket/209 (AccessPath should visit binding sets rather than elements for high level query.)
- http://sourceforge.net/apps/trac/bigdata/ticket/227 (SliceOp appears to be necessary when operator plan should suffice without)
- http://sourceforge.net/apps/trac/bigdata/ticket/232 (Bottom-up evaluation semantics)
- http://sourceforge.net/apps/trac/bigdata/ticket/246 (Derived xsd numeric data types must be inlined as extension types.)
- http://sourceforge.net/apps/trac/bigdata/ticket/254 (Revisit pruning of intermediate variable bindings during query execution)
- http://sourceforge.net/apps/trac/bigdata/ticket/261 (Lift conditions out of subqueries.)
- http://sourceforge.net/apps/trac/bigdata/ticket/300 (Native ORDER BY)
- http://sourceforge.net/apps/trac/bigdata/ticket/324 (Inline predeclared URIs and namespaces in 2-3 bytes)
- http://sourceforge.net/apps/trac/bigdata/ticket/330 (NanoSparqlServer does not locate "html" resources when run from jar)
- http://sourceforge.net/apps/trac/bigdata/ticket/334 (Support inlining of unicode data in the statement indices.)
- http://sourceforge.net/apps/trac/bigdata/ticket/364 (Scalable default graph evaluation)
- http://sourceforge.net/apps/trac/bigdata/ticket/368 (Prune variable bindings during query evaluation)
- http://sourceforge.net/apps/trac/bigdata/ticket/370 (Direct translation of openrdf AST to bigdata AST)
- http://sourceforge.net/apps/trac/bigdata/ticket/373 (Fix StrBOp and other IValueExpressions)
- http://sourceforge.net/apps/trac/bigdata/ticket/377 (Optimize OPTIONALs with multiple statement patterns.)
- http://sourceforge.net/apps/trac/bigdata/ticket/380 (Native SPARQL evaluation on cluster)
- http://sourceforge.net/apps/trac/bigdata/ticket/387 (Cluster does not compute closure)
- http://sourceforge.net/apps/trac/bigdata/ticket/395 (HTree hash join performance)
- http://sourceforge.net/apps/trac/bigdata/ticket/401 (inline xsd:unsigned datatypes)
- http://sourceforge.net/apps/trac/bigdata/ticket/408 (xsd:string cast fails for non-numeric data)
- http://sourceforge.net/apps/trac/bigdata/ticket/421 (New query hints model.)
- http://sourceforge.net/apps/trac/bigdata/ticket/431 (Use of read-only tx per query defeats cache on cluster)

1.0.3:

- http://sourceforge.net/apps/trac/bigdata/ticket/217 (BTreeCounters does not track bytes released)
- http://sourceforge.net/apps/trac/bigdata/ticket/269 (Refactor performance counters using accessor interface)
- http://sourceforge.net/apps/trac/bigdata/ticket/329 (B+Tree should delete bloom filter when it is disabled.)
- http://sourceforge.net/apps/trac/bigdata/ticket/372 (RWStore does not prune the CommitRecordIndex)
- http://sourceforge.net/apps/trac/bigdata/ticket/375 (Persistent memory leaks (RWStore/DISK))
- http://sourceforge.net/apps/trac/bigdata/ticket/385 (FastRDFValueCoder2: ArrayIndexOutOfBoundsException)
- http://sourceforge.net/apps/trac/bigdata/ticket/391 (Release age advanced on WORM mode journal)
- http://sourceforge.net/apps/trac/bigdata/ticket/392 (Add a DELETE by access path method to the NanoSparqlServer)
- http://sourceforge.net/apps/trac/bigdata/ticket/393 (Add "context-uri" request parameter to specify the default context for INSERT in the REST API)
- http://sourceforge.net/apps/trac/bigdata/ticket/394 (log4j configuration error message in WAR deployment)
- http://sourceforge.net/apps/trac/bigdata/ticket/399 (Add a fast range count method to the REST API)
- http://sourceforge.net/apps/trac/bigdata/ticket/422 (Support temp triple store wrapped by a BigdataSail)
- http://sourceforge.net/apps/trac/bigdata/ticket/424 (NQuads support for NanoSparqlServer)
- http://sourceforge.net/apps/trac/bigdata/ticket/425 (Bug fix to DEFAULT_RDF_FORMAT for bulk data loader in scale-out)
- http://sourceforge.net/apps/trac/bigdata/ticket/426 (Support either lockfile (procmail) and dotlockfile (liblockfile1) in scale-out)
- http://sourceforge.net/apps/trac/bigdata/ticket/427 (BigdataSail#getReadOnlyConnection() race condition with concurrent commit)
- http://sourceforge.net/apps/trac/bigdata/ticket/435 (Address is 0L)
- http://sourceforge.net/apps/trac/bigdata/ticket/436 (TestMROWTransactions failure in CI)

1.0.2:

- http://sourceforge.net/apps/trac/bigdata/ticket/32 (Query time expansion of (foo rdf:type rdfs:Resource) drags in SPORelation for scale-out.)
- http://sourceforge.net/apps/trac/bigdata/ticket/181 (Scale-out LUBM "how to" in wiki and build.xml are out of date.)
- http://sourceforge.net/apps/trac/bigdata/ticket/356 (Query not terminated by error.)
- http://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.)
- http://sourceforge.net/apps/trac/bigdata/ticket/361 (IRunningQuery not closed promptly.)
- http://sourceforge.net/apps/trac/bigdata/ticket/371 (DataLoader fails to load resources available from the classpath.)
- http://sourceforge.net/apps/trac/bigdata/ticket/376 (Support for the streaming of bigdata IBindingSets into a sparql query.)
- http://sourceforge.net/apps/trac/bigdata/ticket/378 (ClosedByInterruptException during heavy query mix.)
- http://sourceforge.net/apps/trac/bigdata/ticket/379 (NotSerializableException for SPOAccessPath.)
- http://sourceforge.net/apps/trac/bigdata/ticket/382 (Change dependencies to Apache River 2.2.0)

1.0.1: (*)

- http://sourceforge.net/apps/trac/bigdata/ticket/107 (Unicode clean schema names in the sparse row store)
- http://sourceforge.net/apps/trac/bigdata/ticket/124 (TermIdEncoder should use more bits for scale-out)
- http://sourceforge.net/apps/trac/bigdata/ticket/225 (OSX requires specialized performance counter collection classes)
- http://sourceforge.net/apps/trac/bigdata/ticket/348 (BigdataValueFactory.asValue() must return new instance when DummyIV is used)
- http://sourceforge.net/apps/trac/bigdata/ticket/349 (TermIdEncoder limits Journal to 2B distinct RDF Values per triple/quad store instance)
- http://sourceforge.net/apps/trac/bigdata/ticket/351 (SPO not Serializable exception in SIDS mode (scale-out))
- http://sourceforge.net/apps/trac/bigdata/ticket/352 (ClassCastException when querying with binding-values that are not known to the database)
- http://sourceforge.net/apps/trac/bigdata/ticket/353 (UnsupportedOperatorException for some SPARQL queries)
- http://sourceforge.net/apps/trac/bigdata/ticket/355 (Query failure when comparing with non materialized value)
- http://sourceforge.net/apps/trac/bigdata/ticket/357 (RWStore reports "FixedAllocator returning null address, with freeBits".)
- http://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.)
- http://sourceforge.net/apps/trac/bigdata/ticket/362 (log4j - slf4j bridge.)

For more information about bigdata(R), please see the following links:

[1] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Main_Page
[2] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=GettingStarted
[3] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Roadmap
[4] http://www.bigdata.com/bigdata/docs/api/
[5] http://sourceforge.net/projects/bigdata/
[6] http://www.bigdata.com/blog
[7] http://www.systap.com/bigdata.htm
[8] http://sourceforge.net/projects/bigdata/files/bigdata/
[9] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=DataMigration

About bigdata:

Bigdata(R) is a horizontally-scaled, general-purpose storage and computing fabric for ordered data (B+Trees), designed to operate on either a single server or a cluster of commodity hardware. Bigdata(R) uses dynamically partitioned key-range shards in order to remove any realistic scaling limits: in principle, bigdata(R) may be deployed on 10s, 100s, or even thousands of machines, and new capacity may be added incrementally without requiring the full reload of all data. The bigdata(R) RDF database supports RDFS and OWL Lite reasoning, high-level query (SPARQL), and datum-level provenance.
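For readers trying the headline feature, here is a minimal sketch of driving the new SPARQL 1.1 UPDATE support over HTTP using only the JDK. The endpoint path (/bigdata/sparql) and the update form parameter follow the SPARQL protocol conventions used by the NanoSparqlServer REST API as documented on the wiki; treat both as assumptions to verify against your own deployment.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

// Sketch: submit a SPARQL 1.1 UPDATE to a NanoSparqlServer instance.
public class UpdateExample {

    public static void main(final String[] args) throws Exception {
        // Assumed default WAR deployment; adjust host, port, and path as needed.
        final String endpoint = "http://localhost:8080/bigdata/sparql";
        final String update =
            "INSERT DATA { <http://example.org/a> <http://example.org/p> \"hello\" }";

        // SPARQL 1.1 protocol: the update goes in a form-encoded "update" parameter.
        final byte[] body = ("update=" + URLEncoder.encode(update, "UTF-8")).getBytes("UTF-8");

        final HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        final OutputStream os = conn.getOutputStream();
        try {
            os.write(body);
        } finally {
            os.close();
        }
        // A 200 response indicates the update was applied.
        System.out.println("HTTP " + conn.getResponseCode());
        conn.disconnect();
    }
}
```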
From: Bryan T. <br...@sy...> - 2012-07-02 18:12:52
This is a minor version release of bigdata(R).

Bigdata is a horizontally-scaled, open-source architecture for indexed data with an emphasis on RDF, capable of loading 1B triples in under one hour on a 15-node cluster. Bigdata operates in both a single-machine mode (Journal) and a cluster mode (Federation). The Journal provides fast, scalable ACID indexed storage for very large data sets, up to 50 billion triples/quads. The Federation provides fast, scalable, shard-wise parallel indexed storage using dynamic sharding and shard-wise ACID updates, with incremental cluster size growth. Both platforms support fully concurrent readers with snapshot isolation. Distributed processing offers greater throughput but does not reduce query or update latency. Choose the Journal when the anticipated scale and throughput requirements permit. Choose the Federation when the administrative and machine overhead associated with operating a cluster is an acceptable tradeoff for essentially unlimited data scaling and throughput.

See [1,2,8] for instructions on installing bigdata(R), [4] for the javadoc, and [3,5,6] for news, questions, and the latest developments. For more information about SYSTAP, LLC and bigdata, see [7].

Starting with the 1.0.0 release, we offer a WAR artifact [8] for easy installation of the single-machine RDF database. For custom development and cluster installations we recommend checking out the code from SVN using the tag for this release. The code will build automatically under Eclipse. You can also build the code using the ant script. The cluster installer requires the use of the ant script.

You can download the WAR from: http://sourceforge.net/projects/bigdata/

You can check out this release from: https://bigdata.svn.sourceforge.net/svnroot/bigdata/tags/BIGDATA_RELEASE_1_2_1

New features:

- SPARQL 1.1 UPDATE
- SPARQL 1.1 Service Description
- SPARQL 1.1 Basic Federated Query
- New integration point for custom services (ServiceRegistry)
- Remote Java client for NanoSparqlServer
- Sesame 2.6.3
- Ganglia integration (cluster)
- Performance improvements (cluster)
- MemoryManager mode for the Journal (native memory Journal)

Feature summary:

- Single-machine data storage to ~50B triples/quads (RWStore);
- Clustered data storage is essentially unlimited;
- Simple embedded and/or webapp deployment (NanoSparqlServer);
- Triples, quads, or triples with provenance (SIDs);
- Fast RDFS+ inference and truth maintenance;
- Fast 100% native SPARQL 1.1 evaluation;
- Integrated "analytic" query package;
- 100% Java memory manager leverages the JVM native heap (no GC).

Road map [3]:

- SPARQL 1.1 property paths (last missing feature for SPARQL 1.1);
- Runtime Query Optimizer for Analytic Query mode;
- Simplified deployment, configuration, and administration for clusters; and
- High availability for the journal and the cluster.

Change log:

Note: Versions with (*) MAY require data migration. For details, see [9].

1.2.1:

- http://sourceforge.net/apps/trac/bigdata/ticket/533 (Review materialization for inline IVs)
- http://sourceforge.net/apps/trac/bigdata/ticket/539 (NotMaterializedException with REGEX and Vocab)
- http://sourceforge.net/apps/trac/bigdata/ticket/540 (SPARQL UPDATE using NSS via index.html)
- http://sourceforge.net/apps/trac/bigdata/ticket/541 (MemoryManaged backed Journal mode)
- http://sourceforge.net/apps/trac/bigdata/ticket/546 (Index cache for Journal)
- http://sourceforge.net/apps/trac/bigdata/ticket/549 (BTree can not be cast to Name2Addr (MemStore recycler))
- http://sourceforge.net/apps/trac/bigdata/ticket/550 (NPE in Leaf.getKey() : root cause was user error)
- http://sourceforge.net/apps/trac/bigdata/ticket/558 (SPARQL INSERT not working in same request after INSERT DATA)
- http://sourceforge.net/apps/trac/bigdata/ticket/562 (Sub-select in INSERT cause NPE in UpdateExprBuilder)
- http://sourceforge.net/apps/trac/bigdata/ticket/563 (DISTINCT ORDER BY)
- http://sourceforge.net/apps/trac/bigdata/ticket/567 (Failure to set cached value on IV results in incorrect behavior for complex UPDATE operation)
- http://sourceforge.net/apps/trac/bigdata/ticket/568 (DELETE WHERE fails with Java AssertionError)
- http://sourceforge.net/apps/trac/bigdata/ticket/569 (LOAD-CREATE-LOAD using virgin journal fails with "Graph exists" exception)
- http://sourceforge.net/apps/trac/bigdata/ticket/571 (DELETE/INSERT WHERE handling of blank nodes)
- http://sourceforge.net/apps/trac/bigdata/ticket/573 (NullPointerException when attempting to INSERT DATA containing a blank node)

For more information about bigdata(R), please see the following links:

[1] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Main_Page
[2] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=GettingStarted
[3] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Roadmap
[4] http://www.bigdata.com/bigdata/docs/api/
[5] http://sourceforge.net/projects/bigdata/
[6] http://www.bigdata.com/blog
[7] http://www.systap.com/bigdata.htm
[8] http://sourceforge.net/projects/bigdata/files/bigdata/
[9] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=DataMigration

About bigdata:

Bigdata is a horizontally-scaled, general-purpose storage and computing fabric for ordered data (B+Trees), designed to operate on either a single server or a cluster of commodity hardware. Bigdata uses dynamically partitioned key-range shards in order to remove any realistic scaling limits: in principle, bigdata may be deployed on 10s, 100s, or even thousands of machines, and new capacity may be added incrementally without requiring the full reload of all data. The bigdata RDF database supports RDFS and OWL Lite reasoning, high-level query (SPARQL), and datum-level provenance.
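To make the Journal-versus-Federation choice concrete, here is a minimal embedded single-machine (Journal) sketch through the Sesame API. It is a sketch under assumptions: the journal-file property key and the BigdataSail/BigdataSailRepository usage are taken from the bigdata javadoc [4] for this release line, and /tmp/bigdata.jnl is a placeholder path; verify both against your release.

```java
import java.util.Properties;

import org.openrdf.model.URI;
import org.openrdf.model.ValueFactory;
import org.openrdf.repository.Repository;
import org.openrdf.repository.RepositoryConnection;

import com.bigdata.rdf.sail.BigdataSail;
import com.bigdata.rdf.sail.BigdataSailRepository;

// Sketch: stand up a Journal-backed store behind the Sesame Repository API.
public class JournalExample {

    public static void main(final String[] args) throws Exception {
        final Properties props = new Properties();
        // Backing file for the Journal; the store is created on first use.
        // Property key assumed from the javadoc (com.bigdata.journal.Options.FILE).
        props.setProperty("com.bigdata.journal.AbstractJournal.file", "/tmp/bigdata.jnl");

        final BigdataSail sail = new BigdataSail(props);
        final Repository repo = new BigdataSailRepository(sail);
        repo.initialize();

        final RepositoryConnection cxn = repo.getConnection();
        try {
            final ValueFactory vf = repo.getValueFactory();
            final URI s = vf.createURI("http://example.org/a");
            final URI p = vf.createURI("http://example.org/p");
            cxn.add(s, p, vf.createLiteral("hello"));
            cxn.commit();
        } finally {
            cxn.close();
            repo.shutDown();
        }
    }
}
```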
From: Bryan T. <br...@sy...> - 2012-06-22 12:46:14
Per [1], SourceForge is retiring its hosted applications. For bigdata, this includes both the trac and wiki instances. We will have to migrate these resources by 1 Sept 2012. There will doubtless be a period during which we have broken links to the old wiki and trac locations.

Thanks,
Bryan

[1] http://sourceforge.net/blog/hosted-apps-retirement/
[2] https://sourceforge.net/apps/trac/bigdata/
[3] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Main_Page
From: Peter A. <ans...@gm...> - 2012-06-01 00:37:08
Thanks, see ticket #559 [1].

[1] https://sourceforge.net/apps/trac/bigdata/ticket/559

On 1 June 2012 09:52, Bryan Thompson <br...@sy...> wrote:

> Happy to help. Can you file a ticket for this and assign it to me?
>
> Bryan
>
> Peter Ansell <ans...@gm...> wrote:
>
>> Hi all,
>>
>> Sesame 2.6.6 was released in the last 24 hours. It contains an NQuads format constant, RDFFormat.NQUADS, that could be useful in unifying the 4 or so different Rio parsers, as the first step to getting nquads into the upstream sesame distribution [1]. I am currently trying to get Sesametools and Any23 to standardise on RDFFormat.NQUADS, and it would be nice to get BigData into the mix as well to make the nquads parsers interoperable/interchangeable.
>>
>> Thanks,
>> Peter
>>
>> [1] http://www.openrdf.org/issues/browse/SES-802
From: Bryan T. <br...@sy...> - 2012-06-01 00:19:35
Happy to help. Can you file a ticket for this and assign it to me?

Bryan

Peter Ansell <ans...@gm...> wrote:

> Hi all,
>
> Sesame 2.6.6 was released in the last 24 hours. It contains an NQuads format constant, RDFFormat.NQUADS, that could be useful in unifying the 4 or so different Rio parsers, as the first step to getting nquads into the upstream sesame distribution [1]. I am currently trying to get Sesametools and Any23 to standardise on RDFFormat.NQUADS, and it would be nice to get BigData into the mix as well to make the nquads parsers interoperable/interchangeable.
>
> Thanks,
> Peter
>
> [1] http://www.openrdf.org/issues/browse/SES-802
From: Peter A. <ans...@gm...> - 2012-05-31 23:48:47
Hi all,

Sesame 2.6.6 was released in the last 24 hours. It contains an NQuads format constant, RDFFormat.NQUADS, that could be useful in unifying the 4 or so different Rio parsers, as the first step to getting nquads into the upstream sesame distribution [1]. I am currently trying to get Sesametools and Any23 to standardise on RDFFormat.NQUADS, and it would be nice to get BigData into the mix as well to make the nquads parsers interoperable/interchangeable.

Thanks,
Peter

[1] http://www.openrdf.org/issues/browse/SES-802
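To see what the shared constant buys, a sketch of parsing N-Quads through Rio follows. Note the assumption: per the message above, Sesame 2.6.6 adds only the RDFFormat.NQUADS constant, so an actual N-Quads parser implementation (e.g., the one from Sesametools) still has to be on the classpath for Rio.createParser() to resolve the format.

```java
import java.io.FileInputStream;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.Collection;

import org.openrdf.model.Statement;
import org.openrdf.rio.RDFFormat;
import org.openrdf.rio.RDFParser;
import org.openrdf.rio.Rio;
import org.openrdf.rio.helpers.StatementCollector;

// Sketch: parse an N-Quads file via Rio using the shared format constant,
// collecting statements (with their context) into memory.
public class NQuadsExample {

    public static void main(final String[] args) throws Exception {
        final Collection<Statement> statements = new ArrayList<Statement>();

        // Throws UnsupportedRDFormatException if no N-Quads parser is registered.
        final RDFParser parser = Rio.createParser(RDFFormat.NQUADS);
        parser.setRDFHandler(new StatementCollector(statements));

        final InputStream in = new FileInputStream(args[0]);
        try {
            // The base URI is required by the API but unused for N-Quads input.
            parser.parse(in, "http://example.org/base");
        } finally {
            in.close();
        }
        System.out.println(statements.size() + " statements parsed");
    }
}
```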
From: Peter A. <ans...@gm...> - 2012-04-05 03:13:35
Hi,

I find it useful when developing with Eclipse and/or Maven to have access to the sources for a library using the Maven library-version-sources.jar convention. I modified the ant build.xml file to generate one of these for me as needed:

```xml
<target name="sourceJar" depends="prepare" description="Generates the sources jar.">
    <jar destfile="${build.dir}/${version}-sources.jar">
        <fileset dir="${bigdata.dir}/bigdata/src/java" />
        <fileset dir="${bigdata.dir}/bigdata/src/samples" />
        <fileset dir="${bigdata.dir}/bigdata-ganglia/src/java" />
        <fileset dir="${bigdata.dir}/bigdata-gom/src/java" />
        <fileset dir="${bigdata.dir}/bigdata-jini/src/java" />
        <fileset dir="${bigdata.dir}/bigdata-rdf/src/java" />
        <fileset dir="${bigdata.dir}/bigdata-rdf/src/samples" />
        <fileset dir="${bigdata.dir}/bigdata-sails/src/java" />
        <fileset dir="${bigdata.dir}/bigdata-sails/src/samples" />
        <fileset dir="${bigdata.dir}/ctc-striterators/src/java" />
    </jar>
</target>
```

The resulting -sources.jar file is about 9.6MB, compared to 6.3MB for the basic .jar file.

Cheers,
Peter
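A usage note on the target above: invoking `ant sourceJar` writes ${build.dir}/${version}-sources.jar, and Eclipse and Maven generally attach a sources jar automatically only when it sits next to the binary jar and shares its base name plus the -sources suffix, so depending on your layout a rename or copy step may be needed.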
From: Bryan T. <br...@sy...> - 2012-04-02 13:56:03
All,

I have updated the wiki page for Federated Query [1] to also provide some more depth on how to write custom services [2]. I have also added some hooks in the 1.2.0 maintenance/development branch [3] to notify custom services when new mutable connections start. This makes it possible for services to register an IChangeLog listener and observe mutation events (statements added to or removed from the Sail). Hopefully, this will make it easier to integrate additional indexing or search facilities into bigdata.

Thanks,
Bryan

[1] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=FederatedQuery
[2] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=FederatedQuery#Custom_Services
[3] https://bigdata.svn.sourceforge.net/svnroot/bigdata/branches/BIGDATA_RELEASE_1_2_0
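To sketch what such a listener might look like, here is a minimal IChangeLog implementation. The method names (including the "transactionCommited" spelling) are my reading of the com.bigdata.rdf.changesets.IChangeLog interface in this release line, and the registration hook on the mutable connection is left as a comment; verify both against the wiki page [2] and the javadoc before relying on this.

```java
import com.bigdata.rdf.changesets.IChangeLog;
import com.bigdata.rdf.changesets.IChangeRecord;

// Sketch of a mutation observer that a custom service (e.g., an external
// search index) could register against a mutable Sail connection.
public class SearchIndexChangeLog implements IChangeLog {

    public void changeEvent(final IChangeRecord record) {
        // Called once per mutation; the record identifies the statement and
        // whether it was asserted or retracted (see IChangeRecord).
        System.out.println(record);
    }

    public void transactionCommited(final long commitTime) {
        // Flush whatever the external index buffered for this commit point.
    }

    public void transactionAborted() {
        // Discard any buffered work for the failed transaction.
    }

    // Registration happens on the mutable connection when it starts; see the
    // Custom_Services wiki section [2] for the exact hook in your release.
}
```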
From: Bryan T. <br...@sy...> - 2012-04-02 09:47:56
László,

The location of the RWStore.properties file is specified in bigdata/WEB-INF/web.xml by the following line:

```xml
<param-value>../webapps/bigdata/RWStore.properties</param-value>
```

The RWStore.properties file is in bigdata/RWStore.properties. If you are having problems, you are probably not in the 'bin' directory when you start the servlet engine. Per [1], you are advised to edit web.xml before starting bigdata, since relative path names for files are not as reliable. You should also edit RWStore.properties to ensure that the configuration conforms to your requirements. Please see [1] for guidance on setup of the bigdata WAR.

Thanks,
Bryan

[1] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=NanoSparqlServer#Servlet_Container_.28Tomcat.2C_etc.29

From: László Török [mailto:las...@eb...]
Sent: Monday, April 02, 2012 4:39 AM
To: big...@li...
Subject: [Bigdata-developers] bigdata 1.2 war throws an exception when deployed to Jetty 7.6

> Hi,
>
> I downloaded the latest .war distribution and dropped it into my jetty webapps folder. However, I get the following exception at startup (although the RWStore.properties is in place):
>
> java.lang.RuntimeException: Could not find file: ../webapps/bigdata/RWStore.properties
>     at com.bigdata.rdf.sail.webapp.BigdataRDFServletContextListener.openIndexManager(BigdataRDFServletContextListener.java:431)
>     ...
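For concreteness, the edit Bryan describes lands in a servlet context parameter in bigdata/WEB-INF/web.xml. The snippet below shows the shape of that edit with an absolute path, which removes the dependence on the startup working directory; the param-name shown is an assumption about the shipped web.xml, so check it against your copy.

```xml
<!-- bigdata/WEB-INF/web.xml: point the webapp at the property file by an
     absolute path so startup no longer depends on the working directory. -->
<context-param>
  <param-name>propertyFile</param-name> <!-- assumed name; verify in your WAR -->
  <param-value>/var/lib/bigdata/RWStore.properties</param-value> <!-- example absolute path -->
</context-param>
```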
From: László T. <las...@eb...> - 2012-04-02 08:54:37
Hi,

I downloaded the latest .war distribution and dropped it into my jetty webapps folder. However, I get the following exception at startup (although the RWStore.properties is in place):

```
2012-04-02 10:37:44.374:WARN:oejw.WebAppContext:Failed startup of context o.e.j.w.WebAppContext{/bigdata,file:/Users/laczoka/apps/jetty-distribution-7.6.0.v20120127/webapps/bigdata/},/Users/laczoka/apps/jetty-distribution-7.6.0.v20120127/webapps/bigdata.war
java.lang.RuntimeException: Could not find file: ../webapps/bigdata/RWStore.properties
    at com.bigdata.rdf.sail.webapp.BigdataRDFServletContextListener.openIndexManager(BigdataRDFServletContextListener.java:431)
    at com.bigdata.rdf.sail.webapp.BigdataRDFServletContextListener.contextInitialized(BigdataRDFServletContextListener.java:163)
    at org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:733)
    at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:233)
    at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1214)
    at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:676)
    at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:455)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:59)
    at org.eclipse.jetty.deploy.bindings.StandardStarter.processBinding(StandardStarter.java:36)
    at org.eclipse.jetty.deploy.AppLifeCycle.runBindings(AppLifeCycle.java:183)
    at org.eclipse.jetty.deploy.DeploymentManager.requestAppGoal(DeploymentManager.java:491)
    at org.eclipse.jetty.deploy.DeploymentManager.addApp(DeploymentManager.java:138)
    at org.eclipse.jetty.deploy.providers.ScanningAppProvider.fileAdded(ScanningAppProvider.java:142)
    at org.eclipse.jetty.deploy.providers.ScanningAppProvider$1.fileAdded(ScanningAppProvider.java:53)
    at org.eclipse.jetty.util.Scanner.reportAddition(Scanner.java:604)
    at org.eclipse.jetty.util.Scanner.reportDifferences(Scanner.java:535)
    at org.eclipse.jetty.util.Scanner.scan(Scanner.java:398)
    at org.eclipse.jetty.util.Scanner$1.run(Scanner.java:348)
    at java.util.TimerThread.mainLoop(Timer.java:512)
    at java.util.TimerThread.run(Timer.java:462)
2012-04-02 10:37:44.382:INFO:oejsl.ELContextCleaner:javax.el.BeanELResolver purged
2012-04-02 10:37:44.382:INFO:oejsh.ContextHandler:stopped o.e.j.w.WebAppContext{/bigdata,file:/Users/laczoka/apps/jetty-distribution-7.6.0.v20120127/webapps/bigdata/},/Users/laczoka/apps/jetty-distribution-7.6.0.v20120127/webapps/bigdata
2012-04-02 10:37:44.383:INFO:oejd.DeploymentManager:Deployable removed: App[o.e.j.w.WebAppContext{/bigdata,null},/Users/laczoka/apps/jetty-distribution-7.6.0.v20120127/webapps/bigdata,/Users/laczoka/apps/jetty-distribution-7.6.0.v20120127/webapps/bigdata]
```

My computer: Macbook Pro, OSX 10.6.8

Regards,

--
László Török
E-Business & Web Science Research Group
Universitaet der Bundeswehr Muenchen
Email: las...@eb...
Phone: +49-(0)89-6004-4850
Fax: +49-(0)89-6004-4620
www: http://www.unibw.de/ebusiness/ (group)
Skype: laczoka2000
Twitter: laczoka

Check out GoodRelations for E-Commerce on the Web of Linked Data!
Project Main Page: http://purl.org/goodrelations/
From: Bryan T. <br...@sy...> - 2012-04-01 13:47:44
This is a major version release of bigdata(R). Bigdata is a horizontally-scaled, open-source architecture for indexed data with an emphasis on RDF, capable of loading 1B triples in under one hour on a 15-node cluster. Bigdata operates in both a single-machine mode (Journal) and a cluster mode (Federation). The Journal provides fast, scalable, ACID indexed storage for very large data sets, up to 50 billion triples/quads. The Federation provides fast, scalable, shard-wise parallel indexed storage using dynamic sharding and shard-wise ACID updates, with incremental cluster size growth. Both platforms support fully concurrent readers with snapshot isolation. Distributed processing offers greater throughput but does not reduce query or update latency. Choose the Journal when the anticipated scale and throughput requirements permit. Choose the Federation when the administrative and machine overhead associated with operating a cluster is an acceptable tradeoff for essentially unlimited data scaling and throughput.

See [1,2,8] for instructions on installing bigdata(R), [4] for the javadoc, and [3,5,6] for news, questions, and the latest developments. For more information about SYSTAP, LLC and bigdata, see [7].

Starting with the 1.0.0 release, we offer a WAR artifact [8] for easy installation of the single-machine RDF database. For custom development and cluster installations we recommend checking out the code from SVN using the tag for this release. The code will build automatically under Eclipse. You can also build the code using the ant script; the cluster installer requires the use of the ant script.

You can download the WAR from: http://sourceforge.net/projects/bigdata/

You can check out this release from: https://bigdata.svn.sourceforge.net/svnroot/bigdata/tags/BIGDATA_RELEASE_1_2_0

New features:

- SPARQL 1.1 UPDATE
- SPARQL 1.1 Service Description
- SPARQL 1.1 Basic Federated Query
- New integration point for custom services (ServiceRegistry)
- Remote Java client for the NanoSparqlServer
- Sesame 2.6.3
- Ganglia integration (cluster)
- Performance improvements (cluster)

Feature summary:

- Single-machine data storage to ~50B triples/quads (RWStore);
- Clustered data storage is essentially unlimited;
- Simple embedded and/or webapp deployment (NanoSparqlServer);
- Triples, quads, or triples with provenance (SIDs);
- Fast RDFS+ inference and truth maintenance;
- Fast 100% native SPARQL 1.1 evaluation;
- Integrated "analytic" query package;
- 100% Java memory manager that leverages the JVM native heap (no GC).

Road map [3]:

- SPARQL 1.1 property paths (the last missing feature for SPARQL 1.1);
- Runtime Query Optimizer for the Analytic Query mode;
- Simplified deployment, configuration, and administration for clusters; and
- High availability for the journal and the cluster.

Change log:

Note: Versions with (*) MAY require data migration. For details, see [9].

1.2.0: (*)

- http://sourceforge.net/apps/trac/bigdata/ticket/92 (Monitoring webapp)
- http://sourceforge.net/apps/trac/bigdata/ticket/267 (Support evaluation of 3rd party operators)
- http://sourceforge.net/apps/trac/bigdata/ticket/337 (Compact and efficient movement of binding sets between nodes.)
- http://sourceforge.net/apps/trac/bigdata/ticket/433 (Cluster leaks threads under read-only index operations: DGC thread leak)
- http://sourceforge.net/apps/trac/bigdata/ticket/437 (Thread-local cache combined with unbounded thread pools causes effective memory leak: termCache memory leak & thread-local buffers)
- http://sourceforge.net/apps/trac/bigdata/ticket/438 (KeyBeforePartitionException on cluster)
- http://sourceforge.net/apps/trac/bigdata/ticket/439 (Class loader problem)
- http://sourceforge.net/apps/trac/bigdata/ticket/441 (Ganglia integration)
- http://sourceforge.net/apps/trac/bigdata/ticket/443 (Logger for RWStore transaction service and recycler)
- http://sourceforge.net/apps/trac/bigdata/ticket/444 (SPARQL query can fail to notice when IRunningQuery.isDone() on cluster)
- http://sourceforge.net/apps/trac/bigdata/ticket/445 (RWStore does not track tx release correctly)
- http://sourceforge.net/apps/trac/bigdata/ticket/446 (HTTP Repository broken with bigdata 1.1.0)
- http://sourceforge.net/apps/trac/bigdata/ticket/448 (SPARQL 1.1 UPDATE)
- http://sourceforge.net/apps/trac/bigdata/ticket/449 (SPARQL 1.1 Federation extension)
- http://sourceforge.net/apps/trac/bigdata/ticket/451 (Serialization error in SIDs mode on cluster)
- http://sourceforge.net/apps/trac/bigdata/ticket/454 (Global Row Store Read on Cluster uses Tx)
- http://sourceforge.net/apps/trac/bigdata/ticket/456 (IExtension implementations do point lookups on lexicon)
- http://sourceforge.net/apps/trac/bigdata/ticket/457 ("No such index" on cluster under concurrent query workload)
- http://sourceforge.net/apps/trac/bigdata/ticket/458 (Java level deadlock in DS)
- http://sourceforge.net/apps/trac/bigdata/ticket/460 (Uncaught interrupt resolving RDF terms)
- http://sourceforge.net/apps/trac/bigdata/ticket/461 (KeyAfterPartitionException / KeyBeforePartitionException on cluster)
- http://sourceforge.net/apps/trac/bigdata/ticket/463 (NoSuchVocabularyItem with LUBMVocabulary for DerivedNumericsExtension)
- http://sourceforge.net/apps/trac/bigdata/ticket/464 (Query statistics do not update correctly on cluster)
- http://sourceforge.net/apps/trac/bigdata/ticket/465 (Too many GRS reads on cluster)
- http://sourceforge.net/apps/trac/bigdata/ticket/469 (Sail does not flush assertion buffers before query)
- http://sourceforge.net/apps/trac/bigdata/ticket/472 (acceptTaskService pool size on cluster)
- http://sourceforge.net/apps/trac/bigdata/ticket/475 (Optimize serialization for query messages on cluster)
- http://sourceforge.net/apps/trac/bigdata/ticket/476 (Test suite for writeCheckpoint() and recycling for BTree/HTree)
- http://sourceforge.net/apps/trac/bigdata/ticket/478 (Cluster does not map input solution(s) across shards)
- http://sourceforge.net/apps/trac/bigdata/ticket/480 (Error releasing deferred frees using 1.0.6 against a 1.0.4 journal)
- http://sourceforge.net/apps/trac/bigdata/ticket/481 (PhysicalAddressResolutionException against 1.0.6)
- http://sourceforge.net/apps/trac/bigdata/ticket/482 (RWStore reset() should be thread-safe for concurrent readers)
- http://sourceforge.net/apps/trac/bigdata/ticket/484 (Java API for NanoSparqlServer REST API)
- http://sourceforge.net/apps/trac/bigdata/ticket/491 (AbstractTripleStore.destroy() does not clear the locator cache)
- http://sourceforge.net/apps/trac/bigdata/ticket/492 (Empty chunk in ThickChunkMessage (cluster))
- http://sourceforge.net/apps/trac/bigdata/ticket/493 (Virtual Graphs)
- http://sourceforge.net/apps/trac/bigdata/ticket/496 (Sesame 2.6.3)
- http://sourceforge.net/apps/trac/bigdata/ticket/497 (Implement STRBEFORE, STRAFTER, and REPLACE)
- http://sourceforge.net/apps/trac/bigdata/ticket/498 (Bring bigdata RDF/XML parser up to openrdf 2.6.3.)
- http://sourceforge.net/apps/trac/bigdata/ticket/500 (SPARQL 1.1 Service Description)
- http://www.openrdf.org/issues/browse/SES-884 (Aggregation with an empty solution set as input should produce an empty solution as output)
- http://www.openrdf.org/issues/browse/SES-862 (Incorrect error handling for SPARQL aggregation; fix in 2.6.1)
- http://www.openrdf.org/issues/browse/SES-873 (Order the same Blank Nodes together in ORDER BY)
- http://sourceforge.net/apps/trac/bigdata/ticket/501 (SPARQL 1.1 BINDINGS are ignored)
- http://sourceforge.net/apps/trac/bigdata/ticket/503 (Bigdata2Sesame2BindingSetIterator throws QueryEvaluationException where it should throw NoSuchElementException)
- http://sourceforge.net/apps/trac/bigdata/ticket/504 (UNION with Empty Group Pattern)
- http://sourceforge.net/apps/trac/bigdata/ticket/505 (Exception when using SPARQL sort & statement identifiers)
- http://sourceforge.net/apps/trac/bigdata/ticket/506 (Load, closure and query performance in 1.1.x versus 1.0.x)
- http://sourceforge.net/apps/trac/bigdata/ticket/508 (LIMIT causes hash join utility to log errors)
- http://sourceforge.net/apps/trac/bigdata/ticket/513 (Expose the LexiconConfiguration to Function BOPs)
- http://sourceforge.net/apps/trac/bigdata/ticket/515 (Query with two "FILTER NOT EXISTS" expressions returns no results)
- http://sourceforge.net/apps/trac/bigdata/ticket/516 (REGEXBOp should cache the Pattern when it is a constant)
- http://sourceforge.net/apps/trac/bigdata/ticket/517 (Java 7 Compiler Compatibility)
- http://sourceforge.net/apps/trac/bigdata/ticket/518 (Review function bop subclass hierarchy, optimize datatype bop, etc.)
- http://sourceforge.net/apps/trac/bigdata/ticket/520 (CONSTRUCT WHERE shortcut)
- http://sourceforge.net/apps/trac/bigdata/ticket/521 (Incremental materialization of Tuple and Graph query results)
- http://sourceforge.net/apps/trac/bigdata/ticket/525 (Modify the IChangeLog interface to support multiple agents)
- http://sourceforge.net/apps/trac/bigdata/ticket/527 (Expose timestamp of LexiconRelation to function bops)
- http://sourceforge.net/apps/trac/bigdata/ticket/532 (ClassCastException during hash join (can not be cast to TermId))
- http://sourceforge.net/apps/trac/bigdata/ticket/533 (Review materialization for inline IVs)
- http://sourceforge.net/apps/trac/bigdata/ticket/534 (BSBM BI Q5 error using MERGE JOIN)

1.1.0 (*)

- http://sourceforge.net/apps/trac/bigdata/ticket/23 (Lexicon joins)
- http://sourceforge.net/apps/trac/bigdata/ticket/109 (Store large literals as "blobs")
- http://sourceforge.net/apps/trac/bigdata/ticket/181 (Scale-out LUBM "how to" in wiki and build.xml are out of date.)
- http://sourceforge.net/apps/trac/bigdata/ticket/203 (Implement a persistence-capable hash table to support analytic query)
- http://sourceforge.net/apps/trac/bigdata/ticket/209 (AccessPath should visit binding sets rather than elements for high level query.)
- http://sourceforge.net/apps/trac/bigdata/ticket/227 (SliceOp appears to be necessary when operator plan should suffice without)
- http://sourceforge.net/apps/trac/bigdata/ticket/232 (Bottom-up evaluation semantics).
- http://sourceforge.net/apps/trac/bigdata/ticket/246 (Derived xsd numeric data types must be inlined as extension types.)
- http://sourceforge.net/apps/trac/bigdata/ticket/254 (Revisit pruning of intermediate variable bindings during query execution)
- http://sourceforge.net/apps/trac/bigdata/ticket/261 (Lift conditions out of subqueries.)
- http://sourceforge.net/apps/trac/bigdata/ticket/300 (Native ORDER BY)
- http://sourceforge.net/apps/trac/bigdata/ticket/324 (Inline predeclared URIs and namespaces in 2-3 bytes)
- http://sourceforge.net/apps/trac/bigdata/ticket/330 (NanoSparqlServer does not locate "html" resources when run from jar)
- http://sourceforge.net/apps/trac/bigdata/ticket/334 (Support inlining of unicode data in the statement indices.)
- http://sourceforge.net/apps/trac/bigdata/ticket/364 (Scalable default graph evaluation)
- http://sourceforge.net/apps/trac/bigdata/ticket/368 (Prune variable bindings during query evaluation)
- http://sourceforge.net/apps/trac/bigdata/ticket/370 (Direct translation of openrdf AST to bigdata AST)
- http://sourceforge.net/apps/trac/bigdata/ticket/373 (Fix StrBOp and other IValueExpressions)
- http://sourceforge.net/apps/trac/bigdata/ticket/377 (Optimize OPTIONALs with multiple statement patterns.)
- http://sourceforge.net/apps/trac/bigdata/ticket/380 (Native SPARQL evaluation on cluster)
- http://sourceforge.net/apps/trac/bigdata/ticket/387 (Cluster does not compute closure)
- http://sourceforge.net/apps/trac/bigdata/ticket/395 (HTree hash join performance)
- http://sourceforge.net/apps/trac/bigdata/ticket/401 (inline xsd:unsigned datatypes)
- http://sourceforge.net/apps/trac/bigdata/ticket/408 (xsd:string cast fails for non-numeric data)
- http://sourceforge.net/apps/trac/bigdata/ticket/421 (New query hints model.)
- http://sourceforge.net/apps/trac/bigdata/ticket/431 (Use of read-only tx per query defeats cache on cluster)

1.0.3

- http://sourceforge.net/apps/trac/bigdata/ticket/217 (BTreeCounters does not track bytes released)
- http://sourceforge.net/apps/trac/bigdata/ticket/269 (Refactor performance counters using accessor interface)
- http://sourceforge.net/apps/trac/bigdata/ticket/329 (B+Tree should delete bloom filter when it is disabled.)
- http://sourceforge.net/apps/trac/bigdata/ticket/372 (RWStore does not prune the CommitRecordIndex)
- http://sourceforge.net/apps/trac/bigdata/ticket/375 (Persistent memory leaks (RWStore/DISK))
- http://sourceforge.net/apps/trac/bigdata/ticket/385 (FastRDFValueCoder2: ArrayIndexOutOfBoundsException)
- http://sourceforge.net/apps/trac/bigdata/ticket/391 (Release age advanced on WORM mode journal)
- http://sourceforge.net/apps/trac/bigdata/ticket/392 (Add a DELETE by access path method to the NanoSparqlServer)
- http://sourceforge.net/apps/trac/bigdata/ticket/393 (Add "context-uri" request parameter to specify the default context for INSERT in the REST API)
- http://sourceforge.net/apps/trac/bigdata/ticket/394 (log4j configuration error message in WAR deployment)
- http://sourceforge.net/apps/trac/bigdata/ticket/399 (Add a fast range count method to the REST API)
- http://sourceforge.net/apps/trac/bigdata/ticket/422 (Support temp triple store wrapped by a BigdataSail)
- http://sourceforge.net/apps/trac/bigdata/ticket/424 (NQuads support for NanoSparqlServer)
- http://sourceforge.net/apps/trac/bigdata/ticket/425 (Bug fix to DEFAULT_RDF_FORMAT for bulk data loader in scale-out)
- http://sourceforge.net/apps/trac/bigdata/ticket/426 (Support either lockfile (procmail) or dotlockfile (liblockfile1) in scale-out)
- http://sourceforge.net/apps/trac/bigdata/ticket/427 (BigdataSail#getReadOnlyConnection() race condition with concurrent commit)
- http://sourceforge.net/apps/trac/bigdata/ticket/435 (Address is 0L)
- http://sourceforge.net/apps/trac/bigdata/ticket/436 (TestMROWTransactions failure in CI)

1.0.2

- http://sourceforge.net/apps/trac/bigdata/ticket/32 (Query time expansion of (foo rdf:type rdfs:Resource) drags in SPORelation for scale-out.)
- http://sourceforge.net/apps/trac/bigdata/ticket/181 (Scale-out LUBM "how to" in wiki and build.xml are out of date.)
- http://sourceforge.net/apps/trac/bigdata/ticket/356 (Query not terminated by error.)
- http://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.)
- http://sourceforge.net/apps/trac/bigdata/ticket/361 (IRunningQuery not closed promptly.)
- http://sourceforge.net/apps/trac/bigdata/ticket/371 (DataLoader fails to load resources available from the classpath.)
- http://sourceforge.net/apps/trac/bigdata/ticket/376 (Support for the streaming of bigdata IBindingSets into a sparql query.)
- http://sourceforge.net/apps/trac/bigdata/ticket/378 (ClosedByInterruptException during heavy query mix.)
- http://sourceforge.net/apps/trac/bigdata/ticket/379 (NotSerializableException for SPOAccessPath.)
- http://sourceforge.net/apps/trac/bigdata/ticket/382 (Change dependencies to Apache River 2.2.0)

1.0.1 (*)

- http://sourceforge.net/apps/trac/bigdata/ticket/107 (Unicode clean schema names in the sparse row store).
- http://sourceforge.net/apps/trac/bigdata/ticket/124 (TermIdEncoder should use more bits for scale-out).
- http://sourceforge.net/apps/trac/bigdata/ticket/225 (OSX requires specialized performance counter collection classes).
- http://sourceforge.net/apps/trac/bigdata/ticket/348 (BigdataValueFactory.asValue() must return new instance when DummyIV is used).
- http://sourceforge.net/apps/trac/bigdata/ticket/349 (TermIdEncoder limits Journal to 2B distinct RDF Values per triple/quad store instance).
- http://sourceforge.net/apps/trac/bigdata/ticket/351 (SPO not Serializable exception in SIDS mode (scale-out)).
- http://sourceforge.net/apps/trac/bigdata/ticket/352 (ClassCastException when querying with binding-values that are not known to the database).
- http://sourceforge.net/apps/trac/bigdata/ticket/353 (UnsupportedOperatorException for some SPARQL queries).
- http://sourceforge.net/apps/trac/bigdata/ticket/355 (Query failure when comparing with non materialized value).
- http://sourceforge.net/apps/trac/bigdata/ticket/357 (RWStore reports "FixedAllocator returning null address, with freeBits".)
- http://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.)
- http://sourceforge.net/apps/trac/bigdata/ticket/362 (log4j - slf4j bridge.)

For more information about bigdata(R), please see the following links:

[1] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Main_Page
[2] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=GettingStarted
[3] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Roadmap
[4] http://www.bigdata.com/bigdata/docs/api/
[5] http://sourceforge.net/projects/bigdata/
[6] http://www.bigdata.com/blog
[7] http://www.systap.com/bigdata.htm
[8] http://sourceforge.net/projects/bigdata/files/bigdata/
[9] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=DataMigration

About bigdata:

Bigdata(r) is a horizontally-scaled, general-purpose storage and computing fabric for ordered data (B+Trees), designed to operate on either a single server or a cluster of commodity hardware. Bigdata(r) uses dynamically partitioned key-range shards in order to remove any realistic scaling limits - in principle, bigdata(r) may be deployed on 10s, 100s, or even thousands of machines, and new capacity may be added incrementally without requiring a full reload of all data. The bigdata(r) RDF database supports RDFS and OWL Lite reasoning, high-level query (SPARQL), and datum-level provenance.
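[The SPARQL 1.1 UPDATE support announced above is exposed through the NanoSparqlServer. A minimal sketch of a remote update from plain Java follows; the endpoint URL (a NanoSparqlServer at /bigdata/sparql on localhost) and the form-encoded "update" parameter are assumptions based on the then-draft SPARQL 1.1 Protocol, not values confirmed against the 1.2.0 REST API documentation:

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.net.URLEncoder;

    public class SparqlUpdateExample {
        public static void main(String[] args) throws Exception {
            // Endpoint is an assumption: a NanoSparqlServer deployed at /bigdata.
            final URL url = new URL("http://localhost:8080/bigdata/sparql");
            final String update =
                "INSERT DATA { <http://example.org/s> <http://example.org/p> \"o\" }";
            // SPARQL 1.1 Protocol style: POST the update as a form parameter.
            final byte[] body =
                ("update=" + URLEncoder.encode(update, "UTF-8")).getBytes("UTF-8");
            final HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
            final OutputStream out = conn.getOutputStream();
            try {
                out.write(body);
            } finally {
                out.close();
            }
            // 200 indicates the update was accepted and applied.
            System.out.println("HTTP " + conn.getResponseCode());
        }
    }

The new remote Java client for the NanoSparqlServer (ticket 484 above) wraps this kind of request; the raw HTTP form is shown only because it has no dependencies.]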
From: Bryan T. <br...@sy...> - 2012-04-01 13:34:12
I have tagged the 1.2.0 release:

https://bigdata.svn.sourceforge.net/svnroot/bigdata/tags/BIGDATA_RELEASE_1_2_0

There is a new development branch for 1.2.0:

https://bigdata.svn.sourceforge.net/svnroot/bigdata/branches/BIGDATA_RELEASE_1_2_0

Please switch to the new development branch now. I will be sending out the 1.2.0 release announcements shortly. It is already available for download.

Thanks,
Bryan
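[For a working copy that is still on the old development branch, the move Bryan asks for is a single command, run at the working-copy root; the URL is the one from his message:

    svn switch https://bigdata.svn.sourceforge.net/svnroot/bigdata/branches/BIGDATA_RELEASE_1_2_0

svn switch rewrites the working copy in place, preserving uncommitted local changes where it can.]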
From: Peter A. <ans...@gm...> - 2012-03-22 04:13:42
Hi Bryan,

I have only had time to do some simple testing: starting the repository, getting Sail and Repository connections to it, and then shutting it down. I have been off work sick for a few days, but I will get back to it next week unless something else comes up.

Peter

On 22 March 2012 07:06, Bryan Thompson <br...@sy...> wrote:
> Peter,
>
> How's it going? Any problems running under Java 7?
>
> Thanks,
> Bryan
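[Peter's smoke test maps onto only a few lines of code. A sketch assuming the bigdata Sesame bindings (BigdataSail and BigdataSailRepository are classes in the distribution; the journal file name, property string, and the body of the test are illustrative):

    import java.util.Properties;
    import org.openrdf.repository.Repository;
    import org.openrdf.repository.RepositoryConnection;
    import com.bigdata.rdf.sail.BigdataSail;
    import com.bigdata.rdf.sail.BigdataSailRepository;

    public class SmokeTest {
        public static void main(String[] args) throws Exception {
            final Properties props = new Properties();
            // Journal file name is illustrative; created on first start.
            props.setProperty("com.bigdata.journal.AbstractJournal.file", "smoketest.jnl");
            final BigdataSail sail = new BigdataSail(props);        // start the repository
            final Repository repo = new BigdataSailRepository(sail); // Sesame Repository wrapper
            repo.initialize();
            final RepositoryConnection cxn = repo.getConnection();   // Repository connection
            try {
                System.out.println("connected, empty=" + cxn.isEmpty());
            } finally {
                cxn.close();
                repo.shutDown();                                      // shut it down
            }
        }
    }
]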
From: Bryan T. <br...@sy...> - 2012-03-21 21:07:25
Peter,

How's it going? Any problems running under Java 7?

Thanks,
Bryan

> -----Original Message-----
> From: Bryan Thompson
> Sent: Thursday, March 15, 2012 8:26 AM
> To: Bryan Thompson; Peter Ansell; big...@li...
> Subject: RE: [Bigdata-developers] Compile errors with bigdata 1.1.0 using openjdk-7
>
> Peter,
>
> I replicated your results using Oracle Java 7u3 with the source and
> target set to 1.7. I filed a ticket for this [1]. Most of these are
> trivial fixes. Apparently Java 7 does not permit private fields in
> outer/inner classes to be referenced. We are having this problem when
> referencing the private field in the outer class from a static inner
> class and vice versa.
>
> I'll take a look at the remaining compiler error now.
>
> Bryan
>
> [1] https://sourceforge.net/apps/trac/bigdata/ticket/517#comment:1
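[The diagnostics in Peter's log (quoted in full in the original report further down), e.g. "error: batchSize has private access in StripedCounters" against "t.batchSize = t.n = batchSize;", share one shape: a private member reached through a reference whose static type is a type variable. A minimal sketch, not taken from the bigdata sources, of the pattern javac 6 accepted and javac 7 rejects (behavior inferred from the log above):

    class Striped<T extends Striped<T>> {
        private int n; // private field of the declaring class
        void copyTo(final T t) {
            // javac 1.6 accepted this; javac 1.7 reports
            // "error: n has private access in Striped"
            t.n = this.n;
        }
    }

Rewriting the access through the bound, ((Striped<T>) t).n, or widening the field to package-private, are the kinds of trivial fixes Bryan mentions.]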
From: Peter A. <ans...@gm...> - 2012-03-15 23:50:21
Hi Bryan,

Thanks, it now works for me with both "-source 1.6 -target 1.6" and "-source 1.7 -target 1.7". The runtime (with matching JDK) that I have installed is:

$ java -version
java version "1.7.0_147-icedtea"
OpenJDK Runtime Environment (IcedTea7 2.0) (7~b147-2.0-0ubuntu0.11.10.1)
OpenJDK 64-Bit Server VM (build 21.0-b17, mixed mode)

$ javac -version
javac 1.7.0_147

I will get to testing bigdata out now.

Thanks,
Peter

On 15 March 2012 22:44, Bryan Thompson <br...@sy...> wrote:
> Peter,
>
> I've worked through all the compiler errors for Java 7. You can check
> out the code from the 1.1.x maintenance branch [1] and give it a try.
> Please let me know how it works and which compilers you try.
>
> Thanks,
> Bryan
>
> [1] https://bigdata.svn.sourceforge.net/svnroot/bigdata/branches/BIGDATA_RELEASE_1_1_0
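[For anyone reproducing Peter's result, the switch he quotes is the source/target pair echoed by the build output; a sketch of the change, assuming the ant build reads these as javac.source/javac.target properties (the property names are an assumption, not confirmed against build.properties):

    # build.properties -- property names are assumed; only the echoed
    # source="1.6"/target="1.6" values are confirmed by the build log.
    javac.source=1.7
    javac.target=1.7
]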
From: Bryan T. <br...@sy...> - 2012-03-15 12:44:36
Peter, I've worked through all the compiler errors for Java 7. You can check out the code from the 1.1.x maintenance branch [1] and give it a try. Please let me know how it works and which compilers you try. Thanks, Bryan [1] https://bigdata.svn.sourceforge.net/svnroot/bigdata/branches/BIGDATA_RELEASE_1_1_0 > -----Original Message----- > From: Bryan Thompson > Sent: Thursday, March 15, 2012 8:26 AM > To: Bryan Thompson; Peter Ansell; > big...@li... > Subject: RE: [Bigdata-developers] Compile errors with bigdata > 1.1.0 using openjdk-7 > > Peter, > > I replicated your results using Oracle Java 7u3 with the > source and target set to 1.7. I filed a ticket for this [1]. > Most of these are trivial fixes. Apparently Java 7 does not > permit private fields in outer/inner classes to be > referenced. We are having this problem when referencing the > private field in the outer class from a static inner class > and visa versa. > > I'll take a look at the remaining compiler error now. > > Bryan > > [1] https://sourceforge.net/apps/trac/bigdata/ticket/517#comment:1 > > > -----Original Message----- > > From: Bryan Thompson [mailto:br...@sy...] > > Sent: Thursday, March 15, 2012 5:01 AM > > To: Peter Ansell; big...@li... > > Subject: Re: [Bigdata-developers] Compile errors with bigdata 1.1.0 > > using openjdk-7 > > > > Peter, > > > > Do you get the same results with Java 7 and openjdk? The > > build.propeties file specifies: > > > > > [echo] source="1.6" > > > [echo] target="1.6" > > > > You might need to change that to compile under Java 7. > > > > We have not tried compilation or execution under Java 7. The main > > reason for us to go to Java 7 is the async IO support. > > However, the first Java 7 release was buggy and I am not > sure that it > > will be that easy to support async IO and Java 6 in the same > > distribution. > > > > Thanks, > > Bryan > > > > > -----Original Message----- > > > From: Peter Ansell [mailto:ans...@gm...] > > > Sent: Thursday, March 15, 2012 12:53 AM > > > To: big...@li... > > > Subject: [Bigdata-developers] Compile errors with bigdata > > 1.1.0 using > > > openjdk-7 > > > > > > Hi, > > > > > > I am trying to compile both tags/BIGDATA_RELEASE_1_1_0 and > > > branches/BIGDATA_RELEASE_1_1_0 using openjdk-7 on Ubuntu > > 11.10 and I > > > am getting similar compile errors from both. One of them > relates to > > > generics while the others relate to private field access. > Is either > > > Java-7 or OpenJDK in general supported by bigdata? I > > originally tried > > > compiling with the tag but the errors were very similar with the > > > branch so the log below is based on the branch. I fear that > > if I try > > > to run bigdata directly using the binaries that these > > compile errors > > > may generate runtime errors so I am not going to try that > just yet. > > > > > > Thanks, > > > > > > Peter > > > > > > $ svn info > > > Path: . 
From: Bryan T. <br...@sy...> - 2012-03-15 12:25:58
Peter,

I replicated your results using Oracle Java 7u3 with the source and target set to 1.7. I filed a ticket for this [1]. Most of these are trivial fixes. Apparently Java 7 no longer permits private fields in outer/inner classes to be referenced in this way. We are having this problem when referencing a private field in the outer class from a static inner class and vice versa.

I'll take a look at the remaining compiler error now.

Bryan

[1] https://sourceforge.net/apps/trac/bigdata/ticket/517#comment:1
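To make this concrete, the failing pattern boils down to something like the sketch below. The names (Striped, batchSize, newInstance) are hypothetical, not the actual StripedCounters code; the point is that a private field is reached through a type variable whose bound is the declaring class, which javac 1.6 accepted and javac 1.7 rejects with exactly the "has private access" errors in Peter's log. Qualifying the access with the declaring type is one possible workaround.

    /**
     * Minimal sketch (hypothetical names) of the pattern behind the
     * "xxx has private access in yyy" errors under javac 7.
     */
    public class Striped<T extends Striped<T>> {

        private int batchSize;

        @SuppressWarnings("unchecked")
        public T newInstance(final int batchSize) {
            final T t = (T) new Striped<T>();
            // Accepted by javac 1.6; javac 1.7 reports:
            //   error: batchSize has private access in Striped
            // t.batchSize = batchSize;
            // One workaround: qualify the access with the declaring
            // type, which keeps the field private and compiles on both.
            ((Striped<T>) t).batchSize = batchSize;
            return t;
        }
    }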
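The remaining error, the ChunkedFilter/IFilter name clash, can be distilled the same way. Again this is a simplified sketch with hypothetical types, not the real interfaces: the subtype redeclares its own type variable with a narrower bound and also declares a raw overload, so the two filter methods erase to the same signature filter(Iterator) without an override relationship between them. Javac 1.7 applies the stricter erasure clash rule and reports the "have the same erasure, yet neither overrides the other" error from the build log.

    import java.util.Iterator;

    interface IFilterDemo<I extends Iterator<E>, E, F> {
        // The erasure of this signature is filter(Iterator).
        Iterator<F> filter(I src);
    }

    interface IChunkedIteratorDemo<E> extends Iterator<E> {
    }

    abstract class ChunkedFilterDemo<I extends IChunkedIteratorDemo<E>, E, F>
            implements IFilterDemo<I, E, F> {

        // Overrides IFilterDemo.filter(I).
        public Iterator<F> filter(final I src) {
            return null;
        }

        // Also erases to filter(Iterator), but overrides nothing;
        // javac 1.7 reports the name clash here.
        @SuppressWarnings("rawtypes")
        public Iterator<F> filter(final Iterator src) {
            return null;
        }
    }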
From: Bryan T. <br...@sy...> - 2012-03-15 09:01:39
Peter,

Do you get the same results with Java 7 and OpenJDK? The build.properties file specifies:

> [echo] source="1.6"
> [echo] target="1.6"

You might need to change that to compile under Java 7.

We have not tried compilation or execution under Java 7. The main reason for us to go to Java 7 is the async IO support. However, the first Java 7 release was buggy and I am not sure that it will be that easy to support async IO and Java 6 in the same distribution.

Thanks,
Bryan
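For context, the async IO support in question is the NIO.2 API that only exists as of Java 7; a minimal sketch of the kind of call with no Java 6 equivalent (the file name below is just a placeholder):

    import java.nio.ByteBuffer;
    import java.nio.channels.AsynchronousFileChannel;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;
    import java.util.concurrent.Future;

    public class AsyncReadSketch {

        public static void main(final String[] args) throws Exception {
            // Java 7 only: reads are serviced asynchronously by a
            // background thread pool.
            final AsynchronousFileChannel ch = AsynchronousFileChannel.open(
                    Paths.get("bigdata.jnl"), StandardOpenOption.READ);
            try {
                final ByteBuffer buf = ByteBuffer.allocate(8192);
                // Initiate the read at offset 0; returns immediately.
                final Future<Integer> pending = ch.read(buf, 0L);
                // Block only when the result is actually needed.
                System.out.println("read " + pending.get() + " bytes");
            } finally {
                ch.close();
            }
        }
    }

There is nothing comparable in java.io or java.nio under Java 6, which is why supporting both in a single distribution is awkward.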
From: Peter A. <ans...@gm...> - 2012-03-15 04:52:53
Hi,

I am trying to compile both tags/BIGDATA_RELEASE_1_1_0 and branches/BIGDATA_RELEASE_1_1_0 using openjdk-7 on Ubuntu 11.10, and I am getting similar compile errors from both. One of them relates to generics while the others relate to private field access. Is either Java 7 or OpenJDK in general supported by bigdata? I originally tried compiling with the tag, but the errors were very similar with the branch, so the log below is based on the branch. I fear that if I run bigdata directly from the binaries these compile errors may surface as runtime errors, so I am not going to try that just yet.

Thanks,

Peter

$ svn info
Path: .
URL: https://bigdata.svn.sourceforge.net/svnroot/bigdata/branches/BIGDATA_RELEASE_1_1_0
Repository Root: https://bigdata.svn.sourceforge.net/svnroot/bigdata
Repository UUID: 8f7bc0f5-282e-0410-95e3-8d296e9bb460
Revision: 6126
Node Kind: directory
Schedule: normal
Last Changed Author: thompsonbry
Last Changed Rev: 6126
Last Changed Date: 2012-03-15 10:38:44 +1000 (Thu, 15 Mar 2012)

$ java -version
java version "1.7.0_147-icedtea"
OpenJDK Runtime Environment (IcedTea7 2.0) (7~b147-2.0-0ubuntu0.11.10.1)
OpenJDK 64-Bit Server VM (build 21.0-b17, mixed mode)

$ ant
Buildfile: /home/peter/svnrepos/BIGDATA_RELEASE_1_1_0/build.xml

clean:

bundle:
     [copy] Copying 32 files to /home/peter/svnrepos/BIGDATA_RELEASE_1_1_0/ant-build/lib
     [copy] Copying 49 files to /home/peter/svnrepos/BIGDATA_RELEASE_1_1_0/ant-build/lib

prepare:
     [echo] version=bigdata-1.1.0
     [echo] svn.checkout=true
    [mkdir] Created dir: /home/peter/svnrepos/BIGDATA_RELEASE_1_1_0/ant-build/classes
    [mkdir] Created dir: /home/peter/svnrepos/BIGDATA_RELEASE_1_1_0/ant-build/docs

buildinfo:
     [echo]
     [echo] package com.bigdata;
     [echo] public class BuildInfo {
     [echo]   public static final String buildVersion="1.1.0";
     [echo]   public static final String buildVersionOSGI="1.0";
     [echo]   public static final String svnRevision="6126";
     [echo]   public static final String svnURL="https://bigdata.svn.sourceforge.net/svnroot/bigdata/branches/BIGDATA_RELEASE_1_1_0";
     [echo]   public static final String buildTimestamp="2012/03/15 14:37:20 EST";
     [echo]   public static final String buildUser="peter";
     [echo]   public static final String buildHost="${env.COMPUTERNAME}";
     [echo]   public static final String osArch="amd64";
     [echo]   public static final String osName="Linux";
     [echo]   public static final String osVersion="3.0.0-16-generic";
     [echo] }

compile:
     [echo] javac
     [echo]   destdir="ant-build"
     [echo]   fork="yes"
     [echo]   memorymaximumsize="1g"
     [echo]   debug="yes"
     [echo]   debuglevel="lines,vars,source"
     [echo]   verbose="off"
     [echo]   encoding="Cp1252"
     [echo]   source="1.6"
     [echo]   target="1.6"
    [javac] Compiling 2162 source files to /home/peter/svnrepos/BIGDATA_RELEASE_1_1_0/ant-build/classes
    [javac] javac 1.7.0_147
    [javac] warning: [options] bootstrap class path not set in conjunction with -source 1.6
    [javac] /home/peter/svnrepos/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/counters/striped/StripedCounters.java:140: error: locks has private access in StripedCounters
    [javac]         parent.locks[threadHash].unlock();
    [javac] /home/peter/svnrepos/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/counters/striped/StripedCounters.java:191: error: parent has private access in StripedCounters
    [javac]         t.parent = (T) this;
    [javac] /home/peter/svnrepos/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/counters/striped/StripedCounters.java:192: error: batchSize has private access in StripedCounters
    [javac]         t.batchSize = t.n = batchSize;
    [javac] /home/peter/svnrepos/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/counters/striped/StripedCounters.java:192: error: n has private access in StripedCounters
    [javac]         t.batchSize = t.n = batchSize;
    [javac] /home/peter/svnrepos/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/striterator/ChunkedFilter.java:101: error: name clash: filter(I#1) in ChunkedFilter overrides a method whose erasure is the same as another method, yet neither overrides the other
    [javac]     public IChunkedOrderedIterator<F> filter(final I src) {
    [javac]   first method:  filter(Iterator) in ChunkedFilter
    [javac]   second method: filter(I#2) in IFilter
    [javac]   where I#1,E#1,F#1,I#2,E#2,F#2 are type-variables:
    [javac]     I#1 extends IChunkedIterator<E#1> declared in class ChunkedFilter
    [javac]     E#1 extends Object declared in class ChunkedFilter
    [javac]     F#1 extends Object declared in class ChunkedFilter
    [javac]     I#2 extends Iterator<E#2> declared in interface IFilter
    [javac]     E#2 extends Object declared in interface IFilter
    [javac]     F#2 extends Object declared in interface IFilter
    [javac] /home/peter/svnrepos/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/striterator/ChunkedFilter.java:111: error: name clash: filter(Iterator) in ChunkedFilter and filter(I) in IFilter have the same erasure, yet neither overrides the other
    [javac]     public IChunkedOrderedIterator<F> filter(final Iterator src) {
    [javac]   where F#1,I,E,F#2 are type-variables:
    [javac]     F#1 extends Object declared in class ChunkedFilter
    [javac]     I extends Iterator<E> declared in interface IFilter
    [javac]     E extends Object declared in interface IFilter
    [javac]     F#2 extends Object declared in interface IFilter
    [javac] /home/peter/svnrepos/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/cache/SynchronizedHardReferenceQueueWithTimeout.java:205: error: ts has private access in ValueAge
    [javac]         final long maxAge = now - peek().ts;
    [javac] /home/peter/svnrepos/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/cache/SynchronizedHardReferenceQueueWithTimeout.java:212: error: ts has private access in ValueAge
    [javac]         age = now - x.ts;
    [javac] /home/peter/svnrepos/BIGDATA_RELEASE_1_1_0/bigdata/src/java/com/bigdata/cache/SynchronizedHardReferenceQueueWithTimeout.java:221: error: ref has private access in ValueAge
    [javac]         log.trace("Evicting: " + x.ref + " : timeout="
    [javac] /home/peter/svnrepos/BIGDATA_RELEASE_1_1_0/bigdata-jini/src/java/com/bigdata/service/jini/master/TaskMaster.java:837: error: beginMillis has private access in JobState
    [javac]         jobState.beginMillis = System.currentTimeMillis();
    [javac] /home/peter/svnrepos/BIGDATA_RELEASE_1_1_0/bigdata-jini/src/java/com/bigdata/service/jini/master/TaskMaster.java:1093: error: endMillis has private access in JobState
    [javac]         jobState.endMillis = System.currentTimeMillis();
    [javac] /home/peter/svnrepos/BIGDATA_RELEASE_1_1_0/bigdata-jini/src/java/com/bigdata/service/jini/master/TaskMaster.java:1138: error: endMillis has private access in JobState
    [javac]         jobState.endMillis = System.currentTimeMillis();
    [javac] /home/peter/svnrepos/BIGDATA_RELEASE_1_1_0/bigdata-jini/src/java/com/bigdata/service/jini/master/TaskMaster.java:1289: error: deleteJob has private access in JobState
    [javac]         if (jobState.deleteJob
    [javac] /home/peter/svnrepos/BIGDATA_RELEASE_1_1_0/bigdata-jini/src/java/com/bigdata/service/jini/master/TaskMaster.java:1373: error: resumedJob has private access in JobState
    [javac]         jobState.resumedJob = true;
    [javac] Note: Some input files use or override a deprecated API.
    [javac] Note: Recompile with -Xlint:deprecation for details.
    [javac] Note: Some input files use unchecked or unsafe operations.
    [javac] Note: Recompile with -Xlint:unchecked for details.
    [javac] 14 errors
    [javac] 1 warning

BUILD FAILED
/home/peter/svnrepos/BIGDATA_RELEASE_1_1_0/build.xml:223: Compile failed; see the compiler error output for details.

Total time: 20 seconds
From: Bryan T. <br...@sy...> - 2012-03-14 12:34:56
All,

I've put together a wiki page [1] on understanding and optimizing query performance with bigdata. Much of this was taken from an earlier blog post. However, I have updated some sections and also included some pointers on how to use logging to understand query evaluation.

Thanks,
Bryan

[1] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=QueryOptimization
From: Bryan T. <br...@sy...> - 2012-02-03 11:43:23
This is a 1.0.x maintenance release of bigdata(R). New users are encouraged to go directly to the 1.1.0 release.

Bigdata is a horizontally-scaled, open-source architecture for indexed data with an emphasis on RDF, capable of loading 1B triples in under one hour on a 15 node cluster. Bigdata operates in both a single machine mode (Journal) and a cluster mode (Federation). The Journal provides fast scalable ACID indexed storage for very large data sets, up to 50 billion triples / quads. The federation provides fast scalable shard-wise parallel indexed storage using dynamic sharding and shard-wise ACID updates and incremental cluster size growth. Both platforms support fully concurrent readers with snapshot isolation.

Distributed processing offers greater throughput but does not reduce query or update latency. Choose the Journal when the anticipated scale and throughput requirements permit. Choose the Federation when the administrative and machine overhead associated with operating a cluster is an acceptable tradeoff to have essentially unlimited data scaling and throughput.

See [1,2,8] for instructions on installing bigdata(R), [4] for the javadoc, and [3,5,6] for news, questions, and the latest developments. For more information about SYSTAP, LLC and bigdata, see [7].

Starting with the 1.0.0 release, we offer a WAR artifact [8] for easy installation of the single machine RDF database. For custom development and cluster installations we recommend checking out the code from SVN using the tag for this release. The code will build automatically under eclipse. You can also build the code using the ant script. The cluster installer requires the use of the ant script.

You can download the WAR from:

http://sourceforge.net/projects/bigdata/

You can checkout this release from:

https://bigdata.svn.sourceforge.net/svnroot/bigdata/tags/BIGDATA_RELEASE_1_0_5

Feature summary:

- Single machine data storage to ~50B triples/quads (RWStore);
- Clustered data storage is essentially unlimited;
- Simple embedded and/or webapp deployment (NanoSparqlServer; see the embedded usage sketch at the end of this message);
- Triples, quads, or triples with provenance (SIDs);
- 100% native SPARQL 1.0 evaluation with lots of query optimizations;
- Fast RDFS+ inference and truth maintenance;
- Fast statement level provenance mode (SIDs).

Road map [3]:

- High-volume analytic query and SPARQL 1.1 query, including aggregations;
- SPARQL 1.1 Update, Property Paths, and Federation support;
- Simplified deployment, configuration, and administration for clusters; and
- High availability for the journal and the cluster.

Change log:

Note: Versions with (*) require data migration. For details, see [9].

1.0.5

- http://sourceforge.net/apps/trac/bigdata/ticket/362 (Fix incompatible with log4j - slf4j bridge.)
- http://sourceforge.net/apps/trac/bigdata/ticket/440 (BTree can not be cast to Name2Addr)
- http://sourceforge.net/apps/trac/bigdata/ticket/453 (Releasing blob DeferredFree record)
- http://sourceforge.net/apps/trac/bigdata/ticket/467 (IllegalStateException trying to access lexicon index using RWStore with recycling)

1.0.4

- http://sourceforge.net/apps/trac/bigdata/ticket/443 (Logger for RWStore transaction service and recycler)
- http://sourceforge.net/apps/trac/bigdata/ticket/445 (RWStore does not track tx release correctly)
- http://sourceforge.net/apps/trac/bigdata/ticket/437 (Thread-local cache combined with unbounded thread pools causes effective memory leak: termCache memory leak & thread-local buffers)

1.0.3

- http://sourceforge.net/apps/trac/bigdata/ticket/217 (BTreeCounters does not track bytes released)
- http://sourceforge.net/apps/trac/bigdata/ticket/269 (Refactor performance counters using accessor interface)
- http://sourceforge.net/apps/trac/bigdata/ticket/329 (B+Tree should delete bloom filter when it is disabled.)
- http://sourceforge.net/apps/trac/bigdata/ticket/372 (RWStore does not prune the CommitRecordIndex)
- http://sourceforge.net/apps/trac/bigdata/ticket/375 (Persistent memory leaks (RWStore/DISK))
- http://sourceforge.net/apps/trac/bigdata/ticket/385 (FastRDFValueCoder2: ArrayIndexOutOfBoundsException)
- http://sourceforge.net/apps/trac/bigdata/ticket/391 (Release age advanced on WORM mode journal)
- http://sourceforge.net/apps/trac/bigdata/ticket/392 (Add a DELETE by access path method to the NanoSparqlServer)
- http://sourceforge.net/apps/trac/bigdata/ticket/393 (Add "context-uri" request parameter to specify the default context for INSERT in the REST API)
- http://sourceforge.net/apps/trac/bigdata/ticket/394 (log4j configuration error message in WAR deployment)
- http://sourceforge.net/apps/trac/bigdata/ticket/399 (Add a fast range count method to the REST API)
- http://sourceforge.net/apps/trac/bigdata/ticket/422 (Support temp triple store wrapped by a BigdataSail)
- http://sourceforge.net/apps/trac/bigdata/ticket/424 (NQuads support for NanoSparqlServer)
- http://sourceforge.net/apps/trac/bigdata/ticket/425 (Bug fix to DEFAULT_RDF_FORMAT for bulk data loader in scale-out)
- http://sourceforge.net/apps/trac/bigdata/ticket/426 (Support either lockfile (procmail) and dotlockfile (liblockfile1) in scale-out)
- http://sourceforge.net/apps/trac/bigdata/ticket/427 (BigdataSail#getReadOnlyConnection() race condition with concurrent commit)
- http://sourceforge.net/apps/trac/bigdata/ticket/435 (Address is 0L)
- http://sourceforge.net/apps/trac/bigdata/ticket/436 (TestMROWTransactions failure in CI)

1.0.2

- http://sourceforge.net/apps/trac/bigdata/ticket/32 (Query time expansion of (foo rdf:type rdfs:Resource) drags in SPORelation for scale-out.)
- http://sourceforge.net/apps/trac/bigdata/ticket/181 (Scale-out LUBM "how to" in wiki and build.xml are out of date.)
- http://sourceforge.net/apps/trac/bigdata/ticket/356 (Query not terminated by error.)
- http://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.)
- http://sourceforge.net/apps/trac/bigdata/ticket/361 (IRunningQuery not closed promptly.)
- http://sourceforge.net/apps/trac/bigdata/ticket/371 (DataLoader fails to load resources available from the classpath.)
- http://sourceforge.net/apps/trac/bigdata/ticket/376 (Support for the streaming of bigdata IBindingSets into a sparql query.)
- http://sourceforge.net/apps/trac/bigdata/ticket/378 (ClosedByInterruptException during heavy query mix.)
- http://sourceforge.net/apps/trac/bigdata/ticket/379 (NotSerializableException for SPOAccessPath.)
- http://sourceforge.net/apps/trac/bigdata/ticket/382 (Change dependencies to Apache River 2.2.0)

1.0.1 (*)

- http://sourceforge.net/apps/trac/bigdata/ticket/107 (Unicode clean schema names in the sparse row store).
- http://sourceforge.net/apps/trac/bigdata/ticket/124 (TermIdEncoder should use more bits for scale-out).
- http://sourceforge.net/apps/trac/bigdata/ticket/225 (OSX requires specialized performance counter collection classes).
- http://sourceforge.net/apps/trac/bigdata/ticket/348 (BigdataValueFactory.asValue() must return new instance when DummyIV is used).
- http://sourceforge.net/apps/trac/bigdata/ticket/349 (TermIdEncoder limits Journal to 2B distinct RDF Values per triple/quad store instance).
- http://sourceforge.net/apps/trac/bigdata/ticket/351 (SPO not Serializable exception in SIDS mode (scale-out)).
- http://sourceforge.net/apps/trac/bigdata/ticket/352 (ClassCastException when querying with binding-values that are not known to the database).
- http://sourceforge.net/apps/trac/bigdata/ticket/353 (UnsupportedOperatorException for some SPARQL queries).
- http://sourceforge.net/apps/trac/bigdata/ticket/355 (Query failure when comparing with non materialized value).
- http://sourceforge.net/apps/trac/bigdata/ticket/357 (RWStore reports "FixedAllocator returning null address, with freeBits".)
- http://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.)
- http://sourceforge.net/apps/trac/bigdata/ticket/362 (log4j - slf4j bridge.)

For more information about bigdata, please see the following links:

[1] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Main_Page
[2] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=GettingStarted
[3] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Roadmap
[4] http://www.bigdata.com/bigdata/docs/api/
[5] http://sourceforge.net/projects/bigdata/
[6] http://www.bigdata.com/blog
[7] http://www.systap.com/bigdata.htm
[8] http://sourceforge.net/projects/bigdata/files/bigdata/
[9] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=DataMigration

About bigdata:

Bigdata(r) is a horizontally-scaled, general purpose storage and computing fabric for ordered data (B+Trees), designed to operate on either a single server or a cluster of commodity hardware. Bigdata(r) uses dynamically partitioned key-range shards in order to remove any realistic scaling limits - in principle, bigdata(r) may be deployed on 10s, 100s, or even thousands of machines and new capacity may be added incrementally without requiring the full reload of all data. The bigdata(r) RDF database supports RDFS and OWL Lite reasoning, high-level query (SPARQL), and datum level provenance.
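The embedded deployment mentioned in the feature summary looks roughly like the following sketch using the Sesame API. This is illustrative only: the property name (BigdataSail.Options.FILE), the journal file name, and the plain SailRepository wrapper are assumptions that may differ by release; see the getting started guide [2] for the authoritative instructions.

    import java.util.Properties;

    import org.openrdf.query.QueryLanguage;
    import org.openrdf.query.TupleQueryResult;
    import org.openrdf.repository.Repository;
    import org.openrdf.repository.RepositoryConnection;
    import org.openrdf.repository.sail.SailRepository;

    import com.bigdata.rdf.sail.BigdataSail;

    public class EmbeddedSketch {

        public static void main(final String[] args) throws Exception {
            // Assumed property name: the backing Journal file.
            final Properties props = new Properties();
            props.setProperty(BigdataSail.Options.FILE, "bigdata.jnl");

            final BigdataSail sail = new BigdataSail(props);
            final Repository repo = new SailRepository(sail);
            repo.initialize();

            final RepositoryConnection cxn = repo.getConnection();
            try {
                final TupleQueryResult result = cxn.prepareTupleQuery(
                        QueryLanguage.SPARQL,
                        "SELECT * WHERE { ?s ?p ?o } LIMIT 10").evaluate();
                try {
                    while (result.hasNext()) {
                        System.out.println(result.next());
                    }
                } finally {
                    result.close();
                }
            } finally {
                cxn.close();
                repo.shutDown();
            }
        }
    }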
From: Bryan T. <br...@sy...> - 2012-01-25 19:23:06
This is a 1.0.x maintenance release of bigdata(R). New users are encouraged to go directly to the 1.1.0 release.

Bigdata is a horizontally-scaled, open-source architecture for indexed data with an emphasis on RDF, capable of loading 1B triples in under one hour on a 15 node cluster. Bigdata operates in both a single machine mode (Journal) and a cluster mode (Federation). The Journal provides fast scalable ACID indexed storage for very large data sets, up to 50 billion triples / quads. The federation provides fast scalable shard-wise parallel indexed storage using dynamic sharding and shard-wise ACID updates and incremental cluster size growth. Both platforms support fully concurrent readers with snapshot isolation.

Distributed processing offers greater throughput but does not reduce query or update latency. Choose the Journal when the anticipated scale and throughput requirements permit. Choose the Federation when the administrative and machine overhead associated with operating a cluster is an acceptable tradeoff to have essentially unlimited data scaling and throughput.

See [1,2,8] for instructions on installing bigdata(R), [4] for the javadoc, and [3,5,6] for news, questions, and the latest developments. For more information about SYSTAP, LLC and bigdata, see [7].

Starting with the 1.0.0 release, we offer a WAR artifact [8] for easy installation of the single machine RDF database. For custom development and cluster installations we recommend checking out the code from SVN using the tag for this release. The code will build automatically under eclipse. You can also build the code using the ant script. The cluster installer requires the use of the ant script.

You can download the WAR from:

http://sourceforge.net/projects/bigdata/

You can checkout this release from:

https://bigdata.svn.sourceforge.net/svnroot/bigdata/tags/BIGDATA_RELEASE_1_0_4

Feature summary:

- Single machine data storage to ~50B triples/quads (RWStore);
- Clustered data storage is essentially unlimited;
- Simple embedded and/or webapp deployment (NanoSparqlServer);
- Triples, quads, or triples with provenance (SIDs);
- 100% native SPARQL 1.0 evaluation with lots of query optimizations;
- Fast RDFS+ inference and truth maintenance;
- Fast statement level provenance mode (SIDs).

Road map [3]:

- High-volume analytic query and SPARQL 1.1 query, including aggregations;
- SPARQL 1.1 Update, Property Paths, and Federation support;
- Simplified deployment, configuration, and administration for clusters; and
- High availability for the journal and the cluster.

Change log:

Note: Versions with (*) require data migration. For details, see [9].

1.0.4

- http://sourceforge.net/apps/trac/bigdata/ticket/443 (Logger for RWStore transaction service and recycler)
- http://sourceforge.net/apps/trac/bigdata/ticket/445 (RWStore does not track tx release correctly)
- http://sourceforge.net/apps/trac/bigdata/ticket/437 (Thread-local cache combined with unbounded thread pools causes effective memory leak: termCache memory leak & thread-local buffers)

1.0.3

- http://sourceforge.net/apps/trac/bigdata/ticket/217 (BTreeCounters does not track bytes released)
- http://sourceforge.net/apps/trac/bigdata/ticket/269 (Refactor performance counters using accessor interface)
- http://sourceforge.net/apps/trac/bigdata/ticket/329 (B+Tree should delete bloom filter when it is disabled.)
- http://sourceforge.net/apps/trac/bigdata/ticket/372 (RWStore does not prune the CommitRecordIndex)
- http://sourceforge.net/apps/trac/bigdata/ticket/375 (Persistent memory leaks (RWStore/DISK))
- http://sourceforge.net/apps/trac/bigdata/ticket/385 (FastRDFValueCoder2: ArrayIndexOutOfBoundsException)
- http://sourceforge.net/apps/trac/bigdata/ticket/391 (Release age advanced on WORM mode journal)
- http://sourceforge.net/apps/trac/bigdata/ticket/392 (Add a DELETE by access path method to the NanoSparqlServer)
- http://sourceforge.net/apps/trac/bigdata/ticket/393 (Add "context-uri" request parameter to specify the default context for INSERT in the REST API)
- http://sourceforge.net/apps/trac/bigdata/ticket/394 (log4j configuration error message in WAR deployment)
- http://sourceforge.net/apps/trac/bigdata/ticket/399 (Add a fast range count method to the REST API)
- http://sourceforge.net/apps/trac/bigdata/ticket/422 (Support temp triple store wrapped by a BigdataSail)
- http://sourceforge.net/apps/trac/bigdata/ticket/424 (NQuads support for NanoSparqlServer)
- http://sourceforge.net/apps/trac/bigdata/ticket/425 (Bug fix to DEFAULT_RDF_FORMAT for bulk data loader in scale-out)
- http://sourceforge.net/apps/trac/bigdata/ticket/426 (Support either lockfile (procmail) and dotlockfile (liblockfile1) in scale-out)
- http://sourceforge.net/apps/trac/bigdata/ticket/427 (BigdataSail#getReadOnlyConnection() race condition with concurrent commit)
- http://sourceforge.net/apps/trac/bigdata/ticket/435 (Address is 0L)
- http://sourceforge.net/apps/trac/bigdata/ticket/436 (TestMROWTransactions failure in CI)

1.0.2

- http://sourceforge.net/apps/trac/bigdata/ticket/32 (Query time expansion of (foo rdf:type rdfs:Resource) drags in SPORelation for scale-out.)
- http://sourceforge.net/apps/trac/bigdata/ticket/181 (Scale-out LUBM "how to" in wiki and build.xml are out of date.)
- http://sourceforge.net/apps/trac/bigdata/ticket/356 (Query not terminated by error.)
- http://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.)
- http://sourceforge.net/apps/trac/bigdata/ticket/361 (IRunningQuery not closed promptly.)
- http://sourceforge.net/apps/trac/bigdata/ticket/371 (DataLoader fails to load resources available from the classpath.)
- http://sourceforge.net/apps/trac/bigdata/ticket/376 (Support for the streaming of bigdata IBindingSets into a sparql query.)
- http://sourceforge.net/apps/trac/bigdata/ticket/378 (ClosedByInterruptException during heavy query mix.)
- http://sourceforge.net/apps/trac/bigdata/ticket/379 (NotSerializableException for SPOAccessPath.)
- http://sourceforge.net/apps/trac/bigdata/ticket/382 (Change dependencies to Apache River 2.2.0)

1.0.1 (*)

- http://sourceforge.net/apps/trac/bigdata/ticket/107 (Unicode clean schema names in the sparse row store).
- http://sourceforge.net/apps/trac/bigdata/ticket/124 (TermIdEncoder should use more bits for scale-out).
- http://sourceforge.net/apps/trac/bigdata/ticket/225 (OSX requires specialized performance counter collection classes).
- http://sourceforge.net/apps/trac/bigdata/ticket/348 (BigdataValueFactory.asValue() must return new instance when DummyIV is used).
- http://sourceforge.net/apps/trac/bigdata/ticket/349 (TermIdEncoder limits Journal to 2B distinct RDF Values per triple/quad store instance).
- http://sourceforge.net/apps/trac/bigdata/ticket/351 (SPO not Serializable exception in SIDS mode (scale-out)).
- http://sourceforge.net/apps/trac/bigdata/ticket/352 (ClassCastException when querying with binding-values that are not known to the database).
- http://sourceforge.net/apps/trac/bigdata/ticket/353 (UnsupportedOperatorException for some SPARQL queries).
- http://sourceforge.net/apps/trac/bigdata/ticket/355 (Query failure when comparing with non materialized value).
- http://sourceforge.net/apps/trac/bigdata/ticket/357 (RWStore reports "FixedAllocator returning null address, with freeBits".)
- http://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.)
- http://sourceforge.net/apps/trac/bigdata/ticket/362 (log4j - slf4j bridge.)

For more information about bigdata, please see the following links:

[1] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Main_Page
[2] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=GettingStarted
[3] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Roadmap
[4] http://www.bigdata.com/bigdata/docs/api/
[5] http://sourceforge.net/projects/bigdata/
[6] http://www.bigdata.com/blog
[7] http://www.systap.com/bigdata.htm
[8] http://sourceforge.net/projects/bigdata/files/bigdata/
[9] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=DataMigration

About bigdata:

Bigdata(r) is a horizontally-scaled, general purpose storage and computing fabric for ordered data (B+Trees), designed to operate on either a single server or a cluster of commodity hardware. Bigdata(r) uses dynamically partitioned key-range shards in order to remove any realistic scaling limits - in principle, bigdata(r) may be deployed on 10s, 100s, or even thousands of machines and new capacity may be added incrementally without requiring the full reload of all data. The bigdata(r) RDF database supports RDFS and OWL Lite reasoning, high-level query (SPARQL), and datum level provenance.