From: <tho...@us...> - 2014-11-06 15:29:42

Revision: 8708
          http://sourceforge.net/p/bigdata/code/8708
Author:   thompsonbry
Date:     2014-11-06 15:29:37 +0000 (Thu, 06 Nov 2014)

Log Message:
-----------
Returning to snapshot builds.

Modified Paths:
--------------
    branches/BIGDATA_RELEASE_1_3_0/build.properties
    branches/BIGDATA_RELEASE_1_3_0/pom.xml

Modified: branches/BIGDATA_RELEASE_1_3_0/build.properties
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/build.properties	2014-11-06 15:27:12 UTC (rev 8707)
+++ branches/BIGDATA_RELEASE_1_3_0/build.properties	2014-11-06 15:29:37 UTC (rev 8708)
@@ -95,14 +95,14 @@
 
 # Set true to do a snapshot build. This changes the value of ${version} to
 # include the date.
-snapshot=false
+snapshot=true
 
 # Javadoc build may be disabled using this property. The javadoc target will
 # not be executed unless this property is defined (its value does not matter).
 # Note: The javadoc goes quite if you have enough memory, but can take forever
 # and then runs out of memory if the JVM is starved for RAM. The heap for the
 # javadoc JVM is explicitly set in the javadoc target in the build.xml file.
-javadoc=
+#javadoc=
 
 # packaging property set (rpm, deb).
 package.release=1

Modified: branches/BIGDATA_RELEASE_1_3_0/pom.xml
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/pom.xml	2014-11-06 15:27:12 UTC (rev 8707)
+++ branches/BIGDATA_RELEASE_1_3_0/pom.xml	2014-11-06 15:29:37 UTC (rev 8708)
@@ -52,7 +52,7 @@
     <modelVersion>4.0.0</modelVersion>
     <groupId>com.bigdata</groupId>
     <artifactId>bigdata</artifactId>
-    <version>1.3.3-SNAPSHOT</version>
+    <version>1.3.4-SNAPSHOT</version>
     <packaging>pom</packaging>
     <name>bigdata(R)</name>
     <description>Bigdata(R) Maven Build</description>
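For readers unfamiliar with the snapshot property toggled above: when snapshot=true, the build folds the current date into ${version} rather than using the bare release number. The following is a minimal Java sketch of that convention only; the class and the date format are assumptions for illustration, not the ant script's actual code.

import java.text.SimpleDateFormat;
import java.util.Date;

public class SnapshotVersion {

    /** Compose the effective version string from build.ver and snapshot. */
    static String version(final String buildVer, final boolean snapshot) {
        if (!snapshot)
            return buildVer; // e.g. "1.3.4" for a tagged release build
        // Hypothetical date format; the real build may stamp differently.
        final String stamp = new SimpleDateFormat("yyyyMMdd").format(new Date());
        return buildVer + "-" + stamp; // e.g. "1.3.4-20141106"
    }

    public static void main(final String[] args) {
        System.out.println(version("1.3.4", true));  // 1.3.4-<today>
        System.out.println(version("1.3.4", false)); // 1.3.4
    }
}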
From: <tho...@us...> - 2014-11-06 15:27:20

Revision: 8707
          http://sourceforge.net/p/bigdata/code/8707
Author:   thompsonbry
Date:     2014-11-06 15:27:12 +0000 (Thu, 06 Nov 2014)

Log Message:
-----------
tagging the 1.3.4 release

Added Paths:
-----------
    tags/BIGDATA_RELEASE_1_3_4/
From: <tho...@us...> - 2014-11-06 14:38:58

Revision: 8706
          http://sourceforge.net/p/bigdata/code/8706
Author:   thompsonbry
Date:     2014-11-06 14:38:54 +0000 (Thu, 06 Nov 2014)

Log Message:
-----------
uncommenting the javadoc property so javadoc will be generated for 1.3.4 release.

Modified Paths:
--------------
    branches/BIGDATA_RELEASE_1_3_0/build.properties

Modified: branches/BIGDATA_RELEASE_1_3_0/build.properties
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/build.properties	2014-11-06 14:21:08 UTC (rev 8705)
+++ branches/BIGDATA_RELEASE_1_3_0/build.properties	2014-11-06 14:38:54 UTC (rev 8706)
@@ -102,7 +102,7 @@
 # Note: The javadoc goes quite if you have enough memory, but can take forever
 # and then runs out of memory if the JVM is starved for RAM. The heap for the
 # javadoc JVM is explicitly set in the javadoc target in the build.xml file.
-#javadoc=
+javadoc=
 
 # packaging property set (rpm, deb).
 package.release=1
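As the comment in the diff notes, javadoc is a presence flag in the style of ant's "if" attribute: the target runs if the property is defined at all, and its value is ignored, which is why the commit merely uncomments "javadoc=". A minimal Java sketch of that defined-vs-undefined check, reading build.properties; illustrative only, not the ant build's actual mechanism.

import java.io.FileReader;
import java.util.Properties;

public class JavadocFlag {
    public static void main(final String[] args) throws Exception {
        final Properties props = new Properties();
        props.load(new FileReader("build.properties"));
        // "javadoc=" parses to the empty string, which is still non-null;
        // the commented-out "#javadoc=" leaves the property undefined (null).
        final boolean buildJavadoc = props.getProperty("javadoc") != null;
        System.out.println("javadoc enabled: " + buildJavadoc);
    }
}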
From: <tho...@us...> - 2014-11-06 14:21:20

Revision: 8705
          http://sourceforge.net/p/bigdata/code/8705
Author:   thompsonbry
Date:     2014-11-06 14:21:08 +0000 (Thu, 06 Nov 2014)

Log Message:
-----------
Bumping version for 1.3.4 release. Disabling snapshot builds. Added 1.3.4 release notes.

Modified Paths:
--------------
    branches/BIGDATA_RELEASE_1_3_0/build.properties

Added Paths:
-----------
    branches/BIGDATA_RELEASE_1_3_0/bigdata/src/releases/RELEASE_1_3_4.txt

Added: branches/BIGDATA_RELEASE_1_3_0/bigdata/src/releases/RELEASE_1_3_4.txt
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata/src/releases/RELEASE_1_3_4.txt	                        (rev 0)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata/src/releases/RELEASE_1_3_4.txt	2014-11-06 14:21:08 UTC (rev 8705)
@@ -0,0 +1,548 @@
+This is a critical fix release of bigdata(R). All users are encouraged to upgrade immediately.
+
+Bigdata is a horizontally-scaled, open-source architecture for indexed data with an emphasis on RDF capable of loading 1B triples in under one hour on a 15 node cluster. Bigdata operates in both a single machine mode (Journal), highly available replication cluster mode (HAJournalServer), and a horizontally sharded cluster mode (BigdataFederation). The Journal provides fast scalable ACID indexed storage for very large data sets, up to 50 billion triples / quads. The HAJournalServer adds replication, online backup, horizontal scaling of query, and high availability. The federation provides fast scalable shard-wise parallel indexed storage using dynamic sharding and shard-wise ACID updates and incremental cluster size growth. Both platforms support fully concurrent readers with snapshot isolation.
+
+Distributed processing offers greater throughput but does not reduce query or update latency. Choose the Journal when the anticipated scale and throughput requirements permit. Choose the HAJournalServer for high availability and linear scaling in query throughput. Choose the BigdataFederation when the administrative and machine overhead associated with operating a cluster is an acceptable tradeoff to have essentially unlimited data scaling and throughput.
+
+See [1,2,8] for instructions on installing bigdata(R), [4] for the javadoc, and [3,5,6] for news, questions, and the latest developments. For more information about SYSTAP, LLC and bigdata, see [7].
+
+Starting with the 1.0.0 release, we offer a WAR artifact [8] for easy installation of the single machine RDF database. For custom development and cluster installations we recommend checking out the code from SVN using the tag for this release. The code will build automatically under eclipse. You can also build the code using the ant script. The cluster installer requires the use of the ant script.
+
+Starting with the 1.3.0 release, we offer a tarball artifact [10] for easy installation of the HA replication cluster.
+
+You can download the WAR (standalone) or HA artifacts from:
+
+http://sourceforge.net/projects/bigdata/
+
+You can checkout this release from:
+
+https://svn.code.sf.net/p/bigdata/code/tags/BIGDATA_RELEASE_1_3_4
+
+Critical or otherwise of note in this minor release:
+
+- #1036 (Journal leaks storage with SPARQL UPDATE and REST API)
+
+New features in 1.3.x:
+
+- Java 7 is now required.
+- High availability [10].
+- High availability load balancer.
+- New RDF/SPARQL workbench.
+- Blueprints API.
+- RDF Graph Mining Service (GASService) [12].
+- Reification Done Right (RDR) support [11].
+- Property Path performance enhancements.
+- Plus numerous other bug fixes and performance enhancements.
+
+Feature summary:
+
+- Highly Available Replication Clusters (HAJournalServer [10])
+- Single machine data storage to ~50B triples/quads (RWStore);
+- Clustered data storage is essentially unlimited (BigdataFederation);
+- Simple embedded and/or webapp deployment (NanoSparqlServer);
+- Triples, quads, or triples with provenance (SIDs);
+- Fast RDFS+ inference and truth maintenance;
+- Fast 100% native SPARQL 1.1 evaluation;
+- Integrated "analytic" query package;
+- %100 Java memory manager leverages the JVM native heap (no GC);
+
+Road map [3]:
+
+- Column-wise indexing;
+- Runtime Query Optimizer for quads;
+- Performance optimization for scale-out clusters; and
+- Simplified deployment, configuration, and administration for scale-out clusters.
+
+Change log:
+
+  Note: Versions with (*) MAY require data migration. For details, see [9].
+
+1.3.4:
+
+- http://trac.bigdata.com/ticket/946 (Empty PROJECTION causes IllegalArgumentException)
+- http://trac.bigdata.com/ticket/1036 (Journal leaks storage with SPARQL UPDATE and REST API)
+- http://trac.bigdata.com/ticket/1008 (remote service queries should put parameters in the request body when using POST)
+
+1.3.3:
+
+- http://trac.bigdata.com/ticket/980 (Object position of query hint is not a Literal (partial resolution - see #1028 as well))
+- http://trac.bigdata.com/ticket/1018 (Add the ability to track and cancel all queries issued through a BigdataSailRemoteRepositoryConnection)
+- http://trac.bigdata.com/ticket/1021 (Add critical section protection to AbstractJournal.abort() and BigdataSailConnection.rollback())
+- http://trac.bigdata.com/ticket/1024 (GregorianCalendar? does weird things before 1582)
+- http://trac.bigdata.com/ticket/1026 (SPARQL UPDATE with runtime errors causes problems with lexicon indices)
+- http://trac.bigdata.com/ticket/1028 (very rare NotMaterializedException: XSDBoolean(true))
+- http://trac.bigdata.com/ticket/1029 (RWStore commit state not correctly rolled back if abort fails on empty journal)
+- http://trac.bigdata.com/ticket/1030 (RWStorage stats cleanup)
+
+1.3.2:
+
+- http://trac.bigdata.com/ticket/1016 (Jetty/LBS issues when deployed as WAR under tomcat)
+- http://trac.bigdata.com/ticket/1010 (Upgrade apache http components to 1.3.1 (security))
+- http://trac.bigdata.com/ticket/1005 (Invalidate BTree objects if error occurs during eviction)
+- http://trac.bigdata.com/ticket/1004 (Concurrent binding problem)
+- http://trac.bigdata.com/ticket/1002 (Concurrency issues in JVMHashJoinUtility caused by MAX_PARALLEL query hint override)
+- http://trac.bigdata.com/ticket/1000 (Add configuration option to turn off bottom-up evaluation)
+- http://trac.bigdata.com/ticket/999 (Extend BigdataSailFactory to take arbitrary properties)
+- http://trac.bigdata.com/ticket/998 (SPARQL Update through BigdataGraph)
+- http://trac.bigdata.com/ticket/996 (Add custom prefix support for query results)
+- http://trac.bigdata.com/ticket/995 (Allow general purpose SPARQL queries through BigdataGraph)
+- http://trac.bigdata.com/ticket/992 (Deadlock between AbstractRunningQuery.cancel(), QueryLog.log(), and ArbitraryLengthPathTask)
+- http://trac.bigdata.com/ticket/990 (Query hints not recognized in FILTERs)
+- http://trac.bigdata.com/ticket/989 (Stored query service)
+- http://trac.bigdata.com/ticket/988 (Bad performance for FILTER EXISTS)
+- http://trac.bigdata.com/ticket/987 (maven build is broken)
+- http://trac.bigdata.com/ticket/986 (Improve locality for small allocation slots)
+- http://trac.bigdata.com/ticket/985 (Deadlock in BigdataTriplePatternMaterializer)
+- http://trac.bigdata.com/ticket/975 (HA Health Status Page)
+- http://trac.bigdata.com/ticket/974 (Name2Addr.indexNameScan(prefix) uses scan + filter)
+- http://trac.bigdata.com/ticket/973 (RWStore.commit() should be more defensive)
+- http://trac.bigdata.com/ticket/971 (Clarify HTTP Status codes for CREATE NAMESPACE operation)
+- http://trac.bigdata.com/ticket/968 (no link to wiki from workbench)
+- http://trac.bigdata.com/ticket/966 (Failed to get namespace under concurrent update)
+- http://trac.bigdata.com/ticket/965 (Can not run LBS mode with HA1 setup)
+- http://trac.bigdata.com/ticket/961 (Clone/modify namespace to create a new one)
+- http://trac.bigdata.com/ticket/960 (Export namespace properties in XML/Java properties text format)
+- http://trac.bigdata.com/ticket/938 (HA Load Balancer)
+- http://trac.bigdata.com/ticket/936 (Support larger metabits allocations)
+- http://trac.bigdata.com/ticket/932 (Bigdata/Rexster integration)
+- http://trac.bigdata.com/ticket/919 (Formatted Layout for Status pages)
+- http://trac.bigdata.com/ticket/899 (REST API Query Cancellation)
+- http://trac.bigdata.com/ticket/885 (Panels do not appear on startup in Firefox)
+- http://trac.bigdata.com/ticket/884 (Executing a new query should clear the old query results from the console)
+- http://trac.bigdata.com/ticket/882 (Abbreviate URIs that can be namespaced with one of the defined common namespaces)
+- http://trac.bigdata.com/ticket/880 (Can't explore an absolute URI with < >)
+- http://trac.bigdata.com/ticket/878 (Explore page looks weird when empty)
+- http://trac.bigdata.com/ticket/873 (Allow user to go use browser back & forward buttons to view explore history)
+- http://trac.bigdata.com/ticket/865 (OutOfMemoryError instead of Timeout for SPARQL Property Paths)
+- http://trac.bigdata.com/ticket/858 (Change explore URLs to include URI being clicked so user can see what they've clicked on before)
+- http://trac.bigdata.com/ticket/855 (AssertionError: Child does not have persistent identity)
+- http://trac.bigdata.com/ticket/850 (Search functionality in workbench)
+- http://trac.bigdata.com/ticket/847 (Query results panel should recognize well known namespaces for easier reading)
+- http://trac.bigdata.com/ticket/845 (Display the properties for a namespace)
+- http://trac.bigdata.com/ticket/843 (Create new tabs for status & performance counters, and add per namespace service/VoID description links)
+- http://trac.bigdata.com/ticket/837 (Configurator for new namespaces)
+- http://trac.bigdata.com/ticket/836 (Allow user to create namespace in the workbench)
+- http://trac.bigdata.com/ticket/830 (Output RDF data from queries in table format)
+- http://trac.bigdata.com/ticket/829 (Export query results)
+- http://trac.bigdata.com/ticket/828 (Save selected namespace in browser)
+- http://trac.bigdata.com/ticket/827 (Explore tab in workbench)
+- http://trac.bigdata.com/ticket/826 (Create shortcut to execute load/query)
+- http://trac.bigdata.com/ticket/823 (Disable textarea when a large file is selected)
+- http://trac.bigdata.com/ticket/820 (Allow non-file:// URLs to be loaded)
+- http://trac.bigdata.com/ticket/819 (Retrieve default namespace on page load)
+- http://trac.bigdata.com/ticket/772 (Query timeout only checked at operator start/stop)
+- http://trac.bigdata.com/ticket/765 (order by expr skips invalid expressions)
+- http://trac.bigdata.com/ticket/587 (JSP page to configure KBs)
+- http://trac.bigdata.com/ticket/343 (Stochastic assert in AbstractBTree#writeNodeOrLeaf() in CI)
+
+1.3.1:
+
+- http://trac.bigdata.com/ticket/242 (Deadlines do not play well with GROUP_BY, ORDER_BY, etc.)
+- http://trac.bigdata.com/ticket/256 (Amortize RTO cost)
+- http://trac.bigdata.com/ticket/257 (Support BOP fragments in the RTO.)
+- http://trac.bigdata.com/ticket/258 (Integrate RTO into SAIL)
+- http://trac.bigdata.com/ticket/259 (Dynamically increase RTO sampling limit.)
+- http://trac.bigdata.com/ticket/526 (Reification done right)
+- http://trac.bigdata.com/ticket/580 (Problem with the bigdata RDF/XML parser with sids)
+- http://trac.bigdata.com/ticket/622 (NSS using jetty+windows can lose connections (windows only; jdk 6/7 bug))
+- http://trac.bigdata.com/ticket/624 (HA Load Balancer)
+- http://trac.bigdata.com/ticket/629 (Graph processing API)
+- http://trac.bigdata.com/ticket/721 (Support HA1 configurations)
+- http://trac.bigdata.com/ticket/730 (Allow configuration of embedded NSS jetty server using jetty-web.xml)
+- http://trac.bigdata.com/ticket/759 (multiple filters interfere)
+- http://trac.bigdata.com/ticket/763 (Stochastic results with Analytic Query Mode)
+- http://trac.bigdata.com/ticket/774 (Converge on Java 7.)
+- http://trac.bigdata.com/ticket/779 (Resynchronization of socket level write replication protocol (HA))
+- http://trac.bigdata.com/ticket/780 (Incremental or asynchronous purge of HALog files)
+- http://trac.bigdata.com/ticket/782 (Wrong serialization version)
+- http://trac.bigdata.com/ticket/784 (Describe Limit/offset don't work as expected)
+- http://trac.bigdata.com/ticket/787 (Update documentations and samples, they are OUTDATED)
+- http://trac.bigdata.com/ticket/788 (Name2Addr does not report all root causes if the commit fails.)
+- http://trac.bigdata.com/ticket/789 (ant task to build sesame fails, docs for setting up bigdata for sesame are ancient)
+- http://trac.bigdata.com/ticket/790 (should not be pruning any children)
+- http://trac.bigdata.com/ticket/791 (Clean up query hints)
+- http://trac.bigdata.com/ticket/793 (Explain reports incorrect value for opCount)
+- http://trac.bigdata.com/ticket/796 (Filter assigned to sub-query by query generator is dropped from evaluation)
+- http://trac.bigdata.com/ticket/797 (add sbt setup to getting started wiki)
+- http://trac.bigdata.com/ticket/798 (Solution order not always preserved)
+- http://trac.bigdata.com/ticket/799 (mis-optimation of quad pattern vs triple pattern)
+- http://trac.bigdata.com/ticket/802 (Optimize DatatypeFactory instantiation in DateTimeExtension)
+- http://trac.bigdata.com/ticket/803 (prefixMatch does not work in full text search)
+- http://trac.bigdata.com/ticket/804 (update bug deleting quads)
+- http://trac.bigdata.com/ticket/806 (Incorrect AST generated for OPTIONAL { SELECT })
+- http://trac.bigdata.com/ticket/808 (Wildcard search in bigdata for type suggessions)
+- http://trac.bigdata.com/ticket/810 (Expose GAS API as SPARQL SERVICE)
+- http://trac.bigdata.com/ticket/815 (RDR query does too much work)
+- http://trac.bigdata.com/ticket/816 (Wildcard projection ignores variables inside a SERVICE call.)
+- http://trac.bigdata.com/ticket/817 (Unexplained increase in journal size)
+- http://trac.bigdata.com/ticket/821 (Reject large files, rather then storing them in a hidden variable)
+- http://trac.bigdata.com/ticket/831 (UNION with filter issue)
+- http://trac.bigdata.com/ticket/841 (Using "VALUES" in a query returns lexical error)
+- http://trac.bigdata.com/ticket/848 (Fix SPARQL Results JSON writer to write the RDR syntax)
+- http://trac.bigdata.com/ticket/849 (Create writers that support the RDR syntax)
+- http://trac.bigdata.com/ticket/851 (RDR GAS interface)
+- http://trac.bigdata.com/ticket/852 (RemoteRepository.cancel() does not consume the HTTP response entity.)
+- http://trac.bigdata.com/ticket/853 (Follower does not accept POST of idempotent operations (HA))
+- http://trac.bigdata.com/ticket/854 (Allow override of maximum length before converting an HTTP GET to an HTTP POST)
+- http://trac.bigdata.com/ticket/855 (AssertionError: Child does not have persistent identity)
+- http://trac.bigdata.com/ticket/862 (Create parser for JSON SPARQL Results)
+- http://trac.bigdata.com/ticket/863 (HA1 commit failure)
+- http://trac.bigdata.com/ticket/866 (Batch remove API for the SAIL)
+- http://trac.bigdata.com/ticket/867 (NSS concurrency problem with list namespaces and create namespace)
+- http://trac.bigdata.com/ticket/869 (HA5 test suite)
+- http://trac.bigdata.com/ticket/872 (Full text index range count optimization)
+- http://trac.bigdata.com/ticket/874 (FILTER not applied when there is UNION in the same join group)
+- http://trac.bigdata.com/ticket/876 (When I upload a file I want to see the filename.)
+- http://trac.bigdata.com/ticket/877 (RDF Format selector is invisible)
+- http://trac.bigdata.com/ticket/883 (CANCEL Query fails on non-default kb namespace on HA follower.)
+- http://trac.bigdata.com/ticket/886 (Provide workaround for bad reverse DNS setups.)
+- http://trac.bigdata.com/ticket/887 (BIND is leaving a variable unbound)
+- http://trac.bigdata.com/ticket/892 (HAJournalServer does not die if zookeeper is not running)
+- http://trac.bigdata.com/ticket/893 (large sparql insert optimization slow?)
+- http://trac.bigdata.com/ticket/894 (unnecessary synchronization)
+- http://trac.bigdata.com/ticket/895 (stack overflow in populateStatsMap)
+- http://trac.bigdata.com/ticket/902 (Update Basic Bigdata Chef Cookbook)
+- http://trac.bigdata.com/ticket/904 (AssertionError: PropertyPathNode got to ASTJoinOrderByType.optimizeJoinGroup)
+- http://trac.bigdata.com/ticket/905 (unsound combo query optimization: union + filter)
+- http://trac.bigdata.com/ticket/906 (DC Prefix Button Appends "</li>")
+- http://trac.bigdata.com/ticket/907 (Add a quick-start ant task for the BD Server "ant start")
+- http://trac.bigdata.com/ticket/912 (Provide a configurable IAnalyzerFactory)
+- http://trac.bigdata.com/ticket/913 (Blueprints API Implementation)
+- http://trac.bigdata.com/ticket/914 (Settable timeout on SPARQL Query (REST API))
+- http://trac.bigdata.com/ticket/915 (DefaultAnalyzerFactory issues)
+- http://trac.bigdata.com/ticket/920 (Content negotiation orders accept header scores in reverse)
+- http://trac.bigdata.com/ticket/939 (NSS does not start from command line: bigdata-war/src not found.)
+- http://trac.bigdata.com/ticket/940 (ProxyServlet in web.xml breaks tomcat WAR (HA LBS)
+
+1.3.0:
+
+- http://trac.bigdata.com/ticket/530 (Journal HA)
+- http://trac.bigdata.com/ticket/621 (Coalesce write cache records and install reads in cache)
+- http://trac.bigdata.com/ticket/623 (HA TXS)
+- http://trac.bigdata.com/ticket/639 (Remove triple-buffering in RWStore)
+- http://trac.bigdata.com/ticket/645 (HA backup)
+- http://trac.bigdata.com/ticket/646 (River not compatible with newer 1.6.0 and 1.7.0 JVMs)
+- http://trac.bigdata.com/ticket/648 (Add a custom function to use full text index for filtering.)
+- http://trac.bigdata.com/ticket/651 (RWS test failure)
+- http://trac.bigdata.com/ticket/652 (Compress write cache blocks for replication and in HALogs)
+- http://trac.bigdata.com/ticket/662 (Latency on followers during commit on leader)
+- http://trac.bigdata.com/ticket/663 (Issue with OPTIONAL blocks)
+- http://trac.bigdata.com/ticket/664 (RWStore needs post-commit protocol)
+- http://trac.bigdata.com/ticket/665 (HA3 LOAD non-responsive with node failure)
+- http://trac.bigdata.com/ticket/666 (Occasional CI deadlock in HALogWriter testConcurrentRWWriterReader)
+- http://trac.bigdata.com/ticket/670 (Accumulating HALog files cause latency for HA commit)
+- http://trac.bigdata.com/ticket/671 (Query on follower fails during UPDATE on leader)
+- http://trac.bigdata.com/ticket/673 (DGC in release time consensus protocol causes native thread leak in HAJournalServer at each commit)
+- http://trac.bigdata.com/ticket/674 (WCS write cache compaction causes errors in RWS postHACommit())
+- http://trac.bigdata.com/ticket/676 (Bad patterns for timeout computations)
+- http://trac.bigdata.com/ticket/677 (HA deadlock under UPDATE + QUERY)
+- http://trac.bigdata.com/ticket/678 (DGC Thread and Open File Leaks: sendHALogForWriteSet())
+- http://trac.bigdata.com/ticket/679 (HAJournalServer can not restart due to logically empty log file)
+- http://trac.bigdata.com/ticket/681 (HAJournalServer deadlock: pipelineRemove() and getLeaderId())
+- http://trac.bigdata.com/ticket/684 (Optimization with skos altLabel)
+- http://trac.bigdata.com/ticket/686 (Consensus protocol does not detect clock skew correctly)
+- http://trac.bigdata.com/ticket/687 (HAJournalServer Cache not populated)
+- http://trac.bigdata.com/ticket/689 (Missing URL encoding in RemoteRepositoryManager)
+- http://trac.bigdata.com/ticket/690 (Error when using the alias "a" instead of rdf:type for a multipart insert)
+- http://trac.bigdata.com/ticket/691 (Failed to re-interrupt thread in HAJournalServer)
+- http://trac.bigdata.com/ticket/692 (Failed to re-interrupt thread)
+- http://trac.bigdata.com/ticket/693 (OneOrMorePath SPARQL property path expression ignored)
+- http://trac.bigdata.com/ticket/694 (Transparently cancel update/query in RemoteRepository)
+- http://trac.bigdata.com/ticket/695 (HAJournalServer reports "follower" but is in SeekConsensus and is not participating in commits.)
+- http://trac.bigdata.com/ticket/701 (Problems in BackgroundTupleResult)
+- http://trac.bigdata.com/ticket/702 (InvocationTargetException on /namespace call)
+- http://trac.bigdata.com/ticket/704 (ask does not return json)
+- http://trac.bigdata.com/ticket/705 (Race between QueryEngine.putIfAbsent() and shutdownNow())
+- http://trac.bigdata.com/ticket/706 (MultiSourceSequentialCloseableIterator.nextSource() can throw NPE)
+- http://trac.bigdata.com/ticket/707 (BlockingBuffer.close() does not unblock threads)
+- http://trac.bigdata.com/ticket/708 (BIND heisenbug - race condition on select query with BIND)
+- http://trac.bigdata.com/ticket/711 (sparql protocol: mime type application/sparql-query)
+- http://trac.bigdata.com/ticket/712 (SELECT ?x { OPTIONAL { ?x eg:doesNotExist eg:doesNotExist } } incorrect)
+- http://trac.bigdata.com/ticket/715 (Interrupt of thread submitting a query for evaluation does not always terminate the AbstractRunningQuery)
+- http://trac.bigdata.com/ticket/716 (Verify that IRunningQuery instances (and nested queries) are correctly cancelled when interrupted)
+- http://trac.bigdata.com/ticket/718 (HAJournalServer needs to handle ZK client connection loss)
+- http://trac.bigdata.com/ticket/720 (HA3 simultaneous service start failure)
+- http://trac.bigdata.com/ticket/723 (HA asynchronous tasks must be canceled when invariants are changed)
+- http://trac.bigdata.com/ticket/725 (FILTER EXISTS in subselect)
+- http://trac.bigdata.com/ticket/726 (Logically empty HALog for committed transaction)
+- http://trac.bigdata.com/ticket/727 (DELETE/INSERT fails with OPTIONAL non-matching WHERE)
+- http://trac.bigdata.com/ticket/728 (Refactor to create HAClient)
+- http://trac.bigdata.com/ticket/729 (ant bundleJar not working)
+- http://trac.bigdata.com/ticket/731 (CBD and Update leads to 500 status code)
+- http://trac.bigdata.com/ticket/732 (describe statement limit does not work)
+- http://trac.bigdata.com/ticket/733 (Range optimizer not optimizing Slice service)
+- http://trac.bigdata.com/ticket/734 (two property paths interfere)
+- http://trac.bigdata.com/ticket/736 (MIN() malfunction)
+- http://trac.bigdata.com/ticket/737 (class cast exception)
+- http://trac.bigdata.com/ticket/739 (Inconsistent treatment of bind and optional property path)
+- http://trac.bigdata.com/ticket/741 (ctc-striterators should build as independent top-level project (Apache2))
+- http://trac.bigdata.com/ticket/743 (AbstractTripleStore.destroy() does not filter for correct prefix)
+- http://trac.bigdata.com/ticket/746 (Assertion error)
+- http://trac.bigdata.com/ticket/747 (BOUND bug)
+- http://trac.bigdata.com/ticket/748 (incorrect join with subselect renaming vars)
+- http://trac.bigdata.com/ticket/754 (Failure to setup SERVICE hook and changeLog for Unisolated and Read/Write connections)
+- http://trac.bigdata.com/ticket/755 (Concurrent QuorumActors can interfere leading to failure to progress)
+- http://trac.bigdata.com/ticket/756 (order by and group_concat)
+- http://trac.bigdata.com/ticket/760 (Code review on 2-phase commit protocol)
+- http://trac.bigdata.com/ticket/764 (RESYNC failure (HA))
+- http://trac.bigdata.com/ticket/770 (alpp ordering)
+- http://trac.bigdata.com/ticket/772 (Query timeout only checked at operator start/stop.)
+- http://trac.bigdata.com/ticket/776 (Closed as duplicate of #490)
+- http://trac.bigdata.com/ticket/778 (HA Leader fail results in transient problem with allocations on other services)
+- http://trac.bigdata.com/ticket/783 (Operator Alerts (HA))
+
+1.2.4:
+
+- http://trac.bigdata.com/ticket/777 (ConcurrentModificationException in ASTComplexOptionalOptimizer)
+
+1.2.3:
+
+- http://trac.bigdata.com/ticket/168 (Maven Build)
+- http://trac.bigdata.com/ticket/196 (Journal leaks memory).
+- http://trac.bigdata.com/ticket/235 (Occasional deadlock in CI runs in com.bigdata.io.writecache.TestAll)
+- http://trac.bigdata.com/ticket/312 (CI (mock) quorums deadlock)
+- http://trac.bigdata.com/ticket/405 (Optimize hash join for subgroups with no incoming bound vars.)
+- http://trac.bigdata.com/ticket/412 (StaticAnalysis#getDefinitelyBound() ignores exogenous variables.)
+- http://trac.bigdata.com/ticket/485 (RDFS Plus Profile)
+- http://trac.bigdata.com/ticket/495 (SPARQL 1.1 Property Paths)
+- http://trac.bigdata.com/ticket/519 (Negative parser tests)
+- http://trac.bigdata.com/ticket/531 (SPARQL UPDATE for SOLUTION SETS)
+- http://trac.bigdata.com/ticket/535 (Optimize JOIN VARS for Sub-Selects)
+- http://trac.bigdata.com/ticket/555 (Support PSOutputStream/InputStream at IRawStore)
+- http://trac.bigdata.com/ticket/559 (Use RDFFormat.NQUADS as the format identifier for the NQuads parser)
+- http://trac.bigdata.com/ticket/570 (MemoryManager Journal does not implement all methods).
+- http://trac.bigdata.com/ticket/575 (NSS Admin API)
+- http://trac.bigdata.com/ticket/577 (DESCRIBE with OFFSET/LIMIT needs to use sub-select)
+- http://trac.bigdata.com/ticket/578 (Concise Bounded Description (CBD))
+- http://trac.bigdata.com/ticket/579 (CONSTRUCT should use distinct SPO filter)
+- http://trac.bigdata.com/ticket/583 (VoID in ServiceDescription)
+- http://trac.bigdata.com/ticket/586 (RWStore immedateFree() not removing Checkpoint addresses from the historical index cache.)
+- http://trac.bigdata.com/ticket/590 (nxparser fails with uppercase language tag)
+- http://trac.bigdata.com/ticket/592 (Optimize RWStore allocator sizes)
+- http://trac.bigdata.com/ticket/593 (Ugrade to Sesame 2.6.10)
+- http://trac.bigdata.com/ticket/594 (WAR was deployed using TRIPLES rather than QUADS by default)
+- http://trac.bigdata.com/ticket/596 (Change web.xml parameter names to be consistent with Jini/River)
+- http://trac.bigdata.com/ticket/597 (SPARQL UPDATE LISTENER)
+- http://trac.bigdata.com/ticket/598 (B+Tree branching factor and HTree addressBits are confused in their NodeSerializer implementations)
+- http://trac.bigdata.com/ticket/599 (BlobIV for blank node : NotMaterializedException)
+- http://trac.bigdata.com/ticket/600 (BlobIV collision counter hits false limit.)
+- http://trac.bigdata.com/ticket/601 (Log uncaught exceptions)
+- http://trac.bigdata.com/ticket/602 (RWStore does not discard logged deletes on reset())
+- http://trac.bigdata.com/ticket/607 (History service / index)
+- http://trac.bigdata.com/ticket/608 (LOG BlockingBuffer not progressing at INFO or lower level)
+- http://trac.bigdata.com/ticket/609 (bigdata-ganglia is required dependency for Journal)
+- http://trac.bigdata.com/ticket/611 (The code that processes SPARQL Update has a typo)
+- http://trac.bigdata.com/ticket/612 (Bigdata scale-up depends on zookeper)
+- http://trac.bigdata.com/ticket/613 (SPARQL UPDATE response inlines large DELETE or INSERT triple graphs)
+- http://trac.bigdata.com/ticket/614 (static join optimizer does not get ordering right when multiple tails share vars with ancestry)
+- http://trac.bigdata.com/ticket/615 (AST2BOpUtility wraps UNION with an unnecessary hash join)
+- http://trac.bigdata.com/ticket/616 (Row store read/update not isolated on Journal)
+- http://trac.bigdata.com/ticket/617 (Concurrent KB create fails with "No axioms defined?")
+- http://trac.bigdata.com/ticket/618 (DirectBufferPool.poolCapacity maximum of 2GB)
+- http://trac.bigdata.com/ticket/619 (RemoteRepository class should use application/x-www-form-urlencoded for large POST requests)
+- http://trac.bigdata.com/ticket/620 (UpdateServlet fails to parse MIMEType when doing conneg.)
+- http://trac.bigdata.com/ticket/626 (Expose performance counters for read-only indices)
+- http://trac.bigdata.com/ticket/627 (Environment variable override for NSS properties file)
+- http://trac.bigdata.com/ticket/628 (Create a bigdata-client jar for the NSS REST API)
+- http://trac.bigdata.com/ticket/631 (ClassCastException in SIDs mode query)
+- http://trac.bigdata.com/ticket/632 (NotMaterializedException when a SERVICE call needs variables that are provided as query input bindings)
+- http://trac.bigdata.com/ticket/633 (ClassCastException when binding non-uri values to a variable that occurs in predicate position)
+- http://trac.bigdata.com/ticket/638 (Change DEFAULT_MIN_RELEASE_AGE to 1ms)
+- http://trac.bigdata.com/ticket/640 (Conditionally rollback() BigdataSailConnection if dirty)
+- http://trac.bigdata.com/ticket/642 (Property paths do not work inside of exists/not exists filters)
+- http://trac.bigdata.com/ticket/643 (Add web.xml parameters to lock down public NSS end points)
+- http://trac.bigdata.com/ticket/644 (Bigdata2Sesame2BindingSetIterator can fail to notice asynchronous close())
+- http://trac.bigdata.com/ticket/650 (Can not POST RDF to a graph using REST API)
+- http://trac.bigdata.com/ticket/654 (Rare AssertionError in WriteCache.clearAddrMap())
+- http://trac.bigdata.com/ticket/655 (SPARQL REGEX operator does not perform case-folding correctly for Unicode data)
+- http://trac.bigdata.com/ticket/656 (InFactory bug when IN args consist of a single literal)
+- http://trac.bigdata.com/ticket/647 (SIDs mode creates unnecessary hash join for GRAPH group patterns)
+- http://trac.bigdata.com/ticket/667 (Provide NanoSparqlServer initialization hook)
+- http://trac.bigdata.com/ticket/669 (Doubly nested subqueries yield no results with LIMIT)
+- http://trac.bigdata.com/ticket/675 (Flush indices in parallel during checkpoint to reduce IO latency)
+- http://trac.bigdata.com/ticket/682 (AtomicRowFilter UnsupportedOperationException)
+
+1.2.2:
+
+- http://trac.bigdata.com/ticket/586 (RWStore immedateFree() not removing Checkpoint addresses from the historical index cache.)
+- http://trac.bigdata.com/ticket/602 (RWStore does not discard logged deletes on reset())
+- http://trac.bigdata.com/ticket/603 (Prepare critical maintenance release as branch of 1.2.1)
+
+1.2.1:
+
+- http://trac.bigdata.com/ticket/533 (Review materialization for inline IVs)
+- http://trac.bigdata.com/ticket/539 (NotMaterializedException with REGEX and Vocab)
+- http://trac.bigdata.com/ticket/540 (SPARQL UPDATE using NSS via index.html)
+- http://trac.bigdata.com/ticket/541 (MemoryManaged backed Journal mode)
+- http://trac.bigdata.com/ticket/546 (Index cache for Journal)
+- http://trac.bigdata.com/ticket/549 (BTree can not be cast to Name2Addr (MemStore recycler))
+- http://trac.bigdata.com/ticket/550 (NPE in Leaf.getKey() : root cause was user error)
+- http://trac.bigdata.com/ticket/558 (SPARQL INSERT not working in same request after INSERT DATA)
+- http://trac.bigdata.com/ticket/562 (Sub-select in INSERT cause NPE in UpdateExprBuilder)
+- http://trac.bigdata.com/ticket/563 (DISTINCT ORDER BY)
+- http://trac.bigdata.com/ticket/567 (Failure to set cached value on IV results in incorrect behavior for complex UPDATE operation)
+- http://trac.bigdata.com/ticket/568 (DELETE WHERE fails with Java AssertionError)
+- http://trac.bigdata.com/ticket/569 (LOAD-CREATE-LOAD using virgin journal fails with "Graph exists" exception)
+- http://trac.bigdata.com/ticket/571 (DELETE/INSERT WHERE handling of blank nodes)
+- http://trac.bigdata.com/ticket/573 (NullPointerException when attempting to INSERT DATA containing a blank node)
+
+1.2.0: (*)
+
+- http://trac.bigdata.com/ticket/92 (Monitoring webapp)
+- http://trac.bigdata.com/ticket/267 (Support evaluation of 3rd party operators)
+- http://trac.bigdata.com/ticket/337 (Compact and efficient movement of binding sets between nodes.)
+- http://trac.bigdata.com/ticket/433 (Cluster leaks threads under read-only index operations: DGC thread leak)
+- http://trac.bigdata.com/ticket/437 (Thread-local cache combined with unbounded thread pools causes effective memory leak: termCache memory leak & thread-local buffers)
+- http://trac.bigdata.com/ticket/438 (KeyBeforePartitionException on cluster)
+- http://trac.bigdata.com/ticket/439 (Class loader problem)
+- http://trac.bigdata.com/ticket/441 (Ganglia integration)
+- http://trac.bigdata.com/ticket/443 (Logger for RWStore transaction service and recycler)
+- http://trac.bigdata.com/ticket/444 (SPARQL query can fail to notice when IRunningQuery.isDone() on cluster)
+- http://trac.bigdata.com/ticket/445 (RWStore does not track tx release correctly)
+- http://trac.bigdata.com/ticket/446 (HTTP Repostory broken with bigdata 1.1.0)
+- http://trac.bigdata.com/ticket/448 (SPARQL 1.1 UPDATE)
+- http://trac.bigdata.com/ticket/449 (SPARQL 1.1 Federation extension)
+- http://trac.bigdata.com/ticket/451 (Serialization error in SIDs mode on cluster)
+- http://trac.bigdata.com/ticket/454 (Global Row Store Read on Cluster uses Tx)
+- http://trac.bigdata.com/ticket/456 (IExtension implementations do point lookups on lexicon)
+- http://trac.bigdata.com/ticket/457 ("No such index" on cluster under concurrent query workload)
+- http://trac.bigdata.com/ticket/458 (Java level deadlock in DS)
+- http://trac.bigdata.com/ticket/460 (Uncaught interrupt resolving RDF terms)
+- http://trac.bigdata.com/ticket/461 (KeyAfterPartitionException / KeyBeforePartitionException on cluster)
+- http://trac.bigdata.com/ticket/463 (NoSuchVocabularyItem with LUBMVocabulary for DerivedNumericsExtension)
+- http://trac.bigdata.com/ticket/464 (Query statistics do not update correctly on cluster)
+- http://trac.bigdata.com/ticket/465 (Too many GRS reads on cluster)
+- http://trac.bigdata.com/ticket/469 (Sail does not flush assertion buffers before query)
+- http://trac.bigdata.com/ticket/472 (acceptTaskService pool size on cluster)
+- http://trac.bigdata.com/ticket/475 (Optimize serialization for query messages on cluster)
+- http://trac.bigdata.com/ticket/476 (Test suite for writeCheckpoint() and recycling for BTree/HTree)
+- http://trac.bigdata.com/ticket/478 (Cluster does not map input solution(s) across shards)
+- http://trac.bigdata.com/ticket/480 (Error releasing deferred frees using 1.0.6 against a 1.0.4 journal)
+- http://trac.bigdata.com/ticket/481 (PhysicalAddressResolutionException against 1.0.6)
+- http://trac.bigdata.com/ticket/482 (RWStore reset() should be thread-safe for concurrent readers)
+- http://trac.bigdata.com/ticket/484 (Java API for NanoSparqlServer REST API)
+- http://trac.bigdata.com/ticket/491 (AbstractTripleStore.destroy() does not clear the locator cache)
+- http://trac.bigdata.com/ticket/492 (Empty chunk in ThickChunkMessage (cluster))
+- http://trac.bigdata.com/ticket/493 (Virtual Graphs)
+- http://trac.bigdata.com/ticket/496 (Sesame 2.6.3)
+- http://trac.bigdata.com/ticket/497 (Implement STRBEFORE, STRAFTER, and REPLACE)
+- http://trac.bigdata.com/ticket/498 (Bring bigdata RDF/XML parser up to openrdf 2.6.3.)
+- http://trac.bigdata.com/ticket/500 (SPARQL 1.1 Service Description)
+- http://www.openrdf.org/issues/browse/SES-884 (Aggregation with an solution set as input should produce an empty solution as output)
+- http://www.openrdf.org/issues/browse/SES-862 (Incorrect error handling for SPARQL aggregation; fix in 2.6.1)
+- http://www.openrdf.org/issues/browse/SES-873 (Order the same Blank Nodes together in ORDER BY)
+- http://trac.bigdata.com/ticket/501 (SPARQL 1.1 BINDINGS are ignored)
+- http://trac.bigdata.com/ticket/503 (Bigdata2Sesame2BindingSetIterator throws QueryEvaluationException were it should throw NoSuchElementException)
+- http://trac.bigdata.com/ticket/504 (UNION with Empty Group Pattern)
+- http://trac.bigdata.com/ticket/505 (Exception when using SPARQL sort & statement identifiers)
+- http://trac.bigdata.com/ticket/506 (Load, closure and query performance in 1.1.x versus 1.0.x)
+- http://trac.bigdata.com/ticket/508 (LIMIT causes hash join utility to log errors)
+- http://trac.bigdata.com/ticket/513 (Expose the LexiconConfiguration to Function BOPs)
+- http://trac.bigdata.com/ticket/515 (Query with two "FILTER NOT EXISTS" expressions returns no results)
+- http://trac.bigdata.com/ticket/516 (REGEXBOp should cache the Pattern when it is a constant)
+- http://trac.bigdata.com/ticket/517 (Java 7 Compiler Compatibility)
+- http://trac.bigdata.com/ticket/518 (Review function bop subclass hierarchy, optimize datatype bop, etc.)
+- http://trac.bigdata.com/ticket/520 (CONSTRUCT WHERE shortcut)
+- http://trac.bigdata.com/ticket/521 (Incremental materialization of Tuple and Graph query results)
+- http://trac.bigdata.com/ticket/525 (Modify the IChangeLog interface to support multiple agents)
+- http://trac.bigdata.com/ticket/527 (Expose timestamp of LexiconRelation to function bops)
+- http://trac.bigdata.com/ticket/532 (ClassCastException during hash join (can not be cast to TermId))
+- http://trac.bigdata.com/ticket/533 (Review materialization for inline IVs)
+- http://trac.bigdata.com/ticket/534 (BSBM BI Q5 error using MERGE JOIN)
+
+1.1.0 (*)
+
+ - http://trac.bigdata.com/ticket/23 (Lexicon joins)
+ - http://trac.bigdata.com/ticket/109 (Store large literals as "blobs")
+ - http://trac.bigdata.com/ticket/181 (Scale-out LUBM "how to" in wiki and build.xml are out of date.)
+ - http://trac.bigdata.com/ticket/203 (Implement an persistence capable hash table to support analytic query)
+ - http://trac.bigdata.com/ticket/209 (AccessPath should visit binding sets rather than elements for high level query.)
+ - http://trac.bigdata.com/ticket/227 (SliceOp appears to be necessary when operator plan should suffice without)
+ - http://trac.bigdata.com/ticket/232 (Bottom-up evaluation semantics).
+ - http://trac.bigdata.com/ticket/246 (Derived xsd numeric data types must be inlined as extension types.)
+ - http://trac.bigdata.com/ticket/254 (Revisit pruning of intermediate variable bindings during query execution)
+ - http://trac.bigdata.com/ticket/261 (Lift conditions out of subqueries.)
+ - http://trac.bigdata.com/ticket/300 (Native ORDER BY)
+ - http://trac.bigdata.com/ticket/324 (Inline predeclared URIs and namespaces in 2-3 bytes)
+ - http://trac.bigdata.com/ticket/330 (NanoSparqlServer does not locate "html" resources when run from jar)
+ - http://trac.bigdata.com/ticket/334 (Support inlining of unicode data in the statement indices.)
+ - http://trac.bigdata.com/ticket/364 (Scalable default graph evaluation)
+ - http://trac.bigdata.com/ticket/368 (Prune variable bindings during query evaluation)
+ - http://trac.bigdata.com/ticket/370 (Direct translation of openrdf AST to bigdata AST)
+ - http://trac.bigdata.com/ticket/373 (Fix StrBOp and other IValueExpressions)
+ - http://trac.bigdata.com/ticket/377 (Optimize OPTIONALs with multiple statement patterns.)
+ - http://trac.bigdata.com/ticket/380 (Native SPARQL evaluation on cluster)
+ - http://trac.bigdata.com/ticket/387 (Cluster does not compute closure)
+ - http://trac.bigdata.com/ticket/395 (HTree hash join performance)
+ - http://trac.bigdata.com/ticket/401 (inline xsd:unsigned datatypes)
+ - http://trac.bigdata.com/ticket/408 (xsd:string cast fails for non-numeric data)
+ - http://trac.bigdata.com/ticket/421 (New query hints model.)
+ - http://trac.bigdata.com/ticket/431 (Use of read-only tx per query defeats cache on cluster)
+
+1.0.3
+
+ - http://trac.bigdata.com/ticket/217 (BTreeCounters does not track bytes released)
+ - http://trac.bigdata.com/ticket/269 (Refactor performance counters using accessor interface)
+ - http://trac.bigdata.com/ticket/329 (B+Tree should delete bloom filter when it is disabled.)
+ - http://trac.bigdata.com/ticket/372 (RWStore does not prune the CommitRecordIndex)
+ - http://trac.bigdata.com/ticket/375 (Persistent memory leaks (RWStore/DISK))
+ - http://trac.bigdata.com/ticket/385 (FastRDFValueCoder2: ArrayIndexOutOfBoundsException)
+ - http://trac.bigdata.com/ticket/391 (Release age advanced on WORM mode journal)
+ - http://trac.bigdata.com/ticket/392 (Add a DELETE by access path method to the NanoSparqlServer)
+ - http://trac.bigdata.com/ticket/393 (Add "context-uri" request parameter to specify the default context for INSERT in the REST API)
+ - http://trac.bigdata.com/ticket/394 (log4j configuration error message in WAR deployment)
+ - http://trac.bigdata.com/ticket/399 (Add a fast range count method to the REST API)
+ - http://trac.bigdata.com/ticket/422 (Support temp triple store wrapped by a BigdataSail)
+ - http://trac.bigdata.com/ticket/424 (NQuads support for NanoSparqlServer)
+ - http://trac.bigdata.com/ticket/425 (Bug fix to DEFAULT_RDF_FORMAT for bulk data loader in scale-out)
+ - http://trac.bigdata.com/ticket/426 (Support either lockfile (procmail) and dotlockfile (liblockfile1) in scale-out)
+ - http://trac.bigdata.com/ticket/427 (BigdataSail#getReadOnlyConnection() race condition with concurrent commit)
+ - http://trac.bigdata.com/ticket/435 (Address is 0L)
+ - http://trac.bigdata.com/ticket/436 (TestMROWTransactions failure in CI)
+
+1.0.2
+
+ - http://trac.bigdata.com/ticket/32 (Query time expansion of (foo rdf:type rdfs:Resource) drags in SPORelation for scale-out.)
+ - http://trac.bigdata.com/ticket/181 (Scale-out LUBM "how to" in wiki and build.xml are out of date.)
+ - http://trac.bigdata.com/ticket/356 (Query not terminated by error.)
+ - http://trac.bigdata.com/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.)
+ - http://trac.bigdata.com/ticket/361 (IRunningQuery not closed promptly.)
+ - http://trac.bigdata.com/ticket/371 (DataLoader fails to load resources available from the classpath.)
+ - http://trac.bigdata.com/ticket/376 (Support for the streaming of bigdata IBindingSets into a sparql query.)
+ - http://trac.bigdata.com/ticket/378 (ClosedByInterruptException during heavy query mix.)
+ - http://trac.bigdata.com/ticket/379 (NotSerializableException for SPOAccessPath.)
+ - http://trac.bigdata.com/ticket/382 (Change dependencies to Apache River 2.2.0)
+
+1.0.1 (*)
+
+ - http://trac.bigdata.com/ticket/107 (Unicode clean schema names in the sparse row store).
+ - http://trac.bigdata.com/ticket/124 (TermIdEncoder should use more bits for scale-out).
+ - http://trac.bigdata.com/ticket/225 (OSX requires specialized performance counter collection classes).
+ - http://trac.bigdata.com/ticket/348 (BigdataValueFactory.asValue() must return new instance when DummyIV is used).
+ - http://trac.bigdata.com/ticket/349 (TermIdEncoder limits Journal to 2B distinct RDF Values per triple/quad store instance).
+ - http://trac.bigdata.com/ticket/351 (SPO not Serializable exception in SIDS mode (scale-out)).
+ - http://trac.bigdata.com/ticket/352 (ClassCastException when querying with binding-values that are not known to the database).
+ - http://trac.bigdata.com/ticket/353 (UnsupportedOperatorException for some SPARQL queries).
+ - http://trac.bigdata.com/ticket/355 (Query failure when comparing with non materialized value).
+ - http://trac.bigdata.com/ticket/357 (RWStore reports "FixedAllocator returning null address, with freeBits".)
+ - http://trac.bigdata.com/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.)
+ - http://trac.bigdata.com/ticket/362 (log4j - slf4j bridge.)
+
+For more information about bigdata(R), please see the following links:
+
+[1] http://wiki.bigdata.com/wiki/index.php/Main_Page
+[2] http://wiki.bigdata.com/wiki/index.php/GettingStarted
+[3] http://wiki.bigdata.com/wiki/index.php/Roadmap
+[4] http://www.bigdata.com/bigdata/docs/api/
+[5] http://sourceforge.net/projects/bigdata/
+[6] http://www.bigdata.com/blog
+[7] http://www.systap.com/bigdata.htm
+[8] http://sourceforge.net/projects/bigdata/files/bigdata/
+[9] http://wiki.bigdata.com/wiki/index.php/DataMigration
+[10] http://wiki.bigdata.com/wiki/index.php/HAJournalServer
+[11] http://www.bigdata.com/whitepapers/reifSPARQL.pdf
+[12] http://wiki.bigdata.com/wiki/index.php/RDF_GAS_API
+
+About bigdata:
+
+Bigdata(R) is a horizontally-scaled, general purpose storage and computing fabric for ordered data (B+Trees), designed to operate on either a single server or a cluster of commodity hardware. Bigdata(R) uses dynamically partitioned key-range shards in order to remove any realistic scaling limits - in principle, bigdata(R) may be deployed on 10s, 100s, or even thousands of machines and new capacity may be added incrementally without requiring the full reload of all data. The bigdata(R) RDF database supports RDFS and OWL Lite reasoning, high-level query (SPARQL), and datum level provenance.

Modified: branches/BIGDATA_RELEASE_1_3_0/build.properties
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/build.properties	2014-11-05 18:28:32 UTC (rev 8704)
+++ branches/BIGDATA_RELEASE_1_3_0/build.properties	2014-11-06 14:21:08 UTC (rev 8705)
@@ -90,12 +90,12 @@
 release.dir=ant-release
 
 # The build version (note: 0.82b -> 0.82.0); 0.83.2 is followed by 1.0.0
-build.ver=1.3.3
+build.ver=1.3.4
 build.ver.osgi=1.0
 
 # Set true to do a snapshot build. This changes the value of ${version} to
 # include the date.
-snapshot=true
+snapshot=false
 
 # Javadoc build may be disabled using this property. The javadoc target will
 # not be executed unless this property is defined (its value does not matter).
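Ticket #1008 in the 1.3.4 change log above concerns putting SPARQL query parameters in the HTTP request body when using POST, so that large queries do not overflow request-line length limits. Below is a minimal sketch of that wire-level pattern using only java.net; the endpoint http://localhost:9999/bigdata/sparql is a hypothetical local NanoSparqlServer install, and this is not the RemoteRepository client's actual implementation.

import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class SparqlPost {
    public static void main(final String[] args) throws Exception {
        final String endpoint = "http://localhost:9999/bigdata/sparql"; // assumed
        final String query = "SELECT * WHERE { ?s ?p ?o } LIMIT 10";
        // The query parameter travels in the POST body, not the URL.
        final String body = "query=" + URLEncoder.encode(query, "UTF-8");
        final HttpURLConnection conn =
                (HttpURLConnection) new URL(endpoint).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type",
                "application/x-www-form-urlencoded");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes("UTF-8"));
        }
        // Echo the SPARQL results document to stdout.
        try (InputStream in = conn.getInputStream()) {
            final byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) > 0) {
                System.out.write(buf, 0, n);
            }
        }
        System.out.flush();
    }
}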
From: <tho...@us...> - 2014-11-05 18:28:44

Revision: 8704
          http://sourceforge.net/p/bigdata/code/8704
Author:   thompsonbry
Date:     2014-11-05 18:28:32 +0000 (Wed, 05 Nov 2014)

Log Message:
-----------
Merging from git to SVN. Merge includes:
- #946 (Empty PROJECTION causes IllegalArgumentException)
- #1008 (remote service queries should put parameters in the request body when using POST)
- #1036 (Journal file growth reported with 1.3.3)

Modified Paths:
--------------
    branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/bop/solutions/ProjectionOp.java
    branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java
    branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java
    branches/BIGDATA_RELEASE_1_3_0/bigdata/src/resources/logging/log4j.properties
    branches/BIGDATA_RELEASE_1_3_0/bigdata-jini/src/test/com/bigdata/journal/jini/ha/AbstractHAJournalServerTestCase.java
    branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/task/AbstractApiTask.java
    branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/TestAll.java
    branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFContext.java
    branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServlet.java
    branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BlueprintsServlet.java
    branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/DeleteServlet.java
    branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/QueryServlet.java
    branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/UpdateServlet.java
    branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/client/RemoteRepository.java
    branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestAll.java
    branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestNanoSparqlServerWithProxyIndexManager.java
    branches/BIGDATA_RELEASE_1_3_0/build.xml

Added Paths:
-----------
    branches/BIGDATA_RELEASE_1_3_0/README.md
    branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/TestTicket946.java
    branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/ticket_946.rq
    branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/ticket_946.srx
    branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/ticket_946.trig
    branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestRWStoreTxBehaviors.java

Added: branches/BIGDATA_RELEASE_1_3_0/README.md
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/README.md	                        (rev 0)
+++ branches/BIGDATA_RELEASE_1_3_0/README.md	2014-11-05 18:28:32 UTC (rev 8704)
@@ -0,0 +1,7 @@
+## Welcome to Bigdata
+
+Please see the release notes in [bigdata/src/releases](bigdata/src/releases) for getting started links. This will point you to the installation instructions for the different deployment modes, the online documentation, the wiki, etc. It will also point you to resources for support, subscriptions, and licensing.
+
+Please also visit us at [bigdata.com](http://www.bigdata.com).
+
+

Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/bop/solutions/ProjectionOp.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/bop/solutions/ProjectionOp.java	2014-11-05 15:13:07 UTC (rev 8703)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/bop/solutions/ProjectionOp.java	2014-11-05 18:28:32 UTC (rev 8704)
@@ -123,8 +123,9 @@
 
         if (vars == null)
             throw new IllegalArgumentException();
 
-        if (vars.length == 0)
-            throw new IllegalArgumentException();
+        // @see #946 (Empty PROJECTION causes IllegalArgumentException)
+//        if (vars.length == 0)
+//            throw new IllegalArgumentException();
 
     }

Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java	2014-11-05 15:13:07 UTC (rev 8703)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java	2014-11-05 18:28:32 UTC (rev 8704)
@@ -3069,7 +3069,7 @@
     public long commit() {
 
         // Critical Section Check. @see #1021 (Add critical section protection to AbstractJournal.abort() and BigdataSailConnection.rollback())
-        if (abortRequired.get()) // FIXME Move this into commitNow() after tagging hot fix.(mark to maek sure this gets done).
+        if (abortRequired.get()) // FIXME Move this into commitNow() after tagging hot fix.
            throw new IllegalStateException("Commit cannot be called, a call to abort must be made before further updates");
 
        // The timestamp to be assigned to this commit point.

Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java	2014-11-05 15:13:07 UTC (rev 8703)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java	2014-11-05 18:28:32 UTC (rev 8704)
@@ -6345,6 +6345,25 @@
     }
 
     /**
+     * Debug ONLY method added to permit unit tests to be written that the
+     * native transaction counter is correctly decremented to zero. The
+     * returned value is ONLY valid while holding the {@link #m_allocationLock}.
+     * Therefore this method MAY NOT be used reliably outside of code that can
+     * guarantee that there are no concurrent committers on the {@link RWStore}.
+     *
+     * @see <a href="http://trac.bigdata.com/ticket/1036"> Journal file growth
+     *      reported with 1.3.3 </a>
+     */
+    public int getActiveTxCount() {
+        m_allocationWriteLock.lock();
+        try {
+            return m_activeTxCount;
+        } finally {
+            m_allocationWriteLock.unlock();
+        }
+    }
+
+    /**
      * Returns the slot size associated with this address
      */
     public int getAssociatedSlotSize(int addr) {

Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata/src/resources/logging/log4j.properties
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata/src/resources/logging/log4j.properties	2014-11-05 15:13:07 UTC (rev 8703)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata/src/resources/logging/log4j.properties	2014-11-05 18:28:32 UTC (rev 8704)
@@ -6,6 +6,7 @@
 log4j.rootCategory=WARN, dest2
 
 log4j.logger.com.bigdata=WARN
+#log4j.logger.com.bigdata.txLog=INFO
 log4j.logger.com.bigdata.btree=WARN
 log4j.logger.com.bigdata.counters.History=ERROR
 log4j.logger.com.bigdata.counters.XMLUtility$MyHandler=ERROR

Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-jini/src/test/com/bigdata/journal/jini/ha/AbstractHAJournalServerTestCase.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata-jini/src/test/com/bigdata/journal/jini/ha/AbstractHAJournalServerTestCase.java	2014-11-05 15:13:07 UTC (rev 8703)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata-jini/src/test/com/bigdata/journal/jini/ha/AbstractHAJournalServerTestCase.java	2014-11-05 18:28:32 UTC (rev 8704)
@@ -1146,6 +1146,7 @@
         for (HAGlue service : services) {
             final HAGlue haGlue = service;
             assertCondition(new Runnable() {
+                @Override
                 public void run() {
                     try {
                         assertEquals(expected, haGlue.getRootBlock(req)

Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/task/AbstractApiTask.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/task/AbstractApiTask.java	2014-11-05 15:13:07 UTC (rev 8703)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/task/AbstractApiTask.java	2014-11-05 18:28:32 UTC (rev 8704)
@@ -151,6 +151,9 @@
      * a read-only connection. When it is associated with the
     * {@link ITx#UNISOLATED} view or a read-write transaction, this will be a
     * mutable connection.
+     * <p>
+     * This version uses the namespace and timestamp associated with the HTTP
+     * request.
     *
     * @throws RepositoryException
     */
@@ -163,6 +166,22 @@
         * (unless the query explicitly overrides the timestamp of the view on
         * which it will operate).
         */
+        return getQueryConnection(namespace, timestamp);
+
+    }
+
+    /**
+     * This version uses the namespace and timestamp provided by the caller.
+
+     * @param namespace
+     * @param timestamp
+     * @return
+     * @throws RepositoryException
+     */
+    protected BigdataSailRepositoryConnection getQueryConnection(
+            final String namespace, final long timestamp)
+            throws RepositoryException {
+
        final AbstractTripleStore tripleStore = getTripleStore(namespace,
                timestamp);
 
@@ -198,13 +217,15 @@
     }
 
     /**
-     * Return an UNISOLATED connection.
-     *
-     * @return The UNISOLATED connection.
-     *
-     * @throws SailException
-     * @throws RepositoryException
-     */
+     * Return an UNISOLATED connection.
+     *
+     * @return The UNISOLATED connection.
+     *
+     * @throws SailException
+     * @throws RepositoryException
+     * @throws DatasetNotFoundException
+     *             if the specified namespace does not exist.
+     */
     protected BigdataSailRepositoryConnection getUnisolatedConnection()
             throws SailException, RepositoryException {
 
@@ -214,7 +235,8 @@
 
         if (tripleStore == null) {
 
-            throw new RuntimeException("Not found: namespace=" + namespace);
+            throw new DatasetNotFoundException("Not found: namespace="
+                    + namespace);
 
         }

Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/TestAll.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/TestAll.java	2014-11-05 15:13:07 UTC (rev 8703)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/TestAll.java	2014-11-05 18:28:32 UTC (rev 8704)
@@ -171,6 +171,9 @@
         // test suite for BIND + GRAPH ticket.
         suite.addTestSuite(TestBindGraph1007.class);
 
+        // test suite for a sub-select with an empty PROJECTION.
+        suite.addTestSuite(TestTicket946.class);
+
         /*
          * Runtime Query Optimizer (RTO).
          */

Added: branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/TestTicket946.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/TestTicket946.java	                        (rev 0)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/TestTicket946.java	2014-11-05 18:28:32 UTC (rev 8704)
@@ -0,0 +1,62 @@
+/**
+
+Copyright (C) SYSTAP, LLC 2013.  All rights reserved.
+
+Contact:
+     SYSTAP, LLC
+     4501 Tower Road
+     Greensboro, NC 27410
+     lic...@bi...
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; version 2 of the License.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+*/
+package com.bigdata.rdf.sparql.ast.eval;
+
+
+/**
+ * Test suite for an issue where an empty projection causes an
+ * {@link IllegalArgumentException}.
+ *
+ * @see <a href="http://trac.bigdata.com/ticket/946"> Empty PROJECTION causes
+ *      IllegalArgumentException</a>
+ */
+public class TestTicket946 extends AbstractDataDrivenSPARQLTestCase {
+
+    public TestTicket946() {
+    }
+
+    public TestTicket946(String name) {
+        super(name);
+    }
+
+    /**
+     * <pre>
+     * SELECT ?x
+     * { BIND(1 as ?x)
+     *   { SELECT * { FILTER (true) } }
+     * }
+     * </pre>
+     */
+    public void test_ticket_946_empty_projection() throws Exception {
+
+        new TestHelper(
+                "ticket_946", // testURI,
+                "ticket_946.rq",// queryFileURL
+                "ticket_946.trig",// dataFileURL
+                "ticket_946.srx"// resultFileURL
+                ).runTest();
+
+    }
+
+}

Added: branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/ticket_946.rq
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/ticket_946.rq	                        (rev 0)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/ticket_946.rq	2014-11-05 18:28:32 UTC (rev 8704)
@@ -0,0 +1,4 @@
+SELECT ?x
+{ BIND(1 as ?x)
+  { SELECT * { FILTER (true) } }
+}
\ No newline at end of file

Added: branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/ticket_946.srx
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/ticket_946.srx	                        (rev 0)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/ticket_946.srx	2014-11-05 18:28:32 UTC (rev 8704)
@@ -0,0 +1,16 @@
+<?xml version="1.0"?>
+<sparql
+    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+    xmlns:xs="http://www.w3.org/2001/XMLSchema#"
+    xmlns="http://www.w3.org/2005/sparql-results#" >
+  <head>
+    <variable name="?x"/>
+  </head>
+  <results>
+    <result>
+      <binding name="x">
+        <literal datatype="http://www.w3.org/2001/XMLSchema#integer">1</literal>
+      </binding>
+    </result>
+  </results>
+</sparql>
\ No newline at end of file

Added: branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/ticket_946.trig
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/ticket_946.trig	                        (rev 0)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/ticket_946.trig	2014-11-05 18:28:32 UTC (rev 8704)
@@ -0,0 +1 @@
+# No data is required.
\ No newline at end of file

Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFContext.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFContext.java	2014-11-05 15:13:07 UTC (rev 8703)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFContext.java	2014-11-05 18:28:32 UTC (rev 8704)
@@ -86,6 +86,7 @@
 import com.bigdata.rdf.changesets.IChangeLog;
 import com.bigdata.rdf.changesets.IChangeRecord;
 import com.bigdata.rdf.sail.BigdataSail;
+import com.bigdata.rdf.sail.BigdataSail.BigdataSailConnection;
 import com.bigdata.rdf.sail.BigdataSailBooleanQuery;
 import com.bigdata.rdf.sail.BigdataSailGraphQuery;
 import com.bigdata.rdf.sail.BigdataSailQuery;
@@ -502,6 +503,9 @@
      */
     public abstract class AbstractQueryTask implements Callable<Void> {
 
+        /** The connection used to isolate the query or update request.
*/ + private final BigdataSailRepositoryConnection cxn; + /** The namespace against which the query will be run. */ private final String namespace; @@ -691,37 +695,42 @@ return TimeUnit.NANOSECONDS.toMillis(elapsed); } - /** - * - * @param namespace - * The namespace against which the query will be run. - * @param timestamp - * The timestamp of the view for that namespace against which - * the query will be run. - * @param baseURI - * The base URI. - * @param astContainer - * The container with all the information about the submitted - * query, including the original SPARQL query, the parse - * tree, etc. - * @param queryType - * The {@link QueryType}. - * @param mimeType - * The MIME type to be used for the response. The caller must - * verify that the MIME Type is appropriate for the query - * type. - * @param charset - * The character encoding to use with the negotiated MIME - * type (this is <code>null</code> for binary encodings). - * @param fileExt - * The file extension (without the leading ".") to use with - * that MIME Type. - * @param req - * The request. - * @param os - * Where to write the data for the query result. - */ + /** + * Version for SPARQL QUERY. + * + * @param cxn + * The connection used to isolate the query or update + * request. + * @param namespace + * The namespace against which the query will be run. + * @param timestamp + * The timestamp of the view for that namespace against which + * the query will be run. + * @param baseURI + * The base URI. + * @param astContainer + * The container with all the information about the submitted + * query, including the original SPARQL query, the parse + * tree, etc. + * @param queryType + * The {@link QueryType}. + * @param mimeType + * The MIME type to be used for the response. The caller must + * verify that the MIME Type is appropriate for the query + * type. + * @param charset + * The character encoding to use with the negotiated MIME + * type (this is <code>null</code> for binary encodings). + * @param fileExt + * The file extension (without the leading ".") to use with + * that MIME Type. + * @param req + * The request. + * @param os + * Where to write the data for the query result. + */ protected AbstractQueryTask(// + final BigdataSailRepositoryConnection cxn,// final String namespace,// final long timestamp, // final String baseURI, @@ -735,6 +744,8 @@ final OutputStream os// ) { + if (cxn == null) + throw new IllegalArgumentException(); if (namespace == null) throw new IllegalArgumentException(); if (baseURI == null) @@ -754,6 +765,7 @@ if (os == null) throw new IllegalArgumentException(); + this.cxn = cxn; this.namespace = namespace; this.timestamp = timestamp; this.baseURI = baseURI; @@ -782,6 +794,7 @@ } /** + * Version for SPARQL UPDATE. * * @param namespace * The namespace against which the query will be run. @@ -802,6 +815,7 @@ * Where to write the data for the query result. 
*/ protected AbstractQueryTask(// + final BigdataSailRepositoryConnection cxn,// final String namespace,// final long timestamp, // final String baseURI, @@ -815,6 +829,8 @@ final OutputStream os// ) { + if (cxn == null) + throw new IllegalArgumentException(); if (namespace == null) throw new IllegalArgumentException(); if (baseURI == null) @@ -828,6 +844,7 @@ if (os == null) throw new IllegalArgumentException(); + this.cxn = cxn; this.namespace = namespace; this.timestamp = timestamp; this.baseURI = baseURI; @@ -1188,11 +1205,11 @@ @Override public Void call() throws Exception { - BigdataSailRepositoryConnection cxn = null; - boolean success = false; +// BigdataSailRepositoryConnection cxn = null; +// boolean success = false; try { // Note: Will be UPDATE connection if UPDATE request!!! - cxn = getQueryConnection();//namespace, timestamp); +// cxn = getQueryConnection();//namespace, timestamp); if(log.isTraceEnabled()) log.trace("Query running..."); beginNanos = System.nanoTime(); @@ -1216,10 +1233,10 @@ * those. */ doQuery(cxn, new NullOutputStream()); - success = true; +// success = true; } else { doQuery(cxn, os); - success = true; +// success = true; os.flush(); os.close(); } @@ -1237,55 +1254,55 @@ // log.error(t, t); // } // } - if (cxn != null) { - if (!success && !cxn.isReadOnly()) { - /* - * Force rollback of the connection. - * - * Note: It is possible that the commit has already been - * processed, in which case this rollback() will be a - * NOP. This can happen when there is an IO error when - * communicating with the client, but the database has - * already gone through a commit. - */ - try { - // Force rollback of the connection. - cxn.rollback(); - } catch (Throwable t) { - log.error(t, t); - } - } - try { - // Force close of the connection. - cxn.close(); - } catch (Throwable t) { - log.error(t, t); - } - } +// if (cxn != null) { +// if (!success && !cxn.isReadOnly()) { +// /* +// * Force rollback of the connection. +// * +// * Note: It is possible that the commit has already been +// * processed, in which case this rollback() will be a +// * NOP. This can happen when there is an IO error when +// * communicating with the client, but the database has +// * already gone through a commit. +// */ +// try { +// // Force rollback of the connection. +// cxn.rollback(); +// } catch (Throwable t) { +// log.error(t, t); +// } +// } +// try { +// // Force close of the connection. +// cxn.close(); +// } catch (Throwable t) { +// log.error(t, t); +// } +// } } } - } + } // class SparqlRestApiTask @Override final public Void call() throws Exception { - final String queryOrUpdateStr = astContainer.getQueryString(); +// final String queryOrUpdateStr = astContainer.getQueryString(); - try { +// try { return AbstractApiTask.submitApiTask(getIndexManager(), new SparqlRestApiTask(req, resp, namespace, timestamp)) .get(); - } catch (Throwable t) { +// } catch (Throwable t) { +// +// // FIXME GROUP_COMMIT: check calling stack for existing launderThrowable. +// throw BigdataRDFServlet.launderThrowable(t, resp, +// queryOrUpdateStr); +// +// } - // FIXME GROUP_COMMIT: check calling stack for existing launderThrowable. 
- throw BigdataRDFServlet.launderThrowable(t, resp, - queryOrUpdateStr); - - } - } // call() } // class AbstractQueryTask @@ -1295,14 +1312,15 @@ */ private class AskQueryTask extends AbstractQueryTask { - public AskQueryTask(final String namespace, final long timestamp, + public AskQueryTask(final BigdataSailRepositoryConnection cxn, + final String namespace, final long timestamp, final String baseURI, final ASTContainer astContainer, final QueryType queryType, final BooleanQueryResultFormat format, final HttpServletRequest req, final HttpServletResponse resp, final OutputStream os) { - super(namespace, timestamp, baseURI, astContainer, queryType, + super(cxn, namespace, timestamp, baseURI, astContainer, queryType, format.getDefaultMIMEType(), format.getCharset(), format .getDefaultFileExtension(), req, resp, os); @@ -1334,14 +1352,15 @@ */ private class TupleQueryTask extends AbstractQueryTask { - public TupleQueryTask(final String namespace, final long timestamp, + public TupleQueryTask(final BigdataSailRepositoryConnection cxn, + final String namespace, final long timestamp, final String baseURI, final ASTContainer astContainer, final QueryType queryType, final String mimeType, final Charset charset, final String fileExt, final HttpServletRequest req, final HttpServletResponse resp, final OutputStream os) { - super(namespace, timestamp, baseURI, astContainer, queryType, + super(cxn, namespace, timestamp, baseURI, astContainer, queryType, mimeType, charset, fileExt, req, resp, os); } @@ -1419,13 +1438,14 @@ */ private class GraphQueryTask extends AbstractQueryTask { - public GraphQueryTask(final String namespace, final long timestamp, + public GraphQueryTask(final BigdataSailRepositoryConnection cxn, + final String namespace, final long timestamp, final String baseURI, final ASTContainer astContainer, final QueryType queryType, final RDFFormat format, final HttpServletRequest req, final HttpServletResponse resp, final OutputStream os) { - super(namespace, timestamp, baseURI, astContainer, queryType, + super(cxn, namespace, timestamp, baseURI, astContainer, queryType, format.getDefaultMIMEType(), format.getCharset(), format .getDefaultFileExtension(), req, resp, os); @@ -1437,28 +1457,6 @@ final BigdataSailGraphQuery query = (BigdataSailGraphQuery) setupQuery(cxn); - /* - * FIXME An error thrown here (such as if format is null and we do - * not check it) will cause the response to hang, at least for the - * test suite. Look into this further and make the error handling - * bullet proof! - * - * This may be related to queryId2. That should be imposed on the - * IRunningQuery via QueryHints.QUERYID such that the QueryEngine - * assigns that UUID to the query. We can then correlate the queryId - * to the IRunningQuery, which is important for some of the status - * pages. This will also let us INTERRUPT the IRunningQuery if there - * is an error during evaluation, which might be necessary. For - * example, if the client dies while the query is running. Look at - * the old NSS code and see what it was doing and whether this was - * logic was lost of simply never implemented. - * - * However, I do not see how that would explain the failure of the - * ft.get() method to return. - */ -// if(true) -// throw new RuntimeException(); - // Note: getQueryTask() verifies that format will be non-null. 
final RDFFormat format = RDFWriterRegistry.getInstance() .getFileFormatForMIMEType(mimeType); @@ -1472,6 +1470,16 @@ } + UpdateTask getUpdateTask(final BigdataSailRepositoryConnection cxn, + final String namespace, final long timestamp, final String baseURI, + final ASTContainer astContainer, final HttpServletRequest req, + final HttpServletResponse resp, final OutputStream os) { + + return new UpdateTask(cxn, namespace, timestamp, baseURI, astContainer, + req, resp, os); + + } + /** * Executes a SPARQL UPDATE. */ @@ -1483,16 +1491,13 @@ */ public final AtomicLong commitTime = new AtomicLong(-1); - public UpdateTask(final String namespace, final long timestamp, + public UpdateTask(final BigdataSailRepositoryConnection cxn, + final String namespace, final long timestamp, final String baseURI, final ASTContainer astContainer, final HttpServletRequest req, final HttpServletResponse resp, final OutputStream os) { - super(namespace, timestamp, baseURI, astContainer, -// null,//queryType -// null,//format.getDefaultMIMEType() -// null,//format.getCharset(), -// null,//format.getDefaultFileExtension(), + super(cxn, namespace, timestamp, baseURI, astContainer, req,// resp,// os// @@ -1926,46 +1931,44 @@ } /** - * Return the task which will execute the SPARQL Query -or- SPARQL UPDATE. - * <p> - * Note: The {@link OutputStream} is passed in rather than the - * {@link HttpServletResponse} in order to permit operations such as - * "DELETE WITH QUERY" where this method is used in a context which writes - * onto an internal pipe rather than onto the {@link HttpServletResponse}. - * - * @param namespace - * The namespace associated with the {@link AbstractTripleStore} - * view. - * @param timestamp - * The timestamp associated with the {@link AbstractTripleStore} - * view. - * @param queryStr - * The query. - * @param acceptOverride - * Override the Accept header (optional). This is used by UPDATE - * and DELETE so they can control the {@link RDFFormat} of the - * materialized query results. - * @param req - * The request. - * @param os - * Where to write the results. - * @param update - * <code>true</code> iff this is a SPARQL UPDATE request. - * - * @return The task -or- <code>null</code> if the named data set was not - * found. When <code>null</code> is returned, the - * {@link HttpServletResponse} will also have been committed. - * @throws IOException - */ + * Return the task which will execute the SPARQL Query -or- SPARQL UPDATE. + * <p> + * Note: The {@link OutputStream} is passed in rather than the + * {@link HttpServletResponse} in order to permit operations such as + * "DELETE WITH QUERY" where this method is used in a context which writes + * onto an internal pipe rather than onto the {@link HttpServletResponse}. + * + * @param namespace + * The namespace associated with the {@link AbstractTripleStore} + * view. + * @param timestamp + * The timestamp associated with the {@link AbstractTripleStore} + * view. + * @param queryStr + * The query. + * @param acceptOverride + * Override the Accept header (optional). This is used by UPDATE + * and DELETE so they can control the {@link RDFFormat} of the + * materialized query results. + * @param req + * The request. + * @param os + * Where to write the results. + * + * @return The task. 
+ * + * @throws IOException + */ public AbstractQueryTask getQueryTask(// + final BigdataSailRepositoryConnection cxn,// final String namespace,// final long timestamp,// final String queryStr,// final String acceptOverride,// final HttpServletRequest req,// final HttpServletResponse resp,// - final OutputStream os,// - final boolean update// + final OutputStream os// +// final boolean update// ) throws MalformedQueryException, IOException { /* @@ -1973,38 +1976,6 @@ */ final String baseURI = req.getRequestURL().toString(); - final AbstractTripleStore tripleStore = getTripleStore(namespace, - timestamp); - - if (tripleStore == null) { - /* - * There is no such triple/quad store instance. - */ - BigdataServlet.buildResponse(resp, BigdataServlet.HTTP_NOTFOUND, - BigdataServlet.MIME_TEXT_PLAIN); - return null; - } - - if (update) { - - /* - * Parse the query so we can figure out how it will need to be executed. - * - * Note: This goes through some pains to make sure that we parse the - * query exactly once in order to minimize the resources associated with - * the query parser. - */ - final ASTContainer astContainer = new Bigdata2ASTSPARQLParser( - tripleStore).parseUpdate2(queryStr, baseURI); - - if (log.isDebugEnabled()) - log.debug(astContainer.toString()); - - return new UpdateTask(namespace, timestamp, baseURI, astContainer, - req, resp, os); - - } - /* * Parse the query so we can figure out how it will need to be executed. * @@ -2012,6 +1983,7 @@ * query exactly once in order to minimize the resources associated with * the query parser. */ + final AbstractTripleStore tripleStore = cxn.getTripleStore(); final ASTContainer astContainer = new Bigdata2ASTSPARQLParser( tripleStore).parseQuery2(queryStr, baseURI); @@ -2065,7 +2037,7 @@ case CONSTRUCT: /* Generate RDF/XML so we can apply XSLT transform. * - * FIXME This should be sending back RDFs or using a lens. + * TODO This should be sending back RDFs or using a lens. */ acceptStr = RDFFormat.RDFXML.getDefaultMIMEType(); break; @@ -2086,7 +2058,7 @@ final BooleanQueryResultFormat format = util .getBooleanQueryResultFormat(BooleanQueryResultFormat.SPARQL); - return new AskQueryTask(namespace, timestamp, baseURI, + return new AskQueryTask(cxn, namespace, timestamp, baseURI, astContainer, queryType, format, req, resp, os); } @@ -2095,7 +2067,7 @@ final RDFFormat format = util.getRDFFormat(RDFFormat.RDFXML); - return new GraphQueryTask(namespace, timestamp, baseURI, + return new GraphQueryTask(cxn, namespace, timestamp, baseURI, astContainer, queryType, format, req, resp, os); } @@ -2120,7 +2092,7 @@ charset = format.getCharset(); fileExt = format.getDefaultFileExtension(); } - return new TupleQueryTask(namespace, timestamp, baseURI, + return new TupleQueryTask(cxn, namespace, timestamp, baseURI, astContainer, queryType, mimeType, charset, fileExt, req, resp, os); @@ -2423,20 +2395,27 @@ } /** - * Obtain a new transaction to protect operations against the specified view - * of the database. - * - * @param timestamp - * The timestamp for the desired view. - * - * @return The transaction identifier -or- <code>timestamp</code> if the - * {@link IIndexManager} is not a {@link Journal}. - * - * @see ITransactionService#newTx(long) - * - * @see <a href="http://trac.bigdata.com/ticket/867"> NSS concurrency - * problem with list namespaces and create namespace </a> - */ + * Obtain a new transaction to protect operations against the specified view + * of the database. 
This uses the transaction mechanisms to prevent + * recycling during operations NOT OTHERWISE PROTECTED by a + * {@link BigdataSailConnection} for what would otherwise amount to dirty + * reads. This is especially critical for reads on the global row store + * since it can not be protected by the {@link BigdataSailConnection} for + * cases where the KB instance does not yet exist. The presence of such a tx + * does NOT prevent concurrent commits. It only prevents recycling during + * such commits (and even then only on the RWStore backend). + * + * @param timestamp + * The timestamp for the desired view. + * + * @return The transaction identifier -or- <code>timestamp</code> if the + * {@link IIndexManager} is not a {@link Journal}. + * + * @see ITransactionService#newTx(long) + * + * @see <a href="http://trac.bigdata.com/ticket/867"> NSS concurrency + * problem with list namespaces and create namespace </a> + */ public long newTx(final long timestamp) { long tx = timestamp; // use dirty reads unless Journal. @@ -2459,7 +2438,10 @@ } /** - * Abort a transaction obtained by {@link #newTx(long)}. + * Abort a transaction obtained by {@link #newTx(long)}. This decements the + * native active transaction counter for the RWStore. Once that counter + * reaches zero, recycling will occur the next time an unisolated mutation + * goes through a commit on the journal. * * @param tx * The transaction identifier. @@ -2482,6 +2464,15 @@ } + /* + * + */ +// /** +// * Commit a transaction obtained by {@link #newTx(long)} +// * +// * @param tx +// * The transaction identifier. +// */ // public void commitTx(final long tx) { // // if (getIndexManager() instanceof Journal) { Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServlet.java =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServlet.java 2014-11-05 15:13:07 UTC (rev 8703) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServlet.java 2014-11-05 18:28:32 UTC (rev 8704) @@ -47,6 +47,7 @@ import org.openrdf.model.Resource; import org.openrdf.model.Statement; import org.openrdf.model.impl.URIImpl; +import org.openrdf.query.MalformedQueryException; import org.openrdf.rio.RDFFormat; import org.openrdf.rio.RDFHandlerException; import org.openrdf.rio.RDFWriter; @@ -185,18 +186,35 @@ } if (resp != null) { if (!resp.isCommitted()) { - if (InnerCause.isInnerCause(t, - ConstraintViolationException.class)) { - /* - * A constraint violation is a bad request (the data - * violates the rules) not a server error. - */ - resp.setStatus(HTTP_BADREQUEST); + if (InnerCause.isInnerCause(t, DatasetNotFoundException.class)) { + /* + * The addressed KB does not exist. + */ + resp.setStatus(HttpServletResponse.SC_NOT_FOUND); + resp.setContentType(MIME_TEXT_PLAIN); + } else if (InnerCause.isInnerCause(t, + ConstraintViolationException.class)) { + /* + * A constraint violation is a bad request (the data + * violates the rules) not a server error. + */ + resp.setStatus(HttpServletResponse.SC_BAD_REQUEST); + resp.setContentType(MIME_TEXT_PLAIN); + } else if (InnerCause.isInnerCause(t, + MalformedQueryException.class)) { + /* + * Send back a BAD REQUEST (400) along with the text of the + * syntax error message. + * + * TODO Write unit test for 400 response for bad client + * request. 
+ */ + resp.setStatus(HttpServletResponse.SC_BAD_REQUEST); + resp.setContentType(MIME_TEXT_PLAIN); + } else { + // Internal server error. + resp.setStatus(HttpServletResponse.SC_INTERNAL_SERVER_ERROR); resp.setContentType(MIME_TEXT_PLAIN); - } else { - // Internal server error. - resp.setStatus(HTTP_INTERNALERROR); - resp.setContentType(MIME_TEXT_PLAIN); } } OutputStream os = null; @@ -337,12 +355,12 @@ /** * Factory for the {@link PipedInputStream}. */ - protected PipedInputStream newPipedInputStream(final PipedOutputStream os) - throws IOException { + final static protected PipedInputStream newPipedInputStream( + final PipedOutputStream os) throws IOException { - return new PipedInputStream(os); + return new PipedInputStream(os); - } + } /** * Report a mutation count and elapsed time back to the user agent. Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BlueprintsServlet.java =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BlueprintsServlet.java 2014-11-05 15:13:07 UTC (rev 8703) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BlueprintsServlet.java 2014-11-05 18:28:32 UTC (rev 8704) @@ -68,14 +68,14 @@ protected void doPost(final HttpServletRequest req, final HttpServletResponse resp) throws IOException { - if (getBigdataRDFContext().getTripleStore(getNamespace(req), - getTimestamp(req)) == null) { - /* - * There is no such triple/quad store instance. - */ - buildResponse(resp, HTTP_NOTFOUND, MIME_TEXT_PLAIN); - return; - } +// if (getBigdataRDFContext().getTripleStore(getNamespace(req), +// getTimestamp(req)) == null) { +// /* +// * There is no such triple/quad store instance. +// */ +// buildResponse(resp, HTTP_NOTFOUND, MIME_TEXT_PLAIN); +// return; +// } final String contentType = req.getContentType(); Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/DeleteServlet.java =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/DeleteServlet.java 2014-11-05 15:13:07 UTC (rev 8703) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/DeleteServlet.java 2014-11-05 18:28:32 UTC (rev 8704) @@ -37,6 +37,7 @@ import org.openrdf.model.Statement; import org.openrdf.model.URI; import org.openrdf.model.Value; +import org.openrdf.query.MalformedQueryException; import org.openrdf.rio.RDFFormat; import org.openrdf.rio.RDFHandlerException; import org.openrdf.rio.RDFParser; @@ -113,126 +114,199 @@ * operation would be broken by group commit since other tasks could have * updated the KB since the lastCommitTime and been checkpointed and hence * be visible to an unisolated operation without there being an intervening - * commit point. + * commit point. [I think that this is resolved by taking the unisolated + * connection first and then taking the read-only lastCommitTime connection + * view, which is what the code now does.] 
*/ private void doDeleteWithQuery(final HttpServletRequest req, final HttpServletResponse resp) throws IOException { + + final String baseURI = req.getRequestURL().toString(); - final long begin = System.currentTimeMillis(); + final String namespace = getNamespace(req); + + final String queryStr = req.getParameter("query"); + + if (queryStr == null) + throw new UnsupportedOperationException(); + + if (log.isInfoEnabled()) + log.info("delete with query: " + queryStr); + + try { + + submitApiTask( + new DeleteWithQueryTask(req, resp, namespace, + ITx.UNISOLATED, // + queryStr,// + baseURI// + )).get(); + + } catch (Throwable t) { + + launderThrowable(t, resp, "UPDATE-WITH-QUERY" + ": queryStr=" + + queryStr + ", baseURI=" + baseURI); + + } + + } + + private static class DeleteWithQueryTask extends AbstractRestApiTask<Void> { + + private final String queryStr; + private final String baseURI; + + /** + * + * @param namespace + * The namespace of the target KB instance. + * @param timestamp + * The timestamp used to obtain a mutable connection. + * @param baseURI + * The base URI for the operation. + */ + public DeleteWithQueryTask(final HttpServletRequest req, + final HttpServletResponse resp, + final String namespace, final long timestamp, + final String queryStr,// + final String baseURI + ) { + super(req, resp, namespace, timestamp); + this.queryStr = queryStr; + this.baseURI = baseURI; + } - final String baseURI = req.getRequestURL().toString(); + @Override + public boolean isReadOnly() { + return false; + } - final String namespace = getNamespace(req); + @Override + public Void call() throws Exception { - final String queryStr = req.getParameter("query"); + final long begin = System.currentTimeMillis(); + + final AtomicLong nmodified = new AtomicLong(0L); - if (queryStr == null) - throw new UnsupportedOperationException(); + BigdataSailRepositoryConnection conn = null; + boolean success = false; + try { - if (log.isInfoEnabled()) - log.info("delete with query: " + queryStr); + conn = getUnisolatedConnection(); - try { + { - /* - * Note: pipe is drained by this thread to consume the query - * results, which are the statements to be deleted. - */ - final PipedOutputStream os = new PipedOutputStream(); - final InputStream is = newPipedInputStream(os); + if (log.isInfoEnabled()) + log.info("delete with query: " + queryStr); - // Use this format for the query results. - final RDFFormat format = RDFFormat.NTRIPLES; - - final AbstractQueryTask queryTask = getBigdataRDFContext() - .getQueryTask(namespace, ITx.READ_COMMITTED, queryStr, - format.getDefaultMIMEType(), - req, resp, os, false/*update*/); + final BigdataRDFContext context = BigdataServlet + .getBigdataRDFContext(req.getServletContext()); - if(queryTask == null) { - // KB not found. Response already committed. - return; - } + /* + * Note: pipe is drained by this thread to consume the query + * results, which are the statements to be deleted. + */ + final PipedOutputStream os = new PipedOutputStream(); - switch (queryTask.queryType) { - case DESCRIBE: - case CONSTRUCT: - break; - default: - buildResponse(resp, HTTP_BADREQUEST, MIME_TEXT_PLAIN, - "Must be DESCRIBE or CONSTRUCT query."); - return; - } + // The read-only connection for the query. 
+ BigdataSailRepositoryConnection roconn = null; + try { - final AtomicLong nmodified = new AtomicLong(0L); + final long readOnlyTimestamp = ITx.READ_COMMITTED; - BigdataSailRepositoryConnection conn = null; - boolean success = false; - try { + roconn = getQueryConnection(namespace, + readOnlyTimestamp); - conn = getBigdataRDFContext().getUnisolatedConnection( - namespace); + // Use this format for the query results. + final RDFFormat format = RDFFormat.NTRIPLES; - final RDFParserFactory factory = RDFParserRegistry - .getInstance().get(format); + final AbstractQueryTask queryTask = context + .getQueryTask(roconn, namespace, + readOnlyTimestamp, queryStr, + format.getDefaultMIMEType(), req, resp, + os); - final RDFParser rdfParser = factory.getParser(); + switch (queryTask.queryType) { + case DESCRIBE: + case CONSTRUCT: + break; + default: + throw new MalformedQueryException( + "Must be DESCRIBE or CONSTRUCT query"); + } - rdfParser.setValueFactory(conn.getTripleStore() - .getValueFactory()); + final RDFParserFactory factory = RDFParserRegistry + .getInstance().get(format); - rdfParser.setVerifyData(false); + final RDFParser rdfParser = factory.getParser(); - rdfParser.setStopAtFirstError(true); + rdfParser.setValueFactory(conn.getTripleStore() + .getValueFactory()); - rdfParser - .setDatatypeHandling(RDFParser.DatatypeHandling.IGNORE); + rdfParser.setVerifyData(false); - rdfParser.setRDFHandler(new RemoveStatementHandler(conn - .getSailConnection(), nmodified)); + rdfParser.setStopAtFirstError(true); - // Wrap as Future. - final FutureTask<Void> ft = new FutureTask<Void>(queryTask); - - // Submit query for evaluation. - getBigdataRDFContext().queryService.execute(ft); - - // Run parser : visited statements will be deleted. - rdfParser.parse(is, baseURI); + rdfParser + .setDatatypeHandling(RDFParser.DatatypeHandling.IGNORE); - // Await the Future (of the Query) - ft.get(); - - // Commit the mutation. - conn.commit(); + rdfParser.setRDFHandler(new RemoveStatementHandler(conn + .getSailConnection(), nmodified)); - success = true; - - final long elapsed = System.currentTimeMillis() - begin; - - reportModifiedCount(resp, nmodified.get(), elapsed); + // Wrap as Future. + final FutureTask<Void> ft = new FutureTask<Void>( + queryTask); - } finally { + // Submit query for evaluation. + context.queryService.execute(ft); - if (conn != null) { + // Reads on the statements produced by the query. + final InputStream is = newPipedInputStream(os); - if (!success) - conn.rollback(); + // Run parser : visited statements will be deleted. + rdfParser.parse(is, baseURI); - conn.close(); + // Await the Future (of the Query) + ft.get(); - } + } finally { - } + if (roconn != null) { + // close the read-only connection for the query. 
+ roconn.rollback(); + } - } catch (Throwable t) { + } - throw BigdataRDFServlet.launderThrowable(t, resp, queryStr); + } - } + conn.commit(); - } + success = true; + final long elapsed = System.currentTimeMillis() - begin; + + reportModifiedCount(nmodified.get(), elapsed); + + return null; + + } finally { + + if (conn != null) { + + if (!success) + conn.rollback(); + + conn.close(); + + } + + } + + } + + } // class DeleteWithQueryTask + @Override protected void doPost(final HttpServletRequest req, final HttpServletResponse resp) throws IOException { Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/QueryServlet.java =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/QueryServlet.java 2014-11-05 15:13:07 UTC (rev 8703) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/QueryServlet.java 2014-11-05 18:28:32 UTC (rev 8704) @@ -48,7 +48,6 @@ import org.openrdf.model.URI; import org.openrdf.model.Value; import org.openrdf.model.impl.GraphImpl; -import org.openrdf.query.MalformedQueryException; import org.openrdf.repository.RepositoryResult; import com.bigdata.bop.BOpUtility; @@ -65,6 +64,7 @@ import com.bigdata.mdi.PartitionLocator; import com.bigdata.rdf.sail.BigdataSailQuery; import com.bigdata.rdf.sail.BigdataSailRepositoryConnection; +import com.bigdata.rdf.sail.sparql.Bigdata2ASTSPARQLParser; import com.bigdata.rdf.sail.sparql.ast.SimpleNode; import com.bigdata.rdf.sail.webapp.BigdataRDFContext.AbstractQueryTask; import com.bigdata.rdf.sail.webapp.BigdataRDFContext.RunningQuery; @@ -327,10 +327,6 @@ return; } - final String namespace = getNamespace(req); - - final long timestamp = ITx.UNISOLATED;//getTimestamp(req); - // The SPARQL update final String updateStr = getUpdateString(req); @@ -343,84 +339,149 @@ } - /* - * Setup task to execute the request. The task is executed on a thread - * pool. This bounds the possible concurrency of query execution (as - * opposed to queries accepted for eventual execution). + try { + + final String namespace = getNamespace(req); + + final long timestamp = ITx.UNISOLATED;//getTimestamp(req); + + submitApiTask( + new SparqlUpdateTask(req, resp, namespace, timestamp, + updateStr, getBigdataRDFContext() // + )).get(); + + } catch (Throwable t) { + + launderThrowable(t, resp, "SPARQL-UPDATE: updateStr=" + updateStr); + + } + + } + + private static class SparqlUpdateTask extends AbstractRestApiTask<Void> { + + private final String updateStr; + private final BigdataRDFContext context; + + /** * - * Note: If the client closes the connection, then the response's - * InputStream will be closed and the task will terminate rather than - * running on in the background with a disconnected client. @see #1026 (SPARQL UPDATE with runtime errors causes problems with lexicon indices) + * @param namespace + * The namespace of the target KB instance. + * @param timestamp + * The timestamp used to obtain a mutable connection. 
*/ - final long tx = getBigdataRDFContext().newTx(timestamp); - boolean ok = false; - try { + public SparqlUpdateTask(// + final HttpServletRequest req,// + final HttpServletResponse resp,// + final String namespace, // + final long timestamp,// + final String updateStr,// + final BigdataRDFContext context// + ) { + super(req, resp, namespace, timestamp); + this.updateStr = updateStr; + this.context = context; + } + + @Override + final public boolean isReadOnly() { + return false; + } - final BigdataRDFContext context = getBigdataRDFContext(); + @Override + public Void call() throws Exception { - final UpdateTask updateTask; - try { + BigdataSailRepositoryConnection conn = null; + boolean success = false; + try { - /* - * Attempt to construct a task which we can use to evaluate the - * query. - */ - - updateTask = (UpdateTask) context.getQueryTask(namespace, - timestamp, updateStr, null/* acceptOverride */, req, - resp, resp.getOutputStream(), true/* update */); - - if (updateTask == null) { - // KB not found. Response already committed. - return; - } - - } catch (MalformedQueryException ex) { - /* - * Send back a BAD REQUEST (400) along with the text of the - * syntax error message. - */ - resp.sendError(HttpServletResponse.SC_BAD_REQUEST, - ex.getLocalizedMessage()); - return; - } + conn = getUnisolatedConnection(); - final FutureTask<Void> ft = new FutureTask<Void>(updateTask); + { - if (log.isTraceEnabled()) - log.trace("Will run update: " + updateStr); + /* + * Setup the baseURI for this request. It will be set to the + * requestURI. + */ + final String baseURI = req.getRequestURL().toString(); - updateTask.updateFuture = ft; - - /* - * Begin executing the query (asynchronous). - * - * Note: UPDATEs currently contend with QUERYs against the same - * thread pool. - */ - getBigdataRDFContext().queryService.execute(ft); + final AbstractTriple... [truncated message content] |
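The refactoring above moves each REST operation into a task object: the servlet no longer opens or closes connections itself, the task declares whether it is read-only, and the framework hands it a connection and can serialize mutation tasks behind a group commit. A minimal, self-contained sketch of that shape, using illustrative stand-ins rather than the actual bigdata classes:

    // Simplified model of the pattern in r8704. All names here are
    // illustrative; only the shape (isReadOnly() + call() + rollback on
    // failure) is taken from the diff above.
    import java.util.concurrent.Callable;

    abstract class RestApiTaskSketch<T> implements Callable<T> {
        /** Mutation tasks answer false so they can be grouped for commit. */
        abstract boolean isReadOnly();
    }

    final class UpdateTaskSketch extends RestApiTaskSketch<Void> {

        private final String updateStr;

        UpdateTaskSketch(final String updateStr) {
            this.updateStr = updateStr;
        }

        @Override
        boolean isReadOnly() {
            return false; // SPARQL UPDATE mutates the KB.
        }

        @Override
        public Void call() throws Exception {
            boolean success = false;
            try {
                // ... execute updateStr on the connection supplied by the
                // framework (not modeled here) ...
                success = true;
                return null;
            } finally {
                if (!success) {
                    // roll back here, mirroring the discipline in
                    // SparqlUpdateTask so a failed update cannot leave a
                    // dirty write set behind.
                }
            }
        }
    }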
From: <tho...@us...> - 2014-11-05 15:13:10
Revision: 8703 http://sourceforge.net/p/bigdata/code/8703 Author: thompsonbry Date: 2014-11-05 15:13:07 +0000 (Wed, 05 Nov 2014) Log Message: ----------- adding target to svnignore prior to merge down from github to obtain a clean local copy of svn. Property Changed: ---------------- branches/BIGDATA_RELEASE_1_3_0/ Index: branches/BIGDATA_RELEASE_1_3_0 =================================================================== --- branches/BIGDATA_RELEASE_1_3_0 2014-10-28 22:26:39 UTC (rev 8702) +++ branches/BIGDATA_RELEASE_1_3_0 2014-11-05 15:13:07 UTC (rev 8703) Property changes on: branches/BIGDATA_RELEASE_1_3_0 ___________________________________________________________________ Modified: svn:ignore ## -29,3 +29,4 ## bsbm10-dataset.nt.gz bsbm10-dataset.nt.zip benchmark* +target This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
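The "## -29,3 +29,4 ##" hunk above is svn's property-diff syntax. A change like this is typically made with svn's property commands; a plausible reconstruction, since the invocation itself is not part of the commit:

    # Append "target" to the directory's svn:ignore list in $EDITOR,
    # then commit only the property change on the branch root.
    svn propedit svn:ignore .
    svn commit --depth empty -m "adding target to svnignore" .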
From: <tho...@us...> - 2014-10-28 22:26:41
Revision: 8702 http://sourceforge.net/p/bigdata/code/8702 Author: thompsonbry Date: 2014-10-28 22:26:39 +0000 (Tue, 28 Oct 2014) Log Message: ----------- enabling snapshot builds for CI. Modified Paths: -------------- branches/BIGDATA_RELEASE_1_3_0/build.properties Modified: branches/BIGDATA_RELEASE_1_3_0/build.properties =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/build.properties 2014-10-28 22:23:47 UTC (rev 8701) +++ branches/BIGDATA_RELEASE_1_3_0/build.properties 2014-10-28 22:26:39 UTC (rev 8702) @@ -95,14 +95,14 @@ # Set true to do a snapshot build. This changes the value of ${version} to # include the date. -snapshot=false +snapshot=true # Javadoc build may be disabled using this property. The javadoc target will # not be executed unless this property is defined (its value does not matter). # Note: The javadoc goes quite if you have enough memory, but can take forever # and then runs out of memory if the JVM is starved for RAM. The heap for the # javadoc JVM is explicitly set in the javadoc target in the build.xml file. -javadoc= +#javadoc= # packaging property set (rpm, deb). package.release=1 This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
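The snapshot and javadoc properties toggled here act as the CI/release switch: a date-stamped version string for CI builds, a fixed version plus javadoc for releases. They can also be overridden per invocation instead of editing the file; a sketch, with <target> standing in for the real target defined in build.xml:

    # Hypothetical one-off invocations; <target> is a placeholder.
    ant -Dsnapshot=true <target>               # CI build: date-stamped version
    ant -Dsnapshot=false -Djavadoc= <target>   # release build: fixed version, javadoc on

Per the comment quoted in the diff, merely defining the javadoc property is enough to enable the javadoc target, so an empty -Djavadoc= suffices.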
From: <tho...@us...> - 2014-10-28 22:23:57
Revision: 8701 http://sourceforge.net/p/bigdata/code/8701 Author: thompsonbry Date: 2014-10-28 22:23:47 +0000 (Tue, 28 Oct 2014) Log Message: ----------- bumping version in POM to begin snapshot CI builds for 1.3.3. Modified Paths: -------------- branches/BIGDATA_RELEASE_1_3_0/pom.xml Modified: branches/BIGDATA_RELEASE_1_3_0/pom.xml =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/pom.xml 2014-10-28 21:50:35 UTC (rev 8700) +++ branches/BIGDATA_RELEASE_1_3_0/pom.xml 2014-10-28 22:23:47 UTC (rev 8701) @@ -52,7 +52,7 @@ <modelVersion>4.0.0</modelVersion> <groupId>com.bigdata</groupId> <artifactId>bigdata</artifactId> - <version>1.3.2-SNAPSHOT</version> + <version>1.3.3-SNAPSHOT</version> <packaging>pom</packaging> <name>bigdata(R)</name> <description>Bigdata(R) Maven Build</description> This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
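A version bump like this one is often scripted rather than hand-edited. One common way to do it, though not necessarily how it was done here, is the Maven versions plugin:

    # Rewrite the version across the multi-module build, inspect the result,
    # then remove the pom.xml.versionsBackup files the plugin leaves behind.
    mvn versions:set -DnewVersion=1.3.3-SNAPSHOT
    mvn versions:commit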
From: <tho...@us...> - 2014-10-28 21:50:43
Revision: 8700 http://sourceforge.net/p/bigdata/code/8700 Author: thompsonbry Date: 2014-10-28 21:50:35 +0000 (Tue, 28 Oct 2014) Log Message: ----------- 1.3.3 tag Added Paths: ----------- tags/BIGDATA_RELEASE_1_3_3/ This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
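Tags in Subversion are cheap server-side copies. The command behind a revision like this one presumably looked as follows; the URLs are reconstructed from the checkout instructions quoted later in this archive, and the exact invocation is not recorded:

    svn copy -m "1.3.3 tag" \
      https://svn.code.sf.net/p/bigdata/code/branches/BIGDATA_RELEASE_1_3_0 \
      https://svn.code.sf.net/p/bigdata/code/tags/BIGDATA_RELEASE_1_3_3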
From: <tho...@us...> - 2014-10-28 21:49:36
Revision: 8699 http://sourceforge.net/p/bigdata/code/8699 Author: thompsonbry Date: 2014-10-28 21:49:33 +0000 (Tue, 28 Oct 2014) Log Message: ----------- Bumping version for 1.3.3 release. Modified Paths: -------------- branches/BIGDATA_RELEASE_1_3_0/build.properties Modified: branches/BIGDATA_RELEASE_1_3_0/build.properties =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/build.properties 2014-10-28 17:41:09 UTC (rev 8698) +++ branches/BIGDATA_RELEASE_1_3_0/build.properties 2014-10-28 21:49:33 UTC (rev 8699) @@ -90,19 +90,19 @@ release.dir=ant-release # The build version (note: 0.82b -> 0.82.0); 0.83.2 is followed by 1.0.0 -build.ver=1.3.2 +build.ver=1.3.3 build.ver.osgi=1.0 # Set true to do a snapshot build. This changes the value of ${version} to # include the date. -snapshot=true +snapshot=false # Javadoc build may be disabled using this property. The javadoc target will # not be executed unless this property is defined (its value does not matter). # Note: The javadoc goes quite if you have enough memory, but can take forever # and then runs out of memory if the JVM is starved for RAM. The heap for the # javadoc JVM is explicitly set in the javadoc target in the build.xml file. -#javadoc= +javadoc= # packaging property set (rpm, deb). package.release=1 This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
From: <tho...@us...> - 2014-10-28 17:41:12
Revision: 8698 http://sourceforge.net/p/bigdata/code/8698 Author: thompsonbry Date: 2014-10-28 17:41:09 +0000 (Tue, 28 Oct 2014) Log Message: ----------- whitespace fix Modified Paths: -------------- branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/internal/impl/literal/XSDBooleanIV.java Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/internal/impl/literal/XSDBooleanIV.java =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/internal/impl/literal/XSDBooleanIV.java 2014-10-28 17:38:32 UTC (rev 8697) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/internal/impl/literal/XSDBooleanIV.java 2014-10-28 17:41:09 UTC (rev 8698) @@ -143,5 +143,4 @@ } - } \ No newline at end of file This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
From: <tho...@us...> - 2014-10-28 17:38:35
Revision: 8697 http://sourceforge.net/p/bigdata/code/8697 Author: thompsonbry Date: 2014-10-28 17:38:32 +0000 (Tue, 28 Oct 2014) Log Message: ----------- Pushing fix for storage stats to SVN for 1.3.3 release. Modified Paths: -------------- branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/rwstore/StorageStats.java Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java 2014-10-28 17:38:01 UTC (rev 8696) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java 2014-10-28 17:38:32 UTC (rev 8697) @@ -3010,22 +3010,9 @@ /* * Reset any storage stats - * FIXME: Change StorageStats internals to be able to efficiently commit/reset and avoid disk read */ if (m_storageStatsAddr != 0) { - final long statsAddr = m_storageStatsAddr >> 16; - final int statsLen = ((int) m_storageStatsAddr) & 0xFFFF; - final byte[] stats = new byte[statsLen + 4]; // allow for checksum - getData(statsAddr, stats); - final DataInputStream instr = new DataInputStream(new ByteArrayInputStream(stats)); - try { - m_storageStats = new StorageStats(instr); - for (FixedAllocator fa: m_allocs) { - m_storageStats.register(fa); - } - } catch (IOException e) { - throw new RuntimeException("Unable to reset storage stats", e); - } + m_storageStats.reset(); } else { m_storageStats = new StorageStats(m_allocSizes); } @@ -3184,7 +3171,7 @@ RWStore.this.m_storageStatsAddr = m_storageStatsAddr; RWStore.this.m_committedNextAllocation = m_lastCommittedNextAllocation; RWStore.this.m_metaBitsAddr = m_metaBitsAddr; - } + } } @@ -3439,6 +3426,10 @@ } + if (m_storageStats != null) { + m_storageStats.commit(); + } + m_commitList.clear(); } Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/rwstore/StorageStats.java =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/rwstore/StorageStats.java 2014-10-28 17:38:01 UTC (rev 8696) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/rwstore/StorageStats.java 2014-10-28 17:38:32 UTC (rev 8697) @@ -77,6 +77,9 @@ long m_deletes; long m_deleteSize; + // By copying committed data the stats can be reset on abort + BlobBucket m_committed = null; + public BlobBucket(final int size) { m_size = size; } @@ -86,7 +89,10 @@ m_allocations = instr.readLong(); m_deleteSize = instr.readLong(); m_deletes = instr.readLong(); + + commit(); } + public void write(DataOutputStream outstr) throws IOException { outstr.writeInt(m_size); outstr.writeLong(m_allocationSize); @@ -94,6 +100,30 @@ outstr.writeLong(m_deleteSize); outstr.writeLong(m_deletes); } + public void commit() { + if (m_committed == null) { + m_committed = new BlobBucket(m_size); + } + m_committed.m_allocationSize = m_allocationSize; + m_committed.m_allocations = m_allocations; + m_committed.m_deleteSize = m_deleteSize; + m_committed.m_deletes = m_deletes; + } + + public void reset() { + if (m_committed != null) { + m_allocationSize = m_committed.m_allocationSize; + m_allocations = m_committed.m_allocations; + m_deleteSize = m_committed.m_deleteSize; + m_deletes = m_committed.m_deletes; + } else { + m_allocationSize = 0; + m_allocations = 0; + m_deleteSize = 0; + m_deletes = 0; + } + } + public void delete(int sze) { m_deleteSize += sze; m_deletes++; @@ 
-131,6 +161,9 @@ long m_sizeAllocations; long m_sizeDeletes; + // By copying committed data the stats can be reset on abort + Bucket m_committed = null; + public Bucket(final int size, final int startRange) { m_size = size; m_start = startRange; @@ -144,6 +177,8 @@ m_totalSlots = instr.readLong(); m_sizeAllocations = instr.readLong(); m_sizeDeletes = instr.readLong(); + + commit(); } public void write(DataOutputStream outstr) throws IOException { outstr.writeInt(m_size); @@ -155,6 +190,37 @@ outstr.writeLong(m_sizeAllocations); outstr.writeLong(m_sizeDeletes); } + + public void commit() { + if (m_committed == null) { + m_committed = new Bucket(m_size, m_start); + } + m_committed.m_allocators = m_allocators; + m_committed.m_slotAllocations = m_slotAllocations; + m_committed.m_slotDeletes = m_slotDeletes; + m_committed.m_totalSlots = m_totalSlots; + m_committed.m_sizeAllocations = m_sizeAllocations; + m_committed.m_sizeDeletes = m_sizeDeletes; + } + + public void reset() { + if (m_committed != null) { + m_allocators = m_committed.m_allocators; + m_slotAllocations = m_committed.m_slotAllocations; + m_slotDeletes = m_committed.m_slotDeletes; + m_totalSlots = m_committed.m_totalSlots; + m_sizeAllocations = m_committed.m_sizeAllocations; + m_sizeDeletes = m_committed.m_sizeDeletes; + } else { + m_allocators = 0; + m_slotAllocations = 0; + m_slotDeletes = 0; + m_totalSlots =0; + m_sizeAllocations = 0; + m_sizeDeletes = 0; + } + } + public void delete(int sze) { if (sze < 0) throw new IllegalArgumentException("delete requires positive size, got: " + sze); @@ -532,7 +598,23 @@ } public void register(SectorAllocator allocator, boolean init) { - // TODO Auto-generated method stub - + throw new UnsupportedOperationException(); } + + public void commit() { + for (Bucket b: m_buckets) { + b.commit(); + } + for (BlobBucket b: m_blobBuckets) { + b.commit(); + } + } + public void reset() { + for (Bucket b: m_buckets) { + b.reset(); + } + for (BlobBucket b: m_blobBuckets) { + b.reset(); + } + } } This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
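The fix above replaces the old behavior, re-reading the stats record from disk on reset (the removed FIXME block in RWStore), with an in-memory snapshot held by each bucket: commit() is driven from the RWStore commit path and reset() from the abort path. A distilled sketch of that commit/reset idiom, with the patch's Bucket/BlobBucket counters collapsed into one illustrative class:

    // Distilled from the Bucket/BlobBucket change above: keep an in-memory
    // copy of the counters as of the last commit so that an abort can roll
    // the live counters back without touching the disk.
    class CountersSketch {

        long allocations, allocationSize, deletes, deleteSize;

        private CountersSketch committed; // snapshot at last commit, or null

        void commit() { // invoked when the store commits
            if (committed == null)
                committed = new CountersSketch();
            committed.allocations = allocations;
            committed.allocationSize = allocationSize;
            committed.deletes = deletes;
            committed.deleteSize = deleteSize;
        }

        void reset() { // invoked when the store aborts
            if (committed == null) {
                // Never committed: roll everything back to zero.
                allocations = allocationSize = deletes = deleteSize = 0;
                return;
            }
            allocations = committed.allocations;
            allocationSize = committed.allocationSize;
            deletes = committed.deletes;
            deleteSize = committed.deleteSize;
        }
    }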
From: <tho...@us...> - 2014-10-28 17:38:13
Revision: 8696 http://sourceforge.net/p/bigdata/code/8696 Author: thompsonbry Date: 2014-10-28 17:38:01 +0000 (Tue, 28 Oct 2014) Log Message: ----------- updated release notes for 1.3.3 to reflect #1028 closed. Modified Paths: -------------- branches/BIGDATA_RELEASE_1_3_0/bigdata/src/releases/RELEASE_1_3_3.txt Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata/src/releases/RELEASE_1_3_3.txt =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata/src/releases/RELEASE_1_3_3.txt 2014-10-28 17:35:25 UTC (rev 8695) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata/src/releases/RELEASE_1_3_3.txt 2014-10-28 17:38:01 UTC (rev 8696) @@ -65,6 +65,7 @@ - http://trac.bigdata.com/ticket/1021 (Add critical section protection to AbstractJournal.abort() and BigdataSailConnection.rollback()) - http://trac.bigdata.com/ticket/1024 (GregorianCalendar? does weird things before 1582) - http://trac.bigdata.com/ticket/1026 (SPARQL UPDATE with runtime errors causes problems with lexicon indices) +- http://trac.bigdata.com/ticket/1028 (very rare NotMaterializedException: XSDBoolean(true)) - http://trac.bigdata.com/ticket/1029 (RWStore commit state not correctly rolled back if abort fails on empty journal) - http://trac.bigdata.com/ticket/1030 (RWStorage stats cleanup) This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
From: <tho...@us...> - 2014-10-28 17:35:33
Revision: 8695 http://sourceforge.net/p/bigdata/code/8695 Author: thompsonbry Date: 2014-10-28 17:35:25 +0000 (Tue, 28 Oct 2014) Log Message: ----------- fix for #1028 (xsd:boolean materialization) also made AbstractIV.needsMaterialization() final. Modified Paths: -------------- branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/bop/rdf/join/ChunkedMaterializationOp.java branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/internal/impl/literal/AbstractLiteralIV.java branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/internal/impl/literal/XSDBooleanIV.java branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/internal/impl/uri/IPv4AddrIV.java Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/bop/rdf/join/ChunkedMaterializationOp.java =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/bop/rdf/join/ChunkedMaterializationOp.java 2014-10-28 13:39:00 UTC (rev 8694) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/bop/rdf/join/ChunkedMaterializationOp.java 2014-10-28 17:35:25 UTC (rev 8695) @@ -502,21 +502,9 @@ } final BigdataValue value = terms.get(iv); - // FIXME Temporarily rolled back for 1.3.3 release. See #1028 (xsd:boolean materialization issue) - if (value == null && iv.needsMaterialization()) { + + conditionallySetIVCache(iv,value); - throw new RuntimeException("Could not resolve: iv=" + iv); - - } - - /* - * Replace the binding. - * - * FIXME This probably needs to strip out the - * BigdataSail#NULL_GRAPH since that should not become bound. - */ - ((IV) iv).setValue(value); - } } else { @@ -546,23 +534,53 @@ final BigdataValue value = terms.get(iv); - if (value == null && iv.needsMaterialization()) { + conditionallySetIVCache(iv, value); - throw new RuntimeException("Could not resolve: iv=" + iv); + } - } + } - /* - * Replace the binding. - * - * FIXME This probably needs to strip out the - * BigdataSail#NULL_GRAPH since that should not become bound. - */ - ((IV) iv).setValue(value); + } - } - } + /** + * If the {@link BigdataValue} is non-null, then set it on the + * {@link IVCache} interface. + * + * @param iv + * The {@link IV} + * @param value + * The {@link BigdataValue} for that {@link IV} (from the + * dictionary). + * + * @throws RuntimeException + * If the {@link BigdataValue} is null (could not be discovered + * in the dictionary) and the {@link IV} requires + * materialization ({@link IV#needsMaterialization() is + * <code>true</code>). + * + * @see #1028 (xsd:boolean materialization issue) + */ + private static void conditionallySetIVCache(IV<?, ?> iv, BigdataValue value) { - } + if (value == null) { + if (iv.needsMaterialization()) { + // Not found in dictionary. This is an error. + throw new RuntimeException("Could not resolve: iv=" + iv); + + } // else NOP - Value is not required. + + } else { + + /* + * Value was found in the dictionary, so replace the binding. + * + * FIXME This probably needs to strip out the BigdataSail#NULL_GRAPH + * since that should not become bound. 
+ */ + ((IV) iv).setValue(value); + } + + } + } Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/internal/impl/literal/AbstractLiteralIV.java =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/internal/impl/literal/AbstractLiteralIV.java 2014-10-28 13:39:00 UTC (rev 8694) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/internal/impl/literal/AbstractLiteralIV.java 2014-10-28 17:35:25 UTC (rev 8695) @@ -71,7 +71,8 @@ * Implement {@link IV#needsMaterialization()}. Materialization not required * to answer the {@link Literal} interface methods. */ - public boolean needsMaterialization() { + @Override + final public boolean needsMaterialization() { return false; } Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/internal/impl/literal/XSDBooleanIV.java =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/internal/impl/literal/XSDBooleanIV.java 2014-10-28 13:39:00 UTC (rev 8694) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/internal/impl/literal/XSDBooleanIV.java 2014-10-28 17:35:25 UTC (rev 8695) @@ -52,6 +52,7 @@ private final boolean value; + @Override public IV<V, Boolean> clone(final boolean clearCache) { final XSDBooleanIV<V> tmp = new XSDBooleanIV<V>(value); @@ -74,12 +75,14 @@ } + @Override final public Boolean getInlineValue() { return value ? Boolean.TRUE : Boolean.FALSE; } + @Override @SuppressWarnings("unchecked") public V asValue(final LexiconRelation lex) { @@ -121,10 +124,12 @@ * * @see Boolean#hashCode() */ + @Override public int hashCode() { return value ? Boolean.TRUE.hashCode() : Boolean.FALSE.hashCode(); } + @Override public int byteLength() { return 1 + 1; } Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/internal/impl/uri/IPv4AddrIV.java =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/internal/impl/uri/IPv4AddrIV.java 2014-10-28 13:39:00 UTC (rev 8694) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/internal/impl/uri/IPv4AddrIV.java 2014-10-28 17:35:25 UTC (rev 8695) @@ -367,12 +367,12 @@ // // } - /** - * Does not need materialization to answer URI interface methods. - */ - @Override - public boolean needsMaterialization() { - return false; - } +// /** +// * Does not need materialization to answer URI interface methods. +// */ +// @Override +// public boolean needsMaterialization() { +// return false; +// } } \ No newline at end of file This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
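The heart of the #1028 fix is visible in the new conditionallySetIVCache() helper: a dictionary miss is only an error when the IV actually needs materialization, and inline values such as xsd:boolean answer the Literal API without a dictionary entry. A reduced model; the real IV/IVCache interfaces are much richer:

    // Reduced model of conditionallySetIVCache() from the patch above.
    // Inline IVs (e.g. XSDBooleanIV) report needsMaterialization() == false,
    // so a null dictionary hit for them is tolerated rather than thrown.
    interface IVSketch {
        boolean needsMaterialization();
        void setValue(Object value); // cache the materialized RDF Value
    }

    final class MaterializerSketch {
        static void conditionallySetIVCache(final IVSketch iv, final Object value) {
            if (value == null) {
                if (iv.needsMaterialization())
                    throw new RuntimeException("Could not resolve: iv=" + iv);
                return; // NOP: the value is not required.
            }
            iv.setValue(value); // found in the dictionary: set the cache.
        }
    }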
From: Fabio R. <fab...@se...> - 2014-10-28 16:28:45
|
Dear bigdata community,

I would like to evaluate bigdata for my activities and possibly become a reseller. Do you think it is possible to load several graphs into one bigdata instance, each one coming from a separate RDF file? If yes, how? Or if not, shall I run n instances of bigdata, one for each graph?

Thanks a lot for your feedback.

--
Kind regards
Fabio Ricci
semweb · Semantic Web Technologies · Records Management Software systems · ICT coaching · ICT Projects leading
www.semweb.ch <http://semweb.ch>
Weinmanngasse 26, CH-8700 Küsnacht ZH (Switzerland)
Tel. +41 (076) 5281961 / +39 (389) 0681334
Skype: semweb-llc <http://myskype.info/semweb-llc>

Confidentiality Warning: This message and any attachments are intended only for the use of the intended recipients, are confidential and may be privileged. If you are not the intended recipient, you are hereby notified that any review, retransmission, conversion to hard copy, copying, circulation or other use of this message and any attachments is strictly prohibited. If you are not the intended recipient, please notify the sender immediately by return email, and delete this message and any attachments from your system. Thank you. |
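One possible approach, sketched here rather than taken from a reply on the thread: a single bigdata instance in quads mode can hold many named graphs, and each file can be loaded into its own graph (context) through the standard Sesame RepositoryConnection API that BigdataSailRepository implements. The file and graph names below are hypothetical, the baseURI is arbitrary, and obtaining the quads-mode repository is omitted; this targets the Sesame 2.6-era API used by bigdata 1.3.x.

    import java.io.File;

    import org.openrdf.model.URI;
    import org.openrdf.repository.Repository;
    import org.openrdf.repository.RepositoryConnection;
    import org.openrdf.rio.RDFFormat;

    public class LoadNamedGraphs {

        /**
         * Load each file into its own named graph (context) of a single
         * quads-capable repository, e.g. a BigdataSailRepository.
         */
        public static void loadEach(final Repository repo, final File... files)
                throws Exception {
            final RepositoryConnection cxn = repo.getConnection();
            try {
                cxn.setAutoCommit(false); // batch the loads into one commit
                for (File f : files) {
                    // One graph URI per file; deriving it from the file name
                    // is just a convention for this sketch, any URI will do.
                    final URI graph = cxn.getValueFactory().createURI(
                            "http://example.org/graph/" + f.getName());
                    cxn.add(f, "http://example.org/" /* baseURI */,
                            RDFFormat.RDFXML, graph);
                }
                cxn.commit();
            } finally {
                cxn.close();
            }
        }
    }

Each file's triples are then addressable separately in SPARQL via GRAPH <http://example.org/graph/...>, so one instance per graph should not be necessary. |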
From: <tho...@us...> - 2014-10-28 13:39:03
|
Revision: 8694 http://sourceforge.net/p/bigdata/code/8694 Author: thompsonbry Date: 2014-10-28 13:39:00 +0000 (Tue, 28 Oct 2014) Log Message: ----------- minor release notes edit Modified Paths: -------------- branches/BIGDATA_RELEASE_1_3_0/bigdata/src/releases/RELEASE_1_3_3.txt Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata/src/releases/RELEASE_1_3_3.txt =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata/src/releases/RELEASE_1_3_3.txt 2014-10-28 13:37:09 UTC (rev 8693) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata/src/releases/RELEASE_1_3_3.txt 2014-10-28 13:39:00 UTC (rev 8694) @@ -20,7 +20,7 @@ Critical or otherwise of note in this minor release: -- #1021 Add critical section protection to AbstractJournal?.abort() and BigdataSailConnection?.rollback(). +- #1021 Add critical section protection to AbstractJournal.abort() and BigdataSailConnection.rollback(). - #1026 SPARQL UPDATE with runtime errors causes problems with lexicon indices. New features in 1.3.x: This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
From: <tho...@us...> - 2014-10-28 13:37:13
|
Revision: 8693 http://sourceforge.net/p/bigdata/code/8693 Author: thompsonbry Date: 2014-10-28 13:37:09 +0000 (Tue, 28 Oct 2014) Log Message: ----------- Adding release notes for 1.3.3 Added Paths: ----------- branches/BIGDATA_RELEASE_1_3_0/bigdata/src/releases/RELEASE_1_3_3.txt Added: branches/BIGDATA_RELEASE_1_3_0/bigdata/src/releases/RELEASE_1_3_3.txt =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata/src/releases/RELEASE_1_3_3.txt (rev 0) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata/src/releases/RELEASE_1_3_3.txt 2014-10-28 13:37:09 UTC (rev 8693) @@ -0,0 +1,542 @@ +This is a critical fix release of bigdata(R). All users are encouraged to upgrade immediately. + +Bigdata is a horizontally-scaled, open-source architecture for indexed data with an emphasis on RDF capable of loading 1B triples in under one hour on a 15 node cluster. Bigdata operates in both a single machine mode (Journal), highly available replication cluster mode (HAJournalServer), and a horizontally sharded cluster mode (BigdataFederation). The Journal provides fast scalable ACID indexed storage for very large data sets, up to 50 billion triples / quads. The HAJournalServer adds replication, online backup, horizontal scaling of query, and high availability. The federation provides fast scalable shard-wise parallel indexed storage using dynamic sharding and shard-wise ACID updates and incremental cluster size growth. Both platforms support fully concurrent readers with snapshot isolation. + +Distributed processing offers greater throughput but does not reduce query or update latency. Choose the Journal when the anticipated scale and throughput requirements permit. Choose the HAJournalServer for high availability and linear scaling in query throughput. Choose the BigdataFederation when the administrative and machine overhead associated with operating a cluster is an acceptable tradeoff to have essentially unlimited data scaling and throughput. + +See [1,2,8] for instructions on installing bigdata(R), [4] for the javadoc, and [3,5,6] for news, questions, and the latest developments. For more information about SYSTAP, LLC and bigdata, see [7]. + +Starting with the 1.0.0 release, we offer a WAR artifact [8] for easy installation of the single machine RDF database. For custom development and cluster installations we recommend checking out the code from SVN using the tag for this release. The code will build automatically under eclipse. You can also build the code using the ant script. The cluster installer requires the use of the ant script. + +Starting with the 1.3.0 release, we offer a tarball artifact [10] for easy installation of the HA replication cluster. + +You can download the WAR (standalone) or HA artifacts from: + +http://sourceforge.net/projects/bigdata/ + +You can checkout this release from: + +https://svn.code.sf.net/p/bigdata/code/tags/BIGDATA_RELEASE_1_3_3 + +Critical or otherwise of note in this minor release: + +- #1021 Add critical section protection to AbstractJournal?.abort() and BigdataSailConnection?.rollback(). +- #1026 SPARQL UPDATE with runtime errors causes problems with lexicon indices. + +New features in 1.3.x: + +- Java 7 is now required. +- High availability [10]. +- High availability load balancer. +- New RDF/SPARQL workbench. +- Blueprints API. +- RDF Graph Mining Service (GASService) [12]. +- Reification Done Right (RDR) support [11]. +- Property Path performance enhancements. 
+- Plus numerous other bug fixes and performance enhancements. + +Feature summary: + +- Highly Available Replication Clusters (HAJournalServer [10]) +- Single machine data storage to ~50B triples/quads (RWStore); +- Clustered data storage is essentially unlimited (BigdataFederation); +- Simple embedded and/or webapp deployment (NanoSparqlServer); +- Triples, quads, or triples with provenance (SIDs); +- Fast RDFS+ inference and truth maintenance; +- Fast 100% native SPARQL 1.1 evaluation; +- Integrated "analytic" query package; +- %100 Java memory manager leverages the JVM native heap (no GC); + +Road map [3]: + +- Column-wise indexing; +- Runtime Query Optimizer for quads; +- Performance optimization for scale-out clusters; and +- Simplified deployment, configuration, and administration for scale-out clusters. + +Change log: + + Note: Versions with (*) MAY require data migration. For details, see [9]. + +1.3.3: + +- http://trac.bigdata.com/ticket/980 (Object position of query hint is not a Literal (partial resolution - see #1028 as well)) +- http://trac.bigdata.com/ticket/1018 (Add the ability to track and cancel all queries issued through a BigdataSailRemoteRepositoryConnection) +- http://trac.bigdata.com/ticket/1021 (Add critical section protection to AbstractJournal.abort() and BigdataSailConnection.rollback()) +- http://trac.bigdata.com/ticket/1024 (GregorianCalendar? does weird things before 1582) +- http://trac.bigdata.com/ticket/1026 (SPARQL UPDATE with runtime errors causes problems with lexicon indices) +- http://trac.bigdata.com/ticket/1029 (RWStore commit state not correctly rolled back if abort fails on empty journal) +- http://trac.bigdata.com/ticket/1030 (RWStorage stats cleanup) + +1.3.2: + +- http://trac.bigdata.com/ticket/1016 (Jetty/LBS issues when deployed as WAR under tomcat) +- http://trac.bigdata.com/ticket/1010 (Upgrade apache http components to 1.3.1 (security)) +- http://trac.bigdata.com/ticket/1005 (Invalidate BTree objects if error occurs during eviction) +- http://trac.bigdata.com/ticket/1004 (Concurrent binding problem) +- http://trac.bigdata.com/ticket/1002 (Concurrency issues in JVMHashJoinUtility caused by MAX_PARALLEL query hint override) +- http://trac.bigdata.com/ticket/1000 (Add configuration option to turn off bottom-up evaluation) +- http://trac.bigdata.com/ticket/999 (Extend BigdataSailFactory to take arbitrary properties) +- http://trac.bigdata.com/ticket/998 (SPARQL Update through BigdataGraph) +- http://trac.bigdata.com/ticket/996 (Add custom prefix support for query results) +- http://trac.bigdata.com/ticket/995 (Allow general purpose SPARQL queries through BigdataGraph) +- http://trac.bigdata.com/ticket/992 (Deadlock between AbstractRunningQuery.cancel(), QueryLog.log(), and ArbitraryLengthPathTask) +- http://trac.bigdata.com/ticket/990 (Query hints not recognized in FILTERs) +- http://trac.bigdata.com/ticket/989 (Stored query service) +- http://trac.bigdata.com/ticket/988 (Bad performance for FILTER EXISTS) +- http://trac.bigdata.com/ticket/987 (maven build is broken) +- http://trac.bigdata.com/ticket/986 (Improve locality for small allocation slots) +- http://trac.bigdata.com/ticket/985 (Deadlock in BigdataTriplePatternMaterializer) +- http://trac.bigdata.com/ticket/975 (HA Health Status Page) +- http://trac.bigdata.com/ticket/974 (Name2Addr.indexNameScan(prefix) uses scan + filter) +- http://trac.bigdata.com/ticket/973 (RWStore.commit() should be more defensive) +- http://trac.bigdata.com/ticket/971 (Clarify HTTP Status codes for CREATE 
NAMESPACE operation) +- http://trac.bigdata.com/ticket/968 (no link to wiki from workbench) +- http://trac.bigdata.com/ticket/966 (Failed to get namespace under concurrent update) +- http://trac.bigdata.com/ticket/965 (Can not run LBS mode with HA1 setup) +- http://trac.bigdata.com/ticket/961 (Clone/modify namespace to create a new one) +- http://trac.bigdata.com/ticket/960 (Export namespace properties in XML/Java properties text format) +- http://trac.bigdata.com/ticket/938 (HA Load Balancer) +- http://trac.bigdata.com/ticket/936 (Support larger metabits allocations) +- http://trac.bigdata.com/ticket/932 (Bigdata/Rexster integration) +- http://trac.bigdata.com/ticket/919 (Formatted Layout for Status pages) +- http://trac.bigdata.com/ticket/899 (REST API Query Cancellation) +- http://trac.bigdata.com/ticket/885 (Panels do not appear on startup in Firefox) +- http://trac.bigdata.com/ticket/884 (Executing a new query should clear the old query results from the console) +- http://trac.bigdata.com/ticket/882 (Abbreviate URIs that can be namespaced with one of the defined common namespaces) +- http://trac.bigdata.com/ticket/880 (Can't explore an absolute URI with < >) +- http://trac.bigdata.com/ticket/878 (Explore page looks weird when empty) +- http://trac.bigdata.com/ticket/873 (Allow user to go use browser back & forward buttons to view explore history) +- http://trac.bigdata.com/ticket/865 (OutOfMemoryError instead of Timeout for SPARQL Property Paths) +- http://trac.bigdata.com/ticket/858 (Change explore URLs to include URI being clicked so user can see what they've clicked on before) +- http://trac.bigdata.com/ticket/855 (AssertionError: Child does not have persistent identity) +- http://trac.bigdata.com/ticket/850 (Search functionality in workbench) +- http://trac.bigdata.com/ticket/847 (Query results panel should recognize well known namespaces for easier reading) +- http://trac.bigdata.com/ticket/845 (Display the properties for a namespace) +- http://trac.bigdata.com/ticket/843 (Create new tabs for status & performance counters, and add per namespace service/VoID description links) +- http://trac.bigdata.com/ticket/837 (Configurator for new namespaces) +- http://trac.bigdata.com/ticket/836 (Allow user to create namespace in the workbench) +- http://trac.bigdata.com/ticket/830 (Output RDF data from queries in table format) +- http://trac.bigdata.com/ticket/829 (Export query results) +- http://trac.bigdata.com/ticket/828 (Save selected namespace in browser) +- http://trac.bigdata.com/ticket/827 (Explore tab in workbench) +- http://trac.bigdata.com/ticket/826 (Create shortcut to execute load/query) +- http://trac.bigdata.com/ticket/823 (Disable textarea when a large file is selected) +- http://trac.bigdata.com/ticket/820 (Allow non-file:// URLs to be loaded) +- http://trac.bigdata.com/ticket/819 (Retrieve default namespace on page load) +- http://trac.bigdata.com/ticket/772 (Query timeout only checked at operator start/stop) +- http://trac.bigdata.com/ticket/765 (order by expr skips invalid expressions) +- http://trac.bigdata.com/ticket/587 (JSP page to configure KBs) +- http://trac.bigdata.com/ticket/343 (Stochastic assert in AbstractBTree#writeNodeOrLeaf() in CI) + +1.3.1: + +- http://trac.bigdata.com/ticket/242 (Deadlines do not play well with GROUP_BY, ORDER_BY, etc.) +- http://trac.bigdata.com/ticket/256 (Amortize RTO cost) +- http://trac.bigdata.com/ticket/257 (Support BOP fragments in the RTO.) 
+- http://trac.bigdata.com/ticket/258 (Integrate RTO into SAIL) +- http://trac.bigdata.com/ticket/259 (Dynamically increase RTO sampling limit.) +- http://trac.bigdata.com/ticket/526 (Reification done right) +- http://trac.bigdata.com/ticket/580 (Problem with the bigdata RDF/XML parser with sids) +- http://trac.bigdata.com/ticket/622 (NSS using jetty+windows can lose connections (windows only; jdk 6/7 bug)) +- http://trac.bigdata.com/ticket/624 (HA Load Balancer) +- http://trac.bigdata.com/ticket/629 (Graph processing API) +- http://trac.bigdata.com/ticket/721 (Support HA1 configurations) +- http://trac.bigdata.com/ticket/730 (Allow configuration of embedded NSS jetty server using jetty-web.xml) +- http://trac.bigdata.com/ticket/759 (multiple filters interfere) +- http://trac.bigdata.com/ticket/763 (Stochastic results with Analytic Query Mode) +- http://trac.bigdata.com/ticket/774 (Converge on Java 7.) +- http://trac.bigdata.com/ticket/779 (Resynchronization of socket level write replication protocol (HA)) +- http://trac.bigdata.com/ticket/780 (Incremental or asynchronous purge of HALog files) +- http://trac.bigdata.com/ticket/782 (Wrong serialization version) +- http://trac.bigdata.com/ticket/784 (Describe Limit/offset don't work as expected) +- http://trac.bigdata.com/ticket/787 (Update documentations and samples, they are OUTDATED) +- http://trac.bigdata.com/ticket/788 (Name2Addr does not report all root causes if the commit fails.) +- http://trac.bigdata.com/ticket/789 (ant task to build sesame fails, docs for setting up bigdata for sesame are ancient) +- http://trac.bigdata.com/ticket/790 (should not be pruning any children) +- http://trac.bigdata.com/ticket/791 (Clean up query hints) +- http://trac.bigdata.com/ticket/793 (Explain reports incorrect value for opCount) +- http://trac.bigdata.com/ticket/796 (Filter assigned to sub-query by query generator is dropped from evaluation) +- http://trac.bigdata.com/ticket/797 (add sbt setup to getting started wiki) +- http://trac.bigdata.com/ticket/798 (Solution order not always preserved) +- http://trac.bigdata.com/ticket/799 (mis-optimation of quad pattern vs triple pattern) +- http://trac.bigdata.com/ticket/802 (Optimize DatatypeFactory instantiation in DateTimeExtension) +- http://trac.bigdata.com/ticket/803 (prefixMatch does not work in full text search) +- http://trac.bigdata.com/ticket/804 (update bug deleting quads) +- http://trac.bigdata.com/ticket/806 (Incorrect AST generated for OPTIONAL { SELECT }) +- http://trac.bigdata.com/ticket/808 (Wildcard search in bigdata for type suggessions) +- http://trac.bigdata.com/ticket/810 (Expose GAS API as SPARQL SERVICE) +- http://trac.bigdata.com/ticket/815 (RDR query does too much work) +- http://trac.bigdata.com/ticket/816 (Wildcard projection ignores variables inside a SERVICE call.) +- http://trac.bigdata.com/ticket/817 (Unexplained increase in journal size) +- http://trac.bigdata.com/ticket/821 (Reject large files, rather then storing them in a hidden variable) +- http://trac.bigdata.com/ticket/831 (UNION with filter issue) +- http://trac.bigdata.com/ticket/841 (Using "VALUES" in a query returns lexical error) +- http://trac.bigdata.com/ticket/848 (Fix SPARQL Results JSON writer to write the RDR syntax) +- http://trac.bigdata.com/ticket/849 (Create writers that support the RDR syntax) +- http://trac.bigdata.com/ticket/851 (RDR GAS interface) +- http://trac.bigdata.com/ticket/852 (RemoteRepository.cancel() does not consume the HTTP response entity.) 
+- http://trac.bigdata.com/ticket/853 (Follower does not accept POST of idempotent operations (HA)) +- http://trac.bigdata.com/ticket/854 (Allow override of maximum length before converting an HTTP GET to an HTTP POST) +- http://trac.bigdata.com/ticket/855 (AssertionError: Child does not have persistent identity) +- http://trac.bigdata.com/ticket/862 (Create parser for JSON SPARQL Results) +- http://trac.bigdata.com/ticket/863 (HA1 commit failure) +- http://trac.bigdata.com/ticket/866 (Batch remove API for the SAIL) +- http://trac.bigdata.com/ticket/867 (NSS concurrency problem with list namespaces and create namespace) +- http://trac.bigdata.com/ticket/869 (HA5 test suite) +- http://trac.bigdata.com/ticket/872 (Full text index range count optimization) +- http://trac.bigdata.com/ticket/874 (FILTER not applied when there is UNION in the same join group) +- http://trac.bigdata.com/ticket/876 (When I upload a file I want to see the filename.) +- http://trac.bigdata.com/ticket/877 (RDF Format selector is invisible) +- http://trac.bigdata.com/ticket/883 (CANCEL Query fails on non-default kb namespace on HA follower.) +- http://trac.bigdata.com/ticket/886 (Provide workaround for bad reverse DNS setups.) +- http://trac.bigdata.com/ticket/887 (BIND is leaving a variable unbound) +- http://trac.bigdata.com/ticket/892 (HAJournalServer does not die if zookeeper is not running) +- http://trac.bigdata.com/ticket/893 (large sparql insert optimization slow?) +- http://trac.bigdata.com/ticket/894 (unnecessary synchronization) +- http://trac.bigdata.com/ticket/895 (stack overflow in populateStatsMap) +- http://trac.bigdata.com/ticket/902 (Update Basic Bigdata Chef Cookbook) +- http://trac.bigdata.com/ticket/904 (AssertionError: PropertyPathNode got to ASTJoinOrderByType.optimizeJoinGroup) +- http://trac.bigdata.com/ticket/905 (unsound combo query optimization: union + filter) +- http://trac.bigdata.com/ticket/906 (DC Prefix Button Appends "</li>") +- http://trac.bigdata.com/ticket/907 (Add a quick-start ant task for the BD Server "ant start") +- http://trac.bigdata.com/ticket/912 (Provide a configurable IAnalyzerFactory) +- http://trac.bigdata.com/ticket/913 (Blueprints API Implementation) +- http://trac.bigdata.com/ticket/914 (Settable timeout on SPARQL Query (REST API)) +- http://trac.bigdata.com/ticket/915 (DefaultAnalyzerFactory issues) +- http://trac.bigdata.com/ticket/920 (Content negotiation orders accept header scores in reverse) +- http://trac.bigdata.com/ticket/939 (NSS does not start from command line: bigdata-war/src not found.) +- http://trac.bigdata.com/ticket/940 (ProxyServlet in web.xml breaks tomcat WAR (HA LBS) + +1.3.0: + +- http://trac.bigdata.com/ticket/530 (Journal HA) +- http://trac.bigdata.com/ticket/621 (Coalesce write cache records and install reads in cache) +- http://trac.bigdata.com/ticket/623 (HA TXS) +- http://trac.bigdata.com/ticket/639 (Remove triple-buffering in RWStore) +- http://trac.bigdata.com/ticket/645 (HA backup) +- http://trac.bigdata.com/ticket/646 (River not compatible with newer 1.6.0 and 1.7.0 JVMs) +- http://trac.bigdata.com/ticket/648 (Add a custom function to use full text index for filtering.) 
+- http://trac.bigdata.com/ticket/651 (RWS test failure) +- http://trac.bigdata.com/ticket/652 (Compress write cache blocks for replication and in HALogs) +- http://trac.bigdata.com/ticket/662 (Latency on followers during commit on leader) +- http://trac.bigdata.com/ticket/663 (Issue with OPTIONAL blocks) +- http://trac.bigdata.com/ticket/664 (RWStore needs post-commit protocol) +- http://trac.bigdata.com/ticket/665 (HA3 LOAD non-responsive with node failure) +- http://trac.bigdata.com/ticket/666 (Occasional CI deadlock in HALogWriter testConcurrentRWWriterReader) +- http://trac.bigdata.com/ticket/670 (Accumulating HALog files cause latency for HA commit) +- http://trac.bigdata.com/ticket/671 (Query on follower fails during UPDATE on leader) +- http://trac.bigdata.com/ticket/673 (DGC in release time consensus protocol causes native thread leak in HAJournalServer at each commit) +- http://trac.bigdata.com/ticket/674 (WCS write cache compaction causes errors in RWS postHACommit()) +- http://trac.bigdata.com/ticket/676 (Bad patterns for timeout computations) +- http://trac.bigdata.com/ticket/677 (HA deadlock under UPDATE + QUERY) +- http://trac.bigdata.com/ticket/678 (DGC Thread and Open File Leaks: sendHALogForWriteSet()) +- http://trac.bigdata.com/ticket/679 (HAJournalServer can not restart due to logically empty log file) +- http://trac.bigdata.com/ticket/681 (HAJournalServer deadlock: pipelineRemove() and getLeaderId()) +- http://trac.bigdata.com/ticket/684 (Optimization with skos altLabel) +- http://trac.bigdata.com/ticket/686 (Consensus protocol does not detect clock skew correctly) +- http://trac.bigdata.com/ticket/687 (HAJournalServer Cache not populated) +- http://trac.bigdata.com/ticket/689 (Missing URL encoding in RemoteRepositoryManager) +- http://trac.bigdata.com/ticket/690 (Error when using the alias "a" instead of rdf:type for a multipart insert) +- http://trac.bigdata.com/ticket/691 (Failed to re-interrupt thread in HAJournalServer) +- http://trac.bigdata.com/ticket/692 (Failed to re-interrupt thread) +- http://trac.bigdata.com/ticket/693 (OneOrMorePath SPARQL property path expression ignored) +- http://trac.bigdata.com/ticket/694 (Transparently cancel update/query in RemoteRepository) +- http://trac.bigdata.com/ticket/695 (HAJournalServer reports "follower" but is in SeekConsensus and is not participating in commits.) 
+- http://trac.bigdata.com/ticket/701 (Problems in BackgroundTupleResult) +- http://trac.bigdata.com/ticket/702 (InvocationTargetException on /namespace call) +- http://trac.bigdata.com/ticket/704 (ask does not return json) +- http://trac.bigdata.com/ticket/705 (Race between QueryEngine.putIfAbsent() and shutdownNow()) +- http://trac.bigdata.com/ticket/706 (MultiSourceSequentialCloseableIterator.nextSource() can throw NPE) +- http://trac.bigdata.com/ticket/707 (BlockingBuffer.close() does not unblock threads) +- http://trac.bigdata.com/ticket/708 (BIND heisenbug - race condition on select query with BIND) +- http://trac.bigdata.com/ticket/711 (sparql protocol: mime type application/sparql-query) +- http://trac.bigdata.com/ticket/712 (SELECT ?x { OPTIONAL { ?x eg:doesNotExist eg:doesNotExist } } incorrect) +- http://trac.bigdata.com/ticket/715 (Interrupt of thread submitting a query for evaluation does not always terminate the AbstractRunningQuery) +- http://trac.bigdata.com/ticket/716 (Verify that IRunningQuery instances (and nested queries) are correctly cancelled when interrupted) +- http://trac.bigdata.com/ticket/718 (HAJournalServer needs to handle ZK client connection loss) +- http://trac.bigdata.com/ticket/720 (HA3 simultaneous service start failure) +- http://trac.bigdata.com/ticket/723 (HA asynchronous tasks must be canceled when invariants are changed) +- http://trac.bigdata.com/ticket/725 (FILTER EXISTS in subselect) +- http://trac.bigdata.com/ticket/726 (Logically empty HALog for committed transaction) +- http://trac.bigdata.com/ticket/727 (DELETE/INSERT fails with OPTIONAL non-matching WHERE) +- http://trac.bigdata.com/ticket/728 (Refactor to create HAClient) +- http://trac.bigdata.com/ticket/729 (ant bundleJar not working) +- http://trac.bigdata.com/ticket/731 (CBD and Update leads to 500 status code) +- http://trac.bigdata.com/ticket/732 (describe statement limit does not work) +- http://trac.bigdata.com/ticket/733 (Range optimizer not optimizing Slice service) +- http://trac.bigdata.com/ticket/734 (two property paths interfere) +- http://trac.bigdata.com/ticket/736 (MIN() malfunction) +- http://trac.bigdata.com/ticket/737 (class cast exception) +- http://trac.bigdata.com/ticket/739 (Inconsistent treatment of bind and optional property path) +- http://trac.bigdata.com/ticket/741 (ctc-striterators should build as independent top-level project (Apache2)) +- http://trac.bigdata.com/ticket/743 (AbstractTripleStore.destroy() does not filter for correct prefix) +- http://trac.bigdata.com/ticket/746 (Assertion error) +- http://trac.bigdata.com/ticket/747 (BOUND bug) +- http://trac.bigdata.com/ticket/748 (incorrect join with subselect renaming vars) +- http://trac.bigdata.com/ticket/754 (Failure to setup SERVICE hook and changeLog for Unisolated and Read/Write connections) +- http://trac.bigdata.com/ticket/755 (Concurrent QuorumActors can interfere leading to failure to progress) +- http://trac.bigdata.com/ticket/756 (order by and group_concat) +- http://trac.bigdata.com/ticket/760 (Code review on 2-phase commit protocol) +- http://trac.bigdata.com/ticket/764 (RESYNC failure (HA)) +- http://trac.bigdata.com/ticket/770 (alpp ordering) +- http://trac.bigdata.com/ticket/772 (Query timeout only checked at operator start/stop.) 
+- http://trac.bigdata.com/ticket/776 (Closed as duplicate of #490) +- http://trac.bigdata.com/ticket/778 (HA Leader fail results in transient problem with allocations on other services) +- http://trac.bigdata.com/ticket/783 (Operator Alerts (HA)) + +1.2.4: + +- http://trac.bigdata.com/ticket/777 (ConcurrentModificationException in ASTComplexOptionalOptimizer) + +1.2.3: + +- http://trac.bigdata.com/ticket/168 (Maven Build) +- http://trac.bigdata.com/ticket/196 (Journal leaks memory). +- http://trac.bigdata.com/ticket/235 (Occasional deadlock in CI runs in com.bigdata.io.writecache.TestAll) +- http://trac.bigdata.com/ticket/312 (CI (mock) quorums deadlock) +- http://trac.bigdata.com/ticket/405 (Optimize hash join for subgroups with no incoming bound vars.) +- http://trac.bigdata.com/ticket/412 (StaticAnalysis#getDefinitelyBound() ignores exogenous variables.) +- http://trac.bigdata.com/ticket/485 (RDFS Plus Profile) +- http://trac.bigdata.com/ticket/495 (SPARQL 1.1 Property Paths) +- http://trac.bigdata.com/ticket/519 (Negative parser tests) +- http://trac.bigdata.com/ticket/531 (SPARQL UPDATE for SOLUTION SETS) +- http://trac.bigdata.com/ticket/535 (Optimize JOIN VARS for Sub-Selects) +- http://trac.bigdata.com/ticket/555 (Support PSOutputStream/InputStream at IRawStore) +- http://trac.bigdata.com/ticket/559 (Use RDFFormat.NQUADS as the format identifier for the NQuads parser) +- http://trac.bigdata.com/ticket/570 (MemoryManager Journal does not implement all methods). +- http://trac.bigdata.com/ticket/575 (NSS Admin API) +- http://trac.bigdata.com/ticket/577 (DESCRIBE with OFFSET/LIMIT needs to use sub-select) +- http://trac.bigdata.com/ticket/578 (Concise Bounded Description (CBD)) +- http://trac.bigdata.com/ticket/579 (CONSTRUCT should use distinct SPO filter) +- http://trac.bigdata.com/ticket/583 (VoID in ServiceDescription) +- http://trac.bigdata.com/ticket/586 (RWStore immedateFree() not removing Checkpoint addresses from the historical index cache.) +- http://trac.bigdata.com/ticket/590 (nxparser fails with uppercase language tag) +- http://trac.bigdata.com/ticket/592 (Optimize RWStore allocator sizes) +- http://trac.bigdata.com/ticket/593 (Ugrade to Sesame 2.6.10) +- http://trac.bigdata.com/ticket/594 (WAR was deployed using TRIPLES rather than QUADS by default) +- http://trac.bigdata.com/ticket/596 (Change web.xml parameter names to be consistent with Jini/River) +- http://trac.bigdata.com/ticket/597 (SPARQL UPDATE LISTENER) +- http://trac.bigdata.com/ticket/598 (B+Tree branching factor and HTree addressBits are confused in their NodeSerializer implementations) +- http://trac.bigdata.com/ticket/599 (BlobIV for blank node : NotMaterializedException) +- http://trac.bigdata.com/ticket/600 (BlobIV collision counter hits false limit.) 
+- http://trac.bigdata.com/ticket/601 (Log uncaught exceptions) +- http://trac.bigdata.com/ticket/602 (RWStore does not discard logged deletes on reset()) +- http://trac.bigdata.com/ticket/607 (History service / index) +- http://trac.bigdata.com/ticket/608 (LOG BlockingBuffer not progressing at INFO or lower level) +- http://trac.bigdata.com/ticket/609 (bigdata-ganglia is required dependency for Journal) +- http://trac.bigdata.com/ticket/611 (The code that processes SPARQL Update has a typo) +- http://trac.bigdata.com/ticket/612 (Bigdata scale-up depends on zookeper) +- http://trac.bigdata.com/ticket/613 (SPARQL UPDATE response inlines large DELETE or INSERT triple graphs) +- http://trac.bigdata.com/ticket/614 (static join optimizer does not get ordering right when multiple tails share vars with ancestry) +- http://trac.bigdata.com/ticket/615 (AST2BOpUtility wraps UNION with an unnecessary hash join) +- http://trac.bigdata.com/ticket/616 (Row store read/update not isolated on Journal) +- http://trac.bigdata.com/ticket/617 (Concurrent KB create fails with "No axioms defined?") +- http://trac.bigdata.com/ticket/618 (DirectBufferPool.poolCapacity maximum of 2GB) +- http://trac.bigdata.com/ticket/619 (RemoteRepository class should use application/x-www-form-urlencoded for large POST requests) +- http://trac.bigdata.com/ticket/620 (UpdateServlet fails to parse MIMEType when doing conneg.) +- http://trac.bigdata.com/ticket/626 (Expose performance counters for read-only indices) +- http://trac.bigdata.com/ticket/627 (Environment variable override for NSS properties file) +- http://trac.bigdata.com/ticket/628 (Create a bigdata-client jar for the NSS REST API) +- http://trac.bigdata.com/ticket/631 (ClassCastException in SIDs mode query) +- http://trac.bigdata.com/ticket/632 (NotMaterializedException when a SERVICE call needs variables that are provided as query input bindings) +- http://trac.bigdata.com/ticket/633 (ClassCastException when binding non-uri values to a variable that occurs in predicate position) +- http://trac.bigdata.com/ticket/638 (Change DEFAULT_MIN_RELEASE_AGE to 1ms) +- http://trac.bigdata.com/ticket/640 (Conditionally rollback() BigdataSailConnection if dirty) +- http://trac.bigdata.com/ticket/642 (Property paths do not work inside of exists/not exists filters) +- http://trac.bigdata.com/ticket/643 (Add web.xml parameters to lock down public NSS end points) +- http://trac.bigdata.com/ticket/644 (Bigdata2Sesame2BindingSetIterator can fail to notice asynchronous close()) +- http://trac.bigdata.com/ticket/650 (Can not POST RDF to a graph using REST API) +- http://trac.bigdata.com/ticket/654 (Rare AssertionError in WriteCache.clearAddrMap()) +- http://trac.bigdata.com/ticket/655 (SPARQL REGEX operator does not perform case-folding correctly for Unicode data) +- http://trac.bigdata.com/ticket/656 (InFactory bug when IN args consist of a single literal) +- http://trac.bigdata.com/ticket/647 (SIDs mode creates unnecessary hash join for GRAPH group patterns) +- http://trac.bigdata.com/ticket/667 (Provide NanoSparqlServer initialization hook) +- http://trac.bigdata.com/ticket/669 (Doubly nested subqueries yield no results with LIMIT) +- http://trac.bigdata.com/ticket/675 (Flush indices in parallel during checkpoint to reduce IO latency) +- http://trac.bigdata.com/ticket/682 (AtomicRowFilter UnsupportedOperationException) + +1.2.2: + +- http://trac.bigdata.com/ticket/586 (RWStore immedateFree() not removing Checkpoint addresses from the historical index cache.) 
+- http://trac.bigdata.com/ticket/602 (RWStore does not discard logged deletes on reset()) +- http://trac.bigdata.com/ticket/603 (Prepare critical maintenance release as branch of 1.2.1) + +1.2.1: + +- http://trac.bigdata.com/ticket/533 (Review materialization for inline IVs) +- http://trac.bigdata.com/ticket/539 (NotMaterializedException with REGEX and Vocab) +- http://trac.bigdata.com/ticket/540 (SPARQL UPDATE using NSS via index.html) +- http://trac.bigdata.com/ticket/541 (MemoryManaged backed Journal mode) +- http://trac.bigdata.com/ticket/546 (Index cache for Journal) +- http://trac.bigdata.com/ticket/549 (BTree can not be cast to Name2Addr (MemStore recycler)) +- http://trac.bigdata.com/ticket/550 (NPE in Leaf.getKey() : root cause was user error) +- http://trac.bigdata.com/ticket/558 (SPARQL INSERT not working in same request after INSERT DATA) +- http://trac.bigdata.com/ticket/562 (Sub-select in INSERT cause NPE in UpdateExprBuilder) +- http://trac.bigdata.com/ticket/563 (DISTINCT ORDER BY) +- http://trac.bigdata.com/ticket/567 (Failure to set cached value on IV results in incorrect behavior for complex UPDATE operation) +- http://trac.bigdata.com/ticket/568 (DELETE WHERE fails with Java AssertionError) +- http://trac.bigdata.com/ticket/569 (LOAD-CREATE-LOAD using virgin journal fails with "Graph exists" exception) +- http://trac.bigdata.com/ticket/571 (DELETE/INSERT WHERE handling of blank nodes) +- http://trac.bigdata.com/ticket/573 (NullPointerException when attempting to INSERT DATA containing a blank node) + +1.2.0: (*) + +- http://trac.bigdata.com/ticket/92 (Monitoring webapp) +- http://trac.bigdata.com/ticket/267 (Support evaluation of 3rd party operators) +- http://trac.bigdata.com/ticket/337 (Compact and efficient movement of binding sets between nodes.) 
+- http://trac.bigdata.com/ticket/433 (Cluster leaks threads under read-only index operations: DGC thread leak) +- http://trac.bigdata.com/ticket/437 (Thread-local cache combined with unbounded thread pools causes effective memory leak: termCache memory leak & thread-local buffers) +- http://trac.bigdata.com/ticket/438 (KeyBeforePartitionException on cluster) +- http://trac.bigdata.com/ticket/439 (Class loader problem) +- http://trac.bigdata.com/ticket/441 (Ganglia integration) +- http://trac.bigdata.com/ticket/443 (Logger for RWStore transaction service and recycler) +- http://trac.bigdata.com/ticket/444 (SPARQL query can fail to notice when IRunningQuery.isDone() on cluster) +- http://trac.bigdata.com/ticket/445 (RWStore does not track tx release correctly) +- http://trac.bigdata.com/ticket/446 (HTTP Repostory broken with bigdata 1.1.0) +- http://trac.bigdata.com/ticket/448 (SPARQL 1.1 UPDATE) +- http://trac.bigdata.com/ticket/449 (SPARQL 1.1 Federation extension) +- http://trac.bigdata.com/ticket/451 (Serialization error in SIDs mode on cluster) +- http://trac.bigdata.com/ticket/454 (Global Row Store Read on Cluster uses Tx) +- http://trac.bigdata.com/ticket/456 (IExtension implementations do point lookups on lexicon) +- http://trac.bigdata.com/ticket/457 ("No such index" on cluster under concurrent query workload) +- http://trac.bigdata.com/ticket/458 (Java level deadlock in DS) +- http://trac.bigdata.com/ticket/460 (Uncaught interrupt resolving RDF terms) +- http://trac.bigdata.com/ticket/461 (KeyAfterPartitionException / KeyBeforePartitionException on cluster) +- http://trac.bigdata.com/ticket/463 (NoSuchVocabularyItem with LUBMVocabulary for DerivedNumericsExtension) +- http://trac.bigdata.com/ticket/464 (Query statistics do not update correctly on cluster) +- http://trac.bigdata.com/ticket/465 (Too many GRS reads on cluster) +- http://trac.bigdata.com/ticket/469 (Sail does not flush assertion buffers before query) +- http://trac.bigdata.com/ticket/472 (acceptTaskService pool size on cluster) +- http://trac.bigdata.com/ticket/475 (Optimize serialization for query messages on cluster) +- http://trac.bigdata.com/ticket/476 (Test suite for writeCheckpoint() and recycling for BTree/HTree) +- http://trac.bigdata.com/ticket/478 (Cluster does not map input solution(s) across shards) +- http://trac.bigdata.com/ticket/480 (Error releasing deferred frees using 1.0.6 against a 1.0.4 journal) +- http://trac.bigdata.com/ticket/481 (PhysicalAddressResolutionException against 1.0.6) +- http://trac.bigdata.com/ticket/482 (RWStore reset() should be thread-safe for concurrent readers) +- http://trac.bigdata.com/ticket/484 (Java API for NanoSparqlServer REST API) +- http://trac.bigdata.com/ticket/491 (AbstractTripleStore.destroy() does not clear the locator cache) +- http://trac.bigdata.com/ticket/492 (Empty chunk in ThickChunkMessage (cluster)) +- http://trac.bigdata.com/ticket/493 (Virtual Graphs) +- http://trac.bigdata.com/ticket/496 (Sesame 2.6.3) +- http://trac.bigdata.com/ticket/497 (Implement STRBEFORE, STRAFTER, and REPLACE) +- http://trac.bigdata.com/ticket/498 (Bring bigdata RDF/XML parser up to openrdf 2.6.3.) 
+- http://trac.bigdata.com/ticket/500 (SPARQL 1.1 Service Description) +- http://www.openrdf.org/issues/browse/SES-884 (Aggregation with an solution set as input should produce an empty solution as output) +- http://www.openrdf.org/issues/browse/SES-862 (Incorrect error handling for SPARQL aggregation; fix in 2.6.1) +- http://www.openrdf.org/issues/browse/SES-873 (Order the same Blank Nodes together in ORDER BY) +- http://trac.bigdata.com/ticket/501 (SPARQL 1.1 BINDINGS are ignored) +- http://trac.bigdata.com/ticket/503 (Bigdata2Sesame2BindingSetIterator throws QueryEvaluationException were it should throw NoSuchElementException) +- http://trac.bigdata.com/ticket/504 (UNION with Empty Group Pattern) +- http://trac.bigdata.com/ticket/505 (Exception when using SPARQL sort & statement identifiers) +- http://trac.bigdata.com/ticket/506 (Load, closure and query performance in 1.1.x versus 1.0.x) +- http://trac.bigdata.com/ticket/508 (LIMIT causes hash join utility to log errors) +- http://trac.bigdata.com/ticket/513 (Expose the LexiconConfiguration to Function BOPs) +- http://trac.bigdata.com/ticket/515 (Query with two "FILTER NOT EXISTS" expressions returns no results) +- http://trac.bigdata.com/ticket/516 (REGEXBOp should cache the Pattern when it is a constant) +- http://trac.bigdata.com/ticket/517 (Java 7 Compiler Compatibility) +- http://trac.bigdata.com/ticket/518 (Review function bop subclass hierarchy, optimize datatype bop, etc.) +- http://trac.bigdata.com/ticket/520 (CONSTRUCT WHERE shortcut) +- http://trac.bigdata.com/ticket/521 (Incremental materialization of Tuple and Graph query results) +- http://trac.bigdata.com/ticket/525 (Modify the IChangeLog interface to support multiple agents) +- http://trac.bigdata.com/ticket/527 (Expose timestamp of LexiconRelation to function bops) +- http://trac.bigdata.com/ticket/532 (ClassCastException during hash join (can not be cast to TermId)) +- http://trac.bigdata.com/ticket/533 (Review materialization for inline IVs) +- http://trac.bigdata.com/ticket/534 (BSBM BI Q5 error using MERGE JOIN) + +1.1.0 (*) + + - http://trac.bigdata.com/ticket/23 (Lexicon joins) + - http://trac.bigdata.com/ticket/109 (Store large literals as "blobs") + - http://trac.bigdata.com/ticket/181 (Scale-out LUBM "how to" in wiki and build.xml are out of date.) + - http://trac.bigdata.com/ticket/203 (Implement an persistence capable hash table to support analytic query) + - http://trac.bigdata.com/ticket/209 (AccessPath should visit binding sets rather than elements for high level query.) + - http://trac.bigdata.com/ticket/227 (SliceOp appears to be necessary when operator plan should suffice without) + - http://trac.bigdata.com/ticket/232 (Bottom-up evaluation semantics). + - http://trac.bigdata.com/ticket/246 (Derived xsd numeric data types must be inlined as extension types.) + - http://trac.bigdata.com/ticket/254 (Revisit pruning of intermediate variable bindings during query execution) + - http://trac.bigdata.com/ticket/261 (Lift conditions out of subqueries.) + - http://trac.bigdata.com/ticket/300 (Native ORDER BY) + - http://trac.bigdata.com/ticket/324 (Inline predeclared URIs and namespaces in 2-3 bytes) + - http://trac.bigdata.com/ticket/330 (NanoSparqlServer does not locate "html" resources when run from jar) + - http://trac.bigdata.com/ticket/334 (Support inlining of unicode data in the statement indices.) 
+ - http://trac.bigdata.com/ticket/364 (Scalable default graph evaluation) + - http://trac.bigdata.com/ticket/368 (Prune variable bindings during query evaluation) + - http://trac.bigdata.com/ticket/370 (Direct translation of openrdf AST to bigdata AST) + - http://trac.bigdata.com/ticket/373 (Fix StrBOp and other IValueExpressions) + - http://trac.bigdata.com/ticket/377 (Optimize OPTIONALs with multiple statement patterns.) + - http://trac.bigdata.com/ticket/380 (Native SPARQL evaluation on cluster) + - http://trac.bigdata.com/ticket/387 (Cluster does not compute closure) + - http://trac.bigdata.com/ticket/395 (HTree hash join performance) + - http://trac.bigdata.com/ticket/401 (inline xsd:unsigned datatypes) + - http://trac.bigdata.com/ticket/408 (xsd:string cast fails for non-numeric data) + - http://trac.bigdata.com/ticket/421 (New query hints model.) + - http://trac.bigdata.com/ticket/431 (Use of read-only tx per query defeats cache on cluster) + +1.0.3 + + - http://trac.bigdata.com/ticket/217 (BTreeCounters does not track bytes released) + - http://trac.bigdata.com/ticket/269 (Refactor performance counters using accessor interface) + - http://trac.bigdata.com/ticket/329 (B+Tree should delete bloom filter when it is disabled.) + - http://trac.bigdata.com/ticket/372 (RWStore does not prune the CommitRecordIndex) + - http://trac.bigdata.com/ticket/375 (Persistent memory leaks (RWStore/DISK)) + - http://trac.bigdata.com/ticket/385 (FastRDFValueCoder2: ArrayIndexOutOfBoundsException) + - http://trac.bigdata.com/ticket/391 (Release age advanced on WORM mode journal) + - http://trac.bigdata.com/ticket/392 (Add a DELETE by access path method to the NanoSparqlServer) + - http://trac.bigdata.com/ticket/393 (Add "context-uri" request parameter to specify the default context for INSERT in the REST API) + - http://trac.bigdata.com/ticket/394 (log4j configuration error message in WAR deployment) + - http://trac.bigdata.com/ticket/399 (Add a fast range count method to the REST API) + - http://trac.bigdata.com/ticket/422 (Support temp triple store wrapped by a BigdataSail) + - http://trac.bigdata.com/ticket/424 (NQuads support for NanoSparqlServer) + - http://trac.bigdata.com/ticket/425 (Bug fix to DEFAULT_RDF_FORMAT for bulk data loader in scale-out) + - http://trac.bigdata.com/ticket/426 (Support either lockfile (procmail) and dotlockfile (liblockfile1) in scale-out) + - http://trac.bigdata.com/ticket/427 (BigdataSail#getReadOnlyConnection() race condition with concurrent commit) + - http://trac.bigdata.com/ticket/435 (Address is 0L) + - http://trac.bigdata.com/ticket/436 (TestMROWTransactions failure in CI) + +1.0.2 + + - http://trac.bigdata.com/ticket/32 (Query time expansion of (foo rdf:type rdfs:Resource) drags in SPORelation for scale-out.) + - http://trac.bigdata.com/ticket/181 (Scale-out LUBM "how to" in wiki and build.xml are out of date.) + - http://trac.bigdata.com/ticket/356 (Query not terminated by error.) + - http://trac.bigdata.com/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.) + - http://trac.bigdata.com/ticket/361 (IRunningQuery not closed promptly.) + - http://trac.bigdata.com/ticket/371 (DataLoader fails to load resources available from the classpath.) + - http://trac.bigdata.com/ticket/376 (Support for the streaming of bigdata IBindingSets into a sparql query.) + - http://trac.bigdata.com/ticket/378 (ClosedByInterruptException during heavy query mix.) + - http://trac.bigdata.com/ticket/379 (NotSerializableException for SPOAccessPath.) 
+ - http://trac.bigdata.com/ticket/382 (Change dependencies to Apache River 2.2.0) + +1.0.1 (*) + + - http://trac.bigdata.com/ticket/107 (Unicode clean schema names in the sparse row store). + - http://trac.bigdata.com/ticket/124 (TermIdEncoder should use more bits for scale-out). + - http://trac.bigdata.com/ticket/225 (OSX requires specialized performance counter collection classes). + - http://trac.bigdata.com/ticket/348 (BigdataValueFactory.asValue() must return new instance when DummyIV is used). + - http://trac.bigdata.com/ticket/349 (TermIdEncoder limits Journal to 2B distinct RDF Values per triple/quad store instance). + - http://trac.bigdata.com/ticket/351 (SPO not Serializable exception in SIDS mode (scale-out)). + - http://trac.bigdata.com/ticket/352 (ClassCastException when querying with binding-values that are not known to the database). + - http://trac.bigdata.com/ticket/353 (UnsupportedOperatorException for some SPARQL queries). + - http://trac.bigdata.com/ticket/355 (Query failure when comparing with non materialized value). + - http://trac.bigdata.com/ticket/357 (RWStore reports "FixedAllocator returning null address, with freeBits".) + - http://trac.bigdata.com/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.) + - http://trac.bigdata.com/ticket/362 (log4j - slf4j bridge.) + +For more information about bigdata(R), please see the following links: + +[1] http://wiki.bigdata.com/wiki/index.php/Main_Page +[2] http://wiki.bigdata.com/wiki/index.php/GettingStarted +[3] http://wiki.bigdata.com/wiki/index.php/Roadmap +[4] http://www.bigdata.com/bigdata/docs/api/ +[5] http://sourceforge.net/projects/bigdata/ +[6] http://www.bigdata.com/blog +[7] http://www.systap.com/bigdata.htm +[8] http://sourceforge.net/projects/bigdata/files/bigdata/ +[9] http://wiki.bigdata.com/wiki/index.php/DataMigration +[10] http://wiki.bigdata.com/wiki/index.php/HAJournalServer +[11] http://www.bigdata.com/whitepapers/reifSPARQL.pdf +[12] http://wiki.bigdata.com/wiki/index.php/RDF_GAS_API + +About bigdata: + +Bigdata(R) is a horizontally-scaled, general purpose storage and computing fabric for ordered data (B+Trees), designed to operate on either a single server or a cluster of commodity hardware. Bigdata(R) uses dynamically partitioned key-range shards in order to remove any realistic scaling limits - in principle, bigdata(R) may be deployed on 10s, 100s, or even thousands of machines and new capacity may be added incrementally without requiring the full reload of all data. The bigdata(R) RDF database supports RDFS and OWL Lite reasoning, high-level query (SPARQL), and datum level provenance. This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
From: <tho...@us...> - 2014-10-28 13:17:45
|
Revision: 8692 http://sourceforge.net/p/bigdata/code/8692 Author: thompsonbry Date: 2014-10-28 13:17:38 +0000 (Tue, 28 Oct 2014) Log Message: ----------- Pushing critical bug fixes for 1.3.3 RELEASE. #1021 Add critical section protection to AbstractJournal.abort() and BigdataSailConnection.rollback() (critical bug fix) #1026 SPARQL UPDATE with runtime errors causes problems with lexicon indices (critical bug fix) #1029 RWStore commit state not correctly rolled back if abort fails on empty journal (minor issue) #1030 RWStorage stats cleanup (minor issue) Modified Paths: -------------- branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java branches/BIGDATA_RELEASE_1_3_0/bigdata/src/test/com/bigdata/rwstore/TestRWJournal.java branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/QueryServlet.java Added Paths: ----------- branches/BIGDATA_RELEASE_1_3_0/bigdata/src/test/com/bigdata/journal/TestJournalAbort.java Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2014-10-28 13:12:49 UTC (rev 8691) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2014-10-28 13:17:38 UTC (rev 8692) @@ -553,6 +553,16 @@ * @see #getName2Addr() */ private volatile Name2Addr _name2Addr; + + /** + * An atomic state specifying whether a clean abort is required. This is set + * to true by critical section code in _abort() if it does not complete cleanly. + * <p> + * It is checked in the commit() method to ensure updates are protected. + * + * @see #1021 (Add critical section protection to AbstractJournal.abort() and BigdataSailConnection.rollback()) + */ + private final AtomicBoolean abortRequired = new AtomicBoolean(false); /** * Return the "live" BTree mapping index names to the last metadata record @@ -2745,6 +2755,8 @@ final WriteLock lock = _fieldReadWriteLock.writeLock(); lock.lock(); + // @see #1021 (Add critical section protection to AbstractJournal.abort() and BigdataSailConnection.rollback()) + boolean success = false; try { @@ -2757,6 +2769,8 @@ if (_bufferStrategy == null) { // Nothing to do. + success = true; + return; } @@ -2896,8 +2910,12 @@ if (log.isInfoEnabled()) log.info("done"); + + success = true; // mark successful abort. } finally { + // @see #1021 (Add critical section protection to AbstractJournal.abort() and BigdataSailConnection.rollback()) + abortRequired.set(!success); lock.unlock(); @@ -3049,6 +3067,10 @@ @Override public long commit() { + + // Critical Section Check. @see #1021 (Add critical section protection to AbstractJournal.abort() and BigdataSailConnection.rollback()) + if (abortRequired.get()) // FIXME Move this into commitNow() after tagging the hot fix. (Marked to make sure this gets done.) + throw new IllegalStateException("Commit cannot be called, a call to abort must be made before further updates"); // The timestamp to be assigned to this commit point.
final long commitTime = nextCommitTimestamp(); Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java 2014-10-28 13:12:49 UTC (rev 8691) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java 2014-10-28 13:17:38 UTC (rev 8692) @@ -3007,6 +3007,29 @@ * (RWStore does not discard deferred deletes on reset) */ m_deferredFreeOut.reset(); + + /* + * Reset any storage stats + * FIXME: Change StorageStats internals to be able to efficiently commit/reset and avoid disk read + */ + if (m_storageStatsAddr != 0) { + final long statsAddr = m_storageStatsAddr >> 16; + final int statsLen = ((int) m_storageStatsAddr) & 0xFFFF; + final byte[] stats = new byte[statsLen + 4]; // allow for checksum + getData(statsAddr, stats); + final DataInputStream instr = new DataInputStream(new ByteArrayInputStream(stats)); + try { + m_storageStats = new StorageStats(instr); + for (FixedAllocator fa: m_allocs) { + m_storageStats.register(fa); + } + } catch (IOException e) { + throw new RuntimeException("Unable to reset storage stats", e); + } + } else { + m_storageStats = new StorageStats(m_allocSizes); + } + } catch (Exception e) { throw new IllegalStateException("Unable to reset the store", e); } finally { @@ -3156,7 +3179,7 @@ /** Reset pre-commit state to support reset/abort/rollback. */ void reset() { - if (!m_allocationWriteLock.isHeldByCurrentThread()) + if (!m_allocationWriteLock.isHeldByCurrentThread()) throw new IllegalMonitorStateException(); RWStore.this.m_storageStatsAddr = m_storageStatsAddr; RWStore.this.m_committedNextAllocation = m_lastCommittedNextAllocation; Added: branches/BIGDATA_RELEASE_1_3_0/bigdata/src/test/com/bigdata/journal/TestJournalAbort.java =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata/src/test/com/bigdata/journal/TestJournalAbort.java (rev 0) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata/src/test/com/bigdata/journal/TestJournalAbort.java 2014-10-28 13:17:38 UTC (rev 8692) @@ -0,0 +1,240 @@ +package com.bigdata.journal; + +import java.io.File; +import java.io.IOException; +import java.util.Properties; +import java.util.Random; +import java.util.UUID; +import java.util.concurrent.atomic.AtomicBoolean; + +import junit.framework.TestCase2; + +import com.bigdata.btree.BTree; +import com.bigdata.btree.IndexMetadata; +import com.bigdata.journal.Journal.Options; +import com.bigdata.rawstore.Bytes; +import com.bigdata.rwstore.RWStore; +import com.bigdata.util.InnerCause; + +/** + * Test suite for a failure to handle errors inside of abort() by marking the + * journal as requiring abort(). + * + * @see #1021 (Add critical section protection to AbstractJournal.abort() and + * BigdataSailConnection.rollback()) + * + * @author martyncutcher + * + * TODO This should be a proxied test suite. It is RWStore specific.
+ */ +public class TestJournalAbort extends TestCase2 { + + /** + * + */ + public TestJournalAbort() { + } + + /** + * @param name + */ + public TestJournalAbort(String name) { + super(name); + } + + @Override + public void setUp() throws Exception { + + super.setUp(); + + } + + @Override + public void tearDown() throws Exception { + + TestHelper.checkJournalsClosed(this); + + super.tearDown(); + } + + @Override + public Properties getProperties() { + File file; + try { + file = File.createTempFile(getName(), Options.JNL); + file.deleteOnExit(); + } catch (IOException e) { + throw new RuntimeException(e); + } + + final Properties properties = new Properties(); + + properties.setProperty(Options.BUFFER_MODE, BufferMode.DiskRW.toString()); + + properties.setProperty(Options.FILE, file.toString()); + + properties.setProperty(Journal.Options.INITIAL_EXTENT, "" + + Bytes.megabyte * 10); + + return properties; + + } + + static private class AbortException extends RuntimeException { + private static final long serialVersionUID = 1L; + + AbortException(String msg) { + super(msg); + } + } + + /** + * In this test we want to run through some data inserts, commits and aborts. + * + * The overridden Journal will fail to abort correctly by overriding + * the discardCommitters() method that AbstractJournal calls after calling bufferStrategy.reset(). + * + * @throws InterruptedException + */ + public void test_simpleAbortFailure() throws InterruptedException { + + // Define atomic to control whether abort should succeed or fail + final AtomicBoolean succeed = new AtomicBoolean(true); + + final Journal jnl = new Journal(getProperties()) { + @Override + protected void discardCommitters() { + + if (succeed.get()) { + super.discardCommitters(); + } else { + throw new AbortException("Something wrong"); + } + + } + }; + + final RWStrategy strategy = (RWStrategy) jnl.getBufferStrategy(); + final RWStore store = strategy.getStore(); + + final String btreeName = "TestBTreeAbort"; + + // 1) Create and commit some data + // 2) Create more data and abort (success) + // 3) Create and commit more data (should work) + // 4) Create more data and abort (fail) + // 5) Create and commit more data (should fail) + + BTree btree = createBTree(jnl); + + jnl.registerIndex(btreeName, btree); + + btree.writeCheckpoint(); + jnl.commit(); + + System.out.println("Start Commit Counter: " + jnl.getCommitRecord().getCommitCounter()); + // 1) Add some data and commit + addSomeData(btree); + btree.writeCheckpoint(); + jnl.commit(); + System.out.println("After Data Commit Counter: " + jnl.getCommitRecord().getCommitCounter()); + + btree.close(); // force re-open + + btree = jnl.getIndex(btreeName); + + addSomeData(btree); + btree.writeCheckpoint(); + jnl.commit(); + + + // Show Allocators + final StringBuilder sb1 = new StringBuilder(); + store.showAllocators(sb1); + + if(log.isInfoEnabled()) log.info(sb1.toString()); + + // 2) Add more data and abort + if(log.isInfoEnabled()) log.info("Pre Abort Commit Counter: " + jnl.getCommitRecord().getCommitCounter()); + btree.close(); // force re-open + addSomeData(btree); + btree.writeCheckpoint(); + jnl.abort(); + if(log.isInfoEnabled()) log.info("Post Abort Commit Counter: " + jnl.getCommitRecord().getCommitCounter()); + + btree.close(); // force re-open after abort + btree = jnl.getIndex(btreeName); + + // Show Allocators again (should be the same visually) + final StringBuilder sb2 = new StringBuilder(); + store.showAllocators(sb2); + + if(log.isInfoEnabled()) log.info("After Abort\n" +
sb2.toString()); + + // 3) More data and commit + addSomeData(btree); + btree.writeCheckpoint(); + jnl.commit(); + + // Show Allocators + final StringBuilder sb3 = new StringBuilder(); + store.showAllocators(sb3); + if(log.isInfoEnabled()) log.info("After More Data\n" + sb3.toString()); + + // 4) More data and bad abort + addSomeData(btree); + btree.writeCheckpoint(); + succeed.set(false); + try { + jnl.abort(); + fail(); + } catch (Exception e) { + // Check the Abort was Aborted + assertTrue(InnerCause.isInnerCause(e, AbortException.class)); + // good, let's see what state it is in now + } + + btree.close(); + + // 5) More data and bad commit (after bad abort) + try { + addSomeData(btree); + btree.writeCheckpoint(); + jnl.commit(); + fail(); + } catch (Exception e) { + + if(log.isInfoEnabled()) log.info("Expected exception", e); + + succeed.set(true); + jnl.abort(); // successful abort! + } + + btree = jnl.getIndex(btreeName); + + // 6) More data and good commit (after good abort) + addSomeData(btree); + btree.writeCheckpoint(); + jnl.commit(); + } + + private void addSomeData(final BTree btree) { + + final Random r = new Random(); + + for (int n = 0; n < 2000; n++) { + final byte[] key = new byte[64]; + final byte[] value = new byte[256]; + r.nextBytes(key); + r.nextBytes(value); + btree.insert(key, value); + } + } + + private BTree createBTree(final Journal store) { + final IndexMetadata metadata = new IndexMetadata(UUID.randomUUID()); + + return BTree.create(store, metadata); + } + +} Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata/src/test/com/bigdata/rwstore/TestRWJournal.java =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata/src/test/com/bigdata/rwstore/TestRWJournal.java 2014-10-28 13:12:49 UTC (rev 8691) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata/src/test/com/bigdata/rwstore/TestRWJournal.java 2014-10-28 13:17:38 UTC (rev 8692) @@ -62,6 +62,7 @@ import com.bigdata.journal.Journal; import com.bigdata.journal.Journal.Options; import com.bigdata.journal.RWStrategy; +import com.bigdata.journal.TestJournalAbort; import com.bigdata.journal.TestJournalBasics; import com.bigdata.journal.VerifyCommitRecordIndex; import com.bigdata.rawstore.AbstractRawStoreTestCase; @@ -126,10 +127,20 @@ */ suite.addTest(TestJournalBasics.suite()); + /* + * TODO This should be a proxied test suite. It is RWStore specific + * right now. 
+ * + * @see #1021 (Add critical section protection to + * AbstractJournal.abort() and BigdataSailConnection.rollback()) + */ + suite.addTestSuite(TestJournalAbort.class); + return suite; } + @Override public Properties getProperties() { final Properties properties = super.getProperties(); @@ -2178,12 +2189,12 @@ * not robust to internal failure.</a> */ public void test_commitStateError() { - Journal store = (Journal) getStore(); + final Journal store = (Journal) getStore(); try { - RWStrategy bs = (RWStrategy) store.getBufferStrategy(); + final RWStrategy bs = (RWStrategy) store.getBufferStrategy(); - RWStore rws = bs.getStore(); + final RWStore rws = bs.getStore(); final long addr = bs.write(randomData(78)); @@ -2260,10 +2271,10 @@ } public void test_allocCommitFreeWithHistory() { - Journal store = (Journal) getStore(4); + final Journal store = (Journal) getStore(4); try { - RWStrategy bs = (RWStrategy) store.getBufferStrategy(); + final RWStrategy bs = (RWStrategy) store.getBufferStrategy(); final long addr = bs.write(randomData(78)); Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java 2014-10-28 13:12:49 UTC (rev 8691) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java 2014-10-28 13:17:38 UTC (rev 8692) @@ -70,6 +70,7 @@ import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.CopyOnWriteArraySet; import java.util.concurrent.Semaphore; +import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.locks.Lock; import java.util.concurrent.locks.ReentrantReadWriteLock; @@ -1652,7 +1653,14 @@ * those at a time). */ private final boolean unisolated; + + /** + * Critical section support in case rollback is not completed cleanly, in which + * case calls to commit() will fail until a clean rollback() is made. @see #1021 (Add critical section protection to AbstractJournal.abort() and BigdataSailConnection.rollback()) + */ + private final AtomicBoolean rollbackRequired = new AtomicBoolean(false); + public String toString() { return getClass().getName() + "{timestamp=" @@ -3150,26 +3158,33 @@ */ @Override public synchronized void rollback() throws SailException { - - assertWritableConn(); - - if (txLog.isInfoEnabled()) - txLog.info("SAIL-ROLLBACK-CONN: " + this); - - // discard buffered assertions and/or retractions. - clearBuffers(); - - // discard the write set. - database.abort(); + // @see #1021 (Add critical section protection to AbstractJournal.abort() and BigdataSailConnection.rollback()) + boolean success = false; + try { + assertWritableConn(); + + if (txLog.isInfoEnabled()) + txLog.info("SAIL-ROLLBACK-CONN: " + this); + + // discard buffered assertions and/or retractions. + clearBuffers(); + + // discard the write set. + database.abort(); + + if (changeLog != null) { + + changeLog.transactionAborted(); + + } + + dirty = false; + + success = true; // mark successful rollback + } finally { // @see #1021 (Add critical section protection to AbstractJournal.abort() and BigdataSailConnection.rollback()) + rollbackRequired.set(!success); + } - if (changeLog != null) { - - changeLog.transactionAborted(); - - } - - dirty = false; - } /** @@ -3197,6 +3212,13 @@ * was committed. 
*/ public synchronized long commit2() throws SailException { + + /** + * If a call to rollback does not complete cleanly, then rollbackRequired will be set and no updates will be allowed. + * @see #1021 (Add critical section protection to AbstractJournal.abort() and BigdataSailConnection.rollback()) + */ + if (rollbackRequired.get()) + throw new IllegalStateException("Rollback required"); assertWritableConn(); Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/QueryServlet.java =================================================================== --- branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/QueryServlet.java 2014-10-28 13:12:49 UTC (rev 8691) +++ branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/QueryServlet.java 2014-10-28 13:17:38 UTC (rev 8692) @@ -350,8 +350,10 @@ * * Note: If the client closes the connection, then the response's * InputStream will be closed and the task will terminate rather than - * running on in the background with a disconnected client. + * running on in the background with a disconnected client. @see #1026 (SPARQL UPDATE with runtime errors causes problems with lexicon indices) */ + final long tx = getBigdataRDFContext().newTx(timestamp); + boolean ok = false; try { final BigdataRDFContext context = getBigdataRDFContext(); @@ -401,11 +403,21 @@ // Wait for the Future. ft.get(); + ok = true; + } catch (Throwable e) { throw BigdataRDFServlet.launderThrowable(e, resp, updateStr); - } + } finally { + + if (!ok) { + // @see #1026 (SPARQL UPDATE with runtime errors causes problems with lexicon indices) + getBigdataRDFContext().abortTx(tx); + + } + + } } This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
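The change above follows a simple fail-stop pattern: an atomic flag records whether the last rollback completed cleanly, and commits are refused while the flag is set. The following is a minimal sketch of that pattern in isolation; GuardedConnection, doRollback() and doCommit() are hypothetical stand-ins, not bigdata API:

    import java.util.concurrent.atomic.AtomicBoolean;

    public class GuardedConnection {

        // Set when a rollback fails part-way; cleared by a clean rollback.
        private final AtomicBoolean rollbackRequired = new AtomicBoolean(false);

        public void rollback() {
            boolean success = false;
            try {
                doRollback(); // may throw, leaving internal state suspect
                success = true;
            } finally {
                // Enter the critical section on failure, leave it on success.
                rollbackRequired.set(!success);
            }
        }

        public void commit() {
            if (rollbackRequired.get())
                throw new IllegalStateException("Rollback required");
            doCommit();
        }

        protected void doRollback() { /* discard the write set */ }

        protected void doCommit() { /* checkpoint and commit the write set */ }
    }

The QueryServlet change applies the same idea at the transaction level: a transaction is opened before the UPDATE runs and is aborted in a finally block unless the task completed normally.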
From: <tho...@us...> - 2014-10-28 13:12:54
|
Revision: 8691
http://sourceforge.net/p/bigdata/code/8691
Author: thompsonbry
Date: 2014-10-28 13:12:49 +0000 (Tue, 28 Oct 2014)
Log Message:
-----------
Rolling back change for the 1.3.3 release. See #1028 (xsd:boolean materialization issue)

Modified Paths:
--------------
branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/bop/rdf/join/ChunkedMaterializationOp.java

Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/bop/rdf/join/ChunkedMaterializationOp.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/bop/rdf/join/ChunkedMaterializationOp.java	2014-10-26 05:02:24 UTC (rev 8690)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/bop/rdf/join/ChunkedMaterializationOp.java	2014-10-28 13:12:49 UTC (rev 8691)
@@ -502,26 +502,21 @@
             }
 
             final BigdataValue value = terms.get(iv);
 
+            // FIXME Temporarily rolled back for the 1.3.3 release. See #1028 (xsd:boolean materialization issue)
+            if (value == null && iv.needsMaterialization()) {
 
-            if (value == null) {
-
-                if (iv.needsMaterialization()) {
+                throw new RuntimeException("Could not resolve: iv=" + iv);
 
-                    throw new RuntimeException("Could not resolve: iv=" + iv);
-
-                } // else do nothing, e.g. for a literal - DO NOT remove the binding, trac1028.
+            }
 
-            } else {
+            /*
+             * Replace the binding.
+             *
+             * FIXME This probably needs to strip out the
+             * BigdataSail#NULL_GRAPH since that should not become bound.
+             */
+            ((IV) iv).setValue(value);
 
-                /*
-                 * Replace the binding.
-                 *
-                 * FIXME This probably needs to strip out the
-                 * BigdataSail#NULL_GRAPH since that should not become bound.
-                 */
-                ((IV) iv).setValue(value);
-            }
-
         }
 
     } else {
From: <jer...@us...> - 2014-10-26 05:02:27
|
Revision: 8690
http://sourceforge.net/p/bigdata/code/8690
Author: jeremy_carroll
Date: 2014-10-26 05:02:24 +0000 (Sun, 26 Oct 2014)
Log Message:
-----------
Fix for trac1028: do not (accidentally) remove an iv from a literal.

Modified Paths:
--------------
branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/bop/rdf/join/ChunkedMaterializationOp.java

Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/bop/rdf/join/ChunkedMaterializationOp.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/bop/rdf/join/ChunkedMaterializationOp.java	2014-10-17 20:40:08 UTC (rev 8689)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/bop/rdf/join/ChunkedMaterializationOp.java	2014-10-26 05:02:24 UTC (rev 8690)
@@ -503,20 +503,25 @@
 
             final BigdataValue value = terms.get(iv);
 
-            if (value == null && iv.needsMaterialization()) {
+            if (value == null ) {
+
+                if ( iv.needsMaterialization() ) {
 
-                throw new RuntimeException("Could not resolve: iv=" + iv);
+                    throw new RuntimeException("Could not resolve: iv=" + iv);
+
+                } // else do nothing, e.g. for a literal - DO NOT remove the binding, trac1028.
 
+            } else {
+
+                /*
+                 * Replace the binding.
+                 *
+                 * FIXME This probably needs to strip out the
+                 * BigdataSail#NULL_GRAPH since that should not become bound.
+                 */
+                ((IV) iv).setValue(value);
+            }
 
-            /*
-             * Replace the binding.
-             *
-             * FIXME This probably needs to strip out the
-             * BigdataSail#NULL_GRAPH since that should not become bound.
-             */
-            ((IV) iv).setValue(value);
-
         }
 
     } else {
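The distinction this fix turns on is that some IVs (inline numerics, booleans, dates) carry their value directly and are never entered into the lexicon, so a null lookup is legitimate for them. Restating the corrected control flow with the two cases spelled out (same names as in the diff above; the comments are editorial):

    final BigdataValue value = terms.get(iv); // null for inline IVs

    if (value == null) {
        if (iv.needsMaterialization()) {
            // A term that must be materialized but was not found in the
            // lexicon is a genuine error.
            throw new RuntimeException("Could not resolve: iv=" + iv);
        }
        // Inline literal: the IV is self-contained, so the existing
        // binding must be left alone (removing it was the trac1028 bug).
    } else {
        // Lexicon term: cache the materialized RDF Value on the IV.
        ((IV) iv).setValue(value);
    }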
From: <mrp...@us...> - 2014-10-17 20:40:21
|
Revision: 8689
http://sourceforge.net/p/bigdata/code/8689
Author: mrpersonick
Date: 2014-10-17 20:40:08 +0000 (Fri, 17 Oct 2014)
Log Message:
-----------
Ticket 1024: GregorianCalendar does weird things before 1582

Modified Paths:
--------------
branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/internal/impl/extensions/DateTimeExtension.java
branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/internal/impl/literal/LiteralExtensionIV.java
branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/model/BigdataValueImpl.java

Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/internal/impl/extensions/DateTimeExtension.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/internal/impl/extensions/DateTimeExtension.java	2014-10-15 14:34:38 UTC (rev 8688)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/internal/impl/extensions/DateTimeExtension.java	2014-10-17 20:40:08 UTC (rev 8689)
@@ -24,6 +24,7 @@
 
 package com.bigdata.rdf.internal.impl.extensions;
 
+import java.util.Date;
 import java.util.GregorianCalendar;
 import java.util.LinkedHashMap;
 import java.util.LinkedHashSet;
@@ -140,18 +141,24 @@
 
         final XMLGregorianCalendar c = XMLDatatypeUtil.parseCalendar(s);
 
         if (c.getTimezone() == DatatypeConstants.FIELD_UNDEFINED) {
 
+            final GregorianCalendar gc = c.toGregorianCalendar();
+            gc.setGregorianChange(new Date(Long.MIN_VALUE));
+
             final int offsetInMillis = // defaultTZ.getRawOffset();
-                defaultTZ.getOffset(c.toGregorianCalendar().getTimeInMillis());
+                defaultTZ.getOffset(gc.getTimeInMillis());
 
             final int offsetInMinutes = offsetInMillis / 1000 / 60;
 
             c.setTimezone(offsetInMinutes);
 
         }
 
+        final GregorianCalendar gc = c.toGregorianCalendar();
+        gc.setGregorianChange(new Date(Long.MIN_VALUE));
+
         /*
         * Returns the current time as UTC milliseconds from the epoch
         */
-        final long l = c.toGregorianCalendar().getTimeInMillis();
+        final long l = gc.getTimeInMillis();
 
         final AbstractLiteralIV delegate = new XSDNumericIV(l);
@@ -178,6 +185,7 @@
 
         final TimeZone tz = BSBMHACK ? TimeZone.getDefault()/*getTimeZone("GMT")*/ : defaultTZ;
 
         final GregorianCalendar c = new GregorianCalendar(tz);
+        c.setGregorianChange(new Date(Long.MIN_VALUE));
 
         c.setTimeInMillis(l);
 
         try {

Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/internal/impl/literal/LiteralExtensionIV.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/internal/impl/literal/LiteralExtensionIV.java	2014-10-15 14:34:38 UTC (rev 8688)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/internal/impl/literal/LiteralExtensionIV.java	2014-10-17 20:40:08 UTC (rev 8689)
@@ -246,5 +246,11 @@
     public short shortValue() {
         return getValue().shortValue();
     }
+
+    @Override
+    public String toString() {
+        return "LiteralExtensionIV [delegate=" + delegate + ", datatype="
+                + datatype + "]";
+    }
 
 }

Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/model/BigdataValueImpl.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/model/BigdataValueImpl.java	2014-10-15 14:34:38 UTC (rev 8688)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/model/BigdataValueImpl.java	2014-10-17 20:40:08 UTC (rev 8689)
@@ -133,7 +133,7 @@
 
         if (this.iv != null && !IVUtility.equals(this.iv, iv)) {
 
             throw new IllegalStateException("Already assigned: old="
-                    + this.iv + ", new=" + iv);
+                    + this.iv + ", new=" + iv + ", this: " + this);
 
         }
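For background on the fix: java.util.GregorianCalendar is by default a hybrid calendar that applies Julian rules to dates before the October 1582 cutover, so millisecond conversions for earlier dates drift by several days relative to the proleptic Gregorian calendar that XML Schema date/time arithmetic assumes. Calling setGregorianChange(new Date(Long.MIN_VALUE)) pushes the cutover back to the beginning of time, making the calendar purely Gregorian. A small standalone demonstration (not bigdata code):

    import java.util.Date;
    import java.util.GregorianCalendar;
    import java.util.TimeZone;

    public class ProlepticGregorianDemo {

        public static void main(String[] args) {

            final TimeZone utc = TimeZone.getTimeZone("GMT");

            // Default behavior: Julian rules apply before October 1582.
            final GregorianCalendar hybrid = new GregorianCalendar(utc);
            hybrid.clear();
            hybrid.set(1500, 0, 1); // year, month (0-based), day

            // Proleptic Gregorian: push the cutover back indefinitely.
            final GregorianCalendar proleptic = new GregorianCalendar(utc);
            proleptic.setGregorianChange(new Date(Long.MIN_VALUE));
            proleptic.clear();
            proleptic.set(1500, 0, 1);

            // The two interpretations of 1500-01-01 differ by several
            // days' worth of milliseconds: the round-trip bug in #1024.
            System.out.println(hybrid.getTimeInMillis());
            System.out.println(proleptic.getTimeInMillis());
        }
    }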
From: <tho...@us...> - 2014-10-15 14:34:41
|
Revision: 8688
http://sourceforge.net/p/bigdata/code/8688
Author: thompsonbry
Date: 2014-10-15 14:34:38 +0000 (Wed, 15 Oct 2014)
Log Message:
-----------
Test of the ability to commit to SVN from NSight.

Modified Paths:
--------------
branches/BIGDATA_RELEASE_1_3_0/pom.xml

Modified: branches/BIGDATA_RELEASE_1_3_0/pom.xml
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/pom.xml	2014-10-15 13:27:49 UTC (rev 8687)
+++ branches/BIGDATA_RELEASE_1_3_0/pom.xml	2014-10-15 14:34:38 UTC (rev 8688)
@@ -15,7 +15,7 @@
 This program is distributed in the hope that it will be useful,
 but WITHOUT ANY WARRANTY; without even the implied warranty of
 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-GNU General Public License for more details.
+GNU General Public License for more details.
 
 You should have received a copy of the GNU General Public License
 along with this program; if not, write to the Free Software
From: <tho...@us...> - 2014-10-15 13:27:57
|
Revision: 8687
http://sourceforge.net/p/bigdata/code/8687
Author: thompsonbry
Date: 2014-10-15 13:27:49 +0000 (Wed, 15 Oct 2014)
Log Message:
-----------
Test of the ability to commit from the command line with svn 1.8 as installed by brew.

Modified Paths:
--------------
branches/BIGDATA_RELEASE_1_3_0/pom.xml

Modified: branches/BIGDATA_RELEASE_1_3_0/pom.xml
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/pom.xml	2014-10-14 16:54:48 UTC (rev 8686)
+++ branches/BIGDATA_RELEASE_1_3_0/pom.xml	2014-10-15 13:27:49 UTC (rev 8687)
@@ -15,7 +15,7 @@
 This program is distributed in the hope that it will be useful,
 but WITHOUT ANY WARRANTY; without even the implied warranty of
 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-GNU General Public License for more details.
+GNU General Public License for more details.
 
 You should have received a copy of the GNU General Public License
 along with this program; if not, write to the Free Software
@@ -509,4 +509,4 @@
 </dependency>
 -->
 </dependencies>
-</project>
\ No newline at end of file
+</project>
From: <tho...@us...> - 2014-10-14 16:54:50
|
Revision: 8686
http://sourceforge.net/p/bigdata/code/8686
Author: thompsonbry
Date: 2014-10-14 16:54:48 +0000 (Tue, 14 Oct 2014)
Log Message:
-----------
Removed unused imports to verify the ability to commit from a new machine.

Modified Paths:
--------------
branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSailBooleanQuery.java

Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSailBooleanQuery.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSailBooleanQuery.java	2014-10-06 16:59:15 UTC (rev 8685)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSailBooleanQuery.java	2014-10-14 16:54:48 UTC (rev 8686)
@@ -3,12 +3,10 @@
 import java.util.concurrent.TimeUnit;
 
 import org.openrdf.query.Dataset;
-import org.openrdf.query.GraphQueryResult;
 import org.openrdf.query.QueryEvaluationException;
 import org.openrdf.query.algebra.evaluation.QueryBindingSet;
 import org.openrdf.repository.sail.SailBooleanQuery;
 
-import com.bigdata.rdf.sail.BigdataSail.BigdataSailConnection;
 import com.bigdata.rdf.sparql.ast.ASTContainer;
 import com.bigdata.rdf.sparql.ast.BindingsClause;
 import com.bigdata.rdf.sparql.ast.DatasetNode;
From: <tho...@us...> - 2014-10-06 16:59:18
|
Revision: 8685
http://sourceforge.net/p/bigdata/code/8685
Author: thompsonbry
Date: 2014-10-06 16:59:15 +0000 (Mon, 06 Oct 2014)
Log Message:
-----------
Fix for #980 and #983 (xsd:boolean data race)

Modified Paths:
--------------
branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/model/BigdataValueFactoryImpl.java

Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/model/BigdataValueFactoryImpl.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/model/BigdataValueFactoryImpl.java	2014-10-04 21:01:57 UTC (rev 8684)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/model/BigdataValueFactoryImpl.java	2014-10-06 16:59:15 UTC (rev 8685)
@@ -43,7 +43,6 @@
 
 import com.bigdata.cache.WeakValueCache;
 import com.bigdata.rdf.internal.IV;
-import com.bigdata.rdf.internal.impl.literal.XSDBooleanIV;
 import com.bigdata.rdf.internal.impl.literal.XSDUnsignedByteIV;
 import com.bigdata.rdf.internal.impl.literal.XSDUnsignedIntIV;
 import com.bigdata.rdf.internal.impl.literal.XSDUnsignedLongIV;
@@ -85,14 +84,16 @@
 
         xsdMap = getXSDMap();
 
-        /**
-         * Cache the IV on the BigdataValue for these boolean constants.
-         *
-         * @see <a href="http://trac.bigdata.com/ticket/983"> Concurrent insert
-         *      data with boolean object causes IllegalArgumentException </a>
-         */
-        TRUE.setIV(XSDBooleanIV.TRUE);
-        FALSE.setIV(XSDBooleanIV.FALSE);
+        // @see <a href="http://trac.bigdata.com/ticket/983"> Concurrent insert data with boolean object causes IllegalArgumentException </a>
+        // @see <a href="http://trac.bigdata.com/ticket/980"> Object position of query hint is not a Literal </a>
+//        /**
+//         * Cache the IV on the BigdataValue for these boolean constants.
+//         *
+//         * @see <a href="http://trac.bigdata.com/ticket/983"> Concurrent insert
+//         *      data with boolean object causes IllegalArgumentException </a>
+//         */
+//        TRUE.setIV(XSDBooleanIV.TRUE);
+//        FALSE.setIV(XSDBooleanIV.FALSE);
 
     }
 
     /**
@@ -311,12 +312,12 @@
 
     private final BigdataURIImpl xsd_boolean = new BigdataURIImpl(this, xsd + "boolean");
 
-    private final BigdataLiteralImpl TRUE = new BigdataLiteralImpl(this, "true", null,
-            xsd_boolean);
+//    private final BigdataLiteralImpl TRUE = new BigdataLiteralImpl(this, "true", null,
+//            xsd_boolean);
+//
+//    private final BigdataLiteralImpl FALSE = new BigdataLiteralImpl(this, "false", null,
+//            xsd_boolean);
 
-    private final BigdataLiteralImpl FALSE = new BigdataLiteralImpl(this, "false", null,
-            xsd_boolean);
-
     /**
      * Map for fast resolution of XSD URIs. The keys are the string values of
     * the URIs. The values are the URIs.
@@ -345,10 +346,21 @@
 
     }
 
+    /**
+     * {@inheritDoc}
+     *
+     * @see <a href="http://trac.bigdata.com/ticket/983"> Concurrent insert data
+     *      with boolean object causes IllegalArgumentException </a>
+     * @see <a href="http://trac.bigdata.com/ticket/980"> Object position of
+     *      query hint is not a Literal </a>
+     */
     @Override
-    public BigdataLiteralImpl createLiteral(boolean arg0) {
+    public BigdataLiteralImpl createLiteral(final boolean arg0) {
 
-        return (arg0 ? TRUE : FALSE);
+        return (arg0 //
+                ? new BigdataLiteralImpl(this, "true", null, xsd_boolean)
+                : new BigdataLiteralImpl(this, "false", null, xsd_boolean)
+        );
 
     }
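The race being fixed: the factory previously handed every caller one of two shared, mutable BigdataLiteralImpl instances for true/false, and the IV slot on a BigdataValueImpl is write-once, so two namespaces assigning different IVs to the same shared literal collide with the "Already assigned" IllegalStateException seen in r8689 above. A stripped-down sketch of the failure mode, using stand-in classes rather than the real bigdata types:

    import java.util.concurrent.atomic.AtomicReference;

    public class SharedLiteralRaceDemo {

        // Stand-in for BigdataValueImpl's write-once IV slot.
        static final class MutableLiteral {
            private final AtomicReference<Object> iv = new AtomicReference<Object>();

            void setIV(final Object newIV) {
                if (!iv.compareAndSet(null, newIV) && !iv.get().equals(newIV)) {
                    // Mirrors the "Already assigned" IllegalStateException.
                    throw new IllegalStateException("Already assigned: old="
                            + iv.get() + ", new=" + newIV);
                }
            }
        }

        public static void main(String[] args) {
            // Pre-fix behavior: one cached instance shared by all callers.
            final MutableLiteral TRUE = new MutableLiteral();

            // Two lexicons (e.g. different namespaces) assign different IVs
            // to the same shared object; the second assignment throws.
            TRUE.setIV("iv-from-namespace-A");
            TRUE.setIV("iv-from-namespace-B"); // IllegalStateException
        }
    }

Allocating a fresh literal per createLiteral(boolean) call, as in the diff above, removes the shared mutable state at the cost of a small allocation.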