This list is closed; nobody may subscribe to it.
From: Bryan T. <br...@sy...> - 2014-05-30 16:41:41

There is an ExportKB utility class that could be refactored to expose a pretty efficient dump of triples or quads. I do not believe that there is a REST API method for this. Of course, there is the unimplemented graph protocol. A CONSTRUCT will do this, and you could conneg for your choice of supported quads writers. The N-Quads writer should be supported when we roll forward to openrdf 2.7.

Bryan

> On May 30, 2014, at 11:28 AM, Jeremy J Carroll <jj...@sy...> wrote:
>
> I am working on some simple management operations for our (Syapse) system.
> One of them is to get an easy to understand (non-proprietary) dump of the kb content.
> I understand this as, e.g., getting a TriG, TriX, or N-Quads dump of the whole store.
> I could also run a query like:
>
> SELECT *
> { GRAPH ?g { ?s ?p ?o } }
>
> Looking at the documentation at http://wiki.bigdata.com/wiki/index.php/NanoSparqlServer:
>
> application/x-trig .trig UTF-8 TRIG http://www.wiwiss.fu-berlin.de/suhl/bizer/TriG/Spec
> text/x-nquads .nq US-ASCII NQUADS http://sw.deri.org/2008/07/n-quads/ "While the REST API can accept NQuads data, it can not generate it yet."
>
> The note on N-Quads leaves me with some hope that I can dump TriG or TriX - but I can't see how!
>
> (I also note that that wiki page needs updating with /bigdata/ throughout - unless someone shouts I will probably have time to do that today)
>
> Jeremy
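The CONSTRUCT-plus-conneg approach described above can be sketched roughly as follows. This is only a sketch: the endpoint URL is an assumption (adjust it for your NanoSparqlServer installation), and whether the server will actually serialize a quads format (TriG, N-Quads) for this query form depends on the openrdf version deployed. Note that a plain CONSTRUCT flattens quads to triples.

```python
import urllib.parse
import urllib.request

# Hypothetical endpoint; adjust for your NanoSparqlServer installation.
ENDPOINT = "http://localhost:9999/bigdata/sparql"

def build_dump_request(endpoint=ENDPOINT, accept="application/x-trig"):
    """Build a GET request asking the SPARQL endpoint to serialize the
    whole store, negotiating the output format via the Accept header."""
    # CONSTRUCT projects triples; the graph context ?g is matched but not
    # preserved unless the server's writer supports a quads serialization.
    query = "CONSTRUCT { ?s ?p ?o } WHERE { GRAPH ?g { ?s ?p ?o } }"
    url = endpoint + "?" + urllib.parse.urlencode({"query": query})
    return urllib.request.Request(url, headers={"Accept": accept})

# Usage (requires a running server):
# with urllib.request.urlopen(build_dump_request()) as resp:
#     open("dump.trig", "wb").write(resp.read())
```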
From: Jeremy J C. <jj...@sy...> - 2014-05-30 15:33:49

I am working on some simple management operations for our (Syapse) system. One of them is to get an easy to understand (non-proprietary) dump of the kb content. I understand this as, e.g., getting a TriG, TriX, or N-Quads dump of the whole store. I could also run a query like:

SELECT *
{ GRAPH ?g { ?s ?p ?o } }

Looking at the documentation at http://wiki.bigdata.com/wiki/index.php/NanoSparqlServer:

application/x-trig .trig UTF-8 TRIG http://www.wiwiss.fu-berlin.de/suhl/bizer/TriG/Spec
text/x-nquads .nq US-ASCII NQUADS http://sw.deri.org/2008/07/n-quads/ "While the REST API can accept NQuads data, it can not generate it yet."

The note on N-Quads leaves me with some hope that I can dump TriG or TriX - but I can't see how!

(I also note that that wiki page needs updating with /bigdata/ throughout - unless someone shouts I will probably have time to do that today)

Jeremy
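The SELECT-based fallback above can be turned into an N-Quads dump by hand: run the query with `Accept: application/sparql-results+json` and render each binding row as one quad line. A minimal sketch (the term-rendering helper and function names here are illustrative, not part of any bigdata API; literal escaping is simplified):

```python
def term_to_nq(t):
    """Render one SPARQL-JSON result term in N-Quads syntax."""
    if t["type"] == "uri":
        return "<%s>" % t["value"]
    if t["type"] == "bnode":
        return "_:%s" % t["value"]
    # literal (plain, language-tagged, or typed); minimal escaping only
    out = '"%s"' % t["value"].replace("\\", "\\\\").replace('"', '\\"')
    if t.get("xml:lang"):
        out += "@" + t["xml:lang"]
    elif t.get("datatype"):
        out += "^^<%s>" % t["datatype"]
    return out

def bindings_to_nquads(bindings):
    """Convert SELECT * { GRAPH ?g { ?s ?p ?o } } JSON result bindings
    into N-Quads lines (one statement per binding row)."""
    return [
        "%s %s %s %s ." % (term_to_nq(b["s"]), term_to_nq(b["p"]),
                           term_to_nq(b["o"]), term_to_nq(b["g"]))
        for b in bindings
    ]
```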
From: Bryan T. <br...@sy...> - 2014-05-26 12:02:49

Can I suggest that this list should be alphabetical? I think that it is currently in an effectively historical ordering.

Bryan
From: Bryan T. <br...@sy...> - 2014-05-20 15:22:40

The 1.3.2 release will be next month and will focus on improvements in the new workbench, improvements for HA, and new deployment models.

Thanks,
Bryan
From: Bryan T. <br...@sy...> - 2014-05-20 14:25:21

This is a major release of bigdata(R).

Bigdata is a horizontally-scaled, open-source architecture for indexed data with an emphasis on RDF, capable of loading 1B triples in under one hour on a 15-node cluster. Bigdata operates in a single machine mode (Journal), a highly available replication cluster mode (HAJournalServer), and a horizontally sharded cluster mode (BigdataFederation). The Journal provides fast, scalable ACID indexed storage for very large data sets, up to 50 billion triples/quads. The HAJournalServer adds replication, online backup, horizontal scaling of query, and high availability. The federation provides fast, scalable, shard-wise parallel indexed storage using dynamic sharding and shard-wise ACID updates with incremental cluster size growth. Both platforms support fully concurrent readers with snapshot isolation.

Distributed processing offers greater throughput but does not reduce query or update latency. Choose the Journal when the anticipated scale and throughput requirements permit. Choose the HAJournalServer for high availability and linear scaling in query throughput. Choose the BigdataFederation when the administrative and machine overhead associated with operating a cluster is an acceptable tradeoff for essentially unlimited data scaling and throughput.

See [1,2,8] for instructions on installing bigdata(R), [4] for the javadoc, and [3,5,6] for news, questions, and the latest developments. For more information about SYSTAP, LLC and bigdata, see [7].

Starting with the 1.0.0 release, we offer a WAR artifact [8] for easy installation of the single machine RDF database. For custom development and cluster installations, we recommend checking out the code from SVN using the tag for this release. The code will build automatically under Eclipse. You can also build the code using the ant script. The cluster installer requires the use of the ant script.

Starting with the 1.3.0 release, we offer a tarball artifact [10] for easy installation of the HA replication cluster.

You can download the WAR (standalone) or HA artifacts from: http://sourceforge.net/projects/bigdata/

You can checkout this release from: https://svn.code.sf.net/p/bigdata/code/branches/BIGDATA_RELEASE_1_3_1

New features:
- Java 7 is now required.
- High availability [10].
- High availability load balancer.
- New RDF/SPARQL workbench.
- Blueprints API.
- RDF Graph Mining Service (GASService) [12].
- Reification Done Right (RDR) support [11].
- Property Path performance enhancements.
- Plus numerous other bug fixes and performance enhancements.

Feature summary:
- Highly Available Replication Clusters (HAJournalServer [10]);
- Single machine data storage to ~50B triples/quads (RWStore);
- Clustered data storage is essentially unlimited (BigdataFederation);
- Simple embedded and/or webapp deployment (NanoSparqlServer);
- Triples, quads, or triples with provenance (SIDs);
- Fast RDFS+ inference and truth maintenance;
- Fast 100% native SPARQL 1.1 evaluation;
- Integrated "analytic" query package;
- 100% Java memory manager leverages the JVM native heap (no GC).

Road map [3]:
- Column-wise indexing;
- Runtime Query Optimizer for Analytic Query mode;
- Performance optimization for scale-out clusters; and
- Simplified deployment, configuration, and administration for scale-out clusters.

Change log:

Note: Versions with (*) MAY require data migration. For details, see [9].

1.3.1:
- http://trac.bigdata.com/ticket/242 (Deadlines do not play well with GROUP_BY, ORDER_BY, etc.)
- http://trac.bigdata.com/ticket/256 (Amortize RTO cost)
- http://trac.bigdata.com/ticket/257 (Support BOP fragments in the RTO.)
- http://trac.bigdata.com/ticket/258 (Integrate RTO into SAIL)
- http://trac.bigdata.com/ticket/259 (Dynamically increase RTO sampling limit.)
- http://trac.bigdata.com/ticket/526 (Reification done right)
- http://trac.bigdata.com/ticket/580 (Problem with the bigdata RDF/XML parser with sids)
- http://trac.bigdata.com/ticket/622 (NSS using jetty+windows can lose connections (windows only; jdk 6/7 bug))
- http://trac.bigdata.com/ticket/624 (HA Load Balancer)
- http://trac.bigdata.com/ticket/629 (Graph processing API)
- http://trac.bigdata.com/ticket/721 (Support HA1 configurations)
- http://trac.bigdata.com/ticket/730 (Allow configuration of embedded NSS jetty server using jetty-web.xml)
- http://trac.bigdata.com/ticket/759 (multiple filters interfere)
- http://trac.bigdata.com/ticket/763 (Stochastic results with Analytic Query Mode)
- http://trac.bigdata.com/ticket/774 (Converge on Java 7.)
- http://trac.bigdata.com/ticket/779 (Resynchronization of socket level write replication protocol (HA))
- http://trac.bigdata.com/ticket/780 (Incremental or asynchronous purge of HALog files)
- http://trac.bigdata.com/ticket/782 (Wrong serialization version)
- http://trac.bigdata.com/ticket/784 (Describe Limit/offset don't work as expected)
- http://trac.bigdata.com/ticket/787 (Update documentations and samples, they are OUTDATED)
- http://trac.bigdata.com/ticket/788 (Name2Addr does not report all root causes if the commit fails.)
- http://trac.bigdata.com/ticket/789 (ant task to build sesame fails, docs for setting up bigdata for sesame are ancient)
- http://trac.bigdata.com/ticket/790 (should not be pruning any children)
- http://trac.bigdata.com/ticket/791 (Clean up query hints)
- http://trac.bigdata.com/ticket/793 (Explain reports incorrect value for opCount)
- http://trac.bigdata.com/ticket/796 (Filter assigned to sub-query by query generator is dropped from evaluation)
- http://trac.bigdata.com/ticket/797 (add sbt setup to getting started wiki)
- http://trac.bigdata.com/ticket/798 (Solution order not always preserved)
- http://trac.bigdata.com/ticket/799 (mis-optimation of quad pattern vs triple pattern)
- http://trac.bigdata.com/ticket/802 (Optimize DatatypeFactory instantiation in DateTimeExtension)
- http://trac.bigdata.com/ticket/803 (prefixMatch does not work in full text search)
- http://trac.bigdata.com/ticket/804 (update bug deleting quads)
- http://trac.bigdata.com/ticket/806 (Incorrect AST generated for OPTIONAL { SELECT })
- http://trac.bigdata.com/ticket/808 (Wildcard search in bigdata for type suggessions)
- http://trac.bigdata.com/ticket/810 (Expose GAS API as SPARQL SERVICE)
- http://trac.bigdata.com/ticket/815 (RDR query does too much work)
- http://trac.bigdata.com/ticket/816 (Wildcard projection ignores variables inside a SERVICE call.)
- http://trac.bigdata.com/ticket/817 (Unexplained increase in journal size)
- http://trac.bigdata.com/ticket/821 (Reject large files, rather then storing them in a hidden variable)
- http://trac.bigdata.com/ticket/831 (UNION with filter issue)
- http://trac.bigdata.com/ticket/841 (Using "VALUES" in a query returns lexical error)
- http://trac.bigdata.com/ticket/848 (Fix SPARQL Results JSON writer to write the RDR syntax)
- http://trac.bigdata.com/ticket/849 (Create writers that support the RDR syntax)
- http://trac.bigdata.com/ticket/851 (RDR GAS interface)
- http://trac.bigdata.com/ticket/852 (RemoteRepository.cancel() does not consume the HTTP response entity.)
- http://trac.bigdata.com/ticket/853 (Follower does not accept POST of idempotent operations (HA))
- http://trac.bigdata.com/ticket/854 (Allow override of maximum length before converting an HTTP GET to an HTTP POST)
- http://trac.bigdata.com/ticket/855 (AssertionError: Child does not have persistent identity)
- http://trac.bigdata.com/ticket/862 (Create parser for JSON SPARQL Results)
- http://trac.bigdata.com/ticket/863 (HA1 commit failure)
- http://trac.bigdata.com/ticket/866 (Batch remove API for the SAIL)
- http://trac.bigdata.com/ticket/867 (NSS concurrency problem with list namespaces and create namespace)
- http://trac.bigdata.com/ticket/869 (HA5 test suite)
- http://trac.bigdata.com/ticket/872 (Full text index range count optimization)
- http://trac.bigdata.com/ticket/874 (FILTER not applied when there is UNION in the same join group)
- http://trac.bigdata.com/ticket/876 (When I upload a file I want to see the filename.)
- http://trac.bigdata.com/ticket/877 (RDF Format selector is invisible)
- http://trac.bigdata.com/ticket/883 (CANCEL Query fails on non-default kb namespace on HA follower.)
- http://trac.bigdata.com/ticket/886 (Provide workaround for bad reverse DNS setups.)
- http://trac.bigdata.com/ticket/887 (BIND is leaving a variable unbound)
- http://trac.bigdata.com/ticket/892 (HAJournalServer does not die if zookeeper is not running)
- http://trac.bigdata.com/ticket/893 (large sparql insert optimization slow?)
- http://trac.bigdata.com/ticket/894 (unnecessary synchronization)
- http://trac.bigdata.com/ticket/895 (stack overflow in populateStatsMap)
- http://trac.bigdata.com/ticket/902 (Update Basic Bigdata Chef Cookbook)
- http://trac.bigdata.com/ticket/904 (AssertionError: PropertyPathNode got to ASTJoinOrderByType.optimizeJoinGroup)
- http://trac.bigdata.com/ticket/905 (unsound combo query optimization: union + filter)
- http://trac.bigdata.com/ticket/906 (DC Prefix Button Appends "</li>")
- http://trac.bigdata.com/ticket/907 (Add a quick-start ant task for the BD Server "ant start")
- http://trac.bigdata.com/ticket/912 (Provide a configurable IAnalyzerFactory)
- http://trac.bigdata.com/ticket/913 (Blueprints API Implementation)
- http://trac.bigdata.com/ticket/914 (Settable timeout on SPARQL Query (REST API))
- http://trac.bigdata.com/ticket/915 (DefaultAnalyzerFactory issues)
- http://trac.bigdata.com/ticket/920 (Content negotiation orders accept header scores in reverse)
- http://trac.bigdata.com/ticket/939 (NSS does not start from command line: bigdata-war/src not found.)
- http://trac.bigdata.com/ticket/940 (ProxyServlet in web.xml breaks tomcat WAR (HA LBS))

1.3.0:
- http://trac.bigdata.com/ticket/530 (Journal HA)
- http://trac.bigdata.com/ticket/621 (Coalesce write cache records and install reads in cache)
- http://trac.bigdata.com/ticket/623 (HA TXS)
- http://trac.bigdata.com/ticket/639 (Remove triple-buffering in RWStore)
- http://trac.bigdata.com/ticket/645 (HA backup)
- http://trac.bigdata.com/ticket/646 (River not compatible with newer 1.6.0 and 1.7.0 JVMs)
- http://trac.bigdata.com/ticket/648 (Add a custom function to use full text index for filtering.)
- http://trac.bigdata.com/ticket/651 (RWS test failure)
- http://trac.bigdata.com/ticket/652 (Compress write cache blocks for replication and in HALogs)
- http://trac.bigdata.com/ticket/662 (Latency on followers during commit on leader)
- http://trac.bigdata.com/ticket/663 (Issue with OPTIONAL blocks)
- http://trac.bigdata.com/ticket/664 (RWStore needs post-commit protocol)
- http://trac.bigdata.com/ticket/665 (HA3 LOAD non-responsive with node failure)
- http://trac.bigdata.com/ticket/666 (Occasional CI deadlock in HALogWriter testConcurrentRWWriterReader)
- http://trac.bigdata.com/ticket/670 (Accumulating HALog files cause latency for HA commit)
- http://trac.bigdata.com/ticket/671 (Query on follower fails during UPDATE on leader)
- http://trac.bigdata.com/ticket/673 (DGC in release time consensus protocol causes native thread leak in HAJournalServer at each commit)
- http://trac.bigdata.com/ticket/674 (WCS write cache compaction causes errors in RWS postHACommit())
- http://trac.bigdata.com/ticket/676 (Bad patterns for timeout computations)
- http://trac.bigdata.com/ticket/677 (HA deadlock under UPDATE + QUERY)
- http://trac.bigdata.com/ticket/678 (DGC Thread and Open File Leaks: sendHALogForWriteSet())
- http://trac.bigdata.com/ticket/679 (HAJournalServer can not restart due to logically empty log file)
- http://trac.bigdata.com/ticket/681 (HAJournalServer deadlock: pipelineRemove() and getLeaderId())
- http://trac.bigdata.com/ticket/684 (Optimization with skos altLabel)
- http://trac.bigdata.com/ticket/686 (Consensus protocol does not detect clock skew correctly)
- http://trac.bigdata.com/ticket/687 (HAJournalServer Cache not populated)
- http://trac.bigdata.com/ticket/689 (Missing URL encoding in RemoteRepositoryManager)
- http://trac.bigdata.com/ticket/690 (Error when using the alias "a" instead of rdf:type for a multipart insert)
- http://trac.bigdata.com/ticket/691 (Failed to re-interrupt thread in HAJournalServer)
- http://trac.bigdata.com/ticket/692 (Failed to re-interrupt thread)
- http://trac.bigdata.com/ticket/693 (OneOrMorePath SPARQL property path expression ignored)
- http://trac.bigdata.com/ticket/694 (Transparently cancel update/query in RemoteRepository)
- http://trac.bigdata.com/ticket/695 (HAJournalServer reports "follower" but is in SeekConsensus and is not participating in commits.)
- http://trac.bigdata.com/ticket/701 (Problems in BackgroundTupleResult)
- http://trac.bigdata.com/ticket/702 (InvocationTargetException on /namespace call)
- http://trac.bigdata.com/ticket/704 (ask does not return json)
- http://trac.bigdata.com/ticket/705 (Race between QueryEngine.putIfAbsent() and shutdownNow())
- http://trac.bigdata.com/ticket/706 (MultiSourceSequentialCloseableIterator.nextSource() can throw NPE)
- http://trac.bigdata.com/ticket/707 (BlockingBuffer.close() does not unblock threads)
- http://trac.bigdata.com/ticket/708 (BIND heisenbug - race condition on select query with BIND)
- http://trac.bigdata.com/ticket/711 (sparql protocol: mime type application/sparql-query)
- http://trac.bigdata.com/ticket/712 (SELECT ?x { OPTIONAL { ?x eg:doesNotExist eg:doesNotExist } } incorrect)
- http://trac.bigdata.com/ticket/715 (Interrupt of thread submitting a query for evaluation does not always terminate the AbstractRunningQuery)
- http://trac.bigdata.com/ticket/716 (Verify that IRunningQuery instances (and nested queries) are correctly cancelled when interrupted)
- http://trac.bigdata.com/ticket/718 (HAJournalServer needs to handle ZK client connection loss)
- http://trac.bigdata.com/ticket/720 (HA3 simultaneous service start failure)
- http://trac.bigdata.com/ticket/723 (HA asynchronous tasks must be canceled when invariants are changed)
- http://trac.bigdata.com/ticket/725 (FILTER EXISTS in subselect)
- http://trac.bigdata.com/ticket/726 (Logically empty HALog for committed transaction)
- http://trac.bigdata.com/ticket/727 (DELETE/INSERT fails with OPTIONAL non-matching WHERE)
- http://trac.bigdata.com/ticket/728 (Refactor to create HAClient)
- http://trac.bigdata.com/ticket/729 (ant bundleJar not working)
- http://trac.bigdata.com/ticket/731 (CBD and Update leads to 500 status code)
- http://trac.bigdata.com/ticket/732 (describe statement limit does not work)
- http://trac.bigdata.com/ticket/733 (Range optimizer not optimizing Slice service)
- http://trac.bigdata.com/ticket/734 (two property paths interfere)
- http://trac.bigdata.com/ticket/736 (MIN() malfunction)
- http://trac.bigdata.com/ticket/737 (class cast exception)
- http://trac.bigdata.com/ticket/739 (Inconsistent treatment of bind and optional property path)
- http://trac.bigdata.com/ticket/741 (ctc-striterators should build as independent top-level project (Apache2))
- http://trac.bigdata.com/ticket/743 (AbstractTripleStore.destroy() does not filter for correct prefix)
- http://trac.bigdata.com/ticket/746 (Assertion error)
- http://trac.bigdata.com/ticket/747 (BOUND bug)
- http://trac.bigdata.com/ticket/748 (incorrect join with subselect renaming vars)
- http://trac.bigdata.com/ticket/754 (Failure to setup SERVICE hook and changeLog for Unisolated and Read/Write connections)
- http://trac.bigdata.com/ticket/755 (Concurrent QuorumActors can interfere leading to failure to progress)
- http://trac.bigdata.com/ticket/756 (order by and group_concat)
- http://trac.bigdata.com/ticket/760 (Code review on 2-phase commit protocol)
- http://trac.bigdata.com/ticket/764 (RESYNC failure (HA))
- http://trac.bigdata.com/ticket/770 (alpp ordering)
- http://trac.bigdata.com/ticket/772 (Query timeout only checked at operator start/stop.)
- http://trac.bigdata.com/ticket/776 (Closed as duplicate of #490)
- http://trac.bigdata.com/ticket/778 (HA Leader fail results in transient problem with allocations on other services)
- http://trac.bigdata.com/ticket/783 (Operator Alerts (HA))

1.2.4:
- http://trac.bigdata.com/ticket/777 (ConcurrentModificationException in ASTComplexOptionalOptimizer)

1.2.3:
- http://trac.bigdata.com/ticket/168 (Maven Build)
- http://trac.bigdata.com/ticket/196 (Journal leaks memory)
- http://trac.bigdata.com/ticket/235 (Occasional deadlock in CI runs in com.bigdata.io.writecache.TestAll)
- http://trac.bigdata.com/ticket/312 (CI (mock) quorums deadlock)
- http://trac.bigdata.com/ticket/405 (Optimize hash join for subgroups with no incoming bound vars.)
- http://trac.bigdata.com/ticket/412 (StaticAnalysis#getDefinitelyBound() ignores exogenous variables.)
- http://trac.bigdata.com/ticket/485 (RDFS Plus Profile)
- http://trac.bigdata.com/ticket/495 (SPARQL 1.1 Property Paths)
- http://trac.bigdata.com/ticket/519 (Negative parser tests)
- http://trac.bigdata.com/ticket/531 (SPARQL UPDATE for SOLUTION SETS)
- http://trac.bigdata.com/ticket/535 (Optimize JOIN VARS for Sub-Selects)
- http://trac.bigdata.com/ticket/555 (Support PSOutputStream/InputStream at IRawStore)
- http://trac.bigdata.com/ticket/559 (Use RDFFormat.NQUADS as the format identifier for the NQuads parser)
- http://trac.bigdata.com/ticket/570 (MemoryManager Journal does not implement all methods)
- http://trac.bigdata.com/ticket/575 (NSS Admin API)
- http://trac.bigdata.com/ticket/577 (DESCRIBE with OFFSET/LIMIT needs to use sub-select)
- http://trac.bigdata.com/ticket/578 (Concise Bounded Description (CBD))
- http://trac.bigdata.com/ticket/579 (CONSTRUCT should use distinct SPO filter)
- http://trac.bigdata.com/ticket/583 (VoID in ServiceDescription)
- http://trac.bigdata.com/ticket/586 (RWStore immedateFree() not removing Checkpoint addresses from the historical index cache.)
- http://trac.bigdata.com/ticket/590 (nxparser fails with uppercase language tag)
- http://trac.bigdata.com/ticket/592 (Optimize RWStore allocator sizes)
- http://trac.bigdata.com/ticket/593 (Ugrade to Sesame 2.6.10)
- http://trac.bigdata.com/ticket/594 (WAR was deployed using TRIPLES rather than QUADS by default)
- http://trac.bigdata.com/ticket/596 (Change web.xml parameter names to be consistent with Jini/River)
- http://trac.bigdata.com/ticket/597 (SPARQL UPDATE LISTENER)
- http://trac.bigdata.com/ticket/598 (B+Tree branching factor and HTree addressBits are confused in their NodeSerializer implementations)
- http://trac.bigdata.com/ticket/599 (BlobIV for blank node : NotMaterializedException)
- http://trac.bigdata.com/ticket/600 (BlobIV collision counter hits false limit.)
- http://trac.bigdata.com/ticket/601 (Log uncaught exceptions)
- http://trac.bigdata.com/ticket/602 (RWStore does not discard logged deletes on reset())
- http://trac.bigdata.com/ticket/607 (History service / index)
- http://trac.bigdata.com/ticket/608 (LOG BlockingBuffer not progressing at INFO or lower level)
- http://trac.bigdata.com/ticket/609 (bigdata-ganglia is required dependency for Journal)
- http://trac.bigdata.com/ticket/611 (The code that processes SPARQL Update has a typo)
- http://trac.bigdata.com/ticket/612 (Bigdata scale-up depends on zookeper)
- http://trac.bigdata.com/ticket/613 (SPARQL UPDATE response inlines large DELETE or INSERT triple graphs)
- http://trac.bigdata.com/ticket/614 (static join optimizer does not get ordering right when multiple tails share vars with ancestry)
- http://trac.bigdata.com/ticket/615 (AST2BOpUtility wraps UNION with an unnecessary hash join)
- http://trac.bigdata.com/ticket/616 (Row store read/update not isolated on Journal)
- http://trac.bigdata.com/ticket/617 (Concurrent KB create fails with "No axioms defined?")
- http://trac.bigdata.com/ticket/618 (DirectBufferPool.poolCapacity maximum of 2GB)
- http://trac.bigdata.com/ticket/619 (RemoteRepository class should use application/x-www-form-urlencoded for large POST requests)
- http://trac.bigdata.com/ticket/620 (UpdateServlet fails to parse MIMEType when doing conneg.)
- http://trac.bigdata.com/ticket/626 (Expose performance counters for read-only indices)
- http://trac.bigdata.com/ticket/627 (Environment variable override for NSS properties file)
- http://trac.bigdata.com/ticket/628 (Create a bigdata-client jar for the NSS REST API)
- http://trac.bigdata.com/ticket/631 (ClassCastException in SIDs mode query)
- http://trac.bigdata.com/ticket/632 (NotMaterializedException when a SERVICE call needs variables that are provided as query input bindings)
- http://trac.bigdata.com/ticket/633 (ClassCastException when binding non-uri values to a variable that occurs in predicate position)
- http://trac.bigdata.com/ticket/638 (Change DEFAULT_MIN_RELEASE_AGE to 1ms)
- http://trac.bigdata.com/ticket/640 (Conditionally rollback() BigdataSailConnection if dirty)
- http://trac.bigdata.com/ticket/642 (Property paths do not work inside of exists/not exists filters)
- http://trac.bigdata.com/ticket/643 (Add web.xml parameters to lock down public NSS end points)
- http://trac.bigdata.com/ticket/644 (Bigdata2Sesame2BindingSetIterator can fail to notice asynchronous close())
- http://trac.bigdata.com/ticket/650 (Can not POST RDF to a graph using REST API)
- http://trac.bigdata.com/ticket/654 (Rare AssertionError in WriteCache.clearAddrMap())
- http://trac.bigdata.com/ticket/655 (SPARQL REGEX operator does not perform case-folding correctly for Unicode data)
- http://trac.bigdata.com/ticket/656 (InFactory bug when IN args consist of a single literal)
- http://trac.bigdata.com/ticket/647 (SIDs mode creates unnecessary hash join for GRAPH group patterns)
- http://trac.bigdata.com/ticket/667 (Provide NanoSparqlServer initialization hook)
- http://trac.bigdata.com/ticket/669 (Doubly nested subqueries yield no results with LIMIT)
- http://trac.bigdata.com/ticket/675 (Flush indices in parallel during checkpoint to reduce IO latency)
- http://trac.bigdata.com/ticket/682 (AtomicRowFilter UnsupportedOperationException)

1.2.2:
- http://trac.bigdata.com/ticket/586 (RWStore immedateFree() not removing Checkpoint addresses from the historical index cache.)
- http://trac.bigdata.com/ticket/602 (RWStore does not discard logged deletes on reset())
- http://trac.bigdata.com/ticket/603 (Prepare critical maintenance release as branch of 1.2.1)

1.2.1:
- http://trac.bigdata.com/ticket/533 (Review materialization for inline IVs)
- http://trac.bigdata.com/ticket/539 (NotMaterializedException with REGEX and Vocab)
- http://trac.bigdata.com/ticket/540 (SPARQL UPDATE using NSS via index.html)
- http://trac.bigdata.com/ticket/541 (MemoryManaged backed Journal mode)
- http://trac.bigdata.com/ticket/546 (Index cache for Journal)
- http://trac.bigdata.com/ticket/549 (BTree can not be cast to Name2Addr (MemStore recycler))
- http://trac.bigdata.com/ticket/550 (NPE in Leaf.getKey() : root cause was user error)
- http://trac.bigdata.com/ticket/558 (SPARQL INSERT not working in same request after INSERT DATA)
- http://trac.bigdata.com/ticket/562 (Sub-select in INSERT cause NPE in UpdateExprBuilder)
- http://trac.bigdata.com/ticket/563 (DISTINCT ORDER BY)
- http://trac.bigdata.com/ticket/567 (Failure to set cached value on IV results in incorrect behavior for complex UPDATE operation)
- http://trac.bigdata.com/ticket/568 (DELETE WHERE fails with Java AssertionError)
- http://trac.bigdata.com/ticket/569 (LOAD-CREATE-LOAD using virgin journal fails with "Graph exists" exception)
- http://trac.bigdata.com/ticket/571 (DELETE/INSERT WHERE handling of blank nodes)
- http://trac.bigdata.com/ticket/573 (NullPointerException when attempting to INSERT DATA containing a blank node)

1.2.0: (*)
- http://trac.bigdata.com/ticket/92 (Monitoring webapp)
- http://trac.bigdata.com/ticket/267 (Support evaluation of 3rd party operators)
- http://trac.bigdata.com/ticket/337 (Compact and efficient movement of binding sets between nodes.)
- http://trac.bigdata.com/ticket/433 (Cluster leaks threads under read-only index operations: DGC thread leak)
- http://trac.bigdata.com/ticket/437 (Thread-local cache combined with unbounded thread pools causes effective memory leak: termCache memory leak & thread-local buffers)
- http://trac.bigdata.com/ticket/438 (KeyBeforePartitionException on cluster)
- http://trac.bigdata.com/ticket/439 (Class loader problem)
- http://trac.bigdata.com/ticket/441 (Ganglia integration)
- http://trac.bigdata.com/ticket/443 (Logger for RWStore transaction service and recycler)
- http://trac.bigdata.com/ticket/444 (SPARQL query can fail to notice when IRunningQuery.isDone() on cluster)
- http://trac.bigdata.com/ticket/445 (RWStore does not track tx release correctly)
- http://trac.bigdata.com/ticket/446 (HTTP Repostory broken with bigdata 1.1.0)
- http://trac.bigdata.com/ticket/448 (SPARQL 1.1 UPDATE)
- http://trac.bigdata.com/ticket/449 (SPARQL 1.1 Federation extension)
- http://trac.bigdata.com/ticket/451 (Serialization error in SIDs mode on cluster)
- http://trac.bigdata.com/ticket/454 (Global Row Store Read on Cluster uses Tx)
- http://trac.bigdata.com/ticket/456 (IExtension implementations do point lookups on lexicon)
- http://trac.bigdata.com/ticket/457 ("No such index" on cluster under concurrent query workload)
- http://trac.bigdata.com/ticket/458 (Java level deadlock in DS)
- http://trac.bigdata.com/ticket/460 (Uncaught interrupt resolving RDF terms)
- http://trac.bigdata.com/ticket/461 (KeyAfterPartitionException / KeyBeforePartitionException on cluster)
- http://trac.bigdata.com/ticket/463 (NoSuchVocabularyItem with LUBMVocabulary for DerivedNumericsExtension)
- http://trac.bigdata.com/ticket/464 (Query statistics do not update correctly on cluster)
- http://trac.bigdata.com/ticket/465 (Too many GRS reads on cluster)
- http://trac.bigdata.com/ticket/469 (Sail does not flush assertion buffers before query)
- http://trac.bigdata.com/ticket/472 (acceptTaskService pool size on cluster)
- http://trac.bigdata.com/ticket/475 (Optimize serialization for query messages on cluster)
- http://trac.bigdata.com/ticket/476 (Test suite for writeCheckpoint() and recycling for BTree/HTree)
- http://trac.bigdata.com/ticket/478 (Cluster does not map input solution(s) across shards)
- http://trac.bigdata.com/ticket/480 (Error releasing deferred frees using 1.0.6 against a 1.0.4 journal)
- http://trac.bigdata.com/ticket/481 (PhysicalAddressResolutionException against 1.0.6)
- http://trac.bigdata.com/ticket/482 (RWStore reset() should be thread-safe for concurrent readers)
- http://trac.bigdata.com/ticket/484 (Java API for NanoSparqlServer REST API)
- http://trac.bigdata.com/ticket/491 (AbstractTripleStore.destroy() does not clear the locator cache)
- http://trac.bigdata.com/ticket/492 (Empty chunk in ThickChunkMessage (cluster))
- http://trac.bigdata.com/ticket/493 (Virtual Graphs)
- http://trac.bigdata.com/ticket/496 (Sesame 2.6.3)
- http://trac.bigdata.com/ticket/497 (Implement STRBEFORE, STRAFTER, and REPLACE)
- http://trac.bigdata.com/ticket/498 (Bring bigdata RDF/XML parser up to openrdf 2.6.3.)
- http://trac.bigdata.com/ticket/500 (SPARQL 1.1 Service Description) - http://www.openrdf.org/issues/browse/SES-884 (Aggregation with an solution set as input should produce an empty solution as output) - http://www.openrdf.org/issues/browse/SES-862 (Incorrect error handling for SPARQL aggregation; fix in 2.6.1) - http://www.openrdf.org/issues/browse/SES-873 (Order the same Blank Nodes together in ORDER BY) - http://trac.bigdata.com/ticket/501 (SPARQL 1.1 BINDINGS are ignored) - http://trac.bigdata.com/ticket/503 (Bigdata2Sesame2BindingSetIterator throws QueryEvaluationException were it should throw NoSuchElementException) - http://trac.bigdata.com/ticket/504 (UNION with Empty Group Pattern) - http://trac.bigdata.com/ticket/505 (Exception when using SPARQL sort & statement identifiers) - http://trac.bigdata.com/ticket/506 (Load, closure and query performance in 1.1.x versus 1.0.x) - http://trac.bigdata.com/ticket/508 (LIMIT causes hash join utility to log errors) - http://trac.bigdata.com/ticket/513 (Expose the LexiconConfiguration to Function BOPs) - http://trac.bigdata.com/ticket/515 (Query with two "FILTER NOT EXISTS" expressions returns no results) - http://trac.bigdata.com/ticket/516 (REGEXBOp should cache the Pattern when it is a constant) - http://trac.bigdata.com/ticket/517 (Java 7 Compiler Compatibility) - http://trac.bigdata.com/ticket/518 (Review function bop subclass hierarchy, optimize datatype bop, etc.) 
- http://trac.bigdata.com/ticket/520 (CONSTRUCT WHERE shortcut)
- http://trac.bigdata.com/ticket/521 (Incremental materialization of Tuple and Graph query results)
- http://trac.bigdata.com/ticket/525 (Modify the IChangeLog interface to support multiple agents)
- http://trac.bigdata.com/ticket/527 (Expose timestamp of LexiconRelation to function bops)
- http://trac.bigdata.com/ticket/532 (ClassCastException during hash join (can not be cast to TermId))
- http://trac.bigdata.com/ticket/533 (Review materialization for inline IVs)
- http://trac.bigdata.com/ticket/534 (BSBM BI Q5 error using MERGE JOIN)

1.1.0 (*)

- http://trac.bigdata.com/ticket/23 (Lexicon joins)
- http://trac.bigdata.com/ticket/109 (Store large literals as "blobs")
- http://trac.bigdata.com/ticket/181 (Scale-out LUBM "how to" in wiki and build.xml are out of date.)
- http://trac.bigdata.com/ticket/203 (Implement a persistence-capable hash table to support analytic query)
- http://trac.bigdata.com/ticket/209 (AccessPath should visit binding sets rather than elements for high level query.)
- http://trac.bigdata.com/ticket/227 (SliceOp appears to be necessary when operator plan should suffice without)
- http://trac.bigdata.com/ticket/232 (Bottom-up evaluation semantics).
- http://trac.bigdata.com/ticket/246 (Derived xsd numeric data types must be inlined as extension types.)
- http://trac.bigdata.com/ticket/254 (Revisit pruning of intermediate variable bindings during query execution)
- http://trac.bigdata.com/ticket/261 (Lift conditions out of subqueries.)
- http://trac.bigdata.com/ticket/300 (Native ORDER BY)
- http://trac.bigdata.com/ticket/324 (Inline predeclared URIs and namespaces in 2-3 bytes)
- http://trac.bigdata.com/ticket/330 (NanoSparqlServer does not locate "html" resources when run from jar)
- http://trac.bigdata.com/ticket/334 (Support inlining of unicode data in the statement indices.)
- http://trac.bigdata.com/ticket/364 (Scalable default graph evaluation)
- http://trac.bigdata.com/ticket/368 (Prune variable bindings during query evaluation)
- http://trac.bigdata.com/ticket/370 (Direct translation of openrdf AST to bigdata AST)
- http://trac.bigdata.com/ticket/373 (Fix StrBOp and other IValueExpressions)
- http://trac.bigdata.com/ticket/377 (Optimize OPTIONALs with multiple statement patterns.)
- http://trac.bigdata.com/ticket/380 (Native SPARQL evaluation on cluster)
- http://trac.bigdata.com/ticket/387 (Cluster does not compute closure)
- http://trac.bigdata.com/ticket/395 (HTree hash join performance)
- http://trac.bigdata.com/ticket/401 (inline xsd:unsigned datatypes)
- http://trac.bigdata.com/ticket/408 (xsd:string cast fails for non-numeric data)
- http://trac.bigdata.com/ticket/421 (New query hints model.)
- http://trac.bigdata.com/ticket/431 (Use of read-only tx per query defeats cache on cluster)

1.0.3

- http://trac.bigdata.com/ticket/217 (BTreeCounters does not track bytes released)
- http://trac.bigdata.com/ticket/269 (Refactor performance counters using accessor interface)
- http://trac.bigdata.com/ticket/329 (B+Tree should delete bloom filter when it is disabled.)
- http://trac.bigdata.com/ticket/372 (RWStore does not prune the CommitRecordIndex)
- http://trac.bigdata.com/ticket/375 (Persistent memory leaks (RWStore/DISK))
- http://trac.bigdata.com/ticket/385 (FastRDFValueCoder2: ArrayIndexOutOfBoundsException)
- http://trac.bigdata.com/ticket/391 (Release age advanced on WORM mode journal)
- http://trac.bigdata.com/ticket/392 (Add a DELETE by access path method to the NanoSparqlServer)
- http://trac.bigdata.com/ticket/393 (Add "context-uri" request parameter to specify the default context for INSERT in the REST API)
- http://trac.bigdata.com/ticket/394 (log4j configuration error message in WAR deployment)
- http://trac.bigdata.com/ticket/399 (Add a fast range count method to the REST API)
- http://trac.bigdata.com/ticket/422 (Support temp triple store wrapped by a BigdataSail)
- http://trac.bigdata.com/ticket/424 (NQuads support for NanoSparqlServer)
- http://trac.bigdata.com/ticket/425 (Bug fix to DEFAULT_RDF_FORMAT for bulk data loader in scale-out)
- http://trac.bigdata.com/ticket/426 (Support either lockfile (procmail) or dotlockfile (liblockfile1) in scale-out)
- http://trac.bigdata.com/ticket/427 (BigdataSail#getReadOnlyConnection() race condition with concurrent commit)
- http://trac.bigdata.com/ticket/435 (Address is 0L)
- http://trac.bigdata.com/ticket/436 (TestMROWTransactions failure in CI)

1.0.2

- http://trac.bigdata.com/ticket/32 (Query time expansion of (foo rdf:type rdfs:Resource) drags in SPORelation for scale-out.)
- http://trac.bigdata.com/ticket/181 (Scale-out LUBM "how to" in wiki and build.xml are out of date.)
- http://trac.bigdata.com/ticket/356 (Query not terminated by error.)
- http://trac.bigdata.com/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.)
- http://trac.bigdata.com/ticket/361 (IRunningQuery not closed promptly.)
- http://trac.bigdata.com/ticket/371 (DataLoader fails to load resources available from the classpath.)
- http://trac.bigdata.com/ticket/376 (Support for the streaming of bigdata IBindingSets into a sparql query.)
- http://trac.bigdata.com/ticket/378 (ClosedByInterruptException during heavy query mix.)
- http://trac.bigdata.com/ticket/379 (NotSerializableException for SPOAccessPath.)
- http://trac.bigdata.com/ticket/382 (Change dependencies to Apache River 2.2.0)

1.0.1 (*)

- http://trac.bigdata.com/ticket/107 (Unicode clean schema names in the sparse row store).
- http://trac.bigdata.com/ticket/124 (TermIdEncoder should use more bits for scale-out).
- http://trac.bigdata.com/ticket/225 (OSX requires specialized performance counter collection classes).
- http://trac.bigdata.com/ticket/348 (BigdataValueFactory.asValue() must return new instance when DummyIV is used).
- http://trac.bigdata.com/ticket/349 (TermIdEncoder limits Journal to 2B distinct RDF Values per triple/quad store instance).
- http://trac.bigdata.com/ticket/351 (SPO not Serializable exception in SIDS mode (scale-out)).
- http://trac.bigdata.com/ticket/352 (ClassCastException when querying with binding-values that are not known to the database).
- http://trac.bigdata.com/ticket/353 (UnsupportedOperatorException for some SPARQL queries).
- http://trac.bigdata.com/ticket/355 (Query failure when comparing with non materialized value).
- http://trac.bigdata.com/ticket/357 (RWStore reports "FixedAllocator returning null address, with freeBits".)
- http://trac.bigdata.com/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.)
- http://trac.bigdata.com/ticket/362 (log4j - slf4j bridge.)
For more information about bigdata(R), please see the following links:

[1] http://wiki.bigdata.com/wiki/index.php/Main_Page
[2] http://wiki.bigdata.com/wiki/index.php/GettingStarted
[3] http://wiki.bigdata.com/wiki/index.php/Roadmap
[4] http://www.bigdata.com/bigdata/docs/api/
[5] http://sourceforge.net/projects/bigdata/
[6] http://www.bigdata.com/blog
[7] http://www.systap.com/bigdata.htm
[8] http://sourceforge.net/projects/bigdata/files/bigdata/
[9] http://wiki.bigdata.com/wiki/index.php/DataMigration
[10] http://wiki.bigdata.com/wiki/index.php/HAJournalServer
[11] http://www.bigdata.com/whitepapers/reifSPARQL.pdf
[12] http://wiki.bigdata.com/wiki/index.php/RDF_GAS_API

About bigdata:

Bigdata(R) is a horizontally-scaled, general-purpose storage and computing fabric for ordered data (B+Trees), designed to operate on either a single server or a cluster of commodity hardware. Bigdata(R) uses dynamically partitioned key-range shards in order to remove any realistic scaling limits - in principle, bigdata(R) may be deployed on 10s, 100s, or even thousands of machines, and new capacity may be added incrementally without requiring a full reload of all data. The bigdata(R) RDF database supports RDFS and OWL Lite reasoning, high-level query (SPARQL), and datum-level provenance. |
From: Bryan T. <br...@sy...> - 2014-05-20 12:14:19
|
Head over to www.bigdata.com or www.systap.com to check out our website re-design, long overdue. We’ve also developed a new Bigdata workbench application, which will be included in the 1.3.1 release. http://www.bigdata.com/download Thanks, Bryan |
From: <br...@sy...> - 2014-05-14 11:00:08
|
I need everyone to review, update, and close out (where appropriate) tickets on trac. This is blocking a release. The change log is based on tickets closed since the last release. I would like to get a summary of tickets that are in process as well. Thanks, Bryan |
From: Bryan T. <br...@sy...> - 2014-05-14 10:53:33
|
I just removed it. Bryan > On May 13, 2014, at 9:43 PM, Jeremy J Carroll <jj...@sy...> wrote: > > I would just delete this line, but given the imminent release I will not do so. > > bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/ArbitraryLengthPathNode.java: System.err.println("adj: "+zeroMatchAdjustment); > > If it is still there later I will delete post-1.3.1 > > Jeremy > > > > > ------------------------------------------------------------------------------ > "Accelerate Dev Cycles with Automated Cross-Browser Testing - For FREE > Instantly run your Selenium tests across 300+ browser/OS combos. > Get unparalleled scalability from the best Selenium testing platform available > Simple to use. Nothing to install. Get started now for free." > http://p.sf.net/sfu/SauceLabs > _______________________________________________ > Bigdata-developers mailing list > Big...@li... > https://lists.sourceforge.net/lists/listinfo/bigdata-developers |
From: Bryan T. <br...@sy...> - 2014-05-14 10:28:56
|
Jeremy, please remove the line. Bryan > On May 13, 2014, at 9:44 PM, "Jeremy J Carroll" <jj...@sy...> wrote: > > I would just delete this line, but given the imminent release I will not do so. > > bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/ArbitraryLengthPathNode.java: System.err.println("adj: "+zeroMatchAdjustment); > > If it is still there later I will delete post-1.3.1 > > Jeremy |
From: Jeremy J C. <jj...@sy...> - 2014-05-14 01:43:57
|
I would just delete this line, but given the imminent release I will not do so. bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/ArbitraryLengthPathNode.java: System.err.println("adj: "+zeroMatchAdjustment); If it is still there later I will delete post-1.3.1 Jeremy |
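A quick way to catch stray debug prints like this before cutting a release is a recursive grep over the source tree. The sketch below is purely illustrative: the scratch directory and sample file are made up for the demonstration, and in a real checkout you would point the function at bigdata-rdf/src/java instead.

```shell
# Sketch: scan a source tree for stray System.err.println debug lines.
find_debug_prints() {
    # -r: recurse, -n: report line numbers for easy cleanup
    grep -rn "System\.err\.println" "$1"
}

# Demonstrate against a scratch directory (made-up path and content):
mkdir -p /tmp/ast-demo
printf 'System.err.println("adj: "+zeroMatchAdjustment);\n' \
    > /tmp/ast-demo/ArbitraryLengthPathNode.java
find_debug_prints /tmp/ast-demo
```

Running this against the real module directory would surface the line Jeremy flagged, plus any other debug output left behind.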
From: Jeremy C. <jj...@gm...> - 2014-05-13 18:11:39
|
Ah thanks that was our problem then! Jeremy On May 13, 2014, at 11:05 AM, Bryan Thompson <br...@sy...> wrote: > The bundleJar approach is not really intended for a deployment mechanism. The ant stage approach is. It builds a distribution directory (./dist) and lays out all of the files under that directory in a manner meant to support deployments. > Bryan > >> On May 13, 2014, at 1:57 PM, Jeremy Carroll <jj...@gm...> wrote: >> >> >> At Syapse we are having a simple difficulty in upgrading to 1.3.1 using the current code base. >> >> A simplified version of our install is >> >> $ ant bundleJar >> >> >> then take all the libs from ant-build/lib >> and put them on the class path >> >> >> Starting NSS then does not work, specifically >> http://localhost:PORT/bigdata does redirect to http://localhost:PORT/bigdata/ >> which then 404s >> >> It does however work if the folder bigdata-war from the source is present in the directory in which we start Java >> (also bigdata-war folder can be extracted from the bigdata*jar file ) and it works >> >> What appears to be going wrong is that without the folder present, jetty is not managing to read the bigdata-war/WEB-INF/web.xml >> >> Jeremy |
From: Bryan T. <br...@sy...> - 2014-05-13 18:06:05
|
The bundleJar approach is not really intended for a deployment mechanism. The ant stage approach is. It builds a distribution directory (./dist) and lays out all of the files under that directory in a manner meant to support deployments. Bryan > On May 13, 2014, at 1:57 PM, Jeremy Carroll <jj...@gm...> wrote: > > > At Syapse we are having a simple difficulty in upgrading to 1.3.1 using the current code base. > > A simplified version of our install is > > $ ant bundleJar > > > then take all the libs from ant-build/lib > and put them on the class path > > > Starting NSS then does not work, specifically > http://localhost:PORT/bigdata does redirect to http://localhost:PORT/bigdata/ > which then 404s > > It does however work if the folder bigdata-war from the source is present in the directory in which we start Java > (also bigdata-war folder can be extracted from the bigdata*jar file ) and it works > > What appears to be going wrong is that without the folder present, jetty is not managing to read the bigdata-war/WEB-INF/web.xml > > Jeremy |
From: Jeremy C. <jj...@gm...> - 2014-05-13 17:58:00
|
At Syapse we are having a simple difficulty in upgrading to 1.3.1 using the current code base. A simplified version of our install is $ ant bundleJar then take all the libs from ant-build/lib and put them on the class path Starting NSS then does not work; specifically, http://localhost:PORT/bigdata redirects to http://localhost:PORT/bigdata/ which then 404s. It does, however, work if the bigdata-war folder from the source is present in the directory in which we start Java (the bigdata-war folder can also be extracted from the bigdata*jar file). What appears to be going wrong is that without the folder present, jetty does not manage to read bigdata-war/WEB-INF/web.xml. Jeremy |
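Based on the diagnosis in this thread (jetty resolving bigdata-war/WEB-INF/web.xml relative to the working directory), a minimal preflight check along these lines can confirm whether the NSS startup will find its web.xml. The helper function and demo directory below are our own sketch, not part of the bigdata distribution.

```shell
# Sketch: verify the bigdata-war layout exists in the directory the
# JVM will be started from, before launching the NanoSparqlServer.
check_war_layout() {
    if [ -f "$1/bigdata-war/WEB-INF/web.xml" ]; then
        echo "ok: web.xml found"
    else
        # Per the thread, the folder can be pulled out of the jar:
        echo "missing: try 'jar xf bigdata-*.jar bigdata-war' in $1"
    fi
}

# Demonstrate against a scratch directory (made-up path):
mkdir -p /tmp/nss-demo/bigdata-war/WEB-INF
touch /tmp/nss-demo/bigdata-war/WEB-INF/web.xml
check_war_layout /tmp/nss-demo
```

As Bryan notes downthread, the supported route is `ant stage`, which lays everything out under ./dist; this check is only a diagnostic for bundleJar-style installs.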
From: Bryan T. <br...@sy...> - 2014-05-13 16:31:49
|
Hard. Yes, create a ticket for 1.3.1. > On May 13, 2014, at 12:26 PM, "Jeremy J Carroll" <jj...@sy...> wrote: > > > How hard is the freeze at the moment? > > I am getting the impression I should be ignoring minor/trivial issues …. :) > > (The one I just noted is that 'difficult' property values are not displayed correctly on the bigdata workbench) > > I can create a trac item, or maybe create a trac item after 1.3.1 is out ... > > Jeremy > > > > |
From: Jeremy J C. <jj...@sy...> - 2014-05-13 16:26:48
|
How hard is the freeze at the moment? I am getting the impression I should be ignoring minor/trivial issues …. :) (The one I just noted is that 'difficult' property values are not displayed correctly on the bigdata workbench) I can create a trac item, or maybe create a trac item after 1.3.1 is out ... Jeremy |
From: <br...@sy...> - 2014-05-13 13:33:50
|
This is in preparation for the 1.3.1 release. Bryan |
From: <br...@sy...> - 2014-05-08 20:21:25
|
We have a good CI build again. We rolled some changes back and into a branch and started CI on that branch as well. The main branch is building now. I would like people to work in feature branches until the 1.3.1 release for any new code. When you are ready, make sure that you update your feature branch from the main branch and verify that you can obtain a good CI run against the feature branch. Then prepare a patch and email the list. Either I or someone else will walk through a code review on the patch and bring it into the main branch. We can still try to bring focused features into the 1.3.1 release, but we need to prove that CI is good with those features before bringing them into the main branch. If you need help setting up CI on a feature branch, contact either Mike or me. Thanks, Bryan |
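The feature-branch workflow above can be sketched as svn commands (the project used Subversion at the time). The repository URL placeholder REPO and the branch name my-feature below are invented for illustration, not the project's actual layout.

```shell
# Sketch of the feature-branch flow: branch, develop, sync from the
# main branch, then produce a patch to email the list for review.
branch_workflow() {
    cat <<'EOF'
svn copy REPO/trunk REPO/branches/my-feature -m "open feature branch"
svn checkout REPO/branches/my-feature my-feature && cd my-feature
# ... develop, commit, and keep CI green on the branch ...
svn merge REPO/trunk .            # sync from the main branch first
svn diff > my-feature.patch       # patch to email the list for review
EOF
}
branch_workflow
```

The merge-from-trunk step before preparing the patch mirrors Bryan's requirement that the feature branch be updated from the main branch and show a good CI run before review.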
From: Bryan T. <br...@sy...> - 2014-05-08 14:44:13
|
Can people back out any new test suites and dependencies? Sometimes it is just a little bit that puts it over the limit. Even the new dependencies might drive up the number of open files for the jvm. If we can get the build working again, then we can focus on eliminating any thread, file, and memory leaks and then reopen the branch for new code commits. Bryan > On May 8, 2014, at 9:39 AM, "Bryan Thompson" <br...@sy...> wrote: > > Ok. I am seeing too many open files on another server running the same CI build. This issue was definitely introduced by the recent SVN commits. Something is leaking file handles. > > @Martyn, we have some open tickets for CI leaking resources. Can you please prioritize this and identify some culprit test suites and a workaround or fix? > > Bryan > > > Begin forwarded message: > >> From: "tho...@gm..." <tho...@gm...> >> Date: May 8, 2014 at 9:34:19 AM EDT >> To: Bryan Thompson <br...@sy...> >> Subject: Build failed in Jenkins: bigdata-release-1.3.0 #1808 >> >> See <http://192.168.1.10:8080/job/bigdata-release-1.3.0/1808/changes> >> >> Changes: >> >> [jeremy_carroll] externalized Japanese, Russian and German strings to address encoding issues >> >> [mrpersonick] changed the javadoc >> >> [jeremy_carroll] Tests for Language Range >> >> [mrpersonick] fixed the gremlin installer, added a loadGraphML method to all BigdataGraph impls >> >> [jeremy_carroll] Tests for the AnalyzerFactory's. The tests are for their shared behavior. >> >> [jeremy_carroll] improved encapsulation >> >> [jeremy_carroll] Extracted common superclass >> >> [mrpersonick] Commit of Blueprints/Gremlin support. See ticket 913. >> >> [jeremy_carroll] removed unnecessary UTF-8 encoding pref >> >> [jeremy_carroll] Initial version of ConfigurableAnalyzerFactory to address trac 912 >> >> [jeremy_carroll] delete spurious character and ensure that the copyright symbol does not prevent the javadoc target from completing. 
>> >> [thompsonbry] Identified a problem with the GangliaLBSPolicy where bigdata and bigdata-ganglia use the canonical (fully qualified) hostname and ganglia uses the local name of the host. This means that the host metrics are not being obtained by the GangliaLBSPolicy. While it is possible to override the hostname for ganglia starting with 3.2.x, this is quite a pain and could involve full restarts of gmond on all machines in the cluster. I have not yet resolved this issue, but I have added the ability to force the bigdata-ganglia implementation to use a hostname specified in an environment variable. >> >> Added the ability to override the hostname for bigdata-ganglia using the com.bigdata.hostname environment variable per [1]. >> >> Updated the pom.xml and build.properties files for the bigdata-ganglia-1.0.3 release. >> >> Published that release to our maven repo. >> >> [1] http://trac.bigdata.com/ticket/886 (Provide workaround for bad reverse DNS setups) >> >> [thompsonbry] Added the context path to the URL presented in the HA version of the /status page for the remote server. >> >> [thompsonbry] Fixed the name of the log file for the A server (was HALog-B.txt). Not sure how that happened. >> >> [thompsonbry] HAJournalServer now understands the jetty.dump.start environment variable. >> >> Modified the HA CI test suite to pass through the jetty.dump.start environment variable if set in the environment that runs the test suite JVM. >> >> Rolled back a change to jetty.xml that is breaking the HA CI test server startup. I will have to take this up with the jetty folks. This change was based on a recommended simplification of jetty.xml. The exception from HA CI was: >> {{{ >> WARN : 07:59:54,422 1620 com.bigdata.journal.jini.ha.HAJournalServer org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:506): Failed startup of context o.e.j.w.WebAppContext@718acd64{/bigdata,null,null}{"."} >> java.io.FileNotFoundException: "." 
>> at org.eclipse.jetty.webapp.WebInfConfiguration.unpack(WebInfConfiguration.java:493) >> at org.eclipse.jetty.webapp.WebInfConfiguration.preConfigure(WebInfConfiguration.java:72) >> at org.eclipse.jetty.webapp.WebAppContext.preConfigure(WebAppContext.java:460) >> at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:496) >> at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68) >> at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:125) >> at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:107) >> at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:60) >> at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68) >> at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:125) >> at org.eclipse.jetty.server.Server.start(Server.java:358) >> at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:107) >> at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:60) >> at org.eclipse.jetty.server.Server.doStart(Server.java:325) >> at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68) >> at com.bigdata.journal.jini.ha.HAJournalServer.startNSS(HAJournalServer.java:4550) >> at com.bigdata.journal.jini.ha.HAJournalServer.startUpHook(HAJournalServer.java:883) >> at com.bigdata.journal.jini.ha.AbstractServer.run(AbstractServer.java:1881) >> at com.bigdata.journal.jini.ha.HAJournalServer.<init>(HAJournalServer.java:623) >> at com.bigdata.journal.jini.ha.HAJournalServer.main(HAJournalServer.java:4763) >> }}} >> >> [thompsonbry] Disabling the jetty-jmx configuration in jetty.xml. This is breaking CI. >> >> [tobycraig] Improved detection of inserted content in update panel >> >> [thompsonbry] Added the jetty-jmx and jetty-jndi dependencies. 
The jmx dependency is necessary if you want to export the MBeans from the jetty server to another host. The jndi dependency is just foward looking - jndi provides a flexible mechanism for configuring jetty. >> >> [thompsonbry] Syntax error in the HAConfig file. >> >> [thompsonbry] Added an RMI_PORT for the exporter for the HAGlue interface. This can be set from the startHAServices script. >> >> The ganglia policy needs to chose a target host in inverse proportion to the workload on that host. >> >> See #624 (HA LBS) >> >> [tobycraig] Advanced commands (monitor and analytic) enabled for update >> >> [thompsonbry] Replaced the old trac and wiki links in the release notes with the new trac and wiki links. >> >> [thompsonbry] Commit of release notes template for 1.3.1. >> >> [thompsonbry] Refactored the package for several LBS files. >> >> See #624 (HA LBS) >> >> [tobycraig] #891 - Added GAS namespace to shortcuts >> >> [tobycraig] #906 - Fixed namespace shortcut buttons for namespaces that don't end with # >> >> [mrpersonick] Ticket #907: bigdata quick start. >> >> [tobycraig] Resized logo >> >> [mrpersonick] got rid of IDoNotJoinService >> >> [thompsonbry] Modified the GangliaLBSPolicy to explicitly check for a null hostname (can be caused by a failed RMI). >> >> Modified the HA status page to include the proxy object for the RMI interface for the local service. I want to use this to diagnose situations where one service as a proxy for another but the proxy that is has does not respond and is, hence, probably dead. Note that moving into an error state on that service whose exported proxy is not responding is not enough to get the service to become responsive. This is probably because the error state does not unexport the proxy. However, it does not explain why/how a bad proxy got out there in the first place. 
>> >> See #624 (HA LBS) >> >> [thompsonbry] Published new version of bigdata-ganglia (1.0.2) with new APIs for GangliaService that are used by the GangliaLBSPolicy. The artifact has been pushed to the systap maven repository. >> >> See #624 (HA LBS) >> >> [jeremy_carroll] File missed from last commit >> >> [jeremy_carroll] Test and fix for trac904 FILTER EXISTS || TRUE. AbstractJoinGroupOptimizer needs to recurse into the filter expression looking for EXISTS >> >> [jeremy_carroll] Improve the print out of debug representations - in particular ensure that the content of FILTER EXISTS and FILTER NOT EXISTS are shown prior to the optimizer step that pulls the content out as a subquery. >> Also shorten the names of some annotations in some debug representations. >> >> [thompsonbry] javadoc >> >> [thompsonbry] Should have been synchronized on the hostTableRef, not the local variable named 'hostTable'. >> >> [thompsonbry] Reducing WARN => INFO. >> >> [thompsonbry] Rolling back the GangliaLBSPolicy default. I need to push out a release of the bigdata-ganglia.jar that declares some new methods, such as: >> >> com.bigdata.ganglia.GangliaService.getDefaultHostReportOn()[Ljava/lang/String >> >> See #624 (HA LBS) >> >> [thompsonbry] Enabling platform statistics collection and ganglia listener by default for the HA deployment mode to support the GangliaLBSPolicy >> >> [thompsonbry] removing the jetty thread min/max overrides. They are causing problems with Property vs SystemProperty.... >> >> [thompsonbry] Changed the default policy to the GangliaLBSPolicy for testing. >> >> Added min/max threads environment variables to startHAServices. This are examined by jetty.xml. >> >> [thompsonbry] Bug fix to startHAServices. >> >> [thompsonbry] Adding to startHAServices. If the BIGDATA_HOSTNAME is not defined, then the actual canonical hostname will be used. 
>> >> See #886 (Provide workaround for bad reverse DNS setups) >> >> [thompsonbry] Added counter to ServiceScore to track the #of requests that are handed to each service by the LBS on a given service. >> >> See #624 (HA LBS). >> >> [thompsonbry] Optimized code path when there is a change in the set of discovered services for the HA load balancer to reuse an existing declaration for a service from the old table. This avoids some needless RMI. >> >> See #624 (HA LBS) >> >> [thompsonbry] Removed some comment blocks (dead code). >> >> [thompsonbry] Providing more information about the state of the HA LBS. >> >> See #624 (HA LBS) >> >> [thompsonbry] Changed the default LBS policy from NOP to round-robin for performance testing on HA3 cluster. I will also need to test the ganglia LBS policy. >> >> Added ability to view the current LBS policy to the HA status page. >> >> See #624 (HA LBS) >> >> [jeremy_carroll] New tests inspired by trac904 - unfortunately all passing >> >> [jeremy_carroll] Deleted incorrect comments >> >> [tobycraig] Improved error reporting on bad query >> >> [jeremy_carroll] Fixed trac 905 >> >> [jeremy_carroll] Added test for trac905 >> >> [jeremy_carroll] Fixes three failing test cases; and obviously not incorrect >> >> [tobycraig] Quick fix for status tab in HA mode. Ideally data would be sent as JSON rather than HTML. >> >> [tobycraig] Fixed Jetty path issues with HA server >> >> [tobycraig] Added Details option to Explain query >> >> [tobycraig] Changed Load to Update >> >> [thompsonbry] Refactoring of the HA Load Balancer to expose an interface that can be used by an application to take over the rewrite of the Request-URI when the request will be proxied to another service. See the new IHARequestURIRewriter interface and the new REWRITER init-param for the HALoadBalancerSerlet. >> >> See #624 (HA LBS) >> >> [dmekonnen] Update to sync changes that occurred during the merger freeze. 
This version is identical with the version submitted to the Homebrew project on 4/30. >> >> ------------------------------------------ >> [...truncated 304787 lines...] >> [junit] at java.lang.Thread.run(Thread.java:724) >> [junit] WARN : 5784091 qtp1462886873-73924-acceptor-0-ServerConnector@28797bfb{HTTP/1.1}{0.0.0.0:57426} org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:472): >> [junit] java.io.IOException: Too many open files >> [junit] at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) >> [junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241) >> [junit] at org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:336) >> [junit] at org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:467) >> [junit] at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:607) >> [junit] at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:536) >> [junit] at java.lang.Thread.run(Thread.java:724) >> [...the same "Too many open files" warning and stack trace repeat...]
[junit] java.io.IOException: Too many open files >> [junit] at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) >> [junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241) >> [junit] at org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:336) >> [junit] at org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:467) >> [junit] at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:607) >> [junit] at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:536) >> [junit] at java.lang.Thread.run(Thread.java:724) >> [junit] WARN : 5784094 qtp1462886873-73924-acceptor-0-ServerConnector@28797bfb{HTTP/1.1}{0.0.0.0:57426} org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:472): >> [junit] java.io.IOException: Too many open files >> [junit] at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) >> [junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241) >> [junit] at org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:336) >> [junit] at org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:467) >> [junit] at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:607) >> [junit] at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:536) >> [junit] at java.lang.Thread.run(Thread.java:724) >> [junit] WARN : 5784094 qtp1462886873-73924-acceptor-0-ServerConnector@28797bfb{HTTP/1.1}{0.0.0.0:57426} org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:472): >> [junit] java.io.IOException: Too many open files >> [junit] at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) >> [junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241) >> [junit] at org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:336) >> [junit] at 
org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:467) >> [junit] at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:607) >> [junit] at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:536) >> [junit] at java.lang.Thread.run(Thread.java:724) >> [junit] WARN : 5784095 qtp1462886873-73924-acceptor-0-ServerConnector@28797bfb{HTTP/1.1}{0.0.0.0:57426} org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:472): >> [junit] java.io.IOException: Too many open files >> [junit] at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) >> [junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241) >> [junit] at org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:336) >> [junit] at org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:467) >> [junit] at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:607) >> [junit] at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:536) >> [junit] at java.lang.Thread.run(Thread.java:724) >> [junit] WARN : 5784095 qtp1462886873-73924-acceptor-0-ServerConnector@28797bfb{HTTP/1.1}{0.0.0.0:57426} org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:472): >> [junit] java.io.IOException: Too many open files >> [junit] at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) >> [junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241) >> [junit] at org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:336) >> [junit] at org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:467) >> [junit] at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:607) >> [junit] at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:536) >> [junit] at java.lang.Thread.run(Thread.java:724) >> 
[junit] WARN : 5784095 qtp1462886873-73924-acceptor-0-ServerConnector@28797bfb{HTTP/1.1}{0.0.0.0:57426} org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:472): >> [junit] java.io.IOException: Too many open files >> [junit] at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) >> [junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241) >> [junit] at org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:336) >> [junit] at org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:467) >> [junit] at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:607) >> [junit] at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:536) >> [junit] at java.lang.Thread.run(Thread.java:724) >> [junit] WARN : 5784095 qtp1462886873-73924-acceptor-0-ServerConnector@28797bfb{HTTP/1.1}{0.0.0.0:57426} org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:472): >> [junit] java.io.IOException: Too many open files >> [junit] at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) >> [junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241) >> [junit] at org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:336) >> [junit] at org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:467) >> [junit] at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:607) >> [junit] at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:536) >> [junit] at java.lang.Thread.run(Thread.java:724) >> [junit] WARN : 5784096 qtp1462886873-73924-acceptor-0-ServerConnector@28797bfb{HTTP/1.1}{0.0.0.0:57426} org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:472): >> [junit] java.io.IOException: Too many open files >> [junit] at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) >> [junit] at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241) >> [junit] at org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:336) >> [junit] at org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:467) >> [junit] at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:607) >> [junit] at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:536) >> [junit] at java.lang.Thread.run(Thread.java:724) >> [junit] WARN : 5784096 qtp1462886873-73924-acceptor-0-ServerConnector@28797bfb{HTTP/1.1}{0.0.0.0:57426} org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:472): >> [junit] java.io.IOException: Too many open files >> [junit] at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) >> [junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241) >> [junit] at org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:336) >> [junit] at org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:467) >> [junit] at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:607) >> [junit] at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:536) >> [junit] at java.lang.Thread.run(Thread.java:724) >> [junit] WARN : 5784096 qtp1462886873-73924-acceptor-0-ServerConnector@28797bfb{HTTP/1.1}{0.0.0.0:57426} org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:472): >> [junit] java.io.IOException: Too many open files >> [junit] at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) >> [junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241) >> [junit] at org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:336) >> [junit] at org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:467) >> [junit] at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:607) >> [junit] at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:536) >> [junit] at java.lang.Thread.run(Thread.java:724) >> [junit] WARN : 5784097 qtp1462886873-73924-acceptor-0-ServerConnector@28797bfb{HTTP/1.1}{0.0.0.0:57426} org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:472): >> [junit] java.io.IOException: Too many open files >> [junit] at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) >> [junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241) >> [junit] at org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:336) >> [junit] at org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:467) >> [junit] at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:607) >> [junit] at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:536) >> [junit] at java.lang.Thread.run(Thread.java:724) >> [junit] WARN : 5784097 qtp1462886873-73924-acceptor-0-ServerConnector@28797bfb{HTTP/1.1}{0.0.0.0:57426} org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:472): >> [junit] java.io.IOException: Too many open files >> [junit] at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) >> [junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241) >> [junit] at org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:336) >> [junit] at org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:467) >> [junit] at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:607) >> [junit] at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:536) >> [junit] at java.lang.Thread.run(Thread.java:724) >> [junit] WARN : 5784097 qtp1462886873-73924-acceptor-0-ServerConnector@28797bfb{HTTP/1.1}{0.0.0.0:57426} 
org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:472): >> [junit] java.io.IOException: Too many open files >> [junit] at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) >> [junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241) >> [junit] at org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:336) >> [junit] at org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:467) >> [junit] at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:607) >> [junit] at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:536) >> [junit] at java.lang.Thread.run(Thread.java:724) >> [junit] WARN : 5784098 qtp1462886873-73924-acceptor-0-ServerConnector@28797bfb{HTTP/1.1}{0.0.0.0:57426} org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:472): >> [junit] java.io.IOException: Too many open files >> [junit] at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) >> [junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241) >> [junit] at org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:336) >> [junit] at org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:467) >> [junit] at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:607) >> [junit] at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:536) >> [junit] at java.lang.Thread.run(Thread.java:724) >> [junit] WARN : 5784098 qtp1462886873-73924-acceptor-0-ServerConnector@28797bfb{HTTP/1.1}{0.0.0.0:57426} org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:472): >> [junit] java.io.IOException: Too many open files >> [junit] at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) >> [junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241) >> [junit] at 
org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:336) >> [junit] at org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:467) >> [junit] at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:607) >> [junit] at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:536) >> [junit] at java.lang.Thread.run(Thread.java:724) >> [junit] WARN : 5784098 qtp1462886873-73924-acceptor-0-ServerConnector@28797bfb{HTTP/1.1}{0.0.0.0:57426} org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:472): >> [junit] java.io.IOException: Too many open files >> [junit] at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) >> [junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241) >> [junit] at org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:336) >> [junit] at org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:467) >> [junit] at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:607) >> [junit] at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:536) >> [junit] at java.lang.Thread.run(Thread.java:724) >> [junit] Test com.bigdata.rdf.sail.TestAll FAILED >> >> BUILD FAILED >> <http://192.168.1.10:8080/job/bigdata-release-1.3.0/ws/BIGDATA_RELEASE_1_3_0/build.xml>:1996: The following error occurred while executing this line: >> <http://192.168.1.10:8080/job/bigdata-release-1.3.0/ws/BIGDATA_RELEASE_1_3_0/build.xml>:2242: Unable to open file <http://192.168.1.10:8080/job/bigdata-release-1.3.0/ws/BIGDATA_RELEASE_1_3_0/ant-build/classes/test/test-results/TEST-com.bigdata.rdf.sail.webapp.TestAll.txt> >> >> Total time: 125 minutes 51 seconds >> Build step 'Invoke Ant' marked build as failure >> Archiving artifacts >> Performing Post build task... 
>> Could not match :JUNIT RUN COMPLETE : False
>> Logical operation result is FALSE
>> Skipping script : #~/rsync.sh release-1.3.0
>>
>> rm -rf /var/tmp/zookeeper
>> END OF POST BUILD TASK : 0
>> Recording test results
>> Failed to send e-mail to thompsonbry because no e-mail address is known, and no default e-mail domain is configured
>> Failed to send e-mail to dmekonnen because no e-mail address is known, and no default e-mail domain is configured
>> Failed to send e-mail to mrpersonick because no e-mail address is known, and no default e-mail domain is configured
>> Failed to send e-mail to tobycraig because no e-mail address is known, and no default e-mail domain is configured
>> Failed to send e-mail to jeremy_carroll because no e-mail address is known, and no default e-mail domain is configured
>> Failed to send e-mail to martyncutcher because no e-mail address is known, and no default e-mail domain is configured |
From: Mike P. <mi...@sy...> - 2014-05-08 14:00:28
|
I've checked over my recent commits related to Blueprints. Unfortunately I do not see anything suspicious that could be related to the memory leak. There are a few Blueprints test suites, but they should not be enabled in CI right now. I don't see anything in "ant run-junit" that would suggest that they are magically being run anyway.

My updates are mostly self-contained in com.bigdata.blueprints. Outside of that there are a few minor changes:

- build.xml: changes to get Blueprints code compiled, jar'd, and deployed; gremlin installation
- pom.xml: dependencies updated to include Blueprints test dependencies
- BigdataSailRemoteRepository: strengthened return type of getConnection
- BigdataSailRemoteRepositoryConnection: removed a log statement
- RESTServlet: linked to a new BlueprintsServlet to handle certain REST calls
- RemoteRepository: took out inference methods (doClosure, setTM), added a method to send GraphML to BlueprintsServlet

From: Bryan Thompson <br...@sy...>
Date: Thursday, May 8, 2014 9:38 AM
To: "Big...@li..." <Big...@li...>
Cc: Martyn Cutcher <ma...@sy...>, Mike Personick <mi...@sy...>
Subject: Fwd: Build failed in Jenkins: bigdata-release-1.3.0 #1808

Ok. I am seeing "too many open files" on another server running the same CI build. This issue was definitely introduced by the recent SVN commits. Something is leaking file handles.

@Martyn, we have some open tickets for CI leaking resources. Can you please prioritize this and identify some culprit test suites and a workaround or fix?
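Bryan's request above (identify which test suites leak file handles) can be approached by sampling the JVM's open-descriptor count around each suspect suite. A minimal sketch, assuming a Linux CI host where /proc/self/fd is available; the class and method names here are hypothetical helpers, not part of bigdata:

```java
import java.io.File;

// Hypothetical helper for spotting file-descriptor leaks between test suites.
public class FdCount {

    // Returns the number of open file descriptors for this JVM process,
    // or -1 when /proc/self/fd is unavailable (e.g. a non-Linux host).
    public static int countOpenFileDescriptors() {
        final File fdDir = new File("/proc/self/fd");
        final String[] fds = fdDir.list();
        return fds == null ? -1 : fds.length;
    }

    public static void main(String[] args) {
        final int before = countOpenFileDescriptors();
        // ... run a suspect test suite here ...
        final int after = countOpenFileDescriptors();
        // A steadily growing delta across suites points at the leaking one.
        System.out.println("fd delta: " + (after - before));
    }
}
```

Logging this delta in a shared setUp/tearDown would narrow the leak down to a suite without waiting for the "Too many open files" failure two hours into the build.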
Bryan

Begin forwarded message:

From: "tho...@gm..." <tho...@gm...>
Date: May 8, 2014 at 9:34:19 AM EDT
To: Bryan Thompson <br...@sy...>
Subject: Build failed in Jenkins: bigdata-release-1.3.0 #1808

See <http://192.168.1.10:8080/job/bigdata-release-1.3.0/1808/changes>

Changes:

[jeremy_carroll] externalized Japanese, Russian and German strings to address encoding issues
[mrpersonick] changed the javadoc
[jeremy_carroll] Tests for Language Range
[mrpersonick] fixed the gremlin installer, added a loadGraphML method to all BigdataGraph impls
[jeremy_carroll] Tests for the AnalyzerFactory's. The tests are for their shared behavior.
[jeremy_carroll] improved encapsulation
[jeremy_carroll] Extracted common superclass
[mrpersonick] Commit of Blueprints/Gremlin support. See ticket 913.
[jeremy_carroll] removed unnecessary UTF-8 encoding pref
[jeremy_carroll] Initial version of ConfigurableAnalyzerFactory to address trac 912
[jeremy_carroll] delete spurious character and ensure that the copyright symbol does not prevent the javadoc target from completing.
[thompsonbry] Identified a problem with the GangliaLBSPolicy where bigdata and bigdata-ganglia use the canonical (fully qualified) hostname and ganglia uses the local name of the host. This means that the host metrics are not being obtained by the GangliaLBSPolicy. While it is possible to override the hostname for ganglia starting with 3.2.x, this is quite a pain and could involve full restarts of gmond on all machines in the cluster. I have not yet resolved this issue, but I have added the ability to force the bigdata-ganglia implementation to use a hostname specified in an environment variable. Added the ability to override the hostname for bigdata-ganglia using the com.bigdata.hostname environment variable per [1]. Updated the pom.xml and build.properties files for the bigdata-ganglia-1.0.3 release. Published that release to our maven repo. [1] http://trac.bigdata.com/ticket/886 (Provide workaround for bad reverse DNS setups)
[thompsonbry] Added the context path to the URL presented in the HA version of the /status page for the remote server.
[thompsonbry] Fixed the name of the log file for the A server (was HALog-B.txt). Not sure how that happened.
[thompsonbry] HAJournalServer now understands the jetty.dump.start environment variable. Modified the HA CI test suite to pass through the jetty.dump.start environment variable if set in the environment that runs the test suite JVM. Rolled back a change to jetty.xml that is breaking the HA CI test server startup. I will have to take this up with the jetty folks. This change was based on a recommended simplification of jetty.xml. The exception from HA CI was:
{{{
WARN : 07:59:54,422 1620 com.bigdata.journal.jini.ha.HAJournalServer org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:506): Failed startup of context o.e.j.w.WebAppContext@718acd64{/bigdata,null,null}{"."}
java.io.FileNotFoundException: "."
    at org.eclipse.jetty.webapp.WebInfConfiguration.unpack(WebInfConfiguration.java:493)
    at org.eclipse.jetty.webapp.WebInfConfiguration.preConfigure(WebInfConfiguration.java:72)
    at org.eclipse.jetty.webapp.WebAppContext.preConfigure(WebAppContext.java:460)
    at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:496)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:125)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:107)
    at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:60)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:125)
    at org.eclipse.jetty.server.Server.start(Server.java:358)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:107)
    at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:60)
    at org.eclipse.jetty.server.Server.doStart(Server.java:325)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at com.bigdata.journal.jini.ha.HAJournalServer.startNSS(HAJournalServer.java:4550)
    at com.bigdata.journal.jini.ha.HAJournalServer.startUpHook(HAJournalServer.java:883)
    at com.bigdata.journal.jini.ha.AbstractServer.run(AbstractServer.java:1881)
    at com.bigdata.journal.jini.ha.HAJournalServer.<init>(HAJournalServer.java:623)
    at com.bigdata.journal.jini.ha.HAJournalServer.main(HAJournalServer.java:4763)
}}}
[thompsonbry] Disabling the jetty-jmx configuration in jetty.xml. This is breaking CI.
[tobycraig] Improved detection of inserted content in update panel
[thompsonbry] Added the jetty-jmx and jetty-jndi dependencies. The jmx dependency is necessary if you want to export the MBeans from the jetty server to another host. The jndi dependency is just forward looking - jndi provides a flexible mechanism for configuring jetty.
[thompsonbry] Syntax error in the HAConfig file.
[thompsonbry] Added an RMI_PORT for the exporter for the HAGlue interface. This can be set from the startHAServices script. The ganglia policy needs to choose a target host in inverse proportion to the workload on that host. See #624 (HA LBS)
[tobycraig] Advanced commands (monitor and analytic) enabled for update
[thompsonbry] Replaced the old trac and wiki links in the release notes with the new trac and wiki links.
[thompsonbry] Commit of release notes template for 1.3.1.
[thompsonbry] Refactored the package for several LBS files. See #624 (HA LBS)
[tobycraig] #891 - Added GAS namespace to shortcuts
[tobycraig] #906 - Fixed namespace shortcut buttons for namespaces that don't end with #
[mrpersonick] Ticket #907: bigdata quick start.
[tobycraig] Resized logo
[mrpersonick] got rid of IDoNotJoinService
[thompsonbry] Modified the GangliaLBSPolicy to explicitly check for a null hostname (can be caused by a failed RMI). Modified the HA status page to include the proxy object for the RMI interface for the local service. I want to use this to diagnose situations where one service has a proxy for another but the proxy that it has does not respond and is, hence, probably dead. Note that moving into an error state on that service whose exported proxy is not responding is not enough to get the service to become responsive. This is probably because the error state does not unexport the proxy. However, it does not explain why/how a bad proxy got out there in the first place. See #624 (HA LBS)
[thompsonbry] Published new version of bigdata-ganglia (1.0.2) with new APIs for GangliaService that are used by the GangliaLBSPolicy. The artifact has been pushed to the systap maven repository. See #624 (HA LBS)
[jeremy_carroll] File missed from last commit
[jeremy_carroll] Test and fix for trac904 FILTER EXISTS || TRUE. AbstractJoinGroupOptimizer needs to recurse into the filter expression looking for EXISTS
[jeremy_carroll] Improve the print out of debug representations - in particular ensure that the content of FILTER EXISTS and FILTER NOT EXISTS are shown prior to the optimizer step that pulls the content out as a subquery. Also shorten the names of some annotations in some debug representations.
[thompsonbry] javadoc
[thompsonbry] Should have been synchronized on the hostTableRef, not the local variable named 'hostTable'.
[thompsonbry] Reducing WARN => INFO.
[thompsonbry] Rolling back the GangliaLBSPolicy default. I need to push out a release of the bigdata-ganglia.jar that declares some new methods, such as: com.bigdata.ganglia.GangliaService.getDefaultHostReportOn()[Ljava/lang/String See #624 (HA LBS)
[thompsonbry] Enabling platform statistics collection and ganglia listener by default for the HA deployment mode to support the GangliaLBSPolicy
[thompsonbry] removing the jetty thread min/max overrides. They are causing problems with Property vs SystemProperty....
[thompsonbry] Changed the default policy to the GangliaLBSPolicy for testing. Added min/max threads environment variables to startHAServices. These are examined by jetty.xml.
[thompsonbry] Bug fix to startHAServices.
[thompsonbry] Adding to startHAServices. If the BIGDATA_HOSTNAME is not defined, then the actual canonical hostname will be used. See #886 (Provide workaround for bad reverse DNS setups)
[thompsonbry] Added counter to ServiceScore to track the #of requests that are handed to each service by the LBS on a given service. See #624 (HA LBS).
[thompsonbry] Optimized code path when there is a change in the set of discovered services for the HA load balancer to reuse an existing declaration for a service from the old table. This avoids some needless RMI. See #624 (HA LBS)
[thompsonbry] Removed some comment blocks (dead code).
[thompsonbry] Providing more information about the state of the HA LBS. See #624 (HA LBS)
[thompsonbry] Changed the default LBS policy from NOP to round-robin for performance testing on HA3 cluster. I will also need to test the ganglia LBS policy. Added ability to view the current LBS policy to the HA status page. See #624 (HA LBS)
[jeremy_carroll] New tests inspired by trac904 - unfortunately all passing
[jeremy_carroll] Deleted incorrect comments
[tobycraig] Improved error reporting on bad query
[jeremy_carroll] Fixed trac 905
[jeremy_carroll] Added test for trac905
[jeremy_carroll] Fixes three failing test cases; and obviously not incorrect
[tobycraig] Quick fix for status tab in HA mode. Ideally data would be sent as JSON rather than HTML.
[tobycraig] Fixed Jetty path issues with HA server
[tobycraig] Added Details option to Explain query
[tobycraig] Changed Load to Update
[thompsonbry] Refactoring of the HA Load Balancer to expose an interface that can be used by an application to take over the rewrite of the Request-URI when the request will be proxied to another service. See the new IHARequestURIRewriter interface and the new REWRITER init-param for the HALoadBalancerServlet. See #624 (HA LBS)
[dmekonnen] Update to sync changes that occurred during the merge freeze. This version is identical with the version submitted to the Homebrew project on 4/30.

------------------------------------------
[...truncated 304787 lines...]
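The hostname-override workaround described in the commit log (forcing bigdata-ganglia to use a supplied name instead of the canonical reverse-DNS result, per ticket #886) follows a common lookup-with-fallback pattern. A sketch under the assumption that the override is read as the com.bigdata.hostname system property, then the BIGDATA_HOSTNAME environment variable, then the canonical hostname; the helper class here is hypothetical, not bigdata's actual code:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Hypothetical sketch of the hostname-override pattern from the commit log.
public class HostnameResolver {

    // Resolve the reporting hostname: explicit override first, then the
    // environment, then the canonical name, then a safe fallback for
    // hosts with broken reverse DNS.
    public static String resolveHostname() {
        String name = System.getProperty("com.bigdata.hostname");
        if (name == null) {
            name = System.getenv("BIGDATA_HOSTNAME");
        }
        if (name == null) {
            try {
                name = InetAddress.getLocalHost().getCanonicalHostName();
            } catch (UnknownHostException e) {
                name = "localhost"; // bad reverse-DNS setup; see #886
            }
        }
        return name;
    }

    public static void main(String[] args) {
        System.out.println("reporting as: " + resolveHostname());
    }
}
```

Checking the explicit override before any DNS call is what lets operators work around a cluster where gmond and the JVM disagree about the host's name, without restarting ganglia everywhere.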
    [junit]     at java.lang.Thread.run(Thread.java:724)
    [junit] WARN : 5784091 qtp1462886873-73924-acceptor-0-ServerConnector@28797bfb{HTTP/1.1}{0.0.0.0:57426} org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:472):
    [junit] java.io.IOException: Too many open files
    [junit]     at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
    [junit]     at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241)
    [junit]     at org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:336)
    [junit]     at org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:467)
    [junit]     at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:607)
    [junit]     at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:536)
    [junit]     at java.lang.Thread.run(Thread.java:724)
    [... the same "java.io.IOException: Too many open files" warning and stack trace repeated many times (timestamps 5784091-5784098) ...]
    [junit] Test com.bigdata.rdf.sail.TestAll FAILED

BUILD FAILED
<http://192.168.1.10:8080/job/bigdata-release-1.3.0/ws/BIGDATA_RELEASE_1_3_0/build.xml>:1996: The following error occurred while executing this line:
<http://192.168.1.10:8080/job/bigdata-release-1.3.0/ws/BIGDATA_RELEASE_1_3_0/build.xml>:2242: Unable to open file <http://192.168.1.10:8080/job/bigdata-release-1.3.0/ws/BIGDATA_RELEASE_1_3_0/ant-build/classes/test/test-results/TEST-com.bigdata.rdf.sail.webapp.TestAll.txt>

Total time: 125 minutes 51 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Performing Post build task...
Could not match :JUNIT RUN COMPLETE : False
Logical operation result is FALSE
Skipping script : #~/rsync.sh release-1.3.0 rm -rf /var/tmp/zookeeper
END OF POST BUILD TASK : 0
Recording test results
Failed to send e-mail to thompsonbry because no e-mail address is known, and no default e-mail domain is configured
Failed to send e-mail to dmekonnen because no e-mail address is known, and no default e-mail domain is configured
Failed to send e-mail to mrpersonick because no e-mail address is known, and no default e-mail domain is configured
Failed to send e-mail to tobycraig because no e-mail address is known, and no default e-mail domain is configured
Failed to send e-mail to jeremy_carroll because no e-mail address is known, and no default e-mail domain is configured
Failed to send e-mail to martyncutcher because no e-mail address is known, and no default e-mail domain is configured
|
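[Editor's note, not part of the original thread: "java.io.IOException: Too many open files" means the JVM exhausted its per-process file-descriptor limit. A generic way to confirm a descriptor leak on Linux, not specific to this build setup, is to compare the limit against the count of descriptors held by the suspect test JVM:]

```shell
# Show the per-process open-file limit in effect for processes
# started from this shell.
ulimit -n

# Count descriptors currently held by a process; a leaking test run
# climbs steadily toward that limit. Replace JVM_PID with the pid of
# the junit JVM (Linux-specific: reads /proc).
# ls /proc/JVM_PID/fd | wc -l
```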
From: Bryan T. <br...@sy...> - 2014-05-08 13:39:08
|
Ok. I am seeing too many open files on another server running the same CI build. This issue was definitely introduced by the recent SVN commits. Something is leaking file handles.

@Martyn, we have some open tickets for CI leaking resources. Can you please prioritize this and identify some culprit test suites and a workaround or fix?

Bryan

Begin forwarded message:

> From: "tho...@gm..." <tho...@gm...>
> Date: May 8, 2014 at 9:34:19 AM EDT
> To: Bryan Thompson <br...@sy...>
> Subject: Build failed in Jenkins: bigdata-release-1.3.0 #1808
>
> See <http://192.168.1.10:8080/job/bigdata-release-1.3.0/1808/changes>
>
> Changes:
>
> [jeremy_carroll] externalized Japanese, Russian and German strings to address encoding issues
>
> [mrpersonick] changed the javadoc
>
> [jeremy_carroll] Tests for Language Range
>
> [mrpersonick] fixed the gremlin installer, added a loadGraphML method to all BigdataGraph impls
>
> [jeremy_carroll] Tests for the AnalyzerFactory's. The tests are for their shared behavior.
>
> [jeremy_carroll] improved encapsulation
>
> [jeremy_carroll] Extracted common superclass
>
> [mrpersonick] Commit of Blueprints/Gremlin support. See ticket 913.
>
> [jeremy_carroll] removed unnecessary UTF-8 encoding pref
>
> [jeremy_carroll] Initial version of ConfigurableAnalyzerFactory to address trac 912
>
> [jeremy_carroll] delete spurious character and ensure that the copyright symbol does not prevent the javadoc target from completing.
>
> [thompsonbry] Identified a problem with the GangliaLBSPolicy where bigdata and bigdata-ganglia use the canonical (fully qualified) hostname and ganglia uses the local name of the host. This means that the host metrics are not being obtained by the GangliaLBSPolicy. While it is possible to override the hostname for ganglia starting with 3.2.x, this is quite a pain and could involve full restarts of gmond on all machines in the cluster.
> I have not yet resolved this issue, but I have added the ability to force the bigdata-ganglia implementation to use a hostname specified in an environment variable.
>
> Added the ability to override the hostname for bigdata-ganglia using the com.bigdata.hostname environment variable per [1].
>
> Updated the pom.xml and build.properties files for the bigdata-ganglia-1.0.3 release.
>
> Published that release to our maven repo.
>
> [1] http://trac.bigdata.com/ticket/886 (Provide workaround for bad reverse DNS setups)
>
> [thompsonbry] Added the context path to the URL presented in the HA version of the /status page for the remote server.
>
> [thompsonbry] Fixed the name of the log file for the A server (was HALog-B.txt). Not sure how that happened.
>
> [thompsonbry] HAJournalServer now understands the jetty.dump.start environment variable.
>
> Modified the HA CI test suite to pass through the jetty.dump.start environment variable if set in the environment that runs the test suite JVM.
>
> Rolled back a change to jetty.xml that is breaking the HA CI test server startup. I will have to take this up with the jetty folks. This change was based on a recommended simplification of jetty.xml. The exception from HA CI was:
> {{{
> WARN : 07:59:54,422 1620 com.bigdata.journal.jini.ha.HAJournalServer org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:506): Failed startup of context o.e.j.w.WebAppContext@718acd64{/bigdata,null,null}{"."}
> java.io.FileNotFoundException: "."
>     at org.eclipse.jetty.webapp.WebInfConfiguration.unpack(WebInfConfiguration.java:493)
>     at org.eclipse.jetty.webapp.WebInfConfiguration.preConfigure(WebInfConfiguration.java:72)
>     at org.eclipse.jetty.webapp.WebAppContext.preConfigure(WebAppContext.java:460)
>     at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:496)
>     at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>     at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:125)
>     at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:107)
>     at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:60)
>     at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>     at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:125)
>     at org.eclipse.jetty.server.Server.start(Server.java:358)
>     at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:107)
>     at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:60)
>     at org.eclipse.jetty.server.Server.doStart(Server.java:325)
>     at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
>     at com.bigdata.journal.jini.ha.HAJournalServer.startNSS(HAJournalServer.java:4550)
>     at com.bigdata.journal.jini.ha.HAJournalServer.startUpHook(HAJournalServer.java:883)
>     at com.bigdata.journal.jini.ha.AbstractServer.run(AbstractServer.java:1881)
>     at com.bigdata.journal.jini.ha.HAJournalServer.<init>(HAJournalServer.java:623)
>     at com.bigdata.journal.jini.ha.HAJournalServer.main(HAJournalServer.java:4763)
> }}}
>
> [thompsonbry] Disabling the jetty-jmx configuration in jetty.xml. This is breaking CI.
>
> [tobycraig] Improved detection of inserted content in update panel
>
> [thompsonbry] Added the jetty-jmx and jetty-jndi dependencies.
> The jmx dependency is necessary if you want to export the MBeans from the jetty server to another host. The jndi dependency is just forward looking - jndi provides a flexible mechanism for configuring jetty.
>
> [thompsonbry] Syntax error in the HAConfig file.
>
> [thompsonbry] Added an RMI_PORT for the exporter for the HAGlue interface. This can be set from the startHAServices script.
>
> The ganglia policy needs to choose a target host in inverse proportion to the workload on that host.
>
> See #624 (HA LBS)
>
> [tobycraig] Advanced commands (monitor and analytic) enabled for update
>
> [thompsonbry] Replaced the old trac and wiki links in the release notes with the new trac and wiki links.
>
> [thompsonbry] Commit of release notes template for 1.3.1.
>
> [thompsonbry] Refactored the package for several LBS files.
>
> See #624 (HA LBS)
>
> [tobycraig] #891 - Added GAS namespace to shortcuts
>
> [tobycraig] #906 - Fixed namespace shortcut buttons for namespaces that don't end with #
>
> [mrpersonick] Ticket #907: bigdata quick start.
>
> [tobycraig] Resized logo
>
> [mrpersonick] got rid of IDoNotJoinService
>
> [thompsonbry] Modified the GangliaLBSPolicy to explicitly check for a null hostname (can be caused by a failed RMI).
>
> Modified the HA status page to include the proxy object for the RMI interface for the local service. I want to use this to diagnose situations where one service has a proxy for another but the proxy that it has does not respond and is, hence, probably dead. Note that moving into an error state on that service whose exported proxy is not responding is not enough to get the service to become responsive. This is probably because the error state does not unexport the proxy. However, it does not explain why/how a bad proxy got out there in the first place.
>
> See #624 (HA LBS)
>
> [thompsonbry] Published new version of bigdata-ganglia (1.0.2) with new APIs for GangliaService that are used by the GangliaLBSPolicy.
> The artifact has been pushed to the systap maven repository.
>
> See #624 (HA LBS)
>
> [jeremy_carroll] File missed from last commit
>
> [jeremy_carroll] Test and fix for trac904 FILTER EXISTS || TRUE. AbstractJoinGroupOptimizer needs to recurse into the filter expression looking for EXISTS
>
> [jeremy_carroll] Improve the print out of debug representations - in particular ensure that the content of FILTER EXISTS and FILTER NOT EXISTS are shown prior to the optimizer step that pulls the content out as a subquery.
> Also shorten the names of some annotations in some debug representations.
>
> [thompsonbry] javadoc
>
> [thompsonbry] Should have been synchronized on the hostTableRef, not the local variable named 'hostTable'.
>
> [thompsonbry] Reducing WARN => INFO.
>
> [thompsonbry] Rolling back the GangliaLBSPolicy default. I need to push out a release of the bigdata-ganglia.jar that declares some new methods, such as:
>
> com.bigdata.ganglia.GangliaService.getDefaultHostReportOn()[Ljava/lang/String
>
> See #624 (HA LBS)
>
> [thompsonbry] Enabling platform statistics collection and ganglia listener by default for the HA deployment mode to support the GangliaLBSPolicy
>
> [thompsonbry] removing the jetty thread min/max overrides. They are causing problems with Property vs SystemProperty....
>
> [thompsonbry] Changed the default policy to the GangliaLBSPolicy for testing.
>
> Added min/max threads environment variables to startHAServices. These are examined by jetty.xml.
>
> [thompsonbry] Bug fix to startHAServices.
>
> [thompsonbry] Adding to startHAServices. If the BIGDATA_HOSTNAME is not defined, then the actual canonical hostname will be used.
>
> See #886 (Provide workaround for bad reverse DNS setups)
>
> [thompsonbry] Added counter to ServiceScore to track the #of requests that are handed to each service by the LBS on a given service.
>
> See #624 (HA LBS).
>
> [thompsonbry] Optimized code path when there is a change in the set of discovered services for the HA load balancer to reuse an existing declaration for a service from the old table. This avoids some needless RMI.
>
> See #624 (HA LBS)
>
> [thompsonbry] Removed some comment blocks (dead code).
>
> [thompsonbry] Providing more information about the state of the HA LBS.
>
> See #624 (HA LBS)
>
> [thompsonbry] Changed the default LBS policy from NOP to round-robin for performance testing on HA3 cluster. I will also need to test the ganglia LBS policy.
>
> Added ability to view the current LBS policy to the HA status page.
>
> See #624 (HA LBS)
>
> [jeremy_carroll] New tests inspired by trac904 - unfortunately all passing
>
> [jeremy_carroll] Deleted incorrect comments
>
> [tobycraig] Improved error reporting on bad query
>
> [jeremy_carroll] Fixed trac 905
>
> [jeremy_carroll] Added test for trac905
>
> [jeremy_carroll] Fixes three failing test cases; and obviously not incorrect
>
> [tobycraig] Quick fix for status tab in HA mode. Ideally data would be sent as JSON rather than HTML.
>
> [tobycraig] Fixed Jetty path issues with HA server
>
> [tobycraig] Added Details option to Explain query
>
> [tobycraig] Changed Load to Update
>
> [thompsonbry] Refactoring of the HA Load Balancer to expose an interface that can be used by an application to take over the rewrite of the Request-URI when the request will be proxied to another service. See the new IHARequestURIRewriter interface and the new REWRITER init-param for the HALoadBalancerServlet.
>
> See #624 (HA LBS)
>
> [dmekonnen] Update to sync changes that occurred during the merger freeze. This version is identical with the version submitted to the Homebrew project on 4/30.
>
> ------------------------------------------
> [...truncated 304787 lines...]
> [junit] at java.lang.Thread.run(Thread.java:724) > [junit] WARN : 5784091 qtp1462886873-73924-acceptor-0-ServerConnector@28797bfb{HTTP/1.1}{0.0.0.0:57426} org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:472): > [junit] java.io.IOException: Too many open files > [junit] at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) > [junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241) > [junit] at org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:336) > [junit] at org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:467) > [junit] at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:607) > [junit] at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:536) > [junit] at java.lang.Thread.run(Thread.java:724) > [...identical "Too many open files" WARN/stack-trace entries elided...] > [junit] Test com.bigdata.rdf.sail.TestAll FAILED > > BUILD FAILED > <http://192.168.1.10:8080/job/bigdata-release-1.3.0/ws/BIGDATA_RELEASE_1_3_0/build.xml>:1996: The following error occurred while executing this line: > <http://192.168.1.10:8080/job/bigdata-release-1.3.0/ws/BIGDATA_RELEASE_1_3_0/build.xml>:2242: Unable to open file <http://192.168.1.10:8080/job/bigdata-release-1.3.0/ws/BIGDATA_RELEASE_1_3_0/ant-build/classes/test/test-results/TEST-com.bigdata.rdf.sail.webapp.TestAll.txt> > > Total time: 125 minutes 51 seconds > Build step 'Invoke Ant' marked build as failure > Archiving artifacts > Performing Post build task... 
> Could not match :JUNIT RUN COMPLETE : False > Logical operation result is FALSE > Skipping script : #~/rsync.sh release-1.3.0 > > rm -rf /var/tmp/zookeeper > END OF POST BUILD TASK : 0 > Recording test results > Failed to send e-mail to thompsonbry because no e-mail address is known, and no default e-mail domain is configured > Failed to send e-mail to dmekonnen because no e-mail address is known, and no default e-mail domain is configured > Failed to send e-mail to mrpersonick because no e-mail address is known, and no default e-mail domain is configured > Failed to send e-mail to tobycraig because no e-mail address is known, and no default e-mail domain is configured > Failed to send e-mail to jeremy_carroll because no e-mail address is known, and no default e-mail domain is configured > Failed to send e-mail to martyncutcher because no e-mail address is known, and no default e-mail domain is configured |
From: <br...@sy...> - 2014-05-08 12:14:01
|
We have a problem with the CI builds where there is a resource leak and the build is failing with an OOM because it is unable to allocate new threads. Code freeze until this is diagnosed and fixed. Bryan |
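The "Too many open files" errors in the build log above are the classic symptom of the kind of file-descriptor leak Bryan describes. As an illustrative sketch only (not part of the bigdata codebase; assumes a Linux host with /proc available), a CI watchdog could sample the process's descriptor usage against its limit and fail fast before the leak cascades into log noise:

```python
import os
import resource

def fd_usage():
    """Return (open_fds, soft_limit) for the current process (Linux /proc)."""
    soft, _hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    open_fds = len(os.listdir("/proc/self/fd"))
    return open_fds, soft

open_fds, soft = fd_usage()
print(f"open file descriptors: {open_fds} / soft limit {soft}")

# Fail fast well before the limit is reached, so the leak surfaces as one
# clear error instead of hundreds of 'Too many open files' stack traces.
if open_fds > 0.9 * soft:
    raise RuntimeError("file-descriptor usage approaching the soft limit")
```

Sampling this periodically during a long test run would show whether descriptor usage grows monotonically (a leak) or plateaus.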
From: Jeremy J C. <jj...@sy...> - 2014-05-08 03:24:12
|
It passed the point where my change broke things. We will see if it completes. Jeremy |
From: Jeremy J C. <jj...@sy...> - 2014-05-08 02:50:51
|
Unfortunately I did not wait until CI was working before my latest commit, which is currently indicated as the culprit: character encoding issues. Jeremy J Carroll Principal Architect Syapse, Inc. |
From: Jeremy J C. <jj...@sy...> - 2014-05-07 15:44:12
|
I have made a commit with the first version of the code for this. By design it is only linked in if you change your properties file - so you should not notice any change. Also by design, adding: com.bigdata.search.FullTextIndex.analyzerFactoryClass=com.bigdata.search.ConfigurableAnalyzerFactory to your bigdata.properties file should use the new code (for new journals) without any visible changes - though of course this may exercise new defects. That is, the default config for the ConfigurableAnalyzerFactory is the same as the only config for the DefaultAnalyzerFactory. Today I hope to write unit tests, and do a personal review to make the code more maintainable. Jeremy J Carroll Principal Architect Syapse, Inc. |
From: Mike P. <mi...@sy...> - 2014-05-06 14:20:54
|
Jeremy- None of the three are obviously better than the others. I'd probably just choose option (2), but I'd certainly test all three. I suspect (3) will be the slowest because my guess is that it will expand to two subqueries. I don't think any of them will only run one or the other pattern and abort at first success - I can't think of a clever way to do that offhand but I will think on it some more. -Mike On 5/2/14 12:13 PM, "Jeremy J Carroll" <jj...@sy...> wrote: >I implement fine-grained access control using additional conditions that >must hold on most queries: the corresponding SPARQL patterns get inserted >into (nearly) all queries at the right point(s). > >This is using a match that is currently expressed as a UNION, and I am >using a SELECT DISTINCT effectively the same as a FILTER EXISTS to make >sure that at least one of the conditions in the union holds. > >I can see three different ways of expressing this … (I simplify the >example) > > >1) > >pattern binding ?foo > >{ SELECT DISTINCT ?foo { > { ?foo eg:p eg:a } UNION { ?foo eg:q eg:b } >} } > >2) >pattern binding ?foo > >FILTER EXISTS { > { ?foo eg:p eg:a } UNION { ?foo eg:q eg:b } >} > >3) > >pattern binding ?foo > >FILTER ( EXISTS { ?foo eg:p eg:a } || EXISTS { ?foo eg:q eg:b } ) > >where actually I have three alternatives in my union, and two of them are >three triple matches > >Is any one of these likely to be faster than the others … (I do this a >lot). In particular, one implementation might evaluate in parallel, and >abort as soon as one of the alternatives is true. > >I guess also, if one of these constructs is better, should we have an >optimizer that maps the other(s) into it. > >[I have already convinced myself that: >a) for my data I wish this to be done after ?foo is bound rather than >before >b) I need to use query hints or explicit management of named subqueries >or similar to ensure that this happens >c) at some point in the future (a) and (b) may not be optimal for my >application (e.g. 
when we have a lot of users and each user has >relatively small amounts of data, and has privacy set fairly high: the >FILTER implements our fine-grained access control), but we will cross >that bridge when we get to it >] > >Jeremy |
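For readers skimming the archive, variant (2) from the quoted message can be written out as a complete query. The eg: namespace URI and the ?foo eg:label triple are hypothetical stand-ins for the "pattern binding ?foo" placeholder above, not part of the Syapse schema:

```sparql
PREFIX eg: <http://example.org/>

SELECT ?foo ?label
WHERE {
  # Hypothetical application pattern that binds ?foo.
  ?foo eg:label ?label .

  # Access-control condition: keep a binding for ?foo only if
  # at least one branch of the UNION holds for it.
  FILTER EXISTS {
    { ?foo eg:p eg:a } UNION { ?foo eg:q eg:b }
  }
}
```

Because the FILTER sits in the same group as the binding pattern, it is evaluated against each solution after ?foo is bound, which matches point (a) in Jeremy's message.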