From: <tho...@us...> - 2011-09-22 20:12:27
Revision: 5237
http://bigdata.svn.sourceforge.net/bigdata/?rev=5237&view=rev
Author: thompsonbry
Date: 2011-09-22 20:12:18 +0000 (Thu, 22 Sep 2011)

Log Message:
-----------
Continued cleanup of the license and notice files, including bug fixes to the
build.xml "stage" and related targets so that they correctly stage the license
and copyright notice files. Pre-aggregated the copyright notices and license
URLs for all dependencies into the top-level NOTICE file.

build.xml: concat does not support the "overwrite" attribute in ant 1.8 under
linux; it fails with an error. (A workaround sketch follows this message.)

build.xml: removed the unused targets release, release-prepare, upload, and
publish-api.

Re-verified the JAR and WAR structure. Neither has been live tested yet; I
plan to do that once the stage/deploy targets are working properly. The
various notice and license files are in place in the REL/DIST targets.
However, the ant-install target may require access to the LEGAL directories in
the checked out source, in which case it will fail. I will check this on a
linux workstation next.

Modified Paths:
--------------
    branches/BIGDATA_RELEASE_1_0_0/NOTICE
    branches/BIGDATA_RELEASE_1_0_0/build.xml
    branches/TERMS_REFACTOR_BRANCH/.classpath
    branches/TERMS_REFACTOR_BRANCH/NOTICE
    branches/TERMS_REFACTOR_BRANCH/build.xml

Added Paths:
-----------
    branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/README.txt
    branches/BIGDATA_RELEASE_1_0_0/dsi-utils/NOTICE
    branches/BIGDATA_RELEASE_1_0_0/lgpl-utils/NOTICE
    branches/BIGDATA_RELEASE_1_0_0/lgpl-utils/README.txt
    branches/TERMS_REFACTOR_BRANCH/dsi-utils/NOTICE
    branches/TERMS_REFACTOR_BRANCH/lgpl-utils/NOTICE
    branches/TERMS_REFACTOR_BRANCH/lgpl-utils/README.txt

Removed Paths:
-------------
    branches/BIGDATA_RELEASE_1_0_0/bigdata/LEGAL/README.txt
    branches/BIGDATA_RELEASE_1_0_0/bigdata/LICENSE.txt
    branches/BIGDATA_RELEASE_1_0_0/bigdata-gom/LICENSE.txt
    branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/LEGAL/README.txt
    branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/LICENSE.txt
    branches/BIGDATA_RELEASE_1_0_0/bigdata-rdf/LICENSE.txt
    branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/LEGAL/NOTICE
    branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/LEGAL/jetty-license.txt
    branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/LEGAL/sesame2.x-license.txt
    branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/LICENSE.txt
    branches/BIGDATA_RELEASE_1_0_0/ctc-striterators/LICENSE.txt
    branches/TERMS_REFACTOR_BRANCH/bigdata/LEGAL/README.txt
    branches/TERMS_REFACTOR_BRANCH/bigdata/LICENSE.txt
    branches/TERMS_REFACTOR_BRANCH/bigdata-gom/LICENSE.txt
    branches/TERMS_REFACTOR_BRANCH/bigdata-jini/LEGAL/README.txt
    branches/TERMS_REFACTOR_BRANCH/bigdata-jini/LICENSE.txt
    branches/TERMS_REFACTOR_BRANCH/bigdata-rdf/LICENSE.txt
    branches/TERMS_REFACTOR_BRANCH/bigdata-rdf/lib/NOTICE
    branches/TERMS_REFACTOR_BRANCH/bigdata-rdf/lib/README.txt
    branches/TERMS_REFACTOR_BRANCH/bigdata-sails/LEGAL/NOTICE
    branches/TERMS_REFACTOR_BRANCH/bigdata-sails/LEGAL/README.txt
    branches/TERMS_REFACTOR_BRANCH/bigdata-sails/LEGAL/sesame2.x-license.txt
    branches/TERMS_REFACTOR_BRANCH/bigdata-sails/LICENSE.txt
    branches/TERMS_REFACTOR_BRANCH/ctc-striterators/LICENSE.txt

Modified: branches/BIGDATA_RELEASE_1_0_0/NOTICE
===================================================================
--- branches/BIGDATA_RELEASE_1_0_0/NOTICE	2011-09-22 15:15:09 UTC (rev 5236)
+++ branches/BIGDATA_RELEASE_1_0_0/NOTICE	2011-09-22 20:12:18 UTC (rev 5237)
@@ -1,5 +1,5 @@
-Copyright (C) SYSTAP, LLC 2006-2008. All rights reserved.
+Copyright (C) SYSTAP, LLC 2006-2011. All rights reserved.
 Contact: SYSTAP, LLC
@@ -19,3 +19,102 @@
 You should have received a copy of the GNU General Public License
 along with this program; if not, write to the Free Software
 Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+----
+
+This product includes software developed by The Apache Software Foundation (http://www.apache.org/).
+License: http://www.apache.org/licenses/LICENSE-2.0
+----
+
+This product includes software from the openrdf project (http://www.openrdf.org/).
+Copyright Aduna (http://www.aduna-software.com/) © 2001-2011. All rights reserved.
+License: http://www.openrdf.org/download.jsp
+----
+
+This product includes software from the colt project (http://acs.lbl.gov/software/colt/).
+Copyright © 1999 CERN - European Organization for Nuclear Research.
+License: http://acs.lbl.gov/software/colt/license.html
+----
+
+This product includes software from the fastutil project (http://fastutil.dsi.unimi.it/).
+Copyright (C) 2002-2011 Sebastiano Vigna
+License: http://www.apache.org/licenses/LICENSE-2.0.html
+----
+
+This product includes software from the dsiutils project (http://dsiutils.dsi.unimi.it/).
+Copyright (C) 2002-2009 Sebastiano Vigna
+License: http://www.gnu.org/licenses/lgpl-2.1.html
+----
+
+This product includes software from the flot project (http://code.google.com/p/flot/).
+Released under the MIT license by IOLA, December 2007.
+License: http://www.opensource.org/licenses/mit-license.php
+----
+
+This product includes software from the jquery project (http://jquery.com/).
+Copyright (c) 2011 John Resig, http://jquery.com/
+License: https://github.com/jquery/jquery/blob/master/MIT-LICENSE.txt
+----
+
+This product includes software from the slf4j project (http://www.slf4j.org/).
+Copyright (c) 2004-2008 QOS.ch. All rights reserved.
+License: http://www.slf4j.org/license.html
+----
+
+This product includes software from the ICU project (http://site.icu-project.org/).
+Copyright (c) 1995-2011 International Business Machines Corporation and others. All rights reserved.
+License: http://source.icu-project.org/repos/icu/icu/trunk/license.html
+----
+
+This product includes software from the nxparser project (http://sw.deri.org/2006/08/nxparser/).
+Copyright (c) 2005-2010, Aidan Hogan, Andreas Harth, Juergen Umbrich. All rights reserved.
+License: http://sw.deri.org/2006/08/nxparser/license.txt
+----
+
+This product includes software from the high-scale-lib project (https://sourceforge.net/projects/high-scale-lib/).
+Written by Cliff Click and released to the public domain, as explained at
+http://creativecommons.org/licenses/publicdomain.
+----
+
+This product includes software from http://elonen.iki.fi/code/nanohttpd/.
+Copyright (c) 2001,2005-2007 Jarno Elonen (el...@ik..., http://iki.fi/elonen/)
+License: http://elonen.iki.fi/code/nanohttpd/#license
+----
+
+This product includes software from http://www.eclipse.org/jetty.
+==============================================================
+ Jetty Web Container
+ Copyright 1995-2009 Mort Bay Consulting Pty Ltd
+==============================================================
+
+The Jetty Web Container is Copyright Mort Bay Consulting Pty Ltd
+unless otherwise noted. It is dual licensed under the Apache 2.0
+license and the Eclipse 1.0 license. Jetty may be distributed under
+either license.
+
+The javax.servlet package used was sourced from the Apache
+Software Foundation and is distributed under the Apache 2.0
+license.
+
+The UnixCrypt.java code implements the one way cryptography used by
+Unix systems for simple password protection. Copyright 1996 Aki Yoshida,
+modified April 2001 by Iris Van den Broeke, Daniel Deville.
+Permission to use, copy, modify and distribute UnixCrypt
+for non-commercial or commercial purposes and without fee is
+granted provided that the copyright notice appears in all copies.
+
+License: http://www.apache.org/licenses/LICENSE-2.0
+----
+
+This product includes software developed by the infinispan project (http://www.jboss.org/infinispan).
+Copyright 2010 Red Hat Inc. and/or its affiliates and other
+contributors as indicated by the @author tags. All rights reserved.
+See the copyright.txt in the distribution for a full listing of
+individual contributors.
+License: http://www.opensource.org/licenses/lgpl-2.1.php
+----
+
+This product includes software developed by the JUnit project (http://www.junit.org/).
+License: http://junit.sourceforge.net/cpl-v10.html
+----
+
+Source code for included Open Source software is available from the respective
+websites of the copyright holders of the included software.

Deleted: branches/BIGDATA_RELEASE_1_0_0/bigdata/LEGAL/README.txt
===================================================================
--- branches/BIGDATA_RELEASE_1_0_0/bigdata/LEGAL/README.txt	2011-09-22 15:15:09 UTC (rev 5236)
+++ branches/BIGDATA_RELEASE_1_0_0/bigdata/LEGAL/README.txt	2011-09-22 20:12:18 UTC (rev 5237)
@@ -1,2 +0,0 @@
-The lib directory contains some bundled library dependencies. The licenses for
-bundled dependencies are found in this directory (LEGAL).

Deleted: branches/BIGDATA_RELEASE_1_0_0/bigdata/LICENSE.txt
===================================================================
--- branches/BIGDATA_RELEASE_1_0_0/bigdata/LICENSE.txt	2011-09-22 15:15:09 UTC (rev 5236)
+++ branches/BIGDATA_RELEASE_1_0_0/bigdata/LICENSE.txt	2011-09-22 20:12:18 UTC (rev 5237)
@@ -1,354 +0,0 @@
-The GNU General Public License (GPL)
-Version 2, June 1991
-
-Copyright (C) 1989, 1991 Free Software Foundation, Inc.
-59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-Everyone is permitted to copy and distribute verbatim copies
-of this license document, but changing it is not allowed.
-
-Preamble
-
-The licenses for most software are designed to take away your
-freedom to share and change it. By contrast, the GNU General Public
-License is intended to guarantee your freedom to share and change free
-software--to make sure the software is free for all its users. This
-General Public License applies to most of the Free Software
-Foundation's software and to any other program whose authors commit to
-using it. (Some other Free Software Foundation software is covered by
-the GNU Library General Public License instead.) You can apply it to
-your programs, too.
-
-When we speak of free software, we are referring to freedom, not
-price. Our General Public Licenses are designed to make sure that you
-have the freedom to distribute copies of free software (and charge for
-this service if you wish), that you receive source code or can get it
-if you want it, that you can change the software or use pieces of it
-in new free programs; and that you know you can do these things.
-
-To protect your rights, we need to make restrictions that forbid
-anyone to deny you these rights or to ask you to surrender the rights.
-These restrictions translate to certain responsibilities for you if you
-distribute copies of the software, or if you modify it.
-
-For example, if you distribute copies of such a program, whether
-gratis or for a fee, you must give the recipients all the rights that
-you have. You must make sure that they, too, receive or can get the
-source code. And you must show them these terms so they know their
-rights.
-
-We protect your rights with two steps: (1) copyright the software, and
-(2) offer you this license which gives you legal permission to copy,
-distribute and/or modify the software.
-
-Also, for each author's protection and ours, we want to make certain
-that everyone understands that there is no warranty for this free
-software. If the software is modified by someone else and passed on, we
-want its recipients to know that what they have is not the original, so
-that any problems introduced by others will not reflect on the original
-authors' reputations.
-
-Finally, any free program is threatened constantly by software
-patents. We wish to avoid the danger that redistributors of a free
-program will individually obtain patent licenses, in effect making the
-program proprietary. To prevent this, we have made it clear that any
-patent must be licensed for everyone's free use or not licensed at all.
-
-The precise terms and conditions for copying, distribution and
-modification follow.
-
-TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
-
-0. This License applies to any program or other work which contains
-a notice placed by the copyright holder saying it may be distributed
-under the terms of this General Public License. The "Program", below,
-refers to any such program or work, and a "work based on the Program"
-means either the Program or any derivative work under copyright law:
-that is to say, a work containing the Program or a portion of it,
-either verbatim or with modifications and/or translated into another
-language. (Hereinafter, translation is included without limitation in
-the term "modification".) Each licensee is addressed as "you".
-
-Activities other than copying, distribution and modification are not
-covered by this License; they are outside its scope. The act of
-running the Program is not restricted, and the output from the Program
-is covered only if its contents constitute a work based on the
-Program (independent of having been made by running the Program).
-Whether that is true depends on what the Program does.
-
-1. You may copy and distribute verbatim copies of the Program's
-source code as you receive it, in any medium, provided that you
-conspicuously and appropriately publish on each copy an appropriate
-copyright notice and disclaimer of warranty; keep intact all the
-notices that refer to this License and to the absence of any warranty;
-and give any other recipients of the Program a copy of this License
-along with the Program.
-
-You may charge a fee for the physical act of transferring a copy, and
-you may at your option offer warranty protection in exchange for a fee.
-
-2. You may modify your copy or copies of the Program or any portion
-of it, thus forming a work based on the Program, and copy and
-distribute such modifications or work under the terms of Section 1
-above, provided that you also meet all of these conditions:
-
-a) You must cause the modified files to carry prominent notices
-stating that you changed the files and the date of any change.
-
-b) You must cause any work that you distribute or publish, that in
-whole or in part contains or is derived from the Program or any
-part thereof, to be licensed as a whole at no charge to all third
-parties under the terms of this License.
-
-c) If the modified program normally reads commands interactively
-when run, you must cause it, when started running for such
-interactive use in the most ordinary way, to print or display an
-announcement including an appropriate copyright notice and a
-notice that there is no warranty (or else, saying that you provide
-a warranty) and that users may redistribute the program under
-these conditions, and telling the user how to view a copy of this
-License. (Exception: if the Program itself is interactive but
-does not normally print such an announcement, your work based on
-the Program is not required to print an announcement.)
-
-These requirements apply to the modified work as a whole. If
-identifiable sections of that work are not derived from the Program,
-and can be reasonably considered independent and separate works in
-themselves, then this License, and its terms, do not apply to those
-sections when you distribute them as separate works. But when you
-distribute the same sections as part of a whole which is a work based
-on the Program, the distribution of the whole must be on the terms of
-this License, whose permissions for other licensees extend to the
-entire whole, and thus to each and every part regardless of who wrote it.
-
-Thus, it is not the intent of this section to claim rights or contest
-your rights to work written entirely by you; rather, the intent is to
-exercise the right to control the distribution of derivative or
-collective works based on the Program.
-
-In addition, mere aggregation of another work not based on the Program
-with the Program (or with a work based on the Program) on a volume of
-a storage or distribution medium does not bring the other work under
-the scope of this License.
-
-3. You may copy and distribute the Program (or a work based on it,
-under Section 2) in object code or executable form under the terms of
-Sections 1 and 2 above provided that you also do one of the following:
-
-a) Accompany it with the complete corresponding machine-readable
-source code, which must be distributed under the terms of Sections
-1 and 2 above on a medium customarily used for software interchange; or,
-
-b) Accompany it with a written offer, valid for at least three
-years, to give any third party, for a charge no more than your
-cost of physically performing source distribution, a complete
-machine-readable copy of the corresponding source code, to be
-distributed under the terms of Sections 1 and 2 above on a medium
-customarily used for software interchange; or,
-
-c) Accompany it with the information you received as to the offer
-to distribute corresponding source code. (This alternative is
-allowed only for noncommercial distribution and only if you
-received the program in object code or executable form with such
-an offer, in accord with Subsection b above.)
-
-The source code for a work means the preferred form of the work for
-making modifications to it. For an executable work, complete source
-code means all the source code for all modules it contains, plus any
-associated interface definition files, plus the scripts used to
-control compilation and installation of the executable.
-However, as a special exception, the source code distributed need not
-include anything that is normally distributed (in either source or
-binary form) with the major components (compiler, kernel, and so on)
-of the operating system on which the executable runs, unless that
-component itself accompanies the executable.
-
-If distribution of executable or object code is made by offering
-access to copy from a designated place, then offering equivalent
-access to copy the source code from the same place counts as
-distribution of the source code, even though third parties are not
-compelled to copy the source along with the object code.
-
-4. You may not copy, modify, sublicense, or distribute the Program
-except as expressly provided under this License. Any attempt
-otherwise to copy, modify, sublicense or distribute the Program is
-void, and will automatically terminate your rights under this License.
-However, parties who have received copies, or rights, from you under
-this License will not have their licenses terminated so long as such
-parties remain in full compliance.
-
-5. You are not required to accept this License, since you have not
-signed it. However, nothing else grants you permission to modify or
-distribute the Program or its derivative works. These actions are
-prohibited by law if you do not accept this License. Therefore, by
-modifying or distributing the Program (or any work based on the
-Program), you indicate your acceptance of this License to do so, and
-all its terms and conditions for copying, distributing or modifying
-the Program or works based on it.
-
-6. Each time you redistribute the Program (or any work based on the
-Program), the recipient automatically receives a license from the
-original licensor to copy, distribute or modify the Program subject to
-these terms and conditions. You may not impose any further
-restrictions on the recipients' exercise of the rights granted herein.
-You are not responsible for enforcing compliance by third parties to
-this License.
-
-7. If, as a consequence of a court judgment or allegation of patent
-infringement or for any other reason (not limited to patent issues),
-conditions are imposed on you (whether by court order, agreement or
-otherwise) that contradict the conditions of this License, they do not
-excuse you from the conditions of this License. If you cannot
-distribute so as to satisfy simultaneously your obligations under this
-License and any other pertinent obligations, then as a consequence you
-may not distribute the Program at all. For example, if a patent
-license would not permit royalty-free redistribution of the Program by
-all those who receive copies directly or indirectly through you, then
-the only way you could satisfy both it and this License would be to
-refrain entirely from distribution of the Program.
-
-If any portion of this section is held invalid or unenforceable under
-any particular circumstance, the balance of the section is intended to
-apply and the section as a whole is intended to apply in other
-circumstances.
-
-It is not the purpose of this section to induce you to infringe any
-patents or other property right claims or to contest validity of any
-such claims; this section has the sole purpose of protecting the
-integrity of the free software distribution system, which is
-implemented by public license practices.
-Many people have made generous contributions to the wide range of
-software distributed through that system in reliance on consistent
-application of that system; it is up to the author/donor to decide if
-he or she is willing to distribute software through any other system
-and a licensee cannot impose that choice.
-
-This section is intended to make thoroughly clear what is believed to
-be a consequence of the rest of this License.
-
-8. If the distribution and/or use of the Program is restricted in
-certain countries either by patents or by copyrighted interfaces, the
-original copyright holder who places the Program under this License
-may add an explicit geographical distribution limitation excluding
-those countries, so that distribution is permitted only in or among
-countries not thus excluded. In such case, this License incorporates
-the limitation as if written in the body of this License.
-
-9. The Free Software Foundation may publish revised and/or new versions
-of the General Public License from time to time. Such new versions will
-be similar in spirit to the present version, but may differ in detail to
-address new problems or concerns.
-
-Each version is given a distinguishing version number. If the Program
-specifies a version number of this License which applies to it and "any
-later version", you have the option of following the terms and conditions
-either of that version or of any later version published by the Free
-Software Foundation. If the Program does not specify a version number of
-this License, you may choose any version ever published by the Free Software
-Foundation.
-
-10. If you wish to incorporate parts of the Program into other free
-programs whose distribution conditions are different, write to the author
-to ask for permission. For software which is copyrighted by the Free
-Software Foundation, write to the Free Software Foundation; we sometimes
-make exceptions for this. Our decision will be guided by the two goals
-of preserving the free status of all derivatives of our free software and
-of promoting the sharing and reuse of software generally.
-
-NO WARRANTY
-
-11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
-FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
-OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
-PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
-OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
-MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
-TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
-PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
-REPAIR OR CORRECTION.
-
-12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
-WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
-REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
-INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
-OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
-TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
-YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
-PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
-POSSIBILITY OF SUCH DAMAGES.
-
-END OF TERMS AND CONDITIONS
-
-How to Apply These Terms to Your New Programs
-
-If you develop a new program, and you want it to be of the greatest
-possible use to the public, the best way to achieve this is to make it
-free software which everyone can redistribute and change under these terms.
-
-To do so, attach the following notices to the program. It is safest
-to attach them to the start of each source file to most effectively
-convey the exclusion of warranty; and each file should have at least
-the "copyright" line and a pointer to where the full notice is found.
-
-One line to give the program's name and a brief idea of what it does.
-
-Copyright (C) <year> <name of author>
-
-This program is free software; you can redistribute it and/or modify
-it under the terms of the GNU General Public License as published by
-the Free Software Foundation; either version 2 of the License, or
-(at your option) any later version.
-
-This program is distributed in the hope that it will be useful,
-but WITHOUT ANY WARRANTY; without even the implied warranty of
-MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-GNU General Public License for more details.
-
-You should have received a copy of the GNU General Public License
-along with this program; if not, write to the Free Software
-Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
-Also add information on how to contact you by electronic and paper mail.
-
-If the program is interactive, make it output a short notice like this
-when it starts in an interactive mode:
-
-Gnomovision version 69, Copyright (C) year name of author
-Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
-This is free software, and you are welcome to redistribute it
-under certain conditions; type `show c' for details.
-
-The hypothetical commands `show w' and `show c' should show the appropriate
-parts of the General Public License. Of course, the commands you use may
-be called something other than `show w' and `show c'; they could even be
-mouse-clicks or menu items--whatever suits your program.
-
-You should also get your employer (if you work as a programmer) or your
-school, if any, to sign a "copyright disclaimer" for the program, if
-necessary. Here is a sample; alter the names:
-
-Yoyodyne, Inc., hereby disclaims all copyright interest in the program
-`Gnomovision' (which makes passes at compilers) written by James Hacker.
-
-signature of Ty Coon, 1 April 1989
-
-Ty Coon, President of Vice
-
-This General Public License does not permit incorporating your program into
-proprietary programs. If your program is a subroutine library, you may
-consider it more useful to permit linking proprietary applications with the
-library. If this is what you want to do, use the GNU Library General
-Public License instead of this License.

Deleted: branches/BIGDATA_RELEASE_1_0_0/bigdata-gom/LICENSE.txt
===================================================================
Deleted: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/LEGAL/README.txt
===================================================================
--- branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/LEGAL/README.txt	2011-09-22 15:15:09 UTC (rev 5236)
+++ branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/LEGAL/README.txt	2011-09-22 20:12:18 UTC (rev 5237)
@@ -1,8 +0,0 @@
-The lib directory contains some bundled library dependencies. The licenses for
-bundled dependencies are found in this directory (LEGAL).
-
-This module has a dependency on Jini 2.1 and Apache Zookeeper.
-
-There is also a dependency on the bigdata module.
-
-Other dependencies exist and are described in the bigdata module.

Deleted: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/LICENSE.txt
===================================================================
If the distribution and/or use of the Program is restricted in -certain countries either by patents or by copyrighted interfaces, the -original copyright holder who places the Program under this License -may add an explicit geographical distribution limitation excluding -those countries, so that distribution is permitted only in or among -countries not thus excluded. In such case, this License incorporates -the limitation as if written in the body of this License. - -9. The Free Software Foundation may publish revised and/or new versions -of the General Public License from time to time. Such new versions will -be similar in spirit to the present version, but may differ in detail to -address new problems or concerns. - -Each version is given a distinguishing version number. If the Program -specifies a version number of this License which applies to it and "any -later version", you have the option of following the terms and conditions -either of that version or of any later version published by the Free -Software Foundation. If the Program does not specify a version number of -this License, you may choose any version ever published by the Free Software -Foundation. - -10. If you wish to incorporate parts of the Program into other free -programs whose distribution conditions are different, write to the author -to ask for permission. For software which is copyrighted by the Free -Software Foundation, write to the Free Software Foundation; we sometimes -make exceptions for this. Our decision will be guided by the two goals -of preserving the free status of all derivatives of our free software and -of promoting the sharing and reuse of software generally. - -NO WARRANTY - -11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY -FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN -OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES -PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED -OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF -MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOS... [truncated message content] |
From: <tho...@us...> - 2011-09-23 15:49:58
|
Revision: 5243 http://bigdata.svn.sourceforge.net/bigdata/?rev=5243&view=rev
Author: thompsonbry
Date: 2011-09-23 15:49:51 +0000 (Fri, 23 Sep 2011)

Log Message:
-----------
Continued license/NOTICE cleanup

Removed Paths:
-------------
branches/BIGDATA_RELEASE_1_0_0/bigdata/LEGAL/NOTICE
branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/LEGAL/NOTICE
branches/BIGDATA_RELEASE_1_0_0/dsi-utils/LEGAL/NOTICE
branches/BIGDATA_RELEASE_1_0_0/lgpl-utils/LEGAL/NOTICE
branches/BIGDATA_RELEASE_1_0_0/src/resources/HOWTO/
branches/TERMS_REFACTOR_BRANCH/bigdata/LEGAL/NOTICE
branches/TERMS_REFACTOR_BRANCH/bigdata-jini/LEGAL/NOTICE
branches/TERMS_REFACTOR_BRANCH/bigdata-rdf/LEGAL/NOTICE
branches/TERMS_REFACTOR_BRANCH/dsi-utils/LEGAL/NOTICE
branches/TERMS_REFACTOR_BRANCH/junit-ext/NOTICE
branches/TERMS_REFACTOR_BRANCH/lgpl-utils/LEGAL/NOTICE
branches/TERMS_REFACTOR_BRANCH/src/resources/HOWTO/

Deleted: branches/BIGDATA_RELEASE_1_0_0/bigdata/LEGAL/NOTICE
===================================================================
--- branches/BIGDATA_RELEASE_1_0_0/bigdata/LEGAL/NOTICE	2011-09-23 15:45:55 UTC (rev 5242)
+++ branches/BIGDATA_RELEASE_1_0_0/bigdata/LEGAL/NOTICE	2011-09-23 15:49:51 UTC (rev 5243)
@@ -1,5 +0,0 @@
-
-This product includes software developed at
-The Apache Software Foundation (http://www.apache.org/)."
-
-Copyright 2009 The Apache Software Foundation

Deleted: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/LEGAL/NOTICE
===================================================================

Deleted: branches/BIGDATA_RELEASE_1_0_0/dsi-utils/LEGAL/NOTICE
===================================================================
--- branches/BIGDATA_RELEASE_1_0_0/dsi-utils/LEGAL/NOTICE	2011-09-23 15:45:55 UTC (rev 5242)
+++ branches/BIGDATA_RELEASE_1_0_0/dsi-utils/LEGAL/NOTICE	2011-09-23 15:49:51 UTC (rev 5243)
@@ -1,5 +0,0 @@
-
-This module contains software derived from dsiutils.
-
-Copyright (C) 2002-2009 Sebastiano Vigna
-

Deleted: branches/BIGDATA_RELEASE_1_0_0/lgpl-utils/LEGAL/NOTICE
===================================================================
--- branches/BIGDATA_RELEASE_1_0_0/lgpl-utils/LEGAL/NOTICE	2011-09-23 15:45:55 UTC (rev 5242)
+++ branches/BIGDATA_RELEASE_1_0_0/lgpl-utils/LEGAL/NOTICE	2011-09-23 15:49:51 UTC (rev 5243)
@@ -1,25 +0,0 @@
-
-This module contains several LGPL utility classes, most of which were derived
-from other projects and modified to support specific features. Where possible,
-the changes are propagated back to the original projects and these placeholder
-classes are then replaced by an updated dependency on the corresponding project.
-The classes are compiled into their own lgpl-utils dependency.
-
----
-Portions of this software are derived from the fastutil project.
-
-Copyright (C) 2002-2011 Sebastiano Vigna
-
----
-Portions of this software are derived from the dsiutils project.
-
-Copyright (C) 2002-2009 Sebastiano Vigna
-
----
-Portions of this software are derived from the infinispan project.
-
-Copyright 2010 Red Hat Inc. and/or its affiliates and other
-contributors as indicated by the @author tags. All rights reserved.
-See the copyright.txt in the distribution for a full listing of
-individual contributors.
-

Deleted: branches/TERMS_REFACTOR_BRANCH/bigdata/LEGAL/NOTICE
===================================================================
--- branches/TERMS_REFACTOR_BRANCH/bigdata/LEGAL/NOTICE	2011-09-23 15:45:55 UTC (rev 5242)
+++ branches/TERMS_REFACTOR_BRANCH/bigdata/LEGAL/NOTICE	2011-09-23 15:49:51 UTC (rev 5243)
@@ -1,5 +0,0 @@
-
-This product includes software developed at
-The Apache Software Foundation (http://www.apache.org/)."
-
-Copyright 2009 The Apache Software Foundation

Deleted: branches/TERMS_REFACTOR_BRANCH/bigdata-jini/LEGAL/NOTICE
===================================================================

Deleted: branches/TERMS_REFACTOR_BRANCH/bigdata-rdf/LEGAL/NOTICE
===================================================================
--- branches/TERMS_REFACTOR_BRANCH/bigdata-rdf/LEGAL/NOTICE	2011-09-23 15:45:55 UTC (rev 5242)
+++ branches/TERMS_REFACTOR_BRANCH/bigdata-rdf/LEGAL/NOTICE	2011-09-23 15:49:51 UTC (rev 5243)
@@ -1,6 +0,0 @@
-
-Portions of this software are derived from Sesame.
-
-Copyright Aduna (http://www.aduna-software.com/) © 2001-2011 All rights reserved.
-
-

Deleted: branches/TERMS_REFACTOR_BRANCH/dsi-utils/LEGAL/NOTICE
===================================================================
--- branches/TERMS_REFACTOR_BRANCH/dsi-utils/LEGAL/NOTICE	2011-09-23 15:45:55 UTC (rev 5242)
+++ branches/TERMS_REFACTOR_BRANCH/dsi-utils/LEGAL/NOTICE	2011-09-23 15:49:51 UTC (rev 5243)
@@ -1,5 +0,0 @@
-
-This module contains software derived from dsiutils.
-
-Copyright (C) 2002-2009 Sebastiano Vigna
-

Deleted: branches/TERMS_REFACTOR_BRANCH/junit-ext/NOTICE
===================================================================

Deleted: branches/TERMS_REFACTOR_BRANCH/lgpl-utils/LEGAL/NOTICE
===================================================================
--- branches/TERMS_REFACTOR_BRANCH/lgpl-utils/LEGAL/NOTICE	2011-09-23 15:45:55 UTC (rev 5242)
+++ branches/TERMS_REFACTOR_BRANCH/lgpl-utils/LEGAL/NOTICE	2011-09-23 15:49:51 UTC (rev 5243)
@@ -1,25 +0,0 @@
-
-This module contains several LGPL utility classes, most of which were derived
-from other projects and modified to support specific features. Where possible,
-the changes are propagated back to the original projects and these placeholder
-classes are then replaced by an updated dependency on the corresponding project.
-The classes are compiled into their own lgpl-utils dependency.
-
----
-Portions of this software are derived from the fastutil project.
-
-Copyright (C) 2002-2011 Sebastiano Vigna
-
----
-Portions of this software are derived from the dsiutils project.
-
-Copyright (C) 2002-2009 Sebastiano Vigna
-
----
-Portions of this software are derived from the infinispan project.
-
-Copyright 2010 Red Hat Inc. and/or its affiliates and other
-contributors as indicated by the @author tags. All rights reserved.
-See the copyright.txt in the distribution for a full listing of
-individual contributors.
-

This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
From: <tho...@us...> - 2011-09-24 19:16:20
|
Revision: 5255 http://bigdata.svn.sourceforge.net/bigdata/?rev=5255&view=rev
Author: thompsonbry
Date: 2011-09-24 19:16:12 +0000 (Sat, 24 Sep 2011)

Log Message:
-----------
Changed to Apache River 2.2. This looks like it will be a smooth transition.
The River 2.2 distribution also contains more jars than jini 2.1 did.

- Removed the jini 2.1 jars.

- Added the river 2.2 jars. There are new jars in river which were not part
of the jini 2.1 distribution. The new jars appear only in the lib directory
(versus the lib-ext and lib-dl directories). The new jars are: asm-3.2.jar,
asm-commons-3.2.jar, checkconfigurationfile.jar, checkser.jar, classdep.jar,
computedigest.jar, computehttpmdcodebase.jar, destroy.jar, envcheck.jar,
extra.jar, group.jar, jarwrapper.jar, jsk-debug-policy.jar,
outrigger-snaplogstore.jar, phoenix-group.jar, phoenix-init.jar, phoenix.jar,
preferredlistgen.jar, serviceui.jar, sharedvm.jar.

- No changes were required to build.xml or build.properties.

- Renamed the license files for Apache ZooKeeper and Apache River to clarify
their relationship to the jars. I have not renamed the directory structure
within the bigdata-jini project as that would cause any number of problems
with the deployment scripts.

- Updated the Depends.java file (in the TERMS branch also).

Modified Paths:
--------------
branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/Depends.java
branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/browser.jar
branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/classserver.jar
branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/fiddler.jar
branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/jini-core.jar
branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/jini-ext.jar
branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/jsk-lib.jar
branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/jsk-platform.jar
branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/jsk-resources.jar
branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/mahalo.jar
branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/mercury.jar
branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/norm.jar
branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/outrigger.jar
branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/reggie.jar
branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/start.jar
branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/sun-util.jar
branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/tools.jar
branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib-dl/browser-dl.jar
branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib-dl/fiddler-dl.jar
branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib-dl/group-dl.jar
branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib-dl/jsk-dl.jar
branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib-dl/mahalo-dl.jar
branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib-dl/mercury-dl.jar
branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib-dl/norm-dl.jar
branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib-dl/outrigger-dl.jar
branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib-dl/phoenix-dl.jar
branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib-dl/reggie-dl.jar
branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib-dl/sdm-dl.jar
branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib-ext/jsk-policy.jar
branches/TERMS_REFACTOR_BRANCH/bigdata/src/java/com/bigdata/Depends.java

Added Paths:
-----------
branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/LEGAL/apache-river-license.txt branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/LEGAL/apache-zookeeper-license.txt branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/asm-3.2.jar branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/asm-commons-3.2.jar branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/checkconfigurationfile.jar branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/checkser.jar branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/classdep.jar branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/computedigest.jar branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/computehttpmdcodebase.jar branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/destroy.jar branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/envcheck.jar branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/extra.jar branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/group.jar branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/jarwrapper.jar branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/jsk-debug-policy.jar branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/outrigger-snaplogstore.jar branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/phoenix-group.jar branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/phoenix-init.jar branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/phoenix.jar branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/preferredlistgen.jar branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/serviceui.jar branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/sharedvm.jar Removed Paths: ------------- branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/LEGAL/jini-license.txt branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/JINI-README.txt Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/Depends.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/Depends.java 2011-09-24 18:26:11 UTC (rev 5254) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/Depends.java 2011-09-24 19:16:12 UTC (rev 5255) @@ -137,7 +137,7 @@ } - private final static Dep jini = new ApacheDep("jini", + private final static Dep jini = new ApacheDep("river", "http://river.apache.org/"); private final static Dep zookeeper = new ApacheDep("zookeeper", Added: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/LEGAL/apache-river-license.txt =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/LEGAL/apache-river-license.txt (rev 0) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/LEGAL/apache-river-license.txt 2011-09-24 19:16:12 UTC (rev 5255) @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. 
For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. 
Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. 
This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
Property changes on: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/LEGAL/apache-river-license.txt ___________________________________________________________________ Added: svn:keywords + Id Date Revision Author HeadURL Added: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/LEGAL/apache-zookeeper-license.txt =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/LEGAL/apache-zookeeper-license.txt (rev 0) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/LEGAL/apache-zookeeper-license.txt 2011-09-24 19:16:12 UTC (rev 5255) @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. 
For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. 
The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. 
+ + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. Property changes on: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/LEGAL/apache-zookeeper-license.txt ___________________________________________________________________ Added: svn:keywords + Id Date Revision Author HeadURL Deleted: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/LEGAL/jini-license.txt =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/LEGAL/jini-license.txt 2011-09-24 18:26:11 UTC (rev 5254) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/LEGAL/jini-license.txt 2011-09-24 19:16:12 UTC (rev 5255) @@ -1,201 +0,0 @@ - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). 
- - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. 
You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. 
In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. Deleted: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/JINI-README.txt =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/JINI-README.txt 2011-09-24 18:26:11 UTC (rev 5254) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/JINI-README.txt 2011-09-24 19:16:12 UTC (rev 5255) @@ -1,2 +0,0 @@ -This project redistributes Jini 2.1. The directory structure for the JARs -mirrors that of the Jini distribution. 
Added: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/asm-3.2.jar =================================================================== (Binary files differ) Property changes on: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/asm-3.2.jar ___________________________________________________________________ Added: svn:mime-type + application/octet-stream Added: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/asm-commons-3.2.jar =================================================================== (Binary files differ) Property changes on: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/asm-commons-3.2.jar ___________________________________________________________________ Added: svn:mime-type + application/octet-stream Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/browser.jar =================================================================== (Binary files differ) Added: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/checkconfigurationfile.jar =================================================================== (Binary files differ) Property changes on: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/checkconfigurationfile.jar ___________________________________________________________________ Added: svn:mime-type + application/octet-stream Added: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/checkser.jar =================================================================== (Binary files differ) Property changes on: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/checkser.jar ___________________________________________________________________ Added: svn:mime-type + application/octet-stream Added: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/classdep.jar =================================================================== (Binary files differ) Property changes on: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/classdep.jar ___________________________________________________________________ Added: svn:mime-type + application/octet-stream Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/classserver.jar =================================================================== (Binary files differ) Added: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/computedigest.jar =================================================================== (Binary files differ) Property changes on: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/computedigest.jar ___________________________________________________________________ Added: svn:mime-type + application/octet-stream Added: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/computehttpmdcodebase.jar =================================================================== (Binary files differ) Property changes on: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/computehttpmdcodebase.jar ___________________________________________________________________ Added: svn:mime-type + application/octet-stream Added: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/destroy.jar =================================================================== (Binary files differ) Property changes on: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/destroy.jar ___________________________________________________________________ Added: svn:mime-type + application/octet-stream Added: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/envcheck.jar =================================================================== 
(Binary files differ) Property changes on: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/envcheck.jar ___________________________________________________________________ Added: svn:mime-type + application/octet-stream Added: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/extra.jar =================================================================== (Binary files differ) Property changes on: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/extra.jar ___________________________________________________________________ Added: svn:mime-type + application/octet-stream Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/fiddler.jar =================================================================== (Binary files differ) Added: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/group.jar =================================================================== (Binary files differ) Property changes on: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/group.jar ___________________________________________________________________ Added: svn:mime-type + application/octet-stream Added: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/jarwrapper.jar =================================================================== (Binary files differ) Property changes on: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/jarwrapper.jar ___________________________________________________________________ Added: svn:mime-type + application/octet-stream Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/jini-core.jar =================================================================== (Binary files differ) Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/jini-ext.jar =================================================================== (Binary files differ) Added: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/jsk-debug-policy.jar =================================================================== (Binary files differ) Property changes on: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/jsk-debug-policy.jar ___________________________________________________________________ Added: svn:mime-type + application/octet-stream Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/jsk-lib.jar =================================================================== (Binary files differ) Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/jsk-platform.jar =================================================================== (Binary files differ) Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/jsk-resources.jar =================================================================== (Binary files differ) Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/mahalo.jar =================================================================== (Binary files differ) Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/mercury.jar =================================================================== (Binary files differ) Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/norm.jar =================================================================== (Binary files differ) Added: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/outrigger-snaplogstore.jar =================================================================== (Binary files differ) Property changes on: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/outrigger-snaplogstore.jar 
___________________________________________________________________ Added: svn:mime-type + application/octet-stream Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/outrigger.jar =================================================================== (Binary files differ) Added: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/phoenix-group.jar =================================================================== (Binary files differ) Property changes on: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/phoenix-group.jar ___________________________________________________________________ Added: svn:mime-type + application/octet-stream Added: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/phoenix-init.jar =================================================================== (Binary files differ) Property changes on: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/phoenix-init.jar ___________________________________________________________________ Added: svn:mime-type + application/octet-stream Added: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/phoenix.jar =================================================================== (Binary files differ) Property changes on: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/phoenix.jar ___________________________________________________________________ Added: svn:mime-type + application/octet-stream Added: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/preferredlistgen.jar =================================================================== (Binary files differ) Property changes on: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/preferredlistgen.jar ___________________________________________________________________ Added: svn:mime-type + application/octet-stream Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/reggie.jar =================================================================== (Binary files differ) Added: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/serviceui.jar =================================================================== (Binary files differ) Property changes on: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/serviceui.jar ___________________________________________________________________ Added: svn:mime-type + application/octet-stream Added: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/sharedvm.jar =================================================================== (Binary files differ) Property changes on: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/sharedvm.jar ___________________________________________________________________ Added: svn:mime-type + application/octet-stream Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/start.jar =================================================================== (Binary files differ) Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/sun-util.jar =================================================================== (Binary files differ) Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib/tools.jar =================================================================== (Binary files differ) Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib-dl/browser-dl.jar =================================================================== (Binary files differ) Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib-dl/fiddler-dl.jar 
=================================================================== (Binary files differ) Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib-dl/group-dl.jar =================================================================== (Binary files differ) Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib-dl/jsk-dl.jar =================================================================== (Binary files differ) Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib-dl/mahalo-dl.jar =================================================================== (Binary files differ) Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib-dl/mercury-dl.jar =================================================================== (Binary files differ) Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib-dl/norm-dl.jar =================================================================== (Binary files differ) Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib-dl/outrigger-dl.jar =================================================================== (Binary files differ) Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib-dl/phoenix-dl.jar =================================================================== (Binary files differ) Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib-dl/reggie-dl.jar =================================================================== (Binary files differ) Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib-dl/sdm-dl.jar =================================================================== (Binary files differ) Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-jini/lib/jini/lib-ext/jsk-policy.jar =================================================================== (Binary files differ) Modified: branches/TERMS_REFACTOR_BRANCH/bigdata/src/java/com/bigdata/Depends.java =================================================================== --- branches/TERMS_REFACTOR_BRANCH/bigdata/src/java/com/bigdata/Depends.java 2011-09-24 18:26:11 UTC (rev 5254) +++ branches/TERMS_REFACTOR_BRANCH/bigdata/src/java/com/bigdata/Depends.java 2011-09-24 19:16:12 UTC (rev 5255) @@ -137,7 +137,7 @@ } - private final static Dep jini = new ApacheDep("jini", + private final static Dep jini = new ApacheDep("river", "http://river.apache.org/"); private final static Dep zookeeper = new ApacheDep("zookeeper", This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
From: <tho...@us...> - 2011-09-25 23:36:11
|
Revision: 5256
          http://bigdata.svn.sourceforge.net/bigdata/?rev=5256&view=rev
Author:   thompsonbry
Date:     2011-09-25 23:36:05 +0000 (Sun, 25 Sep 2011)

Log Message:
-----------
Modified the ant script and the bigdata shell scripts to allow the use of either lockfile or dotlockfile, as configured in build.properties. This supports deployment under Ubuntu/Debian without requiring procmail (you can just install the liblockfile1 package).

Modified Paths:
--------------
branches/BIGDATA_RELEASE_1_0_0/build.properties
branches/BIGDATA_RELEASE_1_0_0/build.xml
branches/BIGDATA_RELEASE_1_0_0/src/resources/scripts/bigdata
branches/BIGDATA_RELEASE_1_0_0/src/resources/scripts/bigdataenv
branches/BIGDATA_RELEASE_1_0_0/src/resources/scripts/bigdataup
branches/TERMS_REFACTOR_BRANCH/build.properties
branches/TERMS_REFACTOR_BRANCH/build.xml
branches/TERMS_REFACTOR_BRANCH/src/resources/scripts/bigdata
branches/TERMS_REFACTOR_BRANCH/src/resources/scripts/bigdataenv
branches/TERMS_REFACTOR_BRANCH/src/resources/scripts/bigdataup

Modified: branches/BIGDATA_RELEASE_1_0_0/build.properties
===================================================================
--- branches/BIGDATA_RELEASE_1_0_0/build.properties	2011-09-24 19:16:12 UTC (rev 5255)
+++ branches/BIGDATA_RELEASE_1_0_0/build.properties	2011-09-25 23:36:05 UTC (rev 5256)
@@ -183,6 +183,20 @@
 #umask.shared=117
 #umask.local=177
 
+# The command to obtain the lock for the bigdata subsystem lock file. Both
+# lockfile (procmail) and dotlockfile (liblockfile1) will work. The "-1" for
+# lockfile is the sleep time (seconds). The -r 1 in both cases has the same
+# semantics and is the #of times to retry. The commands have different wait /
+# sleep defaults (8 seconds for lockfile versus 5 seconds for dotlockfile), but
+# you cannot override this for dotlockfile. The bigdata script only tests for
+# an exit status of ZERO (0) to indicate that the lock was obtained.
+#
+# lockfile is part of procmail
+LOCK_CMD=lockfile -r 1 -1
+#
+# dotlockfile is in the liblockfile1 package.
+#LOCK_CMD=/usr/bin/dotlockfile -r 1
+
 # The bigdata subsystem lock file. The user MUST be able to read/write this file
 # on each host.
Therefore, if you are not installing as root this will need to be # a file within the user's home directory or some directory which exists on each Modified: branches/BIGDATA_RELEASE_1_0_0/build.xml =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/build.xml 2011-09-24 19:16:12 UTC (rev 5255) +++ branches/BIGDATA_RELEASE_1_0_0/build.xml 2011-09-25 23:36:05 UTC (rev 5256) @@ -549,6 +549,7 @@ <replacefilter token="@CONFIG_DIR@" value="${install.config.dir}" /> <replacefilter token="@INSTALL_USER@" value="${install.user}" /> <replacefilter token="@INSTALL_GROUP@" value="${install.group}" /> + <replacefilter token="@LOCK_CMD@" value="${LOCK_CMD}" /> <replacefilter token="@LOCK_FILE@" value="${LOCK_FILE}" /> <replacefilter token="@BIGDATA_CONFIG@" value="${bigdata.config}" /> <replacefilter token="@JINI_CONFIG@" value="${jini.config}" /> @@ -587,6 +588,7 @@ <replacefilter token="@CONFIG_DIR@" value="${install.config.dir}" /> <replacefilter token="@INSTALL_USER@" value="${install.user}" /> <replacefilter token="@INSTALL_GROUP@" value="${install.group}" /> + <replacefilter token="@LOCK_CMD@" value="${LOCK_CMD}" /> <replacefilter token="@LOCK_FILE@" value="${LOCK_FILE}" /> <replacefilter token="@BIGDATA_CONFIG@" value="${bigdata.config}" /> <replacefilter token="@JINI_CONFIG@" value="${jini.config}" /> @@ -1253,6 +1255,7 @@ <replacefilter token="@CONFIG_DIR@" value="${install.config.dir}" /> <replacefilter token="@INSTALL_USER@" value="${install.user}" /> <replacefilter token="@INSTALL_GROUP@" value="${install.group}" /> + <replacefilter token="@LOCK_CMD@" value="${LOCK_CMD}" /> <replacefilter token="@LOCK_FILE@" value="${LOCK_FILE}" /> <replacefilter token="@BIGDATA_CONFIG@" value="${bigdata.config}" /> <replacefilter token="@JINI_CONFIG@" value="${jini.config}" /> @@ -1292,6 +1295,7 @@ <replacefilter token="@CONFIG_DIR@" value="${install.config.dir}" /> <replacefilter token="@INSTALL_USER@" value="${install.user}" /> <replacefilter token="@INSTALL_GROUP@" value="${install.group}" /> + <replacefilter token="@LOCK_CMD@" value="${LOCK_CMD}" /> <replacefilter token="@LOCK_FILE@" value="${LOCK_FILE}" /> <replacefilter token="@BIGDATA_CONFIG@" value="${bigdata.config}" /> <replacefilter token="@JINI_CONFIG@" value="${jini.config}" /> Modified: branches/BIGDATA_RELEASE_1_0_0/src/resources/scripts/bigdata =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/src/resources/scripts/bigdata 2011-09-24 19:16:12 UTC (rev 5255) +++ branches/BIGDATA_RELEASE_1_0_0/src/resources/scripts/bigdata 2011-09-25 23:36:05 UTC (rev 5256) @@ -40,6 +40,7 @@ # $NAS - A directory on a shared volume (log files). # $LAS - A directory on a local volume (persistent service state). # $pidFile - The bigdata services manager pid is written on this file. +# $lockCmd - The command to obtain the bigdata subsystem lock. # $lockFile - The bigdata subsystem lock file, e.g., /var/lock/subsys/bigdata. # $ruleLog - The file on which bigdata rule execution statistics are logged. # $eventLog - The file on which bigdata events are logged. @@ -132,7 +133,7 @@ # associated with a file. # if [ ! -f "$eventLog" -o ! -f "$errorLog" -o ! -f "$detailLog" -o ! -f "$ruleLog" ]; then - lockfile -r 1 -1 "$logLockFile" + $lockCmd "$logLockFile" if [ 0 == $? ]; then # Event log (where events generated by each service or client get logged). if [ ! 
-f "$eventLog" ]; then Modified: branches/BIGDATA_RELEASE_1_0_0/src/resources/scripts/bigdataenv =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/src/resources/scripts/bigdataenv 2011-09-24 19:16:12 UTC (rev 5255) +++ branches/BIGDATA_RELEASE_1_0_0/src/resources/scripts/bigdataenv 2011-09-25 23:36:05 UTC (rev 5256) @@ -99,10 +99,12 @@ # # Other things required by the 'bigdata' script. # +# $lockCmd - The command used to obtain the bigdata subsystem lock. # $lockFile - The bigdata subsystem lock file. # $pidFile - The bigdata services manager pid is written on this file. # $stateFile - The runstate for the bigdata services managers is read from this file. # +export lockCmd="@LOCK_CMD@" export lockFile=@LOCK_FILE@ export pidFile=${LAS}/pid export stateFile=@STATE_FILE@ Modified: branches/BIGDATA_RELEASE_1_0_0/src/resources/scripts/bigdataup =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/src/resources/scripts/bigdataup 2011-09-24 19:16:12 UTC (rev 5255) +++ branches/BIGDATA_RELEASE_1_0_0/src/resources/scripts/bigdataup 2011-09-25 23:36:05 UTC (rev 5256) @@ -51,7 +51,7 @@ fi # Create the subsystem lock file IFF it does not exist. -lockfile -r 1 -1 "$lockFile" +$lockCmd "$lockFile" if [ 0 != $? ]; then echo $"`date` : `hostname` : could not obtain lock." exit 1; Modified: branches/TERMS_REFACTOR_BRANCH/build.properties =================================================================== --- branches/TERMS_REFACTOR_BRANCH/build.properties 2011-09-24 19:16:12 UTC (rev 5255) +++ branches/TERMS_REFACTOR_BRANCH/build.properties 2011-09-25 23:36:05 UTC (rev 5256) @@ -183,6 +183,20 @@ #umask.shared=117 #umask.local=177 +# The command to obtain the lock for the bigdata subsystem lock file. Both +# lockfile (procmail) and dotlockfile (liblockfile1) will work. The "-1" for +# lock file is the sleeptime (seconds). The -r 1 in both cases has the same +# semantics and is the #of times to retry. The commands have different wait / +# sleep defaults (8 seconds for lockfile versus 5 seconds for dotlockfile), but +# you can not override this for dotlock file. The bigdata script only tests for +# an exit status of ZERO (0) to indicate that the lock was obtained. +# +# lockfile is part of procmail +LOCK_CMD=lockfile -r 1 -1 +# +# dotlockfile is in the liblockfile1 package. +#LOCK_CMD=/usr/bin/dotlockfile -r 1 + # The bigdata subsystem lock file. The user MUST be able to read/write this file # on each host. 
Therefore, if you are not installing as root this will need to be # a file within the user's home directory or some directory which exists on each Modified: branches/TERMS_REFACTOR_BRANCH/build.xml =================================================================== --- branches/TERMS_REFACTOR_BRANCH/build.xml 2011-09-24 19:16:12 UTC (rev 5255) +++ branches/TERMS_REFACTOR_BRANCH/build.xml 2011-09-25 23:36:05 UTC (rev 5256) @@ -542,6 +542,7 @@ <replacefilter token="@CONFIG_DIR@" value="${install.config.dir}" /> <replacefilter token="@INSTALL_USER@" value="${install.user}" /> <replacefilter token="@INSTALL_GROUP@" value="${install.group}" /> + <replacefilter token="@LOCK_CMD@" value="${LOCK_CMD}" /> <replacefilter token="@LOCK_FILE@" value="${LOCK_FILE}" /> <replacefilter token="@BIGDATA_CONFIG@" value="${bigdata.config}" /> <replacefilter token="@JINI_CONFIG@" value="${jini.config}" /> @@ -580,6 +581,7 @@ <replacefilter token="@CONFIG_DIR@" value="${install.config.dir}" /> <replacefilter token="@INSTALL_USER@" value="${install.user}" /> <replacefilter token="@INSTALL_GROUP@" value="${install.group}" /> + <replacefilter token="@LOCK_CMD@" value="${LOCK_CMD}" /> <replacefilter token="@LOCK_FILE@" value="${LOCK_FILE}" /> <replacefilter token="@BIGDATA_CONFIG@" value="${bigdata.config}" /> <replacefilter token="@JINI_CONFIG@" value="${jini.config}" /> @@ -1241,6 +1243,7 @@ <replacefilter token="@CONFIG_DIR@" value="${install.config.dir}" /> <replacefilter token="@INSTALL_USER@" value="${install.user}" /> <replacefilter token="@INSTALL_GROUP@" value="${install.group}" /> + <replacefilter token="@LOCK_CMD@" value="${LOCK_CMD}" /> <replacefilter token="@LOCK_FILE@" value="${LOCK_FILE}" /> <replacefilter token="@BIGDATA_CONFIG@" value="${bigdata.config}" /> <replacefilter token="@JINI_CONFIG@" value="${jini.config}" /> @@ -1280,6 +1283,7 @@ <replacefilter token="@CONFIG_DIR@" value="${install.config.dir}" /> <replacefilter token="@INSTALL_USER@" value="${install.user}" /> <replacefilter token="@INSTALL_GROUP@" value="${install.group}" /> + <replacefilter token="@LOCK_CMD@" value="${LOCK_CMD}" /> <replacefilter token="@LOCK_FILE@" value="${LOCK_FILE}" /> <replacefilter token="@BIGDATA_CONFIG@" value="${bigdata.config}" /> <replacefilter token="@JINI_CONFIG@" value="${jini.config}" /> Modified: branches/TERMS_REFACTOR_BRANCH/src/resources/scripts/bigdata =================================================================== --- branches/TERMS_REFACTOR_BRANCH/src/resources/scripts/bigdata 2011-09-24 19:16:12 UTC (rev 5255) +++ branches/TERMS_REFACTOR_BRANCH/src/resources/scripts/bigdata 2011-09-25 23:36:05 UTC (rev 5256) @@ -40,6 +40,7 @@ # $NAS - A directory on a shared volume (log files). # $LAS - A directory on a local volume (persistent service state). # $pidFile - The bigdata services manager pid is written on this file. +# $lockCmd - The command to obtain the bigdata subsystem lock. # $lockFile - The bigdata subsystem lock file, e.g., /var/lock/subsys/bigdata. # $ruleLog - The file on which bigdata rule execution statistics are logged. # $eventLog - The file on which bigdata events are logged. @@ -132,7 +133,7 @@ # associated with a file. # if [ ! -f "$eventLog" -o ! -f "$errorLog" -o ! -f "$detailLog" -o ! -f "$ruleLog" ]; then - lockfile -r 1 -1 "$logLockFile" + $lockCmd "$logLockFile" if [ 0 == $? ]; then # Event log (where events generated by each service or client get logged). if [ ! 
-f "$eventLog" ]; then Modified: branches/TERMS_REFACTOR_BRANCH/src/resources/scripts/bigdataenv =================================================================== --- branches/TERMS_REFACTOR_BRANCH/src/resources/scripts/bigdataenv 2011-09-24 19:16:12 UTC (rev 5255) +++ branches/TERMS_REFACTOR_BRANCH/src/resources/scripts/bigdataenv 2011-09-25 23:36:05 UTC (rev 5256) @@ -99,10 +99,12 @@ # # Other things required by the 'bigdata' script. # +# $lockCmd - The command used to obtain the bigdata subsystem lock. # $lockFile - The bigdata subsystem lock file. # $pidFile - The bigdata services manager pid is written on this file. # $stateFile - The runstate for the bigdata services managers is read from this file. # +export lockCmd="@LOCK_CMD@" export lockFile=@LOCK_FILE@ export pidFile=${LAS}/pid export stateFile=@STATE_FILE@ Modified: branches/TERMS_REFACTOR_BRANCH/src/resources/scripts/bigdataup =================================================================== --- branches/TERMS_REFACTOR_BRANCH/src/resources/scripts/bigdataup 2011-09-24 19:16:12 UTC (rev 5255) +++ branches/TERMS_REFACTOR_BRANCH/src/resources/scripts/bigdataup 2011-09-25 23:36:05 UTC (rev 5256) @@ -51,7 +51,7 @@ fi # Create the subsystem lock file IFF it does not exist. -lockfile -r 1 -1 "$lockFile" +$lockCmd "$lockFile" if [ 0 != $? ]; then echo $"`date` : `hostname` : could not obtain lock." exit 1; This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
From: <tho...@us...> - 2011-09-26 14:08:17
|
Revision: 5259
          http://bigdata.svn.sourceforge.net/bigdata/?rev=5259&view=rev
Author:   thompsonbry
Date:     2011-09-26 14:08:06 +0000 (Mon, 26 Sep 2011)

Log Message:
-----------
Further cleanup of the bigdata-perf/bsbm3 directory. The benchmark procedure is now documented at [1].

[1] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=BSBM

Modified Paths:
--------------
branches/BIGDATA_RELEASE_1_0_0/bigdata-perf/bsbm3/README.txt
branches/BIGDATA_RELEASE_1_0_0/bigdata-perf/bsbm3/RWStore.properties
branches/BIGDATA_RELEASE_1_0_0/bigdata-perf/bsbm3/build.properties
branches/TERMS_REFACTOR_BRANCH/bigdata-perf/bsbm3/README.txt
branches/TERMS_REFACTOR_BRANCH/bigdata-perf/bsbm3/RWStore.properties
branches/TERMS_REFACTOR_BRANCH/bigdata-perf/bsbm3/build.properties

Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-perf/bsbm3/README.txt
===================================================================
--- branches/BIGDATA_RELEASE_1_0_0/bigdata-perf/bsbm3/README.txt	2011-09-26 13:39:09 UTC (rev 5258)
+++ branches/BIGDATA_RELEASE_1_0_0/bigdata-perf/bsbm3/README.txt	2011-09-26 14:08:06 UTC (rev 5259)
@@ -1,14 +1,18 @@
-This directory contains a setup for running BSBM v3 against bigdata. The main
-files are:
+This directory contains a setup for running BSBM v3 against bigdata.
 
-- bsbmtools - the bsbm3 source distribution.
+Please see [1] and [2] for guidance on setting up and running BSBM against bigdata.
 
+[1] http://www4.wiwiss.fu-berlin.de/bizer/BerlinSPARQLBenchmark
+[2] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=BSBM
+
+The files in this directory include:
+
 - build.properties - configuration properties for the ant script.
 
-- build.xml - an ant script which may be used to generate a BSBM data set, load
-              the data set into a bigdata database instance, start a SPARQL
-              end point for that database instance, and run the BSBM benchmark
-              against that SPARQL end point.
+- build.xml - an ant script which may be used to load a generated data set
+              into a local bigdata database instance and start a SPARQL
+              end point for that database instance. You will then run the
+              benchmark against that SPARQL end point.
 
 - RWStore.properties - configuration properties for a bigdata database instance
                        suitable for BSBM and backed by the RW persistence engine
@@ -20,67 +24,3 @@
                        suitable for BSBM and backed by the WORM persistence engine
                        (single machine write once, read many bigdata database).
-
-Other requirements include:
-
-- A 64-bit OS and a 64-bit server JVM. We have tested most extensible with Oracle
-  JDK 1.6.0_17.
-
-- Apache ant (version 1.8.0+).
-
-- Bigdata (check it out from SVN).
-
-To get started:
-
-1. Edit bigdata-perf/bsbm3/build.properties.
-
-1. In the top-level directory of the bigdata source tree, review build.properties
-   and then do:
-
-   a. "ant bundleJar".
-
-   Note: You will need to rerun this ant target any time you update the code
-   from SVN or if you make edits to the source tree.
-
-2. Change to the bigdata-perf/bsbm3 directory:
-
-   a. "ant run-generator" (generates the BSBM data set).
-
-   b. "ant run-load" (loads the generated data set into a bigdata instance).
-
-   c. "ant start-nano-server" (starts the SPARQL end point).
-
-   d. "ant run-query" (runs the benchmark).
-
-There are a variety of other ant tasks in that directory which may be used to
-run load and run the BSBM qualification data set, etc.
- -Performance should be extremely good for the reduced query mix, which can be -enabled by editing: - - bigdata-perf/bsbm3/bsbmtools/queries/explore/ignoreQueries - -For the reduced query mix, "ignoreQueries" should contain "5 6". For the full -query mix, it should be an empty file (the reduced query mix is enabled by -default in SVN). - -Notes on the queries: - -The static query optimizer and vectored pipelined joins do a great job on most -of the BSBM queries. However, there are two queries which do not do so well out -of the box: - -Query 5 has a bad join plan using the static query optimizer. Good performance -for query 5 can be achieved by replacing the contents of: - - bigdata-perf/bsbm3/bsbmtools/queries/explore/query5.txt - - bigdata-perf/bsbm3/bsbmtools/queries/explore/query5-explicit-order.txt - -The original version of query5 has also been saved as query5-original.txt - -Query 6 is uses a REGEX filter. Bigdata does not have index support for REGEX, -so this winds up visiting a lot of data and then filtering using the REGEX. This -drags the overall performance down dramatically. It is possible to integrate -bigdata with Lucene, which does support indexed regular expressions, but that is -not something which works out of the box. Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-perf/bsbm3/RWStore.properties =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-perf/bsbm3/RWStore.properties 2011-09-26 13:39:09 UTC (rev 5258) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-perf/bsbm3/RWStore.properties 2011-09-26 14:08:06 UTC (rev 5259) @@ -70,6 +70,3 @@ # 10000 is default. com.bigdata.rdf.sail.bufferCapacity=100000 - -# direct sesame to bop translation. -com.bigdata.rdf.sail.newEvalStrategy=true Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-perf/bsbm3/build.properties =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-perf/bsbm3/build.properties 2011-09-26 13:39:09 UTC (rev 5258) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-perf/bsbm3/build.properties 2011-09-26 14:08:06 UTC (rev 5259) @@ -52,24 +52,19 @@ # The namespace of the KB instance (multiple KBs can be in the same database). bsbm.namespace=BSBM_${bsbm.pc} -# The namespace of the KB instance for a qualification trial. -bsbm.qualNamespace=qual - # The data directory for the generated benchmark files. +bsbm.baseDir=${bsbmtools.dir}/td_100m # Laptop #bsbm.baseDir=d:/bigdata-perf-analysis/bsbm3/bsbm_${bsbm.pc} # Server #bsbm.baseDir=/nas/data/bsbm/bsbm_${bsbm.pc} -bsbm.baseDir=/root/workspace/bsbm-trunk/td_100m +#bsbm.baseDir=/root/workspace/bsbm-trunk/td_100m # Windows 2008 Server #bsbm.baseDir=c:/usr/local/data/bsbm/bsbm_${bsbm.pc} -# Where to put the XML results files. -bsbm.resultsDir=${bsbm.baseDir}/.. - # The directory in which the generator writes its data. -#bsbm.dataDir=${bsbm.baseDir}/td_data -bsbm.dataDir=/root/workspace/bsbm-trunk +bsbm.dataDir=${bsbm.baseDir}/td_data +#bsbm.dataDir=/root/workspace/bsbm-trunk # Generate ntriples. bsbm.outputType=nt @@ -98,57 +93,6 @@ #bsbm.journalFile=f:/data/bsbm/bsbm_${bsbm.pc}/bigdata-bsbm.${journalMode}.jnl # -# Qualification of the system under test. -# -#bsbm.qualDataDir=d:/bigdata-perf-analysis/bsbm/bsbm-qual -#bsbm.qualDataDir=/nas/data/bsbm/bsbm-qual -#bsbm.qualJournal=${bsbm.qualDataDir}/bigdata-bsbm.${journalMode}.jnl - -# -# Query parameters. -# - -# The #of warmup query mixes to present. 
-bsbm.w=50 - -# The #of query mixes to present once the database has been warmed up. -bsbm.runs=500 - -# The #of concurrent clients for query. -bsbm.mt=8 - -## -## The randomizer seeds for various studies. -## -# -# 1. Load data into the store. -# 2. Shutdown store, clear OS caches, restart store. -# 3. Run ramp-up. -# -# Original study. -# -# 4. Execute single-client test run (500 mixes performance measurement, randomizer seed: 808080) -#bsbm.seed=808080 -# 5. Execute multiple-client test runs. (2, 4, 8 and 64 clients each 500 query mixes, randomizer seeds: 863528, 888326, 975932, 487411) -# 6. Execute test run with reduced query mix. (repeat steps 2 to 5 with reduced query mix and different randomizer seeds). -# -# Nov 2009 study. -# -# 4. Execute single-client test run (500 mixes performance measurement, randomizer seed: 808080) -#bsbm.seed=808080 -# 5. Execute multiple-client test runs. ( 4 clients, 500 query mixes, randomizer seed: 863528) -#bsbm.seed=863528 -# 6. Execute test run with reduced query mix. (repeat steps 2 to 4 with reduced query mix and different randomizer seed 919191) -#bsbm.seed=919191 - -# Test with random seed (the seed is taken from the system clock). This is good for "cold cache" tests. -bsbm.seed=random - -# Use a specific seed (hot disk cache run with only JVM tuning effects). -#bsbm.seed=1273687925860 -#bsbm.seed=919191 - -# # Profiler parameters. # Modified: branches/TERMS_REFACTOR_BRANCH/bigdata-perf/bsbm3/README.txt =================================================================== --- branches/TERMS_REFACTOR_BRANCH/bigdata-perf/bsbm3/README.txt 2011-09-26 13:39:09 UTC (rev 5258) +++ branches/TERMS_REFACTOR_BRANCH/bigdata-perf/bsbm3/README.txt 2011-09-26 14:08:06 UTC (rev 5259) @@ -1,12 +1,10 @@ This directory contains a setup for running BSBM v3 against bigdata. -In addition to the files in this directory, you will need the bsbmtools -distribution. This is available from -http://www4.wiwiss.fu-berlin.de/bizer/BerlinSPARQLBenchmark. Please consult -bsbmtools and the online documentation for BSBM for current information on -how to generate test data sets and the correct procedure for running the -benchmark. +Please see [1] and [2] for guidance on setting up and running BSBM against bigdata. +[1] http://www4.wiwiss.fu-berlin.de/bizer/BerlinSPARQLBenchmark +[2] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=BSBM + The files in this directory include: - build.properties - configuration properties for the ant script. @@ -26,65 +24,3 @@ suitable for BSBM and backed by the WORM persistence engine (single machine write once, read many bigdata database). - -Other requirements include: - -- A 64-bit OS and a 64-bit server JVM. We have tested most extensible with Oracle - JDK 1.6.0_17. - -- Apache ant (version 1.8.0+). - -- Bigdata (check it out from SVN). - -To get started: - -0. Generate a suitable data set. - -2. Edit bigdata-perf/bsbm3/build.properties. - -3. In the top-level directory of the bigdata source tree, review build.properties - and then do: - - a. "ant bundleJar". - - Note: You will need to rerun this ant target any time you update the code - from SVN or if you make edits to the source tree. - -4. Change to the bigdata-perf/bsbm3 directory: - - b. "ant run-load" (loads the generated data set into a bigdata instance). - - c. "ant start-nano-server" (starts the SPARQL end point). - -5. Follow the procedure for BSBM tools to run the benchmark against the SPARQL - end point. 
- -Performance should be extremely good for the reduced query mix, which can be -enabled by editing: - - bigdata-perf/bsbm3/bsbmtools/queries/explore/ignoreQueries - -For the reduced query mix, "ignoreQueries" should contain "5 6". For the full -query mix, it should be an empty file (the reduced query mix is enabled by -default in SVN). - -Notes on the queries: - -The static query optimizer and vectored pipelined joins do a great job on most -of the BSBM queries. However, there are two queries which do not do so well out -of the box: - -Query 5 has a bad join plan using the static query optimizer. Good performance -for query 5 can be achieved by replacing the contents of: - - bigdata-perf/bsbm3/bsbmtools/queries/explore/query5.txt - - bigdata-perf/bsbm3/bsbmtools/queries/explore/query5-explicit-order.txt - -The original version of query5 has also been saved as query5-original.txt - -Query 6 is uses a REGEX filter. Bigdata does not have index support for REGEX, -so this winds up visiting a lot of data and then filtering using the REGEX. This -drags the overall performance down dramatically. It is possible to integrate -bigdata with Lucene, which does support indexed regular expressions, but that is -not something which works out of the box. Modified: branches/TERMS_REFACTOR_BRANCH/bigdata-perf/bsbm3/RWStore.properties =================================================================== --- branches/TERMS_REFACTOR_BRANCH/bigdata-perf/bsbm3/RWStore.properties 2011-09-26 13:39:09 UTC (rev 5258) +++ branches/TERMS_REFACTOR_BRANCH/bigdata-perf/bsbm3/RWStore.properties 2011-09-26 14:08:06 UTC (rev 5259) @@ -70,6 +70,3 @@ # 10000 is default. com.bigdata.rdf.sail.bufferCapacity=100000 - -# direct sesame to bop translation. -com.bigdata.rdf.sail.newEvalStrategy=true Modified: branches/TERMS_REFACTOR_BRANCH/bigdata-perf/bsbm3/build.properties =================================================================== --- branches/TERMS_REFACTOR_BRANCH/bigdata-perf/bsbm3/build.properties 2011-09-26 13:39:09 UTC (rev 5258) +++ branches/TERMS_REFACTOR_BRANCH/bigdata-perf/bsbm3/build.properties 2011-09-26 14:08:06 UTC (rev 5259) @@ -52,24 +52,19 @@ # The namespace of the KB instance (multiple KBs can be in the same database). bsbm.namespace=BSBM_${bsbm.pc} -# The namespace of the KB instance for a qualification trial. -bsbm.qualNamespace=qual - # The data directory for the generated benchmark files. +bsbm.baseDir=${bsbmtools.dir}/td_100m # Laptop #bsbm.baseDir=d:/bigdata-perf-analysis/bsbm3/bsbm_${bsbm.pc} # Server #bsbm.baseDir=/nas/data/bsbm/bsbm_${bsbm.pc} -bsbm.baseDir=/root/workspace/bsbm-trunk/td_100m +#bsbm.baseDir=/root/workspace/bsbm-trunk/td_100m # Windows 2008 Server #bsbm.baseDir=c:/usr/local/data/bsbm/bsbm_${bsbm.pc} -# Where to put the XML results files. -bsbm.resultsDir=${bsbm.baseDir}/.. - # The directory in which the generator writes its data. -#bsbm.dataDir=${bsbm.baseDir}/td_data -bsbm.dataDir=/root/workspace/bsbm-trunk +bsbm.dataDir=${bsbm.baseDir}/td_data +#bsbm.dataDir=/root/workspace/bsbm-trunk # Generate ntriples. bsbm.outputType=nt @@ -98,57 +93,6 @@ #bsbm.journalFile=f:/data/bsbm/bsbm_${bsbm.pc}/bigdata-bsbm.${journalMode}.jnl # -# Qualification of the system under test. -# -#bsbm.qualDataDir=d:/bigdata-perf-analysis/bsbm/bsbm-qual -#bsbm.qualDataDir=/nas/data/bsbm/bsbm-qual -#bsbm.qualJournal=${bsbm.qualDataDir}/bigdata-bsbm.${journalMode}.jnl - -# -# Query parameters. -# - -# The #of warmup query mixes to present. 
-bsbm.w=50 - -# The #of query mixes to present once the database has been warmed up. -bsbm.runs=500 - -# The #of concurrent clients for query. -bsbm.mt=8 - -## -## The randomizer seeds for various studies. -## -# -# 1. Load data into the store. -# 2. Shutdown store, clear OS caches, restart store. -# 3. Run ramp-up. -# -# Original study. -# -# 4. Execute single-client test run (500 mixes performance measurement, randomizer seed: 808080) -#bsbm.seed=808080 -# 5. Execute multiple-client test runs. (2, 4, 8 and 64 clients each 500 query mixes, randomizer seeds: 863528, 888326, 975932, 487411) -# 6. Execute test run with reduced query mix. (repeat steps 2 to 5 with reduced query mix and different randomizer seeds). -# -# Nov 2009 study. -# -# 4. Execute single-client test run (500 mixes performance measurement, randomizer seed: 808080) -#bsbm.seed=808080 -# 5. Execute multiple-client test runs. ( 4 clients, 500 query mixes, randomizer seed: 863528) -#bsbm.seed=863528 -# 6. Execute test run with reduced query mix. (repeat steps 2 to 4 with reduced query mix and different randomizer seed 919191) -#bsbm.seed=919191 - -# Test with random seed (the seed is taken from the system clock). This is good for "cold cache" tests. -bsbm.seed=random - -# Use a specific seed (hot disk cache run with only JVM tuning effects). -#bsbm.seed=1273687925860 -#bsbm.seed=919191 - -# # Profiler parameters. # This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
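[Editor's note] With the journal loaded and the SPARQL end point started per the revised README above, any HTTP client can sanity-check the server before running the BSBM driver against it. A minimal probe is sketched below; the port 8080 and the /sparql/ path are placeholder assumptions (use your bsbm.nanoServerPort value and the path your NanoSparqlServer build actually serves), and the SELECT query is arbitrary.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class SparqlProbe {
    public static void main(final String[] args) throws Exception {
        // Endpoint is an assumption; pass the real one as the first argument.
        final String endpoint = args.length > 0 ? args[0]
                : "http://localhost:8080/sparql/";
        final String query = "SELECT * WHERE { ?s ?p ?o } LIMIT 10";
        final URL url = new URL(endpoint + "?query="
                + URLEncoder.encode(query, "UTF-8"));
        final HttpURLConnection con = (HttpURLConnection) url.openConnection();
        // Ask for SPARQL results XML; the server may negotiate other formats.
        con.setRequestProperty("Accept", "application/sparql-results+xml");
        final BufferedReader in = new BufferedReader(
                new InputStreamReader(con.getInputStream(), "UTF-8"));
        try {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        } finally {
            in.close();
        }
    }
}

If this prints a results document, the BSBM test driver can be pointed at the same URL per the wiki procedure referenced in the log message.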
From: <tho...@us...> - 2011-09-26 14:14:03
|
Revision: 5260 http://bigdata.svn.sourceforge.net/bigdata/?rev=5260&view=rev Author: thompsonbry Date: 2011-09-26 14:13:57 +0000 (Mon, 26 Sep 2011) Log Message: ----------- More cleanup of the bigdata-perf/bsbm3 package. Modified Paths: -------------- branches/BIGDATA_RELEASE_1_0_0/bigdata-perf/bsbm3/build.xml branches/TERMS_REFACTOR_BRANCH/bigdata-perf/bsbm3/build.xml Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-perf/bsbm3/build.xml =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-perf/bsbm3/build.xml 2011-09-26 14:08:06 UTC (rev 5259) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-perf/bsbm3/build.xml 2011-09-26 14:13:57 UTC (rev 5260) @@ -1,142 +1,34 @@ - <!-- $Id$ --> <!-- --> <!-- do "ant bundle-jar" in the parent directory first. --> <!-- --> -<project name="bsbm" default="compile" basedir="."> +<project name="bsbm" basedir="."> <property file="build.properties" /> - <!-- build-time classpath. --> - <path id="build.classpath"> - <fileset dir="${bsbmtools.dir}/lib"> - <include name="**/*.jar" /> - </fileset> - </path> - - <!-- runtime classpath w/o install. --> <path id="runtime.classpath"> - <!-- The compiled BSBM classes. - <pathelement location="${build.dir}/classes" /> --> - <!-- The BSBM dependencies. --> - <fileset dir="${bsbmtools.dir}/lib"> - <include name="**/*.jar" /> - </fileset> <!-- The bigdata dependencies (for the nano-server). --> <fileset dir="${bigdata.build.dir}/lib"> <include name="**/*.jar" /> </fileset> - <path refid="build.classpath" /> </path> <target name="clean" description="cleans everything in [build.dir]"> <delete dir="${build.dir}" /> - <ant dir="${bsbmtools.dir}" antfile="build.xml" target="clean"/> </target> <target name="prepare"> <!-- create directories. --> <mkdir dir="${build.dir}" /> - <!-- - <mkdir dir="${build.dir}/classes" /> - <mkdir dir="${build.dir}/bin" /> --> <copy toDir="${build.dir}/bin"> <!-- copy logging and journal configuration file. --> <fileset file="${bsbm.dir}/*.properties" /> </copy> </target> - <target name="compile" depends="prepare" description="Compile the benchmark."> - <ant dir="${bsbmtools.dir}" antfile="build.xml" target="build"/> - <!-- - <javac destdir="${build.dir}/classes" classpathref="build.classpath" debug="${javac.debug}" debuglevel="${javac.debuglevel}" verbose="${javac.verbose}" encoding="${javac.encoding}"> - <src path="${bsbm.dir}/src/java" /> - <compilerarg value="-version" /> - </javac> --> - </target> - - <!-- - Here is how to qualify the system. - - You have to download the qualification dataset [1] (it's 20M, unzip it as - dataset_1m.ttl), its test driver data [2] (unzip this), and the correct - results [3], and put them in the ${bsbm.qualDataDir}. Then follow the - instructions in [4], which boils down to three ant tasks: - - ant run-load-qualification (loads the dataset) - ant run-qualification-1 (runs the queries) - ant run-qualification-2 (compares the actual query results to the correct query results) - - Also, note that src/resources/bsbm-data/ignoreQueries.txt file MUST be empty - when you run the qualifications queries. 
- - [1] http://www4.wiwiss.fu-berlin.de/bizer/BerlinSPARQLBenchmark/datasets/qualification.ttl.gz - [2] http://www4.wiwiss.fu-berlin.de/bizer/BerlinSPARQLBenchmark/datasets/td_data_q.zip - [3] http://www4.wiwiss.fu-berlin.de/bizer/BerlinSPARQLBenchmark/code/correct.qual - [3] http://www4.wiwiss.fu-berlin.de/bizer/BerlinSPARQLBenchmark/spec/index.html#qualification - - --> - - <target name="run-load-qualification" depends="compile" description="Load the qualification data set."> - <delete file="${bsbm.qualJournal}" /> - <java classname="com.bigdata.rdf.store.DataLoader" fork="true" failonerror="true" dir="${build.dir}/bin"> - <arg line="-namespace ${bsbm.qualNamespace} ${bsbm.journalPropertyFile} ${bsbm.qualDataDir}/dataset_1m.ttl" /> - <!-- specify/override the journal file name. --> - <jvmarg line="${queryJvmArgs} -Dcom.bigdata.journal.AbstractJournal.file=${bsbm.qualJournal}" /> - <classpath> - <path refid="runtime.classpath" /> - </classpath> - </java> - </target> - - <target name="run-qualification-1" depends="compile" description="Run the qualification queries."> - <java classname="benchmark.testdriver.TestDriver" fork="true" failonerror="true" dir="${build.dir}/bin"> - <arg line="-idir ${bsbm.qualDataDir}/td_data -q http://localhost:${bsbm.nanoServerPort}/" /> - <classpath> - <path refid="runtime.classpath" /> - </classpath> - </java> - </target> - - <target name="run-qualification-2" depends="compile" description="Compare qualification query run against ground truth."> - <java classname="benchmark.qualification.Qualification" fork="true" failonerror="true" dir="${build.dir}/bin"> - <arg line="${bsbm.qualDataDir}/correct.qual run.qual" /> - <classpath> - <path refid="runtime.classpath" /> - </classpath> - </java> - </target> - - <!-- @todo modify to support gzip and split output files for large runs. --> - <target name="run-generator" depends="compile"> - <echo message="bsbm.pc=${bsbm.pc}"/> - <echo message="bsbm.dataDir=${bsbm.dataDir}"/> - <echo message="bsbm.outputFile=${bsbm.outputFile}"/> - <echo message="bsbm.outputType=${bsbm.outputType}"/> - <mkdir dir="${bsbm.baseDir}" /> - <java classname="benchmark.generator.Generator" fork="true" failonerror="true" dir="${bsbmtools.dir}"> - <!-- -fc causes the generator to forward chain the test data. --> - <!-- -pc # specifies the #of products. --> - <!-- -s specifies the output type, generally 'nt' for ntriples. --> - <!-- -fn specifies the output file w/o the .nt extension. --> - <arg value="-fc" /> - <arg value="-pc" /> - <arg value="${bsbm.pc}" /> - <arg value="-dir" /> - <arg value="${bsbm.dataDir}" /> - <arg value="-s" /> - <arg value="${bsbm.outputType}" /> - <arg value="-fn" /> - <arg value="${bsbm.outputFile}" /> - <jvmarg value="-Xmx${bsbm.maxMem}" /> - <classpath> - <path refid="runtime.classpath" /> - </classpath> - </java> - </target> - <!-- Note: split data files and use RDFDataLoadMaster for scale-out. --> - <target name="run-load" depends="prepare" description="Load a data set."> + <target name="run-load" depends="prepare" + description="Load a data set."> <!-- delete file if it exists so we load into a new journal. 
--> <delete file="${bsbm.journalFile}" /> <java classname="com.bigdata.rdf.store.DataLoader" fork="true" failonerror="true" dir="${build.dir}/bin"> @@ -152,7 +44,8 @@ </java> </target> - <target name="start-sparql-server" depends="compile" description="Start a small http server fronting for a bigdata database instance."> + <target name="start-sparql-server" depends="prepare" + description="Start a small http server fronting for a bigdata database instance."> <java classname="com.bigdata.rdf.sail.webapp.NanoSparqlServer" fork="true" failonerror="true" dir="${build.dir}/bin"> <arg line="${bsbm.nanoServerPort} ${bsbm.namespace} ${bsbm.journalPropertyFile}" /> <!-- specify/override the journal file name. --> @@ -162,79 +55,5 @@ </classpath> </java> </target> - - <target name="run-sparql-query" depends="prepare" description="Run a single query read from a file.."> - <java classname="com.bigdata.rdf.sail.webapp.NanoSparqlClient" fork="true" failonerror="true"> - <arg line="-f query5-instance01-keyRangeVersion.sparql http://localhost:${bsbm.nanoServerPort}/sparql/" /> - <classpath> - <path refid="runtime.classpath" /> - </classpath> - </java> - </target> - - <target name="rampup" depends="compile" description="Runs the benchmark queries against the loaded data until system performance reaches a steady state as defined by the benchmark."> - <java classname="benchmark.testdriver.TestDriver" fork="true" failonerror="true" dir="${bsbmtools.dir}"> - - <arg value="-rampup" /> - - <!-- -idir dir is the test data directory (default td_data). --> - <arg value="-idir" /> - <arg value="${bsbm.dataDir}" /> - - <!-- The randomizer seed. --> - <arg value="-seed" /> - <arg value="random" /> - <!--<arg value="${bsbm.seed}"/>--> - - <!-- -o file is the name of the xml output file. --> - <arg value="-o" /> - <arg value="${bsbm.resultsDir}/benchmark_result_pc${bsbm.pc}_runs${bsbm.runs}_mt${bsbm.mt}.xml" /> - - <!-- The SPARQL endpoint. --> - <arg value="http://localhost:${bsbm.nanoServerPort}/" /> - - <classpath> - <path refid="runtime.classpath" /> - </classpath> - </java> - </target> - - <target name="run-query" depends="compile" description="Runs the benchmark queries against the loaded data."> - <java classname="benchmark.testdriver.TestDriver" fork="true" failonerror="true" dir="${bsbmtools.dir}"> - <!-- -runs # is the #of query mix runs (default is 500). --> - <arg value="-runs" /> - <arg value="${bsbm.runs}" /> - - <!-- -w # is the #of warmup query mixes (default is 50). --> - <arg value="-w" /> - <arg value="${bsbm.w}" /> - - <!-- -mt # is the #of concurrent clients. --> - <arg value="-mt" /> - <arg value="${bsbm.mt}" /> - - <!-- -qdir dir is the query directory (default is queries). --> - <!--<arg value="-qdir"/><arg value="src/resources/bsbm_data"/>--> - - <!-- -idir dir is the test data directory (default td_data). --> - <arg value="-idir" /> - <arg value="${bsbm.dataDir}" /> - - <!-- The randomizer seed. --> - <arg value="-seed" /> - <arg value="${bsbm.seed}" /> - - <!-- -o file is the name of the xml output file. --> - <arg value="-o" /> - <arg value="${bsbm.resultsDir}/benchmark_result_pc${bsbm.pc}_runs${bsbm.runs}_mt${bsbm.mt}.xml" /> - - <!-- The SPARQL endpoint. 
--> - <arg value="http://localhost:${bsbm.nanoServerPort}/" /> - - <classpath> - <path refid="runtime.classpath" /> - </classpath> - </java> - </target> - + </project> Modified: branches/TERMS_REFACTOR_BRANCH/bigdata-perf/bsbm3/build.xml =================================================================== --- branches/TERMS_REFACTOR_BRANCH/bigdata-perf/bsbm3/build.xml 2011-09-26 14:08:06 UTC (rev 5259) +++ branches/TERMS_REFACTOR_BRANCH/bigdata-perf/bsbm3/build.xml 2011-09-26 14:13:57 UTC (rev 5260) @@ -11,7 +11,6 @@ <fileset dir="${bigdata.build.dir}/lib"> <include name="**/*.jar" /> </fileset> - <path refid="build.classpath" /> </path> <target name="clean" description="cleans everything in [build.dir]"> This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
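[Editor's note] For orientation, the two surviving targets (run-load and start-sparql-server) each just fork a JVM on a well-known main class. A rough programmatic mirror of the two invocations is sketched below; the argument order and the journal-file system property are read off the <arg line> and <jvmarg> elements above, while every literal value (namespace, file paths, port) is a placeholder to be taken from build.properties. Note that ant forks a separate JVM per target and main() is not guaranteed to return to a caller, so in practice these would be two separate processes.

// Requires the bigdata jars on the classpath; all literals are placeholders.
public class BsbmLoadAndServe {
    public static void main(final String[] args) throws Exception {
        // run-load overrides the journal file via a -D option; same effect here.
        System.setProperty("com.bigdata.journal.AbstractJournal.file",
                "/tmp/bigdata-bsbm.RW.jnl");
        // Step 1 (run-load): load the generated data set into the journal.
        com.bigdata.rdf.store.DataLoader.main(new String[] {
                "-namespace", "BSBM_100",    // ${bsbm.namespace}
                "RWStore.properties",        // ${bsbm.journalPropertyFile}
                "/tmp/td_data"               // directory of generated data files
        });
        // Step 2 (start-sparql-server): front the journal with a SPARQL end point.
        com.bigdata.rdf.sail.webapp.NanoSparqlServer.main(new String[] {
                "8080",                      // ${bsbm.nanoServerPort}
                "BSBM_100",                  // ${bsbm.namespace}
                "RWStore.properties"         // ${bsbm.journalPropertyFile}
        });
    }
}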
From: <tho...@us...> - 2011-09-27 18:00:37
|
Revision: 5261 http://bigdata.svn.sourceforge.net/bigdata/?rev=5261&view=rev Author: thompsonbry Date: 2011-09-27 18:00:30 +0000 (Tue, 27 Sep 2011) Log Message: ----------- Bumped version to 1.0.2 and added release notes for 1.0.2. Added Paths: ----------- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_2.txt branches/TERMS_REFACTOR_BRANCH/bigdata/src/releases/RELEASE_1_0_2.txt Added: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_2.txt =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_2.txt (rev 0) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_2.txt 2011-09-27 18:00:30 UTC (rev 5261) @@ -0,0 +1,87 @@ +This is a minor version release of bigdata(R). Bigdata is a horizontally-scaled, open-source architecture for indexed data with an emphasis on RDF capable of loading 1B triples in under one hour on a 15 node cluster. Bigdata operates in both a single machine mode (Journal) and a cluster mode (Federation). The Journal provides fast scalable ACID indexed storage for very large data sets, up to 50 billion triples / quads. The federation provides fast scalable shard-wise parallel indexed storage using dynamic sharding and shard-wise ACID updates and incremental cluster size growth. Both platforms support fully concurrent readers with snapshot isolation. + +Distributed processing offers greater throughput but does not reduce query or update latency. Choose the Journal when the anticipated scale and throughput requirements permit. Choose the Federation when the administrative and machine overhead associated with operating a cluster is an acceptable tradeoff to have essentially unlimited data scaling and throughput. + +See [1,2,8] for instructions on installing bigdata(R), [4] for the javadoc, and [3,5,6] for news, questions, and the latest developments. For more information about SYSTAP, LLC and bigdata, see [7]. + +Starting with the 1.0.0 release, we offer a WAR artifact [8] for easy installation of the single machine RDF database. For custom development and cluster installations we recommend checking out the code from SVN using the tag for this release. The code will build automatically under eclipse. You can also build the code using the ant script. The cluster installer requires the use of the ant script. + +You can download the WAR from: + +https://sourceforge.net/projects/bigdata/ + +You can checkout this release from: + +https://bigdata.svn.sourceforge.net/svnroot/bigdata/tags/BIGDATA_RELEASE_1_0_2 + +Feature summary: + +- Single machine data storage to ~50B triples/quads (RWStore); +- Clustered data storage is essentially unlimited; +- Simple embedded and/or webapp deployment (NanoSparqlServer); +- Triples, quads, or triples with provenance (SIDs); +- 100% native SPARQL 1.0 evaluation with lots of query optimizations; +- Fast RDFS+ inference and truth maintenance; +- Fast statement level provenance mode (SIDs). + +The road map [3] for the next releases includes: + +- High-volume analytic query and SPARQL 1.1 query, including aggregations; +- Simplified deployment, configuration, and administration for clusters; and +- High availability for the journal and the cluster. + +Change log: + +1.0.2 + + - https://sourceforge.net/apps/trac/bigdata/ticket/32 (Query time expansion of (foo rdf:type rdfs:Resource) drags in SPORelation for scale-out.) 
+ - https://sourceforge.net/apps/trac/bigdata/ticket/181 (Scale-out LUBM "how to" in wiki and build.xml are out of date.)
+ - https://sourceforge.net/apps/trac/bigdata/ticket/356 (Query not terminated by error.)
+ - https://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.)
+ - https://sourceforge.net/apps/trac/bigdata/ticket/361 (IRunningQuery not closed promptly.)
+ - https://sourceforge.net/apps/trac/bigdata/ticket/371 (DataLoader fails to load resources available from the classpath.)
+ - https://sourceforge.net/apps/trac/bigdata/ticket/376 (Support for the streaming of bigdata IBindingSets into a sparql query.)
+ - https://sourceforge.net/apps/trac/bigdata/ticket/378 (ClosedByInterruptException during heavy query mix.)
+ - https://sourceforge.net/apps/trac/bigdata/ticket/379 (NotSerializableException for SPOAccessPath.)
+ - https://sourceforge.net/apps/trac/bigdata/ticket/382 (Change dependencies to Apache River 2.2.0)
+
+1.0.1
+
+ - https://sourceforge.net/apps/trac/bigdata/ticket/107 (Unicode clean schema names in the sparse row store).
+ - https://sourceforge.net/apps/trac/bigdata/ticket/124 (TermIdEncoder should use more bits for scale-out).
+ - https://sourceforge.net/apps/trac/bigdata/ticket/225 (OSX requires specialized performance counter collection classes).
+ - https://sourceforge.net/apps/trac/bigdata/ticket/348 (BigdataValueFactory.asValue() must return new instance when DummyIV is used).
+ - https://sourceforge.net/apps/trac/bigdata/ticket/349 (TermIdEncoder limits Journal to 2B distinct RDF Values per triple/quad store instance).
+ - https://sourceforge.net/apps/trac/bigdata/ticket/351 (SPO not Serializable exception in SIDS mode (scale-out)).
+ - https://sourceforge.net/apps/trac/bigdata/ticket/352 (ClassCastException when querying with binding-values that are not known to the database).
+ - https://sourceforge.net/apps/trac/bigdata/ticket/353 (UnsupportedOperatorException for some SPARQL queries).
+ - https://sourceforge.net/apps/trac/bigdata/ticket/355 (Query failure when comparing with non materialized value).
+ - https://sourceforge.net/apps/trac/bigdata/ticket/357 (RWStore reports "FixedAllocator returning null address, with freeBits".)
+ - https://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.)
+ - https://sourceforge.net/apps/trac/bigdata/ticket/362 (log4j - slf4j bridge.)
+
+ Note: Some of these bug fixes in the 1.0.1 release require data migration.
+ For details, see https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=DataMigration
+
+
+For more information about bigdata, please see the following links:
+
+[1] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Main_Page
+[2] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=GettingStarted
+[3] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Roadmap
+[4] http://www.bigdata.com/bigdata/docs/api/
+[5] http://sourceforge.net/projects/bigdata/
+[6] http://www.bigdata.com/blog
+[7] http://www.systap.com/bigdata.htm
+[8] https://sourceforge.net/projects/bigdata/files/bigdata/
+
+About bigdata:
+
+Bigdata(R) is a horizontally-scaled, general purpose storage and computing fabric
+for ordered data (B+Trees), designed to operate on either a single server or a
+cluster of commodity hardware. Bigdata(R) uses dynamically partitioned key-range
+shards in order to remove any realistic scaling limits - in principle, bigdata(R)
+may be deployed on 10s, 100s, or even thousands of machines and new capacity may
+be added incrementally without requiring the full reload of all data. The bigdata(R)
+RDF database supports RDFS and OWL Lite reasoning, high-level query (SPARQL),
+and datum level provenance.

Property changes on: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_2.txt
___________________________________________________________________
Added: svn:keywords
   + Id Date Revision Author HeadURL

Added: branches/TERMS_REFACTOR_BRANCH/bigdata/src/releases/RELEASE_1_0_2.txt
===================================================================
--- branches/TERMS_REFACTOR_BRANCH/bigdata/src/releases/RELEASE_1_0_2.txt	                        (rev 0)
+++ branches/TERMS_REFACTOR_BRANCH/bigdata/src/releases/RELEASE_1_0_2.txt	2011-09-27 18:00:30 UTC (rev 5261)
@@ -0,0 +1,87 @@
+This is a minor version release of bigdata(R). Bigdata is a horizontally-scaled, open-source architecture for indexed data with an emphasis on RDF capable of loading 1B triples in under one hour on a 15 node cluster. Bigdata operates in both a single machine mode (Journal) and a cluster mode (Federation). The Journal provides fast scalable ACID indexed storage for very large data sets, up to 50 billion triples / quads. The federation provides fast scalable shard-wise parallel indexed storage using dynamic sharding and shard-wise ACID updates and incremental cluster size growth. Both platforms support fully concurrent readers with snapshot isolation.
+
+Distributed processing offers greater throughput but does not reduce query or update latency. Choose the Journal when the anticipated scale and throughput requirements permit. Choose the Federation when the administrative and machine overhead associated with operating a cluster is an acceptable tradeoff to have essentially unlimited data scaling and throughput.
+
+See [1,2,8] for instructions on installing bigdata(R), [4] for the javadoc, and [3,5,6] for news, questions, and the latest developments. For more information about SYSTAP, LLC and bigdata, see [7].
+
+Starting with the 1.0.0 release, we offer a WAR artifact [8] for easy installation of the single machine RDF database. For custom development and cluster installations we recommend checking out the code from SVN using the tag for this release. The code will build automatically under eclipse. You can also build the code using the ant script. The cluster installer requires the use of the ant script.
+
+You can download the WAR from:
+
+https://sourceforge.net/projects/bigdata/
+
+You can checkout this release from:
+
+https://bigdata.svn.sourceforge.net/svnroot/bigdata/tags/BIGDATA_RELEASE_1_0_2
+
+Feature summary:
+
+- Single machine data storage to ~50B triples/quads (RWStore);
+- Clustered data storage is essentially unlimited;
+- Simple embedded and/or webapp deployment (NanoSparqlServer);
+- Triples, quads, or triples with provenance (SIDs);
+- 100% native SPARQL 1.0 evaluation with lots of query optimizations;
+- Fast RDFS+ inference and truth maintenance;
+- Fast statement level provenance mode (SIDs).
+
+The road map [3] for the next releases includes:
+
+- High-volume analytic query and SPARQL 1.1 query, including aggregations;
+- Simplified deployment, configuration, and administration for clusters; and
+- High availability for the journal and the cluster.
+ +Change log: + +1.0.2 + + - https://sourceforge.net/apps/trac/bigdata/ticket/32 (Query time expansion of (foo rdf:type rdfs:Resource) drags in SPORelation for scale-out.) + - https://sourceforge.net/apps/trac/bigdata/ticket/181 (Scale-out LUBM "how to" in wiki and build.xml are out of date.) + - https://sourceforge.net/apps/trac/bigdata/ticket/356 (Query not terminated by error.) + - https://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.) + - https://sourceforge.net/apps/trac/bigdata/ticket/361 (IRunningQuery not closed promptly.) + - https://sourceforge.net/apps/trac/bigdata/ticket/371 (DataLoader fails to load resources available from the classpath.) + - https://sourceforge.net/apps/trac/bigdata/ticket/376 (Support for the streaming of bigdata IBindingSets into a sparql query.) + - https://sourceforge.net/apps/trac/bigdata/ticket/378 (ClosedByInterruptException during heavy query mix.) + - https://sourceforge.net/apps/trac/bigdata/ticket/379 (NotSerializableException for SPOAccessPath.) + - https://sourceforge.net/apps/trac/bigdata/ticket/382 (Change dependencies to Apache River 2.2.0) + +1.0.1 + + - https://sourceforge.net/apps/trac/bigdata/ticket/107 (Unicode clean schema names in the sparse row store). + - https://sourceforge.net/apps/trac/bigdata/ticket/124 (TermIdEncoder should use more bits for scale-out). + - https://sourceforge.net/apps/trac/bigdata/ticket/225 (OSX requires specialized performance counter collection classes). + - https://sourceforge.net/apps/trac/bigdata/ticket/348 (BigdataValueFactory.asValue() must return new instance when DummyIV is used). + - https://sourceforge.net/apps/trac/bigdata/ticket/349 (TermIdEncoder limits Journal to 2B distinct RDF Values per triple/quad store instance). + - https://sourceforge.net/apps/trac/bigdata/ticket/351 (SPO not Serializable exception in SIDS mode (scale-out)). + - https://sourceforge.net/apps/trac/bigdata/ticket/352 (ClassCastException when querying with binding-values that are not known to the database). + - https://sourceforge.net/apps/trac/bigdata/ticket/353 (UnsupportedOperatorException for some SPARQL queries). + - https://sourceforge.net/apps/trac/bigdata/ticket/355 (Query failure when comparing with non materialized value). + - https://sourceforge.net/apps/trac/bigdata/ticket/357 (RWStore reports "FixedAllocator returning null address, with freeBits".) + - https://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.) + - https://sourceforge.net/apps/trac/bigdata/ticket/362 (log4j - slf4j bridge.) + + Note: Some of these bug fixes in the 1.0.1 release require data migration. 
+ For details, see https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=DataMigration
+
+
+For more information about bigdata, please see the following links:
+
+[1] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Main_Page
+[2] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=GettingStarted
+[3] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Roadmap
+[4] http://www.bigdata.com/bigdata/docs/api/
+[5] http://sourceforge.net/projects/bigdata/
+[6] http://www.bigdata.com/blog
+[7] http://www.systap.com/bigdata.htm
+[8] https://sourceforge.net/projects/bigdata/files/bigdata/
+
+About bigdata:
+
+Bigdata(R) is a horizontally-scaled, general purpose storage and computing fabric
+for ordered data (B+Trees), designed to operate on either a single server or a
+cluster of commodity hardware. Bigdata(R) uses dynamically partitioned key-range
+shards in order to remove any realistic scaling limits - in principle, bigdata(R)
+may be deployed on 10s, 100s, or even thousands of machines and new capacity may
+be added incrementally without requiring the full reload of all data. The bigdata(R)
+RDF database supports RDFS and OWL Lite reasoning, high-level query (SPARQL),
+and datum level provenance.

Property changes on: branches/TERMS_REFACTOR_BRANCH/bigdata/src/releases/RELEASE_1_0_2.txt
___________________________________________________________________
Added: svn:keywords
   + Id Date Revision Author HeadURL

This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
From: <tho...@us...> - 2011-10-02 21:02:33
|
Revision: 5276 http://bigdata.svn.sourceforge.net/bigdata/?rev=5276&view=rev Author: thompsonbry Date: 2011-10-02 21:02:27 +0000 (Sun, 02 Oct 2011) Log Message: ----------- missed one place for $lockCmd Modified Paths: -------------- branches/BIGDATA_RELEASE_1_0_0/src/resources/scripts/bigdata branches/TERMS_REFACTOR_BRANCH/src/resources/scripts/bigdata Modified: branches/BIGDATA_RELEASE_1_0_0/src/resources/scripts/bigdata =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/src/resources/scripts/bigdata 2011-10-02 21:02:01 UTC (rev 5275) +++ branches/BIGDATA_RELEASE_1_0_0/src/resources/scripts/bigdata 2011-10-02 21:02:27 UTC (rev 5276) @@ -246,7 +246,7 @@ rm -rf "$LAS" fi if [ -s "$eventLog" -o -s "$errorLog" -o -s "$detailLog" -o -s "$ruleLog" ]; then - lockfile -r 1 -1 "$logLockFile" + $lockCmd "$logLockFile" if [ 0 == $? ]; then # Granted lock in the log directory so we can rename the log # files. Modified: branches/TERMS_REFACTOR_BRANCH/src/resources/scripts/bigdata =================================================================== --- branches/TERMS_REFACTOR_BRANCH/src/resources/scripts/bigdata 2011-10-02 21:02:01 UTC (rev 5275) +++ branches/TERMS_REFACTOR_BRANCH/src/resources/scripts/bigdata 2011-10-02 21:02:27 UTC (rev 5276) @@ -246,7 +246,7 @@ rm -rf "$LAS" fi if [ -s "$eventLog" -o -s "$errorLog" -o -s "$detailLog" -o -s "$ruleLog" ]; then - lockfile -r 1 -1 "$logLockFile" + $lockCmd "$logLockFile" if [ 0 == $? ]; then # Granted lock in the log directory so we can rename the log # files. This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
From: <tho...@us...> - 2011-10-03 14:00:50
|
Revision: 5281 http://bigdata.svn.sourceforge.net/bigdata/?rev=5281&view=rev Author: thompsonbry Date: 2011-10-03 14:00:39 +0000 (Mon, 03 Oct 2011) Log Message: ----------- Removed the SPOTupleSerializer's buf field due to a probable concurrency problem [1]. Since the SIDS refactor to inline SIDs, the byte[] value associated with the ISPO representation in the B+Tree is now a single byte. Thus we can simplify the code, do less object allocation, and avoid any possible concurrency error (it looks like the buf was not thread local and hence was not thread safe). [1] https://sourceforge.net/apps/trac/bigdata/ticket/385. Modified Paths: -------------- branches/BIGDATA_RELEASE_1_0_0/bigdata-rdf/src/java/com/bigdata/rdf/spo/SPOIndexWriteProc.java branches/BIGDATA_RELEASE_1_0_0/bigdata-rdf/src/java/com/bigdata/rdf/spo/SPOTupleSerializer.java branches/TERMS_REFACTOR_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/SPOIndexWriteProc.java branches/TERMS_REFACTOR_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/SPOTupleSerializer.java Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-rdf/src/java/com/bigdata/rdf/spo/SPOIndexWriteProc.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-rdf/src/java/com/bigdata/rdf/spo/SPOIndexWriteProc.java 2011-10-03 13:18:06 UTC (rev 5280) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-rdf/src/java/com/bigdata/rdf/spo/SPOIndexWriteProc.java 2011-10-03 14:00:39 UTC (rev 5281) @@ -39,7 +39,6 @@ import com.bigdata.btree.proc.IParallelizableIndexProcedure; import com.bigdata.btree.raba.IRaba; import com.bigdata.btree.raba.codec.IRabaCoder; -import com.bigdata.io.ByteArrayBuffer; import com.bigdata.io.DataInputBuffer; import com.bigdata.rdf.model.StatementEnum; import com.bigdata.relation.IMutableRelationIndexWriteProcedure; @@ -177,8 +176,8 @@ final int n = keys.size();//getKeyCount(); - // used to generate the values that we write on the index. - final ByteArrayBuffer tmp = new ByteArrayBuffer(1); +// // used to generate the values that we write on the index. 
+// final ByteArrayBuffer tmp = new ByteArrayBuffer(1); final SPOTupleSerializer tupleSer = (SPOTupleSerializer) ndx.getIndexMetadata().getTupleSerializer(); @@ -243,7 +242,7 @@ */ ndx.insert(key, tupleSer.serializeVal( - tmp, false/* override */, userFlag, newType)); + /*tmp, */ false/* override */, userFlag, newType)); if (isPrimaryIndex && DEBUG) { log.debug("new SPO: key=" + BytesUtil.toString(key)); @@ -275,7 +274,7 @@ assert newType != StatementEnum.Explicit; ndx.insert(key, tupleSer.serializeVal( - tmp, false/* override */, userFlag, + /*tmp,*/ false/* override */, userFlag, // false /* no sid for type=inferred */, newType)); @@ -302,7 +301,7 @@ // final boolean newSid = maxType == StatementEnum.Explicit; ndx.insert(key, tupleSer.serializeVal( - tmp, false/* override */, + /*tmp,*/ false/* override */, userFlag, // newSid, maxType)); Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-rdf/src/java/com/bigdata/rdf/spo/SPOTupleSerializer.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-rdf/src/java/com/bigdata/rdf/spo/SPOTupleSerializer.java 2011-10-03 13:18:06 UTC (rev 5280) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-rdf/src/java/com/bigdata/rdf/spo/SPOTupleSerializer.java 2011-10-03 14:00:39 UTC (rev 5281) @@ -32,8 +32,6 @@ import java.io.ObjectInput; import java.io.ObjectOutput; -import org.apache.log4j.Logger; - import com.bigdata.btree.DefaultTupleSerializer; import com.bigdata.btree.IRangeQuery; import com.bigdata.btree.ITuple; @@ -71,7 +69,7 @@ private static final long serialVersionUID = 2893830958762265104L; - private static final transient Logger log = Logger.getLogger(SPOTupleSerializer.class); +// private static final transient Logger log = Logger.getLogger(SPOTupleSerializer.class); /** * The natural order for the index. @@ -83,10 +81,10 @@ */ private boolean sids; - /** - * Used to format the value. - */ - private final transient ByteArrayBuffer buf = new ByteArrayBuffer(0); +// /** +// * Used to format the value. +// */ +// private final transient ByteArrayBuffer buf = new ByteArrayBuffer(0); public SPOKeyOrder getKeyOrder() { @@ -172,7 +170,7 @@ if (spo == null) throw new IllegalArgumentException(); - return serializeVal(buf, + return serializeVal(//buf, spo.isOverride(), spo.getUserFlag(), spo.getStatementType()); } @@ -183,9 +181,6 @@ * bit. If the statement identifier is non-null then it will be included in * the returned byte[]. * - * @param buf - * A buffer supplied by the caller. The buffer will be reset - * before the value is written on the buffer. * @param override * <code>true</code> iff you want the * {@link StatementEnum#MASK_OVERRIDE} bit set (this is only set @@ -200,11 +195,14 @@ * @return The value that would be written into a statement index for this * {@link SPO}. */ - public static byte[] serializeVal(final ByteArrayBuffer buf, +// * @param buf +// * A buffer supplied by the caller. The buffer will be reset +// * before the value is written on the buffer. + public byte[] serializeVal(//final ByteArrayBuffer buf, final boolean override, final boolean userFlag, final StatementEnum type) { - buf.reset(); +// buf.reset(); // optionally set the override and user flag bits on the value. final byte b = (byte) @@ -213,10 +211,12 @@ | (userFlag ? 
StatementEnum.MASK_USER_FLAG : 0x0) ); - buf.putByte(b); +// buf.putByte(b); +// +// return buf.toByteArray(); - return buf.toByteArray(); - + return new byte[]{b}; + } public SPO deserialize(final ITuple tuple) { Modified: branches/TERMS_REFACTOR_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/SPOIndexWriteProc.java =================================================================== --- branches/TERMS_REFACTOR_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/SPOIndexWriteProc.java 2011-10-03 13:18:06 UTC (rev 5280) +++ branches/TERMS_REFACTOR_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/SPOIndexWriteProc.java 2011-10-03 14:00:39 UTC (rev 5281) @@ -39,7 +39,6 @@ import com.bigdata.btree.proc.IParallelizableIndexProcedure; import com.bigdata.btree.raba.IRaba; import com.bigdata.btree.raba.codec.IRabaCoder; -import com.bigdata.io.ByteArrayBuffer; import com.bigdata.io.DataInputBuffer; import com.bigdata.rdf.model.StatementEnum; import com.bigdata.relation.IMutableRelationIndexWriteProcedure; @@ -177,8 +176,8 @@ final int n = keys.size();//getKeyCount(); - // used to generate the values that we write on the index. - final ByteArrayBuffer tmp = new ByteArrayBuffer(1); +// // used to generate the values that we write on the index. +// final ByteArrayBuffer tmp = new ByteArrayBuffer(1); final SPOTupleSerializer tupleSer = (SPOTupleSerializer) ndx.getIndexMetadata().getTupleSerializer(); @@ -243,7 +242,7 @@ */ ndx.insert(key, tupleSer.serializeVal( - tmp, false/* override */, userFlag, newType)); + /*tmp,*/ false/* override */, userFlag, newType)); if (isPrimaryIndex && DEBUG) { log.debug("new SPO: key=" + BytesUtil.toString(key)); @@ -275,7 +274,7 @@ assert newType != StatementEnum.Explicit; ndx.insert(key, tupleSer.serializeVal( - tmp, false/* override */, userFlag, + /*tmp,*/ false/* override */, userFlag, // false /* no sid for type=inferred */, newType)); @@ -302,7 +301,7 @@ // final boolean newSid = maxType == StatementEnum.Explicit; ndx.insert(key, tupleSer.serializeVal( - tmp, false/* override */, + /*tmp,*/ false/* override */, userFlag, // newSid, maxType)); Modified: branches/TERMS_REFACTOR_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/SPOTupleSerializer.java =================================================================== --- branches/TERMS_REFACTOR_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/SPOTupleSerializer.java 2011-10-03 13:18:06 UTC (rev 5280) +++ branches/TERMS_REFACTOR_BRANCH/bigdata-rdf/src/java/com/bigdata/rdf/spo/SPOTupleSerializer.java 2011-10-03 14:00:39 UTC (rev 5281) @@ -32,9 +32,6 @@ import java.io.ObjectInput; import java.io.ObjectOutput; -import org.apache.log4j.Logger; - -import com.bigdata.btree.BytesUtil; import com.bigdata.btree.DefaultTupleSerializer; import com.bigdata.btree.IRangeQuery; import com.bigdata.btree.ITuple; @@ -72,7 +69,7 @@ private static final long serialVersionUID = 2893830958762265104L; - private static final transient Logger log = Logger.getLogger(SPOTupleSerializer.class); +// private static final transient Logger log = Logger.getLogger(SPOTupleSerializer.class); /** * The natural order for the index. @@ -84,10 +81,10 @@ */ private boolean sids; - /** - * Used to format the value. - */ - private final transient ByteArrayBuffer buf = new ByteArrayBuffer(0); +// /** +// * Used to format the value. 
+// */ +// private final transient ByteArrayBuffer buf = new ByteArrayBuffer(0); public SPOKeyOrder getKeyOrder() { @@ -175,7 +172,7 @@ if (spo == null) throw new IllegalArgumentException(); - return serializeVal(buf, + return serializeVal(//buf, spo.isOverride(), spo.getUserFlag(), spo.getStatementType()); } @@ -189,7 +186,7 @@ if (spo == null) throw new IllegalArgumentException(); - return serializeVal(buf, + return serializeVal(//buf, spo.isOverride(), spo.getUserFlag(), spo.getStatementType()); } @@ -200,9 +197,6 @@ * bit. If the statement identifier is non-null then it will be included in * the returned byte[]. * - * @param buf - * A buffer supplied by the caller. The buffer will be reset - * before the value is written on the buffer. * @param override * <code>true</code> iff you want the * {@link StatementEnum#MASK_OVERRIDE} bit set (this is only set @@ -217,11 +211,14 @@ * @return The value that would be written into a statement index for this * {@link SPO}. */ - public static byte[] serializeVal(final ByteArrayBuffer buf, +// * @param buf +// * A buffer supplied by the caller. The buffer will be reset +// * before the value is written on the buffer. + public byte[] serializeVal(//final ByteArrayBuffer buf, final boolean override, final boolean userFlag, final StatementEnum type) { - buf.reset(); +// buf.reset(); // optionally set the override and user flag bits on the value. final byte b = (byte) @@ -230,14 +227,14 @@ | (userFlag ? StatementEnum.MASK_USER_FLAG : 0x0) ); - buf.putByte(b); - - final byte[] a = buf.toByteArray(); - - assert a.length == 1 : "Expecting one byte, but have " - + BytesUtil.toString(a); +// buf.putByte(b); +// +// final byte[] a = buf.toByteArray(); +// +// assert a.length == 1 : "Expecting one byte, but have " +// + BytesUtil.toString(a); - return a; + return new byte[]{b}; } This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
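Note: The essence of the change above is a classic concurrency fix: a serializer that caches a mutable buffer in an instance field is only safe if every caller is confined to a single thread. Because the serialized statement value is now exactly one byte, the buffer can be dropped entirely. The sketch below is a self-contained illustration of that pattern, not the bigdata source; the mask values and the typeCode parameter are invented stand-ins for the real constants defined by StatementEnum.

public class StatementValueCodec {

    // Hypothetical flag bits (the real values live in StatementEnum).
    static final byte MASK_OVERRIDE  = (byte) 0x08;
    static final byte MASK_USER_FLAG = (byte) 0x04;

    // The old, unsafe pattern: a mutable buffer shared by all callers of
    // this instance. Two threads serializing concurrently would interleave
    // reset()/putByte()/toByteArray() and could return corrupted values.
    //
    //   private final ByteArrayBuffer buf = new ByteArrayBuffer(1);

    /**
     * The fixed pattern: since the value is a single byte, allocate a fresh
     * one-element array per call. There is no shared state to race on and
     * the allocation is tiny and short-lived.
     */
    public byte[] serializeVal(final boolean override, final boolean userFlag,
            final byte typeCode) {

        final byte b = (byte) (typeCode
                | (override ? MASK_OVERRIDE : 0)
                | (userFlag ? MASK_USER_FLAG : 0));

        return new byte[] { b };
    }

}

Should a reusable buffer ever be needed again (e.g. for multi-byte values), wrapping it in a ThreadLocal would restore thread safety at the cost of one buffer per thread.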
From: <tho...@us...> - 2011-10-03 17:30:55
|
Revision: 5285 http://bigdata.svn.sourceforge.net/bigdata/?rev=5285&view=rev Author: thompsonbry Date: 2011-10-03 17:30:49 +0000 (Mon, 03 Oct 2011) Log Message: ----------- Added logic to the bigdataup script to adjust the swappiness and the #of file handles allowed. These commands are commented out, but this might not be a bad place to take these steps in order to ensure that both things have appropriate values. However, in order to do that we should parameterize at least the target #of file handles (and the ulimit invocation there might only really work if you are running as root). This stuff can't really be done in bigdatasetup because at least ulimit might need to be set in the specific shell instance which starts the various services. Hence, in bigdataup. Modified Paths: -------------- branches/BIGDATA_RELEASE_1_0_0/src/resources/scripts/bigdataup branches/TERMS_REFACTOR_BRANCH/src/resources/scripts/bigdataup Modified: branches/BIGDATA_RELEASE_1_0_0/src/resources/scripts/bigdataup =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/src/resources/scripts/bigdataup 2011-10-03 16:33:08 UTC (rev 5284) +++ branches/BIGDATA_RELEASE_1_0_0/src/resources/scripts/bigdataup 2011-10-03 17:30:49 UTC (rev 5285) @@ -8,6 +8,15 @@ cd `dirname $0` source ./bigdataenv +# Note: The following are both very good ideas. You SHOULD uncomment these +# on your system, assuming that the relevant files can be found. + +# Do not swap out applications while there is free memory. +#/sbin/sysctl -w vm.swappiness=0 + +# Raise the limit on the #of open files. +#ulimit -n 40960 + # Verify critical environment variables. if [ -z "$lockFile" ]; then echo $"`date` : hostname : environment not setup." Modified: branches/TERMS_REFACTOR_BRANCH/src/resources/scripts/bigdataup =================================================================== --- branches/TERMS_REFACTOR_BRANCH/src/resources/scripts/bigdataup 2011-10-03 16:33:08 UTC (rev 5284) +++ branches/TERMS_REFACTOR_BRANCH/src/resources/scripts/bigdataup 2011-10-03 17:30:49 UTC (rev 5285) @@ -8,6 +8,15 @@ cd `dirname $0` source ./bigdataenv +# Note: The following are both very good ideas. You SHOULD uncomment these +# on your system, assuming that the relevant files can be found. + +# Do not swap out applications while there is free memory. +#/sbin/sysctl -w vm.swappiness=0 + +# Raise the limit on the #of open files. +#ulimit -n 40960 + # Verify critical environment variables. if [ -z "$lockFile" ]; then echo $"`date` : hostname : environment not setup." This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
From: <tho...@us...> - 2011-10-12 13:47:41
|
Revision: 5320 http://bigdata.svn.sourceforge.net/bigdata/?rev=5320&view=rev Author: thompsonbry Date: 2011-10-12 13:47:34 +0000 (Wed, 12 Oct 2011) Log Message: ----------- Modified the Journal per Martyn's analysis to override the MIN_RELEASE_AGE property to always be Long.MAX_VALUE for a WORM mode deployment. Updated the SampleCode to obtain a read-lock to protect the historical commit point when running against a non-WORM journal. @see https://sourceforge.net/apps/trac/bigdata/ticket/391 Modified Paths: -------------- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/Journal.java branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/samples/com/bigdata/samples/SampleCode.java branches/TERMS_REFACTOR_BRANCH/bigdata/src/java/com/bigdata/journal/Journal.java branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/samples/com/bigdata/samples/SampleCode.java Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/Journal.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/Journal.java 2011-10-12 12:56:09 UTC (rev 5319) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/Journal.java 2011-10-12 13:47:34 UTC (rev 5320) @@ -294,10 +294,31 @@ } + /** + * Ensure that the WORM mode of the journal always uses + * {@link Long#MAX_VALUE} for + * {@link AbstractTransactionService.Options#MIN_RELEASE_AGE}. + * + * @param properties + * The properties. + * + * @return The argument, with the minReleaseAge overridden if necessary. + * + * @see https://sourceforge.net/apps/trac/bigdata/ticket/391 + */ + private Properties checkProperties(final Properties properties) { + if (getBufferStrategy() instanceof WORMStrategy) { + properties.setProperty( + AbstractTransactionService.Options.MIN_RELEASE_AGE, "" + + Long.MAX_VALUE); + } + return properties; + } + protected AbstractLocalTransactionManager newLocalTransactionManager() { final JournalTransactionService abstractTransactionService = new JournalTransactionService( - properties, this) { + checkProperties(properties), this) { { Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/samples/com/bigdata/samples/SampleCode.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/samples/com/bigdata/samples/SampleCode.java 2011-10-12 12:56:09 UTC (rev 5319) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/samples/com/bigdata/samples/SampleCode.java 2011-10-12 13:47:34 UTC (rev 5320) @@ -44,12 +44,16 @@ import org.openrdf.rio.helpers.StatementCollector; import org.openrdf.rio.rdfxml.RDFXMLWriter; +import com.bigdata.journal.IIndexManager; +import com.bigdata.journal.Journal; +import com.bigdata.journal.StoreTypeEnum; import com.bigdata.rdf.model.BigdataStatement; import com.bigdata.rdf.sail.BigdataSail; +import com.bigdata.rdf.sail.BigdataSail.BigdataSailConnection; import com.bigdata.rdf.sail.BigdataSailRepository; import com.bigdata.rdf.sail.BigdataSailRepositoryConnection; -import com.bigdata.rdf.sail.BigdataSail.BigdataSailConnection; import com.bigdata.rdf.store.BD; +import com.bigdata.service.AbstractTransactionService; /** * Demonstrate how to use bigdata. 
You are free to use this code for whatever @@ -279,8 +283,14 @@ * @param repo * @throws Exception */ - public void executeFreeTextQuery(Repository repo) throws Exception { - + public boolean executeFreeTextQuery(Repository repo) throws Exception { + if (((BigdataSailRepository) repo).getDatabase().getLexiconRelation() + .getSearchEngine() == null) { + /* + * Only if the free text index exists. + */ + return false; + } RepositoryConnection cxn = repo.getConnection(); cxn.setAutoCommit(false); try { @@ -304,7 +314,7 @@ String query = "select ?x where { ?x <"+BD.SEARCH+"> \"Yell\" . }"; executeSelectQuery(repo, query, QueryLanguage.SPARQL); // will match A, C, and D - + return true; } /** @@ -313,8 +323,12 @@ * @param repo * @throws Exception */ - public void executeProvenanceQuery(Repository repo) throws Exception { + public boolean executeProvenanceQuery(Repository repo) throws Exception { + if(!((BigdataSailRepository)repo).getDatabase().isStatementIdentifiers()) { + // IFF the KB is using the provenance mode, + return false; + } RepositoryConnection cxn = repo.getConnection(); cxn.setAutoCommit(false); try { @@ -376,44 +390,83 @@ executeConstructQuery(repo, query, QueryLanguage.SPARQL); // should see the provenance information for { Mike loves RDF } + return true; + } /** * Demonstrate execution of historical query using a read-only transaction. + * <p> + * Note: Bigdata preserves historical commit points until their release age + * expires. This behavior is controlled by the deployment mode (RW, WORM, or + * cluster) and by + * {@link AbstractTransactionService.Options#MIN_RELEASE_AGE}. Except for + * the WORM deployment mode, you MUST guard a historical commit point on + * which you want to read using a read-lock. The read-lock itself is just a + * read-only connection. It can be obtained for any historical commit point + * that you want to "pin" and can be released once you are no longer need to + * "pin" that commit point. + * <p> + * The read-lock is not required for the WORM deployment because it never + * releases historical commit points. * * @param repo * @throws Exception */ - public void executeHistoricalQuery(Repository repo) throws Exception { + public void executeHistoricalQuery(final Repository repo) throws Exception { if (!(repo instanceof BigdataSailRepository)) { return; } - URI MIKE = new URIImpl(BD.NAMESPACE+"Mike"); - URI BRYAN = new URIImpl(BD.NAMESPACE+"Bryan"); - URI PERSON = new URIImpl(BD.NAMESPACE+"Person"); + final IIndexManager indexManager = ((BigdataSailRepository) repo) + .getDatabase().getIndexManager(); - RepositoryConnection cxn = repo.getConnection(); - cxn.setAutoCommit(false); + final boolean isJournal = indexManager instanceof Journal; + + final boolean isWorm = isJournal + && ((Journal) indexManager).getBufferStrategy().getBufferMode() + .getStoreType() == StoreTypeEnum.WORM; + + final URI MIKE = new URIImpl(BD.NAMESPACE+"Mike"); + final URI BRYAN = new URIImpl(BD.NAMESPACE+"Bryan"); + final URI PERSON = new URIImpl(BD.NAMESPACE+"Person"); + + final RepositoryConnection cxn = repo.getConnection(); try { + cxn.setAutoCommit(false); + cxn.remove((Resource)null, (URI)null, (Value)null); cxn.commit(); cxn.add(MIKE, RDF.TYPE, PERSON); cxn.commit(); - long time = System.currentTimeMillis(); + final long time = System.currentTimeMillis(); - Thread.sleep(1000); + // Need a readLock connection if not a Worm store + final RepositoryConnection readLock = isWorm ? 
null : + ((BigdataSailRepository) repo).getReadOnlyConnection(); - cxn.add(BRYAN, RDF.TYPE, PERSON); - cxn.commit(); + final RepositoryConnection history; + try { + + Thread.sleep(1000); + + cxn.add(BRYAN, RDF.TYPE, PERSON); + cxn.commit(); + + history = ((BigdataSailRepository) repo) + .getReadOnlyConnection(time); + + } finally { + + if (readLock != null) + readLock.close(); + + } - RepositoryConnection history = - ((BigdataSailRepository) repo).getReadOnlyConnection(time); - - String query = + final String query = "select ?s " + "where { " + " ?s <"+RDF.TYPE+"> <"+PERSON+"> " + @@ -622,12 +675,13 @@ */ public static void main(final String[] args) { // use one of our pre-configured option-sets or "modes" - final String propertiesFile = "fullfeature.properties"; - // final String propertiesFile = "rdfonly.properties"; - // final String propertiesFile = "fastload.properties"; - // final String propertiesFile = "quads.properties"; +// final String propertiesFile = "fullfeature.properties"; +// final String propertiesFile = "rdfonly.properties"; +// final String propertiesFile = "fastload.properties"; + final String propertiesFile = "quads.properties"; try { - SampleCode sampleCode = new SampleCode(); + + final SampleCode sampleCode = new SampleCode(); log.info("Reading properties from file: " + propertiesFile); @@ -640,40 +694,57 @@ */ final File journal = File.createTempFile("bigdata", ".jnl"); log.info(journal.getAbsolutePath()); - // journal.deleteOnExit(); properties.setProperty(BigdataSail.Options.FILE, journal .getAbsolutePath()); } + if (properties + .getProperty(AbstractTransactionService.Options.MIN_RELEASE_AGE) == null) { + // Retain old commit points for at least 60s. + properties.setProperty( + AbstractTransactionService.Options.MIN_RELEASE_AGE, + "60000"); + } // instantiate a sail - BigdataSail sail = new BigdataSail(properties); - Repository repo = new BigdataSailRepository(sail); - repo.initialize(); + final BigdataSail sail = new BigdataSail(properties); + final Repository repo = new BigdataSailRepository(sail); + try { - // demonstrate some basic functionality - URI MIKE = new URIImpl("http://www.bigdata.com/rdf#Mike"); - sampleCode.loadSomeData(repo); - System.out.println("Loaded sample data."); - sampleCode.readSomeData(repo, MIKE); - sampleCode.executeSelectQuery(repo, "select ?p ?o where { <"+MIKE.toString()+"> ?p ?o . }", QueryLanguage.SPARQL); - System.out.println("Did SELECT query."); - sampleCode.executeConstructQuery(repo, "construct { <"+MIKE.toString()+"> ?p ?o . } where { <"+MIKE.toString()+"> ?p ?o . }", QueryLanguage.SPARQL); - System.out.println("Did CONSTRUCT query."); - sampleCode.executeFreeTextQuery(repo); - System.out.println("Did free text query."); - sampleCode.executeProvenanceQuery(repo); - System.out.println("Did provenance query."); - sampleCode.executeHistoricalQuery(repo); - System.out.println("Did historical query."); - - System.out.println("done."); - - repo.shutDown(); - - // run one of the LUBM tests - //sampleCode.doU10(); // I see loaded: 1752215 in 116563 millis: 15032 stmts/sec, what do you see? - //sampleCode.doU1(); - + repo.initialize(); + + // demonstrate some basic functionality + final URI MIKE = new URIImpl( + "http://www.bigdata.com/rdf#Mike"); + sampleCode.loadSomeData(repo); + System.out.println("Loaded sample data."); + sampleCode.readSomeData(repo, MIKE); + sampleCode.executeSelectQuery(repo, + "select ?p ?o where { <" + MIKE.toString() + + "> ?p ?o . 
}", QueryLanguage.SPARQL); + System.out.println("Did SELECT query."); + sampleCode.executeConstructQuery(repo, + "construct { <" + MIKE.toString() + + "> ?p ?o . } where { <" + MIKE.toString() + + "> ?p ?o . }", QueryLanguage.SPARQL); + System.out.println("Did CONSTRUCT query."); + if (sampleCode.executeFreeTextQuery(repo)) { + System.out.println("Did free text query."); + } + if (sampleCode.executeProvenanceQuery(repo)) { + System.out.println("Did provenance query."); + } + sampleCode.executeHistoricalQuery(repo); + System.out.println("Did historical query."); + + System.out.println("done."); + + // run one of the LUBM tests + // sampleCode.doU10(); // I see loaded: 1752215 in 116563 millis: 15032 stmts/sec, what do you see? + // sampleCode.doU1(); + + } finally { + repo.shutDown(); + } } catch (Exception ex) { ex.printStackTrace(); } Modified: branches/TERMS_REFACTOR_BRANCH/bigdata/src/java/com/bigdata/journal/Journal.java =================================================================== --- branches/TERMS_REFACTOR_BRANCH/bigdata/src/java/com/bigdata/journal/Journal.java 2011-10-12 12:56:09 UTC (rev 5319) +++ branches/TERMS_REFACTOR_BRANCH/bigdata/src/java/com/bigdata/journal/Journal.java 2011-10-12 13:47:34 UTC (rev 5320) @@ -294,10 +294,31 @@ } + /** + * Ensure that the WORM mode of the journal always uses + * {@link Long#MAX_VALUE} for + * {@link AbstractTransactionService.Options#MIN_RELEASE_AGE}. + * + * @param properties + * The properties. + * + * @return The argument, with the minReleaseAge overridden if necessary. + * + * @see https://sourceforge.net/apps/trac/bigdata/ticket/391 + */ + private Properties checkProperties(final Properties properties) { + if (getBufferStrategy() instanceof WORMStrategy) { + properties.setProperty( + AbstractTransactionService.Options.MIN_RELEASE_AGE, "" + + Long.MAX_VALUE); + } + return properties; + } + protected AbstractLocalTransactionManager newLocalTransactionManager() { final JournalTransactionService abstractTransactionService = new JournalTransactionService( - properties, this) { + checkProperties(properties), this) { { Modified: branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/samples/com/bigdata/samples/SampleCode.java =================================================================== --- branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/samples/com/bigdata/samples/SampleCode.java 2011-10-12 12:56:09 UTC (rev 5319) +++ branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/samples/com/bigdata/samples/SampleCode.java 2011-10-12 13:47:34 UTC (rev 5320) @@ -44,6 +44,9 @@ import org.openrdf.rio.helpers.StatementCollector; import org.openrdf.rio.rdfxml.RDFXMLWriter; +import com.bigdata.journal.IIndexManager; +import com.bigdata.journal.Journal; +import com.bigdata.journal.StoreTypeEnum; import com.bigdata.rdf.model.BigdataStatement; import com.bigdata.rdf.sail.BigdataSail; import com.bigdata.rdf.sail.BigdataSail.BigdataSailConnection; @@ -393,40 +396,77 @@ /** * Demonstrate execution of historical query using a read-only transaction. + * <p> + * Note: Bigdata preserves historical commit points until their release age + * expires. This behavior is controlled by the deployment mode (RW, WORM, or + * cluster) and by + * {@link AbstractTransactionService.Options#MIN_RELEASE_AGE}. Except for + * the WORM deployment mode, you MUST guard a historical commit point on + * which you want to read using a read-lock. The read-lock itself is just a + * read-only connection. 
It can be obtained for any historical commit point + * that you want to "pin" and can be released once you are no longer need to + * "pin" that commit point. + * <p> + * The read-lock is not required for the WORM deployment because it never + * releases historical commit points. * * @param repo * @throws Exception */ - public void executeHistoricalQuery(Repository repo) throws Exception { + public void executeHistoricalQuery(final Repository repo) throws Exception { if (!(repo instanceof BigdataSailRepository)) { return; } - URI MIKE = new URIImpl(BD.NAMESPACE+"Mike"); - URI BRYAN = new URIImpl(BD.NAMESPACE+"Bryan"); - URI PERSON = new URIImpl(BD.NAMESPACE+"Person"); + final IIndexManager indexManager = ((BigdataSailRepository) repo) + .getDatabase().getIndexManager(); - RepositoryConnection cxn = repo.getConnection(); - cxn.setAutoCommit(false); + final boolean isJournal = indexManager instanceof Journal; + + final boolean isWorm = isJournal + && ((Journal) indexManager).getBufferStrategy().getBufferMode() + .getStoreType() == StoreTypeEnum.WORM; + + final URI MIKE = new URIImpl(BD.NAMESPACE+"Mike"); + final URI BRYAN = new URIImpl(BD.NAMESPACE+"Bryan"); + final URI PERSON = new URIImpl(BD.NAMESPACE+"Person"); + + final RepositoryConnection cxn = repo.getConnection(); try { + cxn.setAutoCommit(false); + cxn.remove((Resource)null, (URI)null, (Value)null); cxn.commit(); cxn.add(MIKE, RDF.TYPE, PERSON); cxn.commit(); - long time = System.currentTimeMillis(); + final long time = System.currentTimeMillis(); - Thread.sleep(1000); + // Need a readLock connection if not a Worm store + final RepositoryConnection readLock = isWorm ? null : + ((BigdataSailRepository) repo).getReadOnlyConnection(); - cxn.add(BRYAN, RDF.TYPE, PERSON); - cxn.commit(); + final RepositoryConnection history; + try { + + Thread.sleep(1000); + + cxn.add(BRYAN, RDF.TYPE, PERSON); + cxn.commit(); + + history = ((BigdataSailRepository) repo) + .getReadOnlyConnection(time); + + } finally { + + if (readLock != null) + readLock.close(); + + } - RepositoryConnection history = - ((BigdataSailRepository) repo).getReadOnlyConnection(time); - - String query = + final String query = "select ?s " + "where { " + " ?s <"+RDF.TYPE+"> <"+PERSON+"> " + This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
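Note: Stripped of the demo scaffolding, the read-lock idiom introduced above reduces to the following minimal sketch. It assumes an already-initialized BigdataSailRepository backed by a non-WORM store with a finite MIN_RELEASE_AGE (the patched main() configures 60 seconds), and it uses only the getReadOnlyConnection() and getReadOnlyConnection(long) methods that appear in the patch.

import org.openrdf.repository.RepositoryConnection;

import com.bigdata.rdf.sail.BigdataSailRepository;

public class HistoricalReadSketch {

    /**
     * Pin the current commit point, keep writing, then query the pinned
     * history. The "read-lock" is nothing more than an open read-only
     * connection; holding it prevents the commit point from being recycled.
     */
    public void queryHistory(final BigdataSailRepository repo)
            throws Exception {

        // The commit time we will want to read against later.
        final long time = System.currentTimeMillis();

        // Acquire the read-lock (pins the commit point for "time").
        final RepositoryConnection readLock = repo.getReadOnlyConnection();
        try {

            // ... writers continue to add and commit data here ...

            // Dial a read-only view back to the pinned commit point.
            final RepositoryConnection history =
                    repo.getReadOnlyConnection(time);
            try {
                // ... evaluate queries against the historical view ...
            } finally {
                history.close();
            }

        } finally {
            // Release the pin once the historical view is no longer needed.
            readLock.close();
        }

    }

}

On a WORM journal the read-lock is unnecessary because nothing is ever recycled, which is why the patched Journal forces MIN_RELEASE_AGE to Long.MAX_VALUE in that mode and why SampleCode only acquires the lock when the store type is not WORM.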
From: <tho...@us...> - 2011-10-13 17:11:47
|
Revision: 5332 http://bigdata.svn.sourceforge.net/bigdata/?rev=5332&view=rev Author: thompsonbry Date: 2011-10-13 17:11:35 +0000 (Thu, 13 Oct 2011) Log Message: ----------- Updated the wiki page for the NanoSparqlServer REST API [1] to reflect the new DELETE by access path method. I've added a delete by access path method into the DeleteServlet, which was easy enough. The hard part is the correct encode/decode of the URIs and Literals so they can be sent through. For example, consider a Literal with an embedded quotation mark or a URI with an embedded GT or LT symbol. The most general solution for this is to handle the SPARQL code point sequences [2], which is a bit of work (in fact, Sesame only handles one of the two code point sequence forms - it seems that JavaCC does not recognize the 8 character code point sequences). For the moment, the implementation ignores this issue. The relevant code is in EncodeDecodeValue and can be enhanced once the API change is proven useful. The most critical problem that someone is likely to encounter is a quotation mark embedded within a Literal. At present, that will not come across the API correctly. (It will be reported as a bad request exception at the HTTP layer.) There are two unit tests that demonstrate the encode/decode problem for URIs and Literals. Committed revision rXXXX. (Committed against both branches/BIGDATA_RELEASE_1_0_0, which is the 1.0.x maintenance branch, and TERMS_REFACTOR_BRANCH, which is the basis for our 1.1.x release.) [1] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=NanoSparqlServer#DELETE [2] http://www.w3.org/TR/sparql11-query/#codepointEscape @see https://sourceforge.net/apps/trac/bigdata/ticket/392 Modified Paths: -------------- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/DeleteServlet.java branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/InsertServlet.java branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/StatusServlet.java branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestAll.java branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestNanoSparqlServer2.java branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/DeleteServlet.java branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/InsertServlet.java branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/StatusServlet.java branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestAll.java branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestNanoSparqlServer2.java Added Paths: ----------- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/EncodeDecodeValue.java branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestEncodeDecodeValue.java branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/test_delete_by_access_path.trig branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/test_delete_by_access_path.ttl branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/EncodeDecodeValue.java branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestEncodeDecodeValue.java branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/test_delete_by_access_path.trig
branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/test_delete_by_access_path.ttl Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/DeleteServlet.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/DeleteServlet.java 2011-10-13 15:47:41 UTC (rev 5331) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/DeleteServlet.java 2011-10-13 17:11:35 UTC (rev 5332) @@ -1,372 +1,458 @@ -package com.bigdata.rdf.sail.webapp; - -import java.io.IOException; -import java.io.InputStream; -import java.io.PipedOutputStream; -import java.util.concurrent.FutureTask; -import java.util.concurrent.atomic.AtomicLong; - -import javax.servlet.http.HttpServletRequest; -import javax.servlet.http.HttpServletResponse; - -import org.apache.log4j.Logger; -import org.openrdf.model.Resource; -import org.openrdf.model.Statement; -import org.openrdf.rio.RDFFormat; -import org.openrdf.rio.RDFHandlerException; -import org.openrdf.rio.RDFParser; -import org.openrdf.rio.RDFParserFactory; -import org.openrdf.rio.RDFParserRegistry; -import org.openrdf.rio.helpers.RDFHandlerBase; -import org.openrdf.sail.SailException; - -import com.bigdata.journal.ITx; -import com.bigdata.rdf.sail.BigdataSailRepositoryConnection; -import com.bigdata.rdf.sail.BigdataSail.BigdataSailConnection; -import com.bigdata.rdf.sail.webapp.BigdataRDFContext.AbstractQueryTask; - -/** - * Handler for DELETE by query (DELETE verb) and DELETE by data (POST). - * - * @author martyncutcher - */ -public class DeleteServlet extends BigdataRDFServlet { - - /** - * - */ - private static final long serialVersionUID = 1L; - - static private final transient Logger log = Logger - .getLogger(DeleteServlet.class); - - public DeleteServlet() { - - } - - @Override - protected void doDelete(final HttpServletRequest req, - final HttpServletResponse resp) throws IOException { - - final String queryStr = req.getRequestURI(); - - if (queryStr != null) { - - doDeleteWithQuery(req, resp); - - } else { - - resp.setStatus(HttpServletResponse.SC_BAD_REQUEST); - - } - - } - - /** - * Delete all statements materialized by a DESCRIBE or CONSTRUCT query. - * <p> - * Note: To avoid materializing the statements, this runs the query against - * the last commit time and uses a pipe to connect the query directly to the - * process deleting the statements. This is done while it is holding the - * unisolated connection which prevents concurrent modifications. Therefore - * the entire SELECT + DELETE operation is ACID. - */ - private void doDeleteWithQuery(final HttpServletRequest req, - final HttpServletResponse resp) throws IOException { - - final long begin = System.currentTimeMillis(); - - final String baseURI = req.getRequestURL().toString(); - - final String namespace = getNamespace(req); - - final String queryStr = req.getParameter("query"); - - if (queryStr == null) - throw new UnsupportedOperationException(); - - if (log.isInfoEnabled()) - log.info("delete with query: " + queryStr); - - try { - - /* - * Note: pipe is drained by this thread to consume the query - * results, which are the statements to be deleted. - */ - final PipedOutputStream os = new PipedOutputStream(); - final InputStream is = newPipedInputStream(os); - try { - - // Use this format for the query results. 
- final RDFFormat format = RDFFormat.NTRIPLES; - - final AbstractQueryTask queryTask = getBigdataRDFContext() - .getQueryTask(namespace, ITx.READ_COMMITTED, queryStr, - format.getDefaultMIMEType(), - req, os); - - switch (queryTask.queryType) { - case DESCRIBE: - case CONSTRUCT: - break; - default: - buildResponse(resp, HTTP_BADREQUEST, MIME_TEXT_PLAIN, - "Must be DESCRIBE or CONSTRUCT query."); - return; - } - - final AtomicLong nmodified = new AtomicLong(0L); - - BigdataSailRepositoryConnection conn = null; - try { - - conn = getBigdataRDFContext().getUnisolatedConnection( - namespace); - - final RDFParserFactory factory = RDFParserRegistry - .getInstance().get(format); - - final RDFParser rdfParser = factory.getParser(); - - rdfParser.setValueFactory(conn.getTripleStore() - .getValueFactory()); - - rdfParser.setVerifyData(false); - - rdfParser.setStopAtFirstError(true); - - rdfParser - .setDatatypeHandling(RDFParser.DatatypeHandling.IGNORE); - - rdfParser.setRDFHandler(new RemoveStatementHandler(conn - .getSailConnection(), nmodified)); - - // Wrap as Future. - final FutureTask<Void> ft = new FutureTask<Void>(queryTask); - - // Submit query for evaluation. - getBigdataRDFContext().queryService.execute(ft); - - // Run parser : visited statements will be deleted. - rdfParser.parse(is, baseURI); - - // Await the Future (of the Query) - ft.get(); - - // Commit the mutation. - conn.commit(); - - final long elapsed = System.currentTimeMillis() - begin; - - reportModifiedCount(resp, nmodified.get(), elapsed); - - } catch(Throwable t) { - - if(conn != null) - conn.rollback(); - - throw new RuntimeException(t); - - } finally { - - if (conn != null) - conn.close(); - - } - - } catch (Throwable t) { - - throw BigdataRDFServlet.launderThrowable(t, resp, queryStr); - - } - - } catch (Exception ex) { - - // Will be rendered as an INTERNAL_ERROR. - throw new RuntimeException(ex); - - } - - } - - @Override - protected void doPost(final HttpServletRequest req, - final HttpServletResponse resp) throws IOException { - - final String contentType = req.getContentType(); - - final String queryStr = req.getParameter("query"); - - if (queryStr != null) { - - doDeleteWithQuery(req, resp); - - } else if (contentType != null) { - - doDeleteWithBody(req, resp); - - } else { - - resp.setStatus(HttpServletResponse.SC_BAD_REQUEST); - - } - - } - - /** - * DELETE request with a request body containing the statements to be - * removed. - */ - private void doDeleteWithBody(final HttpServletRequest req, - final HttpServletResponse resp) throws IOException { - - final long begin = System.currentTimeMillis(); - - final String baseURI = req.getRequestURL().toString(); - - final String namespace = getNamespace(req); - - final String contentType = req.getContentType(); - - if (contentType == null) - throw new UnsupportedOperationException(); - - if (log.isInfoEnabled()) - log.info("Request body: " + contentType); - - try { - - /* - * There is a request body, so let's try and parse it. 
- */ - - final RDFFormat format = RDFFormat.forMIMEType(contentType); - - if (format == null) { - - buildResponse(resp, HTTP_BADREQUEST, MIME_TEXT_PLAIN, - "Content-Type not recognized as RDF: " + contentType); - - return; - - } - - final RDFParserFactory rdfParserFactory = RDFParserRegistry - .getInstance().get(format); - - if (rdfParserFactory == null) { - - buildResponse(resp, HTTP_INTERNALERROR, MIME_TEXT_PLAIN, - "Parser factory not found: Content-Type=" + contentType - + ", format=" + format); - - return; - - } - - final RDFParser rdfParser = rdfParserFactory.getParser(); - - final AtomicLong nmodified = new AtomicLong(0L); - - BigdataSailRepositoryConnection conn = null; - try { - - conn = getBigdataRDFContext() - .getUnisolatedConnection(namespace); - - rdfParser.setValueFactory(conn.getTripleStore() - .getValueFactory()); - - rdfParser.setVerifyData(true); - - rdfParser.setStopAtFirstError(true); - - rdfParser - .setDatatypeHandling(RDFParser.DatatypeHandling.IGNORE); - - rdfParser.setRDFHandler(new RemoveStatementHandler(conn - .getSailConnection(), nmodified)); - - /* - * Run the parser, which will cause statements to be deleted. - */ - rdfParser.parse(req.getInputStream(), baseURI); - - // Commit the mutation. - conn.commit(); - - final long elapsed = System.currentTimeMillis() - begin; - - reportModifiedCount(resp, nmodified.get(), elapsed); - - } catch(Throwable t) { - - if (conn != null) - conn.rollback(); - - throw new RuntimeException(t); - - } finally { - - if (conn != null) - conn.close(); - - } - - } catch (Exception ex) { - - // Will be rendered as an INTERNAL_ERROR. - throw new RuntimeException(ex); - - } - - } - - /** - * Helper class removes statements from the sail as they are visited by a parser. - */ - static class RemoveStatementHandler extends RDFHandlerBase { - - private final BigdataSailConnection conn; - private final AtomicLong nmodified; - - public RemoveStatementHandler(final BigdataSailConnection conn, - final AtomicLong nmodified) { - - this.conn = conn; - - this.nmodified = nmodified; - - } - - public void handleStatement(final Statement stmt) - throws RDFHandlerException { - - try { - - final Resource context = stmt.getContext(); - - conn.removeStatements(// - stmt.getSubject(), // - stmt.getPredicate(), // - stmt.getObject(), // - (Resource[]) (context == null ? 
nullArray - : new Resource[] { context })// - ); - - } catch (SailException e) { - - throw new RDFHandlerException(e); - - } - - nmodified.incrementAndGet(); - - } - - } - - static private transient final Resource[] nullArray = new Resource[]{}; - -} +package com.bigdata.rdf.sail.webapp; + +import java.io.IOException; +import java.io.InputStream; +import java.io.PipedOutputStream; +import java.util.concurrent.FutureTask; +import java.util.concurrent.atomic.AtomicLong; + +import javax.servlet.http.HttpServletRequest; +import javax.servlet.http.HttpServletResponse; + +import org.apache.log4j.Logger; +import org.openrdf.model.Resource; +import org.openrdf.model.Statement; +import org.openrdf.model.URI; +import org.openrdf.model.Value; +import org.openrdf.rio.RDFFormat; +import org.openrdf.rio.RDFHandlerException; +import org.openrdf.rio.RDFParser; +import org.openrdf.rio.RDFParserFactory; +import org.openrdf.rio.RDFParserRegistry; +import org.openrdf.rio.helpers.RDFHandlerBase; +import org.openrdf.sail.SailException; + +import com.bigdata.journal.ITx; +import com.bigdata.rdf.sail.BigdataSail.BigdataSailConnection; +import com.bigdata.rdf.sail.BigdataSailRepositoryConnection; +import com.bigdata.rdf.sail.webapp.BigdataRDFContext.AbstractQueryTask; + +/** + * Handler for DELETE by query (DELETE verb) and DELETE by data (POST). + * + * @author martyncutcher + */ +public class DeleteServlet extends BigdataRDFServlet { + + /** + * + */ + private static final long serialVersionUID = 1L; + + static private final transient Logger log = Logger + .getLogger(DeleteServlet.class); + + public DeleteServlet() { + + } + + @Override + protected void doDelete(final HttpServletRequest req, + final HttpServletResponse resp) throws IOException { + + final String queryStr = req.getParameter("query"); + + if (queryStr != null) { + + doDeleteWithQuery(req, resp); + + } else { + + doDeleteWithAccessPath(req, resp); + +// } else { +// +// resp.setStatus(HttpServletResponse.SC_BAD_REQUEST); + + } + + } + + /** + * Delete all statements materialized by a DESCRIBE or CONSTRUCT query. + * <p> + * Note: To avoid materializing the statements, this runs the query against + * the last commit time and uses a pipe to connect the query directly to the + * process deleting the statements. This is done while it is holding the + * unisolated connection which prevents concurrent modifications. Therefore + * the entire SELECT + DELETE operation is ACID. + */ + private void doDeleteWithQuery(final HttpServletRequest req, + final HttpServletResponse resp) throws IOException { + + final long begin = System.currentTimeMillis(); + + final String baseURI = req.getRequestURL().toString(); + + final String namespace = getNamespace(req); + + final String queryStr = req.getParameter("query"); + + if (queryStr == null) + throw new UnsupportedOperationException(); + + if (log.isInfoEnabled()) + log.info("delete with query: " + queryStr); + + try { + + /* + * Note: pipe is drained by this thread to consume the query + * results, which are the statements to be deleted. + */ + final PipedOutputStream os = new PipedOutputStream(); + final InputStream is = newPipedInputStream(os); + try { + + // Use this format for the query results. 
+ final RDFFormat format = RDFFormat.NTRIPLES; + + final AbstractQueryTask queryTask = getBigdataRDFContext() + .getQueryTask(namespace, ITx.READ_COMMITTED, queryStr, + format.getDefaultMIMEType(), + req, os); + + switch (queryTask.queryType) { + case DESCRIBE: + case CONSTRUCT: + break; + default: + buildResponse(resp, HTTP_BADREQUEST, MIME_TEXT_PLAIN, + "Must be DESCRIBE or CONSTRUCT query."); + return; + } + + final AtomicLong nmodified = new AtomicLong(0L); + + BigdataSailRepositoryConnection conn = null; + try { + + conn = getBigdataRDFContext().getUnisolatedConnection( + namespace); + + final RDFParserFactory factory = RDFParserRegistry + .getInstance().get(format); + + final RDFParser rdfParser = factory.getParser(); + + rdfParser.setValueFactory(conn.getTripleStore() + .getValueFactory()); + + rdfParser.setVerifyData(false); + + rdfParser.setStopAtFirstError(true); + + rdfParser + .setDatatypeHandling(RDFParser.DatatypeHandling.IGNORE); + + rdfParser.setRDFHandler(new RemoveStatementHandler(conn + .getSailConnection(), nmodified)); + + // Wrap as Future. + final FutureTask<Void> ft = new FutureTask<Void>(queryTask); + + // Submit query for evaluation. + getBigdataRDFContext().queryService.execute(ft); + + // Run parser : visited statements will be deleted. + rdfParser.parse(is, baseURI); + + // Await the Future (of the Query) + ft.get(); + + // Commit the mutation. + conn.commit(); + + final long elapsed = System.currentTimeMillis() - begin; + + reportModifiedCount(resp, nmodified.get(), elapsed); + + } catch(Throwable t) { + + if(conn != null) + conn.rollback(); + + throw new RuntimeException(t); + + } finally { + + if (conn != null) + conn.close(); + + } + + } catch (Throwable t) { + + throw BigdataRDFServlet.launderThrowable(t, resp, queryStr); + + } + + } catch (Exception ex) { + + // Will be rendered as an INTERNAL_ERROR. + throw new RuntimeException(ex); + + } + + } + + @Override + protected void doPost(final HttpServletRequest req, + final HttpServletResponse resp) throws IOException { + + final String contentType = req.getContentType(); + + final String queryStr = req.getParameter("query"); + + if (queryStr != null) { + + doDeleteWithQuery(req, resp); + + } else if (contentType != null) { + + doDeleteWithBody(req, resp); + + } else { + + resp.setStatus(HttpServletResponse.SC_BAD_REQUEST); + + } + + } + + /** + * DELETE request with a request body containing the statements to be + * removed. + */ + private void doDeleteWithBody(final HttpServletRequest req, + final HttpServletResponse resp) throws IOException { + + final long begin = System.currentTimeMillis(); + + final String baseURI = req.getRequestURL().toString(); + + final String namespace = getNamespace(req); + + final String contentType = req.getContentType(); + + if (contentType == null) + throw new UnsupportedOperationException(); + + if (log.isInfoEnabled()) + log.info("Request body: " + contentType); + + try { + + /* + * There is a request body, so let's try and parse it. 
+ */ + + final RDFFormat format = RDFFormat.forMIMEType(contentType); + + if (format == null) { + + buildResponse(resp, HTTP_BADREQUEST, MIME_TEXT_PLAIN, + "Content-Type not recognized as RDF: " + contentType); + + return; + + } + + final RDFParserFactory rdfParserFactory = RDFParserRegistry + .getInstance().get(format); + + if (rdfParserFactory == null) { + + buildResponse(resp, HTTP_INTERNALERROR, MIME_TEXT_PLAIN, + "Parser factory not found: Content-Type=" + contentType + + ", format=" + format); + + return; + + } + + final RDFParser rdfParser = rdfParserFactory.getParser(); + + final AtomicLong nmodified = new AtomicLong(0L); + + BigdataSailRepositoryConnection conn = null; + try { + + conn = getBigdataRDFContext() + .getUnisolatedConnection(namespace); + + rdfParser.setValueFactory(conn.getTripleStore() + .getValueFactory()); + + rdfParser.setVerifyData(true); + + rdfParser.setStopAtFirstError(true); + + rdfParser + .setDatatypeHandling(RDFParser.DatatypeHandling.IGNORE); + + rdfParser.setRDFHandler(new RemoveStatementHandler(conn + .getSailConnection(), nmodified)); + + /* + * Run the parser, which will cause statements to be deleted. + */ + rdfParser.parse(req.getInputStream(), baseURI); + + // Commit the mutation. + conn.commit(); + + final long elapsed = System.currentTimeMillis() - begin; + + reportModifiedCount(resp, nmodified.get(), elapsed); + + } catch(Throwable t) { + + if (conn != null) + conn.rollback(); + + throw new RuntimeException(t); + + } finally { + + if (conn != null) + conn.close(); + + } + + } catch (Exception ex) { + + // Will be rendered as an INTERNAL_ERROR. + throw new RuntimeException(ex); + + } + + } + + /** + * Helper class removes statements from the sail as they are visited by a parser. + */ + static class RemoveStatementHandler extends RDFHandlerBase { + + private final BigdataSailConnection conn; + private final AtomicLong nmodified; + + public RemoveStatementHandler(final BigdataSailConnection conn, + final AtomicLong nmodified) { + + this.conn = conn; + + this.nmodified = nmodified; + + } + + public void handleStatement(final Statement stmt) + throws RDFHandlerException { + + try { + + final Resource context = stmt.getContext(); + + conn.removeStatements(// + stmt.getSubject(), // + stmt.getPredicate(), // + stmt.getObject(), // + (Resource[]) (context == null ? nullArray + : new Resource[] { context })// + ); + + } catch (SailException e) { + + throw new RDFHandlerException(e); + + } + + nmodified.incrementAndGet(); + + } + + } + + /** + * Delete all statements described by an access path. 
+ */ + private void doDeleteWithAccessPath(final HttpServletRequest req, + final HttpServletResponse resp) throws IOException { + + final long begin = System.currentTimeMillis(); + + final String namespace = getNamespace(req); + + final Resource s; + final URI p; + final Value o; + final Resource c; + try { + s = EncodeDecodeValue.decodeResource(req.getParameter("s")); + p = EncodeDecodeValue.decodeURI(req.getParameter("p")); + o = EncodeDecodeValue.decodeValue(req.getParameter("o")); + c = EncodeDecodeValue.decodeResource(req.getParameter("c")); + } catch (IllegalArgumentException ex) { + buildResponse(resp, HTTP_BADREQUEST, MIME_TEXT_PLAIN, + ex.getLocalizedMessage()); + return; + } + + if (log.isInfoEnabled()) + log.info("delete with access path: (s=" + s + ", p=" + p + ", o=" + + o + ", c=" + c + ")"); + + try { + + try { + + BigdataSailRepositoryConnection conn = null; + try { + + conn = getBigdataRDFContext().getUnisolatedConnection( + namespace); + + // Remove all statements matching that access path. + final long nmodified = conn.getSailConnection() + .getBigdataSail().getDatabase() + .removeStatements(s, p, o, c); + + // Commit the mutation. + conn.commit(); + + final long elapsed = System.currentTimeMillis() - begin; + + reportModifiedCount(resp, nmodified, elapsed); + + } catch(Throwable t) { + + if(conn != null) + conn.rollback(); + + throw new RuntimeException(t); + + } finally { + + if (conn != null) + conn.close(); + + } + + } catch (Throwable t) { + + throw BigdataRDFServlet.launderThrowable(t, resp, ""); + + } + + } catch (Exception ex) { + + // Will be rendered as an INTERNAL_ERROR. + throw new RuntimeException(ex); + + } + + } + + static private transient final Resource[] nullArray = new Resource[]{}; + +} Added: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/EncodeDecodeValue.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/EncodeDecodeValue.java (rev 0) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/EncodeDecodeValue.java 2011-10-13 17:11:35 UTC (rev 5332) @@ -0,0 +1,421 @@ +/** + +Copyright (C) SYSTAP, LLC 2006-2011. All rights reserved. + +Contact: + SYSTAP, LLC + 4501 Tower Road + Greensboro, NC 27410 + lic...@bi... + +This program is free software; you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation; version 2 of the License. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +You should have received a copy of the GNU General Public License +along with this program; if not, write to the Free Software +Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +*/ +/* + * Created on Oct 13, 2011 + */ + +package com.bigdata.rdf.sail.webapp; + +import org.openrdf.model.BNode; +import org.openrdf.model.Literal; +import org.openrdf.model.Resource; +import org.openrdf.model.URI; +import org.openrdf.model.Value; +import org.openrdf.model.impl.LiteralImpl; +import org.openrdf.model.impl.URIImpl; + +/** + * Utility class to encode/decode RDF {@link Value}s for interchange with the + * REST API. 
+ * + * @author <a href="mailto:tho...@us...">Bryan Thompson</a> + * @version $Id$ + */ +public class EncodeDecodeValue { + +// /* +// * Note: The decode logic was derived from the JavaCharStream file generated +// * by JavaCC. +// */ +// +// private static final int hexval(char c) { +// switch (c) { +// case '0': +// return 0; +// case '1': +// return 1; +// case '2': +// return 2; +// case '3': +// return 3; +// case '4': +// return 4; +// case '5': +// return 5; +// case '6': +// return 6; +// case '7': +// return 7; +// case '8': +// return 8; +// case '9': +// return 9; +// +// case 'a': +// case 'A': +// return 10; +// case 'b': +// case 'B': +// return 11; +// case 'c': +// case 'C': +// return 12; +// case 'd': +// case 'D': +// return 13; +// case 'e': +// case 'E': +// return 14; +// case 'f': +// case 'F': +// return 15; +// } +// +// throw new AssertionError(); +// } +// +// private static class DecodeString { +// private final StringBuilder sb = new StringBuilder(); +// private final String src; +// private int srcpos = 0; +// DecodeString(final String s) { +// this.src = s; +// } +// +// private char ReadByte() { +// return src.charAt(srcpos++); +// } +// +// private void backup(final int n) { +// +// sb.setLength(sb.length() - n); +// +// } +// +// /** +// * Read a character. +// * +// * TODO Does not handle the 8 character escape code sequences (but +// * neither does the SPARQL parser!) +// */ +// private char readChar() throws java.io.IOException { +// char c; +// +// sb.append(c = ReadByte()); +// +// if (c == '\\') { +// +// int backSlashCnt = 1; +// +// for (;;) // Read all the backslashes +// { +// +// try { +// sb.append(c=ReadByte()); +// if (c != '\\') { +// // found a non-backslash char. +// if ((c == 'u') && ((backSlashCnt & 1) == 1)) { +// if (--bufpos < 0) +// bufpos = bufsize - 1; +// +// break; +// } +// +// backup(backSlashCnt); +// return '\\'; +// } +// } catch (java.io.IOException e) { +// // We are returning one backslash so we should only +// // backup (count-1) +// if (backSlashCnt > 1) +// backup(backSlashCnt - 1); +// +// return '\\'; +// } +// +// backSlashCnt++; +// } +// +// // Here, we have seen an odd number of backslash's followed by a +// // 'u' +// try { +// while ((c = ReadByte()) == 'u') {} +// +// // Decode the code sequence. +// c = (char) (hexval(c) << 12 | hexval(ReadByte()) << 8 +// | hexval(ReadByte()) << 4 | hexval(ReadByte())); +// +// sb.append(c); +// +// } catch (java.io.IOException e) { +// +// throw new Error("Invalid escape character"); +// +// } +// +// if (backSlashCnt == 1) +// return c; +// else { +// backup(backSlashCnt - 1); +// return '\\'; +// } +// } else { +// return c; +// } +// } +// +// } +// +// /** +// * Apply code point escape sequences for anything that we need to escape. +// * For our purposes, this is just <code>"</code> and <code>></code>. +// * @param s +// * @return +// * +// * @see http://www.w3.org/TR/sparql11-query/#codepointEscape +// */ +// static String encodeEscapeSequences(final String s) { +// +// return s; +// +// } +// +// /** +// * Decode all code point escape sequences. Note that we need to decode more +// * than we encode since we are not responsible for the encoding when it +// * comes to the REST API, just the decoding. +// * +// * @param s +// * The string, which may have escape sequences encoded. +// * +// * @return The string with escape sequences decoded. +// * +// * @throws IllegalArgumentException +// * if the argument is <code>null</code>. 
+// * @throws IllegalArgumentException +// * if the argument is contains an ill-formed escape code +// * sequence. +// * +// * @see http://www.w3.org/TR/sparql11-query/#codepointEscape +// * +// * FIXME Implement encode/decode. +// */ +// static String decodeEscapeSequences(final String s) { +// +//// // Remove any escape sequences. +//// final StringBuilder sb = new StringBuilder(); +//// for (int i = 0; i < slen; i++) { +//// char ch = s.charAt(i); +//// if (ch == '\\') { +//// if (i + 1 == slen) +//// throw new IllegalArgumentException(s); +//// ch = s.charAt(i); +//// } +//// sb.append(ch); +//// } +//// final String t = sb.toString(); +// +// return s; +// +// } + + /** + * Decode a URI or Literal. + * + * @param s + * The value to be decoded. + * + * @return The URI or literal -or- <code>null</code> if the argument was + * <code>null</code>. + * + * @throws IllegalArgumentException + * if the request parameter could not be decoded as an RDF + * {@link Value}. + */ + public static Value decodeValue(final String s) { + + if(s == null) + return null; + +// final String s = decodeEscapeSequences(ss); + + final int slen = s.length(); + + if (slen == 0) + throw new IllegalArgumentException("<Empty String>"); + + final char ch = s.charAt(0); + + if(ch == '\"' || ch == '\'') { + + /* + * Literal. + */ + + final int closeQuotePos = s.indexOf(ch, 1/* fromIndex */); + + if (closeQuotePos == -1) + throw new IllegalArgumentException(s); + + final String label = s.substring(1, closeQuotePos); + + if (slen == closeQuotePos + 1) { + + /* + * Plain literal. + */ + + return new LiteralImpl(label); + + } + + final char ch2 = s.charAt(closeQuotePos + 1); + + if (ch2 == '@') { + + /* + * Language code literal. + */ + + final String languageCode = s.substring(closeQuotePos + 2); + + return new LiteralImpl(label, languageCode); + + } else if (ch2 == '^') { + + /* + * Datatype literal. + */ + + if (slen <= closeQuotePos + 2) + throw new IllegalArgumentException(s); + + if (s.charAt(closeQuotePos + 2) != '^') + throw new IllegalArgumentException(s); + + final String datatypeStr = s.substring(closeQuotePos + 3); + + final URI datatypeURI = decodeURI(datatypeStr); + + return new LiteralImpl(label,datatypeURI); + + } else { + + throw new IllegalArgumentException(s); + + } + + } else if (ch == '<') { + + /* + * URI + */ + + if (s.charAt(slen - 1) != '>') + throw new IllegalArgumentException(s); + + final String uriStr = s.substring(1, slen - 1); + + return new URIImpl(uriStr); + + } else { + + throw new IllegalArgumentException(s); + + } + + } + + /** + * Type safe variant for a {@link Resource}. + */ + public static Resource decodeResource(final String param) { + + final Value v = decodeValue(param); + + if (v == null || v instanceof Resource) + return (Resource) v; + + throw new IllegalArgumentException("Not a Resource: '" + param + "'"); + + } + + /** + * Type safe variant for a {@link URI}. + */ + public static URI decodeURI(final String param) { + + final Value v = decodeValue(param); + + if (v == null || v instanceof URI) + return (URI) v; + + throw new IllegalArgumentException("Not an URI: '" + param + "'"); + + } + + /** + * Encode an RDF {@link Value} as it should appear if used in a SPARQL + * query. E.g., a literal will look like <code>"abc"</code>, + * <code>"abc"@en</code> or + * <code>"3"^^xsd:int. A URI will look like <code><http://www.bigdata.com/></code> + * . + * + * @param v + * The value (optional). 
+ * + * @return The encoded value -or- <code>null</code> if the argument is + * <code>null</code>. + * + * @throws IllegalArgumentException + * if the argument is a {@link BNode}. + */ + public static String encodeValue(final Value v) { + if(v == null) + return null; + if (v instanceof BNode) + throw new IllegalArgumentException(); + if (v instanceof URI) { + return "<" + v.stringValue() + ">"; + } + if (v instanceof Literal) { + final Literal lit = (Literal) v; + final StringBuilder sb = new StringBuilder(); + sb.append("\""); + sb.append(lit.getLabel()); + sb.append("\""); + if (lit.getLanguage() != null) { + sb.append("@"); + sb.append(lit.getLanguage()); + } + if (lit.getDatatype() != null) { + sb.append("^^"); + sb.append(encodeValue(lit.getDatatype())); + } + return sb.toString(); + } + throw new AssertionError(); + } + +} Property changes on: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/EncodeDecodeValue.java ___________________________________________________________________ Added: svn:keywords + Id Date Revision Author HeadURL Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/InsertServlet.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/InsertServlet.java 2011-10-13 15:47:41 UTC (rev 5331) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/InsertServlet.java 2011-10-13 17:11:35 UTC (rev 5332) @@ -1,422 +1,426 @@ -package com.bigdata.rdf.sail.webapp; - -import java.net.HttpURLConnection; -import java.net.URL; -import java.net.URLConnection; -import java.util.Arrays; -import java.util.Vector; -import java.util.concurrent.atomic.AtomicLong; - -import javax.servlet.http.HttpServletRequest; -import javax.servlet.http.HttpServletResponse; - -import org.apache.log4j.Logger; -import org.openrdf.model.Resource; -import org.openrdf.model.Statement; -import org.openrdf.rio.RDFFormat; -import org.openrdf.rio.RDFHandlerException; -import org.openrdf.rio.RDFParser; -import org.openrdf.rio.RDFParserFactory; -import org.openrdf.rio.RDFParserRegistry; -import org.openrdf.rio.helpers.RDFHandlerBase; -import org.openrdf.sail.SailException; - -import com.bigdata.rdf.sail.BigdataSailRepositoryConnection; -import com.bigdata.rdf.sail.BigdataSail.BigdataSailConnection; - -/** - * Handler for INSERT operations. - * - * @author martyncutcher - */ -public class InsertServlet extends BigdataRDFServlet { - - /** - * - */ - private static final long serialVersionUID = 1L; - - static private final transient Logger log = Logger.getLogger(InsertServlet.class); - - public InsertServlet() { - - } - - /** - * <p> - * Perform an HTTP-POST, which corresponds to the basic CRUD operation - * "create" according to the generic interaction semantics of HTTP REST. The - * operation will be executed against the target namespace per the URI. - * </p> - * - * <pre> - * POST [/namespace/NAMESPACE] - * ... - * Content-Type: - * ... - * - * BODY - * </pre> - * <p> - * Where <code>BODY</code> is the new RDF content using the representation - * indicated by the <code>Content-Type</code>. - * </p> - * <p> - * -OR- - * </p> - * - * <pre> - * POST [/namespace/NAMESPACE] ?uri=URL - * </pre> - * <p> - * Where <code>URI</code> identifies a resource whose RDF content will be - * inserted into the database. The <code>uri</code> query parameter may - * occur multiple times. 
All identified resources will be loaded within a - * single native transaction. Bigdata provides snapshot isolation so you can - * continue to execute queries against the last commit point while this - * operation is executed. - * </p> - */ - @Override - protected void doPost(HttpServletRequest req, HttpServletResponse resp) { - try { - if (req.getParameter("uri") != null) { - doPostWithURIs(req, resp); - return; - } else { - doPostWithBody(req, resp); - return; - } - } catch (Exception e) { - throw new RuntimeException(e); - } - } - - /** - * POST with request body containing statements to be inserted. - * - * @param req - * The request. - * - * @return The response. - * - * @throws Exception - */ - private void doPostWithBody(final HttpServletRequest req, - final HttpServletResponse resp) throws Exception { - - final long begin = System.currentTimeMillis(); - - final String baseURI = req.getRequestURL().toString(); - - final String namespace = getNamespace(req); - - final String contentType = req.getContentType(); - - if (log.isInfoEnabled()) - log.info("Request body: " + contentType); - - final RDFFormat format = RDFFormat.forMIMEType(contentType); - - if (format == null) { - - buildResponse(resp, HTTP_BADREQUEST, MIME_TEXT_PLAIN, - "Content-Type not recognized as RDF: " + contentType); - - return; - - } - - if (log.isInfoEnabled()) - log.info("RDFFormat=" + format); - - final RDFParserFactory rdfParserFactory = RDFParserRegistry - .getInstance().get(format); - - if (rdfParserFactory == null) { - - buildResponse(resp, HTTP_INTERNALERROR, MIME_TEXT_PLAIN, - "Parser factory not found: Content-Type=" - + contentType + ", format=" + format); - - return; - - } - - try { - - final AtomicLong nmodified = new AtomicLong(0L); - - BigdataSailRepositoryConnection conn = null; - try { - - conn = getBigdataRDFContext() - .getUnisolatedConnection(namespace); - - /* - * There is a request body, so let's try and parse it. - */ - - final RDFParser rdfParser = rdfParserFactory.getParser(); - - rdfParser.setValueFactory(conn.getTripleStore() - .getValueFactory()); - - rdfParser.setVerifyData(true); - - rdfParser.setStopAtFirstError(true); - - rdfParser - .setDatatypeHandling(RDFParser.DatatypeHandling.IGNORE); - - rdfParser.setRDFHandler(new AddStatementHandler(conn - .getSailConnection(), nmodified)); - - /* - * Run the parser, which will cause statements to be inserted. - */ - rdfParser.parse(req.getInputStream(), baseURI); - - // Commit the mutation. - conn.commit(); - - final long elapsed = System.currentTimeMillis() - begin; - - reportModifiedCount(resp, nmodified.get(), elapsed); - - return; - - } catch(Throwable t) { - - if(conn != null) - conn.rollback(); - - throw new RuntimeException(t); - - } finally { - - if (conn != null) - conn.close(); - - } - - } catch (Exception ex) { - - // Will be rendered as an INTERNAL_ERROR. - throw new RuntimeException(ex); - - } - - } - - /** - * POST with URIs of resources to be inserted (loads the referenced - * resources). - * - * @param req - * The request. - * - * @return The response. 
- * - * @throws Exception - */ - private void doPostWithURIs(final HttpServletRequest req, - final HttpServletResponse resp) throws Exception { - - final long begin = System.currentTimeMillis(); - - final String namespace = getNamespace(req); - - final String[] uris = req.getParameterValues("uri"); - - if (uris == null) - throw new UnsupportedOperationException(); - - if (uris.length == 0) { - - final long elapsed = System.currentTimeMillis() - begin; - - reportModifiedCount(resp, 0L/* nmodified */, elapsed); - - return; - - } - - if (log.isInfoEnabled()) - log.info("URIs: " + Arrays.toString(uris)); - - // Before we do anything, make sure we have valid URLs. - final Vector<URL> urls = new Vector<URL>(uris.length); - - for (String uri : uris) { - - urls.add(new URL(uri)); - - } - - try { - - final AtomicLong nmodified = new AtomicLong(0L); - - BigdataSailRepositoryConnection conn = null; - try { - - conn = getBigdataRDFContext().getUnisolatedConnection( - namespace); - - for (URL url : urls) { - - URLConnection hconn = null; - try { - - hconn = url.openConnection(); - if (hconn instanceof HttpURLConnection) { - ((HttpURLConnection) hconn).setRequestMethod("GET"); - } - hconn.setDoInput(true); - hconn.setDoOutput(false); - hconn.setReadTimeout(0);// no timeout? http param? - - /* - * There is a request body, so let's try and parse it. - */ - - final String contentType = hconn.getContentType(); - - final RDFFormat format = RDFFormat - .forMIMEType(contentType); - - if (format == null) { - buildResponse(resp, HTTP_BADREQUEST, - MIME_TEXT_PLAIN, - "Content-Type not recognized as RDF: " - + contentType); - - return; - } - - final RDFParserFactory rdfParserFactory = RDFParserRegistry - .getInstance().get(format); - - if (rdfParserFactory == null) { - buildResponse(resp, HTTP_INTERNALERROR, - MIME_TEXT_PLAIN, - "Parser not found: Content-Type=" - + contentType); - - return; - } - - final RDFParser rdfParser = rdfParserFactory - .getParser(); - - rdfParser.setValueFactory(conn.getTripleStore() - .getValueFactory()); - - rdfParser.setVerifyData(true); - - rdfParser.setStopAtFirstError(true); - - rdfParser - .setDatatypeHandling(RDFParser.DatatypeHandling.IGNORE); - - rdfParser.setRDFHandler(new AddStatementHandler(conn - .getSailConnection(), nmodified)); - - /* - * Run the parser, which will cause statements to be - * inserted. - */ - - rdfParser.parse(hconn.getInputStream(), url - .toExternalForm()/* baseURL */); - - } finally { - - if (hconn instanceof HttpURLConnection) { - /* - * Disconnect, but only after we have loaded all the - * URLs. Disconnect is optional for java.net. It is a - * hint that you will not be accessing more resources on - * the connected host. By disconnecting only after all - * resources have been loaded we are basically assuming - * that people are more likely to load from a single - * host. - */ - ((HttpURLConnection) hconn).disconnect(); - } - - } - - } // next URI. - - // Commit the mutation. - conn.commit(); - - final long elapsed = System.currentTimeMillis() - begin; - - reportModifiedCount(resp, nmodified.get(), elapsed); - - } catch(Throwable t) { - - if(conn != null) - conn.rollback(); - - throw new RuntimeException(t); - - } finally { - - if (conn != null) - conn.close(); - - } - - } catch (Exception ex) { - - // Will be rendered as an INTERNAL_ERROR. - throw new RuntimeException(ex); - - } - - } - - /** - * Helper class adds statements to the sail as they are visited by a parser. 
- */ - static class AddStatementHandler extends RDFHandlerBase { - - private final BigdataSailConnection conn; - private final AtomicLong nmodified; - - public AddStatementHandler(final BigdataSailConnection conn, - final AtomicLong nmodified) { - this.conn = conn; - this.nmodified = nmodified; - } - - public void handleStatement(final Statement stmt) - throws RDFHandlerException { - - try { - - conn.addStatement(// - stmt.getSubject(), // - stmt.getPredicate(), // - stmt.getObject(), // - (Resource[]) (stmt.getContext() == null ? new Resource[] { } - : new Resource[] { stmt.getContext() })// - ); - - } catch (SailException e) { - - throw new RDFHandlerException(e); - - } - - nmodified.incrementAndGet(); - - } - - } - -} +package com.bigdata.rdf.sail.webapp; + +import java.net.HttpURLConnection; +import java.net.URL; +import java.net.URLConnection; +import java.util.Arrays; +import java.util.Vector; +import java.util.concurrent.atomic.AtomicLong; + +import javax.servlet.http.HttpServletRequest; +import javax.servlet.http.HttpServletResponse; + +import org.apache.log4j.Logger; +import org.openrdf.model.Resource; +import org.openrdf.model.Statement; +import org.openrdf.rio.RDFFormat; +import org.openrdf.rio.RDFHandlerException; +import org.openrdf.rio.RDFParser; +import org.openrdf.rio.RDFParserFactory; +import org.openrdf.rio.RDFParserRegistry; +import org.openrdf.rio.helpers.RDFHandlerBase; +import org.openrdf.sail.SailException; + +import com.bigdata.rdf.sail.BigdataSailRepositoryConnection; +import com.bigdata.rdf.sail.BigdataSail.BigdataSailConnection; + +/** + * Handler for INSERT operations. + * + * @author martyncutcher + */ +public class InsertServlet extends BigdataRDFServlet { + + /** + * + */ + private static final long serialVersionUID = 1L; + + static private final transient Logger log = Logger.getLogger(InsertServlet.class); + + public InsertServlet() { + + } + + /** + * <p> + * Perform an HTTP-POST, which corresponds to the basic CRUD operation + * "create" according to the generic interaction semantics of HTTP REST. The + * operation will be executed against the target namespace per the URI. + * </p> + * + * <pre> + * POST [/namespace/NAMESPACE] + * ... + * Content-Type: + * ... + * + * BODY + * </pre> + * <p> + * Where <code>BODY</code> is the new RDF content using the representation + * indicated by the <code>Content-Type</code>. + * </p> + * <p> + * -OR- + * </p> + * + * <pre> + * POST [/namespace/NAMESPACE] ?uri=URL + * </pre> + * <p> + * Where <code>URI</code> identifies a resource whose RDF content will be + * inserted into the database. The <code>uri</code> query parameter may + * occur multiple times. All identified resources will be loaded within a + * single native transaction. Bigdata provides snapshot isolation so you can + * continue to execute queries against the last commit point while this + * operation is executed. + * </p> + */ + @Override + protected void doPost(HttpServletRequest req, HttpServletResponse resp) { + try { + if (req.getParameter("uri") != null) { + doPostWithURIs(req, resp); + return; + } else { + doPostWithBody(req, resp); + return; + } + } catch (Exception e) { + throw new RuntimeException(e); + } + } + + /** + * POST with request body containing statements to be inserted. + * + * @param req + * The request. + * + * @return The response. 
+ * + * @throws Exception + */ + private void doPostWithBody(final HttpServletRequest req, + final HttpServletResponse resp) throws Exception { + + final long begin = System.currentTimeMillis(); + + final String baseURI = req.getRequestURL().toString(); + + final String namespace = getNamespace(req); + + final String contentType = req.getContentType(); + + if (log.isInfoEnabled()) + log.info("Request body: " + contentType); + + final RDFFormat format = RDFFormat.forMIMEType(contentType); + + if (format == null) { + + buildResponse(resp, HTTP_BADREQUEST, MIME_TEXT_PLAIN, + "Content-Type not recognized as RDF: " + contentType); + + return; + + } + + if (log.isInfoEnabled()) + log.info("RDFFormat=" + format); + + final RDFParserFactory rdfParserFactory = RDFParserRegistry + .getInstance().get(format); + + if (rdfParserFactory == null) { + + buildResponse(resp, HTTP_INTERNALERROR, MIME_TEXT_PLAIN, + "Parser factory not found: Content-Type=" + + contentType + ", format=" + format); + + return; + + } + + try { +... [truncated message content] |
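The encode/decode convention established by EncodeDecodeValue above is simply the surface syntax a Value would have in a SPARQL query, which makes the round trip easy to sanity check against the committed API. The sketch below is illustrative only (the wrapper class and main method are invented for the example); the calls it makes are the same ones doDeleteWithAccessPath issues against the s, p, o and c request parameters.

import org.openrdf.model.Value;
import org.openrdf.model.impl.LiteralImpl;
import org.openrdf.model.impl.URIImpl;

import com.bigdata.rdf.sail.webapp.EncodeDecodeValue;

public class EncodeDecodeValueExample {

    public static void main(final String[] args) {

        final Value[] values = new Value[] {
                // URI => <http://www.bigdata.com/>
                new URIImpl("http://www.bigdata.com/"),
                // Plain literal => "abc"
                new LiteralImpl("abc"),
                // Language code literal => "abc"@en
                new LiteralImpl("abc", "en"),
                // Datatype literal => "3"^^<http://www.w3.org/2001/XMLSchema#int>
                new LiteralImpl("3", new URIImpl(
                        "http://www.w3.org/2001/XMLSchema#int")) };

        for (Value expected : values) {

            // The form used for an s, p, o or c request parameter (before
            // any URL-encoding for transport).
            final String wire = EncodeDecodeValue.encodeValue(expected);

            // What the servlet recovers from req.getParameter(...).
            final Value actual = EncodeDecodeValue.decodeValue(wire);

            if (!expected.equals(actual))
                throw new AssertionError(wire);

        }

    }

}

Until the escape sequence handling marked FIXME in the commit is implemented, a label containing a quote character will not survive this round trip, so such literals should be avoided in request parameters for now.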
From: <tho...@us...> - 2011-10-13 18:54:36
|
Revision: 5333 http://bigdata.svn.sourceforge.net/bigdata/?rev=5333&view=rev Author: thompsonbry Date: 2011-10-13 18:54:23 +0000 (Thu, 13 Oct 2011) Log Message: ----------- I have added a "context-uri" request parameter which may be used with the INSERT and UPDATE operations in the REST API. This request parameter is documented at [1] and only affects operations against a quads mode database. Note: When present, the "context-uri" parameter overrides the default context for all resources which are being inserted. Note: There is also a bug fix. The INSERT with URI(s) method was failing to impose the URI of the resource being loaded as the default context. It now uses the URI of the resource that was loaded as the default context unless the "context-uri" request parameter is also specified. [1] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=NanoSparqlServer @see https://sourceforge.net/apps/trac/bigdata/ticket/393 Modified Paths: -------------- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/InsertServlet.java branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/UpdateServlet.java branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestNanoSparqlServer2.java branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/InsertServlet.java branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/UpdateServlet.java branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestNanoSparqlServer2.java Added Paths: ----------- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/insert_triples_with_defaultContext.ttl branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/insert_triples_with_defaultContext.ttl Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/InsertServlet.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/InsertServlet.java 2011-10-13 17:11:35 UTC (rev 5332) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/InsertServlet.java 2011-10-13 18:54:23 UTC (rev 5333) @@ -13,6 +13,7 @@ import org.apache.log4j.Logger; import org.openrdf.model.Resource; import org.openrdf.model.Statement; +import org.openrdf.model.impl.URIImpl; import org.openrdf.rio.RDFFormat; import org.openrdf.rio.RDFHandlerException; import org.openrdf.rio.RDFParser; @@ -21,8 +22,8 @@ import org.openrdf.rio.helpers.RDFHandlerBase; import org.openrdf.sail.SailException; +import com.bigdata.rdf.sail.BigdataSail.BigdataSailConnection; import com.bigdata.rdf.sail.BigdataSailRepositoryConnection; -import com.bigdata.rdf.sail.BigdataSail.BigdataSailConnection; /** * Handler for INSERT operations. @@ -143,6 +144,25 @@ } + /* + * Allow the caller to specify the default context. 
+ */ + final Resource defaultContext; + { + final String s = req.getParameter("context-uri"); + if (s != null) { + try { + defaultContext = new URIImpl(s); + } catch (IllegalArgumentException ex) { + buildResponse(resp, HTTP_INTERNALERROR, MIME_TEXT_PLAIN, + ex.getLocalizedMessage()); + return; + } + } else { + defaultContext = null; + } + } + try { final AtomicLong nmodified = new AtomicLong(0L); @@ -170,7 +190,7 @@ .setDatatypeHandling(RDFParser.DatatypeHandling.IGNORE); rdfParser.setRDFHandler(new AddStatementHandler(conn - .getSailConnection(), nmodified)); + .getSailConnection(), nmodified, defaultContext)); /* * Run the parser, which will cause statements to be inserted. @@ -254,6 +274,25 @@ } + /* + * Allow the caller to specify the default context. + */ + final Resource defaultContext; + { + final String s = req.getParameter("context-uri"); + if (s != null) { + try { + defaultContext = new URIImpl(s); + } catch (IllegalArgumentException ex) { + buildResponse(resp, HTTP_INTERNALERROR, MIME_TEXT_PLAIN, + ex.getLocalizedMessage()); + return; + } + } else { + defaultContext = null; + } + } + try { final AtomicLong nmodified = new AtomicLong(0L); @@ -266,6 +305,11 @@ for (URL url : urls) { + // Use the default context if one was given and otherwise + // the URI from which the data are being read. + final Resource defactoContext = defaultContext == null ? new URIImpl( + url.toExternalForm()) : defaultContext; + URLConnection hconn = null; try { @@ -325,7 +369,7 @@ .setDatatypeHandling(RDFParser.DatatypeHandling.IGNORE); rdfParser.setRDFHandler(new AddStatementHandler(conn - .getSailConnection(), nmodified)); + .getSailConnection(), nmodified, defactoContext)); /* * Run the parser, which will cause statements to be @@ -391,11 +435,19 @@ private final BigdataSailConnection conn; private final AtomicLong nmodified; - + private final Resource[] defaultContexts; + public AddStatementHandler(final BigdataSailConnection conn, - final AtomicLong nmodified) { + final AtomicLong nmodified, final Resource defaultContext) { this.conn = conn; this.nmodified = nmodified; + final boolean quads = conn.getTripleStore().isQuads(); + if (quads && defaultContext != null) { + // The default context may only be specified for quads. + this.defaultContexts = new Resource[] { defaultContext }; + } else { + this.defaultContexts = new Resource[0]; + } } public void handleStatement(final Statement stmt) @@ -407,7 +459,7 @@ stmt.getSubject(), // stmt.getPredicate(), // stmt.getObject(), // - (Resource[]) (stmt.getContext() == null ? new Resource[] { } + (Resource[]) (stmt.getContext() == null ? defaultContexts : new Resource[] { stmt.getContext() })// ); Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/UpdateServlet.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/UpdateServlet.java 2011-10-13 17:11:35 UTC (rev 5332) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/UpdateServlet.java 2011-10-13 18:54:23 UTC (rev 5333) @@ -10,6 +10,8 @@ import javax.servlet.http.HttpServletResponse; import org.apache.log4j.Logger; +import org.openrdf.model.Resource; +import org.openrdf.model.impl.URIImpl; import org.openrdf.rio.RDFFormat; import org.openrdf.rio.RDFParser; import org.openrdf.rio.RDFParserFactory; @@ -117,6 +119,25 @@ } + /* + * Allow the caller to specify the default context. 
+ */ + final Resource defaultContext; + { + final String s = req.getParameter("context-uri"); + if (s != null) { + try { + defaultContext = new URIImpl(s); + } catch (IllegalArgumentException ex) { + buildResponse(resp, HTTP_INTERNALERROR, MIME_TEXT_PLAIN, + ex.getLocalizedMessage()); + return; + } + } else { + defaultContext = null; + } + } + if (log.isInfoEnabled()) log.info("update with query: " + queryStr); @@ -213,7 +234,7 @@ .setDatatypeHandling(RDFParser.DatatypeHandling.IGNORE); rdfParser.setRDFHandler(new AddStatementHandler(conn - .getSailConnection(), nmodified)); + .getSailConnection(), nmodified, defaultContext)); /* * Run the parser, which will cause statements to be Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestNanoSparqlServer2.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestNanoSparqlServer2.java 2011-10-13 17:11:35 UTC (rev 5332) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestNanoSparqlServer2.java 2011-10-13 18:54:23 UTC (rev 5333) @@ -2,9 +2,11 @@ import java.io.ByteArrayOutputStream; import java.io.File; +import java.io.FileReader; import java.io.IOException; import java.io.InputStream; import java.io.InputStreamReader; +import java.io.LineNumberReader; import java.io.OutputStream; import java.io.Reader; import java.io.StringWriter; @@ -51,6 +53,7 @@ import org.openrdf.repository.RepositoryException; import org.openrdf.rio.RDFFormat; import org.openrdf.rio.RDFHandlerException; +import org.openrdf.rio.RDFParseException; import org.openrdf.rio.RDFParser; import org.openrdf.rio.RDFParserFactory; import org.openrdf.rio.RDFParserRegistry; @@ -88,126 +91,126 @@ */ private static final String packagePath = "bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/"; - /** - * A jetty {@link Server} running a {@link NanoSparqlServer} instance which - * is running against that {@link #m_indexManager}. - */ - private Server m_fixture; + /** + * A jetty {@link Server} running a {@link NanoSparqlServer} instance which + * is running against that {@link #m_indexManager}. + */ + private Server m_fixture; - /** - * The namespace of the {@link AbstractTripleStore} instance against which - * the test is running. A unique namespace is used for each test run, but - * the namespace is based on the test name. - */ - private String namespace; - - /** - * The effective {@link NanoSparqlServer} http end point. - */ - private String m_serviceURL; + /** + * The namespace of the {@link AbstractTripleStore} instance against which + * the test is running. A unique namespace is used for each test run, but + * the namespace is based on the test name. + */ + private String namespace; + + /** + * The effective {@link NanoSparqlServer} http end point. + */ + private String m_serviceURL; - /** - * The request path for the REST API under test. - */ - final private static String requestPath = "/sparql"; + /** + * The request path for the REST API under test. 
+ */ + final private static String requestPath = "/sparql"; - public TestNanoSparqlServer2() { - - } + public TestNanoSparqlServer2() { + + } - public TestNanoSparqlServer2(final String name) { + public TestNanoSparqlServer2(final String name) { - super(name); + super(name); - } + } - private AbstractTripleStore createTripleStore( - final IIndexManager indexManager, final String namespace, - final Properties properties) { + private AbstractTripleStore createTripleStore( + final IIndexManager indexManager, final String namespace, + final Properties properties) { - if(log.isInfoEnabled()) - log.info("KB namespace=" + namespace); + if(log.isInfoEnabled()) + log.info("KB namespace=" + namespace); - // Locate the resource declaration (aka "open"). This tells us if it - // exists already. - AbstractTripleStore tripleStore = (AbstractTripleStore) indexManager - .getResourceLocator().locate(namespace, ITx.UNISOLATED); + // Locate the resource declaration (aka "open"). This tells us if it + // exists already. + AbstractTripleStore tripleStore = (AbstractTripleStore) indexManager + .getResourceLocator().locate(namespace, ITx.UNISOLATED); - if (tripleStore != null) { + if (tripleStore != null) { - fail("exists: " + namespace); - - } + fail("exists: " + namespace); + + } - /* - * Create the KB instance. - */ + /* + * Create the KB instance. + */ - if (log.isInfoEnabled()) { - log.info("Creating KB instance: namespace="+namespace); - log.info("Properties=" + properties.toString()); - } + if (log.isInfoEnabled()) { + log.info("Creating KB instance: namespace="+namespace); + log.info("Properties=" + properties.toString()); + } - if (indexManager instanceof Journal) { + if (indexManager instanceof Journal) { - // Create the kb instance. - tripleStore = new LocalTripleStore(indexManager, namespace, - ITx.UNISOLATED, properties); + // Create the kb instance. + tripleStore = new LocalTripleStore(indexManager, namespace, + ITx.UNISOLATED, properties); - } else { + } else { - tripleStore = new ScaleOutTripleStore(indexManager, namespace, - ITx.UNISOLATED, properties); - } + tripleStore = new ScaleOutTripleStore(indexManager, namespace, + ITx.UNISOLATED, properties); + } // create the triple store. tripleStore.create(); if(log.isInfoEnabled()) - log.info("Created tripleStore: " + namespace); + log.info("Created tripleStore: " + namespace); // New KB instance was created. return tripleStore; } - private void dropTripleStore(final IIndexManager indexManager, - final String namespace) { + private void dropTripleStore(final IIndexManager indexManager, + final String namespace) { - if(log.isInfoEnabled()) - log.info("KB namespace=" + namespace); + if(log.isInfoEnabled()) + log.info("KB namespace=" + namespace); - // Locate the resource declaration (aka "open"). This tells us if it - // exists already. - final AbstractTripleStore tripleStore = (AbstractTripleStore) indexManager - .getResourceLocator().locate(namespace, ITx.UNISOLATED); + // Locate the resource declaration (aka "open"). This tells us if it + // exists already. 
+ final AbstractTripleStore tripleStore = (AbstractTripleStore) indexManager + .getResourceLocator().locate(namespace, ITx.UNISOLATED); - if (tripleStore != null) { + if (tripleStore != null) { - if (log.isInfoEnabled()) - log.info("Destroying: " + namespace); + if (log.isInfoEnabled()) + log.info("Destroying: " + namespace); - tripleStore.destroy(); - - } + tripleStore.destroy(); + + } - } - - TestMode testMode = null; - - @Override - public void setUp() throws Exception { - - super.setUp(); - - final Properties properties = getProperties(); + } + + TestMode testMode = null; + + @Override + public void setUp() throws Exception { + + super.setUp(); + + final Properties properties = getProperties(); - // guaranteed distinct namespace for the KB instance. - namespace = getName() + UUID.randomUUID(); + // guaranteed distinct namespace for the KB instance. + namespace = getName() + UUID.randomUUID(); - final IIndexManager m_indexManager = getIndexManager(); - - // Create the triple store instance. + final IIndexManager m_indexManager = getIndexManager(); + + // Create the triple store instance. final AbstractTripleStore tripleStore = createTripleStore(m_indexManager, namespace, properties); @@ -218,7 +221,7 @@ } else { testMode = TestMode.triples; } - + final Map<String, String> initParams = new LinkedHashMap<String, String>(); { @@ -228,14 +231,14 @@ } // Start server for that kb instance. - m_fixture = NanoSparqlServer.newInstance(0/* port */, m_indexManager, - initParams); + m_fixture = NanoSparqlServer.newInstance(0/* port */, m_indexManager, + initParams); m_fixture.start(); - final int port = m_fixture.getConnectors()[0].getLocalPort(); + final int port = m_fixture.getConnectors()[0].getLocalPort(); - // log.info("Getting host address"); + // log.info("Getting host address"); final String hostAddr = NicUtil.getIpAddress("default.nic", "default", true/* loopbackOk */); @@ -254,83 +257,83 @@ } @Override - public void tearDown() throws Exception { + public void tearDown() throws Exception { - log.info("tearing down test"); + log.info("tearing down test"); - if (m_fixture != null) { + if (m_fixture != null) { - m_fixture.stop(); + m_fixture.stop(); - m_fixture = null; + m_fixture = null; - } + } - final IIndexManager m_indexManager = getIndexManager(); - - if (m_indexManager != null && namespace != null) { + final IIndexManager m_indexManager = getIndexManager(); + + if (m_indexManager != null && namespace != null) { - dropTripleStore(m_indexManager, namespace); + dropTripleStore(m_indexManager, namespace); - } - -// m_indexManager = null; + } + +// m_indexManager = null; - namespace = null; - - m_serviceURL = null; + namespace = null; + + m_serviceURL = null; - log.info("tear down done"); - - super.tearDown(); + log.info("tear down done"); + + super.tearDown(); - } + } /** * Returns a view of the triple store using the sail interface. */ protected BigdataSail getSail() { - final AbstractTripleStore tripleStore = (AbstractTripleStore) getIndexManager() - .getResourceLocator().locate(namespace, ITx.UNISOLATED); + final AbstractTripleStore tripleStore = (AbstractTripleStore) getIndexManager() + .getResourceLocator().locate(namespace, ITx.UNISOLATED); return new BigdataSail(tripleStore); } - public void test_startup() throws Exception { + public void test_startup() throws Exception { - assertTrue("open", m_fixture.isRunning()); - - } - - /** - * Options for the query. - */ - private static class QueryOptions { + assertTrue("open", m_fixture.isRunning()); + + } + + /** + * Options for the query. 
+ */ + private static class QueryOptions { - /** The URL of the SPARQL end point. */ - public String serviceURL = null; - - /** The HTTP method (GET, POST, etc). */ - public String method = "GET"; + /** The URL of the SPARQL end point. */ + public String serviceURL = null; + + /** The HTTP method (GET, POST, etc). */ + public String method = "GET"; - /** + /** * The SPARQL query (this is a short hand for setting the * <code>query</code> URL query parameter). */ - public String queryStr = null; + public String queryStr = null; + + /** Request parameters to be formatted as URL query parameters. */ + public Map<String,String[]> requestParams; - /** Request parameters to be formatted as URL query parameters. */ - public Map<String,String[]> requestParams; - - /** The accept header. */ + /** The accept header. */ public String acceptHeader = // BigdataRDFServlet.MIME_SPARQL_RESULTS_XML + ";q=1" + // "," + // RDFFormat.RDFXML.getDefaultMIMEType() + ";q=1"// ; - + /** * The Content-Type (iff there will be a request body). */ @@ -341,10 +344,10 @@ */ public byte[] data = null; - /** The connection timeout (ms) -or- ZERO (0) for an infinite timeout. */ - public int timeout = 0; + /** The connection timeout (ms) -or- ZERO (0) for an infinite timeout. */ + public int timeout = 0; - } + } /** * Add any URL query parameters. @@ -374,8 +377,8 @@ // : ("&default-graph-uri=" + URLEncoder.encode( // opts.defaultGraphUri, "UTF-8"))); - } - + } + private HttpURLConnection doConnect(final String urlString, final String method) throws Exception { @@ -433,13 +436,13 @@ log.debug(opts.queryStr); } - HttpURLConnection conn = null; - try { + HttpURLConnection conn = null; + try { -// conn = doConnect(urlString.toString(), opts.method); - final URL url = new URL(urlString.toString()); - conn = (HttpURLConnection) url.openConnection(); - conn.setRequestMethod(opts.method); +// conn = doConnect(urlString.toString(), opts.method); + final URL url = new URL(urlString.toString()); + conn = (HttpURLConnection) url.openConnection(); + conn.setRequestMethod(opts.method); conn.setDoOutput(true); conn.setDoInput(true); conn.setUseCaches(false); @@ -466,27 +469,27 @@ } - // connect. - conn.connect(); + // connect. + conn.connect(); - return conn; + return conn; - } catch (Throwable t) { - /* - * If something goes wrong, then close the http connection. - * Otherwise, the connection will be closed by the caller. - */ - try { - // clean up the connection resources - if (conn != null) - conn.disconnect(); - } catch (Throwable t2) { - // ignored. - } - throw new RuntimeException(t); - } + } catch (Throwable t) { + /* + * If something goes wrong, then close the http connection. + * Otherwise, the connection will be closed by the caller. + */ + try { + // clean up the connection resources + if (conn != null) + conn.disconnect(); + } catch (Throwable t2) { + // ignored. + } + throw new RuntimeException(t); + } - } + } protected HttpURLConnection checkResponseCode(final HttpURLConnection conn) throws IOException { @@ -506,36 +509,36 @@ return conn; } - /** - * Builds a graph from an RDF result set (statements, not binding sets). - * - * @param conn - * The connection from which to read the results. - * - * @return The graph - * - * @throws Exception - * If anything goes wrong. - */ - protected Graph buildGraph(final HttpURLConnection conn) throws Exception { + /** + * Builds a graph from an RDF result set (statements, not binding sets). + * + * @param conn + * The connection from which to read the results. 
+ * + * @return The graph + * + * @throws Exception + * If anything goes wrong. + */ + protected Graph buildGraph(final HttpURLConnection conn) throws Exception { checkResponseCode(conn); - try { + try { - final String baseURI = ""; + final String baseURI = ""; - final String contentType = conn.getContentType(); + final String contentType = conn.getContentType(); if (contentType == null) fail("Not found: Content-Type"); - final RDFFormat format = RDFFormat.forMIMEType(contentType); + final RDFFormat format = RDFFormat.forMIMEType(contentType); if (format == null) fail("RDFFormat not found: Content-Type=" + contentType); - final RDFParserFactory factory = RDFParserRegistry.getInstance().get(format); + final RDFParserFactory factory = RDFParserRegistry.getInstance().get(format); if (factory == null) fail("RDFParserFactory not found: Content-Type=" + contentType @@ -543,30 +546,30 @@ final Graph g = new GraphImpl(); - final RDFParser rdfParser = factory.getParser(); - - rdfParser.setValueFactory(new ValueFactoryImpl()); + final RDFParser rdfParser = factory.getParser(); + + rdfParser.setValueFactory(new ValueFactoryImpl()); - rdfParser.setVerifyData(true); + rdfParser.setVerifyData(true); - rdfParser.setStopAtFirstError(true); + rdfParser.setStopAtFirstError(true); - rdfParser.setDatatypeHandling(RDFParser.DatatypeHandling.IGNORE); + rdfParser.setDatatypeHandling(RDFParser.DatatypeHandling.IGNORE); - rdfParser.setRDFHandler(new StatementCollector(g)); + rdfParser.setRDFHandler(new StatementCollector(g)); - rdfParser.parse(conn.getInputStream(), baseURI); + rdfParser.parse(conn.getInputStream(), baseURI); - return g; + return g; - } finally { + } finally { - // terminate the http connection. - conn.disconnect(); + // terminate the http connection. + conn.disconnect(); - } + } - } + } /** * Parse a SPARQL result set for an ASK query. @@ -616,18 +619,18 @@ } - /** - * Counts the #of results in a SPARQL result set. - * - * @param conn - * The connection from which to read the results. - * - * @return The #of results. - * - * @throws Exception - * If anything goes wrong. - */ - protected long countResults(final HttpURLConnection conn) throws Exception { + /** + * Counts the #of results in a SPARQL result set. + * + * @param conn + * The connection from which to read the results. + * + * @return The #of results. + * + * @throws Exception + * If anything goes wrong. + */ + protected long countResults(final HttpURLConnection conn) throws Exception { checkResponseCode(conn); @@ -647,44 +650,44 @@ if (factory == null) fail("No factory for Content-Type: " + contentType); - final TupleQueryResultParser parser = factory.getParser(); + final TupleQueryResultParser parser = factory.getParser(); - final AtomicLong nsolutions = new AtomicLong(); + final AtomicLong nsolutions = new AtomicLong(); - parser.setTupleQueryResultHandler(new TupleQueryResultHandlerBase() { - // Indicates the end of a sequence of solutions. - public void endQueryResult() { - // connection close is handled in finally{} - } + parser.setTupleQueryResultHandler(new TupleQueryResultHandlerBase() { + // Indicates the end of a sequence of solutions. + public void endQueryResult() { + // connection close is handled in finally{} + } - // Handles a solution. - public void handleSolution(final BindingSet bset) { - if (log.isDebugEnabled()) - log.debug(bset.toString()); - nsolutions.incrementAndGet(); - } + // Handles a solution. 
+ public void handleSolution(final BindingSet bset) { + if (log.isDebugEnabled()) + log.debug(bset.toString()); + nsolutions.incrementAndGet(); + } - // Indicates the start of a sequence of Solutions. - public void startQueryResult(List<String> bindingNames) { - } - }); + // Indicates the start of a sequence of Solutions. + public void startQueryResult(List<String> bindingNames) { + } + }); - parser.parse(conn.getInputStream()); + parser.parse(conn.getInputStream()); - if (log.isInfoEnabled()) - log.info("nsolutions=" + nsolutions); + if (log.isInfoEnabled()) + log.info("nsolutions=" + nsolutions); - // done. - return nsolutions.longValue(); + // done. + return nsolutions.longValue(); - } finally { + } finally { - // terminate the http connection. - conn.disconnect(); + // terminate the http connection. + conn.disconnect(); - } + } - } + } /** * Class representing the result of a mutation operation against the REST @@ -766,15 +769,15 @@ } /** - * Issue a "status" request against the service. - */ - public void test_STATUS() throws Exception { + * Issue a "status" request against the service. + */ + public void test_STATUS() throws Exception { final HttpURLConnection conn = doConnect(m_serviceURL + "/status", "GET"); - // connect. - conn.connect(); + // connect. + conn.connect(); try { @@ -797,31 +800,31 @@ } - } + } private String getStreamContents(final InputStream inputStream) throws IOException { final Reader rdr = new InputStreamReader(inputStream); - - final StringBuffer sb = new StringBuffer(); - - final char[] buf = new char[512]; - - while (true) { - - final int rdlen = rdr.read(buf); - - if (rdlen == -1) - break; - - sb.append(buf, 0, rdlen); - - } - - return sb.toString(); + + final StringBuffer sb = new StringBuffer(); + + final char[] buf = new char[512]; + + while (true) { + + final int rdlen = rdr.read(buf); + + if (rdlen == -1) + break; + + sb.append(buf, 0, rdlen); + + } + + return sb.toString(); - } + } /** * Generates some statements and serializes them using the specified @@ -922,26 +925,26 @@ * Select everything in the kb using a GET. There will be no solutions * (assuming that we are using a told triple kb or quads kb w/o axioms). */ - public void test_GET_SELECT_ALL() throws Exception { + public void test_GET_SELECT_ALL() throws Exception { - final String queryStr = "select * where {?s ?p ?o}"; + final String queryStr = "select * where {?s ?p ?o}"; - final QueryOptions opts = new QueryOptions(); - opts.serviceURL = m_serviceURL; - opts.queryStr = queryStr; - opts.method = "GET"; + final QueryOptions opts = new QueryOptions(); + opts.serviceURL = m_serviceURL; + opts.queryStr = queryStr; + opts.method = "GET"; - opts.acceptHeader = TupleQueryResultFormat.SPARQL.getDefaultMIMEType(); - assertEquals(0, countResults(doSparqlQuery(opts, requestPath))); + opts.acceptHeader = TupleQueryResultFormat.SPARQL.getDefaultMIMEType(); + assertEquals(0, countResults(doSparqlQuery(opts, requestPath))); - // TODO JSON parser is not bundled by openrdf. + // TODO JSON parser is not bundled by openrdf. // opts.acceptHeader = TupleQueryResultFormat.JSON.getDefaultMIMEType(); // assertEquals(0, countResults(doSparqlQuery(opts, requestPath))); opts.acceptHeader = TupleQueryResultFormat.BINARY.getDefaultMIMEType(); assertEquals(0, countResults(doSparqlQuery(opts, requestPath))); - } + } /** * Select everything in the kb using a POST. There will be no solutions @@ -1068,6 +1071,64 @@ } + // TODO Write test for UPDATE where we override the default context using + // the context-uri. 
+ public void test_POST_INSERT_triples_with_BODY_and_defaultContext() + throws Exception { + + if(TestMode.quads != testMode) + return; + + // Load the resource into the KB. + doInsertByBody("POST", requestPath, packagePath + + "insert_triples_with_defaultContext.ttl", new URIImpl( + "http://example.org")); + + // Verify that the data were inserted into the appropriate context. + { + final QueryOptions opts = new QueryOptions(); + opts.serviceURL = m_serviceURL; + opts.method = "GET"; + opts.queryStr = "select * { GRAPH <http://example.org> {?s ?p ?p} }"; + assertEquals(7, countResults(doSparqlQuery(opts, requestPath))); + } + + } + + public void test_POST_INSERT_triples_with_URI_and_defaultContext() throws Exception { + + if(TestMode.quads != testMode) + return; + + // Load the resource into the KB. + { + final QueryOptions opts = new QueryOptions(); + opts.serviceURL = m_serviceURL; + opts.method = "POST"; + opts.requestParams = new LinkedHashMap<String, String[]>(); + // set the resource to load. + opts.requestParams.put("uri", new String[] { new File(packagePath + + "insert_triples_with_defaultContext.ttl").toURI() + .toString() }); + // set the default context. + opts.requestParams.put("context-uri", + new String[] { "http://example.org" }); + assertEquals( + 7, + getMutationResult(doSparqlQuery(opts, requestPath)).mutationCount); + } + + // Verify that the data were inserted into the appropriate context. + { + final QueryOptions opts = new QueryOptions(); + opts.serviceURL = m_serviceURL; + opts.method = "GET"; + opts.queryStr = "select * { GRAPH <http://example.org> {?s ?p ?p} }"; + assertEquals(7, countResults(doSparqlQuery(opts, requestPath))); + } + + } + /** * Test ability to load data from a URI. */ @@ -1430,36 +1491,36 @@ } - private void doDeleteWithQuery(final String servlet, final String query) { - HttpURLConnection conn = null; - try { + private void doDeleteWithQuery(final String servlet, final String query) { + HttpURLConnection conn = null; + try { - final URL url = new URL(m_serviceURL + servlet + "?query=" - + URLEncoder.encode(query, "UTF-8")); - conn = (HttpURLConnection) url.openConnection(); - conn.setRequestMethod("DELETE"); - conn.setDoOutput(true); - conn.setDoInput(true); - conn.setUseCaches(false); - conn.setReadTimeout(0); + final URL url = new URL(m_serviceURL + servlet + "?query=" + + URLEncoder.encode(query, "UTF-8")); + conn = (HttpURLConnection) url.openConnection(); + conn.setRequestMethod("DELETE"); + conn.setDoOutput(true); + conn.setDoInput(true); + conn.setUseCaches(false); + conn.setReadTimeout(0); - conn.connect(); + conn.connect(); - if (log.isInfoEnabled()) - log.info(conn.getResponseMessage()); + if (log.isInfoEnabled()) + log.info(conn.getResponseMessage()); - final int rc = conn.getResponseCode(); - - if (rc < 200 || rc >= 300) { - throw new IOException(conn.getResponseMessage()); - } + final int rc = conn.getResponseCode(); + + if (rc < 200 || rc >= 300) { + throw new IOException(conn.getResponseMessage()); + } - } catch (Throwable t) { - // clean up the connection resources - if (conn != null) - conn.disconnect(); - throw new RuntimeException(t); - } + } catch (Throwable t) { + // clean up the connection resources + if (conn != null) + conn.disconnect(); + throw new RuntimeException(t); + } } private MutationResult doDeleteWithAccessPath(// @@ -1527,71 +1588,71 @@ final RDFFormat format) throws Exception { HttpURLConnection conn = null; - try { + try { final URL url = new URL(m_serviceURL + servlet + "?delete"); conn = (HttpURLConnection) 
url.openConnection(); conn.setRequestMethod("POST"); - conn.setDoOutput(true); - conn.setDoInput(true); - conn.setUseCaches(false); - conn.setReadTimeout(0); + conn.setDoOutput(true); + conn.setDoInput(true); + conn.setUseCaches(false); + conn.setReadTimeout(0); conn .setRequestProperty("Content-Type", format .getDefaultMIMEType()); final byte[] data = genNTRIPLES(ntriples, format); - + conn.setRequestProperty("Content-Length", "" + Integer.toString(data.length)); - final OutputStream os = conn.getOutputStream(); - try { - os.write(data); - os.flush(); - } finally { - os.close(); - } + final OutputStream os = conn.getOutputStream(); + try { + os.write(data); + os.flush(); + } finally { + os.close(); + } - if (log.isInfoEnabled()) - log.info(conn.getResponseMessage()); + if (log.isInfoEnabled()) + log.info(conn.getResponseMessage()); - final int rc = conn.getResponseCode(); - - if (rc < 200 || rc >= 300) { - throw new IOException(conn.getResponseMessage()); - } + final int rc = conn.getResponseCode(); + + if (rc < 200 || rc >= 300) { + throw new IOException(conn.getResponseMessage()); + } - } catch (Throwable t) { - // clean up the connection resources - if (conn != null) - conn.disconnect(); - throw new RuntimeException(t); - } + } catch (Throwable t) { + // clean up the connection resources + if (conn != null) + conn.disconnect(); + throw new RuntimeException(t); + } // Verify the mutation count. assertEquals(ntriples, getMutationResult(conn).mutationCount); } - /** - * Test of POST w/ BODY having data to be loaded. - */ + /** + * Test of POST w/ BODY having data to be loaded. + */ private void doInsertWithBodyTest(final String method, final int ntriples, final String servlet, final RDFFormat format) throws Exception { - HttpURLConnection conn = null; - try { + HttpURLConnection conn = null; + try { - final URL url = new URL(m_serviceURL + servlet); - conn = (HttpURLConnection) url.openConnection(); - conn.setRequestMethod(method); - conn.setDoOutput(true); - conn.setDoInput(true); - conn.setUseCaches(false); - conn.setReadTimeout(0); - + final URL url = new URL(m_serviceURL + servlet); + conn = (HttpURLConnection) url.openConnection(); + conn.setRequestMethod(method); + conn.setDoOutput(true); + conn.setDoInput(true); + conn.setUseCaches(false); + conn.setReadTimeout(0); + conn.setRequestProperty("Content-Type", format .getDefaultMIMEType()); @@ -1600,49 +1661,49 @@ conn.setRequestProperty("Content-Length", Integer.toString(data .length)); - final OutputStream os = conn.getOutputStream(); - try { - os.write(data); - os.flush(); - } finally { - os.close(); - } - // conn.connect(); + final OutputStream os = conn.getOutputStream(); + try { + os.write(data); + os.flush(); + } finally { + os.close(); + } + // conn.connect(); - final int rc = conn.getResponseCode(); + final int rc = conn.getResponseCode(); if (log.isInfoEnabled()) { log.info("*** RESPONSE: " + rc + " for " + method); // log.info("*** RESPONSE: " + getResponseBody(conn)); } - if (rc < 200 || rc >= 300) { + if (rc < 200 || rc >= 300) { - throw new IOException(conn.getResponseMessage()); - - } + throw new IOException(conn.getResponseMessage()); + + } - } catch (Throwable t) { - // clean up the connection resources - if (conn != null) - conn.disconnect(); - throw new RuntimeException(t); - } + } catch (Throwable t) { + // clean up the connection resources + if (conn != null) + conn.disconnect(); + throw new RuntimeException(t); + } // Verify the mutation count. 
assertEquals(ntriples, getMutationResult(conn).mutationCount); - - // Verify the expected #of statements in the store. - { - final String queryStr = "select * where {?s ?p ?o}"; + + // Verify the expected #of statements in the store. + { + final String queryStr = "select * where {?s ?p ?o}"; - final QueryOptions opts = new QueryOptions(); - opts.serviceURL = m_serviceURL; - opts.queryStr = queryStr; - opts.method = "GET"; + final QueryOptions opts = new QueryOptions(); + opts.serviceURL = m_serviceURL; + opts.queryStr = queryStr; + opts.method = "GET"; - assertEquals(ntriples, countResults(doSparqlQuery(opts, requestPath))); - } + assertEquals(ntriples, countResults(doSparqlQuery(opts, requestPath))); + } } @@ -1650,7 +1711,7 @@ * Insert a resource into the {@link NanoSparqlServer}. This is used to * load resources in the test package into the server. */ - private void doInsertbyURL(final String method, final String servlet, + private MutationResult doInsertbyURL(final String method, final String servlet, final String resource) throws Exception { final String uri = new File(resource).toURI().toString(); @@ -1665,8 +1726,7 @@ opts.requestParams = new LinkedHashMap<String, String[]>(); opts.requestParams.put("uri", new String[] { uri }); - final MutationResult result = getMutationResult(doSparqlQuery(opts, - requestPath)); + return getMutationResult(doSparqlQuery(opts, requestPath)); } catch (Throwable t) { // clean up the connection resources @@ -1677,6 +1737,204 @@ } + /** + * Read the contents of a file. + * + * @param file + * The file. + * @return It's contents. + */ + private static String readFromFile(final File file) throws IOException { + + final LineNumberReader r = new LineNumberReader(new FileReader(file)); + + try { + + final StringBuilder sb = new StringBuilder(); + + String s; + while ((s = r.readLine()) != null) { + + if (r.getLineNumber() > 1) + sb.append("\n"); + + sb.append(s); + + } + + return sb.toString(); + + } finally { + + r.close(); + + } + + } + + private static Graph readGraphFromFile(final File file) throws RDFParseException, RDFHandlerException, IOException { + + final RDFFormat format = RDFFormat.forFileName(file.getName()); + + final RDFParserFactory rdfParserFactory = RDFParserRegistry + .getInstance().get(format); + + if (rdfParserFactory == null) { + throw new RuntimeException("Parser not found: file=" + file + + ", format=" + format); + } + + final RDFParser rdfParser = rdfParserFactory + .getParser(); + + rdfParser.setValueFactory(new ValueFactoryImpl()); + + rdfParser.setVerifyData(true); + + rdfParser.setStopAtFirstError(true); + + rdfParser + .setDatatypeHandling(RDFParser.DatatypeHandling.IGNORE); + + final StatementCollector rdfHandler = new StatementCollector(); + + rdfParser.setRDFHandler(rdfHandler); + + /* + * Run the parser, which will cause statements to be + * inserted. + */ + + final FileReader r = new FileReader(file); + try { + rdfParser.parse(r, file.toURI().toString()/* baseURL */); + } finally { + r.close(); + } + + final Graph g = new GraphImpl(); + + g.addAll(rdfHandler.getStatements()); + + return g; + + } + + /** + * Write a graph on a buffer suitable for sending as an HTTP request body. + * + * @param format + * The RDF Format to use. + * @param g + * The graph. + * + * @return The serialized data. 
+ * + * @throws RDFHandlerException + */ + static private byte[] writeOnBuffer(final RDFFormat format, final Graph g) + throws RDFHandlerException { + + final RDFWriterFactory writerFactory = RDFWriterRegistry.getInstance() + .get(format); + + if (writerFactory == null) + fail("RDFWriterFactory not found: format=" + format); + + final ByteArrayOutputStream baos = new ByteArrayOutputStream(); + + final RDFWriter writer = writerFactory.getWriter(baos); + + writer.startRDF(); + + for (Statement stmt : g) { + + writer.handleStatement(stmt); + + } + + writer.endRDF(); + + return baos.toByteArray(); + + } + + /** + * Reads a resource and sends it using an INSERT with BODY request to be + * loaded into the database. + * + * @param method + * @param servlet + * @param resource + * @return + * @throws Exception + */ + private MutationResult doInsertByBody(final String method, + final String servlet, final String resource, + final URI defaultContext) throws Exception { + + final RDFFormat rdfFormat = RDFFormat.forFileName(resource); + + final Graph g = readGraphFromFile(new File(resource)); + + final byte[] wireData = writeOnBuffer(rdfFormat, g); + + HttpURLConnection conn = null; + try { + + final URL url = new URL(m_serviceURL + + servlet + + (defaultContext == null ? "" + : ("?context-uri=" + URLEncoder.encode( + defaultContext.stringValue(), "UTF-8")))); + conn = (HttpURLConnection) url.openConnection(); + conn.setRequestMethod(method); + conn.setDoOutput(true); + conn.setDoInput(true); + conn.setUseCaches(false); + conn.setReadTimeout(0); + + conn.setRequestProperty("Content-Type", + rdfFormat.getDefaultMIMEType()); + + final byte[] data = wireData; + + conn.setRequestProperty("Content-Length", + Integer.toString(data.length)); + + final OutputStream os = conn.getOutputStream(); + try { + os.write(data); + os.flush(); + } finally { + os.close(); + } + // conn.connect(); + + final int rc = conn.getResponseCode(); + + if (log.isInfoEnabled()) { + log.info("*** RESPONSE: " + rc + " for " + method); + // log.info("*** RESPONSE: " + getResponseBody(conn)); + } + + if (rc < 200 || rc >= 300) { + + throw new IOException(conn.getResponseMessage()); + + } + + return getMutationResult(conn); + + } catch (Throwable t) { + // clean up the connection resources + if (conn != null) + conn.disconnect(); + throw new RuntimeException(t); + } + + } + private static String getResponseBody(final HttpURLConnection conn) throws IOException { Added: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/insert_triples_with_defaultContext.ttl =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/insert_triples_with_defaultContext.ttl (rev 0) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/insert_triples_with_defaultContext.ttl 2011-10-13 18:54:23 UTC (rev 5333) @@ -0,0 +1,12 @@ +@prefix : <http://www.bigdata.com/> . +@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . +@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> . +@prefix foaf: <http://xmlns.com/foaf/0.1/> . + +:Mike rdf:type foaf:Person . +:Bryan rdf:type foaf:Person . +:Martyn rdf:type foaf:Person . +:Mike rdfs:label "Mike" . +:Bryan rdfs:label "Bryan" . +:Mike foaf:knows :Bryan . +:Bryan foaf:knows :Martyn . 
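For anyone wanting to drive the new parameter by hand rather than through the test harness: with a NanoSparqlServer instance running, the fixture above can be loaded over plain java.net exactly the way doInsertByBody does it. In the sketch below the host, port and on-disk file location are assumptions, while the /sparql request path, the context-uri parameter and the connection plumbing mirror the committed test code; text/turtle is assumed to be a MIME type the server's RDFParserRegistry resolves.

import java.io.ByteArrayOutputStream;
import java.io.FileInputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class InsertWithDefaultContextExample {

    public static void main(final String[] args) throws Exception {

        // Assumption: a NanoSparqlServer instance listening at this end point.
        final String serviceURL = "http://localhost:8080/sparql";

        // Read the Turtle fixture shown above into memory.
        final ByteArrayOutputStream baos = new ByteArrayOutputStream();
        final InputStream is = new FileInputStream(
                "insert_triples_with_defaultContext.ttl");
        try {
            final byte[] buf = new byte[4096];
            int rdlen;
            while ((rdlen = is.read(buf)) != -1)
                baos.write(buf, 0, rdlen);
        } finally {
            is.close();
        }
        final byte[] data = baos.toByteArray();

        // POST the body, imposing http://example.org as the default context.
        final URL url = new URL(serviceURL + "?context-uri="
                + URLEncoder.encode("http://example.org", "UTF-8"));
        final HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try {
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setUseCaches(false);
            conn.setRequestProperty("Content-Type", "text/turtle");
            conn.setRequestProperty("Content-Length",
                    Integer.toString(data.length));
            final OutputStream os = conn.getOutputStream();
            try {
                os.write(data);
                os.flush();
            } finally {
                os.close();
            }
            final int rc = conn.getResponseCode();
            if (rc < 200 || rc >= 300)
                throw new RuntimeException(conn.getResponseMessage());
            // The fixture has 7 statements, so expect a mutationCount of 7.
            System.out.println("HTTP " + rc);
        } finally {
            conn.disconnect();
        }

    }

}

Per the AddStatementHandler change above, the parameter is silently ignored against a triples mode store, and even in quads mode it only applies to statements whose parsed context is null.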
Modified: branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/InsertServlet.java =================================================================== --- branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/InsertServlet.java 2011-10-13 17:11:35 UTC (rev 5332) +++ branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/InsertServlet.java 2011-10-13 18:54:23 UTC (rev 5333) @@ -13,6 +13,7 @@ import org.apache.log4j.Logger; import org.openrdf.model.Resource; import org.openrdf.model.Statement; +import org.openrdf.model.impl.URIImpl; import org.openrdf.rio.RDFFormat; import org.openrdf.rio.RDFHandlerException; import org.openrdf.rio.RDFParser; @@ -21,8 +22,8 @@ import org.openrdf.rio.helpers.RDFHandlerBase; import org.openrdf.sail.SailException; +import com.bigdata.rdf.sail.BigdataSail.BigdataSailConnection; import com.bigdata.rdf.sail.BigdataSailRepositoryConnection; -import com.bigdata.rdf.sail.BigdataSail.BigdataSailConnection; /** * Handler for INSERT operations. @@ -143,6 +144,25 @@ } + /* + * Allow the caller to specify the default context. + */ + final Resource defaultContext; + { + final String s = req.getParameter("context-uri"); + if (s != null) { + try { + defaultContext = new URIImpl(s); + } catch (IllegalArgumentException ex) { + buildResponse(resp, HTTP_INTERNALERROR, MIME_TEXT_PLAIN, + ex.getLocalizedMessage()); + return; + } + } else { + defaultContext = null; + } + } + try { final AtomicLong nmodified = new AtomicLong(0L); @@ -170,7 +190,7 @@ .setDatatypeHandling(RDFParser.DatatypeHandling.IGNORE); rdfParser.setRDFHandler(new AddStatementHandler(conn - .getSailConnection(), nmodified)); + .getSailConnection(), nmodified, defaultContext)); /* * Run the parser, which will cause statements to be inserted. @@ -254,6 +274,25 @@ } + /* + * Allow the caller to specify the default context. + */ + final Resource defaultContext; + { + final String s = req.getParameter("context-uri"); + if (s != null) { + try { + defaultContext = new URIImpl(s); + } catch (IllegalArgumentException ex) { + buildResponse(resp, HTTP_INTERNALERROR, MIME_TEXT_PLAIN, + ex.getLocalizedMessage()); + return; + } + } else { + defaultContext = null; + } + } + try { final AtomicLong nmodified = new AtomicLong(0L); @@ -266,6 +305,11 @@ for (URL url : urls) { + // Use the default context if one was given and otherwise + // the URI from which the data are being read. + final Resource defactoContext = defaultContext == null ? new URIImpl( + url.toExternalForm()) : defaultContext; + URLConnection hconn = null; try { @@ -325,7 +369,7 @@ .setDatatypeHandling(RDFParser.DatatypeHandling.IGNORE); rdfParser.setRDFHandler(new AddStatementHandler(conn - .getSailConnection(), nmodified)); + .getSailConnection(), nmodified, defactoContext)); /* * Run the parser, which will cause statements to be @@ -391,11 +435,19 @@ private final BigdataSailConnection conn; private final AtomicLong nmodified; - + private final Resource[] defaultContexts; + public AddStatementHandler(final BigdataSailConnection conn, - final AtomicLong nmodified) { + final AtomicLong nmodified, final Resource defaultContext) { this.conn = conn; this.nmodified = nmodified; + final boolean quads = conn.getTripleStore().isQuads(); + if (quads && defaultContext != null) { + // The default context may only be specified for quads. 
+ this.defaultContexts = new Resource[] { defaultContext }; + } else { + this.defaultContexts = new Resource[0]; + } } public void handleStatement(final Statement stmt) @@ -407,7 +459,7 @@ stmt.getSubject(), // stmt.getPredicate(), // stmt.getObject(), // - (Resource[]) (stmt.getContext() == null ? new Resource[] { } + (Resource[]) (stmt.getContext() == null ? defaultContexts : new Resource[] { stmt.getContext() })// ); Modified: branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/UpdateServlet.java =================================================================== --- branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/UpdateServlet.java 2011-10-13 17:11:35 UTC (rev 5332) +++ branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/UpdateServlet.java 2011-10-13 18:54:23 UTC (rev 5333) @@ -10,6 +10,8 @@ import javax.servlet.http.HttpServletResponse; import org.apache.log4j.Logger; +import org.openrdf.model.Resource; +import org.openrdf.model.impl.URIImpl; import org.openrdf.rio.RDFFormat; import org.openrdf.rio.RDFParser; import org.openrdf.rio.RDFParserFactory; @@ -117,6 +119,25 @@ } + /* + * Allow the caller to specify the default context. + */ + final Resource defaultContext; + { + final String s = req.getParameter("context-uri"); + if (s != null) { + try { + defaultContext = new URIImpl(s); + } catch (IllegalArgumentException ex) { + buildResponse(resp, HTTP_INTERNALERROR, MIME_TEXT_PLAIN, + ex.getLocalizedMessage()); + return; + } + } else { + defaultContext = null; + } + } + if (log.isInfoEnabled()) log.info("update with query: " + queryStr); @@ -213,7 +234,7 @@ .setDatatypeHandling(RDFParser.DatatypeHandling.IGNORE); rdfParser.setRDFHandler(new AddStatementHandler(conn - .getSailConnection(), nmodified)); + .getSailConnection(), nmodified, defaultContext)); /* * Run the parser, which will cause statements to be Modified: branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestNanoSparqlServer2.java =================================================================== --- branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestNanoSparqlServer2.java 2011-10-13 17:11:35 UTC (rev 5332) +++ branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestNanoSparqlServer2.java 2011-10-13 18:54:23 UTC (rev 5333) @@ -2,9 +2,11 @@ import java.io.ByteArrayOutputStream; import java.io.File; +import java.io.FileReader; import java.io.IOException; import java.io.InputStream; import java.io.InputStreamReader; +import java.io.LineNumberReader; import java.io.OutputStream; import java.io.Reader; import java.io.StringWriter; @@ -51,6 +53,7 @@ import org.openrdf.repository.RepositoryException; import org.openrdf.rio.RDFFormat; import org.openrdf.rio.RDFHandlerException; +import org.openrdf.rio.RDFParseException; import org.openrdf.rio.RDFParser; import org.openrdf.rio.RDFParserFactory; import org.openrdf.rio.RDFParserRegistry; @@ -1068,6 +1071,64 @@ } + // TODO Write test for UPDATE where we override the default context using + // the context-uri. + public void test_POST_INSERT_triples_with_BODY_and_defaultContext() + throws Exception { + + if(TestMode.quads != testMode) + return; + + // Load the resource into the KB. + doInsertByBody("POST", requestPath, packagePath + + "insert_triples_with_defaultContext.ttl", new URIImpl( + "http://example.org")); + + // Verify that the data were inserted into the appropriate context. 
+        {
+            final QueryOptions opts = new QueryOptions();
+            opts.serviceURL = m_serviceURL;
+            opts.method = "GET";
+            opts.queryStr = "select * { GRAPH <http://example.org> {?s ?p ?o} }";
+            assertEquals(7, countResults(doSparqlQuery(opts, requestPath)));
+        }
+
+    }
+
+    public void test_POST_INSERT_triples_with_URI_and_defaultContext() throws Exception {
+
+        if(TestMode.quads != testMode)
+            return;
+
+        // Load the resource into the KB.
+        {
+            final QueryOptions opts = new QueryOptions();
+            opts.serviceURL = m_serviceURL;
+            opts.method = "POST";
+            opts.requestParams = new LinkedHashMap<String, String[]>();
+            // set the resource to load.
+            opts.requestParams.put("uri", new String[] { new File(packagePath
+                    + "insert_triples_with_defaultContext.ttl").toURI()
+                    .toString() });
+            // set the default context.
+            opts.requestParams.put("context-uri",
+                    new String[] { "http://example.org" });
+            assertEquals(
+                    7,
+                    getMutationResult(doSparqlQuery(opts, requestPath)).mutationCount);
+        }
+
+        // Verify that the data were inserted into the appropriate context.
+        {
+            final QueryOptions opts = new QueryOptions();
+            opts.serviceURL = m_serviceURL;
+            opts.method = "GET";
+            opts.queryStr = "select * { GRAPH <http://example.org> {?s ?p ?o} }";
+            assertEquals(7, countResults(doSparqlQuery(opts, requestPath)));
+        }

... [truncated message content] |
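A minimal, self-contained sketch of driving the new context-uri parameter from a plain Java client follows. The uri and context-uri request parameters are exactly those added by this revision; the service URL, the /sparql path, and the demo class name are illustrative assumptions:

import java.io.File;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class InsertWithDefaultContextDemo {

    public static void main(final String[] args) throws Exception {

        // Assumption: a NanoSparqlServer instance is listening here.
        final String serviceURL = "http://localhost:8080/sparql";

        // Pass the document by reference using the "uri" request parameter.
        final String dataURI = new File(
                "insert_triples_with_defaultContext.ttl").toURI().toString();

        // Statements with no explicit context land in this graph (quads mode).
        final URL url = new URL(serviceURL
                + "?uri=" + URLEncoder.encode(dataURI, "UTF-8")
                + "&context-uri=" + URLEncoder.encode("http://example.org", "UTF-8"));

        final HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try {
            conn.setRequestMethod("POST");
            System.out.println("HTTP " + conn.getResponseCode());
            // The response body carries the mutation count document.
            final InputStream is = conn.getInputStream();
            try {
                int b;
                while ((b = is.read()) != -1)
                    System.out.write(b);
                System.out.flush();
            } finally {
                is.close();
            }
        } finally {
            conn.disconnect();
        }

    }

}

Note the semantics visible in AddStatementHandler above: the default context is applied only when the store is in quads mode and only to statements whose parsed context is null; statements that carry their own context keep it.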
From: <tho...@us...> - 2011-10-15 13:20:33
Revision: 5341 http://bigdata.svn.sourceforge.net/bigdata/?rev=5341&view=rev Author: thompsonbry Date: 2011-10-15 13:20:26 +0000 (Sat, 15 Oct 2011) Log Message: ----------- com.bigdata.util.config.LogUtil was modified per [1] to also search for the log4j configuration files along the class path. Silent deployment has been verified against both the 1.0.x maintenance branch and the TERMS_REFACTOR_BRANCH. [1] https://sourceforge.net/apps/trac/bigdata/ticket/394 Modified Paths: -------------- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/util/config/LogUtil.java branches/TERMS_REFACTOR_BRANCH/bigdata/src/java/com/bigdata/util/config/LogUtil.java Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/util/config/LogUtil.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/util/config/LogUtil.java 2011-10-14 20:35:42 UTC (rev 5340) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/util/config/LogUtil.java 2011-10-15 13:20:26 UTC (rev 5341) @@ -1,124 +1,189 @@ -/* - -Copyright (C) SYSTAP, LLC 2006-2008. All rights reserved. - -Contact: - SYSTAP, LLC - 4501 Tower Road - Greensboro, NC 27410 - lic...@bi... - -This program is free software; you can redistribute it and/or modify -it under the terms of the GNU General Public License as published by -the Free Software Foundation; version 2 of the License. - -This program is distributed in the hope that it will be useful, -but WITHOUT ANY WARRANTY; without even the implied warranty of -MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -GNU General Public License for more details. - -You should have received a copy of the GNU General Public License -along with this program; if not, write to the Free Software -Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA - -*/ - -package com.bigdata.util.config; - -import org.apache.log4j.Logger; -import org.apache.log4j.PropertyConfigurator; -import org.apache.log4j.xml.DOMConfigurator; - -/** - * Utility class that provides a set of static convenience methods related to - * the initialization and configuration of the logging mechanism(s) employed by - * the components of the system. The methods of this class can be useful both in - * Jini configuration files, as well as in the system components themselves. - * <p> - * This class relies on the presence of either the - * <code>log4j.configuration</code> or the - * <code>log4j.primary.configuration</code> property and understands files with - * any of the following extensions {<code>.properties</code>, - * <code>.logging</code>, <code>.xml</code> . It will log a message on - * <em>stderr</em> if neither of those properties is defined. The class - * deliberately does not search the CLASSPATH for a log4j configuration in an - * effort to discourage the inadvertent use of hidden configuration files when - * deploying bigdata. - * <p> - * A watcher is setup on the log4j configuration if one is found. - * <p> - * This class cannot be instantiated. - */ -public class LogUtil { - - /** - * Examine the various log4j configuration properties and return the name of - * the log4j configuration resource if one was configured. - * - * @return The log4j configuration resource -or- <code>null</code> if the - * resource was not configured properly. 
-     */
-    static String getConfigPropertyValue() {
-
-        final String log4jConfig = System
-                .getProperty("log4j.primary.configuration");
-
-        if (log4jConfig != null)
-            return log4jConfig;
-
-        final String log4jDefaultConfig = System
-                .getProperty("log4j.configuration");
-
-        if (log4jDefaultConfig != null)
-            return log4jDefaultConfig;
-
-        return null;
-
-    }
-
-    // Static initialization block that retrieves and initializes
-    // the log4j logger configuration for the given VM in which this
-    // class resides. Note that this block is executed only once
-    // during the life of the associated VM.
-    static {
-
-        final String log4jConfig = getConfigPropertyValue();
-
-        if( log4jConfig != null && (log4jConfig.endsWith(".properties") ||
-                                    log4jConfig.endsWith(".logging"))) {
-
-            PropertyConfigurator.configureAndWatch(log4jConfig);
-
-        } else if ( log4jConfig != null && log4jConfig.endsWith(".xml") ) {
-
-            DOMConfigurator.configureAndWatch(log4jConfig);
-
-        } else {
-
-            System.err.println("ERROR: " + LogUtil.class.getName()
-                    + " : Could not initialize Log4J logging utility.\n"
-                    + "Set system property "
-                    +"'-Dlog4j.configuration="
-                    +"file:bigdata/src/resources/logging/log4j.properties'"
-                    +"\n and / or \n"
-                    +"Set system property "
-                    +"'-Dlog4j.primary.configuration="
-                    +"file:<installDir>/"
-                    +"bigdata/src/resources/logging/log4j.properties'");
-
-        }
-
-    }
-
-    public static Logger getLog4jLogger(String componentName) {
-        return Logger.getLogger(componentName);
-    }
-
-    public static Logger getLog4jLogger(Class componentClass) {
-        return Logger.getLogger(componentClass);
-    }
-
-    public static Logger getLog4jRootLogger() {
-        return Logger.getRootLogger();
-    }
-}
\ No newline at end of file
+/*
+
+Copyright (C) SYSTAP, LLC 2006-2008. All rights reserved.
+
+Contact:
+    SYSTAP, LLC
+    4501 Tower Road
+    Greensboro, NC 27410
+    lic...@bi...
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; version 2 of the License.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+*/
+
+package com.bigdata.util.config;
+
+import java.net.URL;
+
+import org.apache.log4j.Logger;
+import org.apache.log4j.PropertyConfigurator;
+import org.apache.log4j.xml.DOMConfigurator;
+
+/**
+ * Utility class that provides a set of static convenience methods related to
+ * the initialization and configuration of the logging mechanism(s) employed by
+ * the components of the system. The methods of this class can be useful both in
+ * Jini configuration files, as well as in the system components themselves.
+ * <p>
+ * This class relies on the presence of either the
+ * <code>log4j.configuration</code> or the
+ * <code>log4j.primary.configuration</code> property.
+ * <p>
+ * If neither of those properties is found, then this class searches the
+ * CLASSPATH for a log4j configuration. While this is a change from the
+ * historical behavior, searching the CLASSPATH is necessary for webapp deployments.
+ * <p> + * This class understands files with any of the following extensions { + * <code>.properties</code>, <code>.logging</code>, <code>.xml</code> . If + * neither configuration property is defined, if the resource identified by the + * property can not be located, or if a log4j configuration resource can not be + * located in the default location along the class path then this class will + * will a message on <em>stderr</em>. + * <p> + * A watcher is setup on the log4j configuration if one is found. + * <p> + * This class cannot be instantiated. + * + * @see https://sourceforge.net/apps/trac/bigdata/ticket/394 + */ +public class LogUtil { + + /** + * Examine the various log4j configuration properties and return the name of + * the log4j configuration resource if one was configured. + * + * @return The log4j configuration resource -or- <code>null</code> if the + * resource was not configured properly. + */ + static String getConfigPropertyValue() { + + final String log4jConfig = System + .getProperty("log4j.primary.configuration"); + + if (log4jConfig != null) + return log4jConfig; + + final String log4jDefaultConfig = System + .getProperty("log4j.configuration"); + + if (log4jDefaultConfig != null) + return log4jDefaultConfig; + + return null; + + } + + /** + * Attempt to resolve the resources with the following names in the given + * order and return the {@link URL} of the first such resource which is + * found and <code>null</code> if none of the resources are found: + * <ol> + * <li>log4j.properties</li> + * <li>log4j.logging</li> + * <li>log4j.xml</li> + * </ol> + * + * @return The {@link URL} of the first such resource which was found. + */ + static URL getConfigPropertyValueUrl() { + + URL url = LogUtil.class.getResource("/log4j.properties"); + + if (url == null) + url = LogUtil.class.getResource("/log4j.logging"); + + if (url == null) + url = LogUtil.class.getResource("/log4j.xml"); + + return url; + + } + + // Static initialization block that retrieves and initializes + // the log4j logger configuration for the given VM in which this + // class resides. Note that this block is executed only once + // during the life of the associated VM. + static { + + /* + * First, attempt to resolve the configuration property. + */ + final String log4jConfig = getConfigPropertyValue(); + + if( log4jConfig != null && (log4jConfig.endsWith(".properties") || + log4jConfig.endsWith(".logging"))) { + + PropertyConfigurator.configureAndWatch(log4jConfig); + + } else if ( log4jConfig != null && log4jConfig.endsWith(".xml") ) { + + DOMConfigurator.configureAndWatch(log4jConfig); + + } else { + + /* + * Then attempt to resolve the resource to a URL. + */ + + final URL log4jUrl = getConfigPropertyValueUrl(); + + if (log4jUrl != null &&// + (log4jUrl.getFile().endsWith(".properties") || // + log4jUrl.getFile().endsWith(".logging")// + )) { + + PropertyConfigurator.configure(log4jUrl); + + } else if (log4jUrl != null && log4jUrl.getFile().endsWith(".xml")) { + + DOMConfigurator.configure(log4jUrl); + + } else { + + /* + * log4j was not explicitly configured and the log4j resource + * could not be located on the CLASSPATH. 
+                 */
+
+                System.err.println("ERROR: " + LogUtil.class.getName()
+                        + " : Could not initialize Log4J logging utility.\n"
+                        + "Set system property "
+                        +"'-Dlog4j.configuration="
+                        +"file:bigdata/src/resources/logging/log4j.properties'"
+                        +"\n and / or \n"
+                        +"Set system property "
+                        +"'-Dlog4j.primary.configuration="
+                        +"file:<installDir>/"
+                        +"bigdata/src/resources/logging/log4j.properties'");
+
+            }
+
+        }
+
+    }
+
+    public static Logger getLog4jLogger(String componentName) {
+        return Logger.getLogger(componentName);
+    }
+
+    public static Logger getLog4jLogger(Class componentClass) {
+        return Logger.getLogger(componentClass);
+    }
+
+    public static Logger getLog4jRootLogger() {
+        return Logger.getRootLogger();
+    }
+
+}

Modified: branches/TERMS_REFACTOR_BRANCH/bigdata/src/java/com/bigdata/util/config/LogUtil.java
===================================================================
--- branches/TERMS_REFACTOR_BRANCH/bigdata/src/java/com/bigdata/util/config/LogUtil.java	2011-10-14 20:35:42 UTC (rev 5340)
+++ branches/TERMS_REFACTOR_BRANCH/bigdata/src/java/com/bigdata/util/config/LogUtil.java	2011-10-15 13:20:26 UTC (rev 5341)
@@ -25,6 +25,8 @@
 
 package com.bigdata.util.config;
 
+import java.net.URL;
+
 import org.apache.log4j.Logger;
 import org.apache.log4j.PropertyConfigurator;
 import org.apache.log4j.xml.DOMConfigurator;
@@ -37,17 +39,24 @@
  * <p>
  * This class relies on the presence of either the
  * <code>log4j.configuration</code> or the
- * <code>log4j.primary.configuration</code> property and understands files with
- * any of the following extensions {<code>.properties</code>,
- * <code>.logging</code>, <code>.xml</code> . It will log a message on
- * <em>stderr</em> if neither of those properties is defined. The class
- * deliberately does not search the CLASSPATH for a log4j configuration in an
- * effort to discourage the inadvertent use of hidden configuration files when
- * deploying bigdata.
+ * <code>log4j.primary.configuration</code> property.
 * <p>
+ * If neither of those properties is found, then this class searches the
+ * CLASSPATH for a log4j configuration. While this is a change from the
+ * historical behavior, searching the CLASSPATH is necessary for webapp deployments.
+ * <p>
+ * This class understands files with any of the following extensions {
+ * <code>.properties</code>, <code>.logging</code>, <code>.xml</code> }. If
+ * neither configuration property is defined, if the resource identified by the
+ * property can not be located, or if a log4j configuration resource can not be
+ * located in the default location along the class path, then this class will
+ * log a message on <em>stderr</em>.
+ * <p>
 * A watcher is setup on the log4j configuration if one is found.
 * <p>
 * This class cannot be instantiated.
+ *
+ * @see https://sourceforge.net/apps/trac/bigdata/ticket/394
 */
 public class LogUtil {
 
@@ -76,12 +85,41 @@
 
     }
 
+    /**
+     * Attempt to resolve the resources with the following names in the given
+     * order and return the {@link URL} of the first such resource which is
+     * found and <code>null</code> if none of the resources are found:
+     * <ol>
+     * <li>log4j.properties</li>
+     * <li>log4j.logging</li>
+     * <li>log4j.xml</li>
+     * </ol>
+     *
+     * @return The {@link URL} of the first such resource which was found.
+ */ + static URL getConfigPropertyValueUrl() { + + URL url = LogUtil.class.getResource("/log4j.properties"); + + if (url == null) + url = LogUtil.class.getResource("/log4j.logging"); + + if (url == null) + url = LogUtil.class.getResource("/log4j.xml"); + + return url; + + } + // Static initialization block that retrieves and initializes // the log4j logger configuration for the given VM in which this // class resides. Note that this block is executed only once // during the life of the associated VM. static { + /* + * First, attempt to resolve the configuration property. + */ final String log4jConfig = getConfigPropertyValue(); if( log4jConfig != null && (log4jConfig.endsWith(".properties") || @@ -94,8 +132,32 @@ DOMConfigurator.configureAndWatch(log4jConfig); } else { - - System.err.println("ERROR: " + LogUtil.class.getName() + + /* + * Then attempt to resolve the resource to a URL. + */ + + final URL log4jUrl = getConfigPropertyValueUrl(); + + if (log4jUrl != null &&// + (log4jUrl.getFile().endsWith(".properties") || // + log4jUrl.getFile().endsWith(".logging")// + )) { + + PropertyConfigurator.configure(log4jUrl); + + } else if (log4jUrl != null && log4jUrl.getFile().endsWith(".xml")) { + + DOMConfigurator.configure(log4jUrl); + + } else { + + /* + * log4j was not explicitly configured and the log4j resource + * could not be located on the CLASSPATH. + */ + + System.err.println("ERROR: " + LogUtil.class.getName() + " : Could not initialize Log4J logging utility.\n" + "Set system property " +"'-Dlog4j.configuration=" @@ -106,6 +168,8 @@ +"file:<installDir>/" +"bigdata/src/resources/logging/log4j.properties'"); + } + } } @@ -121,4 +185,5 @@ public static Logger getLog4jRootLogger() { return Logger.getRootLogger(); } -} \ No newline at end of file + +} This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
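The practical effect of this change: when neither -Dlog4j.primary.configuration nor -Dlog4j.configuration is given, placing a log4j.properties (or log4j.logging or log4j.xml) at the root of the CLASSPATH is now sufficient, which is what a webapp container supplies via WEB-INF/classes. A minimal sketch (the demo class name is illustrative; LogUtil.getLog4jLogger(Class) is the API shown in the patch):

import org.apache.log4j.Logger;

import com.bigdata.util.config.LogUtil;

public class LogUtilClasspathDemo {

    public static void main(final String[] args) {

        // No log4j system properties are set, so the static initializer in
        // LogUtil falls back to the first of /log4j.properties, /log4j.logging,
        // /log4j.xml found on the CLASSPATH (ticket 394).
        final Logger log = LogUtil.getLog4jLogger(LogUtilClasspathDemo.class);

        log.info("log4j configured from the CLASSPATH");

    }

}

One nuance in the patch: a configuration named by a system property is installed with configureAndWatch() and is therefore re-read when the file changes, while a CLASSPATH resource is installed with configure() and is not watched.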
From: <tho...@us...> - 2011-10-20 11:42:05
Revision: 5369
http://bigdata.svn.sourceforge.net/bigdata/?rev=5369&view=rev
Author: thompsonbry
Date: 2011-10-20 11:41:54 +0000 (Thu, 20 Oct 2011)

Log Message:
-----------
Added support for the ESTCARD method (aka fast range count) to the REST API.

Modified Paths:
--------------
branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServlet.java
branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/QueryServlet.java
branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestNanoSparqlServer2.java
branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServlet.java
branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/QueryServlet.java
branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestNanoSparqlServer2.java

Added Paths:
-----------
branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/test_estcard.trig
branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/test_estcard.ttl
branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/test_estcard.trig
branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/test_estcard.ttl

Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServlet.java
===================================================================
--- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServlet.java	2011-10-20 09:06:53 UTC (rev 5368)
+++ branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServlet.java	2011-10-20 11:41:54 UTC (rev 5369)
@@ -279,4 +279,30 @@
 
     }
 
+    /**
+     * Report an access path range count and elapsed time back to the user agent.
+     *
+     * @param resp
+     *            The response.
+     * @param rangeCount
+     *            The range count.
+     * @param elapsed
+     *            The elapsed time (milliseconds).
+ * + * @throws IOException + */ + protected void reportRangeCount(final HttpServletResponse resp, + final long rangeCount, final long elapsed) throws IOException { + + final StringWriter w = new StringWriter(); + + final XMLBuilder t = new XMLBuilder(w); + + t.root("data").attr("rangeCount", rangeCount) + .attr("milliseconds", elapsed).close(); + + buildResponse(resp, HTTP_OK, MIME_APPLICATION_XML, w.toString()); + + } + } Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/QueryServlet.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/QueryServlet.java 2011-10-20 09:06:53 UTC (rev 5368) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/QueryServlet.java 2011-10-20 11:41:54 UTC (rev 5369) @@ -13,6 +13,9 @@ import javax.servlet.http.HttpServletResponse; import org.apache.log4j.Logger; +import org.openrdf.model.Resource; +import org.openrdf.model.URI; +import org.openrdf.model.Value; import org.openrdf.query.MalformedQueryException; import com.bigdata.bop.BOpUtility; @@ -22,6 +25,7 @@ import com.bigdata.bop.fed.QueryEngineFactory; import com.bigdata.journal.IIndexManager; import com.bigdata.journal.TimestampUtility; +import com.bigdata.rdf.sail.BigdataSailRepositoryConnection; import com.bigdata.rdf.sail.webapp.BigdataRDFContext.AbstractQueryTask; import com.bigdata.util.HTMLUtility; import com.bigdata.util.InnerCause; @@ -57,8 +61,15 @@ protected void doGet(final HttpServletRequest req, final HttpServletResponse resp) throws IOException { - doQuery(req, resp); - + if (req.getParameter("query") != null) { + doQuery(req, resp); + } else if (req.getParameter("ESTCARD") != null) { + doEstCard(req, resp); + } else { + buildResponse(resp, HTTP_BADREQUEST, MIME_TEXT_PLAIN); + return; + } + } /** @@ -412,4 +423,86 @@ } + /** + * Estimate the cardinality of an access path (fast range count). + * @param req + * @param resp + */ + private void doEstCard(final HttpServletRequest req, + final HttpServletResponse resp) throws IOException { + + final long begin = System.currentTimeMillis(); + + final String namespace = getNamespace(req); + + final Resource s; + final URI p; + final Value o; + final Resource c; + try { + s = EncodeDecodeValue.decodeResource(req.getParameter("s")); + p = EncodeDecodeValue.decodeURI(req.getParameter("p")); + o = EncodeDecodeValue.decodeValue(req.getParameter("o")); + c = EncodeDecodeValue.decodeResource(req.getParameter("c")); + } catch (IllegalArgumentException ex) { + buildResponse(resp, HTTP_BADREQUEST, MIME_TEXT_PLAIN, + ex.getLocalizedMessage()); + return; + } + + if (log.isInfoEnabled()) + log.info("ESTCARD: access path: (s=" + s + ", p=" + p + ", o=" + + o + ", c=" + c + ")"); + + try { + + try { + + BigdataSailRepositoryConnection conn = null; + try { + + final long timestamp = getTimestamp(req); + + conn = getBigdataRDFContext().getQueryConnection( + namespace, timestamp); + + // Range count all statements matching that access path. 
+ final long rangeCount = conn.getSailConnection() + .getBigdataSail().getDatabase() + .getAccessPath(s, p, o, c) + .rangeCount(false/* exact */); + + final long elapsed = System.currentTimeMillis() - begin; + + reportRangeCount(resp, rangeCount, elapsed); + + } catch(Throwable t) { + + if(conn != null) + conn.rollback(); + + throw new RuntimeException(t); + + } finally { + + if (conn != null) + conn.close(); + + } + + } catch (Throwable t) { + + throw BigdataRDFServlet.launderThrowable(t, resp, ""); + + } + + } catch (Exception ex) { + + // Will be rendered as an INTERNAL_ERROR. + throw new RuntimeException(ex); + + } + + } + } Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestNanoSparqlServer2.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestNanoSparqlServer2.java 2011-10-20 09:06:53 UTC (rev 5368) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestNanoSparqlServer2.java 2011-10-20 11:41:54 UTC (rev 5369) @@ -769,6 +769,85 @@ } /** + * Class representing the result of a mutation operation against the REST + * API. + * + * TODO Refactor into the non-test code base along with the XML generation + * and XML parsing? + */ + private static class RangeCountResult { + + /** The range count. */ + public final long rangeCount; + + /** The elapsed time for the operation. */ + public final long elapsedMillis; + + public RangeCountResult(final long rangeCount, final long elapsedMillis) { + this.rangeCount = rangeCount; + this.elapsedMillis = elapsedMillis; + } + + } + + protected RangeCountResult getRangeCountResult(final HttpURLConnection conn) throws Exception { + + checkResponseCode(conn); + + try { + + final String contentType = conn.getContentType(); + + if (!contentType.startsWith(BigdataRDFServlet.MIME_APPLICATION_XML)) { + + fail("Expecting Content-Type of " + + BigdataRDFServlet.MIME_APPLICATION_XML + ", not " + + contentType); + + } + + final SAXParser parser = SAXParserFactory.newInstance().newSAXParser(); + + final AtomicLong rangeCount = new AtomicLong(); + final AtomicLong elapsedMillis = new AtomicLong(); + + /* + * For example: <data rangeCount="5" milliseconds="112"/> + */ + parser.parse(conn.getInputStream(), new DefaultHandler2(){ + + public void startElement(final String uri, + final String localName, final String qName, + final Attributes attributes) { + + if (!"data".equals(qName)) + fail("Expecting: 'data', but have: uri=" + uri + + ", localName=" + localName + ", qName=" + + qName); + + rangeCount.set(Long.valueOf(attributes + .getValue("rangeCount"))); + + elapsedMillis.set(Long.valueOf(attributes + .getValue("milliseconds"))); + + } + + }); + + // done. + return new RangeCountResult(rangeCount.get(), elapsedMillis.get()); + + } finally { + + // terminate the http connection. + conn.disconnect(); + + } + + } + + /** * Issue a "status" request against the service. */ public void test_STATUS() throws Exception { @@ -1189,6 +1268,194 @@ } /** + * Test the ESTCARD method (fast range count). 
+ */ + public void test_ESTCARD() throws Exception { + + doInsertbyURL("POST", requestPath, packagePath + + "test_estcard.ttl"); + + final RangeCountResult rangeCountResult = doRangeCount(// + requestPath,// + null,// s + null,// p + null,// o + null // c + ); + + assertEquals(7, rangeCountResult.rangeCount); + + } + + public void test_ESTCARD_s() throws Exception { + + doInsertbyURL("POST", requestPath, packagePath + + "test_estcard.ttl"); + + final RangeCountResult rangeCountResult = doRangeCount(// + requestPath,// + new URIImpl("http://www.bigdata.com/Mike"),// s + null,// p + null,// o + null // c + ); + + assertEquals(3, rangeCountResult.rangeCount); + + } + + public void test_ESTCARD_p() throws Exception { + + doInsertbyURL("POST", requestPath, packagePath + + "test_estcard.ttl"); + + final RangeCountResult rangeCountResult = doRangeCount(// + requestPath,// + null,// s + RDF.TYPE,// p + null,// o + null // c + ); + + assertEquals(3, rangeCountResult.rangeCount); + + } + + public void test_ESTCARD_p2() throws Exception { + + doInsertbyURL("POST", requestPath, packagePath + + "test_estcard.ttl"); + + final RangeCountResult rangeCountResult = doRangeCount(// + requestPath,// + null,// s + RDFS.LABEL,// p + null,// o + null // c + ); + + assertEquals(2, rangeCountResult.rangeCount); + + } + + public void test_ESTCARD_o() throws Exception { + + doInsertbyURL("POST", requestPath, packagePath + + "test_estcard.ttl"); + + final RangeCountResult rangeCountResult = doRangeCount(// + requestPath,// + null,// s + null,// p + new LiteralImpl("Mike"),// o + null // c + ); + + assertEquals(1, rangeCountResult.rangeCount); + + } + + public void test_ESTCARD_so() throws Exception { + + doInsertbyURL("POST", requestPath, packagePath + + "test_estcard.ttl"); + + final RangeCountResult rangeCountResult = doRangeCount(// + requestPath,// + new URIImpl("http://www.bigdata.com/Mike"),// s, + RDF.TYPE,// p + null,// o + null // c + ); + + assertEquals(1, rangeCountResult.rangeCount); + + } + + /** + * Test the ESTCARD method (fast range count). 
+ */ + public void test_ESTCARD_quads_01() throws Exception { + + if(TestMode.quads != testMode) + return; + + doInsertbyURL("POST", requestPath, packagePath + + "test_estcard.trig"); + + final RangeCountResult rangeCountResult = doRangeCount(// + requestPath,// + null,// s, + null,// p + null,// o + null // c + ); + + assertEquals(7, rangeCountResult.rangeCount); + + } + + public void test_ESTCARD_quads_02() throws Exception { + + if(TestMode.quads != testMode) + return; + + doInsertbyURL("POST", requestPath, packagePath + + "test_estcard.trig"); + + final RangeCountResult rangeCountResult = doRangeCount(// + requestPath,// + null,// s, + null,// p + null,// o + new URIImpl("http://www.bigdata.com/")// c + ); + + assertEquals(3, rangeCountResult.rangeCount); + + } + + public void test_ESTCARD_quads_03() throws Exception { + + if(TestMode.quads != testMode) + return; + + doInsertbyURL("POST", requestPath, packagePath + + "test_estcard.trig"); + + final RangeCountResult rangeCountResult = doRangeCount(// + requestPath,// + null,// s, + null,// p + null,// o + new URIImpl("http://www.bigdata.com/c1")// c + ); + + assertEquals(2, rangeCountResult.rangeCount); + + } + + public void test_ESTCARD_quads_04() throws Exception { + + if(TestMode.quads != testMode) + return; + + doInsertbyURL("POST", requestPath, packagePath + + "test_estcard.trig"); + + final RangeCountResult rangeCountResult = doRangeCount(// + requestPath,// + new URIImpl("http://www.bigdata.com/Mike"),// s, + null,// p + null,// o + new URIImpl("http://www.bigdata.com/c1")// c + ); + + assertEquals(1, rangeCountResult.rangeCount); + + } + + /** * Select everything in the kb using a POST. */ public void test_DELETE_withQuery() throws Exception { @@ -1523,6 +1790,79 @@ } } + /** + * Submit a fast range count request. + * + * @param servlet + * @param s + * @param p + * @param o + * @param c + * @return The parsed fast range count response. 
+ */ + private RangeCountResult doRangeCount(// + final String servlet,// + final Resource s,// + final URI p,// + final Value o,// + final Resource c// + ) { + HttpURLConnection conn = null; + try { + + final LinkedHashMap<String, String[]> requestParams = new LinkedHashMap<String, String[]>(); + + requestParams.put("ESTCARD", null); + + if (s != null) + requestParams.put("s", + new String[] { EncodeDecodeValue.encodeValue(s) }); + + if (p != null) + requestParams.put("p", + new String[] { EncodeDecodeValue.encodeValue(p) }); + + if (o != null) + requestParams.put("o", + new String[] { EncodeDecodeValue.encodeValue(o) }); + + if (c != null) + requestParams.put("c", + new String[] { EncodeDecodeValue.encodeValue(c) }); + + final StringBuilder urlString = new StringBuilder(); + urlString.append(m_serviceURL).append(servlet); + addQueryParams(urlString, requestParams); + + final URL url = new URL(urlString.toString()); + conn = (HttpURLConnection) url.openConnection(); + conn.setRequestMethod("GET"); + conn.setDoOutput(false); + conn.setDoInput(true); + conn.setUseCaches(false); + conn.setReadTimeout(0); + + conn.connect(); + +// if (log.isInfoEnabled()) +// log.info(conn.getResponseMessage()); + + final int rc = conn.getResponseCode(); + + if (rc < 200 || rc >= 300) { + throw new IOException(conn.getResponseMessage()); + } + + return getRangeCountResult(conn); + + } catch (Throwable t) { + // clean up the connection resources + if (conn != null) + conn.disconnect(); + throw new RuntimeException(t); + } + } + private MutationResult doDeleteWithAccessPath(// final String servlet,// final Resource s,// Added: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/test_estcard.trig =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/test_estcard.trig (rev 0) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/test_estcard.trig 2011-10-20 11:41:54 UTC (rev 5369) @@ -0,0 +1,20 @@ +@prefix : <http://www.bigdata.com/> . +@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . +@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> . +@prefix foaf: <http://xmlns.com/foaf/0.1/> . + +:{ +:Mike rdf:type foaf:Person . +:Bryan rdf:type foaf:Person . +:Martyn rdf:type foaf:Person . +} + +:c1 { +:Mike rdfs:label "Mike" . +:Bryan rdfs:label "Bryan" . +} + +:c2 { +:Mike foaf:knows :Bryan . +:Bryan foaf:knows :Martyn . +} \ No newline at end of file Added: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/test_estcard.ttl =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/test_estcard.ttl (rev 0) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/test_estcard.ttl 2011-10-20 11:41:54 UTC (rev 5369) @@ -0,0 +1,12 @@ +@prefix : <http://www.bigdata.com/> . +@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . +@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> . +@prefix foaf: <http://xmlns.com/foaf/0.1/> . + +:Mike rdf:type foaf:Person . +:Bryan rdf:type foaf:Person . +:Martyn rdf:type foaf:Person . +:Mike rdfs:label "Mike" . +:Bryan rdfs:label "Bryan" . +:Mike foaf:knows :Bryan . +:Bryan foaf:knows :Martyn . 
Modified: branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServlet.java
===================================================================
--- branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServlet.java	2011-10-20 09:06:53 UTC (rev 5368)
+++ branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServlet.java	2011-10-20 11:41:54 UTC (rev 5369)
@@ -279,4 +279,30 @@
 
     }
 
+    /**
+     * Report an access path range count and elapsed time back to the user agent.
+     *
+     * @param resp
+     *            The response.
+     * @param rangeCount
+     *            The range count.
+     * @param elapsed
+     *            The elapsed time (milliseconds).
+     *
+     * @throws IOException
+     */
+    protected void reportRangeCount(final HttpServletResponse resp,
+            final long rangeCount, final long elapsed) throws IOException {
+
+        final StringWriter w = new StringWriter();
+
+        final XMLBuilder t = new XMLBuilder(w);
+
+        t.root("data").attr("rangeCount", rangeCount)
+                .attr("milliseconds", elapsed).close();
+
+        buildResponse(resp, HTTP_OK, MIME_APPLICATION_XML, w.toString());
+
+    }
+
 }

Modified: branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/QueryServlet.java
===================================================================
--- branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/QueryServlet.java	2011-10-20 09:06:53 UTC (rev 5368)
+++ branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/QueryServlet.java	2011-10-20 11:41:54 UTC (rev 5369)
@@ -13,6 +13,9 @@
 import javax.servlet.http.HttpServletResponse;
 
 import org.apache.log4j.Logger;
+import org.openrdf.model.Resource;
+import org.openrdf.model.URI;
+import org.openrdf.model.Value;
 import org.openrdf.query.MalformedQueryException;
 
 import com.bigdata.bop.BOpUtility;
@@ -22,6 +25,7 @@
 import com.bigdata.bop.fed.QueryEngineFactory;
 import com.bigdata.journal.IIndexManager;
 import com.bigdata.journal.TimestampUtility;
+import com.bigdata.rdf.sail.BigdataSailRepositoryConnection;
 import com.bigdata.rdf.sail.webapp.BigdataRDFContext.AbstractQueryTask;
 import com.bigdata.util.HTMLUtility;
 import com.bigdata.util.InnerCause;
@@ -57,7 +61,14 @@
     protected void doGet(final HttpServletRequest req,
            final HttpServletResponse resp) throws IOException {
 
-        doQuery(req, resp);
+        if (req.getParameter("query") != null) {
+            doQuery(req, resp);
+        } else if (req.getParameter("ESTCARD") != null) {
+            doEstCard(req, resp);
+        } else {
+            buildResponse(resp, HTTP_BADREQUEST, MIME_TEXT_PLAIN);
+            return;
+        }
 
     }
 
@@ -413,4 +424,86 @@
 
     }
 
+    /**
+     * Estimate the cardinality of an access path (fast range count).
+ * @param req + * @param resp + */ + private void doEstCard(final HttpServletRequest req, + final HttpServletResponse resp) throws IOException { + + final long begin = System.currentTimeMillis(); + + final String namespace = getNamespace(req); + + final Resource s; + final URI p; + final Value o; + final Resource c; + try { + s = EncodeDecodeValue.decodeResource(req.getParameter("s")); + p = EncodeDecodeValue.decodeURI(req.getParameter("p")); + o = EncodeDecodeValue.decodeValue(req.getParameter("o")); + c = EncodeDecodeValue.decodeResource(req.getParameter("c")); + } catch (IllegalArgumentException ex) { + buildResponse(resp, HTTP_BADREQUEST, MIME_TEXT_PLAIN, + ex.getLocalizedMessage()); + return; + } + + if (log.isInfoEnabled()) + log.info("ESTCARD: access path: (s=" + s + ", p=" + p + ", o=" + + o + ", c=" + c + ")"); + + try { + + try { + + BigdataSailRepositoryConnection conn = null; + try { + + final long timestamp = getTimestamp(req); + + conn = getBigdataRDFContext().getQueryConnection( + namespace, timestamp); + + // Range count all statements matching that access path. + final long rangeCount = conn.getSailConnection() + .getBigdataSail().getDatabase() + .getAccessPath(s, p, o, c) + .rangeCount(false/* exact */); + + final long elapsed = System.currentTimeMillis() - begin; + + reportRangeCount(resp, rangeCount, elapsed); + + } catch(Throwable t) { + + if(conn != null) + conn.rollback(); + + throw new RuntimeException(t); + + } finally { + + if (conn != null) + conn.close(); + + } + + } catch (Throwable t) { + + throw BigdataRDFServlet.launderThrowable(t, resp, ""); + + } + + } catch (Exception ex) { + + // Will be rendered as an INTERNAL_ERROR. + throw new RuntimeException(ex); + + } + + } + } Modified: branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestNanoSparqlServer2.java =================================================================== --- branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestNanoSparqlServer2.java 2011-10-20 09:06:53 UTC (rev 5368) +++ branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestNanoSparqlServer2.java 2011-10-20 11:41:54 UTC (rev 5369) @@ -769,6 +769,85 @@ } /** + * Class representing the result of a mutation operation against the REST + * API. + * + * TODO Refactor into the non-test code base along with the XML generation + * and XML parsing? + */ + private static class RangeCountResult { + + /** The range count. */ + public final long rangeCount; + + /** The elapsed time for the operation. 
*/ + public final long elapsedMillis; + + public RangeCountResult(final long rangeCount, final long elapsedMillis) { + this.rangeCount = rangeCount; + this.elapsedMillis = elapsedMillis; + } + + } + + protected RangeCountResult getRangeCountResult(final HttpURLConnection conn) throws Exception { + + checkResponseCode(conn); + + try { + + final String contentType = conn.getContentType(); + + if (!contentType.startsWith(BigdataRDFServlet.MIME_APPLICATION_XML)) { + + fail("Expecting Content-Type of " + + BigdataRDFServlet.MIME_APPLICATION_XML + ", not " + + contentType); + + } + + final SAXParser parser = SAXParserFactory.newInstance().newSAXParser(); + + final AtomicLong rangeCount = new AtomicLong(); + final AtomicLong elapsedMillis = new AtomicLong(); + + /* + * For example: <data rangeCount="5" milliseconds="112"/> + */ + parser.parse(conn.getInputStream(), new DefaultHandler2(){ + + public void startElement(final String uri, + final String localName, final String qName, + final Attributes attributes) { + + if (!"data".equals(qName)) + fail("Expecting: 'data', but have: uri=" + uri + + ", localName=" + localName + ", qName=" + + qName); + + rangeCount.set(Long.valueOf(attributes + .getValue("rangeCount"))); + + elapsedMillis.set(Long.valueOf(attributes + .getValue("milliseconds"))); + + } + + }); + + // done. + return new RangeCountResult(rangeCount.get(), elapsedMillis.get()); + + } finally { + + // terminate the http connection. + conn.disconnect(); + + } + + } + + /** * Issue a "status" request against the service. */ public void test_STATUS() throws Exception { @@ -1247,6 +1326,194 @@ } /** + * Test the ESTCARD method (fast range count). + */ + public void test_ESTCARD() throws Exception { + + doInsertbyURL("POST", requestPath, packagePath + + "test_estcard.ttl"); + + final RangeCountResult rangeCountResult = doRangeCount(// + requestPath,// + null,// s + null,// p + null,// o + null // c + ); + + assertEquals(7, rangeCountResult.rangeCount); + + } + + public void test_ESTCARD_s() throws Exception { + + doInsertbyURL("POST", requestPath, packagePath + + "test_estcard.ttl"); + + final RangeCountResult rangeCountResult = doRangeCount(// + requestPath,// + new URIImpl("http://www.bigdata.com/Mike"),// s + null,// p + null,// o + null // c + ); + + assertEquals(3, rangeCountResult.rangeCount); + + } + + public void test_ESTCARD_p() throws Exception { + + doInsertbyURL("POST", requestPath, packagePath + + "test_estcard.ttl"); + + final RangeCountResult rangeCountResult = doRangeCount(// + requestPath,// + null,// s + RDF.TYPE,// p + null,// o + null // c + ); + + assertEquals(3, rangeCountResult.rangeCount); + + } + + public void test_ESTCARD_p2() throws Exception { + + doInsertbyURL("POST", requestPath, packagePath + + "test_estcard.ttl"); + + final RangeCountResult rangeCountResult = doRangeCount(// + requestPath,// + null,// s + RDFS.LABEL,// p + null,// o + null // c + ); + + assertEquals(2, rangeCountResult.rangeCount); + + } + + public void test_ESTCARD_o() throws Exception { + + doInsertbyURL("POST", requestPath, packagePath + + "test_estcard.ttl"); + + final RangeCountResult rangeCountResult = doRangeCount(// + requestPath,// + null,// s + null,// p + new LiteralImpl("Mike"),// o + null // c + ); + + assertEquals(1, rangeCountResult.rangeCount); + + } + + public void test_ESTCARD_so() throws Exception { + + doInsertbyURL("POST", requestPath, packagePath + + "test_estcard.ttl"); + + final RangeCountResult rangeCountResult = doRangeCount(// + requestPath,// + new 
URIImpl("http://www.bigdata.com/Mike"),// s, + RDF.TYPE,// p + null,// o + null // c + ); + + assertEquals(1, rangeCountResult.rangeCount); + + } + + /** + * Test the ESTCARD method (fast range count). + */ + public void test_ESTCARD_quads_01() throws Exception { + + if(TestMode.quads != testMode) + return; + + doInsertbyURL("POST", requestPath, packagePath + + "test_estcard.trig"); + + final RangeCountResult rangeCountResult = doRangeCount(// + requestPath,// + null,// s, + null,// p + null,// o + null // c + ); + + assertEquals(7, rangeCountResult.rangeCount); + + } + + public void test_ESTCARD_quads_02() throws Exception { + + if(TestMode.quads != testMode) + return; + + doInsertbyURL("POST", requestPath, packagePath + + "test_estcard.trig"); + + final RangeCountResult rangeCountResult = doRangeCount(// + requestPath,// + null,// s, + null,// p + null,// o + new URIImpl("http://www.bigdata.com/")// c + ); + + assertEquals(3, rangeCountResult.rangeCount); + + } + + public void test_ESTCARD_quads_03() throws Exception { + + if(TestMode.quads != testMode) + return; + + doInsertbyURL("POST", requestPath, packagePath + + "test_estcard.trig"); + + final RangeCountResult rangeCountResult = doRangeCount(// + requestPath,// + null,// s, + null,// p + null,// o + new URIImpl("http://www.bigdata.com/c1")// c + ); + + assertEquals(2, rangeCountResult.rangeCount); + + } + + public void test_ESTCARD_quads_04() throws Exception { + + if(TestMode.quads != testMode) + return; + + doInsertbyURL("POST", requestPath, packagePath + + "test_estcard.trig"); + + final RangeCountResult rangeCountResult = doRangeCount(// + requestPath,// + new URIImpl("http://www.bigdata.com/Mike"),// s, + null,// p + null,// o + new URIImpl("http://www.bigdata.com/c1")// c + ); + + assertEquals(1, rangeCountResult.rangeCount); + + } + + /** * Select everything in the kb using a POST. */ public void test_DELETE_withQuery() throws Exception { @@ -1581,6 +1848,79 @@ } } + /** + * Submit a fast range count request. + * + * @param servlet + * @param s + * @param p + * @param o + * @param c + * @return The parsed fast range count response. 
+ */ + private RangeCountResult doRangeCount(// + final String servlet,// + final Resource s,// + final URI p,// + final Value o,// + final Resource c// + ) { + HttpURLConnection conn = null; + try { + + final LinkedHashMap<String, String[]> requestParams = new LinkedHashMap<String, String[]>(); + + requestParams.put("ESTCARD", null); + + if (s != null) + requestParams.put("s", + new String[] { EncodeDecodeValue.encodeValue(s) }); + + if (p != null) + requestParams.put("p", + new String[] { EncodeDecodeValue.encodeValue(p) }); + + if (o != null) + requestParams.put("o", + new String[] { EncodeDecodeValue.encodeValue(o) }); + + if (c != null) + requestParams.put("c", + new String[] { EncodeDecodeValue.encodeValue(c) }); + + final StringBuilder urlString = new StringBuilder(); + urlString.append(m_serviceURL).append(servlet); + addQueryParams(urlString, requestParams); + + final URL url = new URL(urlString.toString()); + conn = (HttpURLConnection) url.openConnection(); + conn.setRequestMethod("GET"); + conn.setDoOutput(false); + conn.setDoInput(true); + conn.setUseCaches(false); + conn.setReadTimeout(0); + + conn.connect(); + +// if (log.isInfoEnabled()) +// log.info(conn.getResponseMessage()); + + final int rc = conn.getResponseCode(); + + if (rc < 200 || rc >= 300) { + throw new IOException(conn.getResponseMessage()); + } + + return getRangeCountResult(conn); + + } catch (Throwable t) { + // clean up the connection resources + if (conn != null) + conn.disconnect(); + throw new RuntimeException(t); + } + } + private MutationResult doDeleteWithAccessPath(// final String servlet,// final Resource s,// Added: branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/test_estcard.trig =================================================================== --- branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/test_estcard.trig (rev 0) +++ branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/test_estcard.trig 2011-10-20 11:41:54 UTC (rev 5369) @@ -0,0 +1,20 @@ +@prefix : <http://www.bigdata.com/> . +@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . +@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> . +@prefix foaf: <http://xmlns.com/foaf/0.1/> . + +:{ +:Mike rdf:type foaf:Person . +:Bryan rdf:type foaf:Person . +:Martyn rdf:type foaf:Person . +} + +:c1 { +:Mike rdfs:label "Mike" . +:Bryan rdfs:label "Bryan" . +} + +:c2 { +:Mike foaf:knows :Bryan . +:Bryan foaf:knows :Martyn . +} \ No newline at end of file Added: branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/test_estcard.ttl =================================================================== --- branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/test_estcard.ttl (rev 0) +++ branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/test_estcard.ttl 2011-10-20 11:41:54 UTC (rev 5369) @@ -0,0 +1,12 @@ +@prefix : <http://www.bigdata.com/> . +@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . +@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> . +@prefix foaf: <http://xmlns.com/foaf/0.1/> . + +:Mike rdf:type foaf:Person . +:Bryan rdf:type foaf:Person . +:Martyn rdf:type foaf:Person . +:Mike rdfs:label "Mike" . +:Bryan rdfs:label "Bryan" . +:Mike foaf:knows :Bryan . +:Bryan foaf:knows :Martyn . This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
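For reference, a minimal client-side sketch of the new ESTCARD method. The ESTCARD marker parameter, the s/p/o/c parameters, and the <data rangeCount="..." milliseconds="..."/> response document are as implemented above; the service URL and the demo class name are assumptions, and EncodeDecodeValue is assumed to be visible to client code (the tests above call it statically):

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

import org.openrdf.model.impl.URIImpl;

import com.bigdata.rdf.sail.webapp.EncodeDecodeValue;

public class EstCardDemo {

    public static void main(final String[] args) throws Exception {

        // Assumption: a NanoSparqlServer instance is listening here.
        final String serviceURL = "http://localhost:8080/sparql";

        // Bind the subject position of the access path; p, o and c stay open.
        final String s = EncodeDecodeValue.encodeValue(new URIImpl(
                "http://www.bigdata.com/Mike"));

        final URL url = new URL(serviceURL + "?ESTCARD&s="
                + URLEncoder.encode(s, "UTF-8"));

        final HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try {
            conn.setRequestMethod("GET");
            System.out.println("HTTP " + conn.getResponseCode());
            // Against test_estcard.ttl this would print something like:
            // <data rangeCount="3" milliseconds="1"/>
            final InputStream is = conn.getInputStream();
            try {
                int b;
                while ((b = is.read()) != -1)
                    System.out.write(b);
                System.out.flush();
            } finally {
                is.close();
            }
        } finally {
            conn.disconnect();
        }

    }

}

Note that the server computes rangeCount(false/* exact */), i.e. the fast range count, so the reported value is an estimate rather than a guaranteed exact count.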
From: <tho...@us...> - 2011-10-30 13:05:54
Revision: 5458 http://bigdata.svn.sourceforge.net/bigdata/?rev=5458&view=rev Author: thompsonbry Date: 2011-10-30 13:05:47 +0000 (Sun, 30 Oct 2011) Log Message: ----------- Modified the life cycle listener for the NanoSparqlServer to force registration of the NQuads RDFFormat. Added unit test for loading NQuads data. This has to be done by URL as there is currently no "writer" for the NQuads format (that is on a todo list). Modified Paths: -------------- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServletContextListener.java branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestNanoSparqlServer2.java branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServletContextListener.java branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestNanoSparqlServer2.java Added Paths: ----------- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/quads.nq branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/quads.nq Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServletContextListener.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServletContextListener.java 2011-10-29 23:51:02 UTC (rev 5457) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServletContextListener.java 2011-10-30 13:05:47 UTC (rev 5458) @@ -47,6 +47,7 @@ import com.bigdata.journal.ITransactionService; import com.bigdata.journal.ITx; import com.bigdata.journal.Journal; +import com.bigdata.rdf.rio.NQuadsParser; import com.bigdata.rdf.sail.BigdataSail; import com.bigdata.rdf.store.ScaleOutTripleStore; import com.bigdata.service.AbstractDistributedFederation; @@ -315,6 +316,9 @@ } + // Force registration of the NQuads RDFFormat. + NQuadsParser.forceLoad(); + if (log.isInfoEnabled()) log.info("done"); Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestNanoSparqlServer2.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestNanoSparqlServer2.java 2011-10-29 23:51:02 UTC (rev 5457) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestNanoSparqlServer2.java 2011-10-30 13:05:47 UTC (rev 5458) @@ -68,6 +68,7 @@ import com.bigdata.journal.IIndexManager; import com.bigdata.journal.ITx; import com.bigdata.journal.Journal; +import com.bigdata.rdf.rio.NQuadsParser; import com.bigdata.rdf.sail.BigdataSail; import com.bigdata.rdf.sail.BigdataSailRepository; import com.bigdata.rdf.sail.BigdataSailRepositoryConnection; @@ -1268,6 +1269,69 @@ } /** + * Test for POST of an NQuads resource by a URL. + */ + public void test_POST_INSERT_NQuads_by_URL() + throws Exception { + + if(TestMode.quads != testMode) + return; + + // Verify nothing in the KB. + { + final String queryStr = "ASK where {?s ?p ?o}"; + + final QueryOptions opts = new QueryOptions(); + opts.serviceURL = m_serviceURL; + opts.queryStr = queryStr; + opts.method = "GET"; + + opts.acceptHeader = BooleanQueryResultFormat.SPARQL + .getDefaultMIMEType(); + assertEquals(false, askResults(doSparqlQuery(opts, requestPath))); + } + + // #of statements in that RDF file. 
+ final long expectedStatementCount = 7; + + // Load the resource into the KB. + { + final QueryOptions opts = new QueryOptions(); + opts.serviceURL = m_serviceURL; + opts.method = "POST"; + opts.requestParams = new LinkedHashMap<String, String[]>(); + opts.requestParams + .put("uri", + new String[] { "file:bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/quads.nq" }); + + final MutationResult result = getMutationResult(doSparqlQuery(opts, + requestPath)); + + assertEquals(expectedStatementCount, result.mutationCount); + + } + + /* + * Verify KB has the loaded data. + */ + { + final String queryStr = "SELECT * where {?s ?p ?o}"; + + final QueryOptions opts = new QueryOptions(); + opts.serviceURL = m_serviceURL; + opts.queryStr = queryStr; + opts.method = "GET"; + + opts.acceptHeader = BooleanQueryResultFormat.SPARQL + .getDefaultMIMEType(); + + assertEquals(expectedStatementCount, countResults(doSparqlQuery( + opts, requestPath))); + } + + } + + /** * Test the ESTCARD method (fast range count). */ public void test_ESTCARD() throws Exception { @@ -2200,6 +2264,25 @@ } /** + * Load and return a graph from a resource. + * + * @param resource + * The resource. + * + * @return The graph. + */ + private Graph loadGraphFromResource(final String resource) + throws RDFParseException, RDFHandlerException, IOException { + +// final RDFFormat rdfFormat = RDFFormat.forFileName(resource); + + final Graph g = readGraphFromFile(new File(resource)); + + return g; + + } + + /** * Reads a resource and sends it using an INSERT with BODY request to be * loaded into the database. * @@ -2275,6 +2358,78 @@ } + /** + * Reads a resource and sends it using an INSERT with BODY request to be + * loaded into the database. + * + * @param method + * @param servlet + * @param resource + * @return + * @throws Exception + */ + private MutationResult doInsertByBody(final String method, + final String servlet, final RDFFormat rdfFormat, final Graph g, + final URI defaultContext) throws Exception { + + final byte[] wireData = writeOnBuffer(rdfFormat, g); + + HttpURLConnection conn = null; + try { + + final URL url = new URL(m_serviceURL + + servlet + + (defaultContext == null ? 
"" + : ("?context-uri=" + URLEncoder.encode( + defaultContext.stringValue(), "UTF-8")))); + conn = (HttpURLConnection) url.openConnection(); + conn.setRequestMethod(method); + conn.setDoOutput(true); + conn.setDoInput(true); + conn.setUseCaches(false); + conn.setReadTimeout(0); + + conn.setRequestProperty("Content-Type", + rdfFormat.getDefaultMIMEType()); + + final byte[] data = wireData; + + conn.setRequestProperty("Content-Length", + Integer.toString(data.length)); + + final OutputStream os = conn.getOutputStream(); + try { + os.write(data); + os.flush(); + } finally { + os.close(); + } + // conn.connect(); + + final int rc = conn.getResponseCode(); + + if (log.isInfoEnabled()) { + log.info("*** RESPONSE: " + rc + " for " + method); + // log.info("*** RESPONSE: " + getResponseBody(conn)); + } + + if (rc < 200 || rc >= 300) { + + throw new IOException(conn.getResponseMessage()); + + } + + return getMutationResult(conn); + + } catch (Throwable t) { + // clean up the connection resources + if (conn != null) + conn.disconnect(); + throw new RuntimeException(t); + } + + } + private static String getResponseBody(final HttpURLConnection conn) throws IOException { Added: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/quads.nq =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/quads.nq (rev 0) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/quads.nq 2011-10-30 13:05:47 UTC (rev 5458) @@ -0,0 +1,7 @@ +<http://www.bigdata.com/Mike> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://xmlns.com/foaf/0.1/Person> <http://example.org> . +<http://www.bigdata.com/Bryan> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://xmlns.com/foaf/0.1/Person> <http://example.org> . +<http://www.bigdata.com/Martyn> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://xmlns.com/foaf/0.1/Person> <http://example.org> . +<http://www.bigdata.com/Mike> <http://www.w3.org/2000/01/rdf-schema#label> "Mike" <http://example.org> . +<http://www.bigdata.com/Bryan> <http://www.w3.org/2000/01/rdf-schema#label> "Bryan" <http://example.org> . +<http://www.bigdata.com/Mike> <http://xmlns.com/foaf/0.1/knows> <http://www.bigdata.com/Bryan> <http://example.org> . +<http://www.bigdata.com/Bryan> <http://xmlns.com/foaf/0.1/knows> <http://www.bigdata.com/Martyn> <http://example.org> . Modified: branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServletContextListener.java =================================================================== --- branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServletContextListener.java 2011-10-29 23:51:02 UTC (rev 5457) +++ branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServletContextListener.java 2011-10-30 13:05:47 UTC (rev 5458) @@ -47,6 +47,7 @@ import com.bigdata.journal.ITransactionService; import com.bigdata.journal.ITx; import com.bigdata.journal.Journal; +import com.bigdata.rdf.rio.NQuadsParser; import com.bigdata.rdf.sail.BigdataSail; import com.bigdata.rdf.store.ScaleOutTripleStore; import com.bigdata.service.AbstractDistributedFederation; @@ -329,6 +330,9 @@ } + // Force registration of the NQuads RDFFormat. 
+ NQuadsParser.forceLoad(); + if (log.isInfoEnabled()) log.info("done"); Modified: branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestNanoSparqlServer2.java =================================================================== --- branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestNanoSparqlServer2.java 2011-10-29 23:51:02 UTC (rev 5457) +++ branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestNanoSparqlServer2.java 2011-10-30 13:05:47 UTC (rev 5458) @@ -62,12 +62,14 @@ import org.openrdf.rio.RDFWriterRegistry; import org.openrdf.rio.helpers.StatementCollector; import org.openrdf.sail.SailException; +import org.semanticweb.yars.nx.parser.NxParser; import org.xml.sax.Attributes; import org.xml.sax.ext.DefaultHandler2; import com.bigdata.journal.IIndexManager; import com.bigdata.journal.ITx; import com.bigdata.journal.Journal; +import com.bigdata.rdf.rio.NQuadsParser; import com.bigdata.rdf.sail.BigdataSail; import com.bigdata.rdf.sail.BigdataSailRepository; import com.bigdata.rdf.sail.BigdataSailRepositoryConnection; @@ -1150,6 +1152,14 @@ } +// // FIXME We need an NQuadsWriter to run this test. +// // Note: quads interchange +// public void test_POST_INSERT_withBody_NQUADS() throws Exception { +// +// doInsertWithBodyTest("POST", 23, requestPath, NQuadsParser.nquads); +// +// } + // TODO Write test for UPDATE where we override the default context using // the context-uri. public void test_POST_INSERT_triples_with_BODY_and_defaultContext() @@ -1213,6 +1223,69 @@ } /** + * Test for POST of an NQuads resource by a URL. + */ + public void test_POST_INSERT_NQuads_by_URL() + throws Exception { + + if(TestMode.quads != testMode) + return; + + // Verify nothing in the KB. + { + final String queryStr = "ASK where {?s ?p ?o}"; + + final QueryOptions opts = new QueryOptions(); + opts.serviceURL = m_serviceURL; + opts.queryStr = queryStr; + opts.method = "GET"; + + opts.acceptHeader = BooleanQueryResultFormat.SPARQL + .getDefaultMIMEType(); + assertEquals(false, askResults(doSparqlQuery(opts, requestPath))); + } + + // #of statements in that RDF file. + final long expectedStatementCount = 7; + + // Load the resource into the KB. + { + final QueryOptions opts = new QueryOptions(); + opts.serviceURL = m_serviceURL; + opts.method = "POST"; + opts.requestParams = new LinkedHashMap<String, String[]>(); + opts.requestParams + .put("uri", + new String[] { "file:bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/quads.nq" }); + + final MutationResult result = getMutationResult(doSparqlQuery(opts, + requestPath)); + + assertEquals(expectedStatementCount, result.mutationCount); + + } + + /* + * Verify KB has the loaded data. + */ + { + final String queryStr = "SELECT * where {?s ?p ?o}"; + + final QueryOptions opts = new QueryOptions(); + opts.serviceURL = m_serviceURL; + opts.queryStr = queryStr; + opts.method = "GET"; + + opts.acceptHeader = BooleanQueryResultFormat.SPARQL + .getDefaultMIMEType(); + + assertEquals(expectedStatementCount, countResults(doSparqlQuery( + opts, requestPath))); + } + + } + + /** * Test of insert and retrieval of a large literal. 
*/ public void test_INSERT_veryLargeLiteral() throws Exception { @@ -2268,7 +2341,7 @@ private Graph loadGraphFromResource(final String resource) throws RDFParseException, RDFHandlerException, IOException { - final RDFFormat rdfFormat = RDFFormat.forFileName(resource); +// final RDFFormat rdfFormat = RDFFormat.forFileName(resource); final Graph g = readGraphFromFile(new File(resource)); Added: branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/quads.nq =================================================================== --- branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/quads.nq (rev 0) +++ branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/quads.nq 2011-10-30 13:05:47 UTC (rev 5458) @@ -0,0 +1,7 @@ +<http://www.bigdata.com/Mike> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://xmlns.com/foaf/0.1/Person> <http://example.org> . +<http://www.bigdata.com/Bryan> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://xmlns.com/foaf/0.1/Person> <http://example.org> . +<http://www.bigdata.com/Martyn> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://xmlns.com/foaf/0.1/Person> <http://example.org> . +<http://www.bigdata.com/Mike> <http://www.w3.org/2000/01/rdf-schema#label> "Mike" <http://example.org> . +<http://www.bigdata.com/Bryan> <http://www.w3.org/2000/01/rdf-schema#label> "Bryan" <http://example.org> . +<http://www.bigdata.com/Mike> <http://xmlns.com/foaf/0.1/knows> <http://www.bigdata.com/Bryan> <http://example.org> . +<http://www.bigdata.com/Bryan> <http://xmlns.com/foaf/0.1/knows> <http://www.bigdata.com/Martyn> <http://example.org> . This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
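For reference, the INSERT-by-URL request exercised by test_POST_INSERT_NQuads_by_URL in r5458 can be issued by any HTTP client. The sketch below assumes a NanoSparqlServer endpoint at http://localhost:8080/sparql (an illustrative value, not part of the commit); the POST method and the "uri" request parameter follow the test above. Because there is not yet an NQuads writer, the data travels by reference and is dereferenced on the server side rather than being sent in the request body.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class InsertNQuadsByUrl {

    public static void main(final String[] args) throws Exception {

        // Assumed service endpoint (illustrative only).
        final String serviceURL = "http://localhost:8080/sparql";

        // The resource to load, identified by URL as in the test above.
        final String resource =
                "file:bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/quads.nq";

        final URL url = new URL(serviceURL + "?uri="
                + URLEncoder.encode(resource, "UTF-8"));

        final HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try {
            conn.setRequestMethod("POST");

            final int rc = conn.getResponseCode();
            if (rc < 200 || rc >= 300)
                throw new RuntimeException(rc + " : " + conn.getResponseMessage());

            // The response body reports the mutation count (7 for quads.nq).
            final BufferedReader r = new BufferedReader(new InputStreamReader(
                    conn.getInputStream(), "UTF-8"));
            try {
                String line;
                while ((line = r.readLine()) != null)
                    System.out.println(line);
            } finally {
                r.close();
            }
        } finally {
            conn.disconnect();
        }
    }
}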
From: <tho...@us...> - 2011-12-06 12:27:36
Revision: 5750 http://bigdata.svn.sourceforge.net/bigdata/?rev=5750&view=rev Author: thompsonbry Date: 2011-12-06 12:27:27 +0000 (Tue, 06 Dec 2011) Log Message: ----------- Added alternative constructor for the BigdataSail which may be used to wrap a TempTripleStore as a SAIL by also passing in the AbstractTripleStore instance for the main database residing on the Journal or IBigdataFederation. That reference is used to resolve the QueryEngine. @see https://sourceforge.net/apps/trac/bigdata/ticket/422 Modified Paths: -------------- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java 2011-11-30 19:59:17 UTC (rev 5749) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java 2011-12-06 12:27:27 UTC (rev 5750) @@ -898,23 +898,43 @@ } - + /** + * Constructor used to wrap an existing {@link AbstractTripleStore} + * instance. + * + * @param database + * The instance. + */ + public BigdataSail(final AbstractTripleStore database) { + + this(database, database); + + } /** - * Core ctor. You must use this variant for a scale-out triple store. + * Core ctor. You must use this variant for a scale-out triple store. * <p> - * To create a {@link BigdataSail} backed by an - * {@link IBigdataFederation} use the {@link ScaleOutTripleStore} ctor and - * then {@link AbstractTripleStore#create()} the triple store if it does not + * To create a {@link BigdataSail} backed by an {@link IBigdataFederation} + * use the {@link ScaleOutTripleStore} ctor and then + * {@link AbstractTripleStore#create()} the triple store if it does not * exist. * * @param database * An existing {@link AbstractTripleStore}. + * @param mainDatabase + * When <i>database</i> is a {@link TempTripleStore}, this is the + * {@link AbstractTripleStore} used to resolve the + * {@link QueryEngine}. Otherwise it must be the same object as + * the <i>database</i>. */ - public BigdataSail(final AbstractTripleStore database) { + public BigdataSail(final AbstractTripleStore database, + final AbstractTripleStore mainDatabase) { if (database == null) throw new IllegalArgumentException(); + + if (mainDatabase == null) + throw new IllegalArgumentException(); // default to false here and overwritten by some ctor variants. this.closeOnShutdown = false; @@ -1057,7 +1077,7 @@ namespaces = Collections.synchronizedMap(new LinkedHashMap<String, String>()); - queryEngine = QueryEngineFactory.getQueryController(database + queryEngine = QueryEngineFactory.getQueryController(mainDatabase .getIndexManager()); } Modified: branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java =================================================================== --- branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java 2011-11-30 19:59:17 UTC (rev 5749) +++ branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java 2011-12-06 12:27:27 UTC (rev 5750) @@ -811,20 +811,42 @@ } /** - * Core ctor. You must use this variant for a scale-out triple store. + * Constructor used to wrap an existing {@link AbstractTripleStore} + * instance. 
+ * + * @param database + * The instance. + */ + public BigdataSail(final AbstractTripleStore database) { + + this(database, database); + + } + + /** + * Core ctor. You must use this variant for a scale-out triple store. * <p> - * To create a {@link BigdataSail} backed by an - * {@link IBigdataFederation} use the {@link ScaleOutTripleStore} ctor and - * then {@link AbstractTripleStore#create()} the triple store if it does not + * To create a {@link BigdataSail} backed by an {@link IBigdataFederation} + * use the {@link ScaleOutTripleStore} ctor and then + * {@link AbstractTripleStore#create()} the triple store if it does not * exist. * * @param database * An existing {@link AbstractTripleStore}. + * @param mainDatabase + * When <i>database</i> is a {@link TempTripleStore}, this is the + * {@link AbstractTripleStore} used to resolve the + * {@link QueryEngine}. Otherwise it must be the same object as + * the <i>database</i>. */ - public BigdataSail(final AbstractTripleStore database) { + public BigdataSail(final AbstractTripleStore database, + final AbstractTripleStore mainDatabase) { if (database == null) throw new IllegalArgumentException(); + + if (mainDatabase == null) + throw new IllegalArgumentException(); // default to false here and overwritten by some ctor variants. this.closeOnShutdown = false; @@ -938,7 +960,7 @@ namespaces = Collections.synchronizedMap(new LinkedHashMap<String, String>()); - queryEngine = QueryEngineFactory.getQueryController(database + queryEngine = QueryEngineFactory.getQueryController(mainDatabase .getIndexManager()); } This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
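A minimal usage sketch for the new two-argument constructor. It assumes an existing, initialized BigdataSail named sail whose main database resides on a Journal; checked exceptions are elided, and the TempTripleStore construction (classes com.bigdata.rdf.store.AbstractTripleStore and com.bigdata.rdf.store.TempTripleStore) follows the pattern used by the TestTicket422 unit test added in the next revision, r5751.

final AbstractTripleStore mainDb = sail.getDatabase();

// The temporary store is backed by the index manager's temp store.
final TempTripleStore tempStore = new TempTripleStore(mainDb
        .getIndexManager().getTempStore(), mainDb.getProperties(), mainDb);

// Wrap the temporary store as a SAIL. The 2nd argument is the main
// database, which is used to resolve the QueryEngine.
final BigdataSail tempSail = new BigdataSail(tempStore, mainDb);

tempSail.initialize();
try {
    // ... read and write against tempSail ...
} finally {
    tempSail.shutDown();
    tempStore.close();
}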
From: <tho...@us...> - 2011-12-06 14:18:43
Revision: 5751 http://bigdata.svn.sourceforge.net/bigdata/?rev=5751&view=rev Author: thompsonbry Date: 2011-12-06 14:18:32 +0000 (Tue, 06 Dec 2011) Log Message: ----------- Added unit test for wrapping a TempTripleStore as a BigdataSail. @see https://sourceforge.net/apps/trac/bigdata/ticket/422 Modified Paths: -------------- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithQuads.java branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithSids.java branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithoutSids.java branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithQuads.java branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithSids.java branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithoutSids.java Added Paths: ----------- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestTicket422.java branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestTicket422.java Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithQuads.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithQuads.java 2011-12-06 12:27:27 UTC (rev 5750) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithQuads.java 2011-12-06 14:18:32 UTC (rev 5751) @@ -124,6 +124,7 @@ suite.addTestSuite(com.bigdata.rdf.sail.TestTicket353.class); suite.addTestSuite(com.bigdata.rdf.sail.TestTicket355.class); suite.addTestSuite(com.bigdata.rdf.sail.TestTicket361.class); + suite.addTestSuite(com.bigdata.rdf.sail.TestTicket422.class); suite.addTestSuite(com.bigdata.rdf.sail.DavidsTestBOps.class); Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithSids.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithSids.java 2011-12-06 12:27:27 UTC (rev 5750) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithSids.java 2011-12-06 14:18:32 UTC (rev 5751) @@ -101,6 +101,7 @@ suite.addTestSuite(com.bigdata.rdf.sail.TestTicket275.class); suite.addTestSuite(com.bigdata.rdf.sail.TestTicket276.class); + suite.addTestSuite(com.bigdata.rdf.sail.TestTicket422.class); suite.addTestSuite(com.bigdata.rdf.sail.TestLexJoinOps.class); Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithoutSids.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithoutSids.java 2011-12-06 12:27:27 UTC (rev 5750) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithoutSids.java 2011-12-06 14:18:32 UTC (rev 5751) @@ -95,6 +95,7 @@ suite.addTestSuite(com.bigdata.rdf.sail.TestTicket275.class); suite.addTestSuite(com.bigdata.rdf.sail.TestTicket276.class); + suite.addTestSuite(com.bigdata.rdf.sail.TestTicket422.class); suite.addTestSuite(com.bigdata.rdf.sail.TestLexJoinOps.class); Added: 
branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestTicket422.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestTicket422.java (rev 0) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestTicket422.java 2011-12-06 14:18:32 UTC (rev 5751) @@ -0,0 +1,128 @@ +/** + +Copyright (C) SYSTAP, LLC 2006-2011. All rights reserved. + +Contact: + SYSTAP, LLC + 4501 Tower Road + Greensboro, NC 27410 + lic...@bi... + +This program is free software; you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation; version 2 of the License. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +You should have received a copy of the GNU General Public License +along with this program; if not, write to the Free Software +Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +*/ +/* + * Created on Dec 6, 2011 + */ + +package com.bigdata.rdf.sail; + +import info.aduna.iteration.CloseableIteration; + +import org.openrdf.model.Resource; +import org.openrdf.model.Statement; +import org.openrdf.model.URI; +import org.openrdf.model.Value; +import org.openrdf.sail.SailException; + +import com.bigdata.rdf.sail.BigdataSail.BigdataSailConnection; +import com.bigdata.rdf.store.TempTripleStore; + +/** + * Test suite for wrapping a {@link TempTripleStore} as a {@link BigdataSail}. + * + * @author <a href="mailto:tho...@us...">Bryan Thompson</a> + * @version $Id$ + */ +public class TestTicket422 extends ProxyBigdataSailTestCase { + + /** + * + */ + public TestTicket422() { + } + + /** + * @param name + */ + public TestTicket422(String name) { + super(name); + } + + public void test_wrapTempTripleStore() throws SailException { + + final BigdataSail sail = getSail(); + + try { + + final TempTripleStore tempStore = new TempTripleStore(sail + .getDatabase().getIndexManager().getTempStore(), sail + .getDatabase().getProperties(), sail.getDatabase()); + + try { + + final BigdataSail tempSail = new BigdataSail(tempStore, + sail.getDatabase()); + + try { + + tempSail.initialize(); + + final BigdataSailConnection con = tempSail.getConnection(); + + try { + + final CloseableIteration<? 
extends Statement, SailException> itr = con + .getStatements((Resource) null, (URI) null, + (Value) null, (Resource) null); + + try { + + while (itr.hasNext()) { + + itr.next(); + + } + } finally { + + itr.close(); + + } + + } finally { + + con.close(); + + } + + } finally { + + tempSail.shutDown(); + + } + + } finally { + + tempStore.close(); + + } + + } finally { + + sail.__tearDownUnitTest(); + + } + + } + +} Property changes on: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/test/com/bigdata/rdf/sail/TestTicket422.java ___________________________________________________________________ Added: svn:keywords + Id Date Revision Author HeadURL Modified: branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithQuads.java =================================================================== --- branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithQuads.java 2011-12-06 12:27:27 UTC (rev 5750) +++ branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithQuads.java 2011-12-06 14:18:32 UTC (rev 5751) @@ -114,7 +114,8 @@ // suite.addTestSuite(com.bigdata.rdf.sail.TestTicket352.class); suite.addTestSuite(com.bigdata.rdf.sail.TestTicket353.class); suite.addTestSuite(com.bigdata.rdf.sail.TestTicket355.class); -// suite.addTestSuite(com.bigdata.rdf.sail.TestTicket361.class); +// suite.addTestSuite(com.bigdata.rdf.sail.TestTicket361.class); + suite.addTestSuite(com.bigdata.rdf.sail.TestTicket422.class); suite.addTestSuite(com.bigdata.rdf.sail.DavidsTestBOps.class); Modified: branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithSids.java =================================================================== --- branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithSids.java 2011-12-06 12:27:27 UTC (rev 5750) +++ branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithSids.java 2011-12-06 14:18:32 UTC (rev 5751) @@ -94,6 +94,7 @@ suite.addTestSuite(com.bigdata.rdf.sail.TestTicket275.class); suite.addTestSuite(com.bigdata.rdf.sail.TestTicket276.class); + suite.addTestSuite(com.bigdata.rdf.sail.TestTicket422.class); suite.addTestSuite(com.bigdata.rdf.sail.TestLexJoinOps.class); Modified: branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithoutSids.java =================================================================== --- branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithoutSids.java 2011-12-06 12:27:27 UTC (rev 5750) +++ branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestBigdataSailWithoutSids.java 2011-12-06 14:18:32 UTC (rev 5751) @@ -88,6 +88,7 @@ suite.addTestSuite(com.bigdata.rdf.sail.TestTicket275.class); suite.addTestSuite(com.bigdata.rdf.sail.TestTicket276.class); + suite.addTestSuite(com.bigdata.rdf.sail.TestTicket422.class); suite.addTestSuite(com.bigdata.rdf.sail.TestLexJoinOps.class); Added: branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestTicket422.java =================================================================== --- branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestTicket422.java (rev 0) +++ branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestTicket422.java 2011-12-06 14:18:32 UTC (rev 5751) @@ -0,0 +1,128 @@ +/** + +Copyright (C) SYSTAP, LLC 2006-2011. 
All rights reserved. + +Contact: + SYSTAP, LLC + 4501 Tower Road + Greensboro, NC 27410 + lic...@bi... + +This program is free software; you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation; version 2 of the License. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +You should have received a copy of the GNU General Public License +along with this program; if not, write to the Free Software +Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +*/ +/* + * Created on Dec 6, 2011 + */ + +package com.bigdata.rdf.sail; + +import info.aduna.iteration.CloseableIteration; + +import org.openrdf.model.Resource; +import org.openrdf.model.Statement; +import org.openrdf.model.URI; +import org.openrdf.model.Value; +import org.openrdf.sail.SailException; + +import com.bigdata.rdf.sail.BigdataSail.BigdataSailConnection; +import com.bigdata.rdf.store.TempTripleStore; + +/** + * Test suite for wrapping a {@link TempTripleStore} as a {@link BigdataSail}. + * + * @author <a href="mailto:tho...@us...">Bryan Thompson</a> + * @version $Id$ + */ +public class TestTicket422 extends ProxyBigdataSailTestCase { + + /** + * + */ + public TestTicket422() { + } + + /** + * @param name + */ + public TestTicket422(String name) { + super(name); + } + + public void test_wrapTempTripleStore() throws SailException { + + final BigdataSail sail = getSail(); + + try { + + final TempTripleStore tempStore = new TempTripleStore(sail + .getDatabase().getIndexManager().getTempStore(), sail + .getDatabase().getProperties(), sail.getDatabase()); + + try { + + final BigdataSail tempSail = new BigdataSail(tempStore, + sail.getDatabase()); + + try { + + tempSail.initialize(); + + final BigdataSailConnection con = tempSail.getConnection(); + + try { + + final CloseableIteration<? extends Statement, SailException> itr = con + .getStatements((Resource) null, (URI) null, + (Value) null, (Resource) null); + + try { + + while (itr.hasNext()) { + + itr.next(); + + } + } finally { + + itr.close(); + + } + + } finally { + + con.close(); + + } + + } finally { + + tempSail.shutDown(); + + } + + } finally { + + tempStore.close(); + + } + + } finally { + + sail.__tearDownUnitTest(); + + } + + } + +} Property changes on: branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/test/com/bigdata/rdf/sail/TestTicket422.java ___________________________________________________________________ Added: svn:keywords + Id Date Revision Author HeadURL This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
From: <tho...@us...> - 2011-12-07 14:43:06
Revision: 5755 http://bigdata.svn.sourceforge.net/bigdata/?rev=5755&view=rev Author: thompsonbry Date: 2011-12-07 14:42:55 +0000 (Wed, 07 Dec 2011) Log Message: ----------- Bug fix for atomic pattern to obtain a read-only connection from the last commit time on the database at the SAIL interface. @see https://sourceforge.net/apps/trac/bigdata/ticket/427 Modified Paths: -------------- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java Added Paths: ----------- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_3.txt branches/TERMS_REFACTOR_BRANCH/bigdata/src/releases/RELEASE_1_0_3.txt Added: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_3.txt =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_3.txt (rev 0) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_3.txt 2011-12-07 14:42:55 UTC (rev 5755) @@ -0,0 +1,103 @@ +This is a minor version release of bigdata(R). Bigdata is a horizontally-scaled, open-source architecture for indexed data with an emphasis on RDF capable of loading 1B triples in under one hour on a 15 node cluster. Bigdata operates in both a single machine mode (Journal) and a cluster mode (Federation). The Journal provides fast scalable ACID indexed storage for very large data sets, up to 50 billion triples / quads. The federation provides fast scalable shard-wise parallel indexed storage using dynamic sharding and shard-wise ACID updates and incremental cluster size growth. Both platforms support fully concurrent readers with snapshot isolation. + +Distributed processing offers greater throughput but does not reduce query or update latency. Choose the Journal when the anticipated scale and throughput requirements permit. Choose the Federation when the administrative and machine overhead associated with operating a cluster is an acceptable tradeoff to have essentially unlimited data scaling and throughput. + +See [1,2,8] for instructions on installing bigdata(R), [4] for the javadoc, and [3,5,6] for news, questions, and the latest developments. For more information about SYSTAP, LLC and bigdata, see [7]. + +Starting with the 1.0.0 release, we offer a WAR artifact [8] for easy installation of the single machine RDF database. For custom development and cluster installations we recommend checking out the code from SVN using the tag for this release. The code will build automatically under eclipse. You can also build the code using the ant script. The cluster installer requires the use of the ant script. + +You can download the WAR from: + +https://sourceforge.net/projects/bigdata/ + +You can checkout this release from: + +https://bigdata.svn.sourceforge.net/svnroot/bigdata/tags/BIGDATA_RELEASE_1_0_3 + +Feature summary: + +- Single machine data storage to ~50B triples/quads (RWStore); +- Clustered data storage is essentially unlimited; +- Simple embedded and/or webapp deployment (NanoSparqlServer); +- Triples, quads, or triples with provenance (SIDs); +- 100% native SPARQL 1.0 evaluation with lots of query optimizations; +- Fast RDFS+ inference and truth maintenance; +- Fast statement level provenance mode (SIDs). 
+ +The road map [3] for the next releases includes: + +- High-volume analytic query and SPARQL 1.1 query, including aggregations; +- SPARQL 1.1 Update, Property Paths, and Federation support; +- Simplified deployment, configuration, and administration for clusters; and +- High availability for the journal and the cluster. + +Change log: + +1.0.3 + +- https://sourceforge.net/apps/trac/bigdata/ticket/269 (Refactor performance counters using accessor interface) +- https://sourceforge.net/apps/trac/bigdata/ticket/385 (FastRDFValueCoder2: ArrayIndexOutOfBoundsException) +- https://sourceforge.net/apps/trac/bigdata/ticket/391 (Release age advanced on WORM mode journal) +- https://sourceforge.net/apps/trac/bigdata/ticket/392 (Add a DELETE by access path method to the NanoSparqlServer) +- https://sourceforge.net/apps/trac/bigdata/ticket/393 (Add "context-uri" request parameter to specify the default context for INSERT in the REST API) +- https://sourceforge.net/apps/trac/bigdata/ticket/394 (log4j configuration error message in WAR deployment) +- https://sourceforge.net/apps/trac/bigdata/ticket/399 (Add a fast range count method to the REST API) +- https://sourceforge.net/apps/trac/bigdata/ticket/422 (Support temp triple store wrapped by a BigdataSail) +- https://sourceforge.net/apps/trac/bigdata/ticket/424 (NQuads support for NanoSparqlServer) +- https://sourceforge.net/apps/trac/bigdata/ticket/425 (Bug fix to DEFAULT_RDF_FORMAT for bulk data loader in scale-out) +- https://sourceforge.net/apps/trac/bigdata/ticket/426 (Support either lockfile (procmail) and dotlockfile (liblockfile1) in scale-out) +- https://sourceforge.net/apps/trac/bigdata/ticket/427 (BigdataSail#getReadOnlyConnection() race condition with concurrent commit) + +1.0.2 + + - https://sourceforge.net/apps/trac/bigdata/ticket/32 (Query time expansion of (foo rdf:type rdfs:Resource) drags in SPORelation for scale-out.) + - https://sourceforge.net/apps/trac/bigdata/ticket/181 (Scale-out LUBM "how to" in wiki and build.xml are out of date.) + - https://sourceforge.net/apps/trac/bigdata/ticket/356 (Query not terminated by error.) + - https://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.) + - https://sourceforge.net/apps/trac/bigdata/ticket/361 (IRunningQuery not closed promptly.) + - https://sourceforge.net/apps/trac/bigdata/ticket/371 (DataLoader fails to load resources available from the classpath.) + - https://sourceforge.net/apps/trac/bigdata/ticket/376 (Support for the streaming of bigdata IBindingSets into a sparql query.) + - https://sourceforge.net/apps/trac/bigdata/ticket/378 (ClosedByInterruptException during heavy query mix.) + - https://sourceforge.net/apps/trac/bigdata/ticket/379 (NotSerializableException for SPOAccessPath.) + - https://sourceforge.net/apps/trac/bigdata/ticket/382 (Change dependencies to Apache River 2.2.0) + +1.0.1 + + - https://sourceforge.net/apps/trac/bigdata/ticket/107 (Unicode clean schema names in the sparse row store). + - https://sourceforge.net/apps/trac/bigdata/ticket/124 (TermIdEncoder should use more bits for scale-out). + - https://sourceforge.net/apps/trac/bigdata/ticket/225 (OSX requires specialized performance counter collection classes). + - https://sourceforge.net/apps/trac/bigdata/ticket/348 (BigdataValueFactory.asValue() must return new instance when DummyIV is used). + - https://sourceforge.net/apps/trac/bigdata/ticket/349 (TermIdEncoder limits Journal to 2B distinct RDF Values per triple/quad store instance). 
+ - https://sourceforge.net/apps/trac/bigdata/ticket/351 (SPO not Serializable exception in SIDS mode (scale-out)). + - https://sourceforge.net/apps/trac/bigdata/ticket/352 (ClassCastException when querying with binding-values that are not known to the database). + - https://sourceforge.net/apps/trac/bigdata/ticket/353 (UnsupportedOperatorException for some SPARQL queries). + - https://sourceforge.net/apps/trac/bigdata/ticket/355 (Query failure when comparing with non materialized value). + - https://sourceforge.net/apps/trac/bigdata/ticket/357 (RWStore reports "FixedAllocator returning null address, with freeBits".) + - https://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.) + - https://sourceforge.net/apps/trac/bigdata/ticket/362 (log4j - slf4j bridge.) + + Note: Some of these bug fixes in the 1.0.1 release require data migration. + For details, see https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=DataMigration + + +For more information about bigdata, please see the following links: + +[1] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Main_Page +[2] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=GettingStarted +[3] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Roadmap +[4] http://www.bigdata.com/bigdata/docs/api/ +[5] http://sourceforge.net/projects/bigdata/ +[6] http://www.bigdata.com/blog +[7] http://www.systap.com/bigdata.htm +[8] https://sourceforge.net/projects/bigdata/files/bigdata/ + +About bigdata: + +Bigdata® is a horizontally-scaled, general purpose storage and computing fabric +for ordered data (B+Trees), designed to operate on either a single server or a +cluster of commodity hardware. Bigdata® uses dynamically partitioned key-range +shards in order to remove any realistic scaling limits - in principle, bigdata® +may be deployed on 10s, 100s, or even thousands of machines and new capacity may +be added incrementally without requiring the full reload of all data. The bigdata® +RDF database supports RDFS and OWL Lite reasoning, high-level query (SPARQL), +and datum level provenance. Property changes on: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_3.txt ___________________________________________________________________ Added: svn:keywords + Id Date Revision Author HeadURL Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java 2011-12-06 23:26:38 UTC (rev 5754) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java 2011-12-07 14:42:55 UTC (rev 5755) @@ -1352,15 +1352,19 @@ } /** - * Return a read-only connection based on a the last commit point. + * Return a read-only connection based on the last commit point. This method + * is atomic with respect to the commit protocol. * * @return The view. */ public BigdataSailConnection getReadOnlyConnection() { - final long timestamp = database.getIndexManager().getLastCommitTime(); + // Note: This is not atomic with respect to the commit protocol.
+// final long timestamp = database.getIndexManager().getLastCommitTime(); +// +// return getReadOnlyConnection(timestamp); - return getReadOnlyConnection(timestamp); + return getReadOnlyConnection(ITx.READ_COMMITTED); } Added: branches/TERMS_REFACTOR_BRANCH/bigdata/src/releases/RELEASE_1_0_3.txt =================================================================== --- branches/TERMS_REFACTOR_BRANCH/bigdata/src/releases/RELEASE_1_0_3.txt (rev 0) +++ branches/TERMS_REFACTOR_BRANCH/bigdata/src/releases/RELEASE_1_0_3.txt 2011-12-07 14:42:55 UTC (rev 5755) @@ -0,0 +1,103 @@ +This is a minor version release of bigdata(R). Bigdata is a horizontally-scaled, open-source architecture for indexed data with an emphasis on RDF capable of loading 1B triples in under one hour on a 15 node cluster. Bigdata operates in both a single machine mode (Journal) and a cluster mode (Federation). The Journal provides fast scalable ACID indexed storage for very large data sets, up to 50 billion triples / quads. The federation provides fast scalable shard-wise parallel indexed storage using dynamic sharding and shard-wise ACID updates and incremental cluster size growth. Both platforms support fully concurrent readers with snapshot isolation. + +Distributed processing offers greater throughput but does not reduce query or update latency. Choose the Journal when the anticipated scale and throughput requirements permit. Choose the Federation when the administrative and machine overhead associated with operating a cluster is an acceptable tradeoff to have essentially unlimited data scaling and throughput. + +See [1,2,8] for instructions on installing bigdata(R), [4] for the javadoc, and [3,5,6] for news, questions, and the latest developments. For more information about SYSTAP, LLC and bigdata, see [7]. + +Starting with the 1.0.0 release, we offer a WAR artifact [8] for easy installation of the single machine RDF database. For custom development and cluster installations we recommend checking out the code from SVN using the tag for this release. The code will build automatically under eclipse. You can also build the code using the ant script. The cluster installer requires the use of the ant script. + +You can download the WAR from: + +https://sourceforge.net/projects/bigdata/ + +You can checkout this release from: + +https://bigdata.svn.sourceforge.net/svnroot/bigdata/tags/BIGDATA_RELEASE_1_0_3 + +Feature summary: + +- Single machine data storage to ~50B triples/quads (RWStore); +- Clustered data storage is essentially unlimited; +- Simple embedded and/or webapp deployment (NanoSparqlServer); +- Triples, quads, or triples with provenance (SIDs); +- 100% native SPARQL 1.0 evaluation with lots of query optimizations; +- Fast RDFS+ inference and truth maintenance; +- Fast statement level provenance mode (SIDs). + +The road map [3] for the next releases includes: + +- High-volume analytic query and SPARQL 1.1 query, including aggregations; +- SPARQL 1.1 Update, Property Paths, and Federation support; +- Simplified deployment, configuration, and administration for clusters; and +- High availability for the journal and the cluster. 
+ +Change log: + +1.0.3 + +- https://sourceforge.net/apps/trac/bigdata/ticket/269 (Refactor performance counters using accessor interface) +- https://sourceforge.net/apps/trac/bigdata/ticket/385 (FastRDFValueCoder2: ArrayIndexOutOfBoundsException) +- https://sourceforge.net/apps/trac/bigdata/ticket/391 (Release age advanced on WORM mode journal) +- https://sourceforge.net/apps/trac/bigdata/ticket/392 (Add a DELETE by access path method to the NanoSparqlServer) +- https://sourceforge.net/apps/trac/bigdata/ticket/393 (Add "context-uri" request parameter to specify the default context for INSERT in the REST API) +- https://sourceforge.net/apps/trac/bigdata/ticket/394 (log4j configuration error message in WAR deployment) +- https://sourceforge.net/apps/trac/bigdata/ticket/399 (Add a fast range count method to the REST API) +- https://sourceforge.net/apps/trac/bigdata/ticket/422 (Support temp triple store wrapped by a BigdataSail) +- https://sourceforge.net/apps/trac/bigdata/ticket/424 (NQuads support for NanoSparqlServer) +- https://sourceforge.net/apps/trac/bigdata/ticket/425 (Bug fix to DEFAULT_RDF_FORMAT for bulk data loader in scale-out) +- https://sourceforge.net/apps/trac/bigdata/ticket/426 (Support either lockfile (procmail) and dotlockfile (liblockfile1) in scale-out) +- https://sourceforge.net/apps/trac/bigdata/ticket/427 (BigdataSail#getReadOnlyConnection() race condition with concurrent commit) + +1.0.2 + + - https://sourceforge.net/apps/trac/bigdata/ticket/32 (Query time expansion of (foo rdf:type rdfs:Resource) drags in SPORelation for scale-out.) + - https://sourceforge.net/apps/trac/bigdata/ticket/181 (Scale-out LUBM "how to" in wiki and build.xml are out of date.) + - https://sourceforge.net/apps/trac/bigdata/ticket/356 (Query not terminated by error.) + - https://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.) + - https://sourceforge.net/apps/trac/bigdata/ticket/361 (IRunningQuery not closed promptly.) + - https://sourceforge.net/apps/trac/bigdata/ticket/371 (DataLoader fails to load resources available from the classpath.) + - https://sourceforge.net/apps/trac/bigdata/ticket/376 (Support for the streaming of bigdata IBindingSets into a sparql query.) + - https://sourceforge.net/apps/trac/bigdata/ticket/378 (ClosedByInterruptException during heavy query mix.) + - https://sourceforge.net/apps/trac/bigdata/ticket/379 (NotSerializableException for SPOAccessPath.) + - https://sourceforge.net/apps/trac/bigdata/ticket/382 (Change dependencies to Apache River 2.2.0) + +1.0.1 + + - https://sourceforge.net/apps/trac/bigdata/ticket/107 (Unicode clean schema names in the sparse row store). + - https://sourceforge.net/apps/trac/bigdata/ticket/124 (TermIdEncoder should use more bits for scale-out). + - https://sourceforge.net/apps/trac/bigdata/ticket/225 (OSX requires specialized performance counter collection classes). + - https://sourceforge.net/apps/trac/bigdata/ticket/348 (BigdataValueFactory.asValue() must return new instance when DummyIV is used). + - https://sourceforge.net/apps/trac/bigdata/ticket/349 (TermIdEncoder limits Journal to 2B distinct RDF Values per triple/quad store instance). + - https://sourceforge.net/apps/trac/bigdata/ticket/351 (SPO not Serializable exception in SIDS mode (scale-out)). + - https://sourceforge.net/apps/trac/bigdata/ticket/352 (ClassCastException when querying with binding-values that are not known to the database). 
+ - https://sourceforge.net/apps/trac/bigdata/ticket/353 (UnsupportedOperatorException for some SPARQL queries). + - https://sourceforge.net/apps/trac/bigdata/ticket/355 (Query failure when comparing with non materialized value). + - https://sourceforge.net/apps/trac/bigdata/ticket/357 (RWStore reports "FixedAllocator returning null address, with freeBits".) + - https://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.) + - https://sourceforge.net/apps/trac/bigdata/ticket/362 (log4j - slf4j bridge.) + + Note: Some of these bug fixes in the 1.0.1 release require data migration. + For details, see https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=DataMigration + + +For more information about bigdata, please see the following links: + +[1] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Main_Page +[2] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=GettingStarted +[3] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Roadmap +[4] http://www.bigdata.com/bigdata/docs/api/ +[5] http://sourceforge.net/projects/bigdata/ +[6] http://www.bigdata.com/blog +[7] http://www.systap.com/bigdata.htm +[8] https://sourceforge.net/projects/bigdata/files/bigdata/ + +About bigdata: + +Bigdata® is a horizontally-scaled, general purpose storage and computing fabric +for ordered data (B+Trees), designed to operate on either a single server or a +cluster of commodity hardware. Bigdata® uses dynamically partitioned key-range +shards in order to remove any realistic scaling limits - in principle, bigdata® +may be deployed on 10s, 100s, or even thousands of machines and new capacity may +be added incrementally without requiring the full reload of all data. The bigdata® +RDF database supports RDFS and OWL Lite reasoning, high-level query (SPARQL), +and datum level provenance. Property changes on: branches/TERMS_REFACTOR_BRANCH/bigdata/src/releases/RELEASE_1_0_3.txt ___________________________________________________________________ Added: svn:keywords + Id Date Revision Author HeadURL Modified: branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java =================================================================== --- branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java 2011-12-06 23:26:38 UTC (rev 5754) +++ branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java 2011-12-07 14:42:55 UTC (rev 5755) @@ -1235,15 +1235,19 @@ } /** - * Return a read-only connection based on a the last commit point. + * Return a read-only connection based on the last commit point. This method + * is atomic with respect to the commit protocol. * * @return The view. */ public BigdataSailConnection getReadOnlyConnection() { - final long timestamp = database.getIndexManager().getLastCommitTime(); + // Note: This is not atomic with respect to the commit protocol. +// final long timestamp = database.getIndexManager().getLastCommitTime(); +// +// return getReadOnlyConnection(timestamp); - return getReadOnlyConnection(timestamp); + return getReadOnlyConnection(ITx.READ_COMMITTED); } This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
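The substance of the r5755 fix is easiest to see side by side. Sampling the last commit time and then opening the view are two separate operations, so a concurrent commit can land in between them; delegating to ITx.READ_COMMITTED resolves the commit point inside the connection factory, atomically with respect to the commit protocol. In sketch form, with the names as they appear in the diff above:

// Before (two steps; a concurrent commit may intervene between them):
// final long timestamp = database.getIndexManager().getLastCommitTime();
// final BigdataSailConnection con = getReadOnlyConnection(timestamp);

// After (a single step; the commit point is resolved atomically):
final BigdataSailConnection con = getReadOnlyConnection(ITx.READ_COMMITTED);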
From: <tho...@us...> - 2011-12-19 15:37:05
Revision: 5806 http://bigdata.svn.sourceforge.net/bigdata/?rev=5806&view=rev Author: thompsonbry Date: 2011-12-19 15:36:58 +0000 (Mon, 19 Dec 2011) Log Message: ----------- Added BigdataSailConnection and BigdataSailRepositoryConnection#commit2(). These methods report the actual commit time for the commit point and ZERO (0L) if the write set was empty. The reported commit time may be used with various mechanisms for accessing historical commit points. Modified Paths: -------------- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSailRepositoryConnection.java branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSailRepositoryConnection.java Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java 2011-12-19 15:29:25 UTC (rev 5805) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java 2011-12-19 15:36:58 UTC (rev 5806) @@ -2721,10 +2721,14 @@ /** * Commit the write set. * <p> - * Note: The semantics depend on the {@link Options#STORE_CLASS}. See + * Note: The semantics depend on the {@link Options#STORE_CLASS}. See * {@link ITripleStore#commit()}. + * + * @return The timestamp associated with the commit point. This will be + * <code>0L</code> if the write set was empty such that nothing + * was committed. */ - public synchronized void commit() throws SailException { + public synchronized long commit2() throws SailException { assertWritableConn(); @@ -2738,7 +2742,7 @@ flushStatementBuffers(true/* assertions */, true/* retractions */); - database.commit(); + final long commitTime = database.commit(); if (changeLog != null) { @@ -2746,7 +2750,21 @@ } + return commitTime; + } + + /** + * Commit the write set. + * <p> + * Note: The semantics depend on the {@link Options#STORE_CLASS}. See + * {@link ITripleStore#commit()}. + */ + final public synchronized void commit() throws SailException { + + commit2(); + + } // /** // * Commit the write set, providing detailed feedback on the change set @@ -3656,6 +3674,8 @@ } /** + * {@inheritDoc} + * <p> * A specialized commit that goes through the transaction service * available on the journal's transaction manager. Once the commit * happens, a new read/write transaction is automatically started @@ -3666,7 +3686,7 @@ * lexicon to never be committed. Probably not a significant issue. */ @Override - public synchronized void commit() throws SailException { + public synchronized long commit2() throws SailException { /* * don't double commit, but make a note that writes to the lexicon @@ -3688,9 +3708,11 @@ try { - txService.commit(tx); + final long commitTime = txService.commit(tx); newTx(); + + return commitTime; } catch(IOException ex) { @@ -3832,11 +3854,14 @@ /** * NOP + * + * @return <code>0L</code> since nothing was committed. */ @Override - public synchronized void commit() throws SailException { + public synchronized long commit2() throws SailException { // NOP. + return 0L; // Nothing committed. 
} Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSailRepositoryConnection.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSailRepositoryConnection.java 2011-12-19 15:29:25 UTC (rev 5805) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSailRepositoryConnection.java 2011-12-19 15:36:58 UTC (rev 5806) @@ -232,15 +232,18 @@ } /** - * {@inheritDoc} + * Commit, returning the timestamp associated with the new commit point. * <p> * Note: auto-commit is an EXTREMELY bad idea. Performance will be terrible. * The database will swell to an outrageous size. TURN OFF AUTO COMMIT. * + * @return The timestamp associated with the new commit point. This will be + * <code>0L</code> if the write set was empty such that nothing was + * committed. + * * @see BigdataSail.Options#ALLOW_AUTO_COMMIT */ - @Override - public void commit() throws RepositoryException { + public long commit2() throws RepositoryException { // auto-commit is heinously inefficient if (isAutoCommit() && @@ -250,10 +253,33 @@ } - super.commit(); + // Note: Just invokes sailConnection.commit() +// super.commit(); + try { +// sailConnection.commit(); + return getSailConnection().commit2(); + } + catch (SailException e) { + throw new RepositoryException(e); + } } + /** + * {@inheritDoc} + * <p> + * Note: auto-commit is an EXTREMELY bad idea. Performance will be terrible. + * The database will swell to an outrageous size. TURN OFF AUTO COMMIT. + * + * @see BigdataSail.Options#ALLOW_AUTO_COMMIT + */ + @Override + public void commit() throws RepositoryException { + + commit2(); + + } + // /** // * Commit any changes made in the connection, providing detailed feedback // * on the change set that occurred as a result of this commit. Modified: branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java =================================================================== --- branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java 2011-12-19 15:29:25 UTC (rev 5805) +++ branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSail.java 2011-12-19 15:36:58 UTC (rev 5806) @@ -2615,10 +2615,14 @@ /** * Commit the write set. * <p> - * Note: The semantics depend on the {@link Options#STORE_CLASS}. See + * Note: The semantics depend on the {@link Options#STORE_CLASS}. See * {@link ITripleStore#commit()}. + * + * @return The timestamp associated with the commit point. This will be + * <code>0L</code> if the write set was empty such that nothing + * was committed. */ - public synchronized void commit() throws SailException { + public synchronized long commit2() throws SailException { assertWritableConn(); @@ -2632,7 +2636,7 @@ flushStatementBuffers(true/* assertions */, true/* retractions */); - database.commit(); + final long commitTime = database.commit(); if (changeLog != null) { @@ -2640,8 +2644,22 @@ } + return commitTime; + } - + + /** + * Commit the write set. + * <p> + * Note: The semantics depend on the {@link Options#STORE_CLASS}. See + * {@link ITripleStore#commit()}. + */ + final public synchronized void commit() throws SailException { + + commit2(); + + } + // /** // * Commit the write set, providing detailed feedback on the change set // * that occurred as a result of this commit. 
@@ -3311,6 +3329,8 @@ } /** + * {@inheritDoc} + * <p> * A specialized commit that goes through the transaction service * available on the journal's transaction manager. Once the commit * happens, a new read/write transaction is automatically started @@ -3321,7 +3341,7 @@ * lexicon to never be committed. Probably not a significant issue. */ @Override - public synchronized void commit() throws SailException { + public synchronized long commit2() throws SailException { /* * don't double commit, but make a note that writes to the lexicon @@ -3343,9 +3363,11 @@ try { - txService.commit(tx); + final long commitTime = txService.commit(tx); newTx(); + + return commitTime; } catch(IOException ex) { @@ -3559,11 +3581,14 @@ /** * NOP + * + * @return <code>0L</code> since nothing was committed. */ @Override - public synchronized void commit() throws SailException { + public synchronized long commit2() throws SailException { // NOP. + return 0L; // Nothing committed. } Modified: branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSailRepositoryConnection.java =================================================================== --- branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSailRepositoryConnection.java 2011-12-19 15:29:25 UTC (rev 5805) +++ branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSailRepositoryConnection.java 2011-12-19 15:36:58 UTC (rev 5806) @@ -160,15 +160,18 @@ } /** - * {@inheritDoc} + * Commit, returning the timestamp associated with the new commit point. * <p> * Note: auto-commit is an EXTREMELY bad idea. Performance will be terrible. * The database will swell to an outrageous size. TURN OFF AUTO COMMIT. * + * @return The timestamp associated with the new commit point. This will be + * <code>0L</code> if the write set was empty such that nothing was + * committed. + * * @see BigdataSail.Options#ALLOW_AUTO_COMMIT */ - @Override - public void commit() throws RepositoryException { + public long commit2() throws RepositoryException { // auto-commit is heinously inefficient if (isAutoCommit() && @@ -178,28 +181,31 @@ } - super.commit(); + // Note: Just invokes sailConnection.commit() +// super.commit(); + try { +// sailConnection.commit(); + return getSailConnection().commit2(); + } + catch (SailException e) { + throw new RepositoryException(e); + } } - + /** - * Version returning the commit time that can be used to open a readOnly - * transaction. + * {@inheritDoc} + * <p> + * Note: auto-commit is an EXTREMELY bad idea. Performance will be terrible. + * The database will swell to an outrageous size. TURN OFF AUTO COMMIT. * - * @return the state associated with the commit - * @throws RepositoryException + * @see BigdataSail.Options#ALLOW_AUTO_COMMIT */ - public long commit2() throws RepositoryException { - - // auto-commit is heinously inefficient - if (isAutoCommit() && - !((BigdataSailConnection) getSailConnection()).getAllowAutoCommit()) { - - throw new RepositoryException("please set autoCommit to false"); + @Override + public void commit() throws RepositoryException { - } - - return ((BigdataSailConnection) getSailConnection()).getTripleStore().commit(); + commit2(); + } /** This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
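A usage sketch for the new commit2() method. Here repo is an assumed BigdataSailRepository, the cast and the example URIs are illustrative only, and ValueFactory is org.openrdf.model.ValueFactory; the zero return for an empty write set is as defined in the diff above, and the returned timestamp can be used with mechanisms for accessing historical commit points, e.g. BigdataSail#getReadOnlyConnection(long) from r5755.

final BigdataSailRepositoryConnection cxn =
        (BigdataSailRepositoryConnection) repo.getConnection();
try {
    cxn.setAutoCommit(false); // auto-commit is strongly discouraged.

    final ValueFactory vf = cxn.getValueFactory();
    cxn.add(vf.createURI("http://example.org/s"),
            vf.createURI("http://example.org/p"),
            vf.createLiteral("o"));

    final long commitTime = cxn.commit2();
    if (commitTime != 0L) {
        // A non-zero value identifies the new commit point; it may be
        // used to open a read-only view of exactly this state.
    }
} finally {
    cxn.close();
}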
From: <tho...@us...> - 2011-12-19 15:37:34
Revision: 5807 http://bigdata.svn.sourceforge.net/bigdata/?rev=5807&view=rev Author: thompsonbry Date: 2011-12-19 15:37:22 +0000 (Mon, 19 Dec 2011) Log Message: ----------- Updated release notes for 1.0.3 and 1.1.0. Modified Paths: -------------- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_3.txt branches/TERMS_REFACTOR_BRANCH/bigdata/src/releases/RELEASE_1_0_3.txt branches/TERMS_REFACTOR_BRANCH/bigdata/src/releases/RELEASE_1_1_0.txt Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_3.txt =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_3.txt 2011-12-19 15:36:58 UTC (rev 5806) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_3.txt 2011-12-19 15:37:22 UTC (rev 5807) @@ -8,7 +8,7 @@ You can download the WAR from: -https://sourceforge.net/projects/bigdata/ +http://sourceforge.net/projects/bigdata/ You can checkout this release from: @@ -24,7 +24,7 @@ - Fast RDFS+ inference and truth maintenance; - Fast statement level provenance mode (SIDs). -The road map [3] for the next releases includes: +Road map [3]: - High-volume analytic query and SPARQL 1.1 query, including aggregations; - SPARQL 1.1 Update, Property Paths, and Federation support; @@ -33,63 +33,65 @@ Change log: + Note: Versions with (*) require data migration. For details, see [9]. + 1.0.3 -- https://sourceforge.net/apps/trac/bigdata/ticket/269 (Refactor performance counters using accessor interface) -- https://sourceforge.net/apps/trac/bigdata/ticket/385 (FastRDFValueCoder2: ArrayIndexOutOfBoundsException) -- https://sourceforge.net/apps/trac/bigdata/ticket/391 (Release age advanced on WORM mode journal) -- https://sourceforge.net/apps/trac/bigdata/ticket/392 (Add a DELETE by access path method to the NanoSparqlServer) -- https://sourceforge.net/apps/trac/bigdata/ticket/393 (Add "context-uri" request parameter to specify the default context for INSERT in the REST API) -- https://sourceforge.net/apps/trac/bigdata/ticket/394 (log4j configuration error message in WAR deployment) -- https://sourceforge.net/apps/trac/bigdata/ticket/399 (Add a fast range count method to the REST API) -- https://sourceforge.net/apps/trac/bigdata/ticket/422 (Support temp triple store wrapped by a BigdataSail) -- https://sourceforge.net/apps/trac/bigdata/ticket/424 (NQuads support for NanoSparqlServer) -- https://sourceforge.net/apps/trac/bigdata/ticket/425 (Bug fix to DEFAULT_RDF_FORMAT for bulk data loader in scale-out) -- https://sourceforge.net/apps/trac/bigdata/ticket/426 (Support either lockfile (procmail) and dotlockfile (liblockfile1) in scale-out) -- https://sourceforge.net/apps/trac/bigdata/ticket/427 (BigdataSail#getReadOnlyConnection() race condition with concurrent commit) + - http://sourceforge.net/apps/trac/bigdata/ticket/269 (Refactor performance counters using accessor interface) + - http://sourceforge.net/apps/trac/bigdata/ticket/329 (B+Tree should delete bloom filter when it is disabled.) 
+ - http://sourceforge.net/apps/trac/bigdata/ticket/372 (RWStore does not prune the CommitRecordIndex) + - http://sourceforge.net/apps/trac/bigdata/ticket/375 (Persistent memory leaks (RWStore/DISK)) + - http://sourceforge.net/apps/trac/bigdata/ticket/385 (FastRDFValueCoder2: ArrayIndexOutOfBoundsException) + - http://sourceforge.net/apps/trac/bigdata/ticket/391 (Release age advanced on WORM mode journal) + - http://sourceforge.net/apps/trac/bigdata/ticket/392 (Add a DELETE by access path method to the NanoSparqlServer) + - http://sourceforge.net/apps/trac/bigdata/ticket/393 (Add "context-uri" request parameter to specify the default context for INSERT in the REST API) + - http://sourceforge.net/apps/trac/bigdata/ticket/394 (log4j configuration error message in WAR deployment) + - http://sourceforge.net/apps/trac/bigdata/ticket/399 (Add a fast range count method to the REST API) + - http://sourceforge.net/apps/trac/bigdata/ticket/422 (Support temp triple store wrapped by a BigdataSail) + - http://sourceforge.net/apps/trac/bigdata/ticket/424 (NQuads support for NanoSparqlServer) + - http://sourceforge.net/apps/trac/bigdata/ticket/425 (Bug fix to DEFAULT_RDF_FORMAT for bulk data loader in scale-out) + - http://sourceforge.net/apps/trac/bigdata/ticket/426 (Support either lockfile (procmail) and dotlockfile (liblockfile1) in scale-out) + - http://sourceforge.net/apps/trac/bigdata/ticket/427 (BigdataSail#getReadOnlyConnection() race condition with concurrent commit) 1.0.2 - - https://sourceforge.net/apps/trac/bigdata/ticket/32 (Query time expansion of (foo rdf:type rdfs:Resource) drags in SPORelation for scale-out.) - - https://sourceforge.net/apps/trac/bigdata/ticket/181 (Scale-out LUBM "how to" in wiki and build.xml are out of date.) - - https://sourceforge.net/apps/trac/bigdata/ticket/356 (Query not terminated by error.) - - https://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.) - - https://sourceforge.net/apps/trac/bigdata/ticket/361 (IRunningQuery not closed promptly.) - - https://sourceforge.net/apps/trac/bigdata/ticket/371 (DataLoader fails to load resources available from the classpath.) - - https://sourceforge.net/apps/trac/bigdata/ticket/376 (Support for the streaming of bigdata IBindingSets into a sparql query.) - - https://sourceforge.net/apps/trac/bigdata/ticket/378 (ClosedByInterruptException during heavy query mix.) - - https://sourceforge.net/apps/trac/bigdata/ticket/379 (NotSerializableException for SPOAccessPath.) - - https://sourceforge.net/apps/trac/bigdata/ticket/382 (Change dependencies to Apache River 2.2.0) + - http://sourceforge.net/apps/trac/bigdata/ticket/32 (Query time expansion of (foo rdf:type rdfs:Resource) drags in SPORelation for scale-out.) + - http://sourceforge.net/apps/trac/bigdata/ticket/181 (Scale-out LUBM "how to" in wiki and build.xml are out of date.) + - http://sourceforge.net/apps/trac/bigdata/ticket/356 (Query not terminated by error.) + - http://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.) + - http://sourceforge.net/apps/trac/bigdata/ticket/361 (IRunningQuery not closed promptly.) + - http://sourceforge.net/apps/trac/bigdata/ticket/371 (DataLoader fails to load resources available from the classpath.) + - http://sourceforge.net/apps/trac/bigdata/ticket/376 (Support for the streaming of bigdata IBindingSets into a sparql query.) 
+ - http://sourceforge.net/apps/trac/bigdata/ticket/378 (ClosedByInterruptException during heavy query mix.) + - http://sourceforge.net/apps/trac/bigdata/ticket/379 (NotSerializableException for SPOAccessPath.) + - http://sourceforge.net/apps/trac/bigdata/ticket/382 (Change dependencies to Apache River 2.2.0) -1.0.1 +1.0.1 (*) - - https://sourceforge.net/apps/trac/bigdata/ticket/107 (Unicode clean schema names in the sparse row store). - - https://sourceforge.net/apps/trac/bigdata/ticket/124 (TermIdEncoder should use more bits for scale-out). - - https://sourceforge.net/apps/trac/bigdata/ticket/225 (OSX requires specialized performance counter collection classes). - - https://sourceforge.net/apps/trac/bigdata/ticket/348 (BigdataValueFactory.asValue() must return new instance when DummyIV is used). - - https://sourceforge.net/apps/trac/bigdata/ticket/349 (TermIdEncoder limits Journal to 2B distinct RDF Values per triple/quad store instance). - - https://sourceforge.net/apps/trac/bigdata/ticket/351 (SPO not Serializable exception in SIDS mode (scale-out)). - - https://sourceforge.net/apps/trac/bigdata/ticket/352 (ClassCastException when querying with binding-values that are not known to the database). - - https://sourceforge.net/apps/trac/bigdata/ticket/353 (UnsupportedOperatorException for some SPARQL queries). - - https://sourceforge.net/apps/trac/bigdata/ticket/355 (Query failure when comparing with non materialized value). - - https://sourceforge.net/apps/trac/bigdata/ticket/357 (RWStore reports "FixedAllocator returning null address, with freeBits".) - - https://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.) - - https://sourceforge.net/apps/trac/bigdata/ticket/362 (log4j - slf4j bridge.) + - http://sourceforge.net/apps/trac/bigdata/ticket/107 (Unicode clean schema names in the sparse row store). + - http://sourceforge.net/apps/trac/bigdata/ticket/124 (TermIdEncoder should use more bits for scale-out). + - http://sourceforge.net/apps/trac/bigdata/ticket/225 (OSX requires specialized performance counter collection classes). + - http://sourceforge.net/apps/trac/bigdata/ticket/348 (BigdataValueFactory.asValue() must return new instance when DummyIV is used). + - http://sourceforge.net/apps/trac/bigdata/ticket/349 (TermIdEncoder limits Journal to 2B distinct RDF Values per triple/quad store instance). + - http://sourceforge.net/apps/trac/bigdata/ticket/351 (SPO not Serializable exception in SIDS mode (scale-out)). + - http://sourceforge.net/apps/trac/bigdata/ticket/352 (ClassCastException when querying with binding-values that are not known to the database). + - http://sourceforge.net/apps/trac/bigdata/ticket/353 (UnsupportedOperatorException for some SPARQL queries). + - http://sourceforge.net/apps/trac/bigdata/ticket/355 (Query failure when comparing with non materialized value). + - http://sourceforge.net/apps/trac/bigdata/ticket/357 (RWStore reports "FixedAllocator returning null address, with freeBits".) + - http://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.) + - http://sourceforge.net/apps/trac/bigdata/ticket/362 (log4j - slf4j bridge.) - Note: Some of these bug fixes in the 1.0.1 release require data migration. 
- For details, see https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=DataMigration - - For more information about bigdata, please see the following links: -[1] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Main_Page -[2] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=GettingStarted -[3] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Roadmap +[1] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Main_Page +[2] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=GettingStarted +[3] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Roadmap [4] http://www.bigdata.com/bigdata/docs/api/ [5] http://sourceforge.net/projects/bigdata/ [6] http://www.bigdata.com/blog [7] http://www.systap.com/bigdata.htm -[8] https://sourceforge.net/projects/bigdata/files/bigdata/ +[8] http://sourceforge.net/projects/bigdata/files/bigdata/ +[9] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=DataMigration About bigdata: Modified: branches/TERMS_REFACTOR_BRANCH/bigdata/src/releases/RELEASE_1_0_3.txt =================================================================== --- branches/TERMS_REFACTOR_BRANCH/bigdata/src/releases/RELEASE_1_0_3.txt 2011-12-19 15:36:58 UTC (rev 5806) +++ branches/TERMS_REFACTOR_BRANCH/bigdata/src/releases/RELEASE_1_0_3.txt 2011-12-19 15:37:22 UTC (rev 5807) @@ -8,7 +8,7 @@ You can download the WAR from: -https://sourceforge.net/projects/bigdata/ +http://sourceforge.net/projects/bigdata/ You can checkout this release from: @@ -24,7 +24,7 @@ - Fast RDFS+ inference and truth maintenance; - Fast statement level provenance mode (SIDs). -The road map [3] for the next releases includes: +Road map [3]: - High-volume analytic query and SPARQL 1.1 query, including aggregations; - SPARQL 1.1 Update, Property Paths, and Federation support; @@ -33,63 +33,65 @@ Change log: + Note: Versions with (*) require data migration. For details, see [9]. 
+ 1.0.3 -- https://sourceforge.net/apps/trac/bigdata/ticket/269 (Refactor performance counters using accessor interface) -- https://sourceforge.net/apps/trac/bigdata/ticket/385 (FastRDFValueCoder2: ArrayIndexOutOfBoundsException) -- https://sourceforge.net/apps/trac/bigdata/ticket/391 (Release age advanced on WORM mode journal) -- https://sourceforge.net/apps/trac/bigdata/ticket/392 (Add a DELETE by access path method to the NanoSparqlServer) -- https://sourceforge.net/apps/trac/bigdata/ticket/393 (Add "context-uri" request parameter to specify the default context for INSERT in the REST API) -- https://sourceforge.net/apps/trac/bigdata/ticket/394 (log4j configuration error message in WAR deployment) -- https://sourceforge.net/apps/trac/bigdata/ticket/399 (Add a fast range count method to the REST API) -- https://sourceforge.net/apps/trac/bigdata/ticket/422 (Support temp triple store wrapped by a BigdataSail) -- https://sourceforge.net/apps/trac/bigdata/ticket/424 (NQuads support for NanoSparqlServer) -- https://sourceforge.net/apps/trac/bigdata/ticket/425 (Bug fix to DEFAULT_RDF_FORMAT for bulk data loader in scale-out) -- https://sourceforge.net/apps/trac/bigdata/ticket/426 (Support either lockfile (procmail) and dotlockfile (liblockfile1) in scale-out) -- https://sourceforge.net/apps/trac/bigdata/ticket/427 (BigdataSail#getReadOnlyConnection() race condition with concurrent commit) + - http://sourceforge.net/apps/trac/bigdata/ticket/269 (Refactor performance counters using accessor interface) + - http://sourceforge.net/apps/trac/bigdata/ticket/329 (B+Tree should delete bloom filter when it is disabled.) + - http://sourceforge.net/apps/trac/bigdata/ticket/372 (RWStore does not prune the CommitRecordIndex) + - http://sourceforge.net/apps/trac/bigdata/ticket/375 (Persistent memory leaks (RWStore/DISK)) + - http://sourceforge.net/apps/trac/bigdata/ticket/385 (FastRDFValueCoder2: ArrayIndexOutOfBoundsException) + - http://sourceforge.net/apps/trac/bigdata/ticket/391 (Release age advanced on WORM mode journal) + - http://sourceforge.net/apps/trac/bigdata/ticket/392 (Add a DELETE by access path method to the NanoSparqlServer) + - http://sourceforge.net/apps/trac/bigdata/ticket/393 (Add "context-uri" request parameter to specify the default context for INSERT in the REST API) + - http://sourceforge.net/apps/trac/bigdata/ticket/394 (log4j configuration error message in WAR deployment) + - http://sourceforge.net/apps/trac/bigdata/ticket/399 (Add a fast range count method to the REST API) + - http://sourceforge.net/apps/trac/bigdata/ticket/422 (Support temp triple store wrapped by a BigdataSail) + - http://sourceforge.net/apps/trac/bigdata/ticket/424 (NQuads support for NanoSparqlServer) + - http://sourceforge.net/apps/trac/bigdata/ticket/425 (Bug fix to DEFAULT_RDF_FORMAT for bulk data loader in scale-out) + - http://sourceforge.net/apps/trac/bigdata/ticket/426 (Support either lockfile (procmail) and dotlockfile (liblockfile1) in scale-out) + - http://sourceforge.net/apps/trac/bigdata/ticket/427 (BigdataSail#getReadOnlyConnection() race condition with concurrent commit) 1.0.2 - - https://sourceforge.net/apps/trac/bigdata/ticket/32 (Query time expansion of (foo rdf:type rdfs:Resource) drags in SPORelation for scale-out.) - - https://sourceforge.net/apps/trac/bigdata/ticket/181 (Scale-out LUBM "how to" in wiki and build.xml are out of date.) - - https://sourceforge.net/apps/trac/bigdata/ticket/356 (Query not terminated by error.) 
- - https://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.) - - https://sourceforge.net/apps/trac/bigdata/ticket/361 (IRunningQuery not closed promptly.) - - https://sourceforge.net/apps/trac/bigdata/ticket/371 (DataLoader fails to load resources available from the classpath.) - - https://sourceforge.net/apps/trac/bigdata/ticket/376 (Support for the streaming of bigdata IBindingSets into a sparql query.) - - https://sourceforge.net/apps/trac/bigdata/ticket/378 (ClosedByInterruptException during heavy query mix.) - - https://sourceforge.net/apps/trac/bigdata/ticket/379 (NotSerializableException for SPOAccessPath.) - - https://sourceforge.net/apps/trac/bigdata/ticket/382 (Change dependencies to Apache River 2.2.0) + - http://sourceforge.net/apps/trac/bigdata/ticket/32 (Query time expansion of (foo rdf:type rdfs:Resource) drags in SPORelation for scale-out.) + - http://sourceforge.net/apps/trac/bigdata/ticket/181 (Scale-out LUBM "how to" in wiki and build.xml are out of date.) + - http://sourceforge.net/apps/trac/bigdata/ticket/356 (Query not terminated by error.) + - http://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.) + - http://sourceforge.net/apps/trac/bigdata/ticket/361 (IRunningQuery not closed promptly.) + - http://sourceforge.net/apps/trac/bigdata/ticket/371 (DataLoader fails to load resources available from the classpath.) + - http://sourceforge.net/apps/trac/bigdata/ticket/376 (Support for the streaming of bigdata IBindingSets into a sparql query.) + - http://sourceforge.net/apps/trac/bigdata/ticket/378 (ClosedByInterruptException during heavy query mix.) + - http://sourceforge.net/apps/trac/bigdata/ticket/379 (NotSerializableException for SPOAccessPath.) + - http://sourceforge.net/apps/trac/bigdata/ticket/382 (Change dependencies to Apache River 2.2.0) -1.0.1 +1.0.1 (*) - - https://sourceforge.net/apps/trac/bigdata/ticket/107 (Unicode clean schema names in the sparse row store). - - https://sourceforge.net/apps/trac/bigdata/ticket/124 (TermIdEncoder should use more bits for scale-out). - - https://sourceforge.net/apps/trac/bigdata/ticket/225 (OSX requires specialized performance counter collection classes). - - https://sourceforge.net/apps/trac/bigdata/ticket/348 (BigdataValueFactory.asValue() must return new instance when DummyIV is used). - - https://sourceforge.net/apps/trac/bigdata/ticket/349 (TermIdEncoder limits Journal to 2B distinct RDF Values per triple/quad store instance). - - https://sourceforge.net/apps/trac/bigdata/ticket/351 (SPO not Serializable exception in SIDS mode (scale-out)). - - https://sourceforge.net/apps/trac/bigdata/ticket/352 (ClassCastException when querying with binding-values that are not known to the database). - - https://sourceforge.net/apps/trac/bigdata/ticket/353 (UnsupportedOperatorException for some SPARQL queries). - - https://sourceforge.net/apps/trac/bigdata/ticket/355 (Query failure when comparing with non materialized value). - - https://sourceforge.net/apps/trac/bigdata/ticket/357 (RWStore reports "FixedAllocator returning null address, with freeBits".) - - https://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.) - - https://sourceforge.net/apps/trac/bigdata/ticket/362 (log4j - slf4j bridge.) + - http://sourceforge.net/apps/trac/bigdata/ticket/107 (Unicode clean schema names in the sparse row store). 
+ - http://sourceforge.net/apps/trac/bigdata/ticket/124 (TermIdEncoder should use more bits for scale-out). + - http://sourceforge.net/apps/trac/bigdata/ticket/225 (OSX requires specialized performance counter collection classes). + - http://sourceforge.net/apps/trac/bigdata/ticket/348 (BigdataValueFactory.asValue() must return new instance when DummyIV is used). + - http://sourceforge.net/apps/trac/bigdata/ticket/349 (TermIdEncoder limits Journal to 2B distinct RDF Values per triple/quad store instance). + - http://sourceforge.net/apps/trac/bigdata/ticket/351 (SPO not Serializable exception in SIDS mode (scale-out)). + - http://sourceforge.net/apps/trac/bigdata/ticket/352 (ClassCastException when querying with binding-values that are not known to the database). + - http://sourceforge.net/apps/trac/bigdata/ticket/353 (UnsupportedOperatorException for some SPARQL queries). + - http://sourceforge.net/apps/trac/bigdata/ticket/355 (Query failure when comparing with non materialized value). + - http://sourceforge.net/apps/trac/bigdata/ticket/357 (RWStore reports "FixedAllocator returning null address, with freeBits".) + - http://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.) + - http://sourceforge.net/apps/trac/bigdata/ticket/362 (log4j - slf4j bridge.) - Note: Some of these bug fixes in the 1.0.1 release require data migration. - For details, see https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=DataMigration - - For more information about bigdata, please see the following links: -[1] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Main_Page -[2] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=GettingStarted -[3] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Roadmap +[1] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Main_Page +[2] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=GettingStarted +[3] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Roadmap [4] http://www.bigdata.com/bigdata/docs/api/ [5] http://sourceforge.net/projects/bigdata/ [6] http://www.bigdata.com/blog [7] http://www.systap.com/bigdata.htm -[8] https://sourceforge.net/projects/bigdata/files/bigdata/ +[8] http://sourceforge.net/projects/bigdata/files/bigdata/ +[9] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=DataMigration About bigdata: Modified: branches/TERMS_REFACTOR_BRANCH/bigdata/src/releases/RELEASE_1_1_0.txt =================================================================== --- branches/TERMS_REFACTOR_BRANCH/bigdata/src/releases/RELEASE_1_1_0.txt 2011-12-19 15:36:58 UTC (rev 5806) +++ branches/TERMS_REFACTOR_BRANCH/bigdata/src/releases/RELEASE_1_1_0.txt 2011-12-19 15:37:22 UTC (rev 5807) @@ -1,5 +1,3 @@ -***DRAFT*** - This is a major version release of bigdata(R). Bigdata is a horizontally-scaled, open-source architecture for indexed data with an emphasis on RDF capable of loading 1B triples in under one hour on a 15 node cluster. Bigdata operates in both a single machine mode (Journal) and a cluster mode (Federation). The Journal provides fast scalable ACID indexed storage for very large data sets, up to 50 billion triples / quads. The federation provides fast scalable shard-wise parallel indexed storage using dynamic sharding and shard-wise ACID updates and incremental cluster size growth. Both platforms support fully concurrent readers with snapshot isolation. 
Distributed processing offers greater throughput but does not reduce query or update latency. Choose the Journal when the anticipated scale and throughput requirements permit. Choose the Federation when the administrative and machine overhead associated with operating a cluster is an acceptable tradeoff to have essentially unlimited data scaling and throughput. @@ -10,11 +8,11 @@ You can download the WAR from: -https://sourceforge.net/projects/bigdata/ +http://sourceforge.net/projects/bigdata/ You can checkout this release from: -https://bigdata.svn.sourceforge.net/svnroot/bigdata/tags/BIGDATA_RELEASE_1_0_2 +https://bigdata.svn.sourceforge.net/svnroot/bigdata/tags/BIGDATA_RELEASE_1_1_0 New features: @@ -28,26 +26,106 @@ - Clustered data storage is essentially unlimited; - Simple embedded and/or webapp deployment (NanoSparqlServer); - Triples, quads, or triples with provenance (SIDs); -- 100% native SPARQL 1.0 evaluation with lots of query optimizations; +- Fast 100% native SPARQL 1.0 evaluation; +- Integrated "analytic" query package; - Fast RDFS+ inference and truth maintenance; - Fast statement level provenance mode (SIDs). -The road map [3] for the next releases includes: +Road map [3]: -- High-volume analytic query and SPARQL 1.1 query, including aggregations; - Simplified deployment, configuration, and administration for clusters; and - High availability for the journal and the cluster. -For more information, please see the following links: +Change log: -[1] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Main_Page -[2] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=GettingStarted -[3] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Roadmap + Note: Versions with (*) require data migration. For details, see [9]. + +1.1.0 (*) + + - http://sourceforge.net/apps/trac/bigdata/ticket/23 (Lexicon joins) + - http://sourceforge.net/apps/trac/bigdata/ticket/109 (Store large literals as "blobs") + - http://sourceforge.net/apps/trac/bigdata/ticket/181 (Scale-out LUBM "how to" in wiki and build.xml are out of date.) + - http://sourceforge.net/apps/trac/bigdata/ticket/203 (Implement an persistence capable hash table to support analytic query) + - http://sourceforge.net/apps/trac/bigdata/ticket/209 (AccessPath should visit binding sets rather than elements for high level query.) + - http://sourceforge.net/apps/trac/bigdata/ticket/227 (SliceOp appears to be necessary when operator plan should suffice without) + - http://sourceforge.net/apps/trac/bigdata/ticket/232 (Bottom-up evaluation semantics). + - http://sourceforge.net/apps/trac/bigdata/ticket/246 (Derived xsd numeric data types must be inlined as extension types.) + - http://sourceforge.net/apps/trac/bigdata/ticket/254 (Revisit pruning of intermediate variable bindings during query execution) + - http://sourceforge.net/apps/trac/bigdata/ticket/261 (Lift conditions out of subqueries.) + - http://sourceforge.net/apps/trac/bigdata/ticket/300 (Native ORDER BY) + - http://sourceforge.net/apps/trac/bigdata/ticket/324 (Inline predeclared URIs and namespaces in 2-3 bytes) + - http://sourceforge.net/apps/trac/bigdata/ticket/330 (NanoSparqlServer does not locate "html" resources when run from jar) + - http://sourceforge.net/apps/trac/bigdata/ticket/334 (Support inlining of unicode data in the statement indices.) 
+ - http://sourceforge.net/apps/trac/bigdata/ticket/364 (Scalable default graph evaluation) + - http://sourceforge.net/apps/trac/bigdata/ticket/368 (Prune variable bindings during query evaluation) + - http://sourceforge.net/apps/trac/bigdata/ticket/370 (Direct translation of openrdf AST to bigdata AST) + - http://sourceforge.net/apps/trac/bigdata/ticket/373 (Fix StrBOp and other IValueExpressions) + - http://sourceforge.net/apps/trac/bigdata/ticket/377 (Optimize OPTIONALs with multiple statement patterns.) + - http://sourceforge.net/apps/trac/bigdata/ticket/380 (Native SPARQL evaluation on cluster) + - http://sourceforge.net/apps/trac/bigdata/ticket/387 (Cluster does not compute closure) + - http://sourceforge.net/apps/trac/bigdata/ticket/395 (HTree hash join performance) + - http://sourceforge.net/apps/trac/bigdata/ticket/401 (inline xsd:unsigned datatypes) + - http://sourceforge.net/apps/trac/bigdata/ticket/408 (xsd:string cast fails for non-numeric data) + - http://sourceforge.net/apps/trac/bigdata/ticket/421 (New query hints model.) + - http://sourceforge.net/apps/trac/bigdata/ticket/431 (Use of read-only tx per query defeats cache on cluster) + +1.0.3 + + - http://sourceforge.net/apps/trac/bigdata/ticket/269 (Refactor performance counters using accessor interface) + - http://sourceforge.net/apps/trac/bigdata/ticket/329 (B+Tree should delete bloom filter when it is disabled.) + - http://sourceforge.net/apps/trac/bigdata/ticket/372 (RWStore does not prune the CommitRecordIndex) + - http://sourceforge.net/apps/trac/bigdata/ticket/375 (Persistent memory leaks (RWStore/DISK)) + - http://sourceforge.net/apps/trac/bigdata/ticket/385 (FastRDFValueCoder2: ArrayIndexOutOfBoundsException) + - http://sourceforge.net/apps/trac/bigdata/ticket/391 (Release age advanced on WORM mode journal) + - http://sourceforge.net/apps/trac/bigdata/ticket/392 (Add a DELETE by access path method to the NanoSparqlServer) + - http://sourceforge.net/apps/trac/bigdata/ticket/393 (Add "context-uri" request parameter to specify the default context for INSERT in the REST API) + - http://sourceforge.net/apps/trac/bigdata/ticket/394 (log4j configuration error message in WAR deployment) + - http://sourceforge.net/apps/trac/bigdata/ticket/399 (Add a fast range count method to the REST API) + - http://sourceforge.net/apps/trac/bigdata/ticket/422 (Support temp triple store wrapped by a BigdataSail) + - http://sourceforge.net/apps/trac/bigdata/ticket/424 (NQuads support for NanoSparqlServer) + - http://sourceforge.net/apps/trac/bigdata/ticket/425 (Bug fix to DEFAULT_RDF_FORMAT for bulk data loader in scale-out) + - http://sourceforge.net/apps/trac/bigdata/ticket/426 (Support either lockfile (procmail) and dotlockfile (liblockfile1) in scale-out) + - http://sourceforge.net/apps/trac/bigdata/ticket/427 (BigdataSail#getReadOnlyConnection() race condition with concurrent commit) + +1.0.2 + + - http://sourceforge.net/apps/trac/bigdata/ticket/32 (Query time expansion of (foo rdf:type rdfs:Resource) drags in SPORelation for scale-out.) + - http://sourceforge.net/apps/trac/bigdata/ticket/181 (Scale-out LUBM "how to" in wiki and build.xml are out of date.) + - http://sourceforge.net/apps/trac/bigdata/ticket/356 (Query not terminated by error.) + - http://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.) + - http://sourceforge.net/apps/trac/bigdata/ticket/361 (IRunningQuery not closed promptly.) 
+ - http://sourceforge.net/apps/trac/bigdata/ticket/371 (DataLoader fails to load resources available from the classpath.) + - http://sourceforge.net/apps/trac/bigdata/ticket/376 (Support for the streaming of bigdata IBindingSets into a sparql query.) + - http://sourceforge.net/apps/trac/bigdata/ticket/378 (ClosedByInterruptException during heavy query mix.) + - http://sourceforge.net/apps/trac/bigdata/ticket/379 (NotSerializableException for SPOAccessPath.) + - http://sourceforge.net/apps/trac/bigdata/ticket/382 (Change dependencies to Apache River 2.2.0) + +1.0.1 (*) + + - http://sourceforge.net/apps/trac/bigdata/ticket/107 (Unicode clean schema names in the sparse row store). + - http://sourceforge.net/apps/trac/bigdata/ticket/124 (TermIdEncoder should use more bits for scale-out). + - http://sourceforge.net/apps/trac/bigdata/ticket/225 (OSX requires specialized performance counter collection classes). + - http://sourceforge.net/apps/trac/bigdata/ticket/348 (BigdataValueFactory.asValue() must return new instance when DummyIV is used). + - http://sourceforge.net/apps/trac/bigdata/ticket/349 (TermIdEncoder limits Journal to 2B distinct RDF Values per triple/quad store instance). + - http://sourceforge.net/apps/trac/bigdata/ticket/351 (SPO not Serializable exception in SIDS mode (scale-out)). + - http://sourceforge.net/apps/trac/bigdata/ticket/352 (ClassCastException when querying with binding-values that are not known to the database). + - http://sourceforge.net/apps/trac/bigdata/ticket/353 (UnsupportedOperatorException for some SPARQL queries). + - http://sourceforge.net/apps/trac/bigdata/ticket/355 (Query failure when comparing with non materialized value). + - http://sourceforge.net/apps/trac/bigdata/ticket/357 (RWStore reports "FixedAllocator returning null address, with freeBits".) + - http://sourceforge.net/apps/trac/bigdata/ticket/359 (NamedGraph pattern fails to bind graph variable if only one binding exists.) + - http://sourceforge.net/apps/trac/bigdata/ticket/362 (log4j - slf4j bridge.) + +For more information about bigdata(R), please see the following links: + +[1] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Main_Page +[2] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=GettingStarted +[3] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Roadmap [4] http://www.bigdata.com/bigdata/docs/api/ [5] http://sourceforge.net/projects/bigdata/ [6] http://www.bigdata.com/blog [7] http://www.systap.com/bigdata.htm -[8] https://sourceforge.net/projects/bigdata/files/bigdata/ +[8] http://sourceforge.net/projects/bigdata/files/bigdata/ +[9] http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=DataMigration About bigdata: This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
From: <tho...@us...> - 2011-12-19 15:46:05
Revision: 5808 http://bigdata.svn.sourceforge.net/bigdata/?rev=5808&view=rev Author: thompsonbry Date: 2011-12-19 15:45:59 +0000 (Mon, 19 Dec 2011) Log Message: ----------- Modified the history retention time sample code (ReleaseTimes.java) to (a) include a link to the wiki page which describes the example; and (b) to use commit2() on the BigdataSailRepositoryConnection to atomically obtain the commit time. Modified Paths: -------------- branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/samples/com/bigdata/samples/ReleaseTimes.java Added Paths: ----------- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/samples/com/bigdata/samples/ReleaseTimes.java Added: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/samples/com/bigdata/samples/ReleaseTimes.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/samples/com/bigdata/samples/ReleaseTimes.java (rev 0) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/samples/com/bigdata/samples/ReleaseTimes.java 2011-12-19 15:45:59 UTC (rev 5808) @@ -0,0 +1,227 @@ +package com.bigdata.samples; + +import java.io.File; +import java.io.IOException; +import java.util.Properties; + +import org.openrdf.model.Literal; +import org.openrdf.model.Statement; +import org.openrdf.model.URI; +import org.openrdf.model.ValueFactory; +import org.openrdf.model.vocabulary.RDFS; +import org.openrdf.repository.RepositoryException; +import org.openrdf.repository.RepositoryResult; +import org.openrdf.sail.SailException; + +import com.bigdata.journal.BufferMode; +import com.bigdata.journal.ITx; +import com.bigdata.journal.Options; +import com.bigdata.rdf.sail.BigdataSail; +import com.bigdata.rdf.sail.BigdataSailRepository; +import com.bigdata.rdf.sail.BigdataSailRepositoryConnection; +import com.bigdata.rdf.store.BD; +import com.bigdata.service.AbstractTransactionService; + +/** + * Example of History retention usage with Sail interface classes. + * + * A sequence of commits updates the value of a statement + * and retains the commit time of each. + * + * We can confirm that the correct data is associated with the relevant + * commit time + * + * On closing and re-opening the store, the historical data remains + * accessible, however, the release time is recalculated when connections + * are closed. The code clarifies several of the issues involved. + * + * @see http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=RetentionHistory + * + * @author Martyn Cutcher + */ +public class ReleaseTimes { + + public static void main(String[] args) throws IOException, InterruptedException, SailException, RepositoryException { + /** + * First create a new repository with 5000ms (5 second) retention + */ + BigdataSail sail = new BigdataSail(getProperties("5000")); + sail.initialize(); + BigdataSailRepository repo = new BigdataSailRepository(sail); + final BigdataSailRepositoryConnection cxn = repo.getConnection(); + cxn.setAutoCommit(false); + + final ValueFactory vf = sail.getValueFactory(); + + /* + * Create some terms. 
+ */ + final URI devuri = vf.createURI(BD.NAMESPACE + "developer"); + final Literal mp = vf.createLiteral("mike personick"); + final Literal bt = vf.createLiteral("bryan thompson"); + final Literal mc = vf.createLiteral("martyn cutcher"); + + /* + * Associate statements with recorded state + */ + System.out.println("Creating historical states..."); + final long state1 = assertStatement(cxn, devuri, RDFS.LABEL, mp); + final long state2 = assertStatement(cxn, devuri, RDFS.LABEL, bt); + final long state3 = assertStatement(cxn, devuri, RDFS.LABEL, mc); + + System.out.println("State1: " + state1 + ", state2: " + state2 + ", state3: " + state3); + + // Check historical state + System.out.println("Checking historical states..."); + check(checkState(repo, state2, devuri, RDFS.LABEL, bt)); + check(checkState(repo, state1, devuri, RDFS.LABEL, mp)); + check(checkState(repo, state3, devuri, RDFS.LABEL, mc)); + + System.out.println("Shutting down..."); + // Shutdown sail + cxn.close(); + sail.shutDown(); + + // Reopen with sufficient retention time to cover previous states + sail = new BigdataSail(getProperties("5000")); + sail.initialize(); + repo = new BigdataSailRepository(sail); + final BigdataSailRepositoryConnection cxn2 = repo.getConnection(); + cxn2.setAutoCommit(false); + System.out.println("Reopened"); + + // wait for data to "age" + Thread.sleep(5000); + + // ... confirm earliest historical access after open + check(checkState(repo, state2, devuri, RDFS.LABEL, bt)); + System.out.println("History retained after re-open"); + + // closing the connection will reset the release time so + // that subsequent accesses will fail + check(!checkState(repo, state2, devuri, RDFS.LABEL, bt)); + check(!checkState(repo, state1, devuri, RDFS.LABEL, mp)); + System.out.println("History released after closing read-only connections"); + + // this is last committed state, so will succeed + check(checkState(repo, state3, devuri, RDFS.LABEL, mc)); + System.out.println("History of last commit point accessible"); + + // .. and will also succeed directly against READ_COMMITTED + check(checkState(repo, ITx.READ_COMMITTED, devuri, RDFS.LABEL, mc)); + + // Updating the statement... + assertStatement(cxn2, devuri, RDFS.LABEL, mp); + System.out.println("Statement udated"); + + // the previous historical access now fails + check(!checkState(repo, state3, devuri, RDFS.LABEL, mc)); + System.out.println("History of last commit point of re-opened journal no longer accessible after update"); + + // ..and the READ_COMMITTED is updated + check(checkState(repo, ITx.READ_COMMITTED, devuri, RDFS.LABEL, mp)); + System.out.println("Committed state confirmed"); + + cxn2.close(); + sail.shutDown(); + + System.out.println("DONE"); + } + + private static void check(boolean checkState) { + if (!checkState) { + throw new AssertionError("Unexpected results"); + } + } + + /** + * Define statement with a URI identifier, type and literal label + * + * First ensure that any previous valued statement is removed. + * + * Commit, and return the store state. 
+ */ + private static long assertStatement(BigdataSailRepositoryConnection cxn, URI id, + URI pred, Literal label) throws RepositoryException + { + cxn.remove(id, pred, null); + + check(!cxn.getStatements(id, pred, null, false).hasNext()); + + cxn.add(id, pred, label); + + check(cxn.getStatements(id, pred, label, false).hasNext()); + + // Return the time of the commit + return cxn.commit2(); + } + + /** + * To check a specific state we retrieve a readOnlyConnection from the + * repository for the time/state specified. + * + * An IllegalStateException is thrown if no valid connection can be + * established. + * + * We then retrieve statements from the connection, checking that only + * the requested statement is present. + */ + static boolean checkState(final BigdataSailRepository repo, final long state, + final URI id, final URI pred, final Literal label ) + { + try { + final BigdataSailRepositoryConnection cxn; + try { + cxn = repo.getReadOnlyConnection(state); + } catch (IllegalStateException e) { + return false; // invalid state! + } + try { + RepositoryResult<Statement> results = cxn.getStatements(id, pred, null, false); + + // nothing found + if (!results.hasNext()) { + return false; + } + + // Check that result is the one expected + final String labelstr = label.toString(); + final String value = results.next().getObject().toString(); + + if (!labelstr.equals(value)) { + return false; + } + + // ..and that no others are returned + return !results.hasNext(); + } finally { + cxn.close(); + } + } catch (Exception e) { + e.printStackTrace(); + throw new AssertionError("Unable to load index"); + } + } + + static String filename = null; + static public Properties getProperties(String releaseAge) throws IOException { + + Properties properties = new Properties(); + + // create temporary file for this application run + if (filename == null) + filename = File.createTempFile("BIGDATA", "jnl").getAbsolutePath(); + + properties.setProperty(Options.FILE, filename); + properties.setProperty(Options.DELETE_ON_EXIT, "true"); + + // Set RWStore + properties.setProperty(Options.BUFFER_MODE, BufferMode.DiskRW.toString()); + + // Set minimum commit history + properties.setProperty(AbstractTransactionService.Options.MIN_RELEASE_AGE, releaseAge); + + return properties; + + } + } Property changes on: branches/BIGDATA_RELEASE_1_0_0/bigdata-sails/src/samples/com/bigdata/samples/ReleaseTimes.java ___________________________________________________________________ Added: svn:keywords + Id Date Revision Author HeadURL Modified: branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/samples/com/bigdata/samples/ReleaseTimes.java =================================================================== --- branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/samples/com/bigdata/samples/ReleaseTimes.java 2011-12-19 15:37:22 UTC (rev 5807) +++ branches/TERMS_REFACTOR_BRANCH/bigdata-sails/src/samples/com/bigdata/samples/ReleaseTimes.java 2011-12-19 15:45:59 UTC (rev 5808) @@ -2,32 +2,24 @@ import java.io.File; import java.io.IOException; -import java.text.SimpleDateFormat; -import java.util.Date; import java.util.Properties; import org.openrdf.model.Literal; import org.openrdf.model.Statement; import org.openrdf.model.URI; import org.openrdf.model.ValueFactory; -import org.openrdf.model.vocabulary.RDF; import org.openrdf.model.vocabulary.RDFS; -import org.openrdf.repository.RepositoryConnection; import org.openrdf.repository.RepositoryException; import org.openrdf.repository.RepositoryResult; import org.openrdf.sail.SailException; 
-import com.bigdata.btree.IIndex; import com.bigdata.journal.BufferMode; import com.bigdata.journal.ITx; -import com.bigdata.journal.Journal; import com.bigdata.journal.Options; -import com.bigdata.journal.Tx; import com.bigdata.rdf.sail.BigdataSail; import com.bigdata.rdf.sail.BigdataSailRepository; import com.bigdata.rdf.sail.BigdataSailRepositoryConnection; import com.bigdata.rdf.store.BD; -import com.bigdata.service.AbstractClient; import com.bigdata.service.AbstractTransactionService; /** @@ -42,11 +34,11 @@ * On closing and re-opening the store, the historical data remains * accessible, however, the release time is recalculated when connections * are closed. The code clarifies several of the issues involved. + * + * @see http://sourceforge.net/apps/mediawiki/bigdata/index.php?title=RetentionHistory * * @author Martyn Cutcher - * */ - public class ReleaseTimes { public static void main(String[] args) throws IOException, InterruptedException, SailException, RepositoryException { @@ -160,10 +152,8 @@ check(cxn.getStatements(id, pred, label, false).hasNext()); - cxn.commit(); - - // Return the time of the commit - return cxn.getRepository().getDatabase().getIndexManager().getLastCommitTime(); + // Return the time of the commit + return cxn.commit2(); } /** This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site. |
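For quick reference, the retention configuration that ReleaseTimes.java depends on reduces to two properties. A sketch using the sample's own values follows (the sample additionally sets Options.FILE and Options.DELETE_ON_EXIT for its temporary journal):

    import java.util.Properties;

    import com.bigdata.journal.BufferMode;
    import com.bigdata.journal.Options;
    import com.bigdata.service.AbstractTransactionService;

    public class RetentionConfigSketch {
        /** Minimum retention configuration used by the ReleaseTimes sample. */
        public static Properties retentionProperties(final String releaseAgeMillis) {
            final Properties p = new Properties();
            // The RWStore recycles storage, so the release of historical
            // commit points is meaningful.
            p.setProperty(Options.BUFFER_MODE, BufferMode.DiskRW.toString());
            // Committed states are retained for at least this many
            // milliseconds before they may be released.
            p.setProperty(AbstractTransactionService.Options.MIN_RELEASE_AGE,
                    releaseAgeMillis);
            return p;
        }
    }

A historical state is then read by requesting a read-only connection for the timestamp returned by commit2(); an IllegalStateException indicates that the requested commit point has already been released.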
From: <tho...@us...> - 2011-12-19 16:57:00
Revision: 5809 http://bigdata.svn.sourceforge.net/bigdata/?rev=5809&view=rev Author: thompsonbry Date: 2011-12-19 16:56:48 +0000 (Mon, 19 Dec 2011) Log Message: ----------- Change sets for RWStore history and persistent memory leaks following code review with MCutcher. Commit is against 1.0.x maintenance branch and TERMS_REFACTOR_BRANCH. @see https://sourceforge.net/apps/trac/bigdata/ticket/217 (BTreeCounters does not track bytes released) @see https://sourceforge.net/apps/trac/bigdata/ticket/329 (B+Tree should delete bloom filter when it is disabled) @see https://sourceforge.net/apps/trac/bigdata/ticket/372 (RWStore does not prune commit record index). @see https://sourceforge.net/apps/trac/bigdata/ticket/375 (Persistent memory leaks (RWStore/DISK)) Modified Paths: -------------- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/AbstractBTree.java branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/BTree.java branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/BTreeCounters.java branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/BloomFilter.java branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/concurrent/NonBlockingLockManagerWithNewDesign.java branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/AbstractTask.java branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/ConcurrencyManager.java branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/Journal.java branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/Name2Addr.java branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/RWStrategy.java branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/service/AbstractTransactionService.java branches/BIGDATA_RELEASE_1_0_0/bigdata/src/test/com/bigdata/journal/StressTestConcurrentUnisolatedIndices.java branches/BIGDATA_RELEASE_1_0_0/bigdata/src/test/com/bigdata/journal/TestCommitHistory.java branches/BIGDATA_RELEASE_1_0_0/bigdata/src/test/com/bigdata/journal/TestJournalShutdown.java branches/BIGDATA_RELEASE_1_0_0/bigdata/src/test/com/bigdata/rwstore/TestRWJournal.java branches/TERMS_REFACTOR_BRANCH/bigdata/src/java/com/bigdata/btree/AbstractBTree.java branches/TERMS_REFACTOR_BRANCH/bigdata/src/java/com/bigdata/btree/BTree.java branches/TERMS_REFACTOR_BRANCH/bigdata/src/java/com/bigdata/btree/BTreeCounters.java branches/TERMS_REFACTOR_BRANCH/bigdata/src/java/com/bigdata/btree/BloomFilter.java branches/TERMS_REFACTOR_BRANCH/bigdata/src/java/com/bigdata/concurrent/NonBlockingLockManagerWithNewDesign.java branches/TERMS_REFACTOR_BRANCH/bigdata/src/java/com/bigdata/journal/AbstractJournal.java branches/TERMS_REFACTOR_BRANCH/bigdata/src/java/com/bigdata/journal/AbstractTask.java branches/TERMS_REFACTOR_BRANCH/bigdata/src/java/com/bigdata/journal/ConcurrencyManager.java branches/TERMS_REFACTOR_BRANCH/bigdata/src/java/com/bigdata/journal/Journal.java branches/TERMS_REFACTOR_BRANCH/bigdata/src/java/com/bigdata/journal/Name2Addr.java branches/TERMS_REFACTOR_BRANCH/bigdata/src/java/com/bigdata/journal/RWStrategy.java branches/TERMS_REFACTOR_BRANCH/bigdata/src/java/com/bigdata/rwstore/RWStore.java branches/TERMS_REFACTOR_BRANCH/bigdata/src/java/com/bigdata/rwstore/sector/MemStore.java branches/TERMS_REFACTOR_BRANCH/bigdata/src/java/com/bigdata/service/AbstractTransactionService.java 
branches/TERMS_REFACTOR_BRANCH/bigdata/src/test/com/bigdata/bop/join/AbstractHashJoinUtilityTestCase.java branches/TERMS_REFACTOR_BRANCH/bigdata/src/test/com/bigdata/journal/StressTestConcurrentUnisolatedIndices.java branches/TERMS_REFACTOR_BRANCH/bigdata/src/test/com/bigdata/journal/TestCommitHistory.java branches/TERMS_REFACTOR_BRANCH/bigdata/src/test/com/bigdata/journal/TestJournalShutdown.java branches/TERMS_REFACTOR_BRANCH/bigdata/src/test/com/bigdata/rwstore/TestRWJournal.java Added Paths: ----------- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/test/com/bigdata/journal/TestSimpleReleaseTimes.java branches/BIGDATA_RELEASE_1_0_0/bigdata/src/test/com/bigdata/rwstore/TestAllocBits.java branches/TERMS_REFACTOR_BRANCH/bigdata/src/test/com/bigdata/htree/ShowHTreeResourceUsage.java branches/TERMS_REFACTOR_BRANCH/bigdata/src/test/com/bigdata/journal/TestSimpleReleaseTimes.java branches/TERMS_REFACTOR_BRANCH/bigdata/src/test/com/bigdata/rwstore/TestAllocBits.java Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/AbstractBTree.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/AbstractBTree.java 2011-12-19 15:45:59 UTC (rev 5808) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/AbstractBTree.java 2011-12-19 16:56:48 UTC (rev 5809) @@ -2025,7 +2025,8 @@ * have an acceptable error rate. */ - filter.disable(); + final long curAddr = filter.disable(); + store.delete(curAddr); log.warn("Bloom filter disabled - maximum error rate would be exceeded" + ": entryCount=" @@ -4176,11 +4177,11 @@ if (isReadOnly()) throw new IllegalStateException(ERROR_READ_ONLY); - getStore().delete(addr); - final int nbytes = getStore().getByteCount(addr); - + btreeCounters.bytesReleased += nbytes; btreeCounters.bytesOnStore_nodesAndLeaves.addAndGet(-nbytes); + + getStore().delete(addr); } Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/BTree.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/BTree.java 2011-12-19 15:45:59 UTC (rev 5808) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/BTree.java 2011-12-19 16:56:48 UTC (rev 5809) @@ -933,13 +933,22 @@ * The bloom filter is enabled, is loaded and is dirty, so write * it on the store now. */ + + final long oldAddr = filter.getAddr(); + if (oldAddr != IRawStore.NULL) { + this.getBtreeCounters().bytesReleased += store.getByteCount(oldAddr); + store.delete(oldAddr); + } + filter.write(store); } } + // TODO: Ensure indexMetadata is recycled + if (metadata.getMetadataAddr() == 0L) { /* @@ -951,6 +960,20 @@ } + // delete old checkpoint data + final long oldAddr = checkpoint != null ? checkpoint.addrCheckpoint : IRawStore.NULL; + if (oldAddr != IRawStore.NULL) { + this.getBtreeCounters().bytesReleased += store.getByteCount(oldAddr); + store.delete(oldAddr); + } + + // delete old root data if changed + final long oldRootAddr = checkpoint != null ? checkpoint.getRootAddr() : IRawStore.NULL; + if (oldRootAddr != IRawStore.NULL && oldRootAddr != root.identity) { + this.getBtreeCounters().bytesReleased += store.getByteCount(oldRootAddr); + store.delete(oldRootAddr); + } + // create new checkpoint record. 
checkpoint = metadata.newCheckpoint(this); @@ -1253,6 +1276,9 @@ assertNotReadOnly(); + /* + * FIXME simplify conditionals - mgc + */ if (!getIndexMetadata().getDeleteMarkers() && getStore() instanceof RWStrategy) { Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/BTreeCounters.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/BTreeCounters.java 2011-12-19 15:45:59 UTC (rev 5808) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/BTreeCounters.java 2011-12-19 16:56:48 UTC (rev 5809) @@ -165,6 +165,7 @@ nodesWritten += o.nodesWritten; leavesWritten += o.leavesWritten; bytesWritten += o.bytesWritten; + bytesReleased += o.bytesReleased; writeNanos += o.writeNanos; serializeNanos += o.serializeNanos; rawRecordsWritten += o.rawRecordsWritten; @@ -230,6 +231,7 @@ t.nodesWritten -= o.nodesWritten; t.leavesWritten -= o.leavesWritten; t.bytesWritten -= o.bytesWritten; + t.bytesReleased -= o.bytesReleased; t.serializeNanos -= o.serializeNanos; t.writeNanos -= o.writeNanos; t.rawRecordsWritten -= o.rawRecordsWritten; @@ -356,6 +358,7 @@ public int nodesWritten = 0; public int leavesWritten = 0; public long bytesWritten = 0L; + public long bytesReleased = 0L; public long writeNanos = 0; public long serializeNanos = 0; public long rawRecordsWritten = 0; @@ -509,6 +512,15 @@ } /** + * The number of bytes released from the backing store. + */ + final public long getBytesReleased() { + + return bytesReleased; + + } + + /** * Return a {@link CounterSet} reporting on the various counters tracked in * the instance fields of this class. */ @@ -918,6 +930,12 @@ } }); + tmp.addCounter("bytesReleased", new Instrument<Long>() { + protected void sample() { + setValue(bytesReleased); + } + }); + } // } Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/BloomFilter.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/BloomFilter.java 2011-12-19 15:45:59 UTC (rev 5808) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/BloomFilter.java 2011-12-19 16:56:48 UTC (rev 5809) @@ -559,20 +559,27 @@ * At that point the {@link BTree} is dirty. {@link Checkpoint} will notice * that the bloom filter is disabled will write its address as 0L so the * bloom filter is no longer reachable from the post-checkpoint record. + * <p> + * @return the current address for recycling */ - final public void disable() { - + final public long disable() { + final long ret = addr; + if (enabled) { enabled = false; // release the filter impl. this is often 1-10M of data! filter = null; + addr = 0; if (log.isInfoEnabled()) log.info("disabled."); + - } + } + + return ret; } Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/concurrent/NonBlockingLockManagerWithNewDesign.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/concurrent/NonBlockingLockManagerWithNewDesign.java 2011-12-19 15:45:59 UTC (rev 5808) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/concurrent/NonBlockingLockManagerWithNewDesign.java 2011-12-19 16:56:48 UTC (rev 5809) @@ -1125,13 +1125,13 @@ final LockFutureTask task = (LockFutureTask) getTaskWithLocks(resource); - if(task == null) { + if (task == null) { /* * There is no task holding all of these locks. 
*/ - throw new IllegalStateException(); + throw new IllegalStateException("Task does not hold all required locks"); } Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2011-12-19 15:45:59 UTC (rev 5808) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/AbstractJournal.java 2011-12-19 16:56:48 UTC (rev 5809) @@ -59,6 +59,9 @@ import com.bigdata.btree.BytesUtil; import com.bigdata.btree.Checkpoint; import com.bigdata.btree.IIndex; +import com.bigdata.btree.IRangeQuery; +import com.bigdata.btree.ITuple; +import com.bigdata.btree.ITupleIterator; import com.bigdata.btree.IndexMetadata; import com.bigdata.btree.ReadOnlyIndex; import com.bigdata.btree.keys.ICUVersionRecord; @@ -2369,18 +2372,29 @@ final IRootBlockView old = _rootBlock; final long newCommitCounter = old.getCommitCounter() + 1; - + final ICommitRecord commitRecord = new CommitRecord(commitTime, newCommitCounter, rootAddrs); final long commitRecordAddr = write(ByteBuffer .wrap(CommitRecordSerializer.INSTANCE.serialize(commitRecord))); /* + * Before flushing the commitRecordIndex we need to check for + * deferred frees that will prune the index. + * + * Do this BEFORE adding new commit record which will otherwise + * be immediately removed if no history is retained + */ + if (_bufferStrategy instanceof RWStrategy) { + ((RWStrategy) _bufferStrategy).checkDeferredFrees(this); + } + + /* * Add the commit record to an index so that we can recover * historical states efficiently. */ _commitRecordIndex.add(commitRecordAddr, commitRecord); - + /* * Flush the commit record index to the store and stash the address * of its metadata record in the root block. @@ -2394,13 +2408,7 @@ */ final long commitRecordIndexAddr = _commitRecordIndex.writeCheckpoint(); - /* - * DEBUG: The commitRecordIndexAddr should not be deleted, the - * call to lockAddress forces a runtime check protecting the address - */ - if (_bufferStrategy instanceof RWStrategy) { - ((RWStrategy) _bufferStrategy).lockAddress(commitRecordIndexAddr); - } + if (quorum != null) { /* @@ -3248,6 +3256,12 @@ * handling this. */ public ICommitRecord getCommitRecord(final long commitTime) { + + if (this._bufferStrategy instanceof RWStrategy) { + if (commitTime <= ((RWStrategy) _bufferStrategy).getLastReleaseTime()) { + return null; // no index available + } + } final ReadLock lock = _fieldReadWriteLock.readLock(); @@ -4247,4 +4261,32 @@ return port; } + /** + * Remove all commit records between the two provided keys. + * + * This is called from the RWStore when it checks for deferredFrees + * against the CommitRecordIndex where the CommitRecords + * reference the deleteBlocks that have been deferred. + * + * Once processed the records for the effected range muct + * be removed as they reference invalid states. + * + * @param fromKey + * @param toKey + */ + public int removeCommitRecordEntries(byte[] fromKey, byte[] toKey) { + final CommitRecordIndex cri = _commitRecordIndex; // Use the LIVE indeex! 
+ final ITupleIterator<CommitRecordIndex.Entry> commitRecords + = cri.rangeIterator(fromKey, toKey, 0/* capacity*/, IRangeQuery.CURSOR, null/*filter*/); + + int removed = 0; + while (commitRecords.hasNext()) { + final ITuple<CommitRecordIndex.Entry> entry = commitRecords.next(); + commitRecords.remove(); + removed++; + } + + return removed; + } + } Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/AbstractTask.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/AbstractTask.java 2011-12-19 15:45:59 UTC (rev 5808) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/AbstractTask.java 2011-12-19 16:56:48 UTC (rev 5809) @@ -71,6 +71,7 @@ import com.bigdata.resources.StaleLocatorException; import com.bigdata.resources.StaleLocatorReason; import com.bigdata.rwstore.IAllocationContext; +import com.bigdata.rwstore.RWStore.RawTx; import com.bigdata.sparse.GlobalRowStoreHelper; import com.bigdata.sparse.SparseRowStore; import com.bigdata.util.InnerCause; @@ -2225,10 +2226,12 @@ detachContext(); } + // RawTx encapsulates the transaction protocol to prevent multiple calls to close() + private final RawTx m_rawTx; + public void completeTask() { - final IBufferStrategy bufferStrategy = delegate.getBufferStrategy(); - if (bufferStrategy instanceof RWStrategy) { - ((RWStrategy) bufferStrategy).getRWStore().deactivateTx(); + if (m_rawTx != null) { + m_rawTx.close(); } } @@ -2265,8 +2268,10 @@ // must grab the tx BEFORE registering the context to correctly // bracket, since the tx count is decremented AFTER the // context is released - ((RWStrategy) bufferStrategy).getRWStore().activateTx(); + m_rawTx = ((RWStrategy) bufferStrategy).getRWStore().newTx(); ((RWStrategy) bufferStrategy).getRWStore().registerContext(this); + } else { + m_rawTx = null; } } Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/ConcurrencyManager.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/ConcurrencyManager.java 2011-12-19 15:45:59 UTC (rev 5808) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/ConcurrencyManager.java 2011-12-19 16:56:48 UTC (rev 5809) @@ -1558,7 +1558,9 @@ f.get(nanos, TimeUnit.NANOSECONDS); } catch (TimeoutException ex) { - + + if (log.isInfoEnabled()) log.info("Task Timeout"); + return futures; } catch (ExecutionException ex) { Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/Journal.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/Journal.java 2011-12-19 15:45:59 UTC (rev 5808) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/Journal.java 2011-12-19 16:56:48 UTC (rev 5809) @@ -73,6 +73,7 @@ import com.bigdata.resources.IndexManager; import com.bigdata.resources.ResourceManager; import com.bigdata.resources.StaleLocatorReason; +import com.bigdata.rwstore.RWStore.RawTx; import com.bigdata.service.AbstractTransactionService; import com.bigdata.service.DataService; import com.bigdata.service.IBigdataClient; @@ -320,6 +321,8 @@ final JournalTransactionService abstractTransactionService = new JournalTransactionService( checkProperties(properties), this) { + final private AtomicReference<RawTx> m_rawTx = new AtomicReference<RawTx>(null); + { final long lastCommitTime = 
@@ -339,19 +342,18 @@
             protected void activateTx(final TxState state) {
                 final IBufferStrategy bufferStrategy = Journal.this.getBufferStrategy();
-                if(bufferStrategy instanceof RWStrategy) {
-//                  Logger.getLogger("TransactionTrace").info("OPEN: txId="+state.tx+", readsOnCommitTime="+state.readCommitTime);
-                    ((RWStrategy)bufferStrategy).getRWStore().activateTx();
+                if (bufferStrategy instanceof RWStrategy) {
+//                  Logger.getLogger("TransactionTrace").info("OPEN: txId="+state.tx+", readsOnCommitTime="+state.readCommitTime);
+                    m_rawTx.set(((RWStrategy)bufferStrategy).getRWStore().newTx());
                 }
                 super.activateTx(state);
             }

             protected void deactivateTx(final TxState state) {
                 super.deactivateTx(state);
-                final IBufferStrategy bufferStrategy = Journal.this.getBufferStrategy();
-                if(bufferStrategy instanceof RWStrategy) {
-//                  Logger.getLogger("TransactionTrace").info("DONE: txId="+state.tx+", readsOnCommitTime="+state.readCommitTime);
-                    ((RWStrategy)bufferStrategy).getRWStore().deactivateTx();
+
+                if (m_rawTx.get() != null) {
+                    m_rawTx.get().close();
                 }
             }

Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/Name2Addr.java
===================================================================
--- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/Name2Addr.java	2011-12-19 15:45:59 UTC (rev 5808)
+++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/Name2Addr.java	2011-12-19 16:56:48 UTC (rev 5809)
@@ -630,7 +630,6 @@

             // update persistent mapping.
             insert(key, EntrySerializer.INSTANCE.serialize( entry ));
-
         }
     }

Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/RWStrategy.java
===================================================================
--- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/RWStrategy.java	2011-12-19 15:45:59 UTC (rev 5808)
+++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/journal/RWStrategy.java	2011-12-19 16:56:48 UTC (rev 5809)
@@ -647,9 +647,34 @@
      * to be monitored to ensure it is not freed.
      *
      * @param addr - address to be locked
+     * @return true - for use in assert statements
      */
-    public void lockAddress(final long addr) {
+    public boolean lockAddress(final long addr) {
         m_store.lockAddress(decodeAddr(addr));
+
+        return true;
     }

+    /**
+     * If history is retained, this returns the time for which data was most
+     * recently released. No request can be made for data earlier than this.
+     *
+     * @return latest data release time
+     */
+    long getLastReleaseTime() {
+        return m_store.getLastDeferredReleaseTime();
+    }
+
+    /**
+     * Lifted to provide a direct interface from the Journal so that the
+     * CommitRecordIndex can be pruned prior to the store commit.
+     */
+    public void checkDeferredFrees(final AbstractJournal journal) {
+        final int totalFreed = m_store.checkDeferredFrees(true, journal); // free now if possible
+
+        if (totalFreed > 0 && log.isInfoEnabled()) {
+            log.info("Freed " + totalFreed + " deferrals on commit");
+        }
+    }
+
 }

Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java
===================================================================
--- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java	2011-12-19 15:45:59 UTC (rev 5808)
+++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/rwstore/RWStore.java	2011-12-19 16:56:48 UTC (rev 5809)
@@ -40,6 +40,7 @@
 import java.util.TreeMap;
 import java.util.TreeSet;
 import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicReference;
 import java.util.concurrent.locks.Lock;
 import java.util.concurrent.locks.ReentrantLock;
@@ -47,6 +48,7 @@

 import org.apache.log4j.Logger;

+import com.bigdata.btree.BytesUtil;
 import com.bigdata.btree.IIndex;
 import com.bigdata.btree.ITuple;
 import com.bigdata.btree.ITupleIterator;
@@ -2205,11 +2207,11 @@

         try {

-            final int totalFreed = checkDeferredFrees(true, journal); // free now if possible
-
-            if (totalFreed > 0 && log.isInfoEnabled()) {
-                log.info("Freed " + totalFreed + " deferrals on commit");
-            }
+//            final int totalFreed = checkDeferredFrees(true, journal); // free now if possible
+//
+//            if (totalFreed > 0 && log.isInfoEnabled()) {
+//                log.info("Freed " + totalFreed + " deferrals on commit");
+//            }
             // free old storageStatsAddr
             if (m_storageStatsAddr != 0) {
                 int len = (int) (m_storageStatsAddr & 0xFFFF);
@@ -2315,7 +2317,7 @@
      *
      * returns number of addresses freed
      */
-    /* public */int checkDeferredFrees(final boolean freeNow,
+    public /* public */int checkDeferredFrees(final boolean freeNow,
             final AbstractJournal journal) {

         // Note: Invoked from unit test w/o the lock...
@@ -2324,42 +2326,65 @@

         if (journal != null) {

-            final AbstractTransactionService transactionService = (AbstractTransactionService) journal
-                    .getLocalTransactionManager().getTransactionService();
+            /**
+             * Since this is now called directly from the AbstractJournal
+             * commit method we must take the allocation lock.
+             *
+             * This may have adverse effects wrt concurrency/deadlock issues,
+             * but none have been noticed so far.
+             */
+            m_allocationLock.lock();
+
+            try {
+                final AbstractTransactionService transactionService = (AbstractTransactionService) journal
+                        .getLocalTransactionManager().getTransactionService();

-//            // the previous commit point.
-//            long lastCommitTime = journal.getLastCommitTime();
-//
-//            if (lastCommitTime == 0L) {
-//                // Nothing committed.
-//                return;
-//            }
-
-            // the effective release time.
-            long latestReleasableTime = transactionService.getReleaseTime();
+                // // the previous commit point.
+                // long lastCommitTime = journal.getLastCommitTime();
+                //
+                // if (lastCommitTime == 0L) {
+                //     // Nothing committed.
+                //     return;
+                // }

+                /**
+                 * The effective release time does not need a lock since we are
+                 * called from within the AbstractJournal commit. The
+                 * calculation can safely be based on the system time, the min
+                 * release age and the earliest active transaction.
+                 *
+                 * The purpose is to ensure the RWStore can recycle any storage
+                 * released since this time.
+                 */
+                long latestReleasableTime = transactionService
+                        .getEarliestReleaseTime();
+                // long latestReleasableTime =
+                //     transactionService.getReleaseTime(); // revert for test

-            /*
-             * add one because we want to read the delete blocks for all commit
-             * points up to and including the first commit point that we may not
-             * release.
-             */
-            latestReleasableTime++;
+                /*
+                 * add one because we want to read the delete blocks for all
+                 * commit points up to and including the first commit point that
+                 * we may not release.
+                 */
+                latestReleasableTime++;

-            /*
-             * add one to give this inclusive upper bound semantics to the range
-             * scan.
-             */
-            latestReleasableTime++;
+                /*
+                 * add one to give this inclusive upper bound semantics to the
+                 * range scan.
+                 */
+                latestReleasableTime++;

-            /*
-             * Free deferrals.
-             *
-             * Note: This adds one to the lastDeferredReleaseTime to give
-             * exclusive lower bound semantics.
-             */
-            return freeDeferrals(journal, m_lastDeferredReleaseTime + 1,
-                    latestReleasableTime);
+                /*
+                 * Free deferrals.
+                 *
+                 * Note: This adds one to the lastDeferredReleaseTime to give
+                 * exclusive lower bound semantics.
+                 */
+                return freeDeferrals(journal, m_lastDeferredReleaseTime + 1,
+                        latestReleasableTime);
+            } finally {
+                m_allocationLock.unlock();
+            }

         } else {
             return 0;
         }
@@ -3442,6 +3467,8 @@
             nxtAddr = strBuf.readInt();
         }

+        // now free the delete block
+        immediateFree(addr, sze);
         m_lastDeferredReleaseTime = lastReleaseTime;

         if (log.isTraceEnabled())
             log.trace("Updated m_lastDeferredReleaseTime="
@@ -3470,12 +3497,14 @@
             final long toTime) {

         final ITupleIterator<CommitRecordIndex.Entry> commitRecords;
-        {
         /*
          * Commit can be called prior to Journal initialisation, in which
          * case the commitRecordIndex will not be set.
          */
         final IIndex commitRecordIndex = journal.getReadOnlyCommitRecordIndex();
+        if (commitRecordIndex == null) {
+            return 0;
+        }

         final IndexMetadata metadata = commitRecordIndex
                 .getIndexMetadata();
@@ -3489,11 +3518,7 @@

         commitRecords = commitRecordIndex
                 .rangeIterator(fromKey, toKey);
-        }

-        if(log.isTraceEnabled())
-            log.trace("fromTime=" + fromTime + ", toTime=" + toTime);
-
         int totalFreed = 0;

         while (commitRecords.hasNext()) {
@@ -3501,21 +3526,38 @@

             final ITuple<CommitRecordIndex.Entry> tuple = commitRecords.next();

             final CommitRecordIndex.Entry entry = tuple.getObject();
+
+            try {
+                final ICommitRecord record = CommitRecordSerializer.INSTANCE
+                        .deserialize(journal.read(entry.addr));
+
+                final long blockAddr = record
+                        .getRootAddr(AbstractJournal.DELETEBLOCK);
+
+                if (blockAddr != 0) {
+
+                    totalFreed += freeDeferrals(blockAddr, record.getTimestamp());
+
+                }
+
+                immediateFree((int) (entry.addr >> 32), (int) entry.addr);
+            } catch (RuntimeException re) {
+                log.warn("Problem with entry at " + entry.addr);
+                throw re;
+            }

-            final ICommitRecord record = CommitRecordSerializer.INSTANCE
-                    .deserialize(journal.read(entry.addr));
-
-            final long blockAddr = record
-                    .getRootAddr(AbstractJournal.DELETEBLOCK);
-
-            if (blockAddr != 0) {
-
-                totalFreed += freeDeferrals(blockAddr, record.getTimestamp());
-
-            }
-
         }

+        // Now remove all record entries
+        final int removed = journal.removeCommitRecordEntries(fromKey, toKey);
+        if (removed > 0) {
+            if(log.isInfoEnabled())
+                log.info("Removed " + removed + " commit records");
+        }
+
+        if(log.isInfoEnabled())
+            log.info("fromTime=" + fromTime + ", toTime=" + toTime
+                    + ", totalFreed=" + totalFreed);
+
         return totalFreed;
     }
@@ -4456,7 +4498,23 @@
         return m_storageStats;
     }

-    public void activateTx() {
+    public class RawTx {
+        final AtomicBoolean m_open = new AtomicBoolean(true);
+
+        RawTx() {
+            activateTx();
+        }
+
+        public void close() {
+            if (m_open.getAndSet(false))
+                deactivateTx();
+        }
+    }
+
+    public RawTx newTx() {
+        return new RawTx();
+    }
+
+    private void activateTx() {
         m_allocationLock.lock();
         try {
             m_activeTxCount++;
@@ -4467,7 +4525,7 @@
         }
     }

-    public void deactivateTx() {
+    private void deactivateTx() {
         m_allocationLock.lock();
         try {
             if (m_activeTxCount == 0) {
@@ -4547,5 +4605,15 @@
         return m_writeCache.getCounters();
     }

+    /**
+     * If historical data is maintained then this will return the earliest
+     * time for which data can be safely retrieved.
+     *
+     * @return time of last release
+     */
+    public long getLastDeferredReleaseTime() {
+        return m_lastDeferredReleaseTime;
+    }
+
 }

Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/service/AbstractTransactionService.java
===================================================================
--- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/service/AbstractTransactionService.java	2011-12-19 15:45:59 UTC (rev 5808)
+++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/service/AbstractTransactionService.java	2011-12-19 16:56:48 UTC (rev 5809)
@@ -798,6 +798,23 @@
     private volatile long releaseTime = 0L;

     /**
+     * Provides the correct value for RWStore deferred store releases to be
+     * recycled. The effective release time does not need a lock since we are
+     * called from within the AbstractJournal commit. The calculation can
+     * safely be based on the system time, the min release age and the
+     * earliest active transaction. The purpose is to permit the RWStore to
+     * recycle data based on the release time which will be in effect at the
+     * commit point.
+     *
+     * @return earliest time that data can be released
+     */
+    public long getEarliestReleaseTime() {
+        final long immediate = System.currentTimeMillis() - minReleaseAge;
+
+        return earliestTxStartTime == 0 || immediate < earliestTxStartTime
+                ? immediate : earliestTxStartTime;
+    }
+
+    /**
      * Sets the new release time.
      *
      * @param newValue
@@ -2463,4 +2480,5 @@
     }

     protected CounterSet countersRoot;
+
 }

Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/test/com/bigdata/journal/StressTestConcurrentUnisolatedIndices.java
===================================================================
--- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/test/com/bigdata/journal/StressTestConcurrentUnisolatedIndices.java	2011-12-19 15:45:59 UTC (rev 5808)
+++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/test/com/bigdata/journal/StressTestConcurrentUnisolatedIndices.java	2011-12-19 16:56:48 UTC (rev 5809)
@@ -145,12 +145,12 @@
          */
         doConcurrentClientTest(journal,//
                 30,// timeout
-                20,// nresources
+                20, // 3,// nresources // 20
                 1, // minLocks
-                3, // maxLocks
-                100, // ntrials
+                3, // 5 // maxLocks
+                100, //5000, // ntrials // 1000
                 3, // keyLen
-                1000, // nops
+                1000, // 1000, // nops
                 0.02d // failureRate
         );
@@ -468,7 +468,7 @@
                 final Thread other = btrees.putIfAbsent(name, t);

                 if (other != null) {
-
+                    log.error("Unisolated index already in use: " + resource[i]);
                     throw new AssertionError(
                             "Unisolated index already in use: " + resource[i]
                                     + ", currentThread=" + t
@@ -524,19 +524,21 @@
          */
         for (int i = 0; i < resource.length; i++) {

-            final String name = resource[i];
+            if (indices[i] != null) { // do NOT remove if never added!
+                final String name = resource[i];
+
+                final Thread tmp = btrees.remove(name);
+
+                if (tmp != t) {
+
+                    throw new AssertionError(
+                            "Index associated with another thread? index="
index=" + + name + ", currentThread=" + t + + ", otherThread=" + tmp); + + } + } - final Thread tmp = btrees.remove(name); - - if (tmp != t) { - - throw new AssertionError( - "Index associated with another thread? index=" - + name + ", currentThread=" + t - + ", otherThread=" + tmp); - - } - } } Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/test/com/bigdata/journal/TestCommitHistory.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/test/com/bigdata/journal/TestCommitHistory.java 2011-12-19 15:45:59 UTC (rev 5808) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/test/com/bigdata/journal/TestCommitHistory.java 2011-12-19 16:56:48 UTC (rev 5809) @@ -29,8 +29,11 @@ import java.io.IOException; import java.nio.ByteBuffer; +import java.util.Properties; import com.bigdata.btree.BTree; +import com.bigdata.btree.BytesUtil; +import com.bigdata.service.AbstractTransactionService; /** * Test the ability to get (exact match) and find (most recent less than or @@ -115,9 +118,63 @@ * {@link CommitRecordIndex}. */ public void test_recoverCommitRecord() { + final Properties properties = getProperties(); + // Set a release age for RWStore if required + properties.setProperty(AbstractTransactionService.Options.MIN_RELEASE_AGE, "5000"); + + final Journal journal = new Journal(properties); - final Journal journal = new Journal(getProperties()); + try { + /* + * The first commit flushes the root leaves of some indices so we + * get back a non-zero commit timestamp. + */ + assertTrue(0L != journal.commit()); + + /* + * A follow up commit in which nothing has been written should + * return a 0L timestamp. + */ + assertEquals(0L, journal.commit()); + + journal.write(ByteBuffer.wrap(new byte[] { 1, 2, 3 })); + + final long commitTime1 = journal.commit(); + + assertTrue(commitTime1 != 0L); + + ICommitRecord commitRecord = journal.getCommitRecord(commitTime1); + + assertNotNull(commitRecord); + + assertNotNull(journal.getCommitRecord()); + + assertEquals(commitTime1, journal.getCommitRecord().getTimestamp()); + + assertEquals(journal.getCommitRecord(), commitRecord); + + } finally { + + journal.destroy(); + + } + + } + /** + * Test the ability to recover a {@link ICommitRecord} from the + * {@link CommitRecordIndex}. + * + * A second commit should be void and therefore the previous record + * should be retrievable. + */ + public void test_recoverCommitRecordNoHistory() { + final Properties properties = getProperties(); + // Set a release age for RWStore if required + properties.setProperty(AbstractTransactionService.Options.MIN_RELEASE_AGE, "0"); + + final Journal journal = new Journal(properties); + try { /* @@ -161,7 +218,11 @@ */ public void test_commitRecordIndex_restartSafe() { - Journal journal = new Journal(getProperties()); + final Properties properties = getProperties(); + // Set a release age for RWStore if required + properties.setProperty(AbstractTransactionService.Options.MIN_RELEASE_AGE, "5000"); + + Journal journal = new Journal(properties); try { @@ -226,11 +287,17 @@ * the commit record index. This also tests restart-safety of the index with * multiple records (if the store is stable). * + * The minReleaseAge property has been added to test historical data protection, + * and not just the retention of the CommitRecords which currently are erroneously + * never removed. 
+ * * @throws IOException */ public void test_commitRecordIndex_find() throws IOException { - Journal journal = new Journal(getProperties()); + final Properties props = getProperties(); + props.setProperty("com.bigdata.service.AbstractTransactionService.minReleaseAge","2000"); // 2 seconds + Journal journal = new Journal(props); try { @@ -240,12 +307,20 @@ final long[] commitRecordIndexAddrs = new long[limit]; + final long[] dataRecordAddrs = new long[limit]; + final ByteBuffer[] dataRecords = new ByteBuffer[limit]; + final ICommitRecord[] commitRecords = new ICommitRecord[limit]; - + for(int i=0; i<limit; i++) { - // write some data. - journal.write(ByteBuffer.wrap(new byte[]{1,2,3})); + // write some data, this should be protected by minReleaseAge + dataRecords[i] = ByteBuffer.wrap(new byte[]{1,2,3,(byte) i}); + dataRecordAddrs[i] = journal.write(dataRecords[i]); + dataRecords[i].flip(); + if (i > 0) { + journal.delete(dataRecordAddrs[i-1]); // remove previous committed data + } // commit the store. commitTime[i] = journal.commit(); @@ -308,6 +383,9 @@ assertEquals(commitRecords[i], journal .getCommitRecord(commitTime[i])); + final ByteBuffer rdbuf = journal.read(dataRecordAddrs[i]); + assertTrue(dataRecords[i].compareTo(rdbuf) == 0); + } } @@ -357,9 +435,12 @@ * reference to the commit record of interest). */ public void test_canonicalizingCache() { + final Properties properties = getProperties(); + // Set a release age for RWStore if required + properties.setProperty(AbstractTransactionService.Options.MIN_RELEASE_AGE, "5000"); + + final Journal journal = new Journal(properties); - final Journal journal = new Journal(getProperties()); - try { /* Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/test/com/bigdata/journal/TestJournalShutdown.java =================================================================== --- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/test/com/bigdata/journal/TestJournalShutdown.java 2011-12-19 15:45:59 UTC (rev 5808) +++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/test/com/bigdata/journal/TestJournalShutdown.java 2011-12-19 16:56:48 UTC (rev 5809) @@ -29,10 +29,13 @@ import java.lang.ref.WeakReference; import java.util.Properties; +import java.util.UUID; +import java.util.concurrent.ExecutionException; import java.util.concurrent.atomic.AtomicInteger; import junit.framework.TestCase2; +import com.bigdata.btree.IndexMetadata; import com.bigdata.rawstore.Bytes; /** @@ -57,6 +60,68 @@ super(name); } + private int startupActiveThreads = 0; + + public void setUp() throws Exception { + + super.setUp(); + + startupActiveThreads = Thread.currentThread().getThreadGroup().activeCount(); + + } + + private static boolean s_checkThreads = true; + + public void tearDown() throws Exception { + + TestHelper.checkJournalsClosed(this); + + if (s_checkThreads) { + + final ThreadGroup grp = Thread.currentThread().getThreadGroup(); + final int tearDownActiveThreads = grp.activeCount(); + if (startupActiveThreads != tearDownActiveThreads) { + final Thread[] threads = new Thread[tearDownActiveThreads]; + grp.enumerate(threads); + final StringBuilder info = new StringBuilder(); + int nactive = 0; + for (Thread t : threads) { + if (t == null) + continue; + if(nactive>0) + info.append(','); + info.append("[" + t.getName() + "]"); + nactive++; + } + + final String failMessage = "Threads left active after task" + +": test=" + getName()// +// + ", delegate="+getOurDelegate().getClass().getName() + + ", startupCount=" + startupActiveThreads + + ", teardownCount("+nactive+")=" + 
+                        + ", thisThread=" + Thread.currentThread().getName()
+                        + ", threads: " + info;
+
+                if (grp.activeCount() != startupActiveThreads)
+                    log.error(failMessage);
+
+                /*
+                 * Wait up to 2 seconds for threads to die off so the next test
+                 * will run more cleanly.
+                 */
+                for (int i = 0; i < 20; i++) {
+                    Thread.sleep(100);
+                    if (grp.activeCount() != startupActiveThreads)
+                        break;
+                }
+
+            }
+
+        }
+
+        super.tearDown();
+    }
+
     /**
      * Look for a memory leak when the test calls {@link Journal#close()}
      * explicitly.
@@ -157,6 +222,62 @@

                     nunfinalized.incrementAndGet();
                     ncreated.incrementAndGet();

+                    /**
+                     * FIXME If we submit a task which registers an index on the
+                     * new journal then we once again have a memory leak in the
+                     * condition where we do not issue an explicit close against
+                     * the Journal. [Actually, submitting a read-only task which
+                     * does not cause an index to be retained does not cause a
+                     * memory leak.]
+                     *
+                     * @see https://sourceforge.net/apps/trac/bigdata/ticket/196
+                     */
+                    // force the use of the LockManager.
+                    try {
+
+                        /*
+                         * Task does not create an index, but does use the
+                         * LockManager and the WriteExecutorService.
+                         */
+                        final AbstractTask task1 = new NOpTask(
+                                jnl.getConcurrencyManager(), ITx.UNISOLATED,
+                                "name");
+
+                        /*
+                         * Task does not create an index. Since it accesses a
+                         * historical view, it does not use the LockManager or
+                         * the WriteExecutorService.
+                         *
+                         * Note: This task may be run w/o causing Journal
+                         * references to be retained. However, [task1] and
+                         * [task2] will both cause journal references to be
+                         * retained.
+                         */
+                        final AbstractTask task1b = new NOpTask(
+                                jnl.getConcurrencyManager(), ITx.READ_COMMITTED,
+                                "name");
+
+                        /*
+                         * Task uses the LockManager and the
+                         * WriteExecutorService and creates an index. A hard
+                         * reference to that index will make it into the
+                         * journal's index cache.
+                         */
+                        final AbstractTask task2 = new RegisterIndexTask(
+                                jnl.getConcurrencyManager(), "name",
+                                new IndexMetadata("name", UUID.randomUUID()));
+
+                        /*
+                         * Submit each of the tasks and *wait* for its Future.
+                         */
+                        jnl.getConcurrencyManager().submit(task1).get();
+                        jnl.getConcurrencyManager().submit(task1b).get();
+                        jnl.getConcurrencyManager().submit(task2).get();
+
+                    } catch (ExecutionException e) {
+                        log.error("Problem registering index: " + e, e);
+                    }
+
                     if (closeJournal) {
                         /*
                          * Exercise each of the ways in which we can close the
@@ -196,12 +317,37 @@

             }

-            // Demand a GC.
-            System.gc();
+            /*
+             * Loop, doing GCs and waiting for the journal(s) to be finalized.
+             */
+            int lastCount = nunfinalized.get();
+
+//            // Wait for the index references to be expired from the journal caches.
+//            Thread.sleep(60000/* ms */);

-            // Wait for it.
-            Thread.sleep(1000/* ms */);
+            for (int i = 0; i < 20 && nunfinalized.get() > 0; i++) {

+                // Demand a GC.
+                System.gc();
+
+                // Wait for it.
+                Thread.sleep(500/* ms */);
+
+                final int currentCount = nunfinalized.get();
+
+                if (currentCount != lastCount) {
+
+                    if (log.isInfoEnabled())
+                        log.info("npasses=" + (i + 1) + ", nfinalized="
+                                + (lastCount - currentCount) + ", unfinalized="
+                                + currentCount);
+
+                    lastCount = currentCount;
+
+                }
+
+            }
+
             if (log.isInfoEnabled()) {

                 log.info("Created " + ncreated + " journals.");
@@ -211,10 +357,10 @@

             }

-            if (nunfinalized.get() == ncreated.get()) {
+            if (nunfinalized.get() > 0) {

                 fail("Created " + ncreated
-                        + " journals. No journals were finalized.");
+                        + " journals, and " + nunfinalized + " journals were not finalized.");

             }

@@ -234,4 +380,26 @@

     }

+    /**
+     * A task which does nothing, but which will wait for its locks anyway.
+     */
+    private static class NOpTask extends AbstractTask<Void> {
+
+        /**
+         * @param concurrencyManager
+         * @param timestamp
+         * @param resource
+         */
+        protected NOpTask(IConcurrencyManager concurrencyManager,
+                long timestamp, String resource) {
+            super(concurrencyManager, timestamp, resource);
+        }
+
+        @Override
+        protected Void doTask() throws Exception {
+            return null;
+        }
+
+    }
+
 }

Added: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/test/com/bigdata/journal/TestSimpleReleaseTimes.java
===================================================================
--- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/test/com/bigdata/journal/TestSimpleReleaseTimes.java	                        (rev 0)
+++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/test/com/bigdata/journal/TestSimpleReleaseTimes.java	2011-12-19 16:56:48 UTC (rev 5809)
@@ -0,0 +1,137 @@
+package com.bigdata.journal;
+
+import java.io.File;
+import java.io.IOException;
+import java.text.SimpleDateFormat;
+import java.util.Date;
+import java.util.Properties;
+
+import com.bigdata.btree.IIndex;
+import com.bigdata.service.AbstractClient;
+
+/**
+ * Example of commit history usage.
+ *
+ * A series of delayed commits stores different values against a named index
+ * and stores the commit time of each.
+ *
+ * Then the store is closed and re-opened, looking up the historical values
+ * stored against the previous commit times.
+ *
+ * New data is then committed, resulting in aged records being released and a
+ * correct failure to retrieve historical state.
+ *
+ * @author Martyn Cutcher
+ */
+public class TestSimpleReleaseTimes {
+
+    public static void main(String[] args) throws IOException, InterruptedException {
+        Journal journal = new Journal(getProperties("2000"));
+
+        IIndex ti = journal.registerIndex("TestIndex");
+        journal.commit();
+
+        final long start = journal.getLastCommitTime();
+
+        // Add simple values and confirm read back
+
+        final long bryan = insertName(journal, "Bryan");
+        final long mike = insertName(journal, "Mike");
+        final long martyn = insertName(journal, "Martyn");
+
+        // Retrieve historical index
+        assert checkState(journal, bryan, "Bryan");
+        assert checkState(journal, mike, "Mike");
+        assert checkState(journal, martyn, "Martyn");
+
+        // Reopen the journal
+        journal.shutdown();
+        journal = new Journal(getProperties("2000"));
+
+        // wait for data to "age"
+        Thread.sleep(3000);
+
+        // ... confirm historical access - despite no current historical protection
+        assert checkState(journal, bryan, "Bryan");
+        assert checkState(journal, mike, "Mike");
+        assert checkState(journal, martyn, "Martyn");
+
+        // Now we will make a further commit that will
+        // release historical index state.
+        System.out.println("Current commit time: " + insertName(journal, "SomeoneElse"));
+        // FIXME Apparently a further commit is required
+        // System.out.println("Current commit time: " + insertName(journal, "Another"));
+
+        // ... confirm no historical access
+        assert !checkState(journal, bryan, "Bryan");
+        assert !checkState(journal, mike, "Mike");
+        assert !checkState(journal, martyn, "Martyn");
+
+        System.out.println("DONE");
+    }
+
+    /**
+     * Add the value to the index, commit, assert value retrieval and
+     * return the commit time.
+     */
+    static long insertName(final Journal jrnl, final String name) {
+        IIndex index = jrnl.getIndex("TestIndex");
+        index.remove("Name");
+        index.insert("Name", name);
+
+        final long ret = jrnl.commit();
+
+        if (!name.equals(index.lookup("Name"))) {
+            throw new AssertionError(name + " != " + index.lookup("Name"));
+        }
+
+        return ret;
+    }
+
+    static boolean checkState(final Journal jrnl, final long state, final String name) {
+        try {
+            IIndex istate = jrnl.getIndex("TestIndex", state);
+
+            if (istate == null) {
+                System.out.println("No index found for state: " + state);
+
+                return false;
+            }
+
+            if (!name.equals(istate.lookup("Name"))) {
+                throw new AssertionError(name + " != " + istate.lookup("Name"));
+            }
+
+            System.out.println("Index confirmed for state: " + state);
+
+            return true;
+        } catch (Exception e) {
+            e.printStackTrace();
+            throw new AssertionError("Unable to load index");
+        }
+    }
+
+    static String filename = null;
+
+    static public Properties getProperties(String releaseAge) throws IOException {
+
+        Properties properties = new Properties();
+
+        // create temporary file for this application run
+        if (filename == null)
+            filename = File.createTempFile("BIGDATA", "jnl").getAbsolutePath();
+
+        properties.setProperty(Options.FILE, filename);
+        properties.setProperty(Options.DELETE_ON_EXIT, "true");
+
+        // Set RWStore
+        properties.setProperty(Options.BUFFER_MODE, BufferMode.DiskRW.toString());
+
+        // Set minimum commit history
+        properties.setProperty("com.bigdata.service.AbstractTransactionService.minReleaseAge", releaseAge);
+
+        return properties;
+
+    }
+}

Property changes on: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/test/com/bigdata/journal/TestSimpleReleaseTimes.java
___________________________________________________________________
Added: svn:keywords
   + Id Date Revision Author HeadURL

Added: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/test/com/bigdata/rwstore/TestAllocBits.java
===================================================================
--- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/test/com/bigdata/rwstore/TestAllocBits.java	                        (rev 0)
+++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/test/com/bigdata/rwstore/TestAllocBits.java	2011-12-19 16:56:48 UTC (rev 5809)
@@ -0,0 +1,32 @@
+package com.bigdata.rwstore;
+
+import java.util.Random;
+
+import junit.framework.TestCase;
+
+public class TestAllocBits extends TestCase {
+
+    public void testBitCounts() {
+        Random r = new Random();
+
+        for (int i = 0; i < 50000; i++) {
+            final int tst = r.nextInt();
+            final int r1 = countZeros(tst);
+            final int r2 = 32 - Integer.bitCount(tst); // zeros are 32 - ones
+            assertTrue(r1 == r2);
+        }
+
+    }
+
+    int countZeros(final int tst) {
+        int cnt = 0;
+        for (int bit = 0; bit < 32; bit++) {
+            if ((tst & (1 << bit)) == 0) {
+                cnt++;
+            }
+        }
+
+        return cnt;
+    }
+
+}

Property changes on: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/test/com/bigdata/rwstore/TestAllocBits.java
___________________________________________________________________
Added: svn:keywords
   + Id Date Revision Author HeadURL

Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/test/com/bigdata/rwstore/TestRWJournal.java
===================================================================
--- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/test/com/bigdata/rwstore/TestRWJournal.java	2011-12-19 15:45:59 UTC (rev 5808)
+++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/test/com/bigdata/rwstore/TestRWJournal.java	2011-12-19 16:56:48 UTC (rev 5809)
@@ -62,6 +62,8 @@
 import com.bigdata.rawstore.AbstractRawStoreTestCase;
 import com.bigdata.rawstore.Bytes;
 import com.bigdata.rawstore.IRawStore;
+import com.bigdata.rwstore.RWStore.RawTx;
+import com.bigdata.service.AbstractTransactionService;
 import com.bigdata.util.InnerCause;
@@ -110,6 +112,9 @@
         // test suite for MRMW correctness.
         suite.addTestSuite(TestMRMW.class);

+        // ... and add TestAllocBits
+        suite.addTestSuite(TestAllocBits.class);
+
         /*
          * Pickup the basic journal test suite. This is a proxied test suite, so
          * all the tests will run with the configuration specified in this test
@@ -359,14 +364,16 @@

         }

-        public Properties getProperties() {
+        public Properties getProperties(final long retention) {

             if (log.isInfoEnabled())
                 log.info("TestRWJournal:getProperties");

             final Properties properties = super.getProperties();

+            properties.setProperty(AbstractTransactionService.Options.MIN_RELEASE_AGE, "" + retention);
+
             properties.setProperty(Options.BUFFER_MODE, BufferMode.DiskRW.toString());
 //            properties.setProperty(Options.BUFFER_MODE,
 //                    BufferMode.TemporaryRW.toString());
@@ -406,10 +413,16 @@

         protected IRawStore getStore() {

-            return new Journal(getProperties());
+            return getStore(0);

         }

+        protected IRawStore getStore(final long retention) {
+
+            return new Journal(getProperties(retention));
+
+        }
+
 //        /**
 //         * Test that allocate() pre-extends the store when a record is
 //         * allocated
@@ -1356,10 +1369,23 @@

         }

-        public void test_stressCommitIndex() {
-            Journal journal = (Journal) getStore();
+        public void test_stressCommitIndexWithRetention() {
+
+            assertEquals(1000, doStressCommitIndex(40000L /* 40 secs history */, 1000));
+
+        }
+
+        public void test_stressCommitIndexNoRetention() {
+
+            assertEquals(1, doStressCommitIndex(0L /* no history */, 1000));
+
+        }
+
+        public int doStressCommitIndex(final long retention, final int runs) {
+            Journal journal = (Journal) getStore(retention);
             try {
-                final int cRuns = 1000;
+                final int cRuns = runs;
                 for (int i = 0; i < cRuns; i++)
                     commitSomeData(journal);
@@ -1392,7 +1418,7 @@
                     }
                 }

-                assertTrue("Should be " + cRuns + " == " + records, cRuns == records);
+                return records;
             } finally {
                 journal.destroy();
@@ -1459,7 +1485,7 @@

             long faddr = bs.write(bb); // rw.alloc(buf, buf.length);

-            rw.activateTx();
+            RawTx tx = rw.newTx();

             bs.delete(faddr); // deletion prot...

[truncated message content]
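A note on the RawTx change that runs through AbstractTask, Journal and RWStore above: the old API exposed activateTx()/deactivateTx() directly, so any code path that deactivated twice could corrupt the active transaction count. The new RawTx handle makes the close idempotent by guarding it with an AtomicBoolean. The sketch below isolates that pattern in a self-contained class; the class and counter names are illustrative stand-ins, not the bigdata API.

    import java.util.concurrent.atomic.AtomicBoolean;

    // Minimal sketch, assuming a store that tracks active transactions with a
    // simple counter (standing in for RWStore's m_activeTxCount).
    public class TxHandleDemo {

        private int activeTxCount = 0;

        /** A handle whose close() may be called repeatedly but fires once. */
        public final class Tx {

            private final AtomicBoolean open = new AtomicBoolean(true);

            private Tx() {
                synchronized (TxHandleDemo.this) {
                    activeTxCount++; // corresponds to activateTx()
                }
            }

            public void close() {
                // getAndSet(false) returns true for the first caller only, so
                // the decrement below can never run twice for one handle.
                if (open.getAndSet(false)) {
                    synchronized (TxHandleDemo.this) {
                        activeTxCount--; // corresponds to deactivateTx()
                    }
                }
            }
        }

        public Tx newTx() {
            return new Tx();
        }

        public static void main(String[] args) {
            final TxHandleDemo store = new TxHandleDemo();
            final Tx tx = store.newTx();
            tx.close();
            tx.close(); // safe: the second call is a no-op
            System.out.println("activeTxCount=" + store.activeTxCount); // 0, not -1
        }
    }

This is why both completeTask() and the Journal's deactivateTx() override can call close() unconditionally without coordinating with each other.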
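The other half of the commit is the release-time rule behind getEarliestReleaseTime(): storage may be recycled up to (now - minReleaseAge), but never past the start time of the earliest active transaction. The sketch below restates that rule as a pure function so the boundary cases are easy to see; it assumes, as in the diff above, that an earliestTxStartTime of 0 means there is no active transaction.

    // Minimal sketch of the release-time calculation; the field names mirror
    // AbstractTransactionService but the class itself is illustrative.
    public final class ReleaseTimeDemo {

        static long earliestReleaseTime(final long nowMillis,
                final long minReleaseAge, final long earliestTxStartTime) {

            final long immediate = nowMillis - minReleaseAge;

            // 0 means "no active transaction", so the age bound applies alone.
            return earliestTxStartTime == 0 || immediate < earliestTxStartTime
                    ? immediate : earliestTxStartTime;
        }

        public static void main(String[] args) {
            final long now = 100_000;
            // No active tx: the bound is now - minReleaseAge.
            System.out.println(earliestReleaseTime(now, 5_000, 0));      // 95000
            // Active tx older than the age bound: the tx start time wins.
            System.out.println(earliestReleaseTime(now, 5_000, 90_000)); // 90000
            // Active tx newer than the age bound: the age bound wins.
            System.out.println(earliestReleaseTime(now, 5_000, 99_000)); // 95000
        }
    }

Note that checkDeferredFrees() then bumps the result twice before the range scan, once to include the delete blocks of the first commit point that may not be released and once for inclusive upper bound semantics, exactly as the hunk comments above describe.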
From: <tho...@us...> - 2011-12-19 19:03:55
Revision: 5810
          http://bigdata.svn.sourceforge.net/bigdata/?rev=5810&view=rev
Author:   thompsonbry
Date:     2011-12-19 19:03:45 +0000 (Mon, 19 Dec 2011)

Log Message:
-----------
Possible bug fix to [1]. The code was failing to check for a 0L (NULL) address before deleting the old record.

Updated release notes for 1.0.3 and 1.1.0.

Updated overrides for branching factors for the default KB instance in the WAR (1.1.0).

[1] https://sourceforge.net/apps/trac/bigdata/ticket/435 (Address is 0L)

Modified Paths:
--------------
    branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/AbstractBTree.java
    branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_3.txt
    branches/TERMS_REFACTOR_BRANCH/bigdata/src/java/com/bigdata/btree/AbstractBTree.java
    branches/TERMS_REFACTOR_BRANCH/bigdata/src/releases/RELEASE_1_0_3.txt
    branches/TERMS_REFACTOR_BRANCH/bigdata/src/releases/RELEASE_1_1_0.txt
    branches/TERMS_REFACTOR_BRANCH/bigdata-war/RWStore.properties

Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/AbstractBTree.java
===================================================================
--- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/AbstractBTree.java	2011-12-19 16:56:48 UTC (rev 5809)
+++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/java/com/bigdata/btree/AbstractBTree.java	2011-12-19 19:03:45 UTC (rev 5810)
@@ -2026,7 +2026,8 @@
          */
         final long curAddr = filter.disable();

-        store.delete(curAddr);
+        if (curAddr != IRawStore.NULL)
+            store.delete(curAddr);

         log.warn("Bloom filter disabled - maximum error rate would be exceeded"
                 + ": entryCount="

Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_3.txt
===================================================================
--- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_3.txt	2011-12-19 16:56:48 UTC (rev 5809)
+++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_3.txt	2011-12-19 19:03:45 UTC (rev 5810)
@@ -37,6 +37,7 @@

 1.0.3

+ - http://sourceforge.net/apps/trac/bigdata/ticket/217 (BTreeCounters does not track bytes released)
  - http://sourceforge.net/apps/trac/bigdata/ticket/269 (Refactor performance counters using accessor interface)
  - http://sourceforge.net/apps/trac/bigdata/ticket/329 (B+Tree should delete bloom filter when it is disabled.)
  - http://sourceforge.net/apps/trac/bigdata/ticket/372 (RWStore does not prune the CommitRecordIndex)

Modified: branches/TERMS_REFACTOR_BRANCH/bigdata/src/java/com/bigdata/btree/AbstractBTree.java
===================================================================
--- branches/TERMS_REFACTOR_BRANCH/bigdata/src/java/com/bigdata/btree/AbstractBTree.java	2011-12-19 16:56:48 UTC (rev 5809)
+++ branches/TERMS_REFACTOR_BRANCH/bigdata/src/java/com/bigdata/btree/AbstractBTree.java	2011-12-19 19:03:45 UTC (rev 5810)
@@ -2026,7 +2026,8 @@
          */
         final long curAddr = filter.disable();

-        store.delete(curAddr);
+        if (curAddr != IRawStore.NULL)
+            store.delete(curAddr);

         log.warn("Bloom filter disabled - maximum error rate would be exceeded"
                 + ": entryCount="

Modified: branches/TERMS_REFACTOR_BRANCH/bigdata/src/releases/RELEASE_1_0_3.txt
===================================================================
--- branches/TERMS_REFACTOR_BRANCH/bigdata/src/releases/RELEASE_1_0_3.txt	2011-12-19 16:56:48 UTC (rev 5809)
+++ branches/TERMS_REFACTOR_BRANCH/bigdata/src/releases/RELEASE_1_0_3.txt	2011-12-19 19:03:45 UTC (rev 5810)
@@ -37,6 +37,7 @@

 1.0.3

+ - http://sourceforge.net/apps/trac/bigdata/ticket/217 (BTreeCounters does not track bytes released)
  - http://sourceforge.net/apps/trac/bigdata/ticket/269 (Refactor performance counters using accessor interface)
  - http://sourceforge.net/apps/trac/bigdata/ticket/329 (B+Tree should delete bloom filter when it is disabled.)
  - http://sourceforge.net/apps/trac/bigdata/ticket/372 (RWStore does not prune the CommitRecordIndex)

Modified: branches/TERMS_REFACTOR_BRANCH/bigdata/src/releases/RELEASE_1_1_0.txt
===================================================================
--- branches/TERMS_REFACTOR_BRANCH/bigdata/src/releases/RELEASE_1_1_0.txt	2011-12-19 16:56:48 UTC (rev 5809)
+++ branches/TERMS_REFACTOR_BRANCH/bigdata/src/releases/RELEASE_1_1_0.txt	2011-12-19 19:03:45 UTC (rev 5810)
@@ -71,6 +71,7 @@

 1.0.3

+ - http://sourceforge.net/apps/trac/bigdata/ticket/217 (BTreeCounters does not track bytes released)
  - http://sourceforge.net/apps/trac/bigdata/ticket/269 (Refactor performance counters using accessor interface)
  - http://sourceforge.net/apps/trac/bigdata/ticket/329 (B+Tree should delete bloom filter when it is disabled.)
  - http://sourceforge.net/apps/trac/bigdata/ticket/372 (RWStore does not prune the CommitRecordIndex)

Modified: branches/TERMS_REFACTOR_BRANCH/bigdata-war/RWStore.properties
===================================================================
--- branches/TERMS_REFACTOR_BRANCH/bigdata-war/RWStore.properties	2011-12-19 16:56:48 UTC (rev 5809)
+++ branches/TERMS_REFACTOR_BRANCH/bigdata-war/RWStore.properties	2011-12-19 19:03:45 UTC (rev 5810)
@@ -16,9 +16,9 @@
 com.bigdata.btree.BTree.branchingFactor=128

 # Bump up the branching factor for the statement indices on the default kb.
-com.bigdata.namespace.kb.spo.com.bigdata.btree.BTree.branchingFactor=512
+com.bigdata.namespace.kb.spo.com.bigdata.btree.BTree.branchingFactor=400

 # Bump up the branching factor for the lexicon indices on the default kb.
-com.bigdata.namespace.kb.lex.com.bigdata.btree.BTree.branchingFactor=512
+com.bigdata.namespace.kb.lex.com.bigdata.btree.BTree.branchingFactor=1024

 # 200M initial extent.
 com.bigdata.journal.AbstractJournal.initialExtent=209715200
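The fix itself is a one-line guard: filter.disable() hands back the address of the bloom filter's persisted record, and that address is 0L, the IRawStore.NULL named in the diff, when the filter is disabled before it was ever written. The sketch below shows the shape of the guarded release; the reduced store interface and holder class are illustrative, not the bigdata types.

    // Minimal sketch, assuming a store where address 0L means "never written".
    interface SimpleStore {
        long NULL = 0L;
        void delete(long addr);
    }

    class BloomFilterHolder {

        private long addrOnStore = SimpleStore.NULL;

        /** Disable the filter, returning the address of its record, if any. */
        long disable() {
            final long cur = addrOnStore;
            addrOnStore = SimpleStore.NULL;
            return cur;
        }

        void disableAndRelease(final SimpleStore store) {
            final long curAddr = disable();
            if (curAddr != SimpleStore.NULL) { // the ticket #435 guard
                store.delete(curAddr);
            }
        }

        public static void main(String[] args) {
            final SimpleStore store = addr -> {
                throw new IllegalStateException("must not delete NULL: " + addr);
            };
            // The filter was never persisted, so nothing is deleted and no error is thrown.
            new BloomFilterHolder().disableAndRelease(store);
            System.out.println("OK");
        }
    }

Without the guard, the code attempted to delete address 0L, which is the failure reported in ticket 435 (Address is 0L).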
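The RWStore.properties hunk above also illustrates bigdata's per-namespace override convention: a global com.bigdata.btree.BTree.branchingFactor default, plus keys scoped under com.bigdata.namespace.kb.spo and com.bigdata.namespace.kb.lex that override it for the statement and lexicon indices of the default kb instance. Here is a sketch of assembling the same configuration programmatically; the keys and values are taken from the diff, and building them in Java rather than a properties file is purely for illustration.

    import java.util.Properties;

    public class BranchingFactorConfig {

        public static Properties kbProperties() {
            final Properties p = new Properties();

            // Global default for all B+Tree instances.
            p.setProperty("com.bigdata.btree.BTree.branchingFactor", "128");

            // Statement (spo) indices of the default "kb" namespace.
            p.setProperty(
                "com.bigdata.namespace.kb.spo.com.bigdata.btree.BTree.branchingFactor",
                "400");

            // Lexicon (lex) indices of the default "kb" namespace.
            p.setProperty(
                "com.bigdata.namespace.kb.lex.com.bigdata.btree.BTree.branchingFactor",
                "1024");

            return p;
        }
    }

Note that the 1.1.0 tuning moves the two overrides in opposite directions, from 512 down to 400 for the statement indices and from 512 up to 1024 for the lexicon indices, rather than sharing a single value.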
From: <tho...@us...> - 2011-12-20 12:16:31
Revision: 5814
          http://bigdata.svn.sourceforge.net/bigdata/?rev=5814&view=rev
Author:   thompsonbry
Date:     2011-12-20 12:16:20 +0000 (Tue, 20 Dec 2011)

Log Message:
-----------
Updated release notes.

Modified Paths:
--------------
    branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_3.txt
    branches/TERMS_REFACTOR_BRANCH/bigdata/src/releases/RELEASE_1_0_3.txt
    branches/TERMS_REFACTOR_BRANCH/bigdata/src/releases/RELEASE_1_1_0.txt

Modified: branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_3.txt
===================================================================
--- branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_3.txt	2011-12-20 09:03:03 UTC (rev 5813)
+++ branches/BIGDATA_RELEASE_1_0_0/bigdata/src/releases/RELEASE_1_0_3.txt	2011-12-20 12:16:20 UTC (rev 5814)
@@ -53,6 +53,8 @@
  - http://sourceforge.net/apps/trac/bigdata/ticket/425 (Bug fix to DEFAULT_RDF_FORMAT for bulk data loader in scale-out)
  - http://sourceforge.net/apps/trac/bigdata/ticket/426 (Support either lockfile (procmail) and dotlockfile (liblockfile1) in scale-out)
  - http://sourceforge.net/apps/trac/bigdata/ticket/427 (BigdataSail#getReadOnlyConnection() race condition with concurrent commit)
+ - http://sourceforge.net/apps/trac/bigdata/ticket/435 (Address is 0L)
+ - http://sourceforge.net/apps/trac/bigdata/ticket/436 (TestMROWTransactions failure in CI)

 1.0.2

Modified: branches/TERMS_REFACTOR_BRANCH/bigdata/src/releases/RELEASE_1_0_3.txt
===================================================================
--- branches/TERMS_REFACTOR_BRANCH/bigdata/src/releases/RELEASE_1_0_3.txt	2011-12-20 09:03:03 UTC (rev 5813)
+++ branches/TERMS_REFACTOR_BRANCH/bigdata/src/releases/RELEASE_1_0_3.txt	2011-12-20 12:16:20 UTC (rev 5814)
@@ -53,6 +53,8 @@
  - http://sourceforge.net/apps/trac/bigdata/ticket/425 (Bug fix to DEFAULT_RDF_FORMAT for bulk data loader in scale-out)
  - http://sourceforge.net/apps/trac/bigdata/ticket/426 (Support either lockfile (procmail) and dotlockfile (liblockfile1) in scale-out)
  - http://sourceforge.net/apps/trac/bigdata/ticket/427 (BigdataSail#getReadOnlyConnection() race condition with concurrent commit)
+ - http://sourceforge.net/apps/trac/bigdata/ticket/435 (Address is 0L)
+ - http://sourceforge.net/apps/trac/bigdata/ticket/436 (TestMROWTransactions failure in CI)

 1.0.2

Modified: branches/TERMS_REFACTOR_BRANCH/bigdata/src/releases/RELEASE_1_1_0.txt
===================================================================
--- branches/TERMS_REFACTOR_BRANCH/bigdata/src/releases/RELEASE_1_1_0.txt	2011-12-20 09:03:03 UTC (rev 5813)
+++ branches/TERMS_REFACTOR_BRANCH/bigdata/src/releases/RELEASE_1_1_0.txt	2011-12-20 12:16:20 UTC (rev 5814)
@@ -68,7 +68,7 @@
  - http://sourceforge.net/apps/trac/bigdata/ticket/408 (xsd:string cast fails for non-numeric data)
  - http://sourceforge.net/apps/trac/bigdata/ticket/421 (New query hints model.)
  - http://sourceforge.net/apps/trac/bigdata/ticket/431 (Use of read-only tx per query defeats cache on cluster)
-
+
 1.0.3
@@ -87,7 +87,9 @@
  - http://sourceforge.net/apps/trac/bigdata/ticket/425 (Bug fix to DEFAULT_RDF_FORMAT for bulk data loader in scale-out)
  - http://sourceforge.net/apps/trac/bigdata/ticket/426 (Support either lockfile (procmail) and dotlockfile (liblockfile1) in scale-out)
  - http://sourceforge.net/apps/trac/bigdata/ticket/427 (BigdataSail#getReadOnlyConnection() race condition with concurrent commit)
-
+ - http://sourceforge.net/apps/trac/bigdata/ticket/435 (Address is 0L)
+ - http://sourceforge.net/apps/trac/bigdata/ticket/436 (TestMROWTransactions failure in CI)
+
 1.0.2

  - http://sourceforge.net/apps/trac/bigdata/ticket/32 (Query time expansion of (foo rdf:type rdfs:Resource) drags in SPORelation for scale-out.)