This list is closed, nobody may subscribe to it.
From: <dme...@us...> - 2014-05-21 18:53:19
Revision: 8409
http://sourceforge.net/p/bigdata/code/8409
Author: dmekonnen
Date: 2014-05-21 18:53:16 +0000 (Wed, 21 May 2014)
Log Message:
-----------
removed commented out lines related to nss-brew packaging
Modified Paths:
--------------
branches/DEPLOYMENT_BRANCH_1_3_1/build.xml
Modified: branches/DEPLOYMENT_BRANCH_1_3_1/build.xml
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/build.xml 2014-05-21 18:36:01 UTC (rev 8408)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/build.xml 2014-05-21 18:53:16 UTC (rev 8409)
@@ -1273,7 +1273,6 @@
<copy file="${src.resources}/deployment/nss/bin/startNSS"
todir="${dist.bin}" />
<chmod file="${dist.bin}/startNSS" perm="755" />
- <!-- copy file="${src.resources}/deployment/nss/etc/jetty.xml" todir="${dist.var.jetty}/etc" overwrite="true" / -->
<copy file="${src.resources}/deployment/nss/WEB-INF/RWStore.properties"
todir="${dist.var.jetty}/WEB-INF" overwrite="true" />
<copy file="${src.resources}/deployment/nss/WEB-INF/classes/log4j.properties"
@@ -1378,7 +1377,6 @@
<exclude name="bigdata/lib-ext" />
<include name="bigdata/var/jetty/**" />
<include name="bigdata/var/config/logging/logging.properties" />
- <!-- exclude name="bigdata/var/jetty/jetty.xml" / -->
<exclude name="bigdata/var/jetty/html/new.html" />
<exclude name="bigdata/var/jetty/html/old.html" />
</tarfileset>
This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site.
From: <dme...@us...> - 2014-05-21 18:36:06
Revision: 8408
http://sourceforge.net/p/bigdata/code/8408
Author: dmekonnen
Date: 2014-05-21 18:36:01 +0000 (Wed, 21 May 2014)
Log Message:
-----------
updates to remove jetty template dependency
Modified Paths:
--------------
branches/DEPLOYMENT_BRANCH_1_3_1/build.xml
Modified: branches/DEPLOYMENT_BRANCH_1_3_1/build.xml
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/build.xml 2014-05-21 18:33:09 UTC (rev 8407)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/build.xml 2014-05-21 18:36:01 UTC (rev 8408)
@@ -1378,7 +1378,7 @@
<exclude name="bigdata/lib-ext" />
<include name="bigdata/var/jetty/**" />
<include name="bigdata/var/config/logging/logging.properties" />
- <exclude name="bigdata/var/jetty/jetty.xml" />
+ <!-- exclude name="bigdata/var/jetty/jetty.xml" / -->
<exclude name="bigdata/var/jetty/html/new.html" />
<exclude name="bigdata/var/jetty/html/old.html" />
</tarfileset>
From: <dme...@us...> - 2014-05-21 18:33:12
Revision: 8407
http://sourceforge.net/p/bigdata/code/8407
Author: dmekonnen
Date: 2014-05-21 18:33:09 +0000 (Wed, 21 May 2014)
Log Message:
-----------
updates to remove jetty template dependency
Modified Paths:
--------------
branches/DEPLOYMENT_BRANCH_1_3_1/src/resources/deployment/nss/bin/startNSS
Removed Paths:
-------------
branches/DEPLOYMENT_BRANCH_1_3_1/src/resources/deployment/nss/etc/
Modified: branches/DEPLOYMENT_BRANCH_1_3_1/src/resources/deployment/nss/bin/startNSS
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/src/resources/deployment/nss/bin/startNSS 2014-05-21 17:39:04 UTC (rev 8406)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/src/resources/deployment/nss/bin/startNSS 2014-05-21 18:33:09 UTC (rev 8407)
@@ -34,7 +34,7 @@
export JETTY_PORT="8080"
fi
if [ -z "${JETTY_XML}" ]; then
- export JETTY_XML="${JETTY_DIR}/etc/jetty.xml"
+ export JETTY_XML="${JETTY_DIR}/jetty.xml"
fi
if [ -z "${JETTY_RESOURCE_BASE}" ]; then
export JETTY_RESOURCE_BASE="${JETTY_DIR}"
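The startNSS change in r8407 relies on a common shell idiom: default an environment variable only when the caller has not already set it. A minimal sketch of that pattern (the default path below is hypothetical, not the actual install layout):

```shell
#!/bin/bash
# Set JETTY_XML only if it is unset or empty; a caller-supplied value wins.
if [ -z "${JETTY_XML}" ]; then
    export JETTY_XML="/opt/bigdata/var/jetty/jetty.xml"
fi
echo "JETTY_XML=${JETTY_XML}"
```

Because `-z` tests for the empty string, an exported but empty `JETTY_XML` is also replaced by the default.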
From: <mrp...@us...> - 2014-05-21 17:39:09
Revision: 8406
http://sourceforge.net/p/bigdata/code/8406
Author: mrpersonick
Date: 2014-05-21 17:39:04 +0000 (Wed, 21 May 2014)
Log Message:
-----------
bash script to start bigdata server (non-HA)
Added Paths:
-----------
branches/BIGDATA_RELEASE_1_3_0/src/resources/bin/bigdata.sh
Added: branches/BIGDATA_RELEASE_1_3_0/src/resources/bin/bigdata.sh
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/src/resources/bin/bigdata.sh (rev 0)
+++ branches/BIGDATA_RELEASE_1_3_0/src/resources/bin/bigdata.sh 2014-05-21 17:39:04 UTC (rev 8406)
@@ -0,0 +1,61 @@
+#!/bin/bash
+
+# Start the services and put the JVM in the background. All services will
+# run in a single JVM. See Apache River com.sun.jini.start.ServiceStarter
+# for more details. The services are configured in the accompanying
+# startHAServices.config file. Specific configuration options for each
+# service are defined in the documentation for that service.
+#
+# Note: One drawback with running each service in the same JVM is that the
+# GC load of all services is combined and all services would be suspended
+# at the same time by a Full GC pass. If this is a problem, then you can
+# break out the river services (ClassServer and Reggie) into a separate
+# ServiceStarter instance from the HAJournalServer.
+
+# The top-level of the installation.
+pushd `dirname $0` > /dev/null;cd ..;INSTALL_DIR=`pwd`;popd > /dev/null
+
+##
+# HAJournalServer configuration parameter overrides (see HAJournal.config).
+#
+# The bigdata HAJournal.config file may be heavily parameterized through
+# environment variables that get passed through into the JVM started by
+# this script and are thus made available to the HAJournalServer when it
+# interprets the contents of the HAJournal.config file. See HAJournal.config
+# for the meaning of these environment variables.
+#
+# Note: Many of these properties have defaults.
+##
+
+export JETTY_XML="${INSTALL_DIR}/var/jetty/jetty.xml"
+export JETTY_RESOURCE_BASE="${INSTALL_DIR}/var/jetty"
+export LIB_DIR=${INSTALL_DIR}/lib
+export CONFIG_DIR=${INSTALL_DIR}/var/config
+export LOG4J_CONFIG=${CONFIG_DIR}/logging/log4j.properties
+
+# TODO Explicitly enumerate JARs so we can control order if necessary and
+# deploy on OS without find and tr.
+export HAJOURNAL_CLASSPATH=`find ${LIB_DIR} -name '*.jar' -print0 | tr '\0' ':'`
+
+export JAVA_OPTS="\
+ -server -Xmx4G\
+ -Dlog4j.configuration=${LOG4J_CONFIG}\
+ -Djetty.resourceBase=${JETTY_RESOURCE_BASE}\
+ -DJETTY_XML=${JETTY_XML}\
+"
+
+cmd="java ${JAVA_OPTS} \
+ -server -Xmx4G \
+ -cp ${HAJOURNAL_CLASSPATH} \
+ com.bigdata.rdf.sail.webapp.NanoSparqlServer \
+ 9999 kb \
+ ${INSTALL_DIR}/var/jetty/WEB-INF/GraphStore.properties \
+"
+echo "Running: $cmd"
+$cmd&
+pid=$!
+# echo "PID=$pid"
+echo "kill $pid" > stop.sh
+chmod +w stop.sh
+
+# Note: To obtain the pid, do: read pid < "$pidFile"
Property changes on: branches/BIGDATA_RELEASE_1_3_0/src/resources/bin/bigdata.sh
___________________________________________________________________
Added: svn:executable
## -0,0 +1 ##
+*
\ No newline at end of property
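The classpath line in bigdata.sh above (`find ... -print0 | tr '\0' ':'`) joins every jar under `LIB_DIR` into a single `:`-separated list. A self-contained sketch of the same idiom, using a throwaway directory and hypothetical jar names:

```shell
#!/bin/bash
# Build a ':'-separated classpath from all jars under LIB_DIR,
# as bigdata.sh does for HAJOURNAL_CLASSPATH.
LIB_DIR=$(mktemp -d)
touch "${LIB_DIR}/a.jar" "${LIB_DIR}/b.jar"
HAJOURNAL_CLASSPATH=$(find "${LIB_DIR}" -name '*.jar' -print0 | tr '\0' ':')
echo "${HAJOURNAL_CLASSPATH}"
```

Note that the result carries a trailing `:` (each NUL is translated, including the last); the script's own TODO about enumerating jars explicitly would also sidestep the dependency on `find` and `tr`.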
From: <dme...@us...> - 2014-05-21 17:01:39
Revision: 8405
http://sourceforge.net/p/bigdata/code/8405
Author: dmekonnen
Date: 2014-05-21 17:01:36 +0000 (Wed, 21 May 2014)
Log Message:
-----------
test for nss packaging
Modified Paths:
--------------
branches/DEPLOYMENT_BRANCH_1_3_1/build.xml
Modified: branches/DEPLOYMENT_BRANCH_1_3_1/build.xml
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/build.xml 2014-05-21 16:56:51 UTC (rev 8404)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/build.xml 2014-05-21 17:01:36 UTC (rev 8405)
@@ -1273,8 +1273,7 @@
<copy file="${src.resources}/deployment/nss/bin/startNSS"
todir="${dist.bin}" />
<chmod file="${dist.bin}/startNSS" perm="755" />
- <copy file="${src.resources}/deployment/nss/etc/jetty.xml"
- todir="${dist.var.jetty}/etc" overwrite="true" />
+ <!-- copy file="${src.resources}/deployment/nss/etc/jetty.xml" todir="${dist.var.jetty}/etc" overwrite="true" / -->
<copy file="${src.resources}/deployment/nss/WEB-INF/RWStore.properties"
todir="${dist.var.jetty}/WEB-INF" overwrite="true" />
<copy file="${src.resources}/deployment/nss/WEB-INF/classes/log4j.properties"
From: <tho...@us...> - 2014-05-21 16:56:55
Revision: 8404
http://sourceforge.net/p/bigdata/code/8404
Author: thompsonbry
Date: 2014-05-21 16:56:51 +0000 (Wed, 21 May 2014)
Log Message:
-----------
Fixed the SVN URL.
Modified Paths:
--------------
branches/BIGDATA_RELEASE_1_3_0/bigdata/src/releases/RELEASE_1_3_1.txt
Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata/src/releases/RELEASE_1_3_1.txt
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata/src/releases/RELEASE_1_3_1.txt 2014-05-21 16:32:00 UTC (rev 8403)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata/src/releases/RELEASE_1_3_1.txt 2014-05-21 16:56:51 UTC (rev 8404)
@@ -16,7 +16,7 @@
You can checkout this release from:
-https://svn.code.sf.net/p/bigdata/code/branches/BIGDATA_RELEASE_1_3_1
+https://svn.code.sf.net/p/bigdata/code/tags/BIGDATA_RELEASE_1_3_1
New features:
From: <dme...@us...> - 2014-05-21 16:32:04
Revision: 8403
http://sourceforge.net/p/bigdata/code/8403
Author: dmekonnen
Date: 2014-05-21 16:32:00 +0000 (Wed, 21 May 2014)
Log Message:
-----------
test fix for HA and NSS startup
Modified Paths:
--------------
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/jetty.xml
Modified: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/jetty.xml
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/jetty.xml 2014-05-21 16:31:06 UTC (rev 8402)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/jetty.xml 2014-05-21 16:32:00 UTC (rev 8403)
@@ -110,9 +110,9 @@
</Item>
</Array>
</Arg>
- <Set name="host"><SystemProperty name="jetty.host" /></Set>
- <Set name="port"><SystemProperty name="jetty.port" default="8080" /></Set>
- <Set name="idleTimeout"><SystemProperty name="http.timeout" default="30000"/></Set>
+ <Set name="host"><Property name="jetty.host" /></Set>
+ <Set name="port"><Property name="jetty.port" default="8080" /></Set>
+ <Set name="idleTimeout"><Property name="http.timeout" default="30000"/></Set>
</New>
</Arg>
</Call>
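The back-and-forth in r8401–r8403 turns on how Jetty's XmlConfiguration resolves these two elements: `<SystemProperty>` reads JVM system properties (values passed with `-D` on the command line), while `<Property>` reads properties supplied to XmlConfiguration itself, for example via Jetty's start mechanism; whether `<Property>` also falls back to system properties depends on the Jetty version. A hypothetical minimal fragment contrasting the two, in the style of the diffed jetty.xml:

```xml
<!-- Resolved from a JVM flag such as -Djetty.port=9999: -->
<Set name="port"><SystemProperty name="jetty.port" default="8080" /></Set>

<!-- Resolved from properties handed to XmlConfiguration; may not see
     plain -D system properties on all Jetty versions: -->
<Set name="port"><Property name="jetty.port" default="8080" /></Set>
```

That version-dependent difference is consistent with the revert in r8402 ("wrong branch") and the re-application in r8403 for the deployment branch.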
From: <dme...@us...> - 2014-05-21 16:31:09
Revision: 8402
http://sourceforge.net/p/bigdata/code/8402
Author: dmekonnen
Date: 2014-05-21 16:31:06 +0000 (Wed, 21 May 2014)
Log Message:
-----------
wrong branch, reverting change
Modified Paths:
--------------
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/jetty.xml
Modified: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/jetty.xml
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/jetty.xml 2014-05-21 16:29:27 UTC (rev 8401)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/jetty.xml 2014-05-21 16:31:06 UTC (rev 8402)
@@ -110,9 +110,9 @@
</Item>
</Array>
</Arg>
- <Set name="host"><Property name="jetty.host" /></Set>
- <Set name="port"><Property name="jetty.port" default="8080" /></Set>
- <Set name="idleTimeout"><Property name="http.timeout" default="30000"/></Set>
+ <Set name="host"><SystemProperty name="jetty.host" /></Set>
+ <Set name="port"><SystemProperty name="jetty.port" default="8080" /></Set>
+ <Set name="idleTimeout"><SystemProperty name="http.timeout" default="30000"/></Set>
</New>
</Arg>
</Call>
From: <dme...@us...> - 2014-05-21 16:29:31
Revision: 8401
http://sourceforge.net/p/bigdata/code/8401
Author: dmekonnen
Date: 2014-05-21 16:29:27 +0000 (Wed, 21 May 2014)
Log Message:
-----------
test fix for HA and NSS startup
Modified Paths:
--------------
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/jetty.xml
Modified: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/jetty.xml
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/jetty.xml 2014-05-21 16:26:25 UTC (rev 8400)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/jetty.xml 2014-05-21 16:29:27 UTC (rev 8401)
@@ -110,9 +110,9 @@
</Item>
</Array>
</Arg>
- <Set name="host"><SystemProperty name="jetty.host" /></Set>
- <Set name="port"><SystemProperty name="jetty.port" default="8080" /></Set>
- <Set name="idleTimeout"><SystemProperty name="http.timeout" default="30000"/></Set>
+ <Set name="host"><Property name="jetty.host" /></Set>
+ <Set name="port"><Property name="jetty.port" default="8080" /></Set>
+ <Set name="idleTimeout"><Property name="http.timeout" default="30000"/></Set>
</New>
</Arg>
</Call>
@@ -166,4 +166,4 @@
<Set name="dumpAfterStart"><Property name="jetty.dump.start" default="false"/></Set>
<Set name="dumpBeforeStop"><Property name="jetty.dump.stop" default="false"/></Set>
-</Configure>
\ No newline at end of file
+</Configure>
From: <tob...@us...> - 2014-05-21 16:26:33
Revision: 8400
http://sourceforge.net/p/bigdata/code/8400
Author: tobycraig
Date: 2014-05-21 16:26:25 +0000 (Wed, 21 May 2014)
Log Message:
-----------
Fixed submit button on explore tab being overwritten with URI
Modified Paths:
--------------
branches/NEW_WORKBENCH_1_3_2_BRANCH/bigdata-war/src/html/js/workbench.js
Modified: branches/NEW_WORKBENCH_1_3_2_BRANCH/bigdata-war/src/html/js/workbench.js
===================================================================
--- branches/NEW_WORKBENCH_1_3_2_BRANCH/bigdata-war/src/html/js/workbench.js 2014-05-21 14:52:13 UTC (rev 8399)
+++ branches/NEW_WORKBENCH_1_3_2_BRANCH/bigdata-war/src/html/js/workbench.js 2014-05-21 16:26:25 UTC (rev 8400)
@@ -932,7 +932,7 @@
$('#explore-form').submit(function(e) {
e.preventDefault();
- var uri = $(this).find('input').val().trim();
+ var uri = $(this).find('input[type="text"]').val().trim();
if(uri) {
// add < > if they're not present
if(uri[0] != '<') {
@@ -941,7 +941,7 @@
if(uri.slice(-1) != '>') {
uri += '>';
}
- $(this).find('input').val(uri);
+ $(this).find('input[type="text"]').val(uri);
loadURI(uri);
// if this is a SID, make the components clickable
From: <dme...@us...> - 2014-05-21 14:52:17
Revision: 8399
http://sourceforge.net/p/bigdata/code/8399
Author: dmekonnen
Date: 2014-05-21 14:52:13 +0000 (Wed, 21 May 2014)
Log Message:
-----------
Set tar extraction not to be 1.3.0 specific.
Modified Paths:
--------------
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/resources/deployment/chef/recipes/high_availability.rb
Modified: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/resources/deployment/chef/recipes/high_availability.rb
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/resources/deployment/chef/recipes/high_availability.rb 2014-05-21 10:27:42 UTC (rev 8398)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/resources/deployment/chef/recipes/high_availability.rb 2014-05-21 14:52:13 UTC (rev 8399)
@@ -73,7 +73,7 @@
user node['bigdata'][:user]
group node['bigdata'][:group]
cwd "#{node['bigdata'][:home]}/.."
- command "tar xvf #{node['bigdata'][:source_dir]}/REL.bigdata-1.3.0-*.tgz"
+ command "tar xvf #{node['bigdata'][:source_dir]}/REL.bigdata-1.*.tgz"
end
#
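The one-character widening of the glob in the Chef recipe above changes which archives `tar` can see: `REL.bigdata-1.3.0-*.tgz` matches only 1.3.0 builds, while `REL.bigdata-1.*.tgz` matches any 1.x release. A quick illustration with hypothetical file names:

```shell
#!/bin/bash
# Compare the old (version-pinned) and new (version-agnostic) globs.
demo=$(mktemp -d) && cd "${demo}"
touch REL.bigdata-1.3.0-20140521.tgz REL.bigdata-1.3.1-20140522.tgz
echo "old pattern matches:" REL.bigdata-1.3.0-*.tgz
echo "new pattern matches:" REL.bigdata-1.*.tgz
```

With both files present, the old pattern expands to one name and the new pattern to both, so the recipe no longer needs editing for each point release.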
From: <dme...@us...> - 2014-05-21 10:27:45
Revision: 8398
http://sourceforge.net/p/bigdata/code/8398
Author: dmekonnen
Date: 2014-05-21 10:27:42 +0000 (Wed, 21 May 2014)
Log Message:
-----------
deletions.
Removed Paths:
-------------
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/lib/bigdata-ganglia-1.0.1.jar
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/lib/bigdata-ganglia-1.0.2.jar
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/lib/blueprints-core-2.4.0.jar
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/java/com/bigdata/blueprints/BigdataBlueprintsGraph.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/java/com/bigdata/blueprints/BigdataEventTransactionalGraph.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/java/com/bigdata/blueprints/QueryManager.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/service/IDoNotJoinService.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/IHALoadBalancerPolicy.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/lbs/policy/ganglia/HostTable.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/lbs/policy/ganglia/IHostScoringRule.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/lbs/policy/ganglia/NOPHostScoringRule.java
Deleted: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/lib/bigdata-ganglia-1.0.1.jar
===================================================================
(Binary files differ)
Deleted: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/lib/bigdata-ganglia-1.0.2.jar
===================================================================
(Binary files differ)
Deleted: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/lib/blueprints-core-2.4.0.jar
===================================================================
(Binary files differ)
Deleted: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/java/com/bigdata/blueprints/BigdataBlueprintsGraph.java
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/java/com/bigdata/blueprints/BigdataBlueprintsGraph.java 2014-05-21 10:23:40 UTC (rev 8397)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/java/com/bigdata/blueprints/BigdataBlueprintsGraph.java 2014-05-21 10:27:42 UTC (rev 8398)
@@ -1,141 +0,0 @@
-package com.bigdata.blueprints;
-
-import java.util.ArrayList;
-import java.util.HashMap;
-import java.util.List;
-
-import sun.reflect.generics.reflectiveObjects.NotImplementedException;
-
-import com.tinkerpop.blueprints.Edge;
-import com.tinkerpop.blueprints.Features;
-import com.tinkerpop.blueprints.GraphQuery;
-import com.tinkerpop.blueprints.TransactionalGraph;
-import com.tinkerpop.blueprints.Vertex;
-
-
-public abstract class BigdataBlueprintsGraph implements BigdataEventTransactionalGraph {
- // elements that we will be deleting from the store
- private ArrayList<BigdataElement> removedElements = new ArrayList<BigdataElement>();
- // vertices that we will be adding to the store
- private HashMap<String,BigdataVertex> addedVertices = new HashMap<String,BigdataVertex>();
- // elements that we will be adding to the store
- private HashMap<String,BigdataEdge> addedEdges = new HashMap<String,BigdataEdge>();
- private QueryManager qm = null;
-
- public BigdataBlueprintsGraph () { }
-
- public BigdataBlueprintsGraph (QueryManager qm) { this.qm = qm; }
-
- public void setQueryManager(QueryManager qm) { this.qm = qm; }
- public QueryManager getQueryManager() { return qm; }
-
- public void commit() {
- // form and submit query
- //
- //
- //
- throwUnimplemented( "commit" );
- }
-
- public void rollback() {
- throwUnimplemented( "rollback" );
- }
-
- public void stopTransaction(TransactionalGraph.Conclusion conclusion) {
- throwUnimplemented( "stopTransaction" );
- }
-
- public void shutdown() {
- throwUnimplemented( "shutdown" );
- }
-
- public Vertex getVertex(Object id) {
- // we can only remove an item from the "add" queue
- return addedVertices.get( (String) id );
- }
-
- public BigdataBlueprintsGraph getBasseGraph() { return this; }
-
- public Edge addEdge(Object id, BigdataVertex outVertex, BigdataVertex inVertex, String label) {
- BigdataEdge edge = new BigdataEdge( (String)id, outVertex, inVertex, label );
- addedEdges.put((String)id, edge);
- return edge;
- }
-
- public Features getFeatures() {
- throwUnimplemented( "getFeatures" );
- return (Features)null;
- }
-
- public Vertex addVertex(Object id) {
- BigdataVertex v = new BigdataVertex( (String)id );
- addedVertices.put( (String)id, v );
- return v;
- }
-
- public void removeVertex(BigdataVertex vertex) {
- addedVertices.remove( vertex.getId() ); // if present
- removedElements.add( vertex );
- }
-
- public Iterable<Vertex> getVertices(String key, Object value) {
- throwUnimplemented( "getVertices(String key, Object value)" );
- return (Iterable<Vertex>)null;
- }
-
- public Iterable<Vertex> getVertices() {
- // we only return what is in the "add" queue
- final List<Vertex> vertexList = new ArrayList<Vertex>();
- vertexList.addAll( addedVertices.values() );
- return vertexList;
- }
-
- public Edge getEdge(Object id) {
- // we can only remove an item from the "add" queue
- return addedEdges.get( (String) id );
- }
-
- public void removeEdge(BigdataEdge edge) {
- addedEdges.remove( edge.getId() ); // if present
- removedElements.add( edge );
- }
-
- public Iterable<Edge> getEdges(String key, Object value) {
- throwUnimplemented( "getEdges(String key, Object value)" );
- return (Iterable<Edge>)null;
- }
-
- public Iterable<Edge> getEdges() {
- // we only return what is in the add queue
- final List<Edge> edgeList = new ArrayList<Edge>();
- edgeList.addAll( addedEdges.values() );
- return edgeList;
- }
-
- public GraphQuery query() {
- throwUnimplemented( "queries" );
- return (GraphQuery)null;
- }
-
- // @SuppressWarnings("deprecation")
- private void throwUnimplemented(String method) {
- // unchecked( new Exception( "The '" + method + "' has not been implemented." ) );
- throw new NotImplementedException();
- }
-
-
- /* Maybe use later
- *
- public static RuntimeException unchecked(Throwable e) {
- BigdataBlueprintsGraph.<RuntimeException>throwAny(e);
- return null;
- }
-
- @SuppressWarnings("unchecked")
- private static <E extends Throwable> void throwAny(Throwable e) throws E {
- throw (E)e;
- }
- */
-
-}
-
Deleted: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/java/com/bigdata/blueprints/BigdataEventTransactionalGraph.java
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/java/com/bigdata/blueprints/BigdataEventTransactionalGraph.java 2014-05-21 10:23:40 UTC (rev 8397)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/java/com/bigdata/blueprints/BigdataEventTransactionalGraph.java 2014-05-21 10:27:42 UTC (rev 8398)
@@ -1,8 +0,0 @@
-package com.bigdata.blueprints;
-
-import com.tinkerpop.blueprints.Graph;
-import com.tinkerpop.blueprints.ThreadedTransactionalGraph;
-
-public interface BigdataEventTransactionalGraph extends Graph, ThreadedTransactionalGraph {
-
-}
Deleted: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/java/com/bigdata/blueprints/QueryManager.java
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/java/com/bigdata/blueprints/QueryManager.java 2014-05-21 10:23:40 UTC (rev 8397)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/java/com/bigdata/blueprints/QueryManager.java 2014-05-21 10:27:42 UTC (rev 8398)
@@ -1,54 +0,0 @@
-package com.bigdata.blueprints;
-
-import java.util.List;
-
-
-public interface QueryManager {
-
- /*
- * Set the SPARQL endpoint to exchange with.
- */
- public void setEndpoint( String endpointURL );
-
- /*
- * Set the Vertices and Edges to form deletion queries from
- */
- public void setDeleteElements( List<BigdataElement> elements );
-
- /*
- * Set the Vertices and Edges to form insertion queries from
- */
- public void setInsertElements( List<BigdataElement> elements );
-
- /*
- * Resets private query variables to null.
- */
- public void reset();
-
- /*
- * Returns the aggregate INSERT{ } clause for asserted triples.
- */
- public String getInsertClause();
-
- /*
- * Returns the aggregate DELETE{ } clause for asserted triples.
- */
- public String getDeleteClause();
-
- /*
- * Returns an array of DELETE{ <URI> ?p ?o } WHERE{ <URI> ?p ?o } queries to delete every
- * triple of an Element.
- */
- public String[] getDeleteQueries();
-
- /*
- * Build the internal representation of the queries.
- */
- public void buildQUeries();
-
- /*
- * Submits the update query to the server, no result set returned.
- */
- public void commitUpdate();
-
-}
Deleted: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/service/IDoNotJoinService.java
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/service/IDoNotJoinService.java 2014-05-21 10:23:40 UTC (rev 8397)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/service/IDoNotJoinService.java 2014-05-21 10:27:42 UTC (rev 8398)
@@ -1,35 +0,0 @@
-/**
-
-Copyright (C) SYSTAP, LLC 2006-2014. All rights reserved.
-
-Contact:
- SYSTAP, LLC
- 4501 Tower Road
- Greensboro, NC 27410
- lic...@bi...
-
-This program is free software; you can redistribute it and/or modify
-it under the terms of the GNU General Public License as published by
-the Free Software Foundation; version 2 of the License.
-
-This program is distributed in the hope that it will be useful,
-but WITHOUT ANY WARRANTY; without even the implied warranty of
-MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-GNU General Public License for more details.
-
-You should have received a copy of the GNU General Public License
-along with this program; if not, write to the Free Software
-Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-*/
-package com.bigdata.rdf.sparql.ast.service;
-
-/**
- * Service calls can implement this interface and they will not be routed
- * through a hash join in the query plan. They will be responsible for their
- * own join internally.
- *
- * @author mikepersonick
- */
-public interface IDoNotJoinService {
-
-}
Deleted: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/IHALoadBalancerPolicy.java
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/IHALoadBalancerPolicy.java 2014-05-21 10:23:40 UTC (rev 8397)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/IHALoadBalancerPolicy.java 2014-05-21 10:27:42 UTC (rev 8398)
@@ -1,104 +0,0 @@
-/**
-Copyright (C) SYSTAP, LLC 2006-2007. All rights reserved.
-
-Contact:
- SYSTAP, LLC
- 4501 Tower Road
- Greensboro, NC 27410
- lic...@bi...
-
-This program is free software; you can redistribute it and/or modify
-it under the terms of the GNU General Public License as published by
-the Free Software Foundation; version 2 of the License.
-
-This program is distributed in the hope that it will be useful,
-but WITHOUT ANY WARRANTY; without even the implied warranty of
-MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-GNU General Public License for more details.
-
-You should have received a copy of the GNU General Public License
-along with this program; if not, write to the Free Software
-Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-*/
-package com.bigdata.rdf.sail.webapp;
-
-import java.io.IOException;
-
-import javax.servlet.ServletConfig;
-import javax.servlet.ServletException;
-import javax.servlet.http.HttpServletRequest;
-import javax.servlet.http.HttpServletResponse;
-
-import com.bigdata.journal.IIndexManager;
-
-/**
- * Load balancer policy interface.
- *
- * @author <a href="mailto:tho...@us...">Bryan Thompson</a>
- *
- * @see HALoadBalancerServlet
- * @see <a href="http://trac.bigdata.com/ticket/624">HA Load Balancer</a>
- */
-public interface IHALoadBalancerPolicy {
-
- /**
- * Initialize the load balancer policy.
- *
- * @param servletConfig
- * @param indexManager
- */
- void init(ServletConfig servletConfig, IIndexManager indexManager)
- throws ServletException;
-
- /**
- * Destroy the load balancer policy (stop any asynchronous processing,
- * release any resources).
- */
- void destroy();
-
- /**
- * Invoked for each request. If the response is not committed, then it will
- * be handled by the {@link HALoadBalancerServlet}.
- *
- * @param isLeaderRequest
- * <code>true</code> iff this request must be directed to the
- * leaeder and <code>false</code> iff this request may be load
- * balanced over the joined services. UPDATEs MUST be handled by
- * the leader. Read requests can be handled by any service that
- * is joined with the met quorum.
- * @param request
- * The request.
- * @param response
- * The response.
- *
- * @return <code>true</code> iff the request was handled.
- */
- boolean service(final boolean isLeaderRequest,
- final HttpServletRequest request, final HttpServletResponse response)
- throws ServletException, IOException;
-
- /**
- * Return the URL to which a non-idempotent request will be proxied.
- *
- * @param req
- * The request.
- *
- * @return The proxyTo URL -or- <code>null</code> if we could not find a
- * service to which we could proxy this request.
- */
- String getLeaderURL(HttpServletRequest req);
-
- /**
- * Return the URL to which a <strong>read-only</strong> request will be
- * proxied. The returned URL must include the protocol, hostname and port
- * (if a non-default port will be used) as well as the target request path.
- *
- * @param req
- * The request.
- *
- * @return The proxyTo URL -or- <code>null</code> if we could not find a
- * service to which we could proxy this request.
- */
- String getReaderURL(HttpServletRequest req);
-
-}
Deleted: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/lbs/policy/ganglia/HostTable.java
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/lbs/policy/ganglia/HostTable.java 2014-05-21 10:23:40 UTC (rev 8397)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/lbs/policy/ganglia/HostTable.java 2014-05-21 10:27:42 UTC (rev 8398)
@@ -1,65 +0,0 @@
-/**
-Copyright (C) SYSTAP, LLC 2006-2014. All rights reserved.
-
-Contact:
- SYSTAP, LLC
- 4501 Tower Road
- Greensboro, NC 27410
- lic...@bi...
-
-This program is free software; you can redistribute it and/or modify
-it under the terms of the GNU General Public License as published by
-the Free Software Foundation; version 2 of the License.
-
-This program is distributed in the hope that it will be useful,
-but WITHOUT ANY WARRANTY; without even the implied warranty of
-MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-GNU General Public License for more details.
-
-You should have received a copy of the GNU General Public License
-along with this program; if not, write to the Free Software
-Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-*/
-package com.bigdata.rdf.sail.webapp.lbs.policy.ganglia;
-
-import java.util.Arrays;
-
-import com.bigdata.rdf.sail.webapp.lbs.HostScore;
-
-/**
- * Class bundles together the set of {@link HostScore}s for services that are
- * joined with the met quorum and the {@link HostScore} for this service (iff it
- * is joined with the met quorum).
- *
- * @author <a href="mailto:tho...@us...">Bryan Thompson</a>
- */
-public class HostTable {
-
- /**
- * The most recent score for this host -or- <code>null</code> iff there
- * is no score for this host.
- */
- final public HostScore thisHost;
-
- /**
- * The table of pre-scored hosts -or- <code>null</code> iff there are no
- * host scores. Only hosts that have services that are joined with the met
- * quorum will appear in this table.
- */
- final public HostScore[] hostScores;
-
- public HostTable(final HostScore thisHost, final HostScore[] hostScores) {
- this.thisHost = thisHost;
- this.hostScores = hostScores;
- }
-
- @Override
- public String toString() {
-
- return "HostTable{this=" + thisHost + ",hostScores="
- + (hostScores == null ? "N/A" : Arrays.toString(hostScores))
- + "}";
-
- }
-
-}
\ No newline at end of file
Deleted: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/lbs/policy/ganglia/IHostScoringRule.java
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/lbs/policy/ganglia/IHostScoringRule.java 2014-05-21 10:23:40 UTC (rev 8397)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/lbs/policy/ganglia/IHostScoringRule.java 2014-05-21 10:27:42 UTC (rev 8398)
@@ -1,44 +0,0 @@
-/**
-Copyright (C) SYSTAP, LLC 2006-2014. All rights reserved.
-
-Contact:
- SYSTAP, LLC
- 4501 Tower Road
- Greensboro, NC 27410
- lic...@bi...
-
-This program is free software; you can redistribute it and/or modify
-it under the terms of the GNU General Public License as published by
-the Free Software Foundation; version 2 of the License.
-
-This program is distributed in the hope that it will be useful,
-but WITHOUT ANY WARRANTY; without even the implied warranty of
-MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-GNU General Public License for more details.
-
-You should have received a copy of the GNU General Public License
-along with this program; if not, write to the Free Software
-Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-*/
-package com.bigdata.rdf.sail.webapp.lbs.policy.ganglia;
-
-import com.bigdata.ganglia.IHostReport;
-
-/**
- * Interface for scoring the load on a host.
- *
- * @author <a href="mailto:tho...@us...">Bryan Thompson</a>
- */
-public interface IHostScoringRule {
-
- /**
- * Return a score for the given {@link IHostReport}.
- *
- * @param hostReport
- * The {@link IHostReport}.
- *
- * @return The score.
- */
- public double getScore(final IHostReport hostReport);
-
-}
\ No newline at end of file
Deleted: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/lbs/policy/ganglia/NOPHostScoringRule.java
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/lbs/policy/ganglia/NOPHostScoringRule.java 2014-05-21 10:23:40 UTC (rev 8397)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/lbs/policy/ganglia/NOPHostScoringRule.java 2014-05-21 10:27:42 UTC (rev 8398)
@@ -1,41 +0,0 @@
-/**
-Copyright (C) SYSTAP, LLC 2006-2014. All rights reserved.
-
-Contact:
- SYSTAP, LLC
- 4501 Tower Road
- Greensboro, NC 27410
- lic...@bi...
-
-This program is free software; you can redistribute it and/or modify
-it under the terms of the GNU General Public License as published by
-the Free Software Foundation; version 2 of the License.
-
-This program is distributed in the hope that it will be useful,
-but WITHOUT ANY WARRANTY; without even the implied warranty of
-MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-GNU General Public License for more details.
-
-You should have received a copy of the GNU General Public License
-along with this program; if not, write to the Free Software
-Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-*/
-package com.bigdata.rdf.sail.webapp.lbs.policy.ganglia;
-
-import com.bigdata.ganglia.IHostReport;
-
-/**
- * Returns ONE for each host (all hosts appear to have an equal workload).
- *
- * @author <a href="mailto:tho...@us...">Bryan Thompson</a>
- */
-public class NOPHostScoringRule implements IHostScoringRule {
-
- @Override
- public double getScore(final IHostReport hostReport) {
-
- return 1d;
-
- }
-
-}
\ No newline at end of file
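The deleted classes above (reintroduced under the `lbs` package in r8393) separate per-host scoring (`IHostScoringRule`) from host selection (`HostTable`). As a purely illustrative, self-contained sketch of how such scores could drive proportional reader selection — the class and method names here are hypothetical, not the bigdata API:

```java
import java.util.Arrays;

// Hypothetical sketch of score-proportional host selection, in the spirit of
// HostTable + IHostScoringRule. This is NOT the bigdata LBS implementation.
public class HostSelectionSketch {

    /**
     * Convert raw per-host load scores into normalized selection weights:
     * a host reporting twice the load gets half the weight.
     */
    static double[] toWeights(final double[] rawScores) {
        final double[] w = new double[rawScores.length];
        double sum = 0d;
        for (int i = 0; i < rawScores.length; i++) {
            w[i] = 1d / rawScores[i]; // invert: lower load => higher weight
            sum += w[i];
        }
        for (int i = 0; i < w.length; i++)
            w[i] /= sum; // normalize so the weights sum to one
        return w;
    }

    /** Pick the host whose cumulative-weight bucket contains d, for d in [0,1). */
    static int pick(final double[] weights, final double d) {
        double acc = 0d;
        for (int i = 0; i < weights.length; i++) {
            acc += weights[i];
            if (d < acc)
                return i;
        }
        return weights.length - 1; // guard against floating-point rounding
    }

    public static void main(final String[] args) {
        final double[] w = toWeights(new double[] { 1d, 2d, 4d });
        System.out.println(Arrays.toString(w)); // ~[0.571, 0.286, 0.143]
        System.out.println(pick(w, 0.9)); // 2 (the most heavily loaded host)
    }
}
```

Note that a `NOPHostScoringRule`-style rule (every host scores 1) reduces this to uniform selection, matching the javadoc's "all hosts appear to have an equal workload".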
This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site.
From: <dme...@us...> - 2014-05-21 10:23:44
Revision: 8397
http://sourceforge.net/p/bigdata/code/8397
Author: dmekonnen
Date: 2014-05-21 10:23:40 +0000 (Wed, 21 May 2014)
Log Message:
-----------
retrying sync.
Modified Paths:
--------------
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/html/images/logo.png
Modified: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/html/images/logo.png
===================================================================
(Binary files differ)
From: <dme...@us...> - 2014-05-21 10:01:28
Revision: 8396
http://sourceforge.net/p/bigdata/code/8396
Author: dmekonnen
Date: 2014-05-21 10:01:25 +0000 (Wed, 21 May 2014)
Log Message:
-----------
commit to fix failed sync.
Added Paths:
-----------
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/TestTicket887.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/ticket_887_bind.rq
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/ticket_887_bind.srx
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/ticket_887_bind.trig
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/ticket_944.rq
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/ticket_944.srx
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/ticket_944.trig
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/TestTicket887.java
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/TestTicket887.java (rev 0)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/TestTicket887.java 2014-05-21 10:01:25 UTC (rev 8396)
@@ -0,0 +1,78 @@
+/**
+
+Copyright (C) SYSTAP, LLC 2013. All rights reserved.
+
+Contact:
+ SYSTAP, LLC
+ 4501 Tower Road
+ Greensboro, NC 27410
+ lic...@bi...
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; version 2 of the License.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+package com.bigdata.rdf.sparql.ast.eval;
+
+
+/**
+ * Test suite for a heisenbug involving BIND. Unlike the other issues, this
+ * sometimes fails and sometimes passes, so we run the test in a loop 20
+ * times.
+ *
+ * @see <a href="https://sourceforge.net/apps/trac/bigdata/ticket/708">
+ * Heisenbug </a>
+ *
+ * @version $Id$
+ */
+public class TestTicket887 extends AbstractDataDrivenSPARQLTestCase {
+
+ public TestTicket887() {
+ }
+
+ public TestTicket887(String name) {
+ super(name);
+ }
+
+ /**
+ * <pre>
+ * SELECT *
+ * WHERE {
+ *
+ * GRAPH ?g {
+ *
+ * BIND( "hello" as ?hello ) .
+ * BIND( CONCAT(?hello, " world") as ?helloWorld ) .
+ *
+ * ?member a ?class .
+ *
+ * }
+ *
+ * }
+ * LIMIT 1
+ * </pre>
+ *
+ * @see <a href="http://trac.bigdata.com/ticket/887" > BIND is leaving a
+ * variable unbound </a>
+ */
+ public void test_ticket_887_bind() throws Exception {
+
+ new TestHelper(
+ "ticket_887_bind", // testURI,
+ "ticket_887_bind.rq",// queryFileURL
+ "ticket_887_bind.trig",// dataFileURL
+ "ticket_887_bind.srx"// resultFileURL
+ ).runTest();
+
+ }
+
+}
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/ticket_887_bind.rq
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/ticket_887_bind.rq (rev 0)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/ticket_887_bind.rq 2014-05-21 10:01:25 UTC (rev 8396)
@@ -0,0 +1,14 @@
+SELECT *
+WHERE {
+
+ GRAPH ?g {
+
+ BIND( "hello" as ?hello ) .
+ BIND( CONCAT(?hello, " world") as ?helloWorld ) .
+
+ ?member a ?class .
+
+ }
+
+}
+LIMIT 1
\ No newline at end of file
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/ticket_887_bind.srx
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/ticket_887_bind.srx (rev 0)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/ticket_887_bind.srx 2014-05-21 10:01:25 UTC (rev 8396)
@@ -0,0 +1,32 @@
+<?xml version="1.0"?>
+<sparql
+ xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+ xmlns:xs="http://www.w3.org/2001/XMLSchema#"
+ xmlns="http://www.w3.org/2005/sparql-results#" >
+ <head>
+ <variable name="?hello"/>
+ <variable name="?helloWorld"/>
+ <variable name="?member"/>
+ <variable name="?class"/>
+ <variable name="?g"/>
+ </head>
+ <results>
+ <result>
+ <binding name="hello">
+ <literal>hello</literal>
+ </binding>
+ <binding name="helloWorld">
+ <literal>hello world</literal>
+ </binding>
+ <binding name="member">
+ <uri>http://www.bigdata.com/member</uri>
+ </binding>
+ <binding name="class">
+ <uri>http://www.bigdata.com/cls</uri>
+ </binding>
+ <binding name="g">
+ <uri>http://www.bigdata.com/</uri>
+ </binding>
+ </result>
+ </results>
+</sparql>
\ No newline at end of file
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/ticket_887_bind.trig
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/ticket_887_bind.trig (rev 0)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/ticket_887_bind.trig 2014-05-21 10:01:25 UTC (rev 8396)
@@ -0,0 +1,6 @@
+@prefix : <http://www.bigdata.com/> .
+@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
+
+: {
+ :member a :cls
+}
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/ticket_944.rq
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/ticket_944.rq (rev 0)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/ticket_944.rq 2014-05-21 10:01:25 UTC (rev 8396)
@@ -0,0 +1,11 @@
+SELECT *
+WHERE {
+ {
+ SELECT ?s
+ WHERE {?s ?p ?o}
+ LIMIT 1
+ }
+ {?s ?p ?o}
+ UNION
+ {}
+}
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/ticket_944.srx
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/ticket_944.srx (rev 0)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/ticket_944.srx 2014-05-21 10:01:25 UTC (rev 8396)
@@ -0,0 +1,37 @@
+<?xml version='1.0' encoding='UTF-8'?>
+<sparql xmlns='http://www.w3.org/2005/sparql-results#'>
+ <head>
+ <variable name='s'/>
+ <variable name='p'/>
+ <variable name='o'/>
+ </head>
+ <results>
+ <result>
+ <binding name='s'>
+ <uri>http://example.org/a0</uri>
+ </binding>
+ <binding name='p'>
+ <uri>http://example.org/p0</uri>
+ </binding>
+ <binding name='o'>
+ <literal>a0+p0</literal>
+ </binding>
+ </result>
+ <result>
+ <binding name='s'>
+ <uri>http://example.org/a0</uri>
+ </binding>
+ <binding name='p'>
+ <uri>http://example.org/p1</uri>
+ </binding>
+ <binding name='o'>
+ <literal>a0+p1</literal>
+ </binding>
+ </result>
+ <result>
+ <binding name='s'>
+ <uri>http://example.org/a0</uri>
+ </binding>
+ </result>
+ </results>
+</sparql>
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/ticket_944.trig
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/ticket_944.trig (rev 0)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/ticket_944.trig 2014-05-21 10:01:25 UTC (rev 8396)
@@ -0,0 +1,7 @@
+@prefix : <http://example.org/> .
+
+<file:/tmp/sparql-1.1-evaluation5799/testcases-sparql-1.1/subquery/data-04.ttl>{
+:a1 :p2 "a1+p2" .
+:a0 :p1 "a0+p1" .
+:a0 :p0 "a0+p0" .
+}
From: <dme...@us...> - 2014-05-21 10:01:03
Revision: 8395
http://sourceforge.net/p/bigdata/code/8395
Author: dmekonnen
Date: 2014-05-21 10:01:00 +0000 (Wed, 21 May 2014)
Log Message:
-----------
commit to fix failed sync.
Added Paths:
-----------
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-jini/src/test/com/bigdata/journal/jini/ha/TestHA3LoadBalancer_CountersLBS.java
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-jini/src/test/com/bigdata/journal/jini/ha/TestHA3LoadBalancer_CountersLBS.java
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-jini/src/test/com/bigdata/journal/jini/ha/TestHA3LoadBalancer_CountersLBS.java (rev 0)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-jini/src/test/com/bigdata/journal/jini/ha/TestHA3LoadBalancer_CountersLBS.java 2014-05-21 10:01:00 UTC (rev 8395)
@@ -0,0 +1,54 @@
+/**
+
+Copyright (C) SYSTAP, LLC 2006-2010. All rights reserved.
+
+Contact:
+ SYSTAP, LLC
+ 4501 Tower Road
+ Greensboro, NC 27410
+ lic...@bi...
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; version 2 of the License.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+package com.bigdata.journal.jini.ha;
+
+import com.bigdata.rdf.sail.webapp.lbs.IHALoadBalancerPolicy;
+import com.bigdata.rdf.sail.webapp.lbs.policy.counters.CountersLBSPolicy;
+
+/**
+ * Test suite for the HA load balancer.
+ *
+ * @author <a href="mailto:tho...@us...">Bryan Thompson</a>
+ *
+ * @see <a href="http://trac.bigdata.com/ticket/624"> HA Load Balancer </a>
+ */
+public class TestHA3LoadBalancer_CountersLBS extends AbstractHA3LoadBalancerTestCase {
+
+ public TestHA3LoadBalancer_CountersLBS() {
+ }
+
+ public TestHA3LoadBalancer_CountersLBS(final String name) {
+
+ super(name);
+
+ }
+
+ @Override
+ protected IHALoadBalancerPolicy newTestPolicy() {
+
+ return new CountersLBSPolicy();
+
+ }
+
+}
From: <dme...@us...> - 2014-05-21 10:00:35
Revision: 8394
http://sourceforge.net/p/bigdata/code/8394
Author: dmekonnen
Date: 2014-05-21 10:00:32 +0000 (Wed, 21 May 2014)
Log Message:
-----------
commit to fix failed sync.
Added Paths:
-----------
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-ganglia/src/releases/bigdata-ganglia-1.0.2.txt
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-ganglia/src/releases/bigdata-ganglia-1.0.2.txt
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-ganglia/src/releases/bigdata-ganglia-1.0.2.txt (rev 0)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-ganglia/src/releases/bigdata-ganglia-1.0.2.txt 2014-05-21 10:00:32 UTC (rev 8394)
@@ -0,0 +1,55 @@
+This library provides a pure Java embedded peer for Ganglia. The GangliaService
+both listens for and reports metrics. This means that it can provide load
+balanced reporting from soft state and can even be used as a substitute for gmond
+on operating systems (such as Windows) to which gmond has not been ported.
+
+The main entry point is GangliaService. It is trivial to set up with defaults, and
+you can easily register your own metric collection classes to report on your
+application.
+
+GangliaService service = new GangliaService("MyService");
+// Register to collect metrics.
+service.addMetricCollector(new MyMetricsCollector());
+// Join the ganglia network; Start collecting and reporting metrics.
+service.run();
+
+The following will return the default load balanced report, which contains
+exactly the same information that you would get from gstat -a. You can also use
+an alternative method signature to get a report based on your own list of metrics
+and/or have the report sorted by the metric (or even a synthetic metric) of your
+choice.
+
+IHostReport[] hostReport = service.getHostReport();
+
+Have fun!
+
+Change log:
+
+1.0.2:
+
+- Minor API additions for GangliaService.
+
+1.0.1:
+
+- Added utility class for parsing a string into an array of host addresses.
+- GangliaListener was ignoring interrupts.
+
+--------------------------
+
+ Copyright (C) SYSTAP, LLC 2006-2012. All rights reserved.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+----
+This product includes software developed by The Apache Software Foundation (http://www.apache.org/).
+License: http://www.apache.org/licenses/LICENSE-2.0
From: <dme...@us...> - 2014-05-21 09:59:37
Revision: 8393
http://sourceforge.net/p/bigdata/code/8393
Author: dmekonnen
Date: 2014-05-21 09:59:33 +0000 (Wed, 21 May 2014)
Log Message:
-----------
commit to fix failed sync.
Added Paths:
-----------
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSailFactory.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BlueprintsServlet.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/client/BigdataSailNSSWrapper.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/client/StringUtil.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/lbs/AbstractHostLBSPolicy.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/lbs/AbstractHostMetrics.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/lbs/HostTable.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/lbs/IHostMetrics.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/lbs/IHostScoringRule.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/lbs/NOPHostScoringRule.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/lbs/policy/counters/
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/lbs/policy/counters/CounterSetHostMetricsWrapper.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/lbs/policy/counters/CountersLBSPolicy.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/lbs/policy/counters/DefaultHostScoringRule.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/lbs/policy/ganglia/GangliaHostMetricWrapper.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/health/
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/health/TestNSSHealthCheck.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/lbs/
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/lbs/TestAbstractHostLBSPolicy.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/lbs/TestAll.java
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSailFactory.java
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSailFactory.java (rev 0)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSailFactory.java 2014-05-21 09:59:33 UTC (rev 8393)
@@ -0,0 +1,254 @@
+/**
+Copyright (C) SYSTAP, LLC 2006-2014. All rights reserved.
+
+Contact:
+ SYSTAP, LLC
+ 4501 Tower Road
+ Greensboro, NC 27410
+ lic...@bi...
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; version 2 of the License.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+package com.bigdata.rdf.sail;
+
+import java.io.File;
+import java.util.Arrays;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Properties;
+
+import com.bigdata.journal.BufferMode;
+import com.bigdata.journal.Journal;
+import com.bigdata.rdf.axioms.NoAxioms;
+import com.bigdata.rdf.axioms.OwlAxioms;
+import com.bigdata.rdf.sail.remote.BigdataSailRemoteRepository;
+import com.bigdata.rdf.vocab.RDFSVocabulary;
+
+/**
+ * Helper class to create a bigdata instance.
+ *
+ * @author mikepersonick
+ *
+ */
+public class BigdataSailFactory {
+
+ /**
+ * A handy list of common Options you might want to specify when creating
+ * your bigdata instance.
+ *
+ * @author mikepersonick
+ *
+ */
+ public static enum Option {
+
+ /**
+ * Inference on or off. Off by default.
+ */
+ Inference,
+
+ /**
+ * Quads on or off. Off by default.
+ */
+ Quads,
+
+ /**
+ * RDR (statement identifiers) on or off. Off by default.
+ */
+ RDR,
+
+ /**
+ * Text index on or off. Off by default.
+ */
+ TextIndex,
+
+// /**
+// * Create an in-memory instance.
+// */
+// InMemory,
+//
+// /**
+// * Create a persistent instance backed by a file. You must specify
+// * the file.
+// */
+// Persistent
+
+ }
+
+ /**
+ * Connect to a remote bigdata instance.
+ */
+ public static BigdataSailRemoteRepository connect(final String host, final int port) {
+
+ return connect("http://"+host+":"+port);
+
+ }
+
+ /**
+ * Connect to a remote bigdata instance.
+ */
+ public static BigdataSailRemoteRepository connect(final String serviceEndpoint) {
+
+ if (serviceEndpoint.endsWith("/bigdata/sparql")) {
+ return new BigdataSailRemoteRepository(serviceEndpoint);
+ } else if (serviceEndpoint.endsWith("/bigdata/")) {
+ return new BigdataSailRemoteRepository(serviceEndpoint + "sparql");
+ } else if (serviceEndpoint.endsWith("/bigdata")) {
+ return new BigdataSailRemoteRepository(serviceEndpoint + "/sparql");
+ } else if (serviceEndpoint.endsWith("/")) {
+ return new BigdataSailRemoteRepository(serviceEndpoint + "bigdata/sparql");
+ } else {
+ return new BigdataSailRemoteRepository(serviceEndpoint + "/bigdata/sparql");
+ }
+
+ }
+
+ /**
+ * Open an existing persistent bigdata instance.
+ */
+ public static BigdataSailRepository openRepository(final String file) {
+
+ return new BigdataSailRepository(openSail(file));
+
+ }
+
+ /**
+ * Open an existing persistent bigdata instance.
+ */
+ public static BigdataSail openSail(final String file) {
+
+ if (!new File(file).exists()) {
+ throw new IllegalArgumentException("file does not exist - use create() method instead");
+ }
+
+ final Properties props = new Properties();
+ props.setProperty(BigdataSail.Options.FILE, file);
+
+ final BigdataSail sail = new BigdataSail(props);
+
+ return sail;
+
+ }
+
+ /**
+ * Create a new bigdata instance using the specified options. Since no
+ * journal file is specified this must be an in-memory instance.
+ */
+ public static BigdataSailRepository createRepository(final Option... args) {
+
+ return new BigdataSailRepository(createSail(null, args));
+
+ }
+
+ /**
+ * Create a new bigdata instance using the specified options.
+ */
+ public static BigdataSailRepository createRepository(final String file,
+ final Option... args) {
+
+ return new BigdataSailRepository(createSail(file, args));
+
+ }
+
+ /**
+ * Create a new bigdata instance using the specified options. Since no
+ * journal file is specified this must be an in-memory instance.
+ */
+ public static BigdataSail createSail(final Option... args) {
+
+ return createSail(null, args);
+
+ }
+
+ /**
+ * Create a new bigdata instance using the specified options.
+ */
+ public static BigdataSail createSail(final String file,
+ final Option... args) {
+
+ final List<Option> options = args != null ?
+ Arrays.asList(args) : new LinkedList<Option>();
+
+ checkArgs(file, options);
+
+ final Properties props = new Properties();
+
+ if (file != null) {
+ props.setProperty(BigdataSail.Options.FILE, file);
+ props.setProperty(Journal.Options.BUFFER_MODE, BufferMode.DiskRW.toString());
+ } else {
+ props.setProperty(Journal.Options.BUFFER_MODE, BufferMode.MemStore.toString());
+ }
+
+ if (options.contains(Option.Inference)) {
+ props.setProperty(BigdataSail.Options.AXIOMS_CLASS, OwlAxioms.class.getName());
+ props.setProperty(BigdataSail.Options.VOCABULARY_CLASS, RDFSVocabulary.class.getName());
+ props.setProperty(BigdataSail.Options.TRUTH_MAINTENANCE, "true");
+ props.setProperty(BigdataSail.Options.JUSTIFY, "true");
+ } else {
+ props.setProperty(BigdataSail.Options.AXIOMS_CLASS, NoAxioms.class.getName());
+ props.setProperty(BigdataSail.Options.VOCABULARY_CLASS, RDFSVocabulary.class.getName());
+ props.setProperty(BigdataSail.Options.TRUTH_MAINTENANCE, "false");
+ props.setProperty(BigdataSail.Options.JUSTIFY, "false");
+ }
+
+ props.setProperty(BigdataSail.Options.TEXT_INDEX,
+ String.valueOf(options.contains(Option.TextIndex)));
+
+ props.setProperty(BigdataSail.Options.STATEMENT_IDENTIFIERS,
+ String.valueOf(options.contains(Option.RDR)));
+
+ props.setProperty(BigdataSail.Options.QUADS,
+ String.valueOf(options.contains(Option.Quads)));
+
+ // Setup for the RWStore recycler rather than session protection.
+ props.setProperty("com.bigdata.service.AbstractTransactionService.minReleaseAge","1");
+ props.setProperty("com.bigdata.btree.writeRetentionQueue.capacity","4000");
+ props.setProperty("com.bigdata.btree.BTree.branchingFactor","128");
+ // Bump up the branching factor for the lexicon indices on the default kb.
+ props.setProperty("com.bigdata.namespace.kb.lex.com.bigdata.btree.BTree.branchingFactor","400");
+ // Bump up the branching factor for the statement indices on the default kb.
+ props.setProperty("com.bigdata.namespace.kb.spo.com.bigdata.btree.BTree.branchingFactor","1024");
+
+ final BigdataSail sail = new BigdataSail(props);
+
+ return sail;
+
+ }
+
+ protected static void checkArgs(final String file, final List<Option> options) {
+
+ if (options.contains(Option.Inference) && options.contains(Option.Quads)) {
+ throw new IllegalArgumentException();
+ }
+
+ if (options.contains(Option.RDR) && options.contains(Option.Quads)) {
+ throw new IllegalArgumentException();
+ }
+
+// if (options.contains(Option.InMemory) && options.contains(Option.Persistent)) {
+// throw new IllegalArgumentException();
+// }
+//
+// if (options.contains(Option.InMemory) && file != null) {
+// throw new IllegalArgumentException();
+// }
+//
+// if (options.contains(Option.Persistent) && file == null) {
+// throw new IllegalArgumentException();
+// }
+
+ }
+
+
+}
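The `connect(String)` method in the `BigdataSailFactory` diff above accepts several endpoint URL shapes and normalizes all of them to the SPARQL endpoint. A standalone restatement of just that branch logic — the class and method names here are illustrative; the rules themselves are copied from the diff:

```java
// Standalone restatement of the endpoint-normalization rules from
// BigdataSailFactory.connect(String) in r8393. Illustrative class name only.
public class EndpointNormalization {

    static String toSparqlEndpoint(final String serviceEndpoint) {
        if (serviceEndpoint.endsWith("/bigdata/sparql")) {
            return serviceEndpoint; // already fully qualified
        } else if (serviceEndpoint.endsWith("/bigdata/")) {
            return serviceEndpoint + "sparql";
        } else if (serviceEndpoint.endsWith("/bigdata")) {
            return serviceEndpoint + "/sparql";
        } else if (serviceEndpoint.endsWith("/")) {
            return serviceEndpoint + "bigdata/sparql";
        } else {
            return serviceEndpoint + "/bigdata/sparql"; // bare http://host:port
        }
    }

    public static void main(final String[] args) {
        // Both forms resolve to the same SPARQL endpoint.
        System.out.println(toSparqlEndpoint("http://host:9999"));          // http://host:9999/bigdata/sparql
        System.out.println(toSparqlEndpoint("http://host:9999/bigdata/")); // http://host:9999/bigdata/sparql
    }
}
```

Every accepted input shape thus converges on `.../bigdata/sparql`, so callers can pass either a bare host:port or any prefix of the webapp path.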
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BlueprintsServlet.java
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BlueprintsServlet.java (rev 0)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BlueprintsServlet.java 2014-05-21 09:59:33 UTC (rev 8393)
@@ -0,0 +1,150 @@
+/**
+Copyright (C) SYSTAP, LLC 2006-2014. All rights reserved.
+
+Contact:
+ SYSTAP, LLC
+ 4501 Tower Road
+ Greensboro, NC 27410
+ lic...@bi...
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; version 2 of the License.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+package com.bigdata.rdf.sail.webapp;
+
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.List;
+
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+
+import org.apache.log4j.Logger;
+
+import com.bigdata.blueprints.BigdataGraphBulkLoad;
+import com.bigdata.rdf.sail.BigdataSailRepositoryConnection;
+import com.bigdata.rdf.sail.webapp.client.MiniMime;
+import com.bigdata.rdf.store.AbstractTripleStore;
+import com.tinkerpop.blueprints.util.io.graphml.GraphMLReader;
+
+/**
+ * Helper servlet for the blueprints layer.
+ */
+public class BlueprintsServlet extends BigdataRDFServlet {
+
+ /**
+ *
+ */
+ private static final long serialVersionUID = 1L;
+
+ static private final transient Logger log = Logger.getLogger(BlueprintsServlet.class);
+
+ static public final List<String> mimeTypes = Arrays.asList(new String[] {
+ "application/graphml+xml"
+ }) ;
+
+ /**
+ * Flag to signify a blueprints operation.
+ */
+ static final transient String ATTR_BLUEPRINTS = "blueprints";
+
+ public BlueprintsServlet() {
+
+ }
+
+ /**
+ * Post a GraphML file to the blueprints layer.
+ */
+ @Override
+ protected void doPost(final HttpServletRequest req,
+ final HttpServletResponse resp) throws IOException {
+
+ final long begin = System.currentTimeMillis();
+
+ final String namespace = getNamespace(req);
+
+ final long timestamp = getTimestamp(req);
+
+ final AbstractTripleStore tripleStore = getBigdataRDFContext()
+ .getTripleStore(namespace, timestamp);
+
+ if (tripleStore == null) {
+ /*
+ * There is no such triple/quad store instance.
+ */
+ buildResponse(resp, HTTP_NOTFOUND, MIME_TEXT_PLAIN);
+ return;
+ }
+
+ final String contentType = req.getContentType();
+
+ if (log.isInfoEnabled())
+ log.info("Request body: " + contentType);
+
+ final String mimeType = new MiniMime(contentType).getMimeType().toLowerCase();
+
+ if (!mimeTypes.contains(mimeType)) {
+
+ buildResponse(resp, HTTP_BADREQUEST, MIME_TEXT_PLAIN,
+ "Content-Type not recognized as graph data: " + contentType);
+
+ return;
+
+ }
+
+ try {
+
+ BigdataSailRepositoryConnection conn = null;
+ try {
+
+ conn = getBigdataRDFContext()
+ .getUnisolatedConnection(namespace);
+
+ final BigdataGraphBulkLoad graph = new BigdataGraphBulkLoad(conn);
+
+ GraphMLReader.inputGraph(graph, req.getInputStream());
+
+ graph.commit();
+
+ final long nmodified = graph.getMutationCountLastCommit();
+
+ final long elapsed = System.currentTimeMillis() - begin;
+
+ reportModifiedCount(resp, nmodified, elapsed);
+
+ return;
+
+ } catch(Throwable t) {
+
+ if(conn != null)
+ conn.rollback();
+
+ throw new RuntimeException(t);
+
+ } finally {
+
+ if (conn != null)
+ conn.close();
+
+ }
+
+ } catch (Exception ex) {
+
+ // Will be rendered as an INTERNAL_ERROR.
+ throw new RuntimeException(ex);
+
+ }
+
+ }
+
+}
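The Content-Type gate in doPost above reduces to comparing the bare MIME type (parameters such as charset stripped, lower-cased) against the accepted list. A standalone approximation of that check (MiniMime itself does the parameter stripping; the split below is a simplification, not the servlet's actual code):

```java
import java.util.Arrays;
import java.util.List;

// Simplified stand-in for the servlet's Content-Type check.
class MimeCheckDemo {

    static final List<String> MIME_TYPES = Arrays.asList("application/graphml+xml");

    /** Strip any parameters (e.g. "; charset=...") and lower-case the type. */
    public static boolean accepted(String contentType) {
        final String mime = contentType.split(";")[0].trim().toLowerCase();
        return MIME_TYPES.contains(mime);
    }

    public static void main(String[] args) {
        // A GraphML POST with a charset parameter is still accepted.
        if (!accepted("application/graphml+xml; charset=UTF-8"))
            throw new AssertionError();
        // Anything else gets the HTTP_BADREQUEST path.
        if (accepted("text/plain"))
            throw new AssertionError();
        System.out.println("ok");
    }
}
```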
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/client/BigdataSailNSSWrapper.java
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/client/BigdataSailNSSWrapper.java (rev 0)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/client/BigdataSailNSSWrapper.java 2014-05-21 09:59:33 UTC (rev 8393)
@@ -0,0 +1,186 @@
+/**
+Copyright (C) SYSTAP, LLC 2006-2014. All rights reserved.
+
+Contact:
+ SYSTAP, LLC
+ 4501 Tower Road
+ Greensboro, NC 27410
+ lic...@bi...
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; version 2 of the License.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+package com.bigdata.rdf.sail.webapp.client;
+
+import java.net.URL;
+import java.util.LinkedHashMap;
+import java.util.Map;
+
+import org.apache.http.client.HttpClient;
+import org.apache.http.conn.ClientConnectionManager;
+import org.apache.http.impl.client.DefaultHttpClient;
+import org.apache.http.impl.client.DefaultRedirectStrategy;
+import org.apache.log4j.Logger;
+import org.eclipse.jetty.server.Server;
+
+import com.bigdata.BigdataStatics;
+import com.bigdata.rdf.sail.BigdataSail;
+import com.bigdata.rdf.sail.webapp.ConfigParams;
+import com.bigdata.rdf.sail.webapp.NanoSparqlServer;
+import com.bigdata.util.config.NicUtil;
+
+public class BigdataSailNSSWrapper {
+
+ private static final transient Logger log = Logger
+ .getLogger(BigdataSailNSSWrapper.class);
+
+
+ public final BigdataSail sail;
+
+ /**
+     * A jetty {@link Server} running a {@link NanoSparqlServer} instance which
+     * is running against the index manager for the {@link #sail}.
+ */
+ protected Server m_fixture;
+
+ /**
+ * The {@link ClientConnectionManager} for the {@link HttpClient} used by
+ * the {@link RemoteRepository}. This is used when we tear down the
+ * {@link RemoteRepository}.
+ */
+ private ClientConnectionManager m_cm;
+
+ /**
+ * Exposed to tests that do direct HTTP GET/POST operations.
+ */
+ protected HttpClient m_httpClient = null;
+
+ /**
+ * The client-API wrapper to the NSS.
+ */
+ public RemoteRepositoryManager m_repo;
+
+ /**
+ * The effective {@link NanoSparqlServer} http end point (including the
+ * ContextPath).
+ */
+ protected String m_serviceURL;
+
+ /**
+ * The URL of the root of the web application server. This does NOT include
+ * the ContextPath for the webapp.
+ *
+ * <pre>
+ * http://localhost:8080 -- root URL
+ * http://localhost:8080/bigdata -- webapp URL (includes "/bigdata" context path).
+ * </pre>
+ */
+ protected String m_rootURL;
+
+ public BigdataSailNSSWrapper(final BigdataSail sail) {
+ this.sail = sail;
+ }
+
+ public void init() throws Exception {
+
+ final Map<String, String> initParams = new LinkedHashMap<String, String>();
+ {
+
+ initParams.put(ConfigParams.NAMESPACE, sail.getDatabase().getNamespace());
+
+ initParams.put(ConfigParams.CREATE, "false");
+
+ }
+ // Start server for that kb instance.
+ m_fixture = NanoSparqlServer.newInstance(0/* port */,
+ sail.getDatabase().getIndexManager(), initParams);
+
+ m_fixture.start();
+
+ final int port = NanoSparqlServer.getLocalPort(m_fixture);
+
+ // log.info("Getting host address");
+
+ final String hostAddr = NicUtil.getIpAddress("default.nic", "default",
+ true/* loopbackOk */);
+
+ if (hostAddr == null) {
+
+ throw new RuntimeException("Could not identify network address for this host.");
+
+ }
+
+ m_rootURL = new URL("http", hostAddr, port, ""/* contextPath */
+ ).toExternalForm();
+
+ m_serviceURL = new URL("http", hostAddr, port,
+ BigdataStatics.getContextPath()).toExternalForm();
+
+ if (log.isInfoEnabled())
+ log.info("Setup done: \nrootURL=" + m_rootURL + "\nserviceURL="
+ + m_serviceURL);
+
+// final HttpClient httpClient = new DefaultHttpClient();
+
+// m_cm = httpClient.getConnectionManager();
+
+ m_cm = DefaultClientConnectionManagerFactory.getInstance()
+ .newInstance();
+
+ final DefaultHttpClient httpClient = new DefaultHttpClient(m_cm);
+ m_httpClient = httpClient;
+
+ /*
+ * Ensure that the client follows redirects using a standard policy.
+ *
+ * Note: This is necessary for tests of the webapp structure since the
+ * container may respond with a redirect (302) to the location of the
+ * webapp when the client requests the root URL.
+ */
+ httpClient.setRedirectStrategy(new DefaultRedirectStrategy());
+
+ m_repo = new RemoteRepositoryManager(m_serviceURL,
+ m_httpClient,
+ sail.getDatabase().getIndexManager().getExecutorService());
+
+ }
+
+ public void shutdown() throws Exception {
+
+ if (m_fixture != null) {
+
+ m_fixture.stop();
+
+ m_fixture = null;
+
+ }
+
+ m_rootURL = null;
+ m_serviceURL = null;
+
+ if (m_cm != null) {
+ m_cm.shutdown();
+ m_cm = null;
+ }
+
+ m_httpClient = null;
+ m_repo = null;
+
+ if (log.isInfoEnabled())
+ log.info("tear down done");
+
+ }
+
+
+}
+
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/client/StringUtil.java
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/client/StringUtil.java (rev 0)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/client/StringUtil.java 2014-05-21 09:59:33 UTC (rev 8393)
@@ -0,0 +1,67 @@
+//
+// ========================================================================
+// Copyright (c) 1995-2014 Mort Bay Consulting Pty. Ltd.
+// ------------------------------------------------------------------------
+// All rights reserved. This program and the accompanying materials
+// are made available under the terms of the Eclipse Public License v1.0
+// and Apache License v2.0 which accompanies this distribution.
+//
+// The Eclipse Public License is available at
+// http://www.eclipse.org/legal/epl-v10.html
+//
+// The Apache License v2.0 is available at
+// http://www.opensource.org/licenses/apache2.0.php
+//
+// You may elect to redistribute this code under either of these licenses.
+// ========================================================================
+//
+/*
+ * Note: This class was extracted from org.eclipse.jetty.util.StringUtil.
+ * It contains only those methods that we need that are not already part
+ * of the general servlet API. (We can not rely on jetty being present
+ * since the WAR deployment does not bundle the jetty dependencies.)
+ */
+package com.bigdata.rdf.sail.webapp.client;
+
+/** Fast String Utilities.
+*
+* These string utilities provide both convenience methods and
+* performance improvements over most standard library versions. The
+* main aim of the optimizations is to avoid object creation unless
+* absolutely required.
+*/
+public class StringUtil {
+
+ /**
+     * Convert a String to a long. Parses up to the first non-numeric
+     * character. If no number is found, a NumberFormatException is thrown.
+     *
+     * @param string
+     *            A String containing an integer.
+     * @return the parsed long value
+ */
+ public static long toLong(String string) {
+ long val = 0;
+ boolean started = false;
+ boolean minus = false;
+
+ for (int i = 0; i < string.length(); i++) {
+ char b = string.charAt(i);
+ if (b <= ' ') {
+ if (started)
+ break;
+ } else if (b >= '0' && b <= '9') {
+ val = val * 10L + (b - '0');
+ started = true;
+ } else if (b == '-' && !started) {
+ minus = true;
+ } else
+ break;
+ }
+
+ if (started)
+ return minus ? (-val) : val;
+ throw new NumberFormatException(string);
+ }
+
+}
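The parsing loop in toLong can be exercised in isolation. A minimal standalone copy of the same logic with a few expected cases (the class name here is illustrative, not part of the commit):

```java
// Standalone copy of the StringUtil.toLong parsing logic, for illustration.
class ToLongDemo {

    /** Parse a long from the leading numeric portion of the string. */
    public static long toLong(String string) {
        long val = 0;
        boolean started = false;
        boolean minus = false;
        for (int i = 0; i < string.length(); i++) {
            char b = string.charAt(i);
            if (b <= ' ') {
                if (started)
                    break; // whitespace after digits ends the number
            } else if (b >= '0' && b <= '9') {
                val = val * 10L + (b - '0');
                started = true;
            } else if (b == '-' && !started) {
                minus = true;
            } else
                break; // first non-numeric character stops the parse
        }
        if (started)
            return minus ? (-val) : val;
        throw new NumberFormatException(string);
    }

    public static void main(String[] args) {
        if (toLong("123abc") != 123L) throw new AssertionError();
        if (toLong("-42") != -42L) throw new AssertionError();
        if (toLong("  99 ") != 99L) throw new AssertionError();
        System.out.println("ok");
    }
}
```

Note that leading whitespace is skipped, trailing characters are ignored, and an input with no digits throws NumberFormatException.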
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/lbs/AbstractHostLBSPolicy.java
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/lbs/AbstractHostLBSPolicy.java (rev 0)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/lbs/AbstractHostLBSPolicy.java 2014-05-21 09:59:33 UTC (rev 8393)
@@ -0,0 +1,990 @@
+/**
+Copyright (C) SYSTAP, LLC 2006-2007. All rights reserved.
+
+Contact:
+ SYSTAP, LLC
+ 4501 Tower Road
+ Greensboro, NC 27410
+ lic...@bi...
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; version 2 of the License.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+package com.bigdata.rdf.sail.webapp.lbs;
+
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Random;
+import java.util.UUID;
+import java.util.concurrent.CancellationException;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.Future;
+import java.util.concurrent.ScheduledFuture;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicReference;
+
+import javax.servlet.ServletConfig;
+import javax.servlet.ServletException;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+
+import org.apache.log4j.Logger;
+
+import com.bigdata.journal.IIndexManager;
+import com.bigdata.journal.Journal;
+import com.bigdata.journal.PlatformStatsPlugIn;
+import com.bigdata.journal.jini.ha.HAJournal;
+import com.bigdata.quorum.Quorum;
+import com.bigdata.rdf.sail.webapp.HALoadBalancerServlet;
+import com.bigdata.util.InnerCause;
+
+/**
+ * Abstract base class for an LBS policy that uses per-host load metrics.
+ *
+ * @author <a href="mailto:tho...@us...">Bryan Thompson</a>
+ */
+public abstract class AbstractHostLBSPolicy extends AbstractLBSPolicy {
+
+ private static final Logger log = Logger.getLogger(AbstractHostLBSPolicy.class);
+
+ /**
+ *
+ */
+ private static final long serialVersionUID = 1L;
+
+ /**
+ *
+ * @see HALoadBalancerServlet#getConfigParam(ServletConfig, Class, String,
+ * String) for how these <code>init-param</code> values can be set in
+ * <code>web.xml</code> and via environment variables.
+ */
+ public interface InitParams extends AbstractLBSPolicy.InitParams {
+
+ /**
+ * The {@link IHostScoringRule} that will be used to score the
+ * {@link IHostMetrics}. The {@link IHostMetrics} are obtained
+ * periodically from the from some source (specified by a concrete
+ * derived class).
+ * <p>
+ * The purpose of the {@link IHostScoringRule} is to compute a single
+ * workload number based on those host metrics. The resulting scores are
+ * then normalized. Load balancing decisions are made based on those
+ * normalized scores.
+ * <p>
+ * Note: The default policy is specific to the concrete instance of the
+ * outer class.
+ */
+ String HOST_SCORING_RULE = "hostScoringRule";
+
+ /**
+ * Read requests are forwarded to the local service if the availability
+ * on that service is greater than or equal to the configured threshold
+ * when considering the normalized workload of the hosts. The value must
+ * be in [0:1] and represents a normalized availability threshold for
+ * the hosts having services that are joined with the met quorum. This
+ * may be set to ONE (1.0) to disable this bias. The default is
+ * {@value #DEFAULT_LOCAL_FORWARD_THRESHOLD}.
+ * <p>
+ * This bias is designed for use when an external round-robin policy is
+ * distributing the requests evenly across the services. In this case,
+ * the round-robin smooths out most of the workload and the
+ * {@link IHALoadBalancerPolicy} takes over only when there is a severe
+ * workload imbalance (as defined by the value of this parameter).
+ * <p>
+ * For example, if you have 3 hosts and they are equally available, then
+ * their normalized availability scores will be <code>.3333</code>. If
+ * the score for a given host is close to this normalized availability,
+ * then a local forward is a reasonable choice.
+ *
+ * TODO In fact, we could automatically compute and use a reasonable
+ * value based on the quorum size as
+ * <code>ceil((1/replicationFactor)-.01)</code>. With this approach, the
+ * local forward bias is automatic. However, we still only want to do
+ * this if there is a round-robin over the services. Otherwise we will
+ * slam this host whenever its load gets below the threshold while not
+ * assigning any work to the other hosts until the next update of the
+ * {@link HostTable}.
+ */
+ String LOCAL_FORWARD_THRESHOLD = "localForwardThreshold";
+
+ String DEFAULT_LOCAL_FORWARD_THRESHOLD = "1.0";
+
+ /**
+ * The initial delay in milliseconds before the first scheduled task
+ * that updates the in-memory snapshots of the performance metrics for
+ * the joined services (default
+ * {@value #DEFAULT_HOST_DISCOVERY_INITIAL_DELAY}).
+ */
+ String HOST_DISCOVERY_INITIAL_DELAY = "hostDiscoveryInitialDelay";
+
+ String DEFAULT_HOST_DISCOVERY_INITIAL_DELAY = "10000"; // ms.
+
+ /**
+ * The delay in milliseconds between scheduled tasks that update the
+ * in-memory snapshots of the performance metrics for the joined
+ * services (default {@value #DEFAULT_HOST_DISCOVERY_DELAY}).
+ */
+ String HOST_DISCOVERY_DELAY = "hostDiscoveryDelay";
+
+ String DEFAULT_HOST_DISCOVERY_DELAY = "10000"; // ms.
+
+ }
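The init-param names above map directly to web.xml entries for the {@link HALoadBalancerServlet}. A hedged sketch of what that configuration might look like (the servlet name and the scoring-rule class below are illustrative assumptions, not taken from this commit; the param names and defaults are from the interface above):

```xml
<!-- Illustrative only: servlet-name and the scoring-rule class are assumptions. -->
<servlet>
  <servlet-name>HALoadBalancer</servlet-name>
  <servlet-class>com.bigdata.rdf.sail.webapp.HALoadBalancerServlet</servlet-class>
  <init-param>
    <param-name>hostScoringRule</param-name>
    <param-value>com.example.MyHostScoringRule</param-value> <!-- hypothetical class -->
  </init-param>
  <init-param>
    <param-name>localForwardThreshold</param-name>
    <param-value>1.0</param-value> <!-- default; 1.0 disables the local-forward bias -->
  </init-param>
  <init-param>
    <param-name>hostDiscoveryInitialDelay</param-name>
    <param-value>10000</param-value> <!-- ms -->
  </init-param>
  <init-param>
    <param-name>hostDiscoveryDelay</param-name>
    <param-value>10000</param-value> <!-- ms -->
  </init-param>
</servlet>
```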
+
+ /*
+ * Static declarations of some common exceptions to reduce overhead
+ * associated with filling in the stack traces.
+ */
+
+ /**
+ * The {@link HostTable} is empty (no hosts).
+ */
+ private static final RuntimeException CAUSE_EMPTY_HOST_TABLE = new RuntimeException(
+ "Empty host table.");
+
+ /**
+ * The service table is empty (no services).
+ */
+ private static final RuntimeException CAUSE_EMPTY_SERVICE_TABLE = new RuntimeException(
+ "Empty service table.");
+
+ /**
+ * The load balancing logic failed to select a host to handle the
+ * request.
+ */
+ private static final RuntimeException CAUSE_NO_HOST_SELECTED = new RuntimeException(
+ "No host selected for request.");
+
+ /**
+ * The load balancing logic failed to select a service to handle the
+ * request.
+ */
+ private static final RuntimeException CAUSE_NO_SERVICE_SELECTED = new RuntimeException(
+ "No service selected for request.");
+
+ /**
+ * @see InitParams#LOCAL_FORWARD_THRESHOLD
+ */
+ private final AtomicReference<Double> localForwardThresholdRef = new AtomicReference<Double>();
+
+ /**
+ * The rule used to score the {@link IHostMetrics}.
+ *
+ * @see InitParams#HOST_SCORING_RULE
+ */
+ private final AtomicReference<IHostScoringRule> scoringRuleRef = new AtomicReference<IHostScoringRule>();
+
+ /**
+ * The initial delay before the first discovery cycle that updates our local
+ * knowledge of the load on each host.
+ *
+ * @see InitParams#HOST_DISCOVERY_INITIAL_DELAY
+ */
+ private long hostDiscoveryInitialDelay = -1L;
+
+ /**
+ * The delay between discovery cycles that updates our local knowledge of
+ * the load on each host.
+ *
+ * @see InitParams#HOST_DISCOVERY_DELAY
+ */
+ private long hostDiscoveryDelay = -1L;
+
+ /**
+ * Random number generator used to load balance the read-requests.
+ */
+ private final Random rand = new Random();
+
+ /**
+ * The current {@link HostTable} data.
+ *
+ * @see #updateHostTable()
+ */
+ private final AtomicReference<HostTable> hostTableRef = new AtomicReference<HostTable>(
+ null);
+
+ /**
+ * The {@link Future} of a task that periodically queries the ganglia peer
+ * for its up to date host counters for each discovered host.
+ */
+ private ScheduledFuture<?> scheduledFuture;
+
+ /**
+     * Return the name of the {@link IHostScoringRule} that provides the
+     * default value for the {@link InitParams#HOST_SCORING_RULE} configuration
+ * parameter.
+ * <p>
+ * Note: The policy needs to be specific to the LBS implementation since the
+ * names of the host metrics depend on the system that is being used to
+ * collect and report them.
+ */
+ abstract protected String getDefaultScoringRule();
+
+ /**
+ * The delay between discovery cycles that updates our local knowledge of
+ * the load on each host.
+ *
+ * @see InitParams#HOST_DISCOVERY_DELAY
+ */
+ protected long getHostDiscoveryDelay() {
+
+ return hostDiscoveryDelay;
+
+ }
+
+ @Override
+ protected void toString(final StringBuilder sb) {
+
+ super.toString(sb);
+
+ sb.append(",localForwardThreshold=" + localForwardThresholdRef.get());
+
+ sb.append(",hostDiscoveryInitialDelay=" + hostDiscoveryInitialDelay);
+
+ sb.append(",hostDiscoveryDelay=" + hostDiscoveryDelay);
+
+ sb.append(",scoringRule=" + scoringRuleRef.get());
+
+ // report whether or not the scheduled future is still running.
+ {
+ final ScheduledFuture<?> tmp = scheduledFuture;
+ final boolean futureIsDone = tmp == null ? true : tmp.isDone();
+ sb.append(",scheduledFuture="
+ + (tmp == null ? "N/A"
+ : (futureIsDone ? "done" : "running")));
+ if (futureIsDone && tmp != null) {
+ // Check for error.
+ Throwable cause = null;
+ try {
+ tmp.get();
+ } catch (CancellationException ex) {
+ cause = ex;
+ } catch (ExecutionException ex) {
+ cause = ex;
+ } catch (InterruptedException ex) {
+ cause = ex;
+ }
+ if (cause != null) {
+ sb.append("(cause=" + cause + ")");
+ }
+ }
+ }
+
+ sb.append(",hostTable=" + hostTableRef.get());
+
+ }
+
+ public AbstractHostLBSPolicy() {
+ super();
+ }
+
+ @Override
+ public void init(final ServletConfig servletConfig,
+ final IIndexManager indexManager) throws ServletException {
+
+ super.init(servletConfig, indexManager);
+
+ final HAJournal journal = (HAJournal) indexManager;
+
+ if (journal.getPlatformStatisticsCollector() == null) {
+ // LBS requires platform stats to load balance requests.
+ throw new ServletException("LBS requires "
+ + PlatformStatsPlugIn.class.getName());
+ }
+
+ {
+
+ final String s = HALoadBalancerServlet.getConfigParam(
+ servletConfig, //
+ AbstractHostLBSPolicy.class,// owningClass
+ InitParams.LOCAL_FORWARD_THRESHOLD,
+ InitParams.DEFAULT_LOCAL_FORWARD_THRESHOLD);
+
+ final double d = Double.valueOf(s);
+
+ if (log.isInfoEnabled())
+ log.info(InitParams.LOCAL_FORWARD_THRESHOLD + "=" + d);
+
+ setLocalForwardThreshold(d);
+
+ }
+
+ {
+
+ scoringRuleRef.set(HALoadBalancerServlet.newInstance(//
+ servletConfig,//
+ AbstractHostLBSPolicy.class,// owningClass
+ IHostScoringRule.class, InitParams.HOST_SCORING_RULE,
+ getDefaultScoringRule()));
+
+ if (log.isInfoEnabled())
+ log.info(InitParams.HOST_SCORING_RULE + "="
+                        + scoringRuleRef.get().getClass().getName());
+
+ }
+
+ {
+
+ final String s = HALoadBalancerServlet.getConfigParam(
+ servletConfig, //
+ AbstractHostLBSPolicy.class,// owningClass
+ InitParams.HOST_DISCOVERY_INITIAL_DELAY,
+ InitParams.DEFAULT_HOST_DISCOVERY_INITIAL_DELAY);
+
+ hostDiscoveryInitialDelay = Long.valueOf(s);
+
+ if (log.isInfoEnabled())
+                log.info(InitParams.HOST_DISCOVERY_INITIAL_DELAY + "="
+                        + hostDiscoveryInitialDelay);
+
+ }
+
+ {
+
+ final String s = HALoadBalancerServlet.getConfigParam(//
+ servletConfig, //
+ AbstractHostLBSPolicy.class,// owningClass
+ InitParams.HOST_DISCOVERY_DELAY,//
+ InitParams.DEFAULT_HOST_DISCOVERY_DELAY);
+
+ hostDiscoveryDelay = Long.valueOf(s);
+
+ if (log.isInfoEnabled())
+ log.info(InitParams.HOST_DISCOVERY_DELAY + "="
+ + hostDiscoveryDelay);
+
+ }
+
+ /*
+ * Setup a scheduled task to discover and rank the hosts on a periodic
+ * basis.
+ */
+ scheduledFuture = ((Journal) indexManager).addScheduledTask(
+ new Runnable() {
+ @Override
+ public void run() {
+ try {
+ updateHostTable();
+ } catch (RuntimeException ex) {
+ if (InnerCause.isInnerCause(ex,
+ InterruptedException.class)) {
+ // Terminate if interrupted.
+ throw ex;
+ }
+ /*
+                             * Note: If the task throws an exception it will not
+ * be rescheduled, therefore log @ ERROR rather than
+ * allowing the unchecked exception to be
+ * propagated.
+ */
+ log.error(ex, ex);
+ }
+ }
+ }, hostDiscoveryInitialDelay, hostDiscoveryDelay,
+ TimeUnit.MILLISECONDS);
+
+ }
+
+ @Override
+ public void destroy() {
+
+ super.destroy();
+
+ localForwardThresholdRef.set(null);
+
+ scoringRuleRef.set(null);
+
+ hostTableRef.set(null);
+
+ if (scheduledFuture != null) {
+
+ scheduledFuture.cancel(true/* mayInterruptIfRunning */);
+
+ scheduledFuture = null;
+
+ }
+
+ }
+
+ public void setLocalForwardThreshold(final double newValue) {
+
+ if (newValue < 0 || newValue > 1)
+ throw new IllegalArgumentException();
+
+ localForwardThresholdRef.set(newValue);
+
+ }
+
+ /**
+ * Extended to conditionally update the {@link #hostTableRef} iff it does
+ * not exist or is empty.
+ */
+ @Override
+ protected void conditionallyUpdateServiceTable() {
+
+ super.conditionallyUpdateServiceTable();
+
+ final HostTable hostTable = hostTableRef.get();
+
+ final HostScore[] hostScores = hostTable == null ? null
+ : hostTable.hostScores;
+
+ if (hostScores == null || hostScores.length == 0) {
+
+ /*
+ * Synchronize so we do not do the same work for each concurrent
+ * request on a service start.
+ */
+
+ synchronized (hostTableRef) {
+
+ // Ensure that the host table exists.
+ updateHostTable();
+
+ }
+
+ }
+
+ }
+
+ /**
+ * Overridden to also update the hosts table in case we add/remove a service
+ * and the set of hosts that cover the member services is changed as a
+ * result.
+ */
+ @Override
+ protected void updateServiceTable() {
+
+ super.updateServiceTable();
+
+ updateHostTable();
+
+ }
+
+ /**
+ * Update the per-host scoring table. The host table will only contain
+ * entries for hosts associated with at least one service that is joined
+ * with the met quorum.
+ *
+ * @see #hostTableRef
+ */
+ protected void updateHostTable() {
+
+ // Snapshot of the per service scores.
+ final ServiceScore[] serviceScores = serviceTableRef.get();
+
+ // The scoring rule that will be applied for this update.
+ final IHostScoringRule scoringRule = scoringRuleRef.get();
+
+ if (serviceScores == null || serviceScores.length == 0
+ || scoringRule == null) {
+
+ /*
+ * No joined services?
+ *
+ * No scoring rule?
+ */
+
+ // clear the host table.
+ hostTableRef.set(null);
+
+ return;
+
+ }
+
+ // Obtain the host reports for those services.
+ final Map<String/* hostname */, IHostMetrics> hostMetricsMap = getHostReportForKnownServices(
+ scoringRule, serviceScores);
+
+ if (hostMetricsMap == null || hostMetricsMap.isEmpty()) {
+
+ // clear the host table.
+ hostTableRef.set(null);
+
+ return;
+
+ }
+
+ if (log.isTraceEnabled())
+ log.trace("hostMetricsMap=" + hostMetricsMap);
+
+ final HostTable newHostTable = normalizeHostScores(scoringRule,
+ hostMetricsMap);
+
+ if (log.isTraceEnabled())
+ log.trace("newHostTable=" + newHostTable);
+
+ // Set the host table.
+ hostTableRef.set(newHostTable);
+
+ }
+
+ /**
+ * Compute and return the normalized load across the known hosts.
+ * <p>
+ * Note: This needs to be done only for those hosts that are associated with
+ * the {@link Quorum} members. If we do it for the other hosts then the
+ * normalization is not meaningful since we will only load balance across
+ * the services that are joined with a met {@link Quorum}.
+ *
+ * @param scoringRule
+ * The {@link IHostScoringRule} used to integrate the per-host
+ * performance metrics.
+ * @param hostMetricsMap
+ * The per-host performance metrics for the known hosts.
+ *
+ * @return The normalized host workload.
+ */
+ private static HostTable normalizeHostScores(
+ final IHostScoringRule scoringRule,//
+ final Map<String/* hostname */, IHostMetrics> hostMetricsMap//
+ ) {
+
+ /*
+ * Compute the per-host scores and the total score across those hosts.
+ * This produces some dense arrays. The head of the array contains
+ * information about the hosts that are associated with known services
+ * for this HA replication cluster.
+ */
+
+ final int nhosts = hostMetricsMap.size();
+
+ final String[] hostnames = new String[nhosts];
+
+ final IHostMetrics[] metrics2 = new IHostMetrics[nhosts];
+
+ final double[] load = new double[nhosts];
+
+ double totalLoad = 0d;
+
+ {
+
+ /*
+ * TODO Since the scoring rule does not produce normalized host
+ * scores, we do not know how the NO_INFO will be ordered with
+ * respect to those hosts for which the scoring rule was
+ * successfully applied. This could be made to work either by
+ * flagging hosts without metrics or by pushing down the handling of
+ * a [null] metrics reference into the scoring rule, which would
+ * know how to return a "median" value.
+ */
+ final double NO_INFO = .5d;
+
+ int i = 0;
+
+ for (Map.Entry<String, IHostMetrics> e : hostMetricsMap.entrySet()) {
+
+ final String hostname = e.getKey();
+
+ assert hostname != null; // Note: map keys are never null.
+
+ final IHostMetrics metrics = e.getValue();
+
+ // flag host if no load information is available.
+ double hostScore = metrics == null ? NO_INFO : scoringRule
+ .getScore(metrics);
+
+ if (hostScore < 0) {
+
+ log.error("Negative score: " + hostname);
+
+ hostScore = NO_INFO;
+
+ }
+
+ hostnames[i] = hostname;
+
+ load[i] = hostScore;
+
+ metrics2[i] = metrics;
+
+ totalLoad += hostScore;
+
+ i++;
+
+ }
+
+ }
+
+ /*
+ * Convert from LOAD to AVAILABILITY.
+ *
+ * AVAILABILITY := TOTAL - LOAD[i]
+ *
+ * Note: The per-host metrics and scoring rule give us LOAD. However, we
+ * want to distribute the requests based on the inverse of the load,
+ * which is the AVAILABILITY to do more work.
+ */
+ double totalAvailability = 0;
+ double availability[] = new double[nhosts];
+ {
+ for (int i = 0; i < nhosts; i++) {
+
+ final double avail = availability[i] = totalLoad - load[i];
+
+ totalAvailability += avail;
+
+ }
+ }
+
+ /*
+ * Normalize the per-hosts scores.
+ */
+
+ HostScore thisHostScore = null;
+ final HostScore[] scores = new HostScore[nhosts];
+ {
+
+ for (int i = 0; i < nhosts; i++) {
+
+ final String hostname = hostnames[i];
+
+ final double normalizedAvailability;
+ if (totalAvailability == 0) {
+ // divide the work evenly.
+ normalizedAvailability = 1d / nhosts;
+ } else {
+ normalizedAvailability = availability[i]
+ / totalAvailability;
+ }
+
+ // Normalize host scores.
+ final HostScore hostScore = scores[i] = new HostScore(hostname,
+ normalizedAvailability);
+
+                if (thisHostScore == null && hostScore.isThisHost()) {
+
+ // The first score discovered for this host.
+ thisHostScore = hostScore;
+
+ }
+
+ if (log.isDebugEnabled())
+ log.debug("hostname=" + hostname + ", metrics="
+ + metrics2[i] + ", score=" + hostScore.getAvailability());
+
+ }
+
+ }
+
+// for (int i = 0; i < scores.length; i++) {
+//
+// scores[i].rank = i;
+//
+// scores[i].drank = ((double) i) / scores.length;
+//
+// }
+
+// // Sort into order by decreasing load.
+// Arrays.sort(scores);
+
+// if (scores.length > 0) {
+//
+// if (log.isDebugEnabled()) {
+//
+// log.debug("The most active index was: "
+// + scores[scores.length - 1]);
+//
+// log.debug("The least active index was: " + scores[0]);
+//
+// log.debug("This host: " + thisHostScore);
+//
+// }
+//
+// }
+
+ /*
+ * Sort into order by hostname (useful for /bigdata/status view).
+ *
+ * Note: The ordering is not really material to anything. Stochastic
+ * load balancing decisions are made without regard to this ordering.
+ */
+ Arrays.sort(scores, HostScore.COMPARE_BY_HOSTNAME);
+
+ return new HostTable(thisHostScore, scores);
+
+ }
+
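The LOAD-to-AVAILABILITY conversion and normalization in normalizeHostScores can be checked against a tiny worked example. A standalone sketch of just the arithmetic (not the class's actual code path, which also handles missing metrics via NO_INFO):

```java
// Worked example of the load -> availability normalization used above.
class NormalizeDemo {

    /** Convert per-host load scores into normalized availability scores. */
    public static double[] normalize(double[] load) {
        final int n = load.length;
        double totalLoad = 0d;
        for (double l : load)
            totalLoad += l;
        // AVAILABILITY := TOTAL - LOAD[i]
        final double[] availability = new double[n];
        double totalAvailability = 0d;
        for (int i = 0; i < n; i++) {
            availability[i] = totalLoad - load[i];
            totalAvailability += availability[i];
        }
        // Normalize; if there is no availability at all, divide the work evenly.
        final double[] normalized = new double[n];
        for (int i = 0; i < n; i++) {
            normalized[i] = totalAvailability == 0 ? 1d / n
                    : availability[i] / totalAvailability;
        }
        return normalized;
    }

    public static void main(String[] args) {
        // loads .1/.3/.6 => availabilities .9/.7/.4 => normalized .45/.35/.20
        final double[] norm = normalize(new double[] { 0.1, 0.3, 0.6 });
        if (Math.abs(norm[0] - 0.45) > 1e-9) throw new AssertionError();
        if (Math.abs(norm[1] - 0.35) > 1e-9) throw new AssertionError();
        if (Math.abs(norm[2] - 0.20) > 1e-9) throw new AssertionError();
        System.out.println("ok");
    }
}
```

The most loaded host (.6) ends up with the smallest share of new requests (.20), which is the inversion the comment block above describes.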
+ @Override
+ public String getReaderURI(final HttpServletRequest req) {
+
+ final HostTable hostTable = hostTableRef.get();
+
+ final HostScore[] hostScores = hostTable == null ? null
+ : hostTable.hostScores;
+
+ final ServiceScore[] serviceScores = serviceTableRef.get();
+
+ if (hostScores == null || hostScores.length == 0) {
+ // Can't do anything.
+ throw CAUSE_EMPTY_HOST_TABLE;
+ }
+
+ if (serviceScores == null) {
+ // No services.
+ throw CAUSE_EMPTY_SERVICE_TABLE;
+ }
+
+ final HostScore hostScore = getHost(rand.nextDouble(), hostScores);
+
+ if (hostScore == null) {
+ // None found.
+ throw CAUSE_NO_HOST_SELECTED;
+ }
+
+ final ServiceScore serviceScore = getService(rand, hostScore,
+ serviceScores);
+
+ if (serviceScore == null) {
+ // None found.
+ throw CAUSE_NO_SERVICE_SELECTED;
+ }
+
+ /*
+ * Track #of requests to each service.
+ *
+ * Note: ServiceScore.nrequests is incremented before we make the
+ * decision to do a local forward when the target is *this* host. This
+ * means that the /status page will still show the effect of the load
+ * balancer for local forwards. This is a deliberate decision.
+ */
+ serviceScore.nrequests.increment();
+
+ if (serviceScore.getServiceUUID().equals(serviceIDRef.get())) {
+ /*
+ * The target is *this* service. As an optimization, we return
+ * [null] so that the caller will perform a local forward (as part
+             * of its exception handling logic). The local forward has less
+ * latency than proxying to this service.
+ *
+ * Note: ServiceScore.nrequests *is* incremented before we make this
+ * decision so the /status page will still show the effect of the
+ * load balancer for local forwards. This is a deliberate decision.
+ */
+ return null;
+ }
+
+ // We will return the Request-URI for that service.
+ final String requestURI = serviceScore.getRequestURI();
+
+ return requestURI;
+
+ }
+
+ /**
+ * Stochastically select the target host based on the current host workload.
+ * <p>
+ * Note: This is package private in order to expose it to the test suite.
+ *
+ * @param d
+ * A random number in the half-open interval [0:1).
+ * @param hostScores
+ * The {@link HostScore}s.
+ *
+ * @return The {@link HostScore} of the host to which a read request should
+ * be proxied -or- <code>null</code> if the request should not be
+ * proxied (because we lack enough information to identify a target
+ * host).
+ *
+ * @see bigdata/src/resources/architecture/HA_LBS.xls
+ */
+ static HostScore getHost(//
+ final double d, //
+ final HostScore[] hostScores
+ ) {
+
+ if (d < 0 || d >= 1d)
+ throw new IllegalArgumentException();
+
+ if (hostScores == null)
+ throw new IllegalArgumentException();
+
+ /*
+ * Stochastically select the target host based on the current host
+ * workload.
+ *
+ * Note: The host is selected with a probability that is INVERSELY
+ * proportional to normalized host load. If the normalized host load is
+ * .75, then the host is selected with a probability of .25.
+ *
+ * Note: We need to ignore any host that does not have a service
+ * that is joined with the met quorum....
+ */
+ HostScore hostScore = null;
+ {
+ if (hostScores.length == 1) {
+ /*
+ * Only one host.
+ */
+ hostScore = hostScores[0];
+ } else {
+ /*
+ * Multiple hosts.
+ *
+ * Note: Choice is inversely proportional to normalized workload
+ * (1 - load).
+ */
+ double sum = 0d;
+ for (HostScore tmp : hostScores) {
+ hostScore = tmp;
+ sum += hostScore.getAvailability();
+ if (sum >= d) {
+ // found desired host.
+ break;
+ }
+ // scan further.
+ }
+ }
+
+ }
+
+ return hostScore;
+
+ }
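The inverse-load scan in getHost() above can be illustrated with a small standalone sketch (a hypothetical class, not part of this commit; it assumes the availability values have already been normalized so they sum to 1.0, which is what the scan's cumulative-sum termination relies on):

```java
// Hypothetical sketch of the weighted host selection: walk the hosts,
// accumulating availability, and stop at the first host whose cumulative
// availability reaches the random draw d in [0:1).
public class WeightedHostPick {

    /** Return the index of the host selected for the draw d. */
    static int pick(final double d, final double[] availability) {
        double sum = 0d;
        for (int i = 0; i < availability.length; i++) {
            sum += availability[i];
            if (sum >= d) return i; // found desired host.
        }
        return availability.length - 1; // guard against rounding error.
    }

    public static void main(String[] args) {
        /*
         * Three hosts with normalized loads .75, .50, .75 have
         * availabilities proportional to .25, .50, .25: the middle,
         * least-loaded host is chosen for half of the draws.
         */
        final double[] a = { 0.25, 0.50, 0.25 };
        assert pick(0.10, a) == 0;
        assert pick(0.50, a) == 1;
        assert pick(0.90, a) == 2;
        System.out.println("ok");
    }
}
```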
+
+ /**
+ * Stochastically select the target service on the given host.
+ * <p>
+ * Note: There can be multiple services on the same host. However, this
+ * mostly happens in CI. Normal deployment allocates only one service per
+ * host.
+ * <p>
+ * Note: This is package private in order to expose it to the test suite.
+ *
+ * @param rand
+ * A random number generator.
+ * @param hostScore
+ * The {@link HostScore} of the selected host.
+ * @param serviceScores
+ * The {@link ServiceScore}s for the joined services.
+ *
+ * @return The {@link ServiceScore} of a service on the host identified by
+ * the caller to which a read request should be proxied -or-
+ * <code>null</code> if the request should not be proxied (because
+ * we lack enough information to identify a target service on that
+ * host).
+ *
+ * TODO Optimize the lookup of the services on a given host. The
+ * trick is making sure that this mapping remains consistent as
+ * services join/leave. However, the normal case is one service per
+ * host. For that case, this loop wastes the maximum effort since it
+ * scans all services.
+ */
+ static ServiceScore getService(//
+ final Random rand, //
+ final HostScore hostScore,//
+ final ServiceScore[] serviceScores//
+ ) {
+
+ // The set of services on the given host.
+ final List<ServiceScore> foundServices = new LinkedList<ServiceScore>();
+
+ for (ServiceScore tmp : serviceScores) {
+
+ if (tmp == null) // should never happen.
+ continue;
+
+ if (tmp.getRequestURI() == null) // can't proxy.
+ continue;
+
+ if (hostScore.getHostname().equals(tmp.getHostname())) {
+
+ // Found a joined service on that host.
+ foundServices.add(tmp);
+
+ }
+
+ }
+
+ /*
+ * Report a service on that host. If there is more than one, then we
+ * choose the service randomly.
+ */
+ final int nservices = foundServices.size();
+
+ if (nservices == 0) {
+ // Can't find a service.
+ log.warn("No services on host: hostname=" + hostScore.getHostname());
+ return null;
+ }
+
+ // Figure out which service to use.
+ final int n = rand.nextInt(nservices);
+
+ final ServiceScore serviceScore = foundServices.get(n);
+
+ return serviceScore;
+
+ }
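The two-step logic of getService() above (filter the joined services down to those on the chosen host, skipping any that cannot be proxied, then pick one at random) can be sketched as follows; the class, the {hostname, requestURI} pair encoding, and the sample URIs are all hypothetical:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Hypothetical sketch of per-host service selection.
public class ServicePick {

    /** Return the request URI of a random proxyable service on the host, or null. */
    static String pickService(final Random rand, final String hostname,
                              final String[][] services /* {hostname, requestURI} */) {
        final List<String> found = new ArrayList<String>();
        for (String[] s : services) {
            if (s == null || s[1] == null) continue; // no request URI: can't proxy.
            if (hostname.equals(s[0])) found.add(s[1]); // joined service on that host.
        }
        if (found.isEmpty()) return null; // no service found on that host.
        return found.get(rand.nextInt(found.size())); // choose randomly among them.
    }

    public static void main(String[] args) {
        final String[][] services = {
            { "hostA", "http://hostA:9999/bigdata" },
            { "hostB", "http://hostB:9999/bigdata" },
            { "hostB", null }, // joined but not yet proxyable.
        };
        assert "http://hostA:9999/bigdata".equals(
                pickService(new Random(), "hostA", services));
        assert pickService(new Random(), "hostC", services) == null;
        System.out.println("ok");
    }
}
```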
+
+ /**
+ * {@inheritDoc}
+ * <p>
+ * If the normalized availability for this host is over a configured
+ * threshold, then we forward the request to the local service. This helps to
+ * reduce the latency of the request since it is not being proxied.
+ *
+ * @see InitParams#LOCAL_FORWARD_THRESHOLD
+ */
+ @Override
+ protected boolean conditionallyForwardReadRequest(
+ final HALoadBalancerServlet servlet,//
+ final HttpServletRequest request, //
+ final HttpServletResponse response//
+ ) throws IOException {
+
+ final HostTable hostTable = hostTableRef.get();
+
+ final HostScore thisHostScore = hostTable == null ? null
+ : hostTable.thisHost;
+
+ if (thisHostScore != null
+ && thisHostScore.getAvailability() >= localForwardThresholdRef.get()) {
+
+ servlet.forwardToLocalService(false/* isLeaderRequest */, request,
+ response);
+
+ // request was handled.
+ return true;
+
+ }
+
+ return false;
+
+ }
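The threshold test that conditionallyForwardReadRequest() applies above reduces to a small predicate; this standalone sketch is hypothetical (the real method consults the host table and delegates the actual forward to the servlet):

```java
// Hypothetical sketch of the local-forward decision: handle the request
// locally when this host's normalized availability meets the threshold.
public class LocalForwardCheck {

    static boolean shouldForwardLocally(final Double availability,
                                        final double threshold) {
        // A null availability means no host score is available yet.
        return availability != null && availability >= threshold;
    }

    public static void main(String[] args) {
        assert shouldForwardLocally(0.8, 0.5);   // plenty of headroom: handle locally.
        assert !shouldForwardLocally(0.2, 0.5);  // loaded host: proxy elsewhere.
        assert !shouldForwardLocally(null, 0.5); // no score yet: do not forward.
        System.out.println("ok");
    }
}
```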
+
+ /**
+ * Return a map from the known canonical hostnames (as self-reported by the
+ * services) of the joined services to the {@link IHostMetrics}s for those
+ * hosts.
+ *
+ * @param scoringRule
+ * The {@link IHostScoringRule} to be applied.
+ * @param serviceScores
+ * The set of known services.
+ *
+ * @return The map.
+ *
+ * TODO If there is more than one service on the same host, then we
+ * will have one record per host, not per service. This means that
+ * we can not support JVM specific metrics, such as GC time. This
+ * could be fixed if the map was indexed by Service {@link UUID} and
+ * the host metrics were combined into the map once for each
+ * service.
+ */
+ abstract protected Map<String, IHostMetrics> getHostReportForKnownServices(
+ IHostScoringRule scoringRule, ServiceScore[] serviceScores);
+
+}
\ No newline at end of file
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/lbs/AbstractHostMetrics.java
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/lbs/AbstractHostMetrics.java (rev 0)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/lbs/AbstractHostMetrics.java 2014-05-21 09:59:33 UTC (rev 8393)
...
[truncated message content]

From: <dme...@us...> - 2014-05-21 09:59:06
Revision: 8392
http://sourceforge.net/p/bigdata/code/8392
Author: dmekonnen
Date: 2014-05-21 09:59:04 +0000 (Wed, 21 May 2014)
Log Message:
-----------
commit to fix failed sync.
Added Paths:
-----------
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/WEB-INF/GraphStore.properties
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/WEB-INF/override-web.xml
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/html/css/vendor/
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/html/css/vendor/codemirror.css
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/html/js/vendor/cm-addons/
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/html/js/vendor/cm-addons/placeholder.js
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/html/js/vendor/cm-modes/
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/html/js/vendor/cm-modes/javascript.js
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/html/js/vendor/cm-modes/ntriples.js
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/html/js/vendor/cm-modes/sparql.js
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/html/js/vendor/cm-modes/turtle.js
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/html/js/vendor/cm-modes/xml.js
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/html/js/vendor/codemirror.js
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/WEB-INF/GraphStore.properties
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/WEB-INF/GraphStore.properties (rev 0)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/WEB-INF/GraphStore.properties 2014-05-21 09:59:04 UTC (rev 8392)
@@ -0,0 +1,40 @@
+#
+# Note: These options are applied when the journal and the triple store are
+# first created.
+
+##
+## Journal options.
+##
+
+# The backing file. This contains all your data. You want to put this someplace
+# safe. The default locator will wind up in the directory from which you start
+# your servlet container.
+com.bigdata.journal.AbstractJournal.file=bigdata.jnl
+
+# The persistence engine. Use 'Disk' for the WORM or 'DiskRW' for the RWStore.
+com.bigdata.journal.AbstractJournal.bufferMode=DiskRW
+
+# Setup for the RWStore recycler rather than session protection.
+com.bigdata.service.AbstractTransactionService.minReleaseAge=1
+
+com.bigdata.btree.writeRetentionQueue.capacity=4000
+com.bigdata.btree.BTree.branchingFactor=128
+
+# 200M initial extent.
+com.bigdata.journal.AbstractJournal.initialExtent=209715200
+com.bigdata.journal.AbstractJournal.maximumExtent=209715200
+
+##
+## Setup for triples mode with the full text index.
+##
+com.bigdata.rdf.sail.truthMaintenance=false
+com.bigdata.rdf.store.AbstractTripleStore.quads=false
+com.bigdata.rdf.store.AbstractTripleStore.statementIdentifiers=false
+com.bigdata.rdf.store.AbstractTripleStore.textIndex=true
+com.bigdata.rdf.store.AbstractTripleStore.axiomsClass=com.bigdata.rdf.axioms.NoAxioms
+
+# Bump up the branching factor for the lexicon indices on the default kb.
+com.bigdata.namespace.kb.lex.com.bigdata.btree.BTree.branchingFactor=400
+
+# Bump up the branching factor for the statement indices on the default kb.
+com.bigdata.namespace.kb.spo.com.bigdata.btree.BTree.branchingFactor=1024
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/WEB-INF/override-web.xml
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/WEB-INF/override-web.xml (rev 0)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/WEB-INF/override-web.xml 2014-05-21 09:59:04 UTC (rev 8392)
@@ -0,0 +1,100 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<web-app xmlns="http://java.sun.com/xml/ns/javaee"
+ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_1.xsd"
+ version="3.1">
+ <servlet>
+ <servlet-name>Load Balancer</servlet-name>
+ <description>
+ The HA Load Balancer servlet provides a transparent proxy for
+ requests arriving at its configured URL pattern (the "external"
+ interface for the load balancer) to the root of the web
+ application.
+
+ The use of the load balancer is entirely optional. If the
+ security rules permit, then clients MAY make requests directly
+ against a specific service. Thus, no specific provision exists
+ to disable the load balancer servlet, but you may choose not to
+ deploy it.
+
+ When successfully deployed, requests having a prefix corresponding to
+ the URL pattern for the load balancer are automatically redirected
+ to a joined service in the met quorum based on the configured load
+ balancer policy.
+
+ Requests directed to /bigdata/LBS/leader are proxied to the quorum
+ leader - this URL must be used for non-idempotent requests
+ (updates).
+
+ Requests directed to /bigdata/LBS/read are load balanced over the
+ services joined with the met quorum. This URL may only be used
+ with idempotent requests (reads).
+
+ For non-HA deployments, requests are simply forwarded to the local
+ service after stripping off the /LBS/leader or /LBS/read prefix.
+ Thus, it is always safe to use the LBS request URLs.
+
+ The load balancer policies are "HA aware." They will always
+ redirect update requests to the quorum leader. The default
+ polices will load balance read requests over the leader and
+ followers in a manner that reflects the CPU, IO Wait, and GC
+ Time associated with each service. The PlatformStatsPlugIn
+ and GangliaPlugIn MUST be enabled for the default load
+ balancer policy to operate. It depends on those plugins to
+ maintain a model of the load on the HA replication cluster.
+ The GangliaPlugIn should be run only as a listener if you
+ are running the real gmond process on the host. If you are
+ not running gmond, then the GangliaPlugIn should be configured
+ as both a listener and a sender.
+ </description>
+ <servlet-class>com.bigdata.rdf.sail.webapp.HALoadBalancerServlet</servlet-class>
+ <load-on-startup>1</load-on-startup>
+ <async-supported>true</async-supported>
+ <init-param>
+ <param-name>policy</param-name>
+ <param-value>com.bigdata.rdf.sail.webapp.lbs.policy.RoundRobinLBSPolicy</param-value>
+ <description>
+ The load balancer policy. This must be an instance of the
+ IHALoadBalancerPolicy interface. A default policy (NOPLBSPolicy) is
+ used when no value is specified.
+
+ The policies differ ONLY in how they handle READ requests. All policies
+ proxy updates to the leader. If you do not want update proxying, then
+ use a URL that does not address the HALoadBalancerServlet.
+
+ The following policies are pre-defined:
+
+ com.bigdata.rdf.sail.webapp.lbs.policy.NOPLBSPolicy:
+
+ Does not load balance read requests.
+
+ com.bigdata.rdf.sail.webapp.lbs.policy.RoundRobinLBSPolicy:
+
+ Round robin for read requests.
+
+ com.bigdata.rdf.sail.webapp.lbs.policy.counters.CountersLBSPolicy:
+
+ Load based proxying for read requests using the built-in http
+ service for reporting performance counters. This policy requires
+ the PlatformStatsPlugIn and may also require platform specific
+ metrics collection dependencies, e.g., sysstat.
+
+ com.bigdata.rdf.sail.webapp.lbs.policy.ganglia.GangliaLBSPolicy:
+
+ Load based proxying for read requests using ganglia. This policy
+ requires the PlatformStatsPlugIn. In addition, either
+ gmond must be installed on each node or the embedded GangliaService
+ must be enabled such that performance metrics are collected and
+ reported.
+
+ Some of these policies can be further configured using additional
+ init-param elements that they understand. See the javadoc for the
+ individual policies for more information.
+ </description>
+ </init-param>
+ </servlet>
+ <servlet-mapping>
+ <servlet-name>Load Balancer</servlet-name>
+ <url-pattern>/LBS/*</url-pattern>
+ </servlet-mapping>
+</web-app>
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/html/css/vendor/codemirror.css
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/html/css/vendor/codemirror.css (rev 0)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/html/css/vendor/codemirror.css 2014-05-21 09:59:04 UTC (rev 8392)
@@ -0,0 +1,272 @@
+/* BASICS */
+
+.CodeMirror {
+ /* Set height, width, borders, and global font properties here */
+ font-family: monospace;
+ height: 300px;
+}
+.CodeMirror-scroll {
+ /* Set scrolling behaviour here */
+ overflow: auto;
+}
+
+/* PADDING */
+
+.CodeMirror-lines {
+ padding: 4px 0; /* Vertical padding around content */
+}
+.CodeMirror pre {
+ padding: 0 4px; /* Horizontal padding of content */
+}
+
+.CodeMirror-scrollbar-filler, .CodeMirror-gutter-filler {
+ background-color: white; /* The little square between H and V scrollbars */
+}
+
+/* GUTTER */
+
+.CodeMirror-gutters {
+ border-right: 1px solid #ddd;
+ background-color: #f7f7f7;
+ white-space: nowrap;
+}
+.CodeMirror-linenumbers {}
+.CodeMirror-linenumber {
+ padding: 0 3px 0 5px;
+ min-width: 20px;
+ text-align: right;
+ color: #999;
+ -moz-box-sizing: content-box;
+ box-sizing: content-box;
+}
+
+/* CURSOR */
+
+.CodeMirror div.CodeMirror-cursor {
+ border-left: 1px solid black;
+}
+/* Shown when moving in bi-directional text */
+.CodeMirror div.CodeMirror-secondarycursor {
+ border-left: 1px solid silver;
+}
+.CodeMirror.cm-keymap-fat-cursor div.CodeMirror-cursor {
+ width: auto;
+ border: 0;
+ background: #7e7;
+}
+/* Can style cursor different in overwrite (non-insert) mode */
+div.CodeMirror-overwrite div.CodeMirror-cursor {}
+
+.cm-tab { display: inline-block; }
+
+.CodeMirror-ruler {
+ border-left: 1px solid #ccc;
+ position: absolute;
+}
+
+/* DEFAULT THEME */
+
+.cm-s-default .cm-keyword {color: #708;}
+.cm-s-default .cm-atom {color: #219;}
+.cm-s-default .cm-number {color: #164;}
+.cm-s-default .cm-def {color: #00f;}
+.cm-s-default .cm-variable,
+.cm-s-default .cm-punctuation,
+.cm-s-default .cm-property,
+.cm-s-default .cm-operator {}
+.cm-s-default .cm-variable-2 {color: #05a;}
+.cm-s-default .cm-variable-3 {color: #085;}
+.cm-s-default .cm-comment {color: #a50;}
+.cm-s-default .cm-string {color: #a11;}
+.cm-s-default .cm-string-2 {color: #f50;}
+.cm-s-default .cm-meta {color: #555;}
+.cm-s-default .cm-qualifier {color: #555;}
+.cm-s-default .cm-builtin {color: #30a;}
+.cm-s-default .cm-bracket {color: #997;}
+.cm-s-default .cm-tag {color: #170;}
+.cm-s-default .cm-attribute {color: #00c;}
+.cm-s-default .cm-header {color: blue;}
+.cm-s-default .cm-quote {color: #090;}
+.cm-s-default .cm-hr {color: #999;}
+.cm-s-default .cm-link {color: #00c;}
+
+.cm-negative {color: #d44;}
+.cm-positive {color: #292;}
+.cm-header, .cm-strong {font-weight: bold;}
+.cm-em {font-style: italic;}
+.cm-link {text-decoration: underline;}
+
+.cm-s-default .cm-error {color: #f00;}
+.cm-invalidchar {color: #f00;}
+
+div.CodeMirror span.CodeMirror-matchingbracket {color: #0f0;}
+div.CodeMirror span.CodeMirror-nonmatchingbracket {color: #f22;}
+.CodeMirror-activeline-background {background: #e8f2ff;}
+
+/* STOP */
+
+/* The rest of this file contains styles related to the mechanics of
+ the editor. You probably shouldn't touch them. */
+
+.CodeMirror {
+ line-height: 1;
+ position: relative;
+ overflow: hidden;
+ background: white;
+ color: black;
+}
+
+.CodeMirror-scroll {
+ /* 30px is the magic margin used to hide the element's real scrollbars */
+ /* See overflow: hidden in .CodeMirror */
+ margin-bottom: -30px; margin-right: -30px;
+ padding-bottom: 30px;
+ height: 100%;
+ outline: none; /* Prevent dragging from highlighting the element */
+ position: relative;
+ -moz-box-sizing: content-box;
+ box-sizing: content-box;
+}
+.CodeMirror-sizer {
+ position: relative;
+ border-right: 30px solid transparent;
+ -moz-box-sizing: content-box;
+ box-sizing: content-box;
+}
+
+/* The fake, visible scrollbars. Used to force redraw during scrolling
+ before actual scrolling happens, thus preventing shaking and
+ flickering artifacts. */
+.CodeMirror-vscrollbar, .CodeMirror-hscrollbar, .CodeMirror-scrollbar-filler, .CodeMirror-gutter-filler {
+ position: absolute;
+ z-index: 6;
+ display: none;
+}
+.CodeMirror-vscrollbar {
+ right: 0; top: 0;
+ overflow-x: hidden;
+ overflow-y: scroll;
+}
+.CodeMirror-hscrollbar {
+ bottom: 0; left: 0;
+ overflow-y: hidden;
+ overflow-x: scroll;
+}
+.CodeMirror-scrollbar-filler {
+ right: 0; bottom: 0;
+}
+.CodeMirror-gutter-filler {
+ left: 0; bottom: 0;
+}
+
+.CodeMirror-gutters {
+ position: absolute; left: 0; top: 0;
+ padding-bottom: 30px;
+ z-index: 3;
+}
+.CodeMirror-gutter {
+ white-space: normal;
+ height: 100%;
+ -moz-box-sizing: content-box;
+ box-sizing: content-box;
+ padding-bottom: 30px;
+ margin-bottom: -32px;
+ display: inline-block;
+ /* Hack to make IE7 behave */
+ *zoom:1;
+ *display:inline;
+}
+.CodeMirror-gutter-elt {
+ position: absolute;
+ cursor: default;
+ z-index: 4;
+}
+
+.CodeMirror-lines {
+ cursor: text;
+}
+.CodeMirror pre {
+ /* Reset some styles that the rest of the page might have set */
+ -moz-border-radius: 0; -webkit-border-radius: 0; border-radius: 0;
+ border-width: 0;
+ background: transparent;
+ font-family: inherit;
+ font-size: inherit;
+ margin: 0;
+ white-space: pre;
+ word-wrap: normal;
+ line-height: inherit;
+ color: inherit;
+ z-index: 2;
+ position: relative;
+ overflow: visible;
+}
+.CodeMirror-wrap pre {
+ word-wrap: break-word;
+ white-space: pre-wrap;
+ word-break: normal;
+}
+
+.CodeMirror-linebackground {
+ position: absolute;
+ left: 0; right: 0; top: 0; bottom: 0;
+ z-index: 0;
+}
+
+.CodeMirror-linewidget {
+ position: relative;
+ z-index: 2;
+ overflow: auto;
+}
+
+.CodeMirror-widget {}
+
+.CodeMirror-wrap .CodeMirror-scroll {
+ overflow-x: hidden;
+}
+
+.CodeMirror-measure {
+ position: absolute;
+ width: 100%;
+ height: 0;
+ overflow: hidden;
+ visibility: hidden;
+}
+.CodeMirror-measure pre { position: static; }
+
+.CodeMirror div.CodeMirror-cursor {
+ position: absolute;
+ border-right: none;
+ width: 0;
+}
+
+div.CodeMirror-cursors {
+ visibility: hidden;
+ position: relative;
+ z-index: 1;
+}
+.CodeMirror-focused div.CodeMirror-cursors {
+ visibility: visible;
+}
+
+.CodeMirror-selected { background: #d9d9d9; }
+.CodeMirror-focused .CodeMirror-selected { background: #d7d4f0; }
+.CodeMirror-crosshair { cursor: crosshair; }
+
+.cm-searching {
+ background: #ffa;
+ background: rgba(255, 255, 0, .4);
+}
+
+/* IE7 hack to prevent it from returning funny offsetTops on the spans */
+.CodeMirror span { *vertical-align: text-bottom; }
+
+/* Used to force a border model for a node */
+.cm-force-border { padding-right: .1px; }
+
+@media print {
+ /* Hide the cursor when printing */
+ .CodeMirror div.CodeMirror-cursors {
+ visibility: hidden;
+ }
+}
Property changes on: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/html/css/vendor/codemirror.css
___________________________________________________________________
Added: svn:executable
## -0,0 +1 ##
+*
\ No newline at end of property
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/html/js/vendor/cm-addons/placeholder.js
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/html/js/vendor/cm-addons/placeholder.js (rev 0)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/html/js/vendor/cm-addons/placeholder.js 2014-05-21 09:59:04 UTC (rev 8392)
@@ -0,0 +1,55 @@
+(function(mod) {
+ if (typeof exports == "object" && typeof module == "object") // CommonJS
+ mod(require("../../lib/codemirror"));
+ else if (typeof define == "function" && define.amd) // AMD
+ define(["../../lib/codemirror"], mod);
+ else // Plain browser env
+ mod(CodeMirror);
+})(function(CodeMirror) {
+ CodeMirror.defineOption("placeholder", "", function(cm, val, old) {
+ var prev = old && old != CodeMirror.Init;
+ if (val && !prev) {
+ cm.on("blur", onBlur);
+ cm.on("change", onChange);
+ onChange(cm);
+ } else if (!val && prev) {
+ cm.off("blur", onBlur);
+ cm.off("change", onChange);
+ clearPlaceholder(cm);
+ var wrapper = cm.getWrapperElement();
+ wrapper.className = wrapper.className.replace(" CodeMirror-empty", "");
+ }
+
+ if (val && !cm.hasFocus()) onBlur(cm);
+ });
+
+ function clearPlaceholder(cm) {
+ if (cm.state.placeholder) {
+ cm.state.placeholder.parentNode.removeChild(cm.state.placeholder);
+ cm.state.placeholder = null;
+ }
+ }
+ function setPlaceholder(cm) {
+ clearPlaceholder(cm);
+ var elt = cm.state.placeholder = document.createElement("pre");
+ elt.style.cssText = "height: 0; overflow: visible";
+ elt.className = "CodeMirror-placeholder";
+ elt.appendChild(document.createTextNode(cm.getOption("placeholder")));
+ cm.display.lineSpace.insertBefore(elt, cm.display.lineSpace.firstChild);
+ }
+
+ function onBlur(cm) {
+ if (isEmpty(cm)) setPlaceholder(cm);
+ }
+ function onChange(cm) {
+ var wrapper = cm.getWrapperElement(), empty = isEmpty(cm);
+ wrapper.className = wrapper.className.replace(" CodeMirror-empty", "") + (empty ? " CodeMirror-empty" : "");
+
+ if (empty) setPlaceholder(cm);
+ else clearPlaceholder(cm);
+ }
+
+ function isEmpty(cm) {
+ return (cm.lineCount() === 1) && (cm.getLine(0) === "");
+ }
+});
Property changes on: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/html/js/vendor/cm-addons/placeholder.js
___________________________________________________________________
Added: svn:executable
## -0,0 +1 ##
+*
\ No newline at end of property
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/html/js/vendor/cm-modes/javascript.js
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/html/js/vendor/cm-modes/javascript.js (rev 0)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/html/js/vendor/cm-modes/javascript.js 2014-05-21 09:59:04 UTC (rev 8392)
@@ -0,0 +1,660 @@
+// TODO actually recognize syntax of TypeScript constructs
+
+(function(mod) {
+ if (typeof exports == "object" && typeof module == "object") // CommonJS
+ mod(require("../../lib/codemirror"));
+ else if (typeof define == "function" && define.amd) // AMD
+ define(["../../lib/codemirror"], mod);
+ else // Plain browser env
+ mod(CodeMirror);
+})(function(CodeMirror) {
+"use strict";
+
+CodeMirror.defineMode("javascript", function(config, parserConfig) {
+ var indentUnit = config.indentUnit;
+ var statementIndent = parserConfig.statementIndent;
+ var jsonldMode = parserConfig.jsonld;
+ var jsonMode = parserConfig.json || jsonldMode;
+ var isTS = parserConfig.typescript;
+
+ // Tokenizer
+
+ var keywords = function(){
+ function kw(type) {return {type: type, style: "keyword"};}
+ var A = kw("keyword a"), B = kw("keyword b"), C = kw("keyword c");
+ var operator = kw("operator"), atom = {type: "atom", style: "atom"};
+
+ var jsKeywords = {
+ "if": kw("if"), "while": A, "with": A, "else": B, "do": B, "try": B, "finally": B,
+ "return": C, "break": C, "continue": C, "new": C, "delete": C, "throw": C, "debugger": C,
+ "var": kw("var"), "const": kw("var"), "let": kw("var"),
+ "function": kw("function"), "catch": kw("catch"),
+ "for": kw("for"), "switch": kw("switch"), "case": kw("case"), "default": kw("default"),
+ "in": operator, "typeof": operator, "instanceof": operator,
+ "true": atom, "false": atom, "null": atom, "undefined": atom, "NaN": atom, "Infinity": atom,
+ "this": kw("this"), "module": kw("module"), "class": kw("class"), "super": kw("atom"),
+ "yield": C, "export": kw("export"), "import": kw("import"), "extends": C
+ };
+
+ // Extend the 'normal' keywords with the TypeScript language extensions
+ if (isTS) {
+ var type = {type: "variable", style: "variable-3"};
+ var tsKeywords = {
+ // object-like things
+ "interface": kw("interface"),
+ "extends": kw("extends"),
+ "constructor": kw("constructor"),
+
+ // scope modifiers
+ "public": kw("public"),
+ "private": kw("private"),
+ "protected": kw("protected"),
+ "static": kw("static"),
+
+ // types
+ "string": type, "number": type, "bool": type, "any": type
+ };
+
+ for (var attr in tsKeywords) {
+ jsKeywords[attr] = tsKeywords[attr];
+ }
+ }
+
+ return jsKeywords;
+ }();
+
+ var isOperatorChar = /[+\-*&%=<>!?|~^]/;
+ var isJsonldKeyword = /^@(context|id|value|language|type|container|list|set|reverse|index|base|vocab|graph)"/;
+
+ function readRegexp(stream) {
+ var escaped = false, next, inSet = false;
+ while ((next = stream.next()) != null) {
+ if (!escaped) {
+ if (next == "/" && !inSet) return;
+ if (next == "[") inSet = true;
+ else if (inSet && next == "]") inSet = false;
+ }
+ escaped = !escaped && next == "\\";
+ }
+ }
+
+ // Used as scratch variables to communicate multiple values without
+ // consing up tons of objects.
+ var type, content;
+ function ret(tp, style, cont) {
+ type = tp; content = cont;
+ return style;
+ }
+ function tokenBase(stream, state) {
+ var ch = stream.next();
+ if (ch == '"' || ch == "'") {
+ state.tokenize = tokenString(ch);
+ return state.tokenize(stream, state);
+ } else if (ch == "." && stream.match(/^\d+(?:[eE][+\-]?\d+)?/)) {
+ return ret("number", "number");
+ } else if (ch == "." && stream.match("..")) {
+ return ret("spread", "meta");
+ } else if (/[\[\]{}\(\),;\:\.]/.test(ch)) {
+ return ret(ch);
+ } else if (ch == "=" && stream.eat(">")) {
+ return ret("=>", "operator");
+ } else if (ch == "0" && stream.eat(/x/i)) {
+ stream.eatWhile(/[\da-f]/i);
+ return ret("number", "number");
+ } else if (/\d/.test(ch)) {
+ stream.match(/^\d*(?:\.\d*)?(?:[eE][+\-]?\d+)?/);
+ return ret("number", "number");
+ } else if (ch == "/") {
+ if (stream.eat("*")) {
+ state.tokenize = tokenComment;
+ return tokenComment(stream, state);
+ } else if (stream.eat("/")) {
+ stream.skipToEnd();
+ return ret("comment", "comment");
+ } else if (state.lastType == "operator" || state.lastType == "keyword c" ||
+ state.lastType == "sof" || /^[\[{}\(,;:]$/.test(state.lastType)) {
+ readRegexp(stream);
+ stream.eatWhile(/[gimy]/); // 'y' is "sticky" option in Mozilla
+ return ret("regexp", "string-2");
+ } else {
+ stream.eatWhile(isOperatorChar);
+ return ret("operator", "operator", stream.current());
+ }
+ } else if (ch == "`") {
+ state.tokenize = tokenQuasi;
+ return tokenQuasi(stream, state);
+ } else if (ch == "#") {
+ stream.skipToEnd();
+ return ret("error", "error");
+ } else if (isOperatorChar.test(ch)) {
+ stream.eatWhile(isOperatorChar);
+ return ret("operator", "operator", stream.current());
+ } else {
+ stream.eatWhile(/[\w\$_]/);
+ var word = stream.current(), known = keywords.propertyIsEnumerable(word) && keywords[word];
+ return (known && state.lastType != ".") ? ret(known.type, known.style, word) :
+ ret("variable", "variable", word);
+ }
+ }
+
+ function tokenString(quote) {
+ return function(stream, state) {
+ var escaped = false, next;
+ if (jsonldMode && stream.peek() == "@" && stream.match(isJsonldKeyword)){
+ state.tokenize = tokenBase;
+ return ret("jsonld-keyword", "meta");
+ }
+ while ((next = stream.next()) != null) {
+ if (next == quote && !escaped) break;
+ escaped = !escaped && next == "\\";
+ }
+ if (!escaped) state.tokenize = tokenBase;
+ return ret("string", "string");
+ };
+ }
+
+ function tokenComment(stream, state) {
+ var maybeEnd = false, ch;
+ while (ch = stream.next()) {
+ if (ch == "/" && maybeEnd) {
+ state.tokenize = tokenBase;
+ break;
+ }
+ maybeEnd = (ch == "*");
+ }
+ return ret("comment", "comment");
+ }
+
+ function tokenQuasi(stream, state) {
+ var escaped = false, next;
+ while ((next = stream.next()) != null) {
+ if (!escaped && (next == "`" || next == "$" && stream.eat("{"))) {
+ state.tokenize = tokenBase;
+ break;
+ }
+ escaped = !escaped && next == "\\";
+ }
+ return ret("quasi", "string-2", stream.current());
+ }
+
+ var brackets = "([{}])";
+ // This is a crude lookahead trick to try and notice that we're
+ // parsing the argument patterns for a fat-arrow function before we
+ // actually hit the arrow token. It only works if the arrow is on
+ // the same line as the arguments and there's no strange noise
+ // (comments) in between. Fallback is to only notice when we hit the
+ // arrow, and not declare the arguments as locals for the arrow
+ // body.
+ function findFatArrow(stream, state) {
+ if (state.fatArrowAt) state.fatArrowAt = null;
+ var arrow = stream.string.indexOf("=>", stream.start);
+ if (arrow < 0) return;
+
+ var depth = 0, sawSomething = false;
+ for (var pos = arrow - 1; pos >= 0; --pos) {
+ var ch = stream.string.charAt(pos);
+ var bracket = brackets.indexOf(ch);
+ if (bracket >= 0 && bracket < 3) {
+ if (!depth) { ++pos; break; }
+ if (--depth == 0) break;
+ } else if (bracket >= 3 && bracket < 6) {
+ ++depth;
+ } else if (/[$\w]/.test(ch)) {
+ sawSomething = true;
+ } else if (sawSomething && !depth) {
+ ++pos;
+ break;
+ }
+ }
+ if (sawSomething && !depth) state.fatArrowAt = pos;
+ }
+
+ // Parser
+
+ var atomicTypes = {"atom": true, "number": true, "variable": true, "string": true, "regexp": true, "this": true, "jsonld-keyword": true};
+
+ function JSLexical(indented, column, type, align, prev, info) {
+ this.indented = indented;
+ this.column = column;
+ this.type = type;
+ this.prev = prev;
+ this.info = info;
+ if (align != null) this.align = align;
+ }
+
+ function inScope(state, varname) {
+ for (var v = state.localVars; v; v = v.next)
+ if (v.name == varname) return true;
+ for (var cx = state.context; cx; cx = cx.prev) {
+ for (var v = cx.vars; v; v = v.next)
+ if (v.name == varname) return true;
+ }
+ }
+
+ function parseJS(state, style, type, content, stream) {
+ var cc = state.cc;
+ // Communicate our context to the combinators.
+ // (Less wasteful than consing up a hundred closures on every call.)
+ cx.state = state; cx.stream = stream; cx.marked = null, cx.cc = cc;
+
+ if (!state.lexical.hasOwnProperty("align"))
+ state.lexical.align = true;
+
+ while(true) {
+ var combinator = cc.length ? cc.pop() : jsonMode ? expression : statement;
+ if (combinator(type, content)) {
+ while(cc.length && cc[cc.length - 1].lex)
+ cc.pop()();
+ if (cx.marked) return cx.marked;
+ if (type == "variable" && inScope(state, content)) return "variable-2";
+ return style;
+ }
+ }
+ }
+
+ // Combinator utils
+
+ var cx = {state: null, column: null, marked: null, cc: null};
+ function pass() {
+ for (var i = arguments.length - 1; i >= 0; i--) cx.cc.push(arguments[i]);
+ }
+ function cont() {
+ pass.apply(null, arguments);
+ return true;
+ }
+ function register(varname) {
+ function inList(list) {
+ for (var v = list; v; v = v.next)
+ if (v.name == varname) return true;
+ return false;
+ }
+ var state = cx.state;
+ if (state.context) {
+ cx.marked = "def";
+ if (inList(state.localVars)) return;
+ state.localVars = {name: varname, next: state.localVars};
+ } else {
+ if (inList(state.globalVars)) return;
+ if (parserConfig.globalVars)
+ state.globalVars = {name: varname, next: state.globalVars};
+ }
+ }
+
+ // Combinators
+
+ var defaultVars = {name: "this", next: {name: "arguments"}};
+ function pushcontext() {
+ cx.state.context = {prev: cx.state.context, vars: cx.state.localVars};
+ cx.state.localVars = defaultVars;
+ }
+ function popcontext() {
+ cx.state.localVars = cx.state.context.vars;
+ cx.state.context = cx.state.context.prev;
+ }
+ function pushlex(type, info) {
+ var result = function() {
+ var state = cx.state, indent = state.indented;
+ if (state.lexical.type == "stat") indent = state.lexical.indented;
+ state.lexical = new JSLexical(indent, cx.stream.column(), type, null, state.lexical, info);
+ };
+ result.lex = true;
+ return result;
+ }
+ function poplex() {
+ var state = cx.state;
+ if (state.lexical.prev) {
+ if (state.lexical.type == ")")
+ state.indented = state.lexical.indented;
+ state.lexical = state.lexical.prev;
+ }
+ }
+ poplex.lex = true;
+
+ function expect(wanted) {
+ function exp(type) {
+ if (type == wanted) return cont();
+ else if (wanted == ";") return pass();
+ else return cont(exp);
+ };
+ return exp;
+ }
+
+ function statement(type, value) {
+ if (type == "var") return cont(pushlex("vardef", value.length), vardef, expect(";"), poplex);
+ if (type == "keyword a") return cont(pushlex("form"), expression, statement, poplex);
+ if (type == "keyword b") return cont(pushlex("form"), statement, poplex);
+ if (type == "{") return cont(pushlex("}"), block, poplex);
+ if (type == ";") return cont();
+ if (type == "if") {
+ if (cx.state.lexical.info == "else" && cx.state.cc[cx.state.cc.length - 1] == poplex)
+ cx.state.cc.pop()();
+ return cont(pushlex("form"), expression, statement, poplex, maybeelse);
+ }
+ if (type == "function") return cont(functiondef);
+ if (type == "for") return cont(pushlex("form"), forspec, statement, poplex);
+ if (type == "variable") return cont(pushlex("stat"), maybelabel);
+ if (type == "switch") return cont(pushlex("form"), expression, pushlex("}", "switch"), expect("{"),
+ block, poplex, poplex);
+ if (type == "case") return cont(expression, expect(":"));
+ if (type == "default") return cont(expect(":"));
+ if (type == "catch") return cont(pushlex("form"), pushcontext, expect("("), funarg, expect(")"),
+ statement, poplex, popcontext);
+ if (type == "module") return cont(pushlex("form"), pushcontext, afterModule, popcontext, poplex);
+ if (type == "class") return cont(pushlex("form"), className, objlit, poplex);
+ if (type == "export") return cont(pushlex("form"), afterExport, poplex);
+ if (type == "import") return cont(pushlex("form"), afterImport, poplex);
+ return pass(pushlex("stat"), expression, expect(";"), poplex);
+ }
+ function expression(type) {
+ return expressionInner(type, false);
+ }
+ function expressionNoComma(type) {
+ return expressionInner(type, true);
+ }
+ function expressionInner(type, noComma) {
+ if (cx.state.fatArrowAt == cx.stream.start) {
+ var body = noComma ? arrowBodyNoComma : arrowBody;
+ if (type == "(") return cont(pushcontext, pushlex(")"), commasep(pattern, ")"), poplex, expect("=>"), body, popcontext);
+ else if (type == "variable") return pass(pushcontext, pattern, expect("=>"), body, popcontext);
+ }
+
+ var maybeop = noComma ? maybeoperatorNoComma : maybeoperatorComma;
+ if (atomicTypes.hasOwnProperty(type)) return cont(maybeop);
+ if (type == "function") return cont(functiondef, maybeop);
+ if (type == "keyword c") return cont(noComma ? maybeexpressionNoComma : maybeexpression);
+ if (type == "(") return cont(pushlex(")"), maybeexpression, comprehension, expect(")"), poplex, maybeop);
+ if (type == "operator" || type == "spread") return cont(noComma ? expressionNoComma : expression);
+ if (type == "[") return cont(pushlex("]"), arrayLiteral, poplex, maybeop);
+ if (type == "{") return contCommasep(objprop, "}", null, maybeop);
+ if (type == "quasi") { return pass(quasi, maybeop); }
+ return cont();
+ }
+ function maybeexpression(type) {
+ if (type.match(/[;\}\)\],]/)) return pass();
+ return pass(expression);
+ }
+ function maybeexpressionNoComma(type) {
+ if (type.match(/[;\}\)\],]/)) return pass();
+ return pass(expressionNoComma);
+ }
+
+ function maybeoperatorComma(type, value) {
+ if (type == ",") return cont(expression);
+ return maybeoperatorNoComma(type, value, false);
+ }
+ function maybeoperatorNoComma(type, value, noComma) {
+ var me = noComma == false ? maybeoperatorComma : maybeoperatorNoComma;
+ var expr = noComma == false ? expression : expressionNoComma;
+ if (value == "=>") return cont(pushcontext, noComma ? arrowBodyNoComma : arrowBody, popcontext);
+ if (type == "operator") {
+ if (/\+\+|--/.test(value)) return cont(me);
+ if (value == "?") return cont(expression, expect(":"), expr);
+ return cont(expr);
+ }
+ if (type == "quasi") { return pass(quasi, me); }
+ if (type == ";") return;
+ if (type == "(") return contCommasep(expressionNoComma, ")", "call", me);
+ if (type == ".") return cont(property, me);
+ if (type == "[") return cont(pushlex("]"), maybeexpression, expect("]"), poplex, me);
+ }
+ function quasi(type, value) {
+ if (type != "quasi") return pass();
+ if (value.slice(value.length - 2) != "${") return cont(quasi);
+ return cont(expression, continueQuasi);
+ }
+ function continueQuasi(type) {
+ if (type == "}") {
+ cx.marked = "string-2";
+ cx.state.tokenize = tokenQuasi;
+ return cont(quasi);
+ }
+ }
+ function arrowBody(type) {
+ findFatArrow(cx.stream, cx.state);
+ if (type == "{") return pass(statement);
+ return pass(expression);
+ }
+ function arrowBodyNoComma(type) {
+ findFatArrow(cx.stream, cx.state);
+ if (type == "{") return pass(statement);
+ return pass(expressionNoComma);
+ }
+ function maybelabel(type) {
+ if (type == ":") return cont(poplex, statement);
+ return pass(maybeoperatorComma, expect(";"), poplex);
+ }
+ function property(type) {
+ if (type == "variable") {cx.marked = "property"; return cont();}
+ }
+ function objprop(type, value) {
+ if (type == "variable") {
+ cx.marked = "property";
+ if (value == "get" || value == "set") return cont(getterSetter);
+ } else if (type == "number" || type == "string") {
+ cx.marked = jsonldMode ? "property" : (type + " property");
+ } else if (type == "[") {
+ return cont(expression, expect("]"), afterprop);
+ }
+ if (atomicTypes.hasOwnProperty(type)) return cont(afterprop);
+ }
+ function getterSetter(type) {
+ if (type != "variable") return pass(afterprop);
+ cx.marked = "property";
+ return cont(functiondef);
+ }
+ function afterprop(type) {
+ if (type == ":") return cont(expressionNoComma);
+ if (type == "(") return pass(functiondef);
+ }
+ function commasep(what, end) {
+ function proceed(type) {
+ if (type == ",") {
+ var lex = cx.state.lexical;
+ if (lex.info == "call") lex.pos = (lex.pos || 0) + 1;
+ return cont(what, proceed);
+ }
+ if (type == end) return cont();
+ return cont(expect(end));
+ }
+ return function(type) {
+ if (type == end) return cont();
+ return pass(what, proceed);
+ };
+ }
+ function contCommasep(what, end, info) {
+ for (var i = 3; i < arguments.length; i++)
+ cx.cc.push(arguments[i]);
+ return cont(pushlex(end, info), commasep(what, end), poplex);
+ }
+ function block(type) {
+ if (type == "}") return cont();
+ return pass(statement, block);
+ }
+ function maybetype(type) {
+ if (isTS && type == ":") return cont(typedef);
+ }
+ function typedef(type) {
+ if (type == "variable"){cx.marked = "variable-3"; return cont();}
+ }
+ function vardef() {
+ return pass(pattern, maybetype, maybeAssign, vardefCont);
+ }
+ function pattern(type, value) {
+ if (type == "variable") { register(value); return cont(); }
+ if (type == "[") return contCommasep(pattern, "]");
+ if (type == "{") return contCommasep(proppattern, "}");
+ }
+ function proppattern(type, value) {
+ if (type == "variable" && !cx.stream.match(/^\s*:/, false)) {
+ register(value);
+ return cont(maybeAssign);
+ }
+ if (type == "variable") cx.marked = "property";
+ return cont(expect(":"), pattern, maybeAssign);
+ }
+ function maybeAssign(_type, value) {
+ if (value == "=") return cont(expressionNoComma);
+ }
+ function vardefCont(type) {
+ if (type == ",") return cont(vardef);
+ }
+ function maybeelse(type, value) {
+ if (type == "keyword b" && value == "else") return cont(pushlex("form", "else"), statement, poplex);
+ }
+ function forspec(type) {
+ if (type == "(") return cont(pushlex(")"), forspec1, expect(")"), poplex);
+ }
+ function forspec1(type) {
+ if (type == "var") return cont(vardef, expect(";"), forspec2);
+ if (type == ";") return cont(forspec2);
+ if (type == "variable") return cont(formaybeinof);
+ return pass(expression, expect(";"), forspec2);
+ }
+ function formaybeinof(_type, value) {
+ if (value == "in" || value == "of") { cx.marked = "keyword"; return cont(expression); }
+ return cont(maybeoperatorComma, forspec2);
+ }
+ function forspec2(type, value) {
+ if (type == ";") return cont(forspec3);
+ if (value == "in" || value == "of") { cx.marked = "keyword"; return cont(expression); }
+ return pass(expression, expect(";"), forspec3);
+ }
+ function forspec3(type) {
+ if (type != ")") cont(expression);
+ }
+ function functiondef(type, value) {
+ if (value == "*") {cx.marked = "keyword"; return cont(functiondef);}
+ if (type == "variable") {register(value); return cont(functiondef);}
+ if (type == "(") return cont(pushcontext, pushlex(")"), commasep(funarg, ")"), poplex, statement, popcontext);
+ }
+ function funarg(type) {
+ if (type == "spread") return cont(funarg);
+ return pass(pattern, maybetype);
+ }
+ function className(type, value) {
+ if (type == "variable") {register(value); return cont(classNameAfter);}
+ }
+ function classNameAfter(_type, value) {
+ if (value == "extends") return cont(expression);
+ }
+ function objlit(type) {
+ if (type == "{") return contCommasep(objprop, "}");
+ }
+ function afterModule(type, value) {
+ if (type == "string") return cont(statement);
+ if (type == "variable") { register(value); return cont(maybeFrom); }
+ }
+ function afterExport(_type, value) {
+ if (value == "*") { cx.marked = "keyword"; return cont(maybeFrom, expect(";")); }
+ if (value == "default") { cx.marked = "keyword"; return cont(expression, expect(";")); }
+ return pass(statement);
+ }
+ function afterImport(type) {
+ if (type == "string") return cont();
+ return pass(importSpec, maybeFrom);
+ }
+ function importSpec(type, value) {
+ if (type == "{") return contCommasep(importSpec, "}");
+ if (type == "variable") register(value);
+ return cont();
+ }
+ function maybeFrom(_type, value) {
+ if (value == "from") { cx.marked = "keyword"; return cont(expression); }
+ }
+ function arrayLiteral(type) {
+ if (type == "]") return cont();
+ return pass(expressionNoComma, maybeArrayComprehension);
+ }
+ function maybeArrayComprehension(type) {
+ if (type == "for") return pass(comprehension, expect("]"));
+ if (type == ",") return cont(commasep(expressionNoComma, "]"));
+ return pass(commasep(expressionNoComma, "]"));
+ }
+ function comprehension(type) {
+ if (type == "for") return cont(forspec, comprehension);
+ if (type == "if") return cont(expression, comprehension);
+ }
+
+ // Interface
+
+ return {
+ startState: function(basecolumn) {
+ var state = {
+ tokenize: tokenBase,
+ lastType: "sof",
+ cc: [],
+ lexical: new JSLexical((basecolumn || 0) - indentUnit, 0, "block", false),
+ localVars: parserConfig.localVars,
+ context: parserConfig.localVars && {vars: parserConfig.localVars},
+ indented: 0
+ };
+ if (parserConfig.globalVars && typeof parserConfig.globalVars == "object")
+ state.globalVars = parserConfig.globalVars;
+ return state;
+ },
+
+ token: function(stream, state) {
+ if (stream.sol()) {
+ if (!state.lexical.hasOwnProperty("align"))
+ state.lexical.align = false;
+ state.indented = stream.indentation();
+ findFatArrow(stream, state);
+ }
+ if (state.tokenize != tokenComment && stream.eatSpace()) return null;
+ var style = state.tokenize(stream, state);
+ if (type == "comment") return style;
+ state.lastType = type == "operator" && (content == "++" || content == "--") ? "incdec" : type;
+ return parseJS(state, style, type, content, stream);
+ },
+
+ indent: function(state, textAfter) {
+ if (state.tokenize == tokenComment) return CodeMirror.Pass;
+ if (state.tokenize != tokenBase) return 0;
+ var firstChar = textAfter && textAfter.charAt(0), lexical = state.lexical;
+ // Kludge to prevent 'maybelse' from blocking lexical scope pops
+ if (!/^\s*else\b/.test(textAfter)) for (var i = state.cc.length - 1; i >= 0; --i) {
+ var c = state.cc[i];
+ if (c == poplex) lexical = lexical.prev;
+ else if (c != maybeelse) break;
+ }
+ if (lexical.type == "stat" && firstChar == "}") lexical = lexical.prev;
+ if (statementIndent && lexical.type == ")" && lexical.prev.type == "stat")
+ lexical = lexical.prev;
+ var type = lexical.type, closing = firstChar == type;
+
+ if (type == "vardef") return lexical.indented + (state.lastType == "operator" || state.lastType == "," ? lexical.info + 1 : 0);
+ else if (type == "form" && firstChar == "{") return lexical.indented;
+ else if (type == "form") return lexical.indented + indentUnit;
+ else if (type == "stat")
+ return lexical.indented + (state.lastType == "operator" || state.lastType == "," ? statementIndent || indentUnit : 0);
+ else if (lexical.info == "switch" && !closing && parserConfig.doubleIndentSwitch != false)
+ return lexical.indented + (/^(?:case|default)\b/.test(textAfter) ? indentUnit : 2 * indentUnit);
+ else if (lexical.align) return lexical.column + (closing ? 0 : 1);
+ else return lexical.indented + (closing ? 0 : indentUnit);
+ },
+
+ electricChars: ":{}",
+ blockCommentStart: jsonMode ? null : "/*",
+ blockCommentEnd: jsonMode ? null : "*/",
+ lineComment: jsonMode ? null : "//",
+ fold: "brace",
+
+ helperType: jsonMode ? "json" : "javascript",
+ jsonldMode: jsonldMode,
+ jsonMode: jsonMode
+ };
+});
+
+CodeMirror.registerHelper("wordChars", "javascript", /[\w$]/);
+
+CodeMirror.defineMIME("text/javascript", "javascript");
+CodeMirror.defineMIME("text/ecmascript", "javascript");
+CodeMirror.defineMIME("application/javascript", "javascript");
+CodeMirror.defineMIME("application/ecmascript", "javascript");
+CodeMirror.defineMIME("application/json", {name: "javascript", json: true});
+CodeMirror.defineMIME("application/x-json", {name: "javascript", json: true});
+CodeMirror.defineMIME("application/ld+json", {name: "javascript", jsonld: true});
+CodeMirror.defineMIME("text/typescript", { name: "javascript", typescript: true });
+CodeMirror.defineMIME("application/typescript", { name: "javascript", typescript: true });
+
+});
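The `findFatArrow` helper at the top of this hunk scans backwards from a `=>` token to locate the start of the arrow function's argument list, so those names can be declared as locals before the arrow is reached. A condensed standalone sketch of that backward scan (the `argStart` name is hypothetical; the bracket bookkeeping mirrors the committed code):

```javascript
// Walk left from "=>" to find where the argument list begins.
// Returns the index of "(" for parenthesized arguments, the index of a
// single bare argument, or -1 when no plausible argument list is found.
function argStart(line) {
  var brackets = "([{}])";               // indices 0-2 open, 3-5 close
  var arrow = line.indexOf("=>");
  if (arrow < 0) return -1;
  var depth = 0, sawSomething = false, pos;
  for (pos = arrow - 1; pos >= 0; --pos) {
    var ch = line.charAt(pos);
    var bracket = brackets.indexOf(ch);
    if (bracket >= 0 && bracket < 3) {   // opening ( [ {
      if (!depth) { ++pos; break; }
      if (--depth == 0) break;
    } else if (bracket >= 3 && bracket < 6) { // closing } ] )
      ++depth;
    } else if (/[$\w]/.test(ch)) {       // identifier character
      sawSomething = true;
    } else if (sawSomething && !depth) { // non-word boundary before the args
      ++pos;
      break;
    }
  }
  return (sawSomething && !depth) ? pos : -1;
}
```

This is why the mode can only spot arrow arguments when the `=>` sits on the same line: the scan never leaves `stream.string`.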
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/html/js/vendor/cm-modes/ntriples.js
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/html/js/vendor/cm-modes/ntriples.js (rev 0)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/html/js/vendor/cm-modes/ntriples.js 2014-05-21 09:59:04 UTC (rev 8392)
@@ -0,0 +1,183 @@
+/**********************************************************
+* This script provides syntax highlighting support for
+* the Ntriples format.
+* Ntriples format specification:
+* http://www.w3.org/TR/rdf-testcases/#ntriples
+***********************************************************/
+
+/*
+ The following expression defines the ASF grammar transitions.
+
+ pre_subject ->
+ {
+ ( writing_subject_uri | writing_bnode_uri )
+ -> pre_predicate
+ -> writing_predicate_uri
+ -> pre_object
+ -> writing_object_uri | writing_object_bnode |
+ (
+ writing_object_literal
+ -> writing_literal_lang | writing_literal_type
+ )
+ -> post_object
+ -> BEGIN
+ } otherwise {
+ -> ERROR
+ }
+*/
+
+(function(mod) {
+ if (typeof exports == "object" && typeof module == "object") // CommonJS
+ mod(require("../../lib/codemirror"));
+ else if (typeof define == "function" && define.amd) // AMD
+ define(["../../lib/codemirror"], mod);
+ else // Plain browser env
+ mod(CodeMirror);
+})(function(CodeMirror) {
+"use strict";
+
+CodeMirror.defineMode("ntriples", function() {
+
+ var Location = {
+ PRE_SUBJECT : 0,
+ WRITING_SUB_URI : 1,
+ WRITING_BNODE_URI : 2,
+ PRE_PRED : 3,
+ WRITING_PRED_URI : 4,
+ PRE_OBJ : 5,
+ WRITING_OBJ_URI : 6,
+ WRITING_OBJ_BNODE : 7,
+ WRITING_OBJ_LITERAL : 8,
+ WRITING_LIT_LANG : 9,
+ WRITING_LIT_TYPE : 10,
+ POST_OBJ : 11,
+ ERROR : 12
+ };
+ function transitState(currState, c) {
+ var currLocation = currState.location;
+ var ret;
+
+ // Opening.
+ if (currLocation == Location.PRE_SUBJECT && c == '<') ret = Location.WRITING_SUB_URI;
+ else if(currLocation == Location.PRE_SUBJECT && c == '_') ret = Location.WRITING_BNODE_URI;
+ else if(currLocation == Location.PRE_PRED && c == '<') ret = Location.WRITING_PRED_URI;
+ else if(currLocation == Location.PRE_OBJ && c == '<') ret = Location.WRITING_OBJ_URI;
+ else if(currLocation == Location.PRE_OBJ && c == '_') ret = Location.WRITING_OBJ_BNODE;
+ else if(currLocation == Location.PRE_OBJ && c == '"') ret = Location.WRITING_OBJ_LITERAL;
+
+ // Closing.
+ else if(currLocation == Location.WRITING_SUB_URI && c == '>') ret = Location.PRE_PRED;
+ else if(currLocation == Location.WRITING_BNODE_URI && c == ' ') ret = Location.PRE_PRED;
+ else if(currLocation == Location.WRITING_PRED_URI && c == '>') ret = Location.PRE_OBJ;
+ else if(currLocation == Location.WRITING_OBJ_URI && c == '>') ret = Location.POST_OBJ;
+ else if(currLocation == Location.WRITING_OBJ_BNODE && c == ' ') ret = Location.POST_OBJ;
+ else if(currLocation == Location.WRITING_OBJ_LITERAL && c == '"') ret = Location.POST_OBJ;
+ else if(currLocation == Location.WRITING_LIT_LANG && c == ' ') ret = Location.POST_OBJ;
+ else if(currLocation == Location.WRITING_LIT_TYPE && c == '>') ret = Location.POST_OBJ;
+
+ // Closing typed and language literal.
+ else if(currLocation == Location.WRITING_OBJ_LITERAL && c == '@') ret = Location.WRITING_LIT_LANG;
+ else if(currLocation == Location.WRITING_OBJ_LITERAL && c == '^') ret = Location.WRITING_LIT_TYPE;
+
+ // Spaces.
+ else if( c == ' ' &&
+ (
+ currLocation == Location.PRE_SUBJECT ||
+ currLocation == Location.PRE_PRED ||
+ currLocation == Location.PRE_OBJ ||
+ currLocation == Location.POST_OBJ
+ )
+ ) ret = currLocation;
+
+ // Reset.
+ else if(currLocation == Location.POST_OBJ && c == '.') ret = Location.PRE_SUBJECT;
+
+ // Error
+ else ret = Location.ERROR;
+
+ currState.location=ret;
+ }
+
+ return {
+ startState: function() {
+ return {
+ location : Location.PRE_SUBJECT,
+ uris : [],
+ anchors : [],
+ bnodes : [],
+ langs : [],
+ types : []
+ };
+ },
+ token: function(stream, state) {
+ var ch = stream.next();
+ if(ch == '<') {
+ transitState(state, ch);
+ var parsedURI = '';
+ stream.eatWhile( function(c) { if( c != '#' && c != '>' ) { parsedURI += c; return true; } return false;} );
+ state.uris.push(parsedURI);
+ if( stream.match('#', false) ) return 'variable';
+ stream.next();
+ transitState(state, '>');
+ return 'variable';
+ }
+ if(ch == '#') {
+ var parsedAnchor = '';
+ stream.eatWhile(function(c) { if(c != '>' && c != ' ') { parsedAnchor+= c; return true; } return false;});
+ state.anchors.push(parsedAnchor);
+ return 'variable-2';
+ }
+ if(ch == '>') {
+ transitState(state, '>');
+ return 'variable';
+ }
+ if(ch == '_') {
+ transitState(state, ch);
+ var parsedBNode = '';
+ stream.eatWhile(function(c) { if( c != ' ' ) { parsedBNode += c; return true; } return false;});
+ state.bnodes.push(parsedBNode);
+ stream.next();
+ transitState(state, ' ');
+ return 'builtin';
+ }
+ if(ch == '"') {
+ transitState(state, ch);
+ stream.eatWhile( function(c) { return c != '"'; } );
+ stream.next();
+ if( stream.peek() != '@' && stream.peek() != '^' ) {
+ transitState(state, '"');
+ }
+ return 'string';
+ }
+ if( ch == '@' ) {
+ transitState(state, '@');
+ var parsedLang = '';
+ stream.eatWhile(function(c) { if( c != ' ' ) { parsedLang += c; return true; } return false;});
+ state.langs.push(parsedLang);
+ stream.next();
+ transitState(state, ' ');
+ return 'string-2';
+ }
+ if( ch == '^' ) {
+ stream.next();
+ transitState(state, '^');
+ var parsedType = '';
+ stream.eatWhile(function(c) { if( c != '>' ) { parsedType += c; return true; } return false;} );
+ state.types.push(parsedType);
+ stream.next();
+ transitState(state, '>');
+ return 'variable';
+ }
+ if( ch == ' ' ) {
+ transitState(state, ch);
+ }
+ if( ch == '.' ) {
+ transitState(state, ch);
+ }
+ }
+ };
+});
+
+CodeMirror.defineMIME("text/n-triples", "ntriples");
+
+});
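The `transitState` automaton above can be exercised on its own. Below is a condensed sketch covering only the URI states (the bnode and literal states of the committed code are omitted, and the enum values are renumbered accordingly), showing one well-formed statement driving the machine back to its start state:

```javascript
// Condensed N-Triples location automaton: URI terms only.
var Location = { PRE_SUBJECT: 0, WRITING_SUB_URI: 1, PRE_PRED: 2,
                 WRITING_PRED_URI: 3, PRE_OBJ: 4, WRITING_OBJ_URI: 5,
                 POST_OBJ: 6, ERROR: 7 };

function step(loc, c) {
  var L = Location;
  // inside a URI: ">" closes the term, anything else is part of it
  if (loc === L.WRITING_SUB_URI)  return c === ">" ? L.PRE_PRED : loc;
  if (loc === L.WRITING_PRED_URI) return c === ">" ? L.PRE_OBJ  : loc;
  if (loc === L.WRITING_OBJ_URI)  return c === ">" ? L.POST_OBJ : loc;
  // "<" opens the next URI term
  if (loc === L.PRE_SUBJECT && c === "<") return L.WRITING_SUB_URI;
  if (loc === L.PRE_PRED    && c === "<") return L.WRITING_PRED_URI;
  if (loc === L.PRE_OBJ     && c === "<") return L.WRITING_OBJ_URI;
  // spaces between terms keep the current state
  if (c === " ") return loc;
  // "." ends the statement and resets the machine
  if (loc === L.POST_OBJ && c === ".") return L.PRE_SUBJECT;
  return L.ERROR;
}

// Feed a whole line through the machine, character by character.
function run(line) {
  var loc = Location.PRE_SUBJECT;
  for (var i = 0; i < line.length; i++) loc = step(loc, line.charAt(i));
  return loc;
}
```

Any character the grammar does not expect drops the machine into `ERROR`, which is exactly how the mode flags malformed triples.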
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/html/js/vendor/cm-modes/sparql.js
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/html/js/vendor/cm-modes/sparql.js (rev 0)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/html/js/vendor/cm-modes/sparql.js 2014-05-21 09:59:04 UTC (rev 8392)
@@ -0,0 +1,157 @@
+(function(mod) {
+ if (typeof exports == "object" && typeof module == "object") // CommonJS
+ mod(require("../../lib/codemirror"));
+ else if (typeof define == "function" && define.amd) // AMD
+ define(["../../lib/codemirror"], mod);
+ else // Plain browser env
+ mod(CodeMirror);
+})(function(CodeMirror) {
+"use strict";
+
+CodeMirror.defineMode("sparql", function(config) {
+ var indentUnit = config.indentUnit;
+ var curPunc;
+
+ function wordRegexp(words) {
+ return new RegExp("^(?:" + words.join("|") + ")$", "i");
+ }
+ var ops = wordRegexp(["str", "lang", "langmatches", "datatype", "bound", "sameterm", "isiri", "isuri",
+ "isblank", "isliteral", "a"]);
+ var keywords = wordRegexp(["base", "prefix", "select", "distinct", "reduced", "construct", "describe",
+ "ask", "from", "named", "where", "order", "limit", "offset", "filter", "optional",
+ "graph", "by", "asc", "desc", "as", "having", "undef", "values", "group",
+ "minus", "in", "not", "service", "silent", "using", "insert", "delete", "union",
+ "data", "copy", "to", "move", "add", "create", "drop", "clear", "load"]);
+ var operatorChars = /[*+\-<>=&|]/;
+
+ function tokenBase(stream, state) {
+ var ch = stream.next();
+ curPunc = null;
+ if (ch == "$" || ch == "?") {
+ stream.match(/^[\w\d]*/);
+ return "variable-2";
+ }
+ else if (ch == "<" && !stream.match(/^[\s\u00a0=]/, false)) {
+ stream.match(/^[^\s\u00a0>]*>?/);
+ return "atom";
+ }
+ else if (ch == "\"" || ch == "'") {
+ state.tokenize = tokenLiteral(ch);
+ return state.tokenize(stream, state);
+ }
+ else if (/[{}\(\),\.;\[\]]/.test(ch)) {
+ curPunc = ch;
+ return null;
+ }
+ else if (ch == "#") {
+ stream.skipToEnd();
+ return "comment";
+ }
+ else if (operatorChars.test(ch)) {
+ stream.eatWhile(operatorChars);
+ return null;
+ }
+ else if (ch == ":") {
+ stream.eatWhile(/[\w\d\._\-]/);
+ return "atom";
+ }
+ else {
+ stream.eatWhile(/[_\w\d]/);
+ if (stream.eat(":")) {
+ stream.eatWhile(/[\w\d_\-]/);
+ return "atom";
+ }
+ var word = stream.current();
+ if (ops.test(word))
+ return null;
+ else if (keywords.test(word))
+ return "keyword";
+ else
+ return "variable";
+ }
+ }
+
+ function tokenLiteral(quote) {
+ return function(stream, state) {
+ var escaped = false, ch;
+ while ((ch = stream.next()) != null) {
+ if (ch == quote && !escaped) {
+ state.tokenize = tokenBase;
+ break;
+ }
+ escaped = !escaped && ch == "\\";
+ }
+ return "string";
+ };
+ }
+
+ function pushContext(state, type, col) {
+ state.context = {prev: state.context, indent: state.indent, col: col, type: type};
+ }
+ function popContext(state) {
+ state.indent = state.context.indent;
+ state.context = state.context.prev;
+ }
+
+ return {
+ startState: function() {
+ return {tokenize: tokenBase,
+ context: null,
+ indent: 0,
+ col: 0};
+ },
+
+ token: function(stream, state) {
+ if (stream.sol()) {
+ if (state.context && state.context.align == null) state.context.align = false;
+ state.indent = stream.indentation();
+ }
+ if (stream.eatSpace()) return null;
+ var style = state.tokenize(stream, state);
+
+ if (style != "comment" && state.context && state.context.align == null && state.context.type != "pattern") {
+ state.context.align = true;
+ }
+
+ if (curPunc == "(") pushContext(state, ")", stream.column());
+ else if (curPunc == "[") pushContext(state, "]", stream.column());
+ else if (curPunc == "{") pushContext(state, "}", stream.column());
+ else if (/[\]\}\)]/.test(curPunc)) {
+ while (state.context && state.context.type == "pattern") popContext(state);
+ if (state.context && curPunc == state.context.type) popContext(state);
+ }
+ else if (curPunc == "." && state.context && state.context.type == "pattern") popContext(state);
+ else if (/atom|string|variable/.test(style) && state.context) {
+ if (/[\}\]]/.test(state.context.type))
+ pushContext(state, "pattern", stream.column());
+ else if (state.context.type == "pattern" && !state.context.align) {
+ state.context.align = true;
+ state.context.col = stream.column();
+ }
+ }
+
+ return style;
+ },
+
+ indent: function(state, textAfter) {
+ var firstChar = textAfter && textAfter.charAt(0);
+ var context = state.context;
+ if (/[\]\}]/.test(firstChar))
+ while (context && context.type == "pattern") context = context.prev;
+
+ var closing = context && firstChar == context.type;
+ if (!context)
+ return 0;
+ else if (context.type == "pattern")
+ return context.col;
+ else if (context.align)
+ return context.col + (closing ? 0 : 1);
+ else
+ return context.indent + (closing ? 0 : indentUnit);
+ }
+ };
+});
+
+CodeMirror.defineMIME("application/x-sparql-query", "sparql");
+
+});
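The SPARQL mode builds its keyword table with `wordRegexp`, which wraps the word list in an anchored, case-insensitive alternation so only whole words match. A quick standalone check of that construction (keyword subset chosen for illustration):

```javascript
// Same construction as the mode's wordRegexp: "^(?:a|b|...)$" with the "i"
// flag, so "SELECT" matches but a partial word like "selection" does not.
function wordRegexp(words) {
  return new RegExp("^(?:" + words.join("|") + ")$", "i");
}

var keywords = wordRegexp(["select", "where", "filter", "optional"]);
```

The anchors are the important part: without `^...$`, `keywords.test("selection")` would succeed and mis-highlight identifiers that merely contain a keyword.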
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/html/js/vendor/cm-modes/turtle.js
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/html/js/vendor/cm-modes/turtle.js (rev 0)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/html/js/vendor/cm-modes/turtle.js 2014-05-21 09:59:04 UTC (rev 8392)
@@ -0,0 +1,157 @@
+(function(mod) {
+ if (typeof exports == "object" && typeof module == "object") // CommonJS
+ mod(require("../../lib/codemirror"));
+ else if (typeof define == "function" && define.amd) // AMD
+ define(["../../lib/codemirror"], mod);
+ else // Plain browser env
+ mod(CodeMirror);
+})(function(CodeMirror) {
+"use strict";
+
+CodeMirror.defineMode("turtle", function(config) {
+ var indentUnit = config.indentUnit;
+ var curPunc;
+
+ function wordRegexp(words) {
+ return new RegExp("^(?:" + words.join("|") + ")$", "i");
+ }
+ var ops = wordRegexp([]);
+ var keywords = wordRegexp(["@prefix", "@base", "a"]);
+ var operatorChars = /[*+\-<>=&|]/;
+
+ function tokenBase(stream, state) {
+ var ch = stream.next();
+ curPunc = null;
+ if (ch == "<" && !stream.match(/^[\s\u00a0=]/, false)) {
+ stream.match(/^[^\s\u00a0>]*>?/);
+ return "atom";
+ }
+ else if (ch == "\"" || ch == "'") {
+ state.tokenize = tokenLiteral(ch);
+ return state.tokenize(stream, state);
+ }
+ else if (/[{}\(\),\.;\[\]]/.test(ch)) {
+ curPunc = ch;
+ return null;
+ }
+ else if (ch == "#") {
+ stream.skipToEnd();
+ return "comment";
+ }
+ else if (operatorChars.test(ch)) {
+ stream.eatWhile(operatorChars);
+ return null;
+ }
+ else if (ch == ":") {
+ return "operator";
+ } else {
+ stream.eatWhile(/[_\w\d]/);
+ if(stream.peek() == ":") {
+ return "variable-3";
+ } else {
+ var word = stream.current();
+
+ if(keywords.test(word)) {
+ return "meta";
+ }
+
+ if(ch >= "A" && ch <= "Z") {
+ return "comment";
+ } else {
+ return "keyword";
+ }
+ }
+ var word = stream.current();
+ if (ops.test(word))
+ return null;
+ else if (keywords.test(word))
+ return "meta";
+ else
+ return "variable";
+ }
+ }
+
+ function tokenLiteral(quote) {
+ return function(stream, state) {
+ var escaped = false, ch;
+ while ((ch = stream.next()) != null) {
+ if (ch == quote && !escaped) {
+ state.tokenize = tokenBase;
+ break;
+ }
+ escaped = !escaped && ch == "\\";
+ }
+ return "string";
+ };
+ }
+
+ function pushContext(state, type, col) {
+ state.context = {prev: state.context, indent: state.indent, col: col, type: type};
+ }
+ function popContext(state) {
+ state.indent = state.context.indent;
+ state.context = state.context.prev;
+ }
+
+ return {
+ startState: function() {
+ return {tokenize: tokenBase,
+ context: null,
+ indent: 0,
+ col: 0};
+ },
+
+ token: function(stream, state) {
+ if (stream.sol()) {
+ if (state.context && state.context.align == null) state.context.align = false;
+ state.indent = stream.indentation();
+ }
+ if (stream.eatSpace()) return null;
+ var style = state.tokenize(stream, state);
+
+ if (style != "comment" && state.context && state.context.align == null && state...
[truncated message content] |
|
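The turtle `tokenBase` in the (truncated) hunk above styles bare words with a small heuristic: a word followed by `:` is a prefix name, an `@`-keyword is meta, and otherwise the case of the first character decides. A condensed standalone sketch of that decision (the `classify` helper is hypothetical, extracted from the committed token logic):

```javascript
// Reproduce the turtle mode's bare-word styling decision.
function classify(word, nextChar) {
  var keywords = /^(?:@prefix|@base|a)$/i;   // same keyword set as the mode
  if (nextChar === ":") return "variable-3"; // prefixed-name prefix, e.g. foaf:
  if (keywords.test(word)) return "meta";    // @prefix, @base, a
  var ch = word.charAt(0);
  // Capitalized words (class names by Turtle convention) vs. everything else.
  return (ch >= "A" && ch <= "Z") ? "comment" : "keyword";
}
```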
From: <dme...@us...> - 2014-05-21 09:58:19
|
Revision: 8391
http://sourceforge.net/p/bigdata/code/8391
Author: dmekonnen
Date: 2014-05-21 09:58:14 +0000 (Wed, 21 May 2014)
Log Message:
-----------
commit to fix failed sync.
Added Paths:
-----------
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/lib/bigdata-ganglia-1.0.4.jar
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/architecture/HA_LBS.xlsx
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/counters/format/
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/counters/format/CounterSetFormat.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/search/ConfigurableAnalyzerFactory.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/search/ConfiguredAnalyzerFactory.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/search/EmptyAnalyzer.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/search/LanguageRange.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/search/NeedsConfiguringAnalyzerFactory.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/search/TermCompletionAnalyzer.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/releases/RELEASE_1_3_1.txt
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/test/com/bigdata/search/AbstractAnalyzerFactoryTest.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/test/com/bigdata/search/AbstractDefaultAnalyzerFactoryTest.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/test/com/bigdata/search/AbstractSearchTest.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/test/com/bigdata/search/NonEnglishExamples.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/test/com/bigdata/search/TestConfigurableAnalyzerFactory.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/test/com/bigdata/search/TestConfigurableAsDefaultAnalyzerFactory.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/test/com/bigdata/search/TestDefaultAnalyzerFactory.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/test/com/bigdata/search/TestLanguageRange.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/test/com/bigdata/search/TestUnconfiguredAnalyzerFactory.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/test/com/bigdata/search/examples.properties
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/lib/bigdata-ganglia-1.0.4.jar
===================================================================
(Binary files differ)
Index: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/lib/bigdata-ganglia-1.0.4.jar
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/lib/bigdata-ganglia-1.0.4.jar 2014-05-21 09:56:34 UTC (rev 8390)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/lib/bigdata-ganglia-1.0.4.jar 2014-05-21 09:58:14 UTC (rev 8391)
Property changes on: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/lib/bigdata-ganglia-1.0.4.jar
___________________________________________________________________
Added: svn:mime-type
## -0,0 +1 ##
+application/octet-stream
\ No newline at end of property
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/architecture/HA_LBS.xlsx
===================================================================
(Binary files differ)
Index: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/architecture/HA_LBS.xlsx
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/architecture/HA_LBS.xlsx 2014-05-21 09:56:34 UTC (rev 8390)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/architecture/HA_LBS.xlsx 2014-05-21 09:58:14 UTC (rev 8391)
Property changes on: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/architecture/HA_LBS.xlsx
___________________________________________________________________
Added: svn:mime-type
## -0,0 +1 ##
+application/octet-stream
\ No newline at end of property
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/counters/format/CounterSetFormat.java
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/counters/format/CounterSetFormat.java (rev 0)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/counters/format/CounterSetFormat.java 2014-05-21 09:58:14 UTC (rev 8391)
@@ -0,0 +1,214 @@
+/**
+
+Copyright (C) SYSTAP, LLC 2006-2012. All rights reserved.
+
+Contact:
+ SYSTAP, LLC
+ 4501 Tower Road
+ Greensboro, NC 27410
+ lic...@bi...
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; version 2 of the License.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+/*
+Portions of this code are:
+
+Copyright Aduna (http://www.aduna-software.com/) © 2001-2007
+
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without modification,
+are permitted provided that the following conditions are met:
+
+ * Redistributions of source code must retain the above copyright notice,
+ this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright notice,
+ this list of conditions and the following disclaimer in the documentation
+ and/or other materials provided with the distribution.
+ * Neither the name of the copyright holder nor the names of its contributors
+ may be used to endorse or promote products derived from this software
+ without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
+ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
+ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+*/
+/*
+ * Created on Jul 25, 2012
+ */
+package com.bigdata.counters.format;
+
+import info.aduna.lang.FileFormat;
+
+import java.nio.charset.Charset;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Iterator;
+import java.util.concurrent.CopyOnWriteArraySet;
+
+import com.bigdata.counters.ICounterSet;
+
+/**
+ * Formats for {@link ICounterSet}s.
+ *
+ * @author <a href="mailto:tho...@us...">Bryan Thompson</a>
+ */
+public class CounterSetFormat extends FileFormat implements Iterable<CounterSetFormat> {
+
+ /**
+ * All known/registered formats for this class.
+ */
+ private static final CopyOnWriteArraySet<CounterSetFormat> formats = new CopyOnWriteArraySet<CounterSetFormat>();
+
+ /**
+ * A thread-safe iterator that will visit all known formats (declared by
+ * {@link Iterable}).
+ */
+ @Override
+ public Iterator<CounterSetFormat> iterator() {
+
+ return formats.iterator();
+
+ }
+
+ /**
+ * Alternative static method signature.
+ */
+ static public Iterator<CounterSetFormat> getFormats() {
+
+ return formats.iterator();
+
+ }
+
+ /**
+ * Text properties file using <code>text/plain</code> and
+ * <code>UTF-8</code>.
+ */
+ public static final CounterSetFormat TEXT = new CounterSetFormat(//
+ "text/plain",//
+ Arrays.asList("text/plain"),//
+ Charset.forName("UTF-8"), //
+ Arrays.asList("counterSet")//
+ );
+
+ /**
+ * XML properties file using <code>application/xml</code> and
+ * <code>UTF-8</code>.
+ */
+ public static final CounterSetFormat XML = new CounterSetFormat(//
+ "application/xml",//
+ Arrays.asList("application/xml"),//
+ Charset.forName("UTF-8"),// charset
+ Arrays.asList("xml")// known-file-extensions
+ );
+
+ /**
+     * HTML file using <code>text/html</code> and <code>UTF-8</code>.
+ */
+ public static final CounterSetFormat HTML = new CounterSetFormat(//
+ "text/html",//
+ Arrays.asList("text/html"),//
+ Charset.forName("UTF-8"),// charset
+ Arrays.asList("html")// known-file-extensions
+ );
+
+ /**
+ * Registers the specified format.
+ */
+ public static void register(final CounterSetFormat format) {
+
+ formats.add(format);
+
+ }
+
+ static {
+
+ register(HTML);
+ register(TEXT);
+ register(XML);
+
+ }
+
+    /**
+     * Creates a new CounterSetFormat object.
+     *
+     * @param name
+     *            The name of the counter set file format, e.g. "text/plain".
+     * @param mimeTypes
+     *            The MIME types of the file format, e.g.
+     *            <tt>application/xml</tt> for the XML file format.
+     *            The first item in the list is interpreted as the default
+     *            MIME type for the format.
+     * @param charset
+     *            The default character encoding of the file format.
+     *            Specify <tt>null</tt> if not applicable.
+     * @param fileExtensions
+     *            The format's file extensions, e.g. <tt>xml</tt> for
+     *            XML files. The first item in the list is interpreted
+     *            as the default file extension for the format.
+     */
+ public CounterSetFormat(final String name,
+ final Collection<String> mimeTypes, final Charset charset,
+ final Collection<String> fileExtensions) {
+
+ super(name, mimeTypes, charset, fileExtensions);
+
+ }
+
+ /**
+     * Tries to determine the appropriate file format based on a MIME type
+ * that describes the content type.
+ *
+ * @param mimeType
+ * A MIME type, e.g. "text/html".
+     * @return A {@link CounterSetFormat} object if the MIME type was
+     *         recognized, or <tt>null</tt> otherwise.
+     * @see #forMIMEType(String, CounterSetFormat)
+ * @see #getMIMETypes()
+ */
+ public static CounterSetFormat forMIMEType(final String mimeType) {
+
+ return forMIMEType(mimeType, null);
+
+ }
+
+ /**
+     * Tries to determine the appropriate file format based on a MIME type
+ * that describes the content type. The supplied fallback format will be
+ * returned when the MIME type was not recognized.
+ *
+     * @param mimeType
+     *            A MIME type, e.g. "text/html".
+     * @return A {@link CounterSetFormat} that matches the MIME type, or the
+     *         fallback format if the MIME type was not recognized.
+ * @see #forMIMEType(String)
+ * @see #getMIMETypes()
+ */
+ public static CounterSetFormat forMIMEType(String mimeType,
+ CounterSetFormat fallback) {
+
+ return matchMIMEType(mimeType, formats/* Iterable<FileFormat> */,
+ fallback);
+
+ }
+
+}
\ No newline at end of file
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/search/ConfigurableAnalyzerFactory.java
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/search/ConfigurableAnalyzerFactory.java (rev 0)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/search/ConfigurableAnalyzerFactory.java 2014-05-21 09:58:14 UTC (rev 8391)
@@ -0,0 +1,318 @@
+/**
+
+Copyright (C) SYSTAP, LLC 2006-2014. All rights reserved.
+
+Contact:
+ SYSTAP, LLC
+ 4501 Tower Road
+ Greensboro, NC 27410
+ lic...@bi...
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; version 2 of the License.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+/*
+ * Created on May 6, 2014 by Jeremy J. Carroll, Syapse Inc.
+ */
+package com.bigdata.search;
+
+import java.io.IOException;
+import java.io.Reader;
+import java.io.StringReader;
+import java.util.Set;
+import java.util.regex.Pattern;
+
+import org.apache.log4j.Logger;
+import org.apache.lucene.analysis.Analyzer;
+import org.apache.lucene.analysis.KeywordAnalyzer;
+import org.apache.lucene.analysis.SimpleAnalyzer;
+import org.apache.lucene.analysis.StopAnalyzer;
+import org.apache.lucene.analysis.TokenFilter;
+import org.apache.lucene.analysis.TokenStream;
+import org.apache.lucene.analysis.WhitespaceAnalyzer;
+import org.apache.lucene.analysis.miscellaneous.PatternAnalyzer;
+import org.apache.lucene.analysis.standard.StandardAnalyzer;
+import org.apache.lucene.analysis.tokenattributes.TermAttribute;
+import org.apache.lucene.util.Version;
+
+/**
+ * This class can be used with the bigdata properties file to specify
+ * which {@link Analyzer}s are used for which languages.
+ * Languages are specified by the language tag on RDF literals, which conform
+ * with <a href="http://www.rfc-editor.org/rfc/rfc5646.txt">RFC 5646</a>.
+ * Within bigdata, plain literals are assigned to the default locale's language.
+ *
+ * The bigdata properties are used to map language ranges, as specified by
+ * <a href="http://www.rfc-editor.org/rfc/rfc4647.txt">RFC 4647</a> to classes which extend {@link Analyzer}.
+ * Supported classes include all the natural-language-specific classes from Lucene, and also:
+ * <ul>
+ * <li>{@link PatternAnalyzer}
+ * <li>{@link TermCompletionAnalyzer}
+ * <li>{@link KeywordAnalyzer}
+ * <li>{@link SimpleAnalyzer}
+ * <li>{@link StopAnalyzer}
+ * <li>{@link WhitespaceAnalyzer}
+ * <li>{@link StandardAnalyzer}
+ * </ul>
+ * More generally any subclass of {@link Analyzer} that has at least one constructor matching:
+ * <ul>
+ * <li>no arguments
+ * <li>{@link Version}
+ * <li>{@link Version}, {@link Set}
+ * </ul>
+ * is usable. If the class has a static method named <code>getDefaultStopSet()</code> then this is assumed
+ * to return the default stop word set; some of the Lucene analyzers store their default stop words elsewhere,
+ * and such stop words are usable by this class. If no stop word set can be found, and there is a constructor without
+ * stopwords and a constructor with stopwords, then the former is assumed to use a default stop word set.
+ * <p>
+ * Configuration is by means of the bigdata properties file.
+ * All relevant properties start with <code>com.bigdata.search.ConfigurableAnalyzerFactory</code>, which we
+ * abbreviate to <code>c.b.s.C</code> in this documentation.
+ * Properties from {@link Options} apply to the factory.
+ * <p>
+ * Other properties, from {@link AnalyzerOptions}, start with
+ * <code>c.b.s.C.analyzer.<em>language-range</em></code> where <code><em>language-range</em></code> conforms
+ * with the extended language range construct from RFC 4647, section 2.2.
+ * Because bigdata does not allow '*' in property names, the character '_' is used
+ * to substitute for '*' in extended language ranges in property names.
+ * These are used to specify an analyzer for the given language range.
+ * <p>
+ * If no analyzer is specified for the language range <code>*</code> then the {@link StandardAnalyzer} is used.
+ * <p>
+ * Given any specific language, the analyzer matching the longest configured language range,
+ * measured in number of subtags, is returned by {@link #getAnalyzer(String, boolean)}.
+ * In the event of a tie, the alphabetically first language range is used.
+ * The algorithm to find a match is "Extended Filtering" as defined in section 3.3.2 of RFC 4647.
+ * <p>
+ * Some useful analyzers are as follows:
+ * <dl>
+ * <dt>{@link KeywordAnalyzer}</dt>
+ * <dd>This treats every lexical value as a single search token</dd>
+ * <dt>{@link WhitespaceAnalyzer}</dt>
+ * <dd>This uses whitespace to tokenize</dd>
+ * <dt>{@link PatternAnalyzer}</dt>
+ * <dd>This uses a regular expression to tokenize</dd>
+ * <dt>{@link TermCompletionAnalyzer}</dt>
+ * <dd>This uses up to three regular expressions to specify multiple tokens for each word, to address term completion use cases.</dd>
+ * <dt>{@link EmptyAnalyzer}</dt>
+ * <dd>This suppresses the functionality, by treating every expression as a stop word.</dd>
+ * </dl>
+ * In addition, there are the language-specific analyzers that are included
+ * by using the option {@link Options#NATURAL_LANGUAGE_SUPPORT}.
+ *
+ *
+ * @author jeremycarroll
+ *
+ */
+public class ConfigurableAnalyzerFactory implements IAnalyzerFactory {
+ final private static transient Logger log = Logger.getLogger(ConfigurableAnalyzerFactory.class);
+
+ /**
+ * Options understood by the {@link ConfigurableAnalyzerFactory}.
+ */
+ public interface Options {
+ /**
+     * By setting this option to true, all the known Lucene Analyzers for natural
+     * languages are used for a range of language tags.
+ * These settings may then be overridden by the settings of the user.
+ * Specifically the following properties are loaded, prior to loading the
+ * user's specification (with <code>c.b.s.C</code> expanding to
+ * <code>com.bigdata.search.ConfigurableAnalyzerFactory</code>)
+<pre>
+c.b.s.C.analyzer._.like=eng
+c.b.s.C.analyzer.por.analyzerClass=org.apache.lucene.analysis.br.BrazilianAnalyzer
+c.b.s.C.analyzer.pt.like=por
+c.b.s.C.analyzer.zho.analyzerClass=org.apache.lucene.analysis.cn.ChineseAnalyzer
+c.b.s.C.analyzer.chi.like=zho
+c.b.s.C.analyzer.zh.like=zho
+c.b.s.C.analyzer.jpn.analyzerClass=org.apache.lucene.analysis.cjk.CJKAnalyzer
+c.b.s.C.analyzer.ja.like=jpn
+c.b.s.C.analyzer.kor.like=jpn
+c.b.s.C.analyzer.ko.like=kor
+c.b.s.C.analyzer.ces.analyzerClass=org.apache.lucene.analysis.cz.CzechAnalyzer
+c.b.s.C.analyzer.cze.like=ces
+c.b.s.C.analyzer.cs.like=ces
+c.b.s.C.analyzer.dut.analyzerClass=org.apache.lucene.analysis.nl.DutchAnalyzer
+c.b.s.C.analyzer.nld.like=dut
+c.b.s.C.analyzer.nl.like=dut
+c.b.s.C.analyzer.deu.analyzerClass=org.apache.lucene.analysis.de.GermanAnalyzer
+c.b.s.C.analyzer.ger.like=deu
+c.b.s.C.analyzer.de.like=deu
+c.b.s.C.analyzer.gre.analyzerClass=org.apache.lucene.analysis.el.GreekAnalyzer
+c.b.s.C.analyzer.ell.like=gre
+c.b.s.C.analyzer.el.like=gre
+c.b.s.C.analyzer.rus.analyzerClass=org.apache.lucene.analysis.ru.RussianAnalyzer
+c.b.s.C.analyzer.ru.like=rus
+c.b.s.C.analyzer.tha.analyzerClass=org.apache.lucene.analysis.th.ThaiAnalyzer
+c.b.s.C.analyzer.th.like=tha
+c.b.s.C.analyzer.eng.analyzerClass=org.apache.lucene.analysis.standard.StandardAnalyzer
+c.b.s.C.analyzer.en.like=eng
+</pre>
+ *
+ *
+ */
+ String NATURAL_LANGUAGE_SUPPORT = ConfigurableAnalyzerFactory.class.getName() + ".naturalLanguageSupport";
+ /**
+ * This is the prefix to all properties configuring the individual analyzers.
+ */
+ String ANALYZER = ConfigurableAnalyzerFactory.class.getName() + ".analyzer.";
+
+ String DEFAULT_NATURAL_LANGUAGE_SUPPORT = "false";
+ }
+ /**
+ * Options understood by analyzers created by {@link ConfigurableAnalyzerFactory}.
+ * These options are appended to the RFC 4647 language range
+ */
+ public interface AnalyzerOptions {
+ /**
+ * If specified this is the fully qualified name of a subclass of {@link Analyzer}
+ * that has appropriate constructors.
+ * This is set implicitly if some of the options below are selected (for example {@link #PATTERN}).
+ * For each configured language range, if it is not set, either explicitly or implicitly, then
+ * {@link #LIKE} must be specified.
+ */
+ String ANALYZER_CLASS = "analyzerClass";
+
+ /**
+ * The value of this property is a language range, for which
+ * an analyzer is defined.
+ * Treat this language range in the same way as the specified
+ * language range.
+ *
+ * {@link #LIKE} loops are not permitted.
+ *
+     * If this option is specified for a language range,
+ * then no other option is permitted.
+ */
+ String LIKE = "like";
+
+ /**
+ * The value of this property is one of:
+ * <dl>
+ * <dt>{@link #STOPWORDS_VALUE_NONE}</dt>
+ * <dd>This analyzer is used without stop words.</dd>
+ * <dt>{@link #STOPWORDS_VALUE_DEFAULT}</dt>
+ * <dd>Use the default setting for stopwords for this analyzer. It is an error
+     * to set this value on some analyzers, such as {@link SimpleAnalyzer}, that do not support stop words.
+ * </dd>
+ * <dt>A fully qualified class name</dt>
+ * <dd>... of a subclass of {@link Analyzer} which
+ * has a static method <code>getDefaultStopSet()</code>, in which case, the returned set of stop words is used.
+ * </dd>
+ * </dl>
+ * If the {@link #ANALYZER_CLASS} does not support stop words then any value other than {@link #STOPWORDS_VALUE_NONE} is an error.
+ * If the {@link #ANALYZER_CLASS} does support stop words then the default value is {@link #STOPWORDS_VALUE_DEFAULT}
+ */
+ String STOPWORDS = "stopwords";
+
+ String STOPWORDS_VALUE_DEFAULT = "default";
+
+ String STOPWORDS_VALUE_NONE = "none";
+ /**
+ * The value of the pattern parameter to
+ * {@link PatternAnalyzer#PatternAnalyzer(Version, Pattern, boolean, Set)}
+ * (Note the {@link Pattern#UNICODE_CHARACTER_CLASS} flag is enabled).
+ * It is an error if a different analyzer class is specified.
+ */
+ String PATTERN = "pattern";
+ /**
+ * The value of the wordBoundary parameter to
+ * {@link TermCompletionAnalyzer#TermCompletionAnalyzer(Pattern, Pattern, Pattern, boolean)}
+ * (Note the {@link Pattern#UNICODE_CHARACTER_CLASS} flag is enabled).
+ * It is an error if a different analyzer class is specified.
+ */
+ String WORD_BOUNDARY = "wordBoundary";
+ /**
+ * The value of the subWordBoundary parameter to
+ * {@link TermCompletionAnalyzer#TermCompletionAnalyzer(Pattern, Pattern, Pattern, boolean)}
+ * (Note the {@link Pattern#UNICODE_CHARACTER_CLASS} flag is enabled).
+ * It is an error if a different analyzer class is specified.
+ */
+ String SUB_WORD_BOUNDARY = "subWordBoundary";
+ /**
+ * The value of the softHyphens parameter to
+ * {@link TermCompletionAnalyzer#TermCompletionAnalyzer(Pattern, Pattern, Pattern, boolean)}
+ * (Note the {@link Pattern#UNICODE_CHARACTER_CLASS} flag is enabled).
+ * It is an error if a different analyzer class is specified.
+ */
+ String SOFT_HYPHENS = "softHyphens";
+ /**
+     * The value of the alwaysRemoveSoftHyphens parameter to
+ * {@link TermCompletionAnalyzer#TermCompletionAnalyzer(Pattern, Pattern, Pattern, boolean)}
+ * (Note the {@link Pattern#UNICODE_CHARACTER_CLASS} flag is enabled).
+ * It is an error if a different analyzer class is specified.
+ */
+ String ALWAYS_REMOVE_SOFT_HYPHENS = "alwaysRemoveSoftHyphens";
+
+ boolean DEFAULT_ALWAYS_REMOVE_SOFT_HYPHENS = false;
+
+ /**
+ * The default sub-word boundary is a pattern that never matches,
+ * i.e. there are no sub-word boundaries.
+ */
+ Pattern DEFAULT_SUB_WORD_BOUNDARY = Pattern.compile("(?!)");
+
+ }
+
+ /**
+ * Initialization is a little tricky, because on the very first
+ * call to the constructor with a new namespace or a new journal
+ * the fullTextIndex is not ready for use.
+ * Therefore we delegate to an unconfigured object
+ * which on the first call to {@link NeedsConfiguringAnalyzerFactory#getAnalyzer(String, boolean)}
+ * does the configuration and replaces itself here with a
+ * {@link ConfiguredAnalyzerFactory}
+ */
+ IAnalyzerFactory delegate;
+
+ /**
+ * Builds a new ConfigurableAnalyzerFactory.
+ * @param fullTextIndex
+ */
+ public ConfigurableAnalyzerFactory(final FullTextIndex<?> fullTextIndex) {
+ delegate = new NeedsConfiguringAnalyzerFactory(this, fullTextIndex);
+ }
+
+
+ static int loggerIdCounter = 0;
+ @Override
+ public Analyzer getAnalyzer(final String languageCode, boolean filterStopwords) {
+
+ final Analyzer unlogged = delegate.getAnalyzer(languageCode, filterStopwords);
+ if (log.isDebugEnabled()) {
+ return new Analyzer() {
+ @Override
+ public TokenStream tokenStream(final String fieldName, final Reader reader) {
+ final int id = loggerIdCounter++;
+ final String term = TermCompletionAnalyzer.getStringReaderContents((StringReader)reader);
+ log.debug(id + " " + languageCode +" **"+term+"**");
+ return new TokenFilter(unlogged.tokenStream(fieldName, reader)){
+
+ TermAttribute attr = addAttribute(TermAttribute.class);
+ @Override
+ public boolean incrementToken() throws IOException {
+ if (input.incrementToken()) {
+ log.debug(id + " |"+attr.term()+"|");
+ return true;
+ }
+ return false;
+ }};
+ }
+ };
+ } else {
+ return unlogged;
+ }
+
+ }
+
+}
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/search/ConfiguredAnalyzerFactory.java
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/search/ConfiguredAnalyzerFactory.java (rev 0)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/search/ConfiguredAnalyzerFactory.java 2014-05-21 09:58:14 UTC (rev 8391)
@@ -0,0 +1,161 @@
+/**
+
+Copyright (C) SYSTAP, LLC 2006-2014. All rights reserved.
+
+Contact:
+ SYSTAP, LLC
+ 4501 Tower Road
+ Greensboro, NC 27410
+ lic...@bi...
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; version 2 of the License.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+/*
+ * Created on May 6, 2014 by Jeremy J. Carroll, Syapse Inc.
+ */
+package com.bigdata.search;
+
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+
+import org.apache.lucene.analysis.Analyzer;
+
+import com.bigdata.search.ConfigurableAnalyzerFactory.AnalyzerOptions;
+/**
+ * This comment describes the implementation of {@link ConfiguredAnalyzerFactory}.
+ * The only method in the interface is {@link ConfiguredAnalyzerFactory#getAnalyzer(String, boolean)};
+ * a map is used from language tag to {@link AnalyzerPair}, where the pair contains
+ * an {@link Analyzer} both with and without stopwords configured (sometimes these two analyzers are identical,
+ * for example when stop words are not supported or not required).
+ * <p>
+ * If there is no entry for the language tag in the map {@link ConfiguredAnalyzerFactory#langTag2AnalyzerPair},
+ * then one is created, by walking down the array {@link ConfiguredAnalyzerFactory#config} of AnalyzerPairs
+ * until a matching one is found.
+ * @author jeremycarroll
+ *
+ */
+class ConfiguredAnalyzerFactory implements IAnalyzerFactory {
+
+
+ /**
+ * These provide a mapping from a language range to a pair of Analyzers
+ * and sort with the best-match (i.e. longest match) first.
+ * @author jeremycarroll
+ *
+ */
+ protected static class AnalyzerPair implements Comparable<AnalyzerPair>{
+ final LanguageRange range;
+ private final Analyzer withStopWords;
+ private final Analyzer withoutStopWords;
+
+ public Analyzer getAnalyzer(boolean filterStopwords) {
+ return filterStopwords ? withStopWords : withoutStopWords;
+ }
+
+ public boolean extendedFilterMatch(String[] language) {
+ return range.extendedFilterMatch(language);
+ }
+
+ AnalyzerPair(String range, Analyzer withStopWords, Analyzer withOutStopWords) {
+ this.range = new LanguageRange(range);
+ this.withStopWords = withStopWords;
+ this.withoutStopWords = withOutStopWords;
+ }
+
+ /**
+ * This clone constructor implements {@link AnalyzerOptions#LIKE}.
+ * @param range
+ * @param copyMe
+ */
+ AnalyzerPair(String range, AnalyzerPair copyMe) {
+ this(range, copyMe.withStopWords, copyMe.withoutStopWords);
+ }
+
+ @Override
+ public String toString() {
+ return range.full + "=(" + withStopWords.getClass().getSimpleName() +")";
+ }
+
+ @Override
+ public int compareTo(AnalyzerPair o) {
+ return range.compareTo(o.range);
+ }
+ }
+
+
+ private final AnalyzerPair config[];
+
+ /**
+ * This caches the result of looking up a lang tag in the
+ * config of language ranges.
+ */
+    private final Map<String, AnalyzerPair> langTag2AnalyzerPair = new ConcurrentHashMap<String, AnalyzerPair>();
+
+ /**
+     * While it would be very unusual to have more than 500 different language tags in a store,
+     * it is possible, so we use a maximum size to prevent a memory explosion, and a naive caching
+     * strategy so that the code still works on the {@link #MAX_LANG_CACHE_SIZE}+1 th entry.
+ */
+ private static final int MAX_LANG_CACHE_SIZE = 500;
+
+
+ private final String defaultLanguage;
+    /**
+     * Builds a new ConfiguredAnalyzerFactory from the sorted config
+     * of analyzer pairs and the default language.
+     */
+ public ConfiguredAnalyzerFactory(AnalyzerPair config[], String defaultLanguage) {
+ this.config = config;
+ this.defaultLanguage = defaultLanguage;
+ }
+
+ private String getDefaultLanguage() {
+ return defaultLanguage;
+ }
+
+ @Override
+ public Analyzer getAnalyzer(String languageCode, boolean filterStopwords) {
+
+ if (languageCode == null || languageCode.equals("")) {
+
+ languageCode = getDefaultLanguage();
+ }
+
+ AnalyzerPair pair = langTag2AnalyzerPair.get(languageCode);
+
+ if (pair == null) {
+ pair = lookupPair(languageCode);
+
+ // naive cache - clear everything if cache is full
+ if (langTag2AnalyzerPair.size() == MAX_LANG_CACHE_SIZE) {
+ langTag2AnalyzerPair.clear();
+ }
+ // there is a race condition below, but we don't care who wins.
+ langTag2AnalyzerPair.put(languageCode, pair);
+ }
+
+ return pair.getAnalyzer(filterStopwords);
+
+ }
+
+ private AnalyzerPair lookupPair(String languageCode) {
+ String language[] = languageCode.split("-");
+ for (AnalyzerPair p: config) {
+ if (p.extendedFilterMatch(language)) {
+ return p;
+ }
+ }
+ throw new RuntimeException("Impossible - supposedly - did not match '*'");
+ }
+}
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/search/EmptyAnalyzer.java
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/search/EmptyAnalyzer.java (rev 0)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/search/EmptyAnalyzer.java 2014-05-21 09:58:14 UTC (rev 8391)
@@ -0,0 +1,49 @@
+/**
+
+Copyright (C) SYSTAP, LLC 2006-2014. All rights reserved.
+
+Contact:
+ SYSTAP, LLC
+ 4501 Tower Road
+ Greensboro, NC 27410
+ lic...@bi...
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; version 2 of the License.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+/*
+ * Created on May 6, 2014 by Jeremy J. Carroll, Syapse Inc.
+ */
+package com.bigdata.search;
+
+import java.io.Reader;
+
+import org.apache.lucene.analysis.Analyzer;
+import org.apache.lucene.analysis.TokenStream;
+import org.apache.lucene.analysis.miscellaneous.EmptyTokenStream;
+
+/**
+ * An analyzer that always returns an {@link EmptyTokenStream}. This can
+ * be used with {@link ConfigurableAnalyzerFactory}
+ * to switch off indexing and searching for specific language tags.
+ * @author jeremycarroll
+ *
+ */
+public class EmptyAnalyzer extends Analyzer {
+
+ @Override
+ public TokenStream tokenStream(String arg0, Reader arg1) {
+ return new EmptyTokenStream();
+ }
+
+}
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/search/LanguageRange.java
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/search/LanguageRange.java (rev 0)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/search/LanguageRange.java 2014-05-21 09:58:14 UTC (rev 8391)
@@ -0,0 +1,126 @@
+package com.bigdata.search;
+
+import java.util.Locale;
+
+
+/**
+ * This is an implementation of the RFC 4647 language range,
+ * targeted at the specific needs within bigdata, and only
+ * supporting the extended filtering specified in section 3.3.2.
+ * <p>
+ * Language ranges are comparable so that
+ * sorting an array and then matching a language tag against each
+ * member of the array in sequence will give the longest match.
+ * i.e. the longer ranges come first.
+ * @author jeremycarroll
+ *
+ */
+public class LanguageRange implements Comparable<LanguageRange> {
+
+ private final String range[];
+ final String full;
+ /**
+     * Note: the range must be in lower case; this is not verified.
+ * @param range
+ */
+ public LanguageRange(String range) {
+ this.range = range.split("-");
+ full = range;
+ }
+
+ @Override
+ public int compareTo(LanguageRange o) {
+ if (equals(o)) {
+ return 0;
+ }
+ int diff = o.range.length - range.length;
+ if (diff != 0) {
+ // longest first
+ return diff;
+ }
+ if (range.length == 1) {
+ // * last
+ if (range[0].equals("*")) {
+ return 1;
+ }
+ if (o.range[0].equals("*")) {
+ return -1;
+ }
+ }
+ // alphabetically
+ for (int i=0; i<range.length; i++) {
+ diff = range[i].compareTo(o.range[i]);
+ if (diff != 0) {
+ return diff;
+ }
+ }
+ throw new RuntimeException("Impossible - supposedly");
+ }
+
+ @Override
+ public boolean equals(Object o) {
+ return (o instanceof LanguageRange) && ((LanguageRange)o).full.equals(full);
+ }
+ @Override
+ public int hashCode() {
+ return full.hashCode();
+ }
+
+ /**
+     * This implements the algorithm of section 3.3.2 of RFC 4647
+ * as modified with the observation about private use tags
+ * in <a href="http://lists.w3.org/Archives/Public/www-international/2014AprJun/0084">
+ * this message</a>.
+ *
+ *
+ * @param langTag The RFC 5646 Language tag in lower case
+ * @return The result of the algorithm
+ */
+ public boolean extendedFilterMatch(String langTag) {
+ return extendedFilterMatch(langTag.toLowerCase(Locale.ROOT).split("-"));
+ }
+
+ // See RFC 4647, 3.3.2
+ boolean extendedFilterMatch(String[] language) {
+ // RFC 4647 step 2
+ if (!matchSubTag(language[0], range[0])) {
+ return false;
+ }
+ int rPos = 1;
+ int lPos = 1;
+        // variant step - for private use tags
+ if (language[0].equals("x") && range[0].equals("*")) {
+ lPos = 0;
+ }
+ // RFC 4647 step 3
+ while (rPos < range.length) {
+ // step 3A
+ if (range[rPos].equals("*")) {
+ rPos ++;
+ continue;
+ }
+ // step 3B
+ if (lPos >= language.length) {
+ return false;
+ }
+ // step 3C
+ if (matchSubTag(language[lPos], range[rPos])) {
+ lPos++;
+ rPos++;
+ continue;
+ }
+ if (language[lPos].length()==1) {
+ return false;
+ }
+ lPos++;
+ }
+ // RFC 4647 step 4
+ return true;
+ }
+
+ // RFC 4647, 3.3.2, step 1
+ private boolean matchSubTag(String langSubTag, String rangeSubTag) {
+ return langSubTag.equals(rangeSubTag) || "*".equals(rangeSubTag);
+ }
+
+}
\ No newline at end of file
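For context (this is not part of the commit), the RFC 4647 section 3.3.2 extended filtering algorithm that `LanguageRange.extendedFilterMatch` implements can be sketched standalone as below, without the private-use-tag variant. The class and method names (`Rfc4647Demo`, `matches`) are hypothetical:

```java
import java.util.Locale;

// Standalone sketch of RFC 4647 section 3.3.2 extended filtering,
// mirroring LanguageRange.extendedFilterMatch above (without the
// private-use-tag variant). Class and method names are hypothetical.
public class Rfc4647Demo {

    /** Returns true if the language tag matches the extended language range. */
    public static boolean matches(String langTag, String range) {
        String[] language = langTag.toLowerCase(Locale.ROOT).split("-");
        String[] r = range.toLowerCase(Locale.ROOT).split("-");
        // Step 2: the first subtags must match (or the range starts with "*").
        if (!subTag(language[0], r[0])) {
            return false;
        }
        int rPos = 1;
        int lPos = 1;
        // Step 3: walk the remaining range subtags.
        while (rPos < r.length) {
            if (r[rPos].equals("*")) {             // 3A: "*" matches zero or more subtags
                rPos++;
                continue;
            }
            if (lPos >= language.length) {         // 3B: tag exhausted before the range
                return false;
            }
            if (subTag(language[lPos], r[rPos])) { // 3C: subtags match, advance both
                lPos++;
                rPos++;
                continue;
            }
            if (language[lPos].length() == 1) {    // 3D: a singleton subtag ends matching
                return false;
            }
            lPos++;                                // 3E: skip the unmatched tag subtag
        }
        return true;                               // Step 4
    }

    private static boolean subTag(String langSubTag, String rangeSubTag) {
        return langSubTag.equals(rangeSubTag) || "*".equals(rangeSubTag);
    }

    public static void main(String[] args) {
        // Examples taken from RFC 4647 for the range "de-*-DE".
        System.out.println(matches("de-DE-1996", "de-*-DE")); // true
        System.out.println(matches("de-Deva", "de-*-DE"));    // false
    }
}
```

The examples in `main` come directly from the worked examples in RFC 4647 itself.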
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/search/NeedsConfiguringAnalyzerFactory.java
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/search/NeedsConfiguringAnalyzerFactory.java (rev 0)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/search/NeedsConfiguringAnalyzerFactory.java 2014-05-21 09:58:14 UTC (rev 8391)
@@ -0,0 +1,649 @@
+/**
+
+Copyright (C) SYSTAP, LLC 2006-2014. All rights reserved.
+
+Contact:
+ SYSTAP, LLC
+ 4501 Tower Road
+ Greensboro, NC 27410
+ lic...@bi...
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; version 2 of the License.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+/*
+ * Created on May 6, 2014 by Jeremy J. Carroll, Syapse Inc.
+ */
+package com.bigdata.search;
+
+import java.io.IOException;
+import java.io.StringReader;
+import java.lang.reflect.Constructor;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.Enumeration;
+import java.util.HashMap;
+import java.util.Locale;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Set;
+import java.util.UUID;
+import java.util.WeakHashMap;
+import java.util.regex.Pattern;
+
+import org.apache.log4j.Logger;
+import org.apache.lucene.analysis.Analyzer;
+import org.apache.lucene.analysis.StopAnalyzer;
+import org.apache.lucene.analysis.miscellaneous.PatternAnalyzer;
+import org.apache.lucene.analysis.ru.RussianAnalyzer;
+import org.apache.lucene.analysis.standard.StandardAnalyzer;
+import org.apache.lucene.util.Version;
+
+import com.bigdata.btree.keys.IKeyBuilder;
+import com.bigdata.btree.keys.KeyBuilder;
+import com.bigdata.search.ConfigurableAnalyzerFactory.AnalyzerOptions;
+import com.bigdata.search.ConfigurableAnalyzerFactory.Options;
+
+
+/**
+ * <p>
+ * The bulk of the code in this class is invoked from {@link #init()} to set up the array of
+ * {@link ConfiguredAnalyzerFactory.AnalyzerPair}s. For example, the subclasses of {@link AnalyzerPair}
+ * exist simply to call the appropriate constructor in the appropriate way: the difficulty is that many subclasses
+ * of {@link Analyzer} have constructors with different signatures, and our code needs to navigate each variant.
+ * @author jeremycarroll
+ *
+ */
+class NeedsConfiguringAnalyzerFactory implements IAnalyzerFactory {
+ final private static transient Logger log = Logger.getLogger(NeedsConfiguringAnalyzerFactory.class);
+
+ /**
+ * We create only one {@link ConfiguredAnalyzerFactory} per namespace
+ * and store it here. The UUID is stable and allows us to side-step lifecycle
+ * issues such as creation and destruction of namespaces, potentially with different properties.
+ * We use a WeakHashMap to ensure that after the destruction of a namespace we clean up.
+ * We have to synchronize this for thread safety.
+ */
+ private static final Map<UUID, ConfiguredAnalyzerFactory> allConfigs =
+ Collections.synchronizedMap(new WeakHashMap<UUID, ConfiguredAnalyzerFactory>());
+
+
+ private static final String ALL_LUCENE_NATURAL_LANGUAGES =
+ "com.bigdata.search.ConfigurableAnalyzerFactory.analyzer.*.like=eng\n" +
+ "com.bigdata.search.ConfigurableAnalyzerFactory.analyzer.por.analyzerClass=org.apache.lucene.analysis.br.BrazilianAnalyzer\n" +
+ "com.bigdata.search.ConfigurableAnalyzerFactory.analyzer.pt.like=por\n" +
+ "com.bigdata.search.ConfigurableAnalyzerFactory.analyzer.zho.analyzerClass=org.apache.lucene.analysis.cn.ChineseAnalyzer\n" +
+ "com.bigdata.search.ConfigurableAnalyzerFactory.analyzer.chi.like=zho\n" +
+ "com.bigdata.search.ConfigurableAnalyzerFactory.analyzer.zh.like=zho\n" +
+ "com.bigdata.search.ConfigurableAnalyzerFactory.analyzer.jpn.analyzerClass=org.apache.lucene.analysis.cjk.CJKAnalyzer\n" +
+ "com.bigdata.search.ConfigurableAnalyzerFactory.analyzer.ja.like=jpn\n" +
+ "com.bigdata.search.ConfigurableAnalyzerFactory.analyzer.kor.like=jpn\n" +
+ "com.bigdata.search.ConfigurableAnalyzerFactory.analyzer.ko.like=kor\n" +
+ "com.bigdata.search.ConfigurableAnalyzerFactory.analyzer.ces.analyzerClass=org.apache.lucene.analysis.cz.CzechAnalyzer\n" +
+ "com.bigdata.search.ConfigurableAnalyzerFactory.analyzer.cze.like=ces\n" +
+ "com.bigdata.search.ConfigurableAnalyzerFactory.analyzer.cs.like=ces\n" +
+ "com.bigdata.search.ConfigurableAnalyzerFactory.analyzer.dut.analyzerClass=org.apache.lucene.analysis.nl.DutchAnalyzer\n" +
+ "com.bigdata.search.ConfigurableAnalyzerFactory.analyzer.nld.like=dut\n" +
+ "com.bigdata.search.ConfigurableAnalyzerFactory.analyzer.nl.like=dut\n" +
+ "com.bigdata.search.ConfigurableAnalyzerFactory.analyzer.deu.analyzerClass=org.apache.lucene.analysis.de.GermanAnalyzer\n" +
+ "com.bigdata.search.ConfigurableAnalyzerFactory.analyzer.ger.like=deu\n" +
+ "com.bigdata.search.ConfigurableAnalyzerFactory.analyzer.de.like=deu\n" +
+ "com.bigdata.search.ConfigurableAnalyzerFactory.analyzer.gre.analyzerClass=org.apache.lucene.analysis.el.GreekAnalyzer\n" +
+ "com.bigdata.search.ConfigurableAnalyzerFactory.analyzer.ell.like=gre\n" +
+ "com.bigdata.search.ConfigurableAnalyzerFactory.analyzer.el.like=gre\n" +
+ "com.bigdata.search.ConfigurableAnalyzerFactory.analyzer.rus.analyzerClass=org.apache.lucene.analysis.ru.RussianAnalyzer\n" +
+ "com.bigdata.search.ConfigurableAnalyzerFactory.analyzer.ru.like=rus\n" +
+ "com.bigdata.search.ConfigurableAnalyzerFactory.analyzer.tha.analyzerClass=org.apache.lucene.analysis.th.ThaiAnalyzer\n" +
+ "com.bigdata.search.ConfigurableAnalyzerFactory.analyzer.th.like=tha\n" +
+ "com.bigdata.search.ConfigurableAnalyzerFactory.analyzer.eng.analyzerClass=org.apache.lucene.analysis.standard.StandardAnalyzer\n" +
+ "com.bigdata.search.ConfigurableAnalyzerFactory.analyzer.en.like=eng\n";
+
+ private static final String LUCENE_STANDARD_ANALYZER =
+ "com.bigdata.search.ConfigurableAnalyzerFactory.analyzer.*.analyzerClass=org.apache.lucene.analysis.standard.StandardAnalyzer\n";
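As an aside, the dotted property keys above follow the pattern `<prefix><language-range>.<option>=<value>`. A minimal standalone sketch of grouping such keys by language range, roughly what `properties2analyzers` further down in this file does (the class name `AnalyzerPropsDemo` is hypothetical):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

// Hypothetical standalone sketch: group analyzer config properties of the
// form <PREFIX><language-range>.<option>=<value> by language range.
public class AnalyzerPropsDemo {

    public static final String PREFIX =
            "com.bigdata.search.ConfigurableAnalyzerFactory.analyzer.";

    public static Map<String, Map<String, String>> group(Properties props) {
        Map<String, Map<String, String>> byRange = new HashMap<>();
        for (String key : props.stringPropertyNames()) {
            if (!key.startsWith(PREFIX)) {
                continue; // not an analyzer option
            }
            String rest = key.substring(PREFIX.length());
            int dot = rest.lastIndexOf('.');
            String range = rest.substring(0, dot);   // e.g. "pt" or "*"
            String option = rest.substring(dot + 1); // e.g. "like" or "analyzerClass"
            byRange.computeIfAbsent(range, r -> new HashMap<>())
                   .put(option, props.getProperty(key));
        }
        return byRange;
    }

    public static void main(String[] args) {
        Properties p = new Properties();
        p.setProperty(PREFIX + "pt.like", "por");
        p.setProperty(PREFIX + "*.analyzerClass",
                "org.apache.lucene.analysis.standard.StandardAnalyzer");
        System.out.println(group(p));
    }
}
```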
+
+ static int loggerIdCounter = 0;
+
+ /**
+ * This class and all its subclasses provide a variety of patterns
+ * for mapping from the various constructor patterns of subclasses
+	 * of {@link Analyzer} to {@link ConfiguredAnalyzerFactory.AnalyzerPair}.
+ * @author jeremycarroll
+ *
+ */
+ private static class AnalyzerPair extends ConfiguredAnalyzerFactory.AnalyzerPair {
+
+ AnalyzerPair(String range, Analyzer withStopWords, Analyzer withOutStopWords) {
+ super(range, withStopWords, withOutStopWords);
+ }
+
+ /**
+ * This clone constructor implements {@link AnalyzerOptions#LIKE}.
+ * @param range
+ * @param copyMe
+ */
+ AnalyzerPair(String range, AnalyzerPair copyMe) {
+ super(range, copyMe);
+ }
+
+ /**
+ * If we have a constructor, with arguments including a populated
+ * stop word set, then we can use it to make both the withStopWords
+ * analyzer, and the withoutStopWords analyzer.
+ * @param range
+ * @param cons A Constructor including a {@link java.util.Set} argument
+ * for the stop words.
+ * @param params The arguments to pass to the constructor including a populated stopword set.
+ * @throws Exception
+ */
+ AnalyzerPair(String range, Constructor<? extends Analyzer> cons, Object ... params) throws Exception {
+ this(range, cons.newInstance(params), cons.newInstance(useEmptyStopWordSet(params)));
+ }
+ AnalyzerPair(String range, Analyzer stopWordsNotSupported) {
+ this(range, stopWordsNotSupported, stopWordsNotSupported);
+ }
+ private static Object[] useEmptyStopWordSet(Object[] params) {
+ Object rslt[] = new Object[params.length];
+ for (int i=0; i<params.length; i++) {
+ if (params[i] instanceof Set) {
+ rslt[i] = Collections.EMPTY_SET;
+ } else {
+ rslt[i] = params[i];
+ }
+ }
+ return rslt;
+ }
+
+ }
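The `useEmptyStopWordSet` trick above, which reuses a single constructor argument list twice while swapping any stopword `Set` for an empty set, can be illustrated standalone (`EmptyStopWordsDemo` is a hypothetical name):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Hypothetical standalone illustration of the useEmptyStopWordSet pattern
// above: copy a constructor argument list, replacing any Set argument
// (assumed to be the stop words) with an empty set.
public class EmptyStopWordsDemo {

    public static Object[] withEmptySets(Object[] params) {
        Object[] result = new Object[params.length];
        for (int i = 0; i < params.length; i++) {
            result[i] = (params[i] instanceof Set) ? Collections.EMPTY_SET : params[i];
        }
        return result;
    }

    public static void main(String[] args) {
        Object[] in = { "someVersion", new HashSet<>(Arrays.asList("the", "a")) };
        Object[] out = withEmptySets(in);
        System.out.println(((Set<?>) out[1]).isEmpty()); // true
    }
}
```

The same argument array can then be handed to `Constructor.newInstance` twice: once as-is for the with-stopwords analyzer, once through `withEmptySets` for the without-stopwords analyzer.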
+
+
+ /**
+ * Used for Analyzer classes with a constructor with signature (Version, Set).
+ * @author jeremycarroll
+ *
+ */
+ private static class VersionSetAnalyzerPair extends AnalyzerPair {
+ public VersionSetAnalyzerPair(ConfigOptionsToAnalyzer lro,
+ Class<? extends Analyzer> cls) throws Exception {
+ super(lro.languageRange, getConstructor(cls, Version.class, Set.class), Version.LUCENE_CURRENT, lro.getStopWords());
+ }
+ }
+
+ /**
+ * Used for Analyzer classes which do not support stopwords and have a constructor with signature (Version).
+ * @author jeremycarroll
+ *
+ */
+ private static class VersionAnalyzerPair extends AnalyzerPair {
+ public VersionAnalyzerPair(String range, Class<? extends Analyzer> cls) throws Exception {
+ super(range, getConstructor(cls, Version.class).newInstance(Version.LUCENE_CURRENT));
+ }
+ }
+
+ /**
+ * Special case code for {@link PatternAnalyzer}
+ * @author jeremycarroll
+ *
+ */
+ private static class PatternAnalyzerPair extends AnalyzerPair {
+ public PatternAnalyzerPair(ConfigOptionsToAnalyzer lro, Pattern pattern) throws Exception {
+ super(lro.languageRange, getConstructor(PatternAnalyzer.class,Version.class,Pattern.class,Boolean.TYPE,Set.class),
+ Version.LUCENE_CURRENT,
+ pattern,
+ true,
+ lro.getStopWords());
+ }
+ }
+
+
+
+ /**
+ * This class is initialized with the config options, using the {@link #setProperty(String, String)}
+ * method, for a particular language range and works out which pair of {@link Analyzer}s
+ * to use for that language range.
+ * <p>
+	 * Instances of this class are only alive during the execution of
+	 * {@link NeedsConfiguringAnalyzerFactory#init()};
+	 * the life-cycle is:
+	 * <ol>
+	 * <li>The relevant config properties are applied, and are used to populate the fields.
+ * <li>The fields are validated
+ * <li>An {@link AnalyzerPair} is constructed
+ * </ol>
+ *
+ * @author jeremycarroll
+ *
+ */
+ private static class ConfigOptionsToAnalyzer {
+
+ String like;
+ String className;
+ String stopwords;
+ Pattern pattern;
+ final String languageRange;
+ AnalyzerPair result;
+ Pattern wordBoundary;
+ Pattern subWordBoundary;
+ Pattern softHyphens;
+ Boolean alwaysRemoveSoftHyphens;
+
+ public ConfigOptionsToAnalyzer(String languageRange) {
+ this.languageRange = languageRange;
+ }
+
+ /**
+ * This is called only when we have already identified that
+ * the class does support stopwords.
+ * @return
+ */
+ public Set<?> getStopWords() {
+
+ if (doNotUseStopWords())
+ return Collections.EMPTY_SET;
+
+ if (useDefaultStopWords()) {
+ return getStopWordsForClass(className);
+ }
+
+ return getStopWordsForClass(stopwords);
+ }
+
+ boolean doNotUseStopWords() {
+ return AnalyzerOptions.STOPWORDS_VALUE_NONE.equals(stopwords) || (stopwords == null && pattern != null);
+ }
+
+ protected Set<?> getStopWordsForClass(String clazzName) {
+ Class<? extends Analyzer> analyzerClass = getAnalyzerClass(clazzName);
+ try {
+ return (Set<?>) analyzerClass.getMethod("getDefaultStopSet").invoke(null);
+ } catch (Exception e) {
+ if (StandardAnalyzer.class.equals(analyzerClass)) {
+ return StandardAnalyzer.STOP_WORDS_SET;
+ }
+ if (StopAnalyzer.class.equals(analyzerClass)) {
+ return StopAnalyzer.ENGLISH_STOP_WORDS_SET;
+ }
+ throw new RuntimeException("Failed to find stop words from " + clazzName + " for language range "+languageRange);
+ }
+ }
+
+ protected boolean useDefaultStopWords() {
+ return ( stopwords == null && pattern == null ) || AnalyzerOptions.STOPWORDS_VALUE_DEFAULT.equals(stopwords);
+ }
+
+ /**
+ * The first step in the life-cycle, used to initialize the fields.
+ * @return true if the property was recognized.
+ */
+ public boolean setProperty(String shortProperty, String value) {
+ if (shortProperty.equals(AnalyzerOptions.LIKE) ) {
+ like = value;
+ } else if (shortProperty.equals(AnalyzerOptions.ANALYZER_CLASS) ) {
+ className = value;
+ } else if (shortProperty.equals(AnalyzerOptions.STOPWORDS) ) {
+ stopwords = value;
+ } else if (shortProperty.equals(AnalyzerOptions.PATTERN) ) {
+ pattern = Pattern.compile(value,Pattern.UNICODE_CHARACTER_CLASS);
+ } else if (shortProperty.equals(AnalyzerOptions.WORD_BOUNDARY) ) {
+ wordBoundary = Pattern.compile(value,Pattern.UNICODE_CHARACTER_CLASS);
+ } else if (shortProperty.equals(AnalyzerOptions.SUB_WORD_BOUNDARY) ) {
+ subWordBoundary = Pattern.compile(value,Pattern.UNICODE_CHARACTER_CLASS);
+ } else if (shortProperty.equals(AnalyzerOptions.SOFT_HYPHENS) ) {
+ softHyphens = Pattern.compile(value,Pattern.UNICODE_CHARACTER_CLASS);
+ } else if (shortProperty.equals(AnalyzerOptions.ALWAYS_REMOVE_SOFT_HYPHENS) ) {
+ alwaysRemoveSoftHyphens = Boolean.valueOf(value);
+ } else {
+ return false;
+ }
+ return true;
+ }
+
+ /**
+ * The second phase of the life-cycle, used for sanity checking.
+ */
+ public void validate() {
+ if (pattern != null ) {
+			if ( className != null && !className.equals(PatternAnalyzer.class.getName())) {
+				throw new RuntimeException("Bad Option: Language range "+languageRange + " with pattern property for class "+ className);
+ }
+ className = PatternAnalyzer.class.getName();
+ }
+ if (this.wordBoundary != null ) {
+			if ( className != null && !className.equals(TermCompletionAnalyzer.class.getName())) {
+				throw new RuntimeException("Bad Option: Language range "+languageRange + " with wordBoundary property for class "+ className);
+ }
+ className = TermCompletionAnalyzer.class.getName();
+
+ if ( subWordBoundary == null ) {
+ subWordBoundary = AnalyzerOptions.DEFAULT_SUB_WORD_BOUNDARY;
+ }
+ if ( alwaysRemoveSoftHyphens != null && softHyphens == null ) {
+				throw new RuntimeException("Bad option: Language range "+languageRange + ": must specify softHyphens when setting alwaysRemoveSoftHyphens");
+ }
+ if (softHyphens != null && alwaysRemoveSoftHyphens == null) {
+ alwaysRemoveSoftHyphens = AnalyzerOptions.DEFAULT_ALWAYS_REMOVE_SOFT_HYPHENS;
+ }
+
+ } else if ( subWordBoundary != null || softHyphens != null || alwaysRemoveSoftHyphens != null ||
+ TermCompletionAnalyzer.class.getName().equals(className) ) {
+ throw new RuntimeException("Bad option: Language range "+languageRange + ": must specify wordBoundary for TermCompletionAnalyzer");
+ }
+
+ if (PatternAnalyzer.class.getName().equals(className) && pattern == null ) {
+ throw new RuntimeException("Bad Option: Language range "+languageRange + " must specify pattern for PatternAnalyzer.");
+ }
+ if ( (like != null) == (className != null) ) {
+ throw new RuntimeException("Bad Option: Language range "+languageRange + " must specify exactly one of implementation class or like.");
+ }
+ if (stopwords != null && like != null) {
+ throw new RuntimeException("Bad Option: Language range "+languageRange + " must not specify stopwords with like.");
+ }
+
+ }
+
+ /**
+		 * The third and final phase of the life-cycle, used for identifying
+ * the AnalyzerPair.
+ */
+ private AnalyzerPair construct() throws Exception {
+ if (className == null) {
+ return null;
+ }
+ if (pattern != null) {
+ return new PatternAnalyzerPair(this, pattern);
+ }
+ if (softHyphens != null) {
+ return new AnalyzerPair(
+ languageRange,
+ new TermCompletionAnalyzer(
+ wordBoundary,
+ subWordBoundary,
+ softHyphens,
+ alwaysRemoveSoftHyphens));
+ }
+ if (wordBoundary != null) {
+ return new AnalyzerPair(
+ languageRange,
+ new TermCompletionAnalyzer(
+ wordBoundary,
+ subWordBoundary));
+ }
+ final Class<? extends Analyzer> cls = getAnalyzerClass();
+
+ if (hasConstructor(cls, Version.class, Set.class)) {
+
+ // RussianAnalyzer is missing any way to access stop words.
+ if (RussianAnalyzer.class.equals(cls)) {
+ if (useDefaultStopWords()) {
+ return new AnalyzerPair(languageRange, new RussianAnalyzer(Version.LUCENE_CURRENT), new RussianAnalyzer(Version.LUCENE_CURRENT, Collections.EMPTY_SET));
+ }
+ if (doNotUseStopWords()) {
+ return new AnalyzerPair(languageRange, new RussianAnalyzer(Version.LUCENE_CURRENT, Collections.EMPTY_SET));
+ }
+ }
+ return new VersionSetAnalyzerPair(this, cls);
+ }
+
+ if (stopwords != null && !stopwords.equals(AnalyzerOptions.STOPWORDS_VALUE_NONE)) {
+ throw new RuntimeException("Bad option: language range: " + languageRange + " stopwords are not supported by " + className);
+ }
+ if (hasConstructor(cls, Version.class)) {
+ return new VersionAnalyzerPair(languageRange, cls);
+ }
+
+ if (hasConstructor(cls)) {
+ return new AnalyzerPair(languageRange, cls.newInstance());
+ }
+ throw new RuntimeException("Bad option: cannot find constructor for class " + className + " for language range " + languageRange);
+ }
+
+ /**
+ * Also part of the third phase of the life-cycle, following the {@link AnalyzerOptions#LIKE}
+ * properties.
+ * @param depth
+ * @param max
+ * @param analyzers
+ * @return
+ */
+ AnalyzerPair followLikesToAnalyzerPair(int depth, int max,
+ Map<String, ConfigOptionsToAnalyzer> analyzers) {
+ if (result == null) {
+ if (depth == max) {
+ throw new RuntimeException("Bad configuration: - 'like' loop for language range " + languageRange);
+ }
+ ConfigOptionsToAnalyzer next = analyzers.get(like);
+ if (next == null) {
+ throw new RuntimeException("Bad option: - 'like' not found for language range " + languageRange+ " (not found: '"+ like +"')");
+ }
+ result = new AnalyzerPair(languageRange, next.followLikesToAnalyzerPair(depth+1, max, analyzers));
+ }
+ return result;
+ }
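`followLikesToAnalyzerPair` bounds its recursion by the total number of configured ranges, so any cycle in the `like` declarations is reported instead of looping forever. A standalone sketch of the same idea (all names here are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical standalone sketch: resolve a chain of "like" aliases to a
// concrete value, using a depth bound to detect cycles, as
// followLikesToAnalyzerPair above does.
public class LikeChainDemo {

    public static String resolve(String start, Map<String, String> likes,
                                 Map<String, String> concrete) {
        // A chain longer than the total configuration must revisit a node.
        int max = likes.size() + concrete.size();
        String cur = start;
        for (int depth = 0; depth <= max; depth++) {
            if (concrete.containsKey(cur)) {
                return concrete.get(cur);
            }
            if (!likes.containsKey(cur)) {
                throw new RuntimeException("'like' not found: " + cur);
            }
            cur = likes.get(cur);
        }
        throw new RuntimeException("'like' loop starting at " + start);
    }

    public static void main(String[] args) {
        Map<String, String> likes = new HashMap<>();
        likes.put("pt", "por");
        Map<String, String> concrete = new HashMap<>();
        concrete.put("por", "BrazilianAnalyzer");
        System.out.println(resolve("pt", likes, concrete)); // BrazilianAnalyzer
    }
}
```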
+
+ protected Class<? extends Analyzer> getAnalyzerClass() {
+ return getAnalyzerClass(className);
+ }
+
+ @SuppressWarnings("unchecked")
+ protected Class<? extends Analyzer> getAnalyzerClass(String className2) {
+ final Class<? extends Analyzer> cls;
+ try {
+ cls = (Class<? extends Analyzer>) Class.forName(className2);
+ } catch (ClassNotFoundException e) {
+ throw new RuntimeException("Bad option: cannot find class " + className2 + " for language range " + languageRange, e);
+ }
+ return cls;
+ }
+
+ void setAnalyzerPair(AnalyzerPair ap) {
+ result = ap;
+ }
+ }
+
+
+ private final FullTextIndex<?> fullTextIndex;
+
+ private final ConfigurableAnalyzerFactory parent;
+
+
+ NeedsConfiguringAnalyzerFactory(ConfigurableAnalyzerFactory parent, final FullTextIndex<?> fullTextIndex) {
+ if (fullTextIndex == null)
+ throw new IllegalArgumentException();
+ this.fullTextIndex = fullTextIndex;
+ this.parent = parent;
+
+ }
+
+ private ConfiguredAnalyzerFactory init() {
+
+ UUID uuid = fullTextIndex.getIndex().getIndexMetadata().getIndexUUID();
+
+ ConfiguredAnalyzerFactory configuration = allConfigs.get(uuid);
+
+ if (configuration == null) {
+
+ // First time for this namespace - we need to analyze the properties
+ // and construct a ConfiguredAnalyzerFactory
+
+ final Properties properties = initProperties();
+
+ final Map<String, ConfigOptionsToAnalyzer> analyzers = new HashMap<String, ConfigOptionsToAnalyzer>();
+
+ properties2analyzers(properties, analyzers);
+
+ if (!analyzers.containsKey("*")) {
+ throw new RuntimeException("Bad config: must specify behavior on language range '*'");
+ }
+
+ for (ConfigOptionsToAnalyzer a: analyzers.values()) {
+ a.validate();
+ }
+
+ try {
+ for (ConfigOptionsToAnalyzer a: analyzers.values()) {
+ a.setAnalyzerPair(a.construct());
+ }
+ } catch (Exception e) {
+ throw new RuntimeException("Cannot construct ConfigurableAnalyzerFactory", e);
+ }
+ int sz = analyzers.size();
+ for (ConfigOptionsToAnalyzer a: analyzers.values()) {
+ a.followLikesToAnalyzerPair(0, sz, analyzers);
+ }
+
+ AnalyzerPair[] allPairs = new AnalyzerPair[sz];
+ int i = 0;
+ for (ConfigOptionsToAnalyzer a: analyzers.values()) {
+ allPairs[i++] = a.result;
+ }
+ Arrays.sort(allPairs);
+ if (log.isInfoEnabled()) {
+ StringBuilder sb = new StringBuilder();
+			sb.append("Installed text Analyzers: ");
+ for (AnalyzerPair ap: allPairs) {
+ sb.append(ap.toString());
+ sb.append(", ");
+ }
+ log.info(sb.toString());
+ }
+ configuration = new ConfiguredAnalyzerFactory(allPairs, getDefaultLanguage());
+ allConfigs.put(uuid, configuration);
+ }
+ return configuration;
+ }
+
+ private String getDefaultLanguage() {
+
+ final IKeyBuilder keyBuilder = fullTextIndex.getKeyBuilder();
+
+ if (keyBuilder.isUnicodeSupported()) {
+
+ // The configured locale for the database.
+ final Locale locale = ((KeyBuilder) keyBuilder)
+ .getSortKeyGenerator().getLocale();
+
+ // The language for that locale.
+ return locale.getLanguage();
+
+ } else {
+ // Rule, Britannia!
+ return "en";
+
+ }
+ }
+
+ private static boolean hasConstructor(Class<? extends Analyzer> cls, Class<?> ... parameterTypes) {
+ return getConstructor(cls, parameterTypes) != null;
+ }
+
+ protected static Constructor<? extends Analyzer> getConstructor(Class<? extends Analyzer> cls,
+ Class<?>... parameterTypes) {
+ try {
+ return cls.getConstructor(parameterTypes);
+ } catch (NoSuchMethodException | SecurityException e) {
+ return null;
+ }
+ }
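`hasConstructor`/`getConstructor` above probe reflectively for a constructor with a given signature and return `null` on a miss, letting the caller fall back to the next candidate signature. A minimal standalone sketch of that probing pattern (`CtorProbeDemo` is a hypothetical name):

```java
import java.lang.reflect.Constructor;

// Hypothetical standalone sketch of the constructor-probing pattern above:
// return the matching public constructor, or null so the caller can fall
// back to trying a different signature.
public class CtorProbeDemo {

    public static Constructor<?> find(Class<?> cls, Class<?>... parameterTypes) {
        try {
            return cls.getConstructor(parameterTypes);
        } catch (NoSuchMethodException | SecurityException e) {
            return null; // caller falls back to the next signature
        }
    }

    public static void main(String[] args) {
        // StringBuilder has an (int) constructor but no (long) constructor.
        System.out.println(find(StringBuilder.class, int.class) != null);  // true
        System.out.println(find(StringBuilder.class, long.class) != null); // false
    }
}
```

Note that `Class.getConstructor` matches parameter types exactly (no widening), which is why the code in this commit probes each signature it knows about in turn.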
+
+ private void properties2analyzers(Properties props, Map<String, ConfigOptionsToAnalyzer> analyzers) {
+
+ Enumeration<?> en = props.propertyNames();
+ while (en.hasMoreElements()) {
+
+ String prop = (String)en.nextElement();
+ if (prop.equals(Options.NATURAL_LANGUAGE_SUPPORT)) continue;
+ if (prop.startsWith(Options.ANALYZER)) {
+ String...
[truncated message content] |
|
From: <dme...@us...> - 2014-05-21 09:56:40
|
Revision: 8390
http://sourceforge.net/p/bigdata/code/8390
Author: dmekonnen
Date: 2014-05-21 09:56:34 +0000 (Wed, 21 May 2014)
Log Message:
-----------
last commit failed, retrying.
Added Paths:
-----------
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/LEGAL/apache-commons.txt
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/LEGAL/jettison-license.txt
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/LEGAL/rexster-license.txt
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/lib/blueprints-core-2.5.0.jar
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/lib/blueprints-test-2.5.0.jar
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/lib/commons-configuration-1.10.jar
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/lib/jettison-1.3.3.jar
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/lib/rexster-core-2.5.0.jar
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/java/com/bigdata/blueprints/BigdataGraph.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/java/com/bigdata/blueprints/BigdataGraphBulkLoad.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/java/com/bigdata/blueprints/BigdataGraphClient.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/java/com/bigdata/blueprints/BigdataGraphConfiguration.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/java/com/bigdata/blueprints/BigdataGraphEmbedded.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/java/com/bigdata/blueprints/BigdataGraphFactory.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/java/com/bigdata/blueprints/BigdataGraphQuery.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/java/com/bigdata/blueprints/BigdataPredicate.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/java/com/bigdata/blueprints/BigdataRDFFactory.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/java/com/bigdata/blueprints/BlueprintsRDFFactory.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/resources/
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/resources/rexster.xml
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/test/
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/test/com/
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/test/com/bigdata/
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/test/com/bigdata/blueprints/
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/test/com/bigdata/blueprints/AbstractTestBigdataGraph.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/test/com/bigdata/blueprints/TestAll.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/test/com/bigdata/blueprints/TestBigdataGraphClient.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/test/com/bigdata/blueprints/TestBigdataGraphEmbedded.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/test/com/bigdata/blueprints/graph-example-1.xml
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/LEGAL/apache-commons.txt
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/LEGAL/apache-commons.txt (rev 0)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/LEGAL/apache-commons.txt 2014-05-21 09:56:34 UTC (rev 8390)
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/LEGAL/jettison-license.txt
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/LEGAL/jettison-license.txt (rev 0)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/LEGAL/jettison-license.txt 2014-05-21 09:56:34 UTC (rev 8390)
@@ -0,0 +1,13 @@
+Copyright 2006 Envoi Solutions LLC
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
\ No newline at end of file
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/LEGAL/rexster-license.txt
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/LEGAL/rexster-license.txt (rev 0)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/LEGAL/rexster-license.txt 2014-05-21 09:56:34 UTC (rev 8390)
@@ -0,0 +1,24 @@
+Copyright (c) 2009-Infinity, TinkerPop [http://tinkerpop.com]
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ * Neither the name of the TinkerPop nor the
+ names of its contributors may be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL TINKERPOP BE LIABLE FOR ANY
+DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
\ No newline at end of file
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/lib/blueprints-core-2.5.0.jar
===================================================================
(Binary files differ)
Index: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/lib/blueprints-core-2.5.0.jar
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/lib/blueprints-core-2.5.0.jar 2014-05-21 09:12:36 UTC (rev 8389)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/lib/blueprints-core-2.5.0.jar 2014-05-21 09:56:34 UTC (rev 8390)
Property changes on: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/lib/blueprints-core-2.5.0.jar
___________________________________________________________________
Added: svn:mime-type
## -0,0 +1 ##
+application/octet-stream
\ No newline at end of property
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/lib/blueprints-test-2.5.0.jar
===================================================================
(Binary files differ)
Index: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/lib/blueprints-test-2.5.0.jar
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/lib/blueprints-test-2.5.0.jar 2014-05-21 09:12:36 UTC (rev 8389)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/lib/blueprints-test-2.5.0.jar 2014-05-21 09:56:34 UTC (rev 8390)
Property changes on: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/lib/blueprints-test-2.5.0.jar
___________________________________________________________________
Added: svn:mime-type
## -0,0 +1 ##
+application/octet-stream
\ No newline at end of property
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/lib/commons-configuration-1.10.jar
===================================================================
(Binary files differ)
Index: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/lib/commons-configuration-1.10.jar
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/lib/commons-configuration-1.10.jar 2014-05-21 09:12:36 UTC (rev 8389)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/lib/commons-configuration-1.10.jar 2014-05-21 09:56:34 UTC (rev 8390)
Property changes on: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/lib/commons-configuration-1.10.jar
___________________________________________________________________
Added: svn:mime-type
## -0,0 +1 ##
+application/octet-stream
\ No newline at end of property
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/lib/jettison-1.3.3.jar
===================================================================
(Binary files differ)
Index: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/lib/jettison-1.3.3.jar
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/lib/jettison-1.3.3.jar 2014-05-21 09:12:36 UTC (rev 8389)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/lib/jettison-1.3.3.jar 2014-05-21 09:56:34 UTC (rev 8390)
Property changes on: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/lib/jettison-1.3.3.jar
___________________________________________________________________
Added: svn:mime-type
## -0,0 +1 ##
+application/octet-stream
\ No newline at end of property
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/lib/rexster-core-2.5.0.jar
===================================================================
(Binary files differ)
Index: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/lib/rexster-core-2.5.0.jar
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/lib/rexster-core-2.5.0.jar 2014-05-21 09:12:36 UTC (rev 8389)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/lib/rexster-core-2.5.0.jar 2014-05-21 09:56:34 UTC (rev 8390)
Property changes on: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/lib/rexster-core-2.5.0.jar
___________________________________________________________________
Added: svn:mime-type
## -0,0 +1 ##
+application/octet-stream
\ No newline at end of property
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/java/com/bigdata/blueprints/BigdataGraph.java
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/java/com/bigdata/blueprints/BigdataGraph.java (rev 0)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/java/com/bigdata/blueprints/BigdataGraph.java 2014-05-21 09:56:34 UTC (rev 8390)
@@ -0,0 +1,1050 @@
+/**
+Copyright (C) SYSTAP, LLC 2006-2014. All rights reserved.
+
+Contact:
+ SYSTAP, LLC
+ 4501 Tower Road
+ Greensboro, NC 27410
+ lic...@bi...
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; version 2 of the License.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+package com.bigdata.blueprints;
+
+import info.aduna.iteration.CloseableIteration;
+
+import java.util.Iterator;
+import java.util.LinkedHashSet;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Set;
+import java.util.UUID;
+
+import org.openrdf.OpenRDFException;
+import org.openrdf.model.Literal;
+import org.openrdf.model.Statement;
+import org.openrdf.model.URI;
+import org.openrdf.model.Value;
+import org.openrdf.model.impl.StatementImpl;
+import org.openrdf.model.impl.URIImpl;
+import org.openrdf.model.vocabulary.RDF;
+import org.openrdf.model.vocabulary.RDFS;
+import org.openrdf.query.GraphQueryResult;
+import org.openrdf.query.QueryLanguage;
+import org.openrdf.repository.RepositoryConnection;
+import org.openrdf.repository.RepositoryResult;
+
+import com.bigdata.rdf.store.BD;
+import com.tinkerpop.blueprints.Direction;
+import com.tinkerpop.blueprints.Edge;
+import com.tinkerpop.blueprints.Features;
+import com.tinkerpop.blueprints.Graph;
+import com.tinkerpop.blueprints.GraphQuery;
+import com.tinkerpop.blueprints.Vertex;
+import com.tinkerpop.blueprints.util.io.graphml.GraphMLReader;
+
+/**
+ * A base class for a Blueprints wrapper around a bigdata back-end.
+ *
+ * @author mikepersonick
+ *
+ */
+public abstract class BigdataGraph implements Graph {
+
+ /**
+ * URI used to represent a Vertex.
+ */
+ public static final URI VERTEX = new URIImpl(BD.NAMESPACE + "Vertex");
+
+ /**
+ * URI used to represent an Edge.
+ */
+ public static final URI EDGE = new URIImpl(BD.NAMESPACE + "Edge");
+
+ /**
+ * Factory for round-tripping between Blueprints data and RDF data.
+ */
+ final BlueprintsRDFFactory factory;
+
+ public BigdataGraph(final BlueprintsRDFFactory factory) {
+
+ this.factory = factory;
+
+ }
+
+ /**
+ * For some reason this is part of the specification (i.e. part of the
+ * Blueprints test suite).
+ */
+ public String toString() {
+
+ return getClass().getSimpleName().toLowerCase();
+
+ }
+
+ /**
+ * Different implementations will return different types of connections
+ * depending on the mode (client/server, embedded, read-only, etc.)
+ */
+ protected abstract RepositoryConnection cxn() throws Exception;
+
+ /**
+ * Return a single-valued property for an edge or vertex.
+ *
+ * @see {@link BigdataElement}
+ */
+ public Object getProperty(final URI uri, final String prop) {
+
+ return getProperty(uri, factory.toPropertyURI(prop));
+
+ }
+
+ /**
+ * Return a single-valued property for an edge or vertex.
+ *
+ * @see {@link BigdataElement}
+ */
+ public Object getProperty(final URI uri, final URI prop) {
+
+ try {
+
+ final RepositoryResult<Statement> result =
+ cxn().getStatements(uri, prop, null, false);
+
+ if (result.hasNext()) {
+
+ final Value value = result.next().getObject();
+
+ if (result.hasNext()) {
+ throw new RuntimeException(uri
+ + ": more than one value for p: " + prop
+ + ", did you mean to call getProperties()?");
+ }
+
+ if (!(value instanceof Literal)) {
+ throw new RuntimeException("not a property: " + value);
+ }
+
+ final Literal lit = (Literal) value;
+
+ return factory.fromLiteral(lit);
+
+ }
+
+ return null;
+
+ } catch (RuntimeException e) {
+ throw e;
+ } catch (Exception e) {
+ throw new RuntimeException(e);
+ }
+
+ }
+
+ /**
+ * Return a multi-valued property for an edge or vertex.
+ *
+ * @see {@link BigdataElement}
+ */
+ public List<Object> getProperties(final URI uri, final String prop) {
+
+ return getProperties(uri, factory.toPropertyURI(prop));
+
+ }
+
+
+ /**
+ * Return a multi-valued property for an edge or vertex.
+ *
+ * @see {@link BigdataElement}
+ */
+ public List<Object> getProperties(final URI uri, final URI prop) {
+
+ try {
+
+ final RepositoryResult<Statement> result =
+ cxn().getStatements(uri, prop, null, false);
+
+ final List<Object> props = new LinkedList<Object>();
+
+ while (result.hasNext()) {
+
+ final Value value = result.next().getObject();
+
+ if (!(value instanceof Literal)) {
+ throw new RuntimeException("not a property: " + value);
+ }
+
+ final Literal lit = (Literal) value;
+
+ props.add(factory.fromLiteral(lit));
+
+ }
+
+ return props;
+
+ } catch (RuntimeException e) {
+ throw e;
+ } catch (Exception e) {
+ throw new RuntimeException(e);
+ }
+
+ }
+
+ /**
+ * Return the property names for an edge or vertex.
+ *
+ * @see {@link BigdataElement}
+ */
+ public Set<String> getPropertyKeys(final URI uri) {
+
+ try {
+
+ final RepositoryResult<Statement> result =
+ cxn().getStatements(uri, null, null, false);
+
+ final Set<String> properties = new LinkedHashSet<String>();
+
+ while (result.hasNext()) {
+
+ final Statement stmt = result.next();
+
+ if (!(stmt.getObject() instanceof Literal)) {
+ continue;
+ }
+
+ if (stmt.getPredicate().equals(RDFS.LABEL)) {
+ continue;
+ }
+
+ final String p =
+ factory.fromPropertyURI(stmt.getPredicate());
+
+ properties.add(p);
+
+ }
+
+ return properties;
+
+ } catch (RuntimeException e) {
+ throw e;
+ } catch (Exception e) {
+ throw new RuntimeException(e);
+ }
+
+ }
+
+ /**
+ * Remove all values for a particular property on an edge or vertex.
+ *
+ * @see {@link BigdataElement}
+ */
+ public Object removeProperty(final URI uri, final String prop) {
+
+ return removeProperty(uri, factory.toPropertyURI(prop));
+
+ }
+
+ /**
+ * Remove all values for a particular property on an edge or vertex.
+ *
+ * @see {@link BigdataElement}
+ */
+ public Object removeProperty(final URI uri, final URI prop) {
+
+ try {
+
+ final Object oldVal = getProperty(uri, prop);
+
+ cxn().remove(uri, prop, null);
+
+ return oldVal;
+
+ } catch (Exception e) {
+ throw new RuntimeException(e);
+ }
+
+ }
+
+ /**
+ * Set a single-value property on an edge or vertex (remove the old
+ * value first).
+ *
+ * @see {@link BigdataElement}
+ */
+ public void setProperty(final URI uri, final String prop, final Object val) {
+
+ setProperty(uri, factory.toPropertyURI(prop), factory.toLiteral(val));
+
+ }
+
+ /**
+ * Set a single-value property on an edge or vertex (remove the old
+ * value first).
+ *
+ * @see {@link BigdataElement}
+ */
+ public void setProperty(final URI uri, final URI prop, final Literal val) {
+
+ try {
+
+ cxn().remove(uri, prop, null);
+
+ cxn().add(uri, prop, val);
+
+ } catch (RuntimeException e) {
+ throw e;
+ } catch (Exception e) {
+ throw new RuntimeException(e);
+ }
+
+ }
+
+ /**
+ * Add a property on an edge or vertex (multi-value property extension).
+ *
+ * @see {@link BigdataElement}
+ */
+ public void addProperty(final URI uri, final String prop, final Object val) {
+
+ addProperty(uri, factory.toPropertyURI(prop), factory.toLiteral(val));
+
+ }
+
+ /**
+ * Add a property on an edge or vertex (multi-value property extension).
+ *
+ * @see {@link BigdataElement}
+ */
+ public void addProperty(final URI uri, final URI prop, final Literal val) {
+
+ try {
+
+ cxn().add(uri, prop, val);
+
+ } catch (RuntimeException e) {
+ throw e;
+ } catch (Exception e) {
+ throw new RuntimeException(e);
+ }
+
+ }
+
+ /**
+ * Load a GraphML file into the graph. (Bulk-load operation.)
+ */
+ public void loadGraphML(final String file) throws Exception {
+
+ GraphMLReader.inputGraph(this, file);
+
+ }
+
+ /**
+ * Add an edge.
+ */
+ @Override
+ public Edge addEdge(final Object key, final Vertex from, final Vertex to,
+ final String label) {
+
+ if (label == null) {
+ throw new IllegalArgumentException();
+ }
+
+ final String eid = key != null ? key.toString() : UUID.randomUUID().toString();
+
+ final URI edgeURI = factory.toEdgeURI(eid);
+
+ if (key != null) {
+
+ final Edge edge = getEdge(key);
+
+ if (edge != null) {
+ if (!(edge.getVertex(Direction.OUT).equals(from) &&
+ (edge.getVertex(Direction.IN).equals(to)))) {
+ throw new IllegalArgumentException("edge already exists: " + key);
+ }
+ }
+
+ }
+
+ try {
+
+ // do we need to check this?
+// if (cxn().hasStatement(edgeURI, RDF.TYPE, EDGE, false)) {
+// throw new IllegalArgumentException("edge " + eid + " already exists");
+// }
+
+ final URI fromURI = factory.toVertexURI(from.getId().toString());
+ final URI toURI = factory.toVertexURI(to.getId().toString());
+
+ cxn().add(fromURI, edgeURI, toURI);
+ cxn().add(edgeURI, RDF.TYPE, EDGE);
+ cxn().add(edgeURI, RDFS.LABEL, factory.toLiteral(label));
+
+ return new BigdataEdge(new StatementImpl(fromURI, edgeURI, toURI), this);
+
+ } catch (RuntimeException e) {
+ throw e;
+ } catch (Exception e) {
+ throw new RuntimeException(e);
+ }
+
+ }
+
+ /**
+ * Add a vertex.
+ */
+ @Override
+ public Vertex addVertex(final Object key) {
+
+ try {
+
+ final String vid = key != null ?
+ key.toString() : UUID.randomUUID().toString();
+
+ final URI uri = factory.toVertexURI(vid);
+
+ // do we need to check this?
+// if (cxn().hasStatement(vertexURI, RDF.TYPE, VERTEX, false)) {
+// throw new IllegalArgumentException("vertex " + vid + " already exists");
+// }
+
+ cxn().add(uri, RDF.TYPE, VERTEX);
+
+ return new BigdataVertex(uri, this);
+
+ } catch (RuntimeException e) {
+ throw e;
+ } catch (Exception e) {
+ throw new RuntimeException(e);
+ }
+
+ }
+
+ /**
+ * Lookup an edge.
+ */
+ @Override
+ public Edge getEdge(final Object key) {
+
+ if (key == null)
+ throw new IllegalArgumentException();
+
+ try {
+
+ final URI edge = factory.toEdgeURI(key.toString());
+
+ final RepositoryResult<Statement> result =
+ cxn().getStatements(null, edge, null, false);
+
+ if (result.hasNext()) {
+
+ final Statement stmt = result.next();
+
+ if (result.hasNext()) {
+ throw new RuntimeException(
+ "duplicate edge: " + key);
+ }
+
+ return new BigdataEdge(stmt, this);
+
+ }
+
+ return null;
+
+ } catch (RuntimeException e) {
+ throw e;
+ } catch (Exception e) {
+ throw new RuntimeException(e);
+ }
+
+ }
+
+ /**
+ * Iterate all edges.
+ */
+ @Override
+ public Iterable<Edge> getEdges() {
+
+ final URI wild = null;
+ return getEdges(wild, wild);
+
+ }
+
+ /**
+ * Find edges based on the from and to vertices and the edge labels, all
+ * optional parameters (can be null). The edge labels can be null to include
+ * all labels.
+ * <p>
+ *
+ * @param from
+ * the from vertex (null for wildcard)
+ * @param to
+ * the to vertex (null for wildcard)
+ * @param labels
+ * the edge labels to consider (optional)
+ * @return the edges matching the supplied criteria
+ */
+ Iterable<Edge> getEdges(final URI from, final URI to, final String... labels) {
+
+ final GraphQueryResult stmts = getElements(from, to, labels);
+
+ return new EdgeIterable(stmts);
+
+ }
+
+ /**
+ * Translates the request to a high-performance SPARQL query:
+ *
+ * construct {
+ * ?from ?edge ?to .
+ * } where {
+ * ?edge rdf:type <Edge> .
+ *
+ * ?from ?edge ?to .
+ *
+ * # filter by edge label
+ * ?edge rdfs:label ?label .
+ * filter(?label in ("label1", "label2", ...)) .
+ * }
+ */
+ protected GraphQueryResult getElements(final URI from, final URI to,
+ final String... labels) {
+
+ final StringBuilder sb = new StringBuilder();
+ sb.append("construct { ?from ?edge ?to . } where {\n");
+ sb.append(" ?edge rdf:type bd:Edge .\n");
+ sb.append(" ?from ?edge ?to .\n");
+ if (labels != null && labels.length > 0) {
+ if (labels.length == 1) {
+ sb.append(" ?edge rdfs:label \"").append(labels[0]).append("\" .\n");
+ } else {
+ sb.append(" ?edge rdfs:label ?label .\n");
+ sb.append(" filter(?label in (");
+ for (String label : labels) {
+ sb.append("\""+label+"\", ");
+ }
+ sb.setLength(sb.length()-2);
+ sb.append(")) .\n");
+ }
+ }
+ sb.append("}");
+
+ // bind the from and/or to
+ final String queryStr = sb.toString()
+ .replace("?from", from != null ? "<"+from+">" : "?from")
+ .replace("?to", to != null ? "<"+to+">" : "?to");
+
+ try {
+
+ final org.openrdf.query.GraphQuery query =
+ cxn().prepareGraphQuery(QueryLanguage.SPARQL, queryStr);
+
+ final GraphQueryResult stmts = query.evaluate();
+
+ return stmts;
+
+ } catch (RuntimeException e) {
+ throw e;
+ } catch (Exception e) {
+ throw new RuntimeException(e);
+ }
+
+ }
+
+ /**
+ * Find edges based on a SPARQL construct query. The query MUST construct
+ * edge statements:
+ * <p>
+ * construct { ?from ?edge ?to } where { ... }
+ *
+ * @see {@link BigdataGraphQuery}
+ */
+ Iterable<Edge> getEdges(final String queryStr) {
+
+ try {
+
+ final org.openrdf.query.GraphQuery query =
+ cxn().prepareGraphQuery(QueryLanguage.SPARQL, queryStr);
+
+ final GraphQueryResult stmts = query.evaluate();
+
+ return new EdgeIterable(stmts);
+
+ } catch (RuntimeException e) {
+ throw e;
+ } catch (Exception e) {
+ throw new RuntimeException(e);
+ }
+
+ }
+
+ /**
+ * Find vertices based on the supplied from and to vertices and the edge
+ * labels. One or the other (from or to) must be null (wildcard), but not
+ * both. Use getEdges() for wildcards on both the from and to. The edge
+ * labels can be null to include all labels.
+ *
+ * @param from
+ * the from vertex (null for wildcard)
+ * @param to
+ * the to vertex (null for wildcard)
+ * @param labels
+ * the edge labels to consider (optional)
+ * @return
+ * the vertices matching the supplied criteria
+ */
+ Iterable<Vertex> getVertices(final URI from, final URI to,
+ final String... labels) {
+
+ if (from != null && to != null) {
+ throw new IllegalArgumentException();
+ }
+
+ if (from == null && to == null) {
+ throw new IllegalArgumentException();
+ }
+
+ final GraphQueryResult stmts = getElements(from, to, labels);
+
+ return new VertexIterable(stmts, from == null);
+
+ }
+
+ /**
+ * Find vertices based on a SPARQL construct query. If the subject parameter
+ * is true, the vertices will be taken from the subject position of the
+ * constructed statements, otherwise they will be taken from the object
+ * position.
+ *
+ * @see {@link BigdataGraphQuery}
+ */
+ Iterable<Vertex> getVertices(final String queryStr, final boolean subject) {
+
+ try {
+
+ final org.openrdf.query.GraphQuery query =
+ cxn().prepareGraphQuery(QueryLanguage.SPARQL, queryStr);
+
+ final GraphQueryResult stmts = query.evaluate();
+
+ return new VertexIterable(stmts, subject);
+
+ } catch (RuntimeException e) {
+ throw e;
+ } catch (Exception e) {
+ throw new RuntimeException(e);
+ }
+
+ }
+
+ /**
+ * Find edges with the supplied property value.
+ *
+ * construct {
+ * ?from ?edge ?to .
+ * }
+ * where {
+ * ?edge <prop> <val> .
+ * ?from ?edge ?to .
+ * }
+ */
+ @Override
+ public Iterable<Edge> getEdges(final String prop, final Object val) {
+
+ final URI p = factory.toPropertyURI(prop);
+ final Literal o = factory.toLiteral(val);
+
+ try {
+
+ final StringBuilder sb = new StringBuilder();
+ sb.append("construct { ?from ?edge ?to . } where {\n");
+ sb.append(" ?edge <"+p+"> "+o+" .\n");
+ sb.append(" ?from ?edge ?to .\n");
+ sb.append("}");
+
+ final String queryStr = sb.toString();
+
+ return getEdges(queryStr);
+
+ } catch (RuntimeException e) {
+ throw e;
+ } catch (Exception e) {
+ throw new RuntimeException(e);
+ }
+
+ }
+
+ /**
+ * Lookup a vertex.
+ */
+ @Override
+ public Vertex getVertex(final Object key) {
+
+ if (key == null)
+ throw new IllegalArgumentException();
+
+ final URI uri = factory.toVertexURI(key.toString());
+
+ try {
+
+ if (cxn().hasStatement(uri, RDF.TYPE, VERTEX, false)) {
+ return new BigdataVertex(uri, this);
+ }
+
+ return null;
+
+ } catch (RuntimeException e) {
+ throw e;
+ } catch (Exception e) {
+ throw new RuntimeException(e);
+ }
+
+ }
+
+
+ /**
+ * Iterate all vertices.
+ */
+ @Override
+ public Iterable<Vertex> getVertices() {
+
+ try {
+
+ final RepositoryResult<Statement> result =
+ cxn().getStatements(null, RDF.TYPE, VERTEX, false);
+
+ return new VertexIterable(result, true);
+
+ } catch (RuntimeException e) {
+ throw e;
+ } catch (Exception e) {
+ throw new RuntimeException(e);
+ }
+
+ }
+
+ /**
+ * Find vertices with the supplied property value.
+ */
+ @Override
+ public Iterable<Vertex> getVertices(final String prop, final Object val) {
+
+ final URI p = factory.toPropertyURI(prop);
+ final Literal o = factory.toLiteral(val);
+
+ try {
+
+ final RepositoryResult<Statement> result =
+ cxn().getStatements(null, p, o, false);
+
+ return new VertexIterable(result, true);
+
+ } catch (RuntimeException e) {
+ throw e;
+ } catch (Exception e) {
+ throw new RuntimeException(e);
+ }
+
+ }
+
+ /**
+ * Providing an override implementation for our GraphQuery to avoid the
+ * low-performance scan and filter paradigm. See {@link BigdataGraphQuery}.
+ */
+ @Override
+ public GraphQuery query() {
+// return new DefaultGraphQuery(this);
+ return new BigdataGraphQuery(this);
+ }
+
+ /**
+ * Remove an edge and its properties.
+ */
+ @Override
+ public void removeEdge(final Edge edge) {
+
+ try {
+
+ final URI uri = factory.toURI(edge);
+
+ if (!cxn().hasStatement(uri, RDF.TYPE, EDGE, false)) {
+ throw new IllegalStateException();
+ }
+
+ final URI wild = null;
+
+ // remove the edge statement
+ cxn().remove(wild, uri, wild);
+
+ // remove its properties
+ cxn().remove(uri, wild, wild);
+
+ } catch (RuntimeException e) {
+ throw e;
+ } catch (Exception e) {
+ throw new RuntimeException(e);
+ }
+
+ }
+
+ /**
+ * Remove a vertex and its edges and properties.
+ */
+ @Override
+ public void removeVertex(final Vertex vertex) {
+
+ try {
+
+ final URI uri = factory.toURI(vertex);
+
+ if (!cxn().hasStatement(uri, RDF.TYPE, VERTEX, false)) {
+ throw new IllegalStateException();
+ }
+
+ final URI wild = null;
+
+ // remove outgoing edges and properties
+ cxn().remove(uri, wild, wild);
+
+ // remove incoming edges
+ cxn().remove(wild, wild, uri);
+
+ } catch (RuntimeException e) {
+ throw e;
+ } catch (Exception e) {
+ throw new RuntimeException(e);
+ }
+
+ }
+
+ /**
+ * Translate a collection of Bigdata statements into an iteration of
+ * Blueprints vertices.
+ *
+ * @author mikepersonick
+ *
+ */
+ public class VertexIterable implements Iterable<Vertex>, Iterator<Vertex> {
+
+ private final CloseableIteration<Statement, ? extends OpenRDFException> stmts;
+
+ private final boolean subject;
+
+ private final List<Vertex> cache;
+
+ public VertexIterable(
+ final CloseableIteration<Statement, ? extends OpenRDFException> stmts,
+ final boolean subject) {
+ this.stmts = stmts;
+ this.subject = subject;
+ this.cache = new LinkedList<Vertex>();
+ }
+
+ @Override
+ public boolean hasNext() {
+ try {
+ return stmts.hasNext();
+ } catch (OpenRDFException e) {
+ throw new RuntimeException(e);
+ }
+ }
+
+ @Override
+ public Vertex next() {
+ try {
+ final Statement stmt = stmts.next();
+ final URI v = (URI)
+ (subject ? stmt.getSubject() : stmt.getObject());
+ if (!hasNext()) {
+ stmts.close();
+ }
+ final Vertex vertex = new BigdataVertex(v, BigdataGraph.this);
+ cache.add(vertex);
+ return vertex;
+ } catch (OpenRDFException e) {
+ throw new RuntimeException(e);
+ }
+ }
+
+ @Override
+ public void remove() {
+ throw new UnsupportedOperationException();
+ }
+
+ @Override
+ public Iterator<Vertex> iterator() {
+ return hasNext() ? this : cache.iterator();
+ }
+
+ }
+
+ /**
+ * Translate a collection of Bigdata statements into an iteration of
+ * Blueprints edges.
+ *
+ * @author mikepersonick
+ *
+ */
+ public class EdgeIterable implements Iterable<Edge>, Iterator<Edge> {
+
+ private final CloseableIteration<Statement, ? extends OpenRDFException> stmts;
+
+ private final List<Edge> cache;
+
+ public EdgeIterable(
+ final CloseableIteration<Statement, ? extends OpenRDFException> stmts) {
+ this.stmts = stmts;
+ this.cache = new LinkedList<Edge>();
+ }
+
+ @Override
+ public boolean hasNext() {
+ try {
+ return stmts.hasNext();
+ } catch (OpenRDFException e) {
+ throw new RuntimeException(e);
+ }
+ }
+
+ @Override
+ public Edge next() {
+ try {
+ final Statement stmt = stmts.next();
+ if (!hasNext()) {
+ stmts.close();
+ }
+ final Edge edge = new BigdataEdge(stmt, BigdataGraph.this);
+ cache.add(edge);
+ return edge;
+ } catch (OpenRDFException e) {
+ throw new RuntimeException(e);
+ }
+ }
+
+ @Override
+ public void remove() {
+ throw new UnsupportedOperationException();
+ }
+
+ @Override
+ public Iterator<Edge> iterator() {
+ return hasNext() ? this : cache.iterator();
+ }
+
+ }
+
+ /**
+ * Fuse two iterables together into one. Useful for combining IN and OUT
+ * edges for a vertex.
+ */
+ public final <T> Iterable<T> fuse(final Iterable<T>... args) {
+
+ return new FusedIterable<T>(args);
+ }
+
+ /**
+ * Fuse two iterables together into one. Useful for combining IN and OUT
+ * edges for a vertex.
+ *
+ * @author mikepersonick
+ */
+ public class FusedIterable<T> implements Iterable<T>, Iterator<T> {
+
+ private final Iterable<T>[] args;
+
+ private transient int i = 0;
+
+ private transient Iterator<T> curr;
+
+ public FusedIterable(final Iterable<T>... args) {
+ this.args = args;
+ this.curr = args[0].iterator();
+ }
+
+ @Override
+ public boolean hasNext() {
+ if (curr.hasNext()) {
+ return true;
+ }
+ while (!curr.hasNext() && i < (args.length-1)) {
+ curr = args[++i].iterator();
+ if (curr.hasNext()) {
+ return true;
+ }
+ }
+ return false;
+ }
+
+ @Override
+ public T next() {
+ return curr.next();
+ }
+
+ @Override
+ public void remove() {
+ throw new UnsupportedOperationException();
+ }
+
+ @Override
+ public Iterator<T> iterator() {
+ return this;
+ }
+
+ }
+
+ protected static final Features FEATURES = new Features();
+
+ @Override
+ public Features getFeatures() {
+
+ return FEATURES;
+
+ }
+
+ static {
+
+ FEATURES.supportsSerializableObjectProperty = false;
+ FEATURES.supportsBooleanProperty = true;
+ FEATURES.supportsDoubleProperty = true;
+ FEATURES.supportsFloatProperty = true;
+ FEATURES.supportsIntegerProperty = true;
+ FEATURES.supportsPrimitiveArrayProperty = false;
+ FEATURES.supportsUniformListProperty = false;
+ FEATURES.supportsMixedListProperty = false;
+ FEATURES.supportsLongProperty = true;
+ FEATURES.supportsMapProperty = false;
+ FEATURES.supportsStringProperty = true;
+ FEATURES.supportsDuplicateEdges = true;
+ FEATURES.supportsSelfLoops = true;
+ FEATURES.isPersistent = true;
+ FEATURES.isWrapper = false;
+ FEATURES.supportsVertexIteration = true;
+ FEATURES.supportsEdgeIteration = true;
+ FEATURES.supportsVertexIndex = false;
+ FEATURES.supportsEdgeIndex = false;
+ FEATURES.ignoresSuppliedIds = true;
+ FEATURES.supportsTransactions = false;
+ FEATURES.supportsIndices = true;
+ FEATURES.supportsKeyIndices = true;
+ FEATURES.supportsVertexKeyIndex = true;
+ FEATURES.supportsEdgeKeyIndex = true;
+ FEATURES.supportsEdgeRetrieval = true;
+ FEATURES.supportsVertexProperties = true;
+ FEATURES.supportsEdgeProperties = true;
+ FEATURES.supportsThreadedTransactions = false;
+
+ }
+
+}
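The `FusedIterable` added in the diff above chains several `Iterable`s into one pass. A minimal, self-contained sketch of that fusion pattern (plain Java, no bigdata dependencies; the class and method names here are illustrative, not part of the committed API):

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

/** Standalone sketch of the fusion pattern used by FusedIterable above. */
class Fused<T> implements Iterable<T>, Iterator<T> {

    private final Iterable<T>[] args;
    private int i = 0;
    private Iterator<T> curr;

    @SafeVarargs
    Fused(final Iterable<T>... args) {
        this.args = args;
        this.curr = args[0].iterator();
    }

    @Override
    public boolean hasNext() {
        // Advance to the next non-empty source once the current one is drained.
        while (!curr.hasNext() && i < args.length - 1) {
            curr = args[++i].iterator();
        }
        return curr.hasNext();
    }

    @Override
    public T next() {
        // Unlike the committed version, call hasNext() first so next() still
        // advances across sources even if the caller skipped hasNext().
        if (!hasNext())
            throw new NoSuchElementException();
        return curr.next();
    }

    @Override
    public void remove() {
        throw new UnsupportedOperationException();
    }

    @Override
    public Iterator<T> iterator() {
        return this;
    }

    public static void main(String[] unused) {
        final List<String> in = Arrays.asList("a", "b");
        final List<String> out = Arrays.asList("c");
        final StringBuilder sb = new StringBuilder();
        for (String s : new Fused<String>(in, out)) {
            sb.append(s);
        }
        System.out.println(sb); // abc
    }
}
```

Note the design difference: the committed `next()` delegates straight to `curr.next()`, so it relies on the caller invoking `hasNext()` to advance between sources; the sketch advances internally instead.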
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/java/com/bigdata/blueprints/BigdataGraphBulkLoad.java
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/java/com/bigdata/blueprints/BigdataGraphBulkLoad.java (rev 0)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/java/com/bigdata/blueprints/BigdataGraphBulkLoad.java 2014-05-21 09:56:34 UTC (rev 8390)
@@ -0,0 +1,298 @@
+/**
+Copyright (C) SYSTAP, LLC 2006-2014. All rights reserved.
+
+Contact:
+ SYSTAP, LLC
+ 4501 Tower Road
+ Greensboro, NC 27410
+ lic...@bi...
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; version 2 of the License.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+package com.bigdata.blueprints;
+
+import java.util.UUID;
+
+import org.openrdf.model.Literal;
+import org.openrdf.model.URI;
+import org.openrdf.model.impl.StatementImpl;
+import org.openrdf.model.vocabulary.RDF;
+import org.openrdf.model.vocabulary.RDFS;
+import org.openrdf.repository.RepositoryConnection;
+
+import com.bigdata.rdf.changesets.IChangeLog;
+import com.bigdata.rdf.changesets.IChangeRecord;
+import com.bigdata.rdf.sail.BigdataSailRepositoryConnection;
+import com.tinkerpop.blueprints.Edge;
+import com.tinkerpop.blueprints.Features;
+import com.tinkerpop.blueprints.GraphQuery;
+import com.tinkerpop.blueprints.TransactionalGraph;
+import com.tinkerpop.blueprints.Vertex;
+
+/**
+ * Simple bulk loader that will insert graph data without any consistency
+ * checking (won't check for duplicate vertex or edge identifiers). Currently
+ * does not overwrite old property values, but we may need to change this.
+ * <p>
+ * Implements {@link IChangeLog} so that we can report a mutation count.
+ *
+ * @author mikepersonick
+ *
+ */
+public class BigdataGraphBulkLoad extends BigdataGraph
+ implements TransactionalGraph, IChangeLog {
+
+ private final BigdataSailRepositoryConnection cxn;
+
+ public BigdataGraphBulkLoad(final BigdataSailRepositoryConnection cxn) {
+ this(cxn, BigdataRDFFactory.INSTANCE);
+ }
+
+ public BigdataGraphBulkLoad(final BigdataSailRepositoryConnection cxn,
+ final BlueprintsRDFFactory factory) {
+ super(factory);
+
+ this.cxn = cxn;
+ this.cxn.addChangeLog(this);
+ }
+
+ protected RepositoryConnection cxn() throws Exception {
+ return cxn;
+ }
+
+ @Override
+ public void commit() {
+ try {
+ cxn.commit();
+ } catch (Exception e) {
+ throw new RuntimeException(e);
+ }
+ }
+
+ @Override
+ public void rollback() {
+ try {
+ cxn.rollback();
+ cxn.close();
+ } catch (Exception e) {
+ throw new RuntimeException(e);
+ }
+ }
+
+ @Override
+ public void shutdown() {
+ try {
+ cxn.close();
+ } catch (Exception e) {
+ throw new RuntimeException(e);
+ }
+ }
+
+ @Override
+ @Deprecated
+ public void stopTransaction(Conclusion arg0) {
+ }
+
+
+ static {
+
+ FEATURES.supportsTransactions = true;
+
+ }
+
+
+ @Override
+ public Edge getEdge(Object arg0) {
+ throw new UnsupportedOperationException();
+ }
+
+ @Override
+ public Iterable<Edge> getEdges() {
+ throw new UnsupportedOperationException();
+ }
+
+ @Override
+ public Iterable<Edge> getEdges(String arg0, Object arg1) {
+ throw new UnsupportedOperationException();
+ }
+
+ @Override
+ public Vertex getVertex(Object arg0) {
+ throw new UnsupportedOperationException();
+ }
+
+ @Override
+ public Iterable<Vertex> getVertices() {
+ throw new UnsupportedOperationException();
+ }
+
+ @Override
+ public Iterable<Vertex> getVertices(String arg0, Object arg1) {
+ throw new UnsupportedOperationException();
+ }
+
+ @Override
+ public GraphQuery query() {
+ throw new UnsupportedOperationException();
+ }
+
+ @Override
+ public void removeEdge(Edge arg0) {
+ throw new UnsupportedOperationException();
+ }
+
+ @Override
+ public void removeVertex(Vertex arg0) {
+ throw new UnsupportedOperationException();
+ }
+
+ /**
+ * Set a property without removing the old value first.
+ */
+ @Override
+ public void setProperty(final URI s, final URI p, final Literal o) {
+
+ try {
+
+// cxn().remove(s, p, null);
+
+ cxn().add(s, p, o);
+
+ } catch (Exception e) {
+ throw new RuntimeException(e);
+ }
+
+ }
+
+ /**
+ * Add a vertex without consistency checking (does not check for a duplicate
+ * identifier).
+ */
+ @Override
+ public Vertex addVertex(final Object key) {
+
+ try {
+
+ final String vid = key != null ?
+ key.toString() : UUID.randomUUID().toString();
+
+ final URI uri = factory.toVertexURI(vid);
+
+// if (cxn().hasStatement(vertexURI, RDF.TYPE, VERTEX, false)) {
+// throw new IllegalArgumentException("vertex " + vid + " already exists");
+// }
+
+ cxn().add(uri, RDF.TYPE, VERTEX);
+
+ return new BigdataVertex(uri, this);
+
+ } catch (Exception ex) {
+ throw new RuntimeException(ex);
+ }
+
+ }
+
+ /**
+ * Add an edge without consistency checking (does not check for a duplicate
+ * identifier).
+ */
+ @Override
+ public Edge addEdge(final Object key, final Vertex from, final Vertex to,
+ final String label) {
+
+ if (label == null) {
+ throw new IllegalArgumentException();
+ }
+
+ final String eid = key != null ? key.toString() : UUID.randomUUID().toString();
+
+ final URI edgeURI = factory.toEdgeURI(eid);
+
+// if (key != null) {
+//
+// final Edge edge = getEdge(key);
+//
+// if (edge != null) {
+// if (!(edge.getVertex(Direction.OUT).equals(from) &&
+// (edge.getVertex(Direction.OUT).equals(to)))) {
+// throw new IllegalArgumentException("edge already exists: " + key);
+// }
+// }
+//
+// }
+
+ try {
+
+// if (cxn().hasStatement(edgeURI, RDF.TYPE, EDGE, false)) {
+// throw new IllegalArgumentException("edge " + eid + " already exists");
+// }
+
+ final URI fromURI = factory.toVertexURI(from.getId().toString());
+ final URI toURI = factory.toVertexURI(to.getId().toString());
+
+ cxn().add(fromURI, edgeURI, toURI);
+ cxn().add(edgeURI, RDF.TYPE, EDGE);
+ cxn().add(edgeURI, RDFS.LABEL, factory.toLiteral(label));
+
+ return new BigdataEdge(new StatementImpl(fromURI, edgeURI, toURI), this);
+
+ } catch (Exception ex) {
+ throw new RuntimeException(ex);
+ }
+
+ }
+
+ private transient long mutationCountTotal = 0;
+ private transient long mutationCountCurrentCommit = 0;
+ private transient long mutationCountLastCommit = 0;
+
+ @Override
+ public void changeEvent(final IChangeRecord record) {
+ mutationCountTotal++;
+ mutationCountCurrentCommit++;
+ }
+
+ @Override
+ public void transactionBegin() {
+ }
+
+ @Override
+ public void transactionPrepare() {
+ }
+
+ @Override
+ public void transactionCommited(long commitTime) {
+ mutationCountLastCommit = mutationCountCurrentCommit;
+ mutationCountCurrentCommit = 0;
+ }
+
+ @Override
+ public void transactionAborted() {
+ }
+
+ public long getMutationCountTotal() {
+ return mutationCountTotal;
+ }
+
+ public long getMutationCountCurrentCommit() {
+ return mutationCountCurrentCommit;
+ }
+
+ public long getMutationCountLastCommit() {
+ return mutationCountLastCommit;
+ }
+
+
+
+}
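The `IChangeLog` callbacks at the end of `BigdataGraphBulkLoad` above implement simple mutation-count bookkeeping across commits. A self-contained sketch of that counter logic (plain Java; the class and method names are illustrative stand-ins for the committed `changeEvent`/`transactionCommited` pair):

```java
/** Standalone sketch of the mutation-count bookkeeping shown above. */
class MutationCounter {

    private long total = 0;          // cf. mutationCountTotal
    private long currentCommit = 0;  // cf. mutationCountCurrentCommit
    private long lastCommit = 0;     // cf. mutationCountLastCommit

    /** Called once per change event (cf. changeEvent above). */
    void changeEvent() {
        total++;
        currentCommit++;
    }

    /** Called when a commit completes (cf. transactionCommited above). */
    void commit() {
        // Roll the in-flight count into the last-commit slot and reset it.
        lastCommit = currentCommit;
        currentCommit = 0;
    }

    long total() { return total; }
    long lastCommit() { return lastCommit; }
    long currentCommit() { return currentCommit; }

    public static void main(String[] unused) {
        final MutationCounter c = new MutationCounter();
        c.changeEvent();
        c.changeEvent();
        c.commit();
        c.changeEvent();
        // total=3, lastCommit=2, currentCommit=1
        System.out.println(c.total() + " " + c.lastCommit() + " " + c.currentCommit());
    }
}
```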
Added: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/java/com/bigdata/blueprints/BigdataGraphClient.java
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/java/com/bigdata/blueprints/BigdataGraphClient.java (rev 0)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/java/com/bigdata/blueprints/BigdataGraphClient.java 2014-05-21 09:56:34 UTC (rev 8390)
@@ -0,0 +1,160 @@
+/**
+Copyright (C) SYSTAP, LLC 2006-Infinity. All rights reserved.
+
+Contact:
+ SYSTAP, LLC
+ 4501 Tower Road
+ Greensboro, NC 27410
+ lic...@bi...
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; version 2 of the License.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+package com.bigdata.blueprints;
+
+import com.bigdata.rdf.sail.remote.BigdataSailRemoteRepository;
+import com.bigdata.rdf.sail.remote.BigdataSailRemoteRepositoryConnection;
+import com.bigdata.rdf.sail.webapp.client.RemoteRepository;
+import com.tinkerpop.blueprints.Features;
+
+/**
+ * A thin-client Blueprints implementation that wraps the client library used
+ * to interact with the NanoSparqlServer. It is functional and suitable for
+ * writing POCs, but it is not a high-performance implementation by any means
+ * (it currently supports neither caching nor batched update). It does provide
+ * a single "bulk upload" operation, wrapping a method on RemoteRepository that
+ * will POST a graphml file to the blueprints layer of the bigdata server.
+ *
+ * @see {@link BigdataSailRemoteRepository}
+ * @see {@link BigdataSailRemoteRepositoryConnection}
+ * @see {@link RemoteRepository}
+ *
+ * @author mikepersonick
+ *
+ */
+public class BigdataGraphClient extends BigdataGraph {
+
+ final BigdataSailRemoteRepository repo;
+
+ transient BigdataSailRemoteRepositoryConnection cxn;
+
+ public Bigda...
[truncated message content]
From: <dme...@us...> - 2014-05-21 09:12:41
|
Revision: 8389
http://sourceforge.net/p/bigdata/code/8389
Author: dmekonnen
Date: 2014-05-21 09:12:36 +0000 (Wed, 21 May 2014)
Log Message:
-----------
Synching with BIGDATA_RELEASE_1_3_1
Modified Paths:
--------------
branches/DEPLOYMENT_BRANCH_1_3_1/.classpath
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/BigdataStatics.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/Depends.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/counters/AbstractStatisticsCollector.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/counters/CounterSet.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/counters/linux/PIDStatCollector.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/counters/linux/SarCpuUtilizationCollector.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/counters/linux/VMStatCollector.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/counters/osx/IOStatCollector.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/counters/osx/VMStatCollector.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/counters/query/CounterSetBTreeSelector.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/counters/query/CounterSetSelector.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/counters/query/ICounterSelector.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/counters/query/URLQueryModel.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/counters/render/TextRenderer.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/counters/render/XHTMLRenderer.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/counters/render/XMLRenderer.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/search/DefaultAnalyzerFactory.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/test/com/bigdata/search/TestAll.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/test/com/bigdata/search/TestKeyBuilder.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/test/com/bigdata/search/TestPrefixSearch.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/test/com/bigdata/search/TestSearch.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/test/com/bigdata/search/TestSearchRestartSafe.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/java/com/bigdata/blueprints/BigdataEdge.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/java/com/bigdata/blueprints/BigdataElement.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-blueprints/src/java/com/bigdata/blueprints/BigdataVertex.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-ganglia/build.properties
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-ganglia/src/java/com/bigdata/ganglia/GangliaService.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-jini/src/java/com/bigdata/jini/start/config/JiniCoreServicesConfiguration.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-jini/src/java/com/bigdata/jini/start/process/JiniCoreServicesProcessHelper.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-A.config
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournal-C.config
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-jini/src/java/com/bigdata/journal/jini/ha/HAJournalServer.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-jini/src/test/com/bigdata/journal/jini/ha/AbstractHA3JournalServerTestCase.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-jini/src/test/com/bigdata/journal/jini/ha/TestAll_LBS.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-jini/src/test/com/bigdata/journal/jini/ha/log4j-template-A.properties
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/java/com/bigdata/rdf/properties/PropertiesFormat.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/ArbitraryLengthPathNode.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/StaticAnalysis.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/StaticAnalysis_CanJoin.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/AST2BOpUtility.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/service/ServiceRegistry.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/TestAll.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/TestUnions.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-rdf/src/test/com/bigdata/rdf/sparql/ast/eval/service/TestServiceRegistry.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/remote/BigdataSailRemoteRepository.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/remote/BigdataSailRemoteRepositoryConnection.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFContext.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/BigdataRDFServletContextListener.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/ConnegScore.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/ConnegUtil.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/CountersServlet.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/HALoadBalancerServlet.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/HAStatusServletUtil.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/NanoSparqlServer.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/RESTServlet.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/client/ConnectOptions.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/client/IPreparedQuery.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/client/RemoteRepository.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/client/RemoteRepositoryManager.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/lbs/AbstractLBSPolicy.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/lbs/HostScore.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/lbs/IHALoadBalancerPolicy.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/lbs/ServiceScore.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/lbs/policy/NOPLBSPolicy.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/lbs/policy/RoundRobinLBSPolicy.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/lbs/policy/ganglia/DefaultHostScoringRule.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/lbs/policy/ganglia/GangliaLBSPolicy.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/java/com/bigdata/rdf/sail/webapp/lbs/policy/ganglia/LoadOneHostScoringRule.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/samples/com/bigdata/samples/NSSEmbeddedExample.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/test/com/bigdata/rdf/sail/tck/BigdataFederationSparqlTest.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/test/com/bigdata/rdf/sail/tck/BigdataSparqlTest.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestAll.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestAll2.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestConneg.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestMultiTenancyAPI.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestNanoSparqlClient.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestNanoSparqlClient2.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestNanoSparqlServerWithProxyIndexManager2.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestProtocolAll.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-sails/src/test/com/bigdata/rdf/sail/webapp/TestService794.java
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/WEB-INF/web.xml
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/html/css/style.css
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/html/index.html
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/html/js/workbench.js
branches/DEPLOYMENT_BRANCH_1_3_1/bigdata-war/src/jetty.xml
branches/DEPLOYMENT_BRANCH_1_3_1/build.properties
branches/DEPLOYMENT_BRANCH_1_3_1/build.xml
branches/DEPLOYMENT_BRANCH_1_3_1/pom.xml
branches/DEPLOYMENT_BRANCH_1_3_1/src/resources/HAJournal/HAJournal.config
branches/DEPLOYMENT_BRANCH_1_3_1/src/resources/HAJournal/README
branches/DEPLOYMENT_BRANCH_1_3_1/src/resources/bin/startHAServices
branches/DEPLOYMENT_BRANCH_1_3_1/src/resources/etc/default/bigdataHA
Modified: branches/DEPLOYMENT_BRANCH_1_3_1/.classpath
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/.classpath 2014-05-20 20:03:55 UTC (rev 8388)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/.classpath 2014-05-21 09:12:36 UTC (rev 8389)
@@ -1,16 +1,19 @@
<?xml version="1.0" encoding="UTF-8"?>
<classpath>
+ <classpathentry kind="src" path="bigdata/src/java"/>
<classpathentry kind="src" path="bigdata-rdf/src/java"/>
+ <classpathentry kind="src" path="bigdata-sails/src/java"/>
+ <classpathentry kind="src" path="bigdata-blueprints/src/java"/>
+ <classpathentry kind="src" path="bigdata/src/test"/>
+ <classpathentry kind="src" path="bigdata-rdf/src/test"/>
+ <classpathentry kind="src" path="bigdata-sails/src/test"/>
+ <classpathentry kind="src" path="bigdata-blueprints/src/test"/>
+ <classpathentry kind="src" path="bigdata-war/src"/>
+ <classpathentry kind="src" path="bigdata/src/resources/logging"/>
<classpathentry kind="src" path="bigdata-rdf/src/samples"/>
<classpathentry kind="src" path="dsi-utils/src/java"/>
- <classpathentry kind="src" path="bigdata/src/resources/logging"/>
<classpathentry kind="src" path="bigdata-sails/src/samples"/>
<classpathentry kind="src" path="bigdata-jini/src/test"/>
- <classpathentry kind="src" path="bigdata-sails/src/java"/>
- <classpathentry kind="src" path="bigdata/src/java"/>
- <classpathentry kind="src" path="bigdata-rdf/src/test"/>
- <classpathentry kind="src" path="bigdata/src/test"/>
- <classpathentry kind="src" path="bigdata-sails/src/test"/>
<classpathentry kind="src" path="bigdata-jini/src/java"/>
<classpathentry kind="src" path="contrib/src/problems"/>
<classpathentry kind="src" path="bigdata/src/samples"/>
@@ -21,7 +24,6 @@
<classpathentry kind="src" path="junit-ext/src/java"/>
<classpathentry kind="src" path="lgpl-utils/src/java"/>
<classpathentry kind="src" path="lgpl-utils/src/test"/>
- <classpathentry kind="src" path="bigdata-war/src"/>
<classpathentry kind="src" path="bigdata-ganglia/src/java"/>
<classpathentry kind="src" path="bigdata-ganglia/src/test"/>
<classpathentry kind="src" path="bigdata-rdf/src/resources/service-providers"/>
@@ -74,7 +76,7 @@
<classpathentry exported="true" kind="lib" path="bigdata-sails/lib/httpcomponents/commons-fileupload-1.2.2.jar"/>
<classpathentry exported="true" kind="lib" path="bigdata-sails/lib/httpcomponents/commons-io-2.1.jar"/>
<classpathentry exported="true" kind="lib" path="bigdata/lib/apache/log4j-1.2.17.jar"/>
- <classpathentry exported="true" kind="lib" path="bigdata-rdf/lib/openrdf-sesame-2.6.10-onejar.jar"/>
+ <classpathentry exported="true" kind="lib" path="bigdata-rdf/lib/openrdf-sesame-2.6.10-onejar.jar" sourcepath="/Users/bryan/Documents/workspace/org.openrdf.sesame-2.6.10"/>
<classpathentry exported="true" kind="lib" path="bigdata-rdf/lib/sesame-rio-testsuite-2.6.10.jar"/>
<classpathentry exported="true" kind="lib" path="bigdata-sails/lib/sesame-sparql-testsuite-2.6.10.jar"/>
<classpathentry exported="true" kind="lib" path="bigdata-sails/lib/sesame-store-testsuite-2.6.10.jar"/>
@@ -86,11 +88,16 @@
<classpathentry exported="true" kind="lib" path="bigdata/lib/jetty/jetty-proxy-9.1.4.v20140401.jar" sourcepath="/Users/bryan/Downloads/org.eclipse.jetty.project-jetty-9.1.4.v20140401"/>
<classpathentry exported="true" kind="lib" path="bigdata/lib/jetty/jetty-rewrite-9.1.4.v20140401.jar"/>
<classpathentry exported="true" kind="lib" path="bigdata/lib/jetty/jetty-security-9.1.4.v20140401.jar"/>
- <classpathentry exported="true" kind="lib" path="bigdata/lib/jetty/jetty-server-9.1.4.v20140401.jar"/>
+ <classpathentry exported="true" kind="lib" path="bigdata/lib/jetty/jetty-server-9.1.4.v20140401.jar" sourcepath="/Users/bryan/Downloads/org.eclipse.jetty.project-jetty-9.1.4.v20140401"/>
<classpathentry exported="true" kind="lib" path="bigdata/lib/jetty/jetty-servlet-9.1.4.v20140401.jar"/>
<classpathentry exported="true" kind="lib" path="bigdata/lib/jetty/jetty-util-9.1.4.v20140401.jar"/>
- <classpathentry exported="true" kind="lib" path="bigdata/lib/jetty/jetty-webapp-9.1.4.v20140401.jar"/>
+ <classpathentry exported="true" kind="lib" path="bigdata/lib/jetty/jetty-webapp-9.1.4.v20140401.jar" sourcepath="/Users/bryan/Downloads/org.eclipse.jetty.project-jetty-9.1.4.v20140401"/>
<classpathentry exported="true" kind="lib" path="bigdata/lib/jetty/jetty-xml-9.1.4.v20140401.jar"/>
<classpathentry exported="true" kind="lib" path="bigdata-sails/lib/jackson-core-2.2.3.jar"/>
+ <classpathentry kind="lib" path="bigdata-blueprints/lib/jettison-1.3.3.jar"/>
+ <classpathentry kind="lib" path="bigdata-blueprints/lib/blueprints-core-2.5.0.jar"/>
+ <classpathentry kind="lib" path="bigdata-blueprints/lib/blueprints-test-2.5.0.jar"/>
+ <classpathentry kind="lib" path="bigdata-blueprints/lib/rexster-core-2.5.0.jar"/>
+ <classpathentry kind="lib" path="bigdata-blueprints/lib/commons-configuration-1.10.jar"/>
<classpathentry kind="output" path="bin"/>
</classpath>
Modified: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/BigdataStatics.java
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/BigdataStatics.java 2014-05-20 20:03:55 UTC (rev 8388)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/BigdataStatics.java 2014-05-21 09:12:36 UTC (rev 8389)
@@ -27,9 +27,6 @@
package com.bigdata;
-import com.bigdata.counters.AbstractStatisticsCollector;
-import com.bigdata.jini.start.process.ProcessHelper;
-
/**
* A class for those few statics that it makes sense to reference from other
* places.
@@ -49,29 +46,31 @@
/**
* The name of an environment variable whose value will be used as the
+ * canonical host name for the host running this JVM. This information is
- * used by the {@link AbstractStatisticsCollector}, which is responsible for
- * obtaining and reporting the canonical hostname for the {@link Banner} and
- * other purposes.
+ * used by the {@link com.bigdata.counters.AbstractStatisticsCollector},
+ * which is responsible for obtaining and reporting the canonical hostname
+ * for the {@link Banner} and other purposes.
*
- * @see AbstractStatisticsCollector
- * @see Banner
+ * @see com.bigdata.counters.AbstractStatisticsCollector
+ * @see com.bigdata.Banner
+ * @see com.bigdata.ganglia.GangliaService#HOSTNAME
* @see <a href="http://trac.bigdata.com/ticket/886" >Provide workaround for
* bad reverse DNS setups</a>
*/
public static final String HOSTNAME = "com.bigdata.hostname";
-
+
/**
* The #of lines of output from a child process which will be echoed onto
* {@link System#out} when that child process is executed. This makes it
* easy to track down why a child process dies during service start. If you
* want to see all output from the child process, then you should set the
- * log level for the {@link ProcessHelper} class to INFO.
+ * log level for the {@link com.bigdata.jini.start.process.ProcessHelper}
+ * class to INFO.
* <p>
- * Note: This needs to be more than the length of the {@link Banner} output
- * in order for anything related to the process behavior to be echoed on
- * {@link System#out}.
+ * Note: This needs to be more than the length of the
+ * {@link com.bigdata.Banner} output in order for anything related to the
+ * process behavior to be echoed on {@link System#out}.
*
- * @see ProcessHelper
+ * @see com.bigdata.jini.start.process.ProcessHelper
*/
public static int echoProcessStartupLineCount = 30;//Integer.MAX_VALUE;//100
Modified: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/Depends.java
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/Depends.java 2014-05-20 20:03:55 UTC (rev 8388)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/Depends.java 2014-05-21 09:12:36 UTC (rev 8389)
@@ -277,6 +277,10 @@
"https://github.com/tinkerpop/blueprints",
"https://github.com/tinkerpop/blueprints/blob/master/LICENSE.txt");
+ private final static Dep rexsterCore = new Dep("rexster-core",
+ "https://github.com/tinkerpop/rexster",
+ "https://github.com/tinkerpop/rexster/blob/master/LICENSE.txt");
+
static private final Dep[] depends;
static {
depends = new Dep[] { //
@@ -306,6 +310,7 @@
servletApi,//
jacksonCore,//
blueprintsCore,//
+ rexsterCore,//
bigdataGanglia,//
// scale-out
jini,//
Modified: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/counters/AbstractStatisticsCollector.java
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/counters/AbstractStatisticsCollector.java 2014-05-20 20:03:55 UTC (rev 8388)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/counters/AbstractStatisticsCollector.java 2014-05-21 09:12:36 UTC (rev 8389)
@@ -137,6 +137,7 @@
* The interval in seconds at which the counter values are read from the
* host platform.
*/
+ @Override
public int getInterval() {
return interval;
@@ -225,8 +226,15 @@
* <p>
* Note: Subclasses MUST extend this method to initialize their own
* counters.
+ *
+ * TODO Why does this use the older <code>synchronized</code> pattern with a
+ * shared {@link #countersRoot} object rather than returning a new object
+ * per request? Check assumptions in the scale-out and local journal code
+ * bases for this.
*/
- synchronized public CounterSet getCounters() {
+ @Override
+ synchronized
+ public CounterSet getCounters() {
if (countersRoot == null) {
@@ -319,6 +327,7 @@
serviceRoot.addCounter(IProcessCounters.Memory_runtimeFreeMemory,
new Instrument<Long>() {
+ @Override
public void sample() {
setValue(Runtime.getRuntime().freeMemory());
}
@@ -326,6 +335,7 @@
serviceRoot.addCounter(IProcessCounters.Memory_runtimeTotalMemory,
new Instrument<Long>() {
+ @Override
public void sample() {
setValue(Runtime.getRuntime().totalMemory());
}
@@ -599,6 +609,7 @@
* Start collecting host performance data -- must be extended by the
* concrete subclass.
*/
+ @Override
public void start() {
if (log.isInfoEnabled())
@@ -612,6 +623,7 @@
* Stop collecting host performance data -- must be extended by the concrete
* subclass.
*/
+ @Override
public void stop() {
if (log.isInfoEnabled())
@@ -634,6 +646,7 @@
final Thread t = new Thread() {
+ @Override
public void run() {
AbstractStatisticsCollector.this.stop();
Modified: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/counters/CounterSet.java
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/counters/CounterSet.java 2014-05-20 20:03:55 UTC (rev 8388)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/counters/CounterSet.java 2014-05-21 09:12:36 UTC (rev 8389)
@@ -44,6 +44,8 @@
import org.apache.log4j.Logger;
import org.xml.sax.SAXException;
+import com.bigdata.util.StackInfoReport;
+
import cutthecrap.utils.striterators.Expander;
import cutthecrap.utils.striterators.Filter;
import cutthecrap.utils.striterators.IStriterator;
@@ -87,7 +89,7 @@
*/
public class CounterSet extends AbstractCounterSet implements ICounterSet {
- static protected final Logger log = Logger.getLogger(CounterSet.class);
+ static private final Logger log = Logger.getLogger(CounterSet.class);
// private String pathx;
private final Map<String,ICounterNode> children = new ConcurrentHashMap<String,ICounterNode>();
@@ -107,7 +109,7 @@
* @param name
* The name of the child.
*/
- private CounterSet(String name,CounterSet parent) {
+ private CounterSet(final String name, final CounterSet parent) {
super(name,parent);
@@ -159,6 +161,9 @@
//
// }
+ /**
+ * Return <code>true</code> iff there are no children.
+ */
public boolean isLeaf() {
return children.isEmpty();
@@ -216,7 +221,6 @@
}
- @SuppressWarnings("unchecked")
private void attach2(final ICounterNode src, final boolean replace) {
if (src == null)
@@ -286,7 +290,7 @@
} else {
- ((Counter)src).parent = this;
+ ((Counter<?>)src).parent = this;
}
@@ -311,7 +315,8 @@
* @return The node -or- <code>null</code> if there is no node with that
* path.
*/
- synchronized public ICounterNode detach(String path) {
+ @SuppressWarnings({ "rawtypes", "unchecked" })
+ synchronized public ICounterNode detach(final String path) {
final ICounterNode node = getPath(path);
@@ -347,7 +352,7 @@
* @todo optimize for patterns that are anchored by filtering the child
* {@link ICounterSet}.
*/
- @SuppressWarnings("unchecked")
+ @SuppressWarnings({ "unchecked", "rawtypes" })
public Iterator<ICounter> counterIterator(final Pattern filter) {
final IStriterator src = new Striterator(directChildIterator(
@@ -391,7 +396,7 @@
*
* @return
*/
- @SuppressWarnings("unchecked")
+ @SuppressWarnings({ "unchecked", "rawtypes" })
public Iterator<ICounterNode> getNodes(final Pattern filter) {
IStriterator src = ((IStriterator) postOrderIterator())
@@ -414,7 +419,8 @@
}
- @SuppressWarnings("unchecked")
+ @Override
+ @SuppressWarnings({ "unchecked", "rawtypes" })
public Iterator<ICounter> getCounters(final Pattern filter) {
IStriterator src = ((IStriterator) postOrderIterator())
@@ -450,8 +456,9 @@
* When <code>null</code> all directly attached children
* (counters and counter sets) are visited.
*/
- public Iterator directChildIterator(boolean sorted,
- Class<? extends ICounterNode> type) {
+ @SuppressWarnings("rawtypes")
+ public Iterator directChildIterator(final boolean sorted,
+ final Class<? extends ICounterNode> type) {
/*
* Note: In order to avoid concurrent modification problems under
@@ -514,7 +521,7 @@
* child with a post-order traversal of its children and finally visits this
* node itself.
*/
- @SuppressWarnings("unchecked")
+ @SuppressWarnings({ "rawtypes", "unchecked" })
public Iterator postOrderIterator() {
/*
@@ -531,6 +538,7 @@
* child with a pre-order traversal of its children and finally visits this
* node itself.
*/
+ @SuppressWarnings({ "rawtypes", "unchecked" })
public Iterator preOrderIterator() {
/*
@@ -562,7 +570,9 @@
/*
* Expand each child in turn.
*/
- protected Iterator expand(Object childObj) {
+ @Override
+ @SuppressWarnings("rawtypes")
+ protected Iterator expand(final Object childObj) {
/*
* A child of this node.
@@ -603,7 +613,9 @@
/*
* Expand each child in turn.
*/
- protected Iterator expand(Object childObj) {
+ @Override
+ @SuppressWarnings("rawtypes")
+ protected Iterator expand(final Object childObj) {
/*
* A child of this node.
@@ -624,7 +636,8 @@
}
- public ICounterNode getChild(String name) {
+ @Override
+ public ICounterNode getChild(final String name) {
if (name == null)
throw new IllegalArgumentException();
@@ -642,6 +655,7 @@
*
* @return The {@link CounterSet} described by the path.
*/
+ @Override
synchronized public CounterSet makePath(String path) {
if (path == null) {
@@ -740,6 +754,7 @@
* The object that is used to take the measurements from which
* the counter's value will be determined.
*/
+ @SuppressWarnings("rawtypes")
synchronized public ICounter addCounter(final String path,
final IInstrument instrument) {
@@ -767,7 +782,7 @@
}
- @SuppressWarnings("unchecked")
+ @SuppressWarnings({ "unchecked", "rawtypes" })
private ICounter addCounter2(final String name, final IInstrument instrument) {
if (name == null)
@@ -785,7 +800,7 @@
if(counter instanceof ICounter ) {
// counter exists for that path.
- log.error("Exists: path=" + getPath() + ", name=" + name);
+ log.error(new StackInfoReport("Exists: path=" + getPath() + ", name=" + name));
// return existing counter for path @todo vs replace.
return (ICounter)counter;
@@ -831,12 +846,14 @@
*
* @throws IOException
*/
+ @Override
public void asXML(Writer w, Pattern filter) throws IOException {
XMLUtility.INSTANCE.writeXML(this, w, filter);
}
+ @Override
public void readXML(final InputStream is,
final IInstrumentFactory instrumentFactory, final Pattern filter)
throws IOException, ParserConfigurationException, SAXException {
Modified: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/counters/linux/PIDStatCollector.java
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/counters/linux/PIDStatCollector.java 2014-05-20 20:03:55 UTC (rev 8388)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/counters/linux/PIDStatCollector.java 2014-05-21 09:12:36 UTC (rev 8389)
@@ -28,13 +28,12 @@
package com.bigdata.counters.linux;
-import java.io.File;
import java.io.IOException;
-import java.util.HashMap;
-import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.atomic.AtomicLong;
import com.bigdata.counters.AbstractProcessCollector;
import com.bigdata.counters.AbstractProcessReader;
@@ -61,7 +60,6 @@
* repeat forever if interval was specified.
*
* @author <a href="mailto:tho...@us...">Bryan Thompson</a>
- * @version $Id$
*/
public class PIDStatCollector extends AbstractProcessCollector implements
ICounterHierarchy, IProcessCounters {
@@ -92,7 +90,6 @@
* hierarchy.
*
* @author <a href="mailto:tho...@us...">Bryan Thompson</a>
- * @version $Id$
*/
abstract class AbstractInst<T> implements IInstrument<T> {
@@ -104,17 +101,19 @@
}
- protected AbstractInst(String path) {
+ protected AbstractInst(final String path) {
+
+ if (path == null)
+ throw new IllegalArgumentException();
- assert path != null;
-
this.path = path;
}
+ @Override
final public long lastModified() {
- return lastModified;
+ return lastModified.get();
}
@@ -122,7 +121,8 @@
* @throws UnsupportedOperationException
* always.
*/
- final public void setValue(T value, long timestamp) {
+ @Override
+ final public void setValue(final T value, final long timestamp) {
throw new UnsupportedOperationException();
@@ -135,13 +135,12 @@
* hierarchy.
*
* @author <a href="mailto:tho...@us...">Bryan Thompson</a>
- * @version $Id$
*/
class IL extends AbstractInst<Long> {
protected final long scale;
- public IL(String path, long scale) {
+ public IL(final String path, final long scale) {
super( path );
@@ -149,6 +148,7 @@
}
+ @Override
public Long getValue() {
final Long value = (Long) vals.get(path);
@@ -170,13 +170,12 @@
* hierarchy.
*
* @author <a href="mailto:tho...@us...">Bryan Thompson</a>
- * @version $Id$
*/
class ID extends AbstractInst<Double> {
protected final double scale;
- public ID(String path, double scale) {
+ public ID(final String path, final double scale) {
super(path);
@@ -184,6 +183,7 @@
}
+ @Override
public Double getValue() {
final Double value = (Double) vals.get(path);
@@ -205,7 +205,7 @@
* as the last modified time for counters based on that process and
* defaulted to the time that we begin to collect performance data.
*/
- private long lastModified = System.currentTimeMillis();
+ private final AtomicLong lastModified = new AtomicLong(System.currentTimeMillis());
/**
* Map containing the current values for the configured counters. The keys
@@ -217,7 +217,7 @@
* declared within {@link #getCounters()} and not whatever path the counters
* are eventually placed under within a larger hierarchy.
*/
- private Map<String,Object> vals = new HashMap<String, Object>();
+ private final Map<String,Object> vals = new ConcurrentHashMap<String, Object>();
/**
* @param pid
@@ -229,7 +229,8 @@
*
* @todo kernelVersion could be static.
*/
- public PIDStatCollector(int pid, int interval, KernelVersion kernelVersion) {
+ public PIDStatCollector(final int pid, final int interval,
+ final KernelVersion kernelVersion) {
super(interval);
@@ -269,79 +270,69 @@
command.add("-r"); // memory report
// command.add("-w"); // context switching report (not implemented in our code).
-
- command.add(""+getInterval());
-
+
+ command.add("" + getInterval());
+
return command;
}
- /**
- * Declare the counters that we will collect using <code>pidstat</code>.
- * These counters are NOT placed within the counter hierarchy but are
- * declared using the bare path for the counter. E.g., as
- * {@link IProcessCounters#Memory_virtualSize}.
- */
- /*synchronized*/ public CounterSet getCounters() {
+ @Override
+ public CounterSet getCounters() {
-// if(root == null) {
-
- final CounterSet root = new CounterSet();
-
- inst = new LinkedList<AbstractInst<?>>();
-
- /*
- * Note: Counters are all declared as Double to facilitate
- * aggregation and scaling.
- *
- * Note: pidstat reports percentages as [0:100] so we normalize them
- * to [0:1] using a scaling factor.
- */
+ final List<AbstractInst<?>> inst = new LinkedList<AbstractInst<?>>();
- inst.add(new ID(IProcessCounters.CPU_PercentUserTime,.01d));
- inst.add(new ID(IProcessCounters.CPU_PercentSystemTime,.01d));
- inst.add(new ID(IProcessCounters.CPU_PercentProcessorTime,.01d));
-
- inst.add(new ID(IProcessCounters.Memory_minorFaultsPerSec,1d));
- inst.add(new ID(IProcessCounters.Memory_majorFaultsPerSec,1d));
- inst.add(new IL(IProcessCounters.Memory_virtualSize,Bytes.kilobyte));
- inst.add(new IL(IProcessCounters.Memory_residentSetSize,Bytes.kilobyte));
- inst.add(new ID(IProcessCounters.Memory_percentMemorySize,.01d));
+ /*
+ * Note: Counters are all declared as Double to facilitate aggregation
+ * and scaling.
+ *
+ * Note: pidstat reports percentages as [0:100] so we normalize them to
+ * [0:1] using a scaling factor.
+ */
- /*
- * Note: pidstat reports in kb/sec so we normalize to bytes/second
- * using a scaling factor.
- */
- inst.add(new ID(IProcessCounters.PhysicalDisk_BytesReadPerSec, Bytes.kilobyte32));
- inst.add(new ID(IProcessCounters.PhysicalDisk_BytesWrittenPerSec, Bytes.kilobyte32));
+ inst.add(new ID(IProcessCounters.CPU_PercentUserTime, .01d));
+ inst.add(new ID(IProcessCounters.CPU_PercentSystemTime, .01d));
+ inst.add(new ID(IProcessCounters.CPU_PercentProcessorTime, .01d));
-// }
+ inst.add(new ID(IProcessCounters.Memory_minorFaultsPerSec, 1d));
+ inst.add(new ID(IProcessCounters.Memory_majorFaultsPerSec, 1d));
+ inst.add(new IL(IProcessCounters.Memory_virtualSize, Bytes.kilobyte));
+ inst.add(new IL(IProcessCounters.Memory_residentSetSize, Bytes.kilobyte));
+ inst.add(new ID(IProcessCounters.Memory_percentMemorySize, .01d));
+
+ /*
+ * Note: pidstat reports in kb/sec so we normalize to bytes/second using
+ * a scaling factor.
+ */
+ inst.add(new ID(IProcessCounters.PhysicalDisk_BytesReadPerSec,
+ Bytes.kilobyte32));
+ inst.add(new ID(IProcessCounters.PhysicalDisk_BytesWrittenPerSec,
+ Bytes.kilobyte32));
+
+ final CounterSet root = new CounterSet();
- for(Iterator<AbstractInst<?>> itr = inst.iterator(); itr.hasNext(); ) {
-
- final AbstractInst<?> i = itr.next();
-
+ for (AbstractInst<?> i : inst) {
+
root.addCounter(i.getPath(), i);
-
+
}
-
+
return root;
-
+
}
- private List<AbstractInst<?>> inst = null;
-// private CounterSet root = null;
-
+
/**
- * Extended to force <code>pidstat</code> to use a consistent
- * timestamp format regardless of locale by setting
- * <code>S_TIME_FORMAT="ISO"</code> in the environment.
+ * Extended to force <code>pidstat</code> to use a consistent timestamp
+ * format regardless of locale by setting <code>S_TIME_FORMAT="ISO"</code>
+ * in the environment.
*/
- protected void setEnvironment(Map<String, String> env) {
+ @Override
+ protected void setEnvironment(final Map<String, String> env) {
super.setEnvironment(env);
-
+
env.put("S_TIME_FORMAT", "ISO");
-
+
}
@Override
@@ -374,10 +365,10 @@
* </pre>
*
* @author <a href="mailto:tho...@us...">Bryan Thompson</a>
- * @version $Id$
*/
protected class PIDStatReader extends ProcessReaderHelper {
+ @Override
protected ActiveProcess getActiveProcess() {
if (activeProcess == null)
@@ -410,6 +401,7 @@
* is possible that this will not work when the host is
* using an English locale.
*/
+ @Override
protected void readProcess() throws IOException, InterruptedException {
if(log.isInfoEnabled())
@@ -478,7 +470,7 @@
* time of the start of the current day, which is what we would have
* to do.
*/
- lastModified = System.currentTimeMillis();
+ lastModified.set(System.currentTimeMillis());
if(header.contains("%CPU")) {
Modified: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/counters/linux/SarCpuUtilizationCollector.java
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/counters/linux/SarCpuUtilizationCollector.java 2014-05-20 20:03:55 UTC (rev 8388)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/counters/linux/SarCpuUtilizationCollector.java 2014-05-21 09:12:36 UTC (rev 8389)
@@ -28,12 +28,11 @@
package com.bigdata.counters.linux;
-import java.io.File;
-import java.util.HashMap;
-import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.atomic.AtomicLong;
import com.bigdata.counters.AbstractProcessCollector;
import com.bigdata.counters.AbstractProcessReader;
@@ -51,31 +50,16 @@
* <code>sar -u</code>.
*
* @author <a href="mailto:tho...@us...">Bryan Thompson</a>
- * @version $Id$
*/
public class SarCpuUtilizationCollector extends AbstractProcessCollector
implements ICounterHierarchy, IRequiredHostCounters, IHostCounters {
-// static protected final Logger log = Logger
-// .getLogger(SarCpuUtilizationCollector.class);
-//
-// /**
-// * True iff the {@link #log} level is DEBUG or less.
-// */
-// final protected static boolean DEBUG = log.isDebugEnabled();
-//
-// /**
-// * True iff the {@link #log} level is log.isInfoEnabled() or less.
-// */
-// final protected static boolean log.isInfoEnabled() = log.isInfoEnabled();
-
/**
* Inner class integrating the current values with the {@link ICounterSet}
* hierarchy.
*
* @author <a href="mailto:tho...@us...">Bryan
* Thompson</a>
- * @version $Id$
*/
abstract class I<T> implements IInstrument<T> {
@@ -87,17 +71,19 @@
}
- public I(String path) {
+ public I(final String path) {
- assert path != null;
+ if (path == null)
+ throw new IllegalArgumentException();
this.path = path;
}
+ @Override
public long lastModified() {
- return lastModified;
+ return lastModified.get();
}
@@ -105,7 +91,8 @@
* @throws UnsupportedOperationException
* always.
*/
- public void setValue(T value, long timestamp) {
+ @Override
+ public void setValue(final T value, final long timestamp) {
throw new UnsupportedOperationException();
@@ -117,13 +104,12 @@
* Double precision counter with scaling factor.
*
* @author <a href="mailto:tho...@us...">Bryan Thompson</a>
- * @version $Id$
*/
class DI extends I<Double> {
protected final double scale;
- DI(String path, double scale) {
+ DI(final String path, final double scale) {
super( path );
@@ -131,7 +117,7 @@
}
-
+ @Override
public Double getValue() {
final Double value = (Double) vals.get(path);
@@ -153,12 +139,12 @@
* keys are paths into the {@link CounterSet}. The values are the data
* most recently read from <code>sar</code>.
*/
- private Map<String,Object> vals = new HashMap<String, Object>();
+ private final Map<String,Object> vals = new ConcurrentHashMap<String, Object>();
/**
* The timestamp associated with the most recently collected values.
*/
- private long lastModified = System.currentTimeMillis();
+ private final AtomicLong lastModified = new AtomicLong(System.currentTimeMillis());
/**
*
@@ -173,6 +159,7 @@
}
+ @Override
public List<String> getCommand() {
final List<String> command = new LinkedList<String>();
@@ -192,53 +179,44 @@
}
- /**
- * Declares the counters that we will collect using <code>sar</code>.
- */
- /*synchronized*/ public CounterSet getCounters() {
+ @Override
+ public CounterSet getCounters() {
-// if(root == null) {
-
- final CounterSet root = new CounterSet();
-
- inst = new LinkedList<I>();
-
- /*
- * Note: Counters are all declared as Double to facilitate
- * aggregation.
- *
- * Note: sar reports percentages in [0:100] so we convert them to
- * [0:1] using a scaling factor.
- */
+ final CounterSet root = new CounterSet();
- inst.add(new DI(IRequiredHostCounters.CPU_PercentProcessorTime,.01d));
-
- inst.add(new DI(IHostCounters.CPU_PercentUserTime,.01d));
- inst.add(new DI(IHostCounters.CPU_PercentSystemTime,.01d));
- inst.add(new DI(IHostCounters.CPU_PercentIOWait,.01d));
-
- for(Iterator<I> itr = inst.iterator(); itr.hasNext(); ) {
-
- final I i = itr.next();
-
- root.addCounter(i.getPath(), i);
-
- }
-
-// }
+ @SuppressWarnings("rawtypes")
+ final List<I> inst = new LinkedList<I>();
+
+ /*
+ * Note: Counters are all declared as Double to facilitate aggregation.
+ *
+ * Note: sar reports percentages in [0:100] so we convert them to [0:1]
+ * using a scaling factor.
+ */
+
+ inst.add(new DI(IRequiredHostCounters.CPU_PercentProcessorTime, .01d));
+
+ inst.add(new DI(IHostCounters.CPU_PercentUserTime, .01d));
+ inst.add(new DI(IHostCounters.CPU_PercentSystemTime, .01d));
+ inst.add(new DI(IHostCounters.CPU_PercentIOWait, .01d));
+
+ for (@SuppressWarnings("rawtypes") I i : inst) {
+
+ root.addCounter(i.getPath(), i);
+
+ }
return root;
}
- private List<I> inst = null;
-// private CounterSet root = null;
/**
* Extended to force <code>sar</code> to use a consistent timestamp
* format regardless of locale by setting
* <code>S_TIME_FORMAT="ISO"</code> in the environment.
*/
- protected void setEnvironment(Map<String, String> env) {
+ @Override
+ protected void setEnvironment(final Map<String, String> env) {
super.setEnvironment(env);
@@ -246,6 +224,7 @@
}
+ @Override
public AbstractProcessReader getProcessReader() {
return new SarReader();
@@ -269,10 +248,10 @@
*
* @author <a href="mailto:tho...@us...">Bryan
* Thompson</a>
- * @version $Id$
*/
private class SarReader extends ProcessReaderHelper {
+ @Override
protected ActiveProcess getActiveProcess() {
if (activeProcess == null)
@@ -376,7 +355,7 @@
* adjusting for the UTC time of the start of the current day,
* which is what we would have to do.
*/
- lastModified = System.currentTimeMillis();
+ lastModified.set(System.currentTimeMillis());
// final String user = data.substring(20-1, 30-1);
//// final String nice = data.substring(30-1, 40-1);
Modified: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/counters/linux/VMStatCollector.java
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/counters/linux/VMStatCollector.java 2014-05-20 20:03:55 UTC (rev 8388)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/counters/linux/VMStatCollector.java 2014-05-21 09:12:36 UTC (rev 8389)
@@ -28,11 +28,11 @@
package com.bigdata.counters.linux;
-import java.util.HashMap;
-import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.atomic.AtomicLong;
import java.util.regex.Pattern;
import com.bigdata.counters.AbstractProcessCollector;
@@ -50,7 +50,6 @@
* Collects some counters using <code>vmstat</code>.
*
* @author <a href="mailto:tho...@us...">Bryan Thompson</a>
- * @version $Id$
*/
public class VMStatCollector extends AbstractProcessCollector implements
ICounterHierarchy, IRequiredHostCounters, IHostCounters{
@@ -61,7 +60,6 @@
*
* @author <a href="mailto:tho...@us...">Bryan
* Thompson</a>
- * @version $Id$
*/
abstract class I<T> implements IInstrument<T> {
@@ -73,17 +71,19 @@
}
- public I(String path) {
+ public I(final String path) {
- assert path != null;
+ if (path == null)
+ throw new IllegalArgumentException();
this.path = path;
}
+ @Override
public long lastModified() {
- return lastModified;
+ return lastModified.get();
}
@@ -91,7 +91,8 @@
* @throws UnsupportedOperationException
* always.
*/
- public void setValue(T value, long timestamp) {
+ @Override
+ public void setValue(final T value, final long timestamp) {
throw new UnsupportedOperationException();
@@ -108,15 +109,15 @@
protected final double scale;
- DI(String path, double scale) {
+ DI(final String path, final double scale) {
+
+ super(path);
- super( path );
-
this.scale = scale;
}
-
+ @Override
public Double getValue() {
final Double value = (Double) vals.get(path);
@@ -138,24 +139,26 @@
* are paths into the {@link CounterSet}. The values are the data most
* recently read from <code>vmstat</code>.
*/
- final private Map<String, Object> vals = new HashMap<String, Object>();
+ final private Map<String, Object> vals = new ConcurrentHashMap<String, Object>();
/**
* The timestamp associated with the most recently collected values.
*/
- private long lastModified = System.currentTimeMillis();
+ private final AtomicLong lastModified = new AtomicLong(
+ System.currentTimeMillis());
/**
* <code>true</code> iff you want collect the user time, system time,
* and IO WAIT time using vmstat (as opposed to sar).
*/
- protected final boolean cpuStats;
+ private final boolean cpuStats;
/**
* The {@link Pattern} used to split apart the rows read from
* <code>vmstat</code>.
*/
- final protected static Pattern pattern = Pattern.compile("\\s+");
+ // Note: Exposed to the test suite.
+ final static Pattern pattern = Pattern.compile("\\s+");
/**
*
@@ -173,6 +176,7 @@
}
+ @Override
public List<String> getCommand() {
final List<String> command = new LinkedList<String>();
@@ -192,12 +196,12 @@
/**
* Declares the counters that we will collect
*/
+ @Override
public CounterSet getCounters() {
- final CounterSet root = new CounterSet();
+ @SuppressWarnings("rawtypes")
+ final List<I> inst = new LinkedList<I>();
- inst = new LinkedList<I>();
-
/*
* Note: Counters are all declared as Double to facilitate aggregation.
*/
@@ -257,20 +261,19 @@
}
- for (Iterator<I> itr = inst.iterator(); itr.hasNext();) {
+ final CounterSet root = new CounterSet();
- final I i = itr.next();
+ for (@SuppressWarnings("rawtypes") I i : inst) {
- root.addCounter(i.getPath(), i);
+ root.addCounter(i.getPath(), i);
- }
+ }
return root;
}
- private List<I> inst = null;
-
+ @Override
public AbstractProcessReader getProcessReader() {
return new VMStatReader();
@@ -296,6 +299,7 @@
*/
private class VMStatReader extends ProcessReaderHelper {
+ @Override
protected ActiveProcess getActiveProcess() {
if (activeProcess == null)
@@ -317,7 +321,7 @@
if(log.isInfoEnabled())
log.info("begin");
- for(int i=0; i<10 && !getActiveProcess().isAlive(); i++) {
+ for (int i = 0; i < 10 && !getActiveProcess().isAlive(); i++) {
if(log.isInfoEnabled())
log.info("waiting for the readerFuture to be set.");
@@ -362,7 +366,7 @@
try {
// timestamp
- lastModified = System.currentTimeMillis();
+ lastModified.set(System.currentTimeMillis());
final String[] fields = pattern.split(data.trim(), 0/* limit */);
Modified: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/counters/osx/IOStatCollector.java
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/counters/osx/IOStatCollector.java 2014-05-20 20:03:55 UTC (rev 8388)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/counters/osx/IOStatCollector.java 2014-05-21 09:12:36 UTC (rev 8389)
@@ -28,11 +28,11 @@
package com.bigdata.counters.osx;
-import java.util.HashMap;
-import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.atomic.AtomicLong;
import java.util.regex.Pattern;
import com.bigdata.counters.AbstractProcessCollector;
@@ -48,14 +48,13 @@
import com.bigdata.rawstore.Bytes;
/**
- * Collects some counters using <code>iostat</code>. Unfortunately,
+ * Collects some counters using <code>iostat</code> under OSX. Unfortunately,
* <code>iostat</code> does not break down the reads and writes and does not
* report IO Wait. This information is obviously available from OSX as it is
* provided by the ActivityMonitor, but we can not get it from
* <code>iostat</code>.
*
* @author <a href="mailto:tho...@us...">Bryan Thompson</a>
- * @version $Id: VMStatCollector.java 4289 2011-03-10 21:22:30Z thompsonbry $
*/
public class IOStatCollector extends AbstractProcessCollector implements
ICounterHierarchy, IRequiredHostCounters, IHostCounters{
@@ -77,7 +76,7 @@
}
- public I(String path) {
+ public I(final String path) {
assert path != null;
@@ -85,9 +84,10 @@
}
+ @Override
public long lastModified() {
- return lastModified;
+ return lastModified.get();
}
@@ -95,7 +95,8 @@
* @throws UnsupportedOperationException
* always.
*/
- public void setValue(T value, long timestamp) {
+ @Override
+ public void setValue(final T value, final long timestamp) {
throw new UnsupportedOperationException();
@@ -114,7 +115,7 @@
DI(final String path) {
- this(path,1d);
+ this(path, 1d);
}
@@ -126,7 +127,7 @@
}
-
+ @Override
public Double getValue() {
final Double value = (Double) vals.get(path);
@@ -146,14 +147,14 @@
/**
* Map containing the current values for the configured counters. The keys
* are paths into the {@link CounterSet}. The values are the data most
- * recently read from <code>vmstat</code>.
+ * recently read from <code>iostat</code>.
*/
- final private Map<String, Object> vals = new HashMap<String, Object>();
+ final private Map<String, Object> vals = new ConcurrentHashMap<String, Object>();
/**
* The timestamp associated with the most recently collected values.
*/
- private long lastModified = System.currentTimeMillis();
+ private final AtomicLong lastModified = new AtomicLong(System.currentTimeMillis());
/**
* The {@link Pattern} used to split apart the rows read from
@@ -178,7 +179,8 @@
this.cpuStats = cpuStats;
}
-
+
+ @Override
public List<String> getCommand() {
final List<String> command = new LinkedList<String>();
@@ -203,14 +205,13 @@
}
- /**
- * Declares the counters that we will collect
- */
+ @Override
public CounterSet getCounters() {
final CounterSet root = new CounterSet();
- inst = new LinkedList<I>();
+ @SuppressWarnings("rawtypes")
+ final List<I> inst = new LinkedList<I>();
/*
* Note: Counters are all declared as Double to facilitate aggregation.
@@ -249,24 +250,22 @@
inst.add(new DI(IHostCounters.CPU_PercentUserTime, .01d));
// Note: column sy
inst.add(new DI(IHostCounters.CPU_PercentSystemTime, .01d));
-// // Note: IO Wait is NOT reported by vmstat.
+// // Note: IO Wait is NOT reported by iostat.
// inst.add(new DI(IHostCounters.CPU_PercentIOWait, .01d));
}
- for (Iterator<I> itr = inst.iterator(); itr.hasNext();) {
+ for (@SuppressWarnings("rawtypes") I i : inst) {
- final I i = itr.next();
+ root.addCounter(i.getPath(), i);
- root.addCounter(i.getPath(), i);
+ }
- }
-
return root;
}
- private List<I> inst = null;
+ @Override
public AbstractProcessReader getProcessReader() {
return new IOStatReader();
@@ -300,6 +299,7 @@
*/
private class IOStatReader extends ProcessReaderHelper {
+ @Override
protected ActiveProcess getActiveProcess() {
if (activeProcess == null)
@@ -427,7 +427,7 @@
try {
// timestamp
- lastModified = System.currentTimeMillis();
+ lastModified.set(System.currentTimeMillis());
final String[] fields = pattern
.split(data.trim(), 0/* limit */);
Modified: branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/counters/osx/VMStatCollector.java
===================================================================
--- branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/counters/osx/VMStatCollector.java 2014-05-20 20:03:55 UTC (rev 8388)
+++ branches/DEPLOYMENT_BRANCH_1_3_1/bigdata/src/java/com/bigdata/counters/osx/VMStatCollector.java 2014-05-21 09:12:36 UTC (rev 8389)
@@ -28,11 +28,11 @@
package com.bigdata.counters.osx;
-import java.util.HashMap;
-import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.atomic.AtomicLong;
import java.util.regex.Pattern;
import com.bigdata.counters.AbstractProcessCollector;
@@ -72,17 +72,19 @@
}
- public I(String path) {
-
- assert path != null;
-
+ public I(final String path) {
+
+ if (path == null)
+ throw new IllegalArgumentException();
+
this.path = path;
}
+ @Override
public long lastModified() {
- return lastModified;
+ return lastModified.get();
}
@@ -90,6 +92,7 @@
* @throws UnsupportedOperationException
* always.
*/
+ @Override
public void setValue(T value, long timestamp) {
throw new UnsupportedOperationException();
@@ -108,20 +111,20 @@
protected final double scale;
DI(final String path) {
-
- this(path,1d);
+ this(path, 1d);
+
}
DI(final String path, final double scale) {
-
- super( path );
-
+
+ super(path);
+
this.scale = scale;
-
+
}
-
-
+
+ @Override
public Double getValue() {
final Double value = (Double) vals.get(path);
@@ -143,12 +146,13 @@
* are paths into the {@link CounterSet}. The values are the data most
* recently read from <code>vmstat</code>.
*/
- final private Map<String, Object> vals = new HashMap<String, Object>();
-
+ private final Map<String, Object> vals = new ConcurrentHashMap<String, Object>();
+
/**
* The timestamp associated with the most recently collected values.
*/
- private long lastModified = System.currentTimeMillis();
+ private final AtomicLong lastModified = new AtomicLong(
+ System.currentTimeMillis());
/**
* The {@link Pattern} used to split apart the rows read from
@@ -166,7 +170,8 @@
super(interval);
}
-
+
+ @Override
public List<String> getCommand() {
final List<String> command = new LinkedList<String>();
@@ -180,14 +185,13 @@
}
- /**
- * Declares the counters that we will collect
- */
+ @Override
public CounterSet getCounters() {
final CounterSet root = new CounterSet();
- inst = new LinkedList<I>();
+ @SuppressWarnings("rawtypes")
+ final List<I> inst = new LinkedList<I>();
/*
* Note: Counters are all declared as Double to facilitate aggregation.
@@ -209,19 +213,17 @@
*/
inst.add(new DI(IHostCounters.Memory_Bytes_Free));
- for (Iterator<I> itr = inst.iterator(); itr.hasNext();) {
+ for (@SuppressWarnin...
[truncated message content] |
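Two changes recur throughout the (truncated) diff above: the mutable `private long lastModified` field in each collector becomes a `final AtomicLong`, and the `vals` map becomes a `ConcurrentHashMap`, because the process-reader thread writes those values while instrument callbacks read them from other threads. A minimal sketch of the timestamp half of that pattern — class and method names here are illustrative, not the actual bigdata classes:

```java
import java.util.concurrent.atomic.AtomicLong;

// Minimal sketch of the pattern this commit applies to each collector:
// the reader thread stamps each parsed sample row, while instrument
// callbacks report lastModified() concurrently, so the field is made
// final and atomic instead of a bare long.
class SampleClock {

    private final AtomicLong lastModified =
            new AtomicLong(System.currentTimeMillis());

    // Invoked by the process-reader thread for each parsed sample row.
    void touch(final long timestampMillis) {
        lastModified.set(timestampMillis);
    }

    // Invoked from instrument callbacks on arbitrary threads.
    long lastModified() {
        return lastModified.get();
    }
}
```

The `AtomicLong` guarantees that a read never observes a torn 64-bit value, which a plain non-`volatile` `long` does not on all JVMs.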
|
From: <tho...@us...> - 2014-05-20 20:04:00
|
Revision: 8388
http://sourceforge.net/p/bigdata/code/8388
Author: thompsonbry
Date: 2014-05-20 20:03:55 +0000 (Tue, 20 May 2014)
Log Message:
-----------
Bug fix to #955 (StaticOptimizer should always choose a ZERO cardinality tail first) for the 1.2.5 tag. This commit is against branches/BIGDATA_RELEASE_1_2_4. The 1.2.5 release was made from this branch. A 1.2.6 release will also be made from this branch.
See #955
Modified Paths:
--------------
branches/BIGDATA_RELEASE_1_2_4/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/optimizers/ASTStaticJoinOptimizer.java
Modified: branches/BIGDATA_RELEASE_1_2_4/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/optimizers/ASTStaticJoinOptimizer.java
===================================================================
--- branches/BIGDATA_RELEASE_1_2_4/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/optimizers/ASTStaticJoinOptimizer.java 2014-05-20 18:56:22 UTC (rev 8387)
+++ branches/BIGDATA_RELEASE_1_2_4/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/optimizers/ASTStaticJoinOptimizer.java 2014-05-20 20:03:55 UTC (rev 8388)
@@ -929,6 +929,21 @@
}
}
+ /**
+ * Always choose a ZERO cardinality tail first, regardless of
+ * ancestry.
+ *
+ * @see <a href="http://trac.bigdata.com/ticket/955" >
+ * StaticOptimizer should always choose a ZERO cardinality tail
+ * first </a>
+ */
+ for (int i = 0; i < arity; i++) {
+ if (cardinality(i) == 0) {
+ preferredFirstTail = i;
+ break;
+ }
+ }
+
/*
* If there was no "run first" query hint, then go to the ancestry.
*/
This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site.
From: <mrp...@us...> - 2014-05-20 18:56:25
Revision: 8387
http://sourceforge.net/p/bigdata/code/8387
Author: mrpersonick
Date: 2014-05-20 18:56:22 +0000 (Tue, 20 May 2014)
Log Message:
-----------
fix for ticket 955
Modified Paths:
--------------
branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/optimizers/StaticOptimizer.java
Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/optimizers/StaticOptimizer.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/optimizers/StaticOptimizer.java 2014-05-20 18:55:44 UTC (rev 8386)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/optimizers/StaticOptimizer.java 2014-05-20 18:56:22 UTC (rev 8387)
@@ -215,6 +215,16 @@
}
/*
+ * Always choose a ZERO cardinality tail first, regardless of ancestry.
+ */
+ for (int i = 0; i < arity; i++) {
+ if (cardinality(i) == 0) {
+ preferredFirstTail = i;
+ break;
+ }
+ }
+
+ /*
* If there was no "run first" query hint, then go to the ancestry.
*/
if (preferredFirstTail == -1)
From: <mrp...@us...> - 2014-05-20 18:55:47
Revision: 8386
http://sourceforge.net/p/bigdata/code/8386
Author: mrpersonick
Date: 2014-05-20 18:55:44 +0000 (Tue, 20 May 2014)
Log Message:
-----------
added a means of just running the AST optimizers without actually running the query
Modified Paths:
--------------
branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/ASTEvalHelper.java
branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSailTupleQuery.java
Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/ASTEvalHelper.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/ASTEvalHelper.java 2014-05-20 17:13:40 UTC (rev 8385)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata-rdf/src/java/com/bigdata/rdf/sparql/ast/eval/ASTEvalHelper.java 2014-05-20 18:55:44 UTC (rev 8386)
@@ -411,7 +411,53 @@
}
}
+
+ /**
+ * Optimize a SELECT query.
+ *
+ * @param store
+ * The {@link AbstractTripleStore} having the data.
+ * @param astContainer
+ * The {@link ASTContainer}.
+ * @param bs
+ * The initial solution to kick things off.
+ *
+ * @return An optimized AST.
+ *
+ * @throws QueryEvaluationException
+ */
+ static public QueryRoot optimizeTupleQuery(
+ final AbstractTripleStore store, final ASTContainer astContainer,
+ final QueryBindingSet bs) throws QueryEvaluationException {
+ final AST2BOpContext context = new AST2BOpContext(astContainer, store);
+
+ // Clear the optimized AST.
+ astContainer.clearOptimizedAST();
+
+ // Batch resolve Values to IVs and convert to bigdata binding set.
+ final IBindingSet[] bindingSets = mergeBindingSets(astContainer,
+ batchResolveIVs(store, bs));
+
+ // Convert the query (generates an optimized AST as a side-effect).
+ AST2BOpUtility.convert(context, bindingSets);
+
+ // Get the projection for the query.
+ final IVariable<?>[] projected = astContainer.getOptimizedAST()
+ .getProjection().getProjectionVars();
+
+ final List<String> projectedSet = new LinkedList<String>();
+
+ for (IVariable<?> var : projected)
+ projectedSet.add(var.getName());
+
+ // The optimized AST.
+ final QueryRoot optimizedQuery = astContainer.getOptimizedAST();
+
+ return optimizedQuery;
+
+ }
+
/**
* Evaluate a CONSTRUCT/DESCRIBE query.
* <p>
Modified: branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSailTupleQuery.java
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSailTupleQuery.java 2014-05-20 17:13:40 UTC (rev 8385)
+++ branches/BIGDATA_RELEASE_1_3_0/bigdata-sails/src/java/com/bigdata/rdf/sail/BigdataSailTupleQuery.java 2014-05-20 18:55:44 UTC (rev 8386)
@@ -98,4 +98,32 @@
}
+ public QueryRoot optimize() throws QueryEvaluationException {
+
+ return optimize((BindingsClause) null);
+
+ }
+
+ public QueryRoot optimize(final BindingsClause bc)
+ throws QueryEvaluationException {
+
+ final QueryRoot originalQuery = astContainer.getOriginalAST();
+
+ if (bc != null)
+ originalQuery.setBindingsClause(bc);
+
+ if (getMaxQueryTime() > 0)
+ originalQuery.setTimeout(TimeUnit.SECONDS
+ .toMillis(getMaxQueryTime()));
+
+ originalQuery.setIncludeInferred(getIncludeInferred());
+
+ final QueryRoot optimized = ASTEvalHelper.optimizeTupleQuery(
+ getTripleStore(), astContainer, new QueryBindingSet(
+ getBindings()));
+
+ return optimized;
+
+ }
+
}
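The methods above expose an optimize-without-evaluate path: run the AST optimizer pipeline and hand back the rewritten `QueryRoot` instead of executing it, which is useful for inspecting or testing query plans. A generic sketch of that pattern under stated assumptions (the plan-as-`String` and the optimizer list below are hypothetical stand-ins, not the bigdata types):

```java
import java.util.List;
import java.util.function.UnaryOperator;

// Hypothetical sketch of "run the optimizers, skip evaluation":
// each optimizer rewrites the plan; the result is returned, not executed.
public class OptimizeOnly {

    public static String optimize(final String originalPlan,
            final List<UnaryOperator<String>> optimizers) {
        String plan = originalPlan;
        for (final UnaryOperator<String> opt : optimizers) {
            plan = opt.apply(plan); // apply each AST rewrite in order
        }
        return plan; // caller can inspect the optimized form without running it
    }
}
```

Separating the rewrite stage from evaluation this way lets unit tests assert on the optimized plan directly, which is what the log message ("just running the AST optimizers without actually running the query") describes.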
From: <mrp...@us...> - 2014-05-20 17:13:44
Revision: 8385
http://sourceforge.net/p/bigdata/code/8385
Author: mrpersonick
Date: 2014-05-20 17:13:40 +0000 (Tue, 20 May 2014)
Log Message:
-----------
Added Paths:
-----------
branches/SESAME_2_7/
Index: branches/SESAME_2_7
===================================================================
--- branches/BIGDATA_RELEASE_1_3_0 2014-05-20 15:18:57 UTC (rev 8382)
+++ branches/SESAME_2_7 2014-05-20 17:13:40 UTC (rev 8385)
Property changes on: branches/SESAME_2_7
___________________________________________________________________
Added: svn:ignore
## -0,0 +1,31 ##
+ant-build
+src
+bin
+bigdata*.jar
+ant-release
+standalone
+test*
+countersfinal.xml
+events.jnl
+.settings
+*.jnl
+TestInsertRate.out
+SYSTAP-BBT-result.txt
+U10load+query
+*.hprof
+com.bigdata.cache.TestHardReferenceQueueWithBatchingUpdates.exp.csv
+commit-log.txt
+eventLog
+dist
+bigdata-test
+com.bigdata.rdf.stress.LoadClosureAndQueryTest.*.csv
+DIST.bigdata-*.tgz
+REL.bigdata-*.tgz
+queryLog*
+queryRunState*
+sparql.txt
+benchmark
+CI
+bsbm10-dataset.nt.gz
+bsbm10-dataset.nt.zip
+benchmark*
Added: svn:mergeinfo
## -0,0 +1,20 ##
+/branches/BIGDATA_MGC_HA1_HA5:8025-8122
+/branches/BIGDATA_OPENRDF_2_6_9_UPDATE:6769-6785
+/branches/BIGDATA_RELEASE_1_2_0:6766-7380
+/branches/BTREE_BUFFER_BRANCH:2004-2045
+/branches/DEV_BRANCH_27_OCT_2009:2270-2546,2548-2782
+/branches/INT64_BRANCH:4486-4522
+/branches/JOURNAL_HA_BRANCH:2596-4066
+/branches/LARGE_LITERALS_REFACTOR:4175-4387
+/branches/LEXICON_REFACTOR_BRANCH:2633-3304
+/branches/MGC_1_3_0:7609-7752
+/branches/QUADS_QUERY_BRANCH:4525-4531,4550-4584,4586-4609,4634-4643,4646-4672,4674-4685,4687-4693,4697-4735,4737-4782,4784-4792,4794-4796,4798-4801
+/branches/RDR:7665-8159
+/branches/READ_CACHE:7215-7271
+/branches/RWSTORE_1_1_0_DEBUG:5896-5935
+/branches/TIDS_PLUS_BLOBS_BRANCH:4814-4836
+/branches/ZK_DISCONNECT_HANDLING:7465-7484
+/branches/bugfix-btm:2594-3237
+/branches/dev-btm:2574-2730
+/branches/fko:3150-3194
+/trunk:3392-3437,3656-4061
\ No newline at end of property