This list is closed; nobody may subscribe to it.
Archived messages by month:

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2010 | | 19 | 8 | 25 | 16 | 77 | 131 | 76 | 30 | 7 | 3 | |
| 2011 | | | | | 2 | 2 | 16 | 3 | 1 | | 7 | 7 |
| 2012 | 10 | 1 | 8 | 6 | 1 | 3 | 1 | | 1 | | 8 | 2 |
| 2013 | 5 | 12 | 2 | 1 | 1 | 1 | 22 | 50 | 31 | 64 | 83 | 28 |
| 2014 | 31 | 18 | 27 | 39 | 45 | 15 | 6 | 27 | 6 | 67 | 70 | 1 |
| 2015 | 3 | 18 | 22 | 121 | 42 | 17 | 8 | 11 | 26 | 15 | 66 | 38 |
| 2016 | 14 | 59 | 28 | 44 | 21 | 12 | 9 | 11 | 4 | 2 | 1 | |
| 2017 | 20 | 7 | 4 | 18 | 7 | 3 | 13 | 2 | 4 | 9 | 2 | 5 |
| 2018 | | | | 2 | | | | | | | | |
| 2019 | | | 1 | | | | | | | | | |
From: Brian M. <btm...@gm...> - 2010-08-25 12:03:44
On Wed, Aug 25, 2010 at 5:49 AM, Bryan Thompson <br...@sy...> wrote:

> You had been looking at wrapping zookeeper for discovery from jini. What is the status of that effort?

The wrapper class is complete and checked in to my development branch, but only manually tested. Currently, it can be started from pstart or from the boot manager process added to that branch. It still has to be modified to be startable from the ServicesManagerService, but more importantly, a client wrapper needs to be written. I was planning on doing the remaining work after the smart proxy work is complete for each of the other services, of which the data service and the client service remain.

BrianM

From: Bryan T. <br...@sy...> - 2010-08-25 09:53:00
I have filed an issue on this; see below.

Bryan
-----Original Message-----
From: bigdata(r) [mailto:no...@so...]
Sent: Wednesday, August 25, 2010 5:51 AM
Subject: [bigdata(r)] #148: Decouple the bulk loader configuration from the main bigdata configuration file
#148: Decouple the bulk loader configuration from the main bigdata configuration file
--------------------------------+--------------------------------------------
 Reporter: thompsonbry          | Owner:
     Type: task                 | Status: new
 Priority: major                | Milestone: Simplified deployment and management
Component: Bigdata Federation   | Version:
 Keywords:                      |
--------------------------------+--------------------------------------------
Decouple the bulk loader configuration from the main bigdata configuration file. This will greatly simplify the main configuration file and make it possible to have sample configuration files for different bulk loader tasks.
We wind up provisioning the specific triple or quad store instance when running the bulk loader for the first time against the namespace for that triple/quads store. For purely historical reasons, the bulk loader is configured by two component sections in the bigdata configuration file:
- lubm : This is where we are setting the properties which will govern the triple/quad store.
- com.bigdata.rdf.load.MappedRDFDataLoadMaster : This is where we describe the bulk load job.
The MappedRDFDataLoadMaster section also uses some back references into fields defined in the lubm section, but the entire lubm section could be folded into the MappedRDFDataLoadMaster section.
At present, there are the following back references into the rest of the configuration file:
bigdata.dataServiceCount : It seems that we should simply run with all logical data services found in jini/zookeeper.
bigdata.clientServiceCount : It seems to me that the #of client services could default to all unless overridden.
There is also a relatively complex declaration of the services templates which is used to describe which services must be running as a precondition for the bulk loader job. I propose that this should be either folded into the bulk loader code or abolished, as these preconditions basically assert that the configured system must be running (see bigdataCluster.config#1848).
awaitServicesTimeout = 10000;
servicesTemplates = new ServicesTemplate[] {...}
And at bigdataCluster.config#1893, a template is established which says that the bulk loader will use dedicated client service nodes (rather than running the distributed bulk load job on the data service nodes, which can also host distributed job execution).
clientsTemplate = new ServicesTemplate(...);
I like to use distinct client service nodes because the bulk loader tends to be memory hungry when it is buffering data for the shards in memory before writing on the data services. Running the distributed job on the data services adds to the burden of the data service nodes and we still need to buffer the data and then scatter it to the appropriate shards. It is possible to reduce the memory demand of the bulk loader (by adjusting the queue capacity and chunk size used by the asynchronous write pipeline), and I have done this in the bigdataStandalone.config file. For this reason, the triple/quads store configurations are not "one size fits all" and I will get into this in a follow on email which addresses performance tuning properties used in the configuration files.
There are also some optional properties which really should be turned off unless you are engaged in forensics:
indexDumpDir = new File("@NAS@/"+jobName+"-indexDumps");
indexDumpNamespace = lubm.namespace;
Based on this, it seems that we could isolate the bulk loader configuration relatively easily into its own configuration file. That configuration file would only need to know a bare minimum of things:
- jini groups and locators.
- zookeeper quorum IPs and ports.
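To make the proposal concrete, here is a hedged sketch of what such a standalone bulk loader configuration file might look like. This is illustrative only: the component and entry names below are assumptions, not the actual bigdata configuration entries. Jini configuration files use a Java-like syntax.

    import net.jini.core.discovery.LookupLocator;

    // Hypothetical standalone bulk loader configuration: only service
    // discovery and zookeeper coordinates, per the list above. The entry
    // names are illustrative, not the actual bigdata configuration schema.
    com.bigdata.service.jini.JiniClient {
        // jini groups and locators
        groups = new String[] { "bigdata" };
        locators = new LookupLocator[] {
            new LookupLocator("jini://localhost/")
        };
    }

    org.apache.zookeeper.ZooKeeper {
        // zookeeper quorum IPs and ports
        servers = "zk1:2181,zk2:2181,zk3:2181";
    }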
--
Ticket URL: <http://sourceforge.net/apps/trac/bigdata/ticket/148>
bigdata(r) <http://www.bigdata.com/blog>
bigdata(r) is a scale-out storage and computing fabric supporting optional transactions, very high concurrency, and very high aggregate IO rates.
From: Bryan T. <br...@sy...> - 2010-08-25 09:49:56
Brian,

You had been looking at wrapping zookeeper for discovery from jini. What is the status of that effort?

Thanks,
Bryan

From: Bryan T. <br...@sy...> - 2010-08-25 09:49:35
All,
I would like to decouple the bulk loader configuration from the main bigdata configuration file. This will greatly simplify the main configuration file and make it possible to have sample configuration files for different bulk loader tasks.
We wind up provisioning the specific triple or quad store instance when running the bulk loader for the first time against the namespace for that triple/quads store. For purely historical reasons, the bulk loader is configured by two component sections in the bigdata configuration file:
- lubm : This is where we are setting the properties which will govern the triple/quad store.
- com.bigdata.rdf.load.MappedRDFDataLoadMaster : This is where we describe the bulk load job.
The MappedRDFDataLoadMaster section also uses some back references into fields defined in the lubm section, but the entire lubm section could be folded into the MappedRDFDataLoadMaster section.
At present, there are the following back references into the rest of the configuration file:
bigdata.dataServiceCount : It seems that we should simply run with all logical data services found in jini/zookeeper.
bigdata.clientServiceCount : It seems to me that the #of client services could default to all unless overridden.
There is also a relatively complex declaration of the services templates which is used to describe which services must be running as a precondition for the bulk loader job. I propose that this should be either folded into the bulk loader code or abolished, as these preconditions basically assert that the configured system must be running (see bigdataCluster.config#1848).
awaitServicesTimeout = 10000;
servicesTemplates = new ServicesTemplate[] {...}
And at bigdataCluster.config#1893, a template is established which says that the bulk loader will use dedicated client service nodes (rather than running the distributed bulk load job on the data service nodes, which can also host distributed job execution).
clientsTemplate = new ServicesTemplate(...);
I like to use distinct client service nodes because the bulk loader tends to be memory hungry when it is buffering data for the shards in memory before writing on the data services. Running the distributed job on the data services adds to the burden of the data service nodes and we still need to buffer the data and then scatter it to the appropriate shards. It is possible to reduce the memory demand of the bulk loader (by adjusting the queue capacity and chunk size used by the asynchronous write pipeline), and I have done this in the bigdataStandalone.config file. For this reason, the triple/quads store configurations are not "one size fits all" and I will get into this in a follow on email which addresses performance tuning properties used in the configuration files.
There are also some optional properties which really should be turned off unless you are engaged in forensics:
indexDumpDir = new File("@NAS@/"+jobName+"-indexDumps");
indexDumpNamespace = lubm.namespace;
Based on this, it seems that we could isolate the bulk loader configuration relatively easily into its own configuration file. That configuration file would only need to know a bare minimum of things:
- jini groups and locators.
- zookeeper quorum IPs and ports.
Thanks,
Bryan
From: husdon <no...@no...> - 2010-08-24 19:22:12
See <http://localhost/job/BigData/changes>

From: husdon <no...@no...> - 2010-08-24 18:41:24
See <http://localhost/job/BigData/changes>

From: Bryan T. <br...@sy...> - 2010-08-24 16:01:35
Brian,

I am seeing the message under eclipse based runs of individual junit tests or test suites. It sounds like we should somehow configure the eclipse project to define one of those properties. Does anyone know how to do that in a manner which results in the correct setup when the project is checked out?
Also, concerning the property values, do we need to specify "file:" to indicate the protocol and make these paths into URLs?
E.g.,
-Dlog4j.configuration=file:bigdata/src/resources/logging/log4j.properties
rather than
-Dlog4j.configuration=bigdata/src/resources/logging/log4j.properties
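For what it is worth, log4j 1.2's default initialization (assuming LogUtil follows it) first tries the property value as a URL and, failing that, looks the value up on the CLASSPATH rather than resolving it against the working directory, which would argue for the "file:" form. A minimal sketch of that resolution order (the demo class below is hypothetical, not part of bigdata):

    import java.net.MalformedURLException;
    import java.net.URL;

    // Hypothetical demo of how log4j 1.2 interprets -Dlog4j.configuration:
    // try the value as a URL first, else fall back to a classpath lookup.
    public class Log4jConfigResolution {
        public static void main(String[] args) {
            String value = System.getProperty("log4j.configuration",
                    "bigdata/src/resources/logging/log4j.properties");
            try {
                // succeeds for "file:..." style values
                System.out.println("resolved as URL: " + new URL(value));
            } catch (MalformedURLException e) {
                // a bare relative path is not a URL, so log4j would fall
                // back to a classpath lookup of this name
                URL resource = Log4jConfigResolution.class.getClassLoader()
                        .getResource(value);
                System.out.println("classpath lookup: " + resource);
            }
        }
    }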
Thanks,
Bryan
________________________________
From: Brian Murphy [mailto:btm...@gm...]
Sent: Tuesday, August 24, 2010 11:52 AM
To: big...@li...
Subject: Re: [Bigdata-developers] LogUtil, Log4jLoggingHandler
On Tue, Aug 24, 2010 at 11:10 AM, Bryan Thompson <br...@sy...> wrote:

> I will occasionally see messages when running a junit test under eclipse such as:
>
> ERROR: could not initialize Log4J logging utility
> set system property '-Dlog4j.configuration=bigdata/src/resources/logging/log4j.properties'
> and/or
> set system property '-Dlog4j.primary.configuration=<installDir>/bigdata/src/resources/logging/log4j.properties'

This means that neither of the referenced system properties has been set on the current VM of the code that is attempting to initialize a logger. In a past life, this message has been quite useful to me in identifying where I've misconfigured logging, especially given that a misconfigured logger can often fail silently.

> Historically, I had simply included bigdata/src/resources/logging as a source folder under eclipse such that the log4j.properties file would be resolved using the classpath.

Relying on the classpath doesn't work so well -- some might even say that it's unacceptable -- in real deployments.

> This appears to no longer work, at least, not all of the time.

In eclipse, the ant based junit tests, or a 'bigdata start' deployment? I haven't encountered it in the junit tests, and for the deployment case, I've noticed that those properties need to be set in the various bigdataClusterXX.config files for the given service that the SMS is starting. Not sure about eclipse.

> Also, I see the following pattern being used in some classes:
>
>     private static final org.apache.log4j.Logger utilLogger =
>             LogUtil.getLog4jLogger( NicUtil.class );
>
> while most of the code base uses this pattern:
>
>     protected static final Logger log = Logger.getLogger(BufferService.class);
>
> I am curious as to the difference in the manner in which these loggers are resolved and what the intent is of the pattern which makes use of LogUtil

The LogUtil.java class is a convenience utility that ultimately calls Logger.getLogger(), but retrieves and processes the desired logging configuration file once (in various forms, such as text or xml), and displays the above error message if that config file cannot be found.

> and what the relationship is between LogUtil and Log4jLoggingHandler.

Bigdata uses Jini code, and the Jini code uses java.util.logging instead of Log4j. Log4j and java.util.logging each use different names for their log levels. The Log4jLoggingHandler.java class translates the java.util.logging log levels that are specified in a java.util.logging based logging configuration file to the corresponding Log4j logging levels (as defined by the handler), which results in the output logged by the various Jini classes being consistent with the Log4j levels logged by the non-Jini code. For example, items logged in a Jini class at the java.util.logging.FINEST level will be written in the log file at the Log4j TRACE level.

The above is useful when a logging configuration file is generated that combines Log4j based configuration specifications with java.util.logging based configuration specifications.

BrianM
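For readers unfamiliar with this kind of bridge, the following is a minimal sketch of a level-translating handler in the spirit of what BrianM describes. It is not the actual Log4jLoggingHandler source; the class name and the exact mapping are assumptions, except that FINEST-to-TRACE matches the example above.

    import java.util.logging.Handler;
    import java.util.logging.Level;
    import java.util.logging.LogRecord;

    // Hypothetical bridge (not the actual Log4jLoggingHandler source):
    // forwards java.util.logging records to Log4j, translating levels.
    public class JulToLog4jHandler extends Handler {

        @Override
        public void publish(final LogRecord record) {
            final String name = record.getLoggerName();
            final org.apache.log4j.Logger log4j = org.apache.log4j.Logger
                    .getLogger(name != null ? name : "global");
            log4j.log(toLog4jLevel(record.getLevel()),
                    record.getMessage(), record.getThrown());
        }

        // SEVERE->ERROR, WARNING->WARN, INFO/CONFIG->INFO, FINE->DEBUG,
        // FINER/FINEST->TRACE. Only the FINEST->TRACE mapping is stated
        // in the message above; the rest is an assumed conventional mapping.
        private static org.apache.log4j.Level toLog4jLevel(final Level l) {
            final int v = l.intValue();
            if (v >= Level.SEVERE.intValue()) return org.apache.log4j.Level.ERROR;
            if (v >= Level.WARNING.intValue()) return org.apache.log4j.Level.WARN;
            if (v >= Level.CONFIG.intValue()) return org.apache.log4j.Level.INFO;
            if (v >= Level.FINE.intValue()) return org.apache.log4j.Level.DEBUG;
            return org.apache.log4j.Level.TRACE;
        }

        @Override public void flush() { /* nothing buffered */ }
        @Override public void close() { /* nothing to release */ }
    }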
From: Brian M. <btm...@gm...> - 2010-08-24 15:51:45
On Tue, Aug 24, 2010 at 11:10 AM, Bryan Thompson <br...@sy...> wrote:

> I will occasionally see messages when running a junit test under eclipse such as:
>
> ERROR: could not initialize Log4J logging utility
> set system property '-Dlog4j.configuration=bigdata/src/resources/logging/log4j.properties'
> and/or
> set system property '-Dlog4j.primary.configuration=<installDir>/bigdata/src/resources/logging/log4j.properties'

This means that neither of the referenced system properties has been set on the current VM of the code that is attempting to initialize a logger. In a past life, this message has been quite useful to me in identifying where I've misconfigured logging, especially given that a misconfigured logger can often fail silently.

> Historically, I had simply included bigdata/src/resources/logging as a source folder under eclipse such that the log4j.properties file would be resolved using the classpath.

Relying on the classpath doesn't work so well -- some might even say that it's unacceptable -- in real deployments.

> This appears to no longer work, at least, not all of the time.

In eclipse, the ant based junit tests, or a 'bigdata start' deployment? I haven't encountered it in the junit tests, and for the deployment case, I've noticed that those properties need to be set in the various bigdataClusterXX.config files for the given service that the SMS is starting. Not sure about eclipse.

> Also, I see the following pattern being used in some classes:
>
>     private static final org.apache.log4j.Logger utilLogger =
>             LogUtil.getLog4jLogger( NicUtil.class );
>
> while most of the code base uses this pattern:
>
>     protected static final Logger log = Logger.getLogger(BufferService.class);
>
> I am curious as to the difference in the manner in which these loggers are resolved and what the intent is of the pattern which makes use of LogUtil

The LogUtil.java class is a convenience utility that ultimately calls Logger.getLogger(), but retrieves and processes the desired logging configuration file once (in various forms, such as text or xml), and displays the above error message if that config file cannot be found.

> and what the relationship is between LogUtil and Log4jLoggingHandler.

Bigdata uses Jini code, and the Jini code uses java.util.logging instead of Log4j. Log4j and java.util.logging each use different names for their log levels. The Log4jLoggingHandler.java class translates the java.util.logging log levels that are specified in a java.util.logging based logging configuration file to the corresponding Log4j logging levels (as defined by the handler), which results in the output logged by the various Jini classes being consistent with the Log4j levels logged by the non-Jini code. For example, items logged in a Jini class at the java.util.logging.FINEST level will be written in the log file at the Log4j TRACE level.

The above is useful when a logging configuration file is generated that combines Log4j based configuration specifications with java.util.logging based configuration specifications.

BrianM

From: Bryan T. <br...@sy...> - 2010-08-24 15:33:30
Just to follow up on this, I think that this message occurs when the first logger to be initialized is NicUtil, which uses the new system. When the first logger to be initialized uses the old system, this message is not displayed.

Bryan

> -----Original Message-----
> From: Bryan Thompson [mailto:br...@sy...]
> Sent: Tuesday, August 24, 2010 11:10 AM
> To: Bigdata Developers
> Subject: [Bigdata-developers] LogUtil, Log4jLoggingHandler
>
> Brian,
>
> Can you comment on these files? They were introduced into the trunk at revision 2493 and provide for log4j initialization. I will occasionally see messages when running a junit test under eclipse such as:
>
> ERROR: could not initialize Log4J logging utility
> set system property '-Dlog4j.configuration=bigdata/src/resources/logging/log4j.properties'
> and/or
> set system property '-Dlog4j.primary.configuration=<installDir>/bigdata/src/resources/logging/log4j.properties'
>
> Historically, I had simply included bigdata/src/resources/logging as a source folder under eclipse such that the log4j.properties file would be resolved using the classpath. This appears to no longer work, at least, not all of the time.
>
> Also, I see the following pattern being used in some classes:
>
>     private static final org.apache.log4j.Logger utilLogger =
>             LogUtil.getLog4jLogger( NicUtil.class );
>
> while most of the code base uses this pattern:
>
>     protected static final Logger log = Logger.getLogger(BufferService.class);
>
> I am curious as to the difference in the manner in which these loggers are resolved and what the intent is of the pattern which makes use of LogUtil and what the relationship is between LogUtil and Log4jLoggingHandler.
>
> Also, I was wondering if people would comment on whether to converge on the practice of making the loggers private such that each class has its own logger or allowing loggers to be protected and hence inherited unless overridden. I am happy enough to go with either approach, but I would like to be consistent in the practice that we adopt.
>
> Thanks,
> Bryan

From: Bryan T. <br...@sy...> - 2010-08-24 15:12:03
Brian,
Can you comment on these files? They were introduced into the trunk at revision 2493 and provide for log4j initialization. I will occasionally see messages when running a junit test under eclipse such as:
ERROR: could not initialize Log4J logging utility
set system property '-Dlog4j.configuration=bigdata/src/resources/logging/log4j.properties'
and/or
set system property '-Dlog4j.primary.configuration=<installDir>/bigdata/src/resources/logging/log4j.properties'
Historically, I had simply included bigdata/src/resources/logging as a source folder under eclipse such that the log4j.properties file would be resolved using the classpath. This appears to no longer work, at least, not all of the time.
Also, I see the following pattern being used in some classes:
private static final org.apache.log4j.Logger utilLogger =
        LogUtil.getLog4jLogger( NicUtil.class );
while most of the code base uses this pattern:
protected static final Logger log = Logger.getLogger(BufferService.class);
I am curious as to the difference in the manner in which these loggers are resolved, what the intent is of the pattern which makes use of LogUtil, and what the relationship is between LogUtil and Log4jLoggingHandler.
Also, I was wondering if people would comment on whether to converge on the practice of making the loggers private such that each class has its own logger or allowing loggers to be protected and hence inherited unless overridden. I am happy enough to go with either approach, but I would like to be consistent in the practice that we adopt.
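To illustrate the tradeoff in that last question, a small hypothetical example (ManagedBufferService is made up): with a protected static logger a subclass inherits the parent's logger and therefore logs under the parent's category, while a private logger per class keeps each class's output under its own name.

    // Hypothetical pair of classes showing the inheritance behavior.
    public class BufferService {
        protected static final org.apache.log4j.Logger log =
                org.apache.log4j.Logger.getLogger(BufferService.class);
    }

    class ManagedBufferService extends BufferService {
        void poke() {
            // logged under the "BufferService" category, not under
            // ManagedBufferService, unless this class declares its own
            // (private) logger that shadows the inherited one
            log.info("running in the subclass");
        }
    }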
Thanks,
Bryan
From: husdon <no...@no...> - 2010-08-23 18:28:16
See <http://localhost/job/BigData/changes>

From: Bryan T. <br...@sy...> - 2010-08-20 11:56:47
Good morning,

Please update your trac tickets today. Make sure that there is an issue for any work that you are doing and that you have changed the status to "accepted" before starting work on that issue.

Thanks,
Bryan

From: husdon <no...@no...> - 2010-08-18 20:46:31
See <http://localhost/job/BigData/150/changes>

From: husdon <no...@no...> - 2010-08-18 00:05:51
See <http://localhost/job/BigData/149/changes>

From: Bryan T. <br...@sy...> - 2010-08-13 19:19:59
Stephane, please file an issue on this and we will track down the problem. Please include your configuration, properties, etc.
Bryan
Stephane Fellah <sf...@sm...> wrote:
Hi,
I tried to deploy the new version of bigdata on a sesame server (version 2.3.2), both on Windows and Linux Ubuntu. The creation of the repository through the console is successful, but when I try to access the repository from the openrdf-workbench I get similar errors on both the Windows and Linux deployments.
The first time, when I tried to list namespaces, I got:
javax.servlet.ServletException: org.openrdf.repository.RepositoryException: org.openrdf.repository.config.RepositoryConfigException: java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
org.openrdf.workbench.base.TransformationServlet.service(TransformationServlet.java:80)
org.openrdf.workbench.base.BaseServlet.service(BaseServlet.java:40)
org.openrdf.workbench.proxy.ProxyRepositoryServlet.service(ProxyRepositoryServlet.java:93)
org.openrdf.workbench.proxy.WorkbenchServlet.service(WorkbenchServlet.java:131)
org.openrdf.workbench.proxy.WorkbenchServlet.service(WorkbenchServlet.java:90)
org.openrdf.workbench.proxy.WorkbenchGateway.service(WorkbenchGateway.java:97)
org.openrdf.workbench.base.BaseServlet.service(BaseServlet.java:40)
org.openrdf.workbench.proxy.CookieCacheControlFilter.doFilter(CookieCacheControlFilter.java:52)
root cause
org.openrdf.repository.RepositoryException: org.openrdf.repository.config.RepositoryConfigException: java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
org.openrdf.http.client.HTTPClient.getTupleQueryResult(HTTPClient.java:992)
org.openrdf.http.client.HTTPClient.getNamespaces(HTTPClient.java:768)
org.openrdf.http.client.HTTPClient.getNamespaces(HTTPClient.java:750)
org.openrdf.repository.http.HTTPRepositoryConnection.getNamespaces(HTTPRepositoryConnection.java:324)
org.openrdf.workbench.commands.NamespacesServlet.service(NamespacesServlet.java:48)
org.openrdf.workbench.base.TransformationServlet.service(TransformationServlet.java:94)
org.openrdf.workbench.base.TransformationServlet.service(TransformationServlet.java:73)
org.openrdf.workbench.base.BaseServlet.service(BaseServlet.java:40)
org.openrdf.workbench.proxy.ProxyRepositoryServlet.service(ProxyRepositoryServlet.java:93)
org.openrdf.workbench.proxy.WorkbenchServlet.service(WorkbenchServlet.java:131)
org.openrdf.workbench.proxy.WorkbenchServlet.service(WorkbenchServlet.java:90)
org.openrdf.workbench.proxy.WorkbenchGateway.service(WorkbenchGateway.java:97)
org.openrdf.workbench.base.BaseServlet.service(BaseServlet.java:40)
org.openrdf.workbench.proxy.CookieCacheControlFilter.doFilter(CookieCacheControlFilter.java:52)
Then on the second query (listing contexts, for example) I get the following error:
javax.servlet.ServletException: org.openrdf.repository.RepositoryException: org.openrdf.repository.config.RepositoryConfigException: java.nio.channels.OverlappingFileLockException
org.openrdf.workbench.base.TransformationServlet.service(TransformationServlet.java:80)
org.openrdf.workbench.base.BaseServlet.service(BaseServlet.java:40)
org.openrdf.workbench.proxy.ProxyRepositoryServlet.service(ProxyRepositoryServlet.java:93)
org.openrdf.workbench.proxy.WorkbenchServlet.service(WorkbenchServlet.java:131)
org.openrdf.workbench.proxy.WorkbenchServlet.service(WorkbenchServlet.java:90)
org.openrdf.workbench.proxy.WorkbenchGateway.service(WorkbenchGateway.java:97)
org.openrdf.workbench.base.BaseServlet.service(BaseServlet.java:40)
org.openrdf.workbench.proxy.CookieCacheControlFilter.doFilter(CookieCacheControlFilter.java:52)
root cause
org.openrdf.repository.RepositoryException: org.openrdf.repository.config.RepositoryConfigException: java.nio.channels.OverlappingFileLockException
org.openrdf.http.client.HTTPClient.getTupleQueryResult(HTTPClient.java:992)
org.openrdf.http.client.HTTPClient.getNamespaces(HTTPClient.java:768)
org.openrdf.http.client.HTTPClient.getNamespaces(HTTPClient.java:750)
org.openrdf.repository.http.HTTPRepositoryConnection.getNamespaces(HTTPRepositoryConnection.java:324)
org.openrdf.workbench.base.TupleServlet.service(TupleServlet.java:38)
org.openrdf.workbench.base.TransformationServlet.service(TransformationServlet.java:73)
org.openrdf.workbench.base.BaseServlet.service(BaseServlet.java:40)
org.openrdf.workbench.proxy.ProxyRepositoryServlet.service(ProxyRepositoryServlet.java:93)
org.openrdf.workbench.proxy.WorkbenchServlet.service(WorkbenchServlet.java:131)
org.openrdf.workbench.proxy.WorkbenchServlet.service(WorkbenchServlet.java:90)
org.openrdf.workbench.proxy.WorkbenchGateway.service(WorkbenchGateway.java:97)
org.openrdf.workbench.base.BaseServlet.service(BaseServlet.java:40)
org.openrdf.workbench.proxy.CookieCacheControlFilter.doFilter(CookieCacheControlFilter.java:52)
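For context on that second stack trace: java.nio.channels.OverlappingFileLockException is raised when a single JVM attempts to acquire two overlapping locks on the same file, which is what would happen if, say, two repository instances opened the same journal file. A minimal reproduction (the file name is illustrative):

    import java.io.RandomAccessFile;
    import java.nio.channels.FileChannel;

    // Demonstrates the root cause above: the second lock on the same file
    // from within one JVM throws OverlappingFileLockException.
    public class LockDemo {
        public static void main(String[] args) throws Exception {
            try (RandomAccessFile raf = new RandomAccessFile("demo.jnl", "rw")) {
                FileChannel ch = raf.getChannel();
                ch.lock(); // first exclusive lock succeeds
                ch.lock(); // throws OverlappingFileLockException
            }
        }
    }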
Please note that I had managed to get a successful deployment on Sesame (2.3.1) using the previous version of Bigdata.
Any clue?
--
Stephane Fellah, MS, B.Sc
Lead Software Engineer
smartRealm LLC
203 Loudoun St. SW suite #200
Leesburg, VA 20176
Tel: 703 669 5514
Cell: 703 447 2078
Fax: 703 669 5515
From: Stephane F. <sf...@sm...> - 2010-08-13 19:06:58
Hi,

I tried to deploy the new version of bigdata on a sesame server (version 2.3.2), both on Windows and Linux Ubuntu. The creation of the repository through the console is successful, but when I try to access the repository from the openrdf-workbench I get similar errors on both the Windows and Linux deployments.

The first time, when I tried to list namespaces, I got:

javax.servlet.ServletException: org.openrdf.repository.RepositoryException: org.openrdf.repository.config.RepositoryConfigException: java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
org.openrdf.workbench.base.TransformationServlet.service(TransformationServlet.java:80)
org.openrdf.workbench.base.BaseServlet.service(BaseServlet.java:40)
org.openrdf.workbench.proxy.ProxyRepositoryServlet.service(ProxyRepositoryServlet.java:93)
org.openrdf.workbench.proxy.WorkbenchServlet.service(WorkbenchServlet.java:131)
org.openrdf.workbench.proxy.WorkbenchServlet.service(WorkbenchServlet.java:90)
org.openrdf.workbench.proxy.WorkbenchGateway.service(WorkbenchGateway.java:97)
org.openrdf.workbench.base.BaseServlet.service(BaseServlet.java:40)
org.openrdf.workbench.proxy.CookieCacheControlFilter.doFilter(CookieCacheControlFilter.java:52)

root cause

org.openrdf.repository.RepositoryException: org.openrdf.repository.config.RepositoryConfigException: java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
org.openrdf.http.client.HTTPClient.getTupleQueryResult(HTTPClient.java:992)
org.openrdf.http.client.HTTPClient.getNamespaces(HTTPClient.java:768)
org.openrdf.http.client.HTTPClient.getNamespaces(HTTPClient.java:750)
org.openrdf.repository.http.HTTPRepositoryConnection.getNamespaces(HTTPRepositoryConnection.java:324)
org.openrdf.workbench.commands.NamespacesServlet.service(NamespacesServlet.java:48)
org.openrdf.workbench.base.TransformationServlet.service(TransformationServlet.java:94)
org.openrdf.workbench.base.TransformationServlet.service(TransformationServlet.java:73)
org.openrdf.workbench.base.BaseServlet.service(BaseServlet.java:40)
org.openrdf.workbench.proxy.ProxyRepositoryServlet.service(ProxyRepositoryServlet.java:93)
org.openrdf.workbench.proxy.WorkbenchServlet.service(WorkbenchServlet.java:131)
org.openrdf.workbench.proxy.WorkbenchServlet.service(WorkbenchServlet.java:90)
org.openrdf.workbench.proxy.WorkbenchGateway.service(WorkbenchGateway.java:97)
org.openrdf.workbench.base.BaseServlet.service(BaseServlet.java:40)
org.openrdf.workbench.proxy.CookieCacheControlFilter.doFilter(CookieCacheControlFilter.java:52)

Then on the second query (listing contexts, for example) I get the following error:

javax.servlet.ServletException: org.openrdf.repository.RepositoryException: org.openrdf.repository.config.RepositoryConfigException: java.nio.channels.OverlappingFileLockException
org.openrdf.workbench.base.TransformationServlet.service(TransformationServlet.java:80)
org.openrdf.workbench.base.BaseServlet.service(BaseServlet.java:40)
org.openrdf.workbench.proxy.ProxyRepositoryServlet.service(ProxyRepositoryServlet.java:93)
org.openrdf.workbench.proxy.WorkbenchServlet.service(WorkbenchServlet.java:131)
org.openrdf.workbench.proxy.WorkbenchServlet.service(WorkbenchServlet.java:90)
org.openrdf.workbench.proxy.WorkbenchGateway.service(WorkbenchGateway.java:97)
org.openrdf.workbench.base.BaseServlet.service(BaseServlet.java:40)
org.openrdf.workbench.proxy.CookieCacheControlFilter.doFilter(CookieCacheControlFilter.java:52)

root cause

org.openrdf.repository.RepositoryException: org.openrdf.repository.config.RepositoryConfigException: java.nio.channels.OverlappingFileLockException
org.openrdf.http.client.HTTPClient.getTupleQueryResult(HTTPClient.java:992)
org.openrdf.http.client.HTTPClient.getNamespaces(HTTPClient.java:768)
org.openrdf.http.client.HTTPClient.getNamespaces(HTTPClient.java:750)
org.openrdf.repository.http.HTTPRepositoryConnection.getNamespaces(HTTPRepositoryConnection.java:324)
org.openrdf.workbench.base.TupleServlet.service(TupleServlet.java:38)
org.openrdf.workbench.base.TransformationServlet.service(TransformationServlet.java:73)
org.openrdf.workbench.base.BaseServlet.service(BaseServlet.java:40)
org.openrdf.workbench.proxy.ProxyRepositoryServlet.service(ProxyRepositoryServlet.java:93)
org.openrdf.workbench.proxy.WorkbenchServlet.service(WorkbenchServlet.java:131)
org.openrdf.workbench.proxy.WorkbenchServlet.service(WorkbenchServlet.java:90)
org.openrdf.workbench.proxy.WorkbenchGateway.service(WorkbenchGateway.java:97)
org.openrdf.workbench.base.BaseServlet.service(BaseServlet.java:40)
org.openrdf.workbench.proxy.CookieCacheControlFilter.doFilter(CookieCacheControlFilter.java:52)

Please note that I had managed to get a successful deployment on Sesame (2.3.1) using the previous version of Bigdata.

Any clue?

--
Stephane Fellah, MS, B.Sc
Lead Software Engineer
smartRealm LLC
203 Loudoun St. SW suite #200
Leesburg, VA 20176
Tel: 703 669 5514
Cell: 703 447 2078
Fax: 703 669 5515

From: Bryan T. <br...@sy...> - 2010-08-13 12:40:04
Hello,

I'd like to remind [1] everyone to:

* file an issue on trac <https://sourceforge.net/apps/trac/bigdata/> for any planned work;
* accept the issue before making changes; and
* update the status for accepted issues at least weekly (Friday morning).

This provides everyone with oversight on planned and active change sets via trac <https://sourceforge.net/apps/trac/bigdata/report/4> and makes it easier to minimize conflicts in the code base.

Thanks,
Bryan

[1] https://sourceforge.net/apps/mediawiki/bigdata/index.php?title=Contributors#Development

From: husdon <no...@no...> - 2010-08-12 17:35:32
See <http://localhost/job/BigData/changes>

From: Bryan T. <br...@sy...> - 2010-08-12 16:57:23
All,
Matt just pointed out a bug introduced by the XSD inlining. The bug primarily affects quads queries and would have been introduced in 0.83.0. This is fixed in the trunk. The issue is inline below.
Thanks,
Bryan
-----Original Message-----
From: bigdata(r) [mailto:no...@so...]
Sent: Thursday, August 12, 2010 12:50 PM
Subject: Re: [bigdata(r)] #141: SPO.hashCode() always returns zero (linear scaling in default graph queries)
#141: SPO.hashCode() always returns zero (linear scaling in default graph queries)
-----------------------------------+------------------------------------------
  Reporter: thompsonbry            | Owner: thompsonbry
      Type: defect                 | Status: closed
  Priority: critical               | Milestone:
 Component: Bigdata RDF Database   | Version: BIGDATA_RELEASE_0_83_2
Resolution: fixed                  | Keywords:
-----------------------------------+------------------------------------------
Changes (by thompsonbry):
* status: new => closed
* resolution: => fixed
Comment:
Bug fix to SPO.hashCode() per
https://sourceforge.net/apps/trac/bigdata/ticket/141.
The historical behavior of SPO.hashCode() was based on the int64 term identifiers. Since the hash code is now computed from the int32 hash codes of the (s,p,o) IV objects, the original bit math was resulting in a hash code which was always zero (any 32 bit value shifted right by 32 bits is zero). The change was to remove the bit math.
Committed revision 3441.
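To make the fix concrete, here is a hedged sketch of its shape (not the actual SPO.java source): once s, p, and o expose ordinary 32-bit hash codes there is no high word to fold down, so the components can be combined directly.

    // Illustrative only: combine the three component hash codes without
    // any 64-bit folding, which is what zeroed out the old hash.
    static int hashCode(final Object s, final Object p, final Object o) {
        // conventional 31-based polynomial combination
        return 31 * (31 * s.hashCode() + p.hashCode()) + o.hashCode();
    }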
--
Ticket URL: <http://sourceforge.net/apps/trac/bigdata/ticket/141#comment:1>
bigdata(r) <http://www.bigdata.com/blog>
bigdata(r) is a scale-out storage and computing fabric supporting optional transactions, very high concurrency, and very high aggregate IO rates.
From: husdon <no...@no...> - 2010-08-09 13:50:09
See <http://localhost/job/BigData/changes>

From: husdon <no...@no...> - 2010-08-09 13:05:46
See <http://localhost/job/BigData/changes>

From: husdon <no...@no...> - 2010-08-09 11:40:53
See <http://localhost/job/BigData/changes>

From: husdon <no...@no...> - 2010-08-08 16:05:25
See <http://localhost/job/BigData/changes>

From: husdon <no...@no...> - 2010-08-06 21:20:26
See <http://localhost/job/BigData/changes>

From: husdon <no...@no...> - 2010-08-06 20:34:25
See <http://localhost/job/BigData/changes>