This list is closed; nobody may subscribe to it.
Message counts by month:

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2010 | | 19 | 8 | 25 | 16 | 77 | 131 | 76 | 30 | 7 | 3 | |
| 2011 | | | | | 2 | 2 | 16 | 3 | 1 | | 7 | 7 |
| 2012 | 10 | 1 | 8 | 6 | 1 | 3 | 1 | | 1 | | 8 | 2 |
| 2013 | 5 | 12 | 2 | 1 | 1 | 1 | 22 | 50 | 31 | 64 | 83 | 28 |
| 2014 | 31 | 18 | 27 | 39 | 45 | 15 | 6 | 27 | 6 | 67 | 70 | 1 |
| 2015 | 3 | 18 | 22 | 121 | 42 | 17 | 8 | 11 | 26 | 15 | 66 | 38 |
| 2016 | 14 | 59 | 28 | 44 | 21 | 12 | 9 | 11 | 4 | 2 | 1 | |
| 2017 | 20 | 7 | 4 | 18 | 7 | 3 | 13 | 2 | 4 | 9 | 2 | 5 |
| 2018 | | | | 2 | | | | | | | | |
| 2019 | | | 1 | | | | | | | | | |
|
From: Bryan T. <br...@sy...> - 2010-05-10 16:32:18
|
Brian,

I think that it is still too early to say whether the metadata service (a shard locator service) should inherit from the data service (a shard container service) or not. The base abstraction for both the data service and the metadata service is really a container for named B+Trees. For the data service, those B+Trees are the shard views and include a mixture of journals and index segments for each shard. For the metadata service, the B+Trees are the shard locator indices, but are not themselves sharded.

Currently, the client views of the scale-out indices are written to the common interface exposed by the data services and the metadata service. For example, a key-range query directed to the metadata service will report the shard locators for a key-range of the corresponding scale-out index. The client then issues the appropriate requests to the data services on which the shards are located.

In the future, we may see a lot of evolution in the metadata service. For example, there is a proposal to use a P2P gossip protocol to distribute the shard locator service across the data services. The impact of that change on the metadata service has not been mapped out in detail yet.

Thanks,
Bryan

________________________________
From: Brian Murphy [mailto:btm...@gm...]
Sent: Monday, May 10, 2010 11:58 AM
To: big...@li...
Subject: [Bigdata-developers] Question on the MetadataService

The implementation of the MetadataService implements IMetadataService, which extends IDataService which extends IService, ITxCommitProtocol, and IRemoteExecutor. Additionally, MetadataService also extends DataService, which extends AbstractService. After an initial examination of the MetadataService implementation, I'm wondering if all the functionality required by the IDataService interface and provided by the DataService class is really necessary for the MetadataService. Was there a reason MetadataService was made a "super service" of DataService?

Thanks,
Brian |
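The lookup flow Bryan describes (a client asks the shard locator service for the shards covering a key range, then issues requests to the data services that hold them) can be sketched as a toy in Python. All names here are hypothetical illustrations, not bigdata's actual client API:

```python
import bisect

class ShardLocator:
    """Toy shard locator: maps the key space of one scale-out index
    to shards, each owned by some data service (a shard container)."""

    def __init__(self, shards):
        # shards: sorted list of (start_key, shard_id, data_service)
        self.starts = [s[0] for s in shards]
        self.shards = shards

    def locate(self, from_key, to_key):
        """Return the shard locators overlapping [from_key, to_key)."""
        # The shard covering from_key is the last one starting at or before it.
        i = max(bisect.bisect_right(self.starts, from_key) - 1, 0)
        out = []
        while i < len(self.shards) and self.shards[i][0] < to_key:
            out.append(self.shards[i])
            i += 1
        return out

# Shards of one scale-out index, partitioned by leading key (toy data).
locator = ShardLocator([(b"a", 1, "ds1"), (b"h", 2, "ds2"), (b"p", 3, "ds3")])

# Client side: resolve the key range, then issue one request per data service.
hits = locator.locate(b"f", b"q")
targets = [ds for _, _, ds in hits]
print(targets)  # ['ds1', 'ds2', 'ds3']
```

The point of the sketch is the two-step protocol: one key-range lookup against the locator index, then a fan-out to only the data services that actually hold overlapping shards.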
|
From: Brian M. <btm...@gm...> - 2010-05-10 15:57:37
|
The implementation of the MetadataService implements IMetadataService, which extends IDataService which extends IService, ITxCommitProtocol, and IRemoteExecutor. Additionally, MetadataService also extends DataService, which extends AbstractService. After an initial examination of the MetadataService implementation, I'm wondering if all the functionality required by the IDataService interface and provided by the DataService class is really necessary for the MetadataService. Was there a reason MetadataService was made a "super service" of DataService? Thanks, Brian |
|
From: b. <no...@so...> - 2010-05-02 16:28:23
|
#72: Testing - Make changes to allow all tests of TestServiceStarter to pass
when run using both ant & eclipse
--------------------------------+-------------------------------------------
Reporter: btmurphy | Owner: btmurphy
Type: defect | Status: new
Priority: major | Milestone:
Component: Bigdata Federation | Version:
Keywords: |
--------------------------------+-------------------------------------------
Comment(by btmurphy):
Changeset 2732 - merged changes from branch dev-btm --> trunk
M bigdata-jini/src/java/com/bigdata/jini/start/config/JiniCoreServicesConfiguration.java
M bigdata-jini/src/java/com/bigdata/jini/start/config/JiniServiceConfiguration.java
M bigdata-jini/src/resources/config/bigdataStandaloneTesting.config
M bigdata-jini/src/test/com/bigdata/jini/start/AbstractFedZooTestCase.java
M bigdata-jini/src/test/com/bigdata/jini/start/TestJiniCoreServicesProcessHelper.java
M bigdata-jini/src/test/com/bigdata/jini/start/TestServiceStarter.java
M bigdata-jini/src/test/com/bigdata/jini/start/config/TestServiceConfiguration.java
M bigdata-jini/src/test/com/bigdata/jini/start/config/testfed.config
M bigdata-jini/src/test/com/bigdata/jini/start/testfed.config
M bigdata-jini/src/test/com/bigdata/jini/start/testjini.config
M bigdata-jini/src/test/com/bigdata/zookeeper/testzoo.config
M build.xml
M src/resources/bin/pstart
A bigdata-jini/src/test/com/bigdata/jini/start/testReggie.config
A bigdata-jini/src/test/com/bigdata/jini/start/testStartJini.config
A src/resources/bin/config
A src/resources/bin/config/browser.config
A src/resources/bin/config/reggie.config
A src/resources/bin/config/serviceStarter.config
A src/resources/bin/config/zookeeper.config
D src/resources/config/jini/browser.config.tmp
D src/resources/config/jini/reggie.config.tmp
D src/resources/config/jini/serviceStarterAll.config
D src/resources/config/jini/serviceStarterOne.config
D src/resources/config/jini/zookeeper.config
The changes above should now allow one to run the tests, under
either Eclipse or Ant, in the following namespaces under
bigdata-jini/src/test:
com.bigdata.jini
com.bigdata.jini.start
com.bigdata.jini.start.config
com.bigdata.service.jini
com.bigdata.service.jini.master
com.bigdata.zookeeper
Note that the test com.bigdata.zookeeper.TestHierachicalZNodeWatcher
does not pass consistently, which may be because ZooKeeper does not
guarantee that all events will be seen, as indicated in the javadoc of
com.bigdata.zookeeper.HierarchicalZNodeWatcher.
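The caveat about missed events reflects ZooKeeper's one-shot watch semantics: a watch fires at most once and must be re-registered, so changes that happen between the fire and the re-registration are never delivered as separate events. A toy model of that behavior (ToyZNode is made up for illustration, not the ZooKeeper client API):

```python
class ToyZNode:
    """Minimal model of ZooKeeper's one-shot watch semantics: a watch
    fires once on the next change and must be re-registered; changes made
    before re-registration are never delivered individually."""

    def __init__(self, value):
        self.value = value
        self.watchers = []

    def watch(self, callback):
        self.watchers.append(callback)

    def set(self, value):
        self.value = value
        fired, self.watchers = self.watchers, []  # one-shot: cleared on fire
        for cb in fired:
            cb(self)

events = []
node = ToyZNode("v0")
node.watch(lambda n: events.append(n.value))

node.set("v1")  # watch fires, then is gone
node.set("v2")  # no watch registered: this change produces no event
node.watch(lambda n: events.append(n.value))
node.set("v3")  # the new watch fires; "v2" was silently skipped

print(events)  # ['v1', 'v3']
```

A test that expects to observe every intermediate state of a znode is therefore inherently racy, which is consistent with the flaky behavior noted above.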
--
Ticket URL: <http://sourceforge.net/apps/trac/bigdata/ticket/72#comment:10>
bigdata® <http://www.bigdata.com/blog>
bigdata® is a scale-out storage and computing fabric supporting optional transactions, very high concurrency, and very high aggregate IO rates.
|
|
From: Bryan T. <br...@sy...> - 2010-05-01 13:48:00
|
Jürgen, If you want to look into this, per Mike's comment, there are two implementations of "BaseClosure": com.bigdata.rdf.rules.FullClosure [1] and com.bigdata.rdf.rules.FastClosure [2] The "fast closure" algorithm is based on [3]. It's possible that some of the ordering of the OWL rules is incorrect, which might account for the problem you are seeing. You can test this easily enough by changing to the FullClosure algorithm. If you want to look at the code for [2] you might be able to spot the problem there. Thanks, Bryan [1] http://www.bigdata.com/bigdata/docs/api/com/bigdata/rdf/rules/FullClosure.html [2] http://www.bigdata.com/bigdata/docs/api/com/bigdata/rdf/rules/FastClosure.html [3] "An approach to RDF(S) Query, Manipulation and Inference on Databases" by Lu, Yu, Tu, Lin, and Zhang.", http://www.cs.iastate.edu/~tukw/waim05.pdf. > -----Original Message----- > From: Jürgen Jakobitsch [mailto:jak...@pu...] > Sent: Saturday, May 01, 2010 9:06 AM > To: Mike Personick > Cc: big...@li... > Subject: Re: [Bigdata-developers] OWL Inference > > hi mike, thanks for your time, > > i will check the source anyway - since i'd really like to use > bigdata, notably i'm really interested in some of the > features like scalability and provenance. > > also let me note, that i think that bigdata source code is > very well done! > > currently i have two options > > 1. owlim - which infers correctly, but has persistence > problems in some cases (so not suitable for production use) > amongst others 2. bigdata - which has infer problems, but has > features it has and which i consider very interesting > > if i find something, i sure let you know. > > wkr turnguard.com/turnguard > > ----- Original Message ----- > From: "Mike Personick" <mi...@sy...> > To: "Jürgen Jakobitsch" <jak...@pu...>, > big...@li... 
> Sent: Friday, April 30, 2010 9:34:20 PM > Subject: RE: [Bigdata-developers] OWL Inference > > Jurgen, > > I just wanted to let you know that while I have not gotten to > the bottom of the problem yet, I agree with you that there is > indeed a problem. > OWLIM's answer looks more correct to me than does our answer. > I don't understand why we are seeing (narrower, TechnoPop) > and (narrower, > ElectroPop) since narrower is not itself a transitive > property. I think the fact that we are not seeing > (semanticRelation, TechnoPop) and (semanticRelation, > ElectroPop) might have to do with our FastClosure inference > program. I suspect that using the FullClosure program might > solve the latter problem. I would be surprised if it solved > the former too, but it is possible. Maybe give it a shot. > (See AbstractTripleStore.Options.CLOSURE_CLASS for details.) > > Long story short, yes I think there is a bug. :) > > I will log an issue for it in our bug tracker system. > > If you are able to uncover the source of the problem please > let me know! > > Thanks, > Mike > > > -----Original Message----- > From: Jürgen Jakobitsch [mailto:jak...@pu...] > Sent: Wednesday, April 28, 2010 1:52 AM > To: big...@li... > Subject: [Bigdata-developers] OWL Inference > > hi, first of all congrats to this great project, i checked it > out once a couple of months ago and i can see that there some > significant improvements. > > however might have found a bug in the inference engine, that > can be reproduced like so : > > 1. instantiate a bigdata sail with fullfeatured properties 2. > add http://www.w3.org/2009/08/skos-reference/skos-owl1-dl.rdf > with baseURL http://www.w3.org/2004/02/skos/core > 3. 
add http://turnguard.com/virtuoso/test10.rdf > > now do the following sparql query > > SELECT * > WHERE { > <http://www.turnguard.com/Music> ?p ?o > } > > the result from bigdata looks like this > > http://www.w3.org/1999/02/22-rdf-syntax-ns#type > http://www.w3.org/2004/02/skos/core#Concept > http://www.w3.org/2000/01/rdf-schema#label "Music"@en > http://www.w3.org/2004/02/skos/core#narrower > http://www.turnguard.com/ElectroPop > http://www.w3.org/2004/02/skos/core#narrower > http://www.turnguard.com/Pop > http://www.w3.org/2004/02/skos/core#narrower > http://www.turnguard.com/TechnoPop > http://www.w3.org/2004/02/skos/core#narrowerTransitive > http://www.turnguard.com/ElectroPop > http://www.w3.org/2004/02/skos/core#narrowerTransitive > http://www.turnguard.com/Pop > http://www.w3.org/2004/02/skos/core#narrowerTransitive > http://www.turnguard.com/TechnoPop > http://www.w3.org/2004/02/skos/core#prefLabel "Music"@en > http://www.w3.org/2004/02/skos/core#semanticRelation > http://www.turnguard.com/Pop > http://www.w3.org/1999/02/22-rdf-syntax-ns#type > http://www.w3.org/2000/01/rdf-schema#Resource > > > the result from owlim (swiftowlim beta 12) like this > > http://www.w3.org/2004/02/skos/core#narrowerTransitive > http://www.turnguard.com/ElectroPop > http://www.w3.org/2004/02/skos/core#narrowerTransitive > http://www.turnguard.com/TechnoPop > http://www.w3.org/2004/02/skos/core#narrowerTransitive > http://www.turnguard.com/Pop > http://www.w3.org/1999/02/22-rdf-syntax-ns#type > http://www.w3.org/2004/02/skos/core#Concept > http://www.w3.org/1999/02/22-rdf-syntax-ns#type _:node1 > http://www.w3.org/2004/02/skos/core#semanticRelation > http://www.turnguard.com/ElectroPop > http://www.w3.org/2004/02/skos/core#semanticRelation > http://www.turnguard.com/TechnoPop > http://www.w3.org/2004/02/skos/core#semanticRelation > http://www.turnguard.com/Pop > http://www.w3.org/2004/02/skos/core#prefLabel "Music"@en > http://www.w3.org/2000/01/rdf-schema#label "Music"@en > 
http://www.w3.org/2004/02/skos/core#narrower > http://www.turnguard.com/Pop > > note that i'm positivly surprised by the inferred > rdf-schema#Resource with bigdata but there are significant > differences between the two inference engines, where owlim > seems to be right in most cases. > > please note that : > > in skos : narrower is a subProp of narrowerTransitive (same > for broader), both are inverse, the transitives are subProps > of semanticRelation, so the construct is a bit complex. > > any suggestions if i'm doing some wrong or this is a bug > really welcome. > > wkr www.turnguard.com/turnguard > > -- punkt. netServices > ______________________________ Jürgen Jakobitsch Codeography > > Lerchenfelder Gürtel 43 Top 5/2 > A - 1160 Wien > Tel.: 01 / 897 41 22 - 29 > Fax: 01 / 897 41 22 - 22 > > netServices http://www.punkt.at > > > -------------------------------------------------------------- > ---------------- > _______________________________________________ > Bigdata-developers mailing list > Big...@li... > https://lists.sourceforge.net/lists/listinfo/bigdata-developers > > -- > punkt. netServices > ______________________________ > Jürgen Jakobitsch > Codeography > > Lerchenfelder Gürtel 43 Top 5/2 > A - 1160 Wien > Tel.: 01 / 897 41 22 - 29 > Fax: 01 / 897 41 22 - 22 > > netServices http://www.punkt.at > > > -------------------------------------------------------------- > ---------------- > _______________________________________________ > Bigdata-developers mailing list > Big...@li... > https://lists.sourceforge.net/lists/listinfo/bigdata-developers > |
|
From: Bryan T. <br...@sy...> - 2010-05-01 13:41:05
|
Jürgen, You can host as many triple stores as you like in a single bigdata instance. Would that help? Each triple store has its own namespace. All of the indices for a given triple store are located within that namespace. For example, the default namespace is "kb". You could assign namespaces to each customer: "customer1" "customer2" etc. Triple stores created in this manner are completely disjoint, though they are in the same bigdata database instance. While this may not be relevant to your effort, we plan to add a feature to let you query across some or all triple (or quad) stores in a bigdata database using a protocol hint to identify the sources which you would like to consider. This is similar to the named/default graphs of SPARQL, but you are specifying the triple (or quad) store instances to be queried separately from the named/default graphs. This approach makes particular sense when you have many different customers who want their own quad stores so they can use their own interpretation of the context role in these quad stores, but you also have use cases for querying across data which those customers choose to "publish" as "shared" sources. Bryan > -----Original Message----- > From: Jürgen Jakobitsch [mailto:jak...@pu...] > Sent: Saturday, May 01, 2010 9:18 AM > To: Bryan Thompson > Cc: Mike Personick; big...@li... > Subject: Re: [Bigdata-developers] Inference and Quads > > hi thank you both for your answers. > > one usage pattern would be : > > we're developing a thesaurus management environment that > should be able to host many projects, where a project would > be a skos thesaurus of a certain knowledge domain. > > now we've two possibilities. > > either put every project in a different sail repository or > put all projects for one customer into a one single sail > repository and distinguish the projects by named graphs (quads). 
> > so first there's need for owl inference (for example for > narrowerTransitives, maybe just to check loops in the > thesaurus structure (a skos:narrower b skos:narrower c > skos:narrower a, for example) and there's need to get this > data not for all different thesauri but only for the > skos:concepts from one named graph (from one project) > > so the exact setup for us would be : > > put the skos:ontology in a namedgraph (a context) which > represents a project add skos:concepts (build the thesaurus) > to that namedgraph (context, project) infer > skos:narrowerTransitives for example from only this > namedgraph (context, project) > > for example : put a thesaurus about the semantic web into one > namedgraph (context, project) > put a thesaurus about the marketing department > of a company into another namedgraph (context, project) > > wkr www.turnguard.com/turnguard > > > ----- Original Message ----- > From: "Bryan Thompson" <br...@sy...> > To: "Mike Personick" <mi...@sy...>, "Jürgen Jakobitsch" > <jak...@pu...>, big...@li... > Sent: Saturday, May 1, 2010 11:24:57 AM > Subject: RE: [Bigdata-developers] Inference and Quads > > Jürgen, > > Let me add that we are interested in usage patterns for quads > with inference. For example, would you stores ontologies in > some context(s) and then use other contexts for managing > provenance but always desire entailments over all contexts? > > In that case, I believe that quads + inference may be > significantly simpler. > > Thanks, > Bryan > > > -----Original Message----- > > From: Mike Personick [mailto:mi...@sy...] > > Sent: Friday, April 30, 2010 3:41 PM > > To: Jürgen Jakobitsch; big...@li... > > Subject: Re: [Bigdata-developers] Inference and Quads > > > > Jurgen, > > > > The answer to inference with quads really lies in full query-time > > inference, which we do not yet support. 
We have sketchy > plans to work > > on query-time inference later this year after attending to several > > other priorities, most notably the high-availability > architecture for > > scale-out. > > > > The problem with quad inference is that supports for inferred > > statements can exist in different contexts, and it is not > known until > > query which contexts can be considered. Our model right now is to > > compute most inferences at load time. > > So if we compute inferences using supports from different > contexts, we > > would have to filter out query results based on whether they are > > supported by the specific contexts listed in the query. > > > > Thanks, > > Mike > > > > > > -----Original Message----- > > From: Jürgen Jakobitsch [mailto:jak...@pu...] > > Sent: Wednesday, April 28, 2010 1:55 AM > > To: big...@li... > > Subject: [Bigdata-developers] Inference and Quads > > > > hi, > > > > can you tell, when there will inference support with the quad store. > > > > we're currently starting development of the next version of out > > thesaurus management tool poolparty (poolparty.punkt.at) and are > > further evaluating triple stores. > > but we have a significant need for the combination of quads and > > inference. > > > > wkr turnguard.com/turnguard > > > > > > -- punkt. netServices > > ______________________________ Jürgen Jakobitsch Codeography > > > > Lerchenfelder Gürtel 43 Top 5/2 > > A - 1160 Wien > > Tel.: 01 / 897 41 22 - 29 > > Fax: 01 / 897 41 22 - 22 > > > > netServices http://www.punkt.at > > > > > > -------------------------------------------------------------- > > ---------------- > > _______________________________________________ Bigdata-developers > > mailing list Big...@li... > > https://lists.sourceforge.net/lists/listinfo/bigdata-developers > > > > -------------------------------------------------------------- > > ---------------- > > _______________________________________________ Bigdata-developers > > mailing list Big...@li... 
> > https://lists.sourceforge.net/lists/listinfo/bigdata-developers > > > > -- > punkt. netServices > ______________________________ > Jürgen Jakobitsch > Codeography > > Lerchenfelder Gürtel 43 Top 5/2 > A - 1160 Wien > Tel.: 01 / 897 41 22 - 29 > Fax: 01 / 897 41 22 - 22 > > netServices http://www.punkt.at > > |
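Bryan's namespacing scheme (one triple store per namespace, with every index the store owns named under that namespace, so stores in one database stay disjoint) can be sketched as follows. This is a toy with invented names, not bigdata's storage layer:

```python
class Database:
    """Toy model of namespace-per-store isolation: every index is keyed by
    a fully-qualified name, namespace.indexName, so two stores in the same
    database never touch each other's indices."""

    def __init__(self):
        self.indices = {}  # fully-qualified index name -> index contents

    def create_triple_store(self, namespace):
        # A real store keeps several statement indices; only the
        # namespace-qualified naming matters for this sketch.
        for idx in ("spo", "pos", "osp"):
            self.indices[f"{namespace}.{idx}"] = set()
        return TripleStore(self, namespace)

class TripleStore:
    def __init__(self, db, namespace):
        self.db, self.namespace = db, namespace

    def add(self, s, p, o):
        for idx in ("spo", "pos", "osp"):
            self.db.indices[f"{self.namespace}.{idx}"].add((s, p, o))

    def size(self):
        return len(self.db.indices[f"{self.namespace}.spo"])

db = Database()
kb1 = db.create_triple_store("customer1")
kb2 = db.create_triple_store("customer2")
kb1.add(":a", ":knows", ":b")

print(kb1.size(), kb2.size())  # 1 0 -- same database, disjoint stores
```

The design choice being illustrated: isolation comes purely from the naming convention, so adding a new customer is just creating indices under a fresh namespace, with no separate database instance required.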
|
From: Jürgen J. <jak...@pu...> - 2010-05-01 13:18:58
|
hi thank you both for your answers.
one usage pattern would be :
we're developing a thesaurus management environment that should be able to host many projects,
where a project would be a skos thesaurus of a certain knowledge domain.
now we've two possibilities.
either put every project in a different sail repository or put all projects for one customer
into one single sail repository and distinguish the projects by named graphs (quads).
so first there's a need for owl inference (for example for narrowerTransitives, or maybe just
to check for loops in the thesaurus structure: a skos:narrower b, b skos:narrower c,
c skos:narrower a, for example), and there's a need to get this data not for all the different
thesauri but only for the skos:concepts from one named graph (from one project).
so the exact setup for us would be :
put the skos:ontology in a namedgraph (a context) which represents a project
add skos:concepts (build the thesaurus) to that namedgraph (context, project)
infer skos:narrowerTransitives for example from only this namedgraph (context, project)
for example : put a thesaurus about the semantic web into one namedgraph (context, project)
put a thesaurus about the marketing department of a company into another namedgraph (context, project)
wkr www.turnguard.com/turnguard
----- Original Message -----
From: "Bryan Thompson" <br...@sy...>
To: "Mike Personick" <mi...@sy...>, "Jürgen Jakobitsch" <jak...@pu...>,
big...@li...
Sent: Saturday, May 1, 2010 11:24:57 AM
Subject: RE: [Bigdata-developers] Inference and Quads
Jürgen,
Let me add that we are interested in usage patterns for quads with
inference. For example, would you store ontologies in some context(s)
and then use other
contexts for managing provenance but always desire entailments over all
contexts?
In that case, I believe that quads + inference may be significantly
simpler.
Thanks,
Bryan
> -----Original Message-----
> From: Mike Personick [mailto:mi...@sy...]
> Sent: Friday, April 30, 2010 3:41 PM
> To: Jürgen Jakobitsch; big...@li...
> Subject: Re: [Bigdata-developers] Inference and Quads
>
> Jurgen,
>
> The answer to inference with quads really lies in full
> query-time inference, which we do not yet support. We have
> sketchy plans to work on query-time inference later this year
> after attending to several other priorities, most notably the
> high-availability architecture for scale-out.
>
> The problem with quad inference is that supports for inferred
> statements can exist in different contexts, and it is not
> known until query which contexts can be considered. Our
> model right now is to compute most inferences at load time.
> So if we compute inferences using supports from different
> contexts, we would have to filter out query results based on
> whether they are supported by the specific contexts listed in
> the query.
>
> Thanks,
> Mike
>
>
> -----Original Message-----
> From: Jürgen Jakobitsch [mailto:jak...@pu...]
> Sent: Wednesday, April 28, 2010 1:55 AM
> To: big...@li...
> Subject: [Bigdata-developers] Inference and Quads
>
> hi,
>
> can you tell when there will be inference support with the quad store.
>
> we're currently starting development of the next version of
> our thesaurus management tool poolparty (poolparty.punkt.at)
> and are further evaluating triple stores.
> but we have a significant need for the combination of quads
> and inference.
>
> wkr turnguard.com/turnguard
>
>
> -- punkt. netServices
> ______________________________ Jürgen Jakobitsch
> Codeography
>
> Lerchenfelder Gürtel 43 Top 5/2
> A - 1160 Wien
> Tel.: 01 / 897 41 22 - 29
> Fax: 01 / 897 41 22 - 22
>
> netServices http://www.punkt.at
>
>
> --------------------------------------------------------------
> ----------------
> _______________________________________________ Bigdata-developers
> mailing list
> Big...@li...
> https://lists.sourceforge.net/lists/listinfo/bigdata-developers
>
> --------------------------------------------------------------
> ----------------
> _______________________________________________ Bigdata-developers
> mailing list
> Big...@li...
> https://lists.sourceforge.net/lists/listinfo/bigdata-developers
>
--
punkt. netServices
______________________________
Jürgen Jakobitsch
Codeography
Lerchenfelder Gürtel 43 Top 5/2
A - 1160 Wien
Tel.: 01 / 897 41 22 - 29
Fax: 01 / 897 41 22 - 22
netServices http://www.punkt.at
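Jürgen's per-project requirement (compute entailments such as skos:narrowerTransitive from the triples of one named graph only) amounts to restricting forward chaining to the quads in a single context. A minimal sketch with toy data, making no claim about how bigdata's rule engine actually works:

```python
NARROWER = "skos:narrower"
NARROWER_T = "skos:narrowerTransitive"

def close_graph(quads, graph):
    """Forward-chain within one named graph only: narrower implies
    narrowerTransitive (subproperty), and narrowerTransitive is transitive."""
    g = {(s, p, o) for (s, p, o, c) in quads if c == graph}
    # subPropertyOf: every narrower edge is also a narrowerTransitive edge
    g |= {(s, NARROWER_T, o) for (s, p, o) in g if p == NARROWER}
    changed = True
    while changed:  # apply transitivity to a fixpoint
        new = {(s1, NARROWER_T, o2)
               for (s1, p1, o1) in g if p1 == NARROWER_T
               for (s2, p2, o2) in g if p2 == NARROWER_T and o1 == s2}
        changed = not new <= g
        g |= new
    return {(s, p, o, graph) for (s, p, o) in g}

quads = [
    (":Music", NARROWER, ":Pop", ":projectA"),
    (":Pop", NARROWER, ":TechnoPop", ":projectA"),
    (":Music", NARROWER, ":Jazz", ":projectB"),  # other project: untouched
]
closed = close_graph(quads, ":projectA")
print((":Music", NARROWER_T, ":TechnoPop", ":projectA") in closed)  # True
```

Because the closure only ever reads and writes quads in the requested context, a second project's graph (":projectB" above) contributes nothing, which is exactly the per-thesaurus isolation described in the mail.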
|
|
From: Jürgen J. <jak...@pu...> - 2010-05-01 13:07:47
|
hi mike, thanks for your time, i will check the source anyway - since i'd really like to use bigdata, notably i'm really interested in some of the features like scalability and provenance. also let me note, that i think that bigdata source code is very well done! currently i have two options 1. owlim - which infers correctly, but has persistence problems in some cases (so not suitable for production use) amongst others 2. bigdata - which has infer problems, but has features it has and which i consider very interesting if i find something, i sure let you know. wkr turnguard.com/turnguard ----- Original Message ----- From: "Mike Personick" <mi...@sy...> To: "Jürgen Jakobitsch" <jak...@pu...>, big...@li... Sent: Friday, April 30, 2010 9:34:20 PM Subject: RE: [Bigdata-developers] OWL Inference Jurgen, I just wanted to let you know that while I have not gotten to the bottom of the problem yet, I agree with you that there is indeed a problem. OWLIM's answer looks more correct to me than does our answer. I don't understand why we are seeing (narrower, TechnoPop) and (narrower, ElectroPop) since narrower is not itself a transitive property. I think the fact that we are not seeing (semanticRelation, TechnoPop) and (semanticRelation, ElectroPop) might have to do with our FastClosure inference program. I suspect that using the FullClosure program might solve the latter problem. I would be surprised if it solved the former too, but it is possible. Maybe give it a shot. (See AbstractTripleStore.Options.CLOSURE_CLASS for details.) Long story short, yes I think there is a bug. :) I will log an issue for it in our bug tracker system. If you are able to uncover the source of the problem please let me know! Thanks, Mike -----Original Message----- From: Jürgen Jakobitsch [mailto:jak...@pu...] Sent: Wednesday, April 28, 2010 1:52 AM To: big...@li... 
Subject: [Bigdata-developers] OWL Inference hi, first of all congrats to this great project, i checked it out once a couple of months ago and i can see that there some significant improvements. however might have found a bug in the inference engine, that can be reproduced like so : 1. instantiate a bigdata sail with fullfeatured properties 2. add http://www.w3.org/2009/08/skos-reference/skos-owl1-dl.rdf with baseURL http://www.w3.org/2004/02/skos/core 3. add http://turnguard.com/virtuoso/test10.rdf now do the following sparql query SELECT * WHERE { <http://www.turnguard.com/Music> ?p ?o } the result from bigdata looks like this http://www.w3.org/1999/02/22-rdf-syntax-ns#type http://www.w3.org/2004/02/skos/core#Concept http://www.w3.org/2000/01/rdf-schema#label "Music"@en http://www.w3.org/2004/02/skos/core#narrower http://www.turnguard.com/ElectroPop http://www.w3.org/2004/02/skos/core#narrower http://www.turnguard.com/Pop http://www.w3.org/2004/02/skos/core#narrower http://www.turnguard.com/TechnoPop http://www.w3.org/2004/02/skos/core#narrowerTransitive http://www.turnguard.com/ElectroPop http://www.w3.org/2004/02/skos/core#narrowerTransitive http://www.turnguard.com/Pop http://www.w3.org/2004/02/skos/core#narrowerTransitive http://www.turnguard.com/TechnoPop http://www.w3.org/2004/02/skos/core#prefLabel "Music"@en http://www.w3.org/2004/02/skos/core#semanticRelation http://www.turnguard.com/Pop http://www.w3.org/1999/02/22-rdf-syntax-ns#type http://www.w3.org/2000/01/rdf-schema#Resource the result from owlim (swiftowlim beta 12) like this http://www.w3.org/2004/02/skos/core#narrowerTransitive http://www.turnguard.com/ElectroPop http://www.w3.org/2004/02/skos/core#narrowerTransitive http://www.turnguard.com/TechnoPop http://www.w3.org/2004/02/skos/core#narrowerTransitive http://www.turnguard.com/Pop http://www.w3.org/1999/02/22-rdf-syntax-ns#type http://www.w3.org/2004/02/skos/core#Concept http://www.w3.org/1999/02/22-rdf-syntax-ns#type _:node1 
http://www.w3.org/2004/02/skos/core#semanticRelation http://www.turnguard.com/ElectroPop http://www.w3.org/2004/02/skos/core#semanticRelation http://www.turnguard.com/TechnoPop http://www.w3.org/2004/02/skos/core#semanticRelation http://www.turnguard.com/Pop http://www.w3.org/2004/02/skos/core#prefLabel "Music"@en http://www.w3.org/2000/01/rdf-schema#label "Music"@en http://www.w3.org/2004/02/skos/core#narrower http://www.turnguard.com/Pop note that i'm positivly surprised by the inferred rdf-schema#Resource with bigdata but there are significant differences between the two inference engines, where owlim seems to be right in most cases. please note that : in skos : narrower is a subProp of narrowerTransitive (same for broader), both are inverse, the transitives are subProps of semanticRelation, so the construct is a bit complex. any suggestions if i'm doing some wrong or this is a bug really welcome. wkr www.turnguard.com/turnguard -- punkt. netServices ______________________________ Jürgen Jakobitsch Codeography Lerchenfelder Gürtel 43 Top 5/2 A - 1160 Wien Tel.: 01 / 897 41 22 - 29 Fax: 01 / 897 41 22 - 22 netServices http://www.punkt.at ------------------------------------------------------------------------------ _______________________________________________ Bigdata-developers mailing list Big...@li... https://lists.sourceforge.net/lists/listinfo/bigdata-developers -- punkt. netServices ______________________________ Jürgen Jakobitsch Codeography Lerchenfelder Gürtel 43 Top 5/2 A - 1160 Wien Tel.: 01 / 897 41 22 - 29 Fax: 01 / 897 41 22 - 22 netServices http://www.punkt.at |
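The missing (semanticRelation, TechnoPop) and (semanticRelation, ElectroPop) results discussed in this thread are exactly what the RDFS subproperty rule (rdfs7) predicts, given that skos:narrower is a subproperty of skos:narrowerTransitive, which is in turn a subproperty of skos:semanticRelation. A minimal sketch over toy triples, not the bigdata FastClosure/FullClosure programs:

```python
# SKOS property hierarchy described in the thread: narrower is a subproperty
# of narrowerTransitive, which is a subproperty of semanticRelation.
SUBPROP = {
    "skos:narrower": "skos:narrowerTransitive",
    "skos:narrowerTransitive": "skos:semanticRelation",
}

def entail_subproperties(triples):
    """rdfs7: (s p o) and (p subPropertyOf q) entail (s q o). Applied to a
    fixpoint, so narrower edges also become semanticRelation edges."""
    out = set(triples)
    changed = True
    while changed:
        new = {(s, SUBPROP[p], o) for (s, p, o) in out if p in SUBPROP}
        changed = not new <= out
        out |= new
    return out

data = {
    (":Music", "skos:narrower", ":Pop"),
    (":Music", "skos:narrower", ":TechnoPop"),
    (":Music", "skos:narrower", ":ElectroPop"),
}
closed = entail_subproperties(data)
# OWLIM's answer matches what rdfs7 predicts: a semanticRelation triple
# for every narrower triple, including TechnoPop and ElectroPop.
print((":Music", "skos:semanticRelation", ":TechnoPop") in closed)  # True
```

Any complete closure program should produce these triples for the SKOS ontology, which is why the absence of the two semanticRelation results points at the ordering of the fast-closure rules rather than at the data.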
|
From: Bryan T. <br...@sy...> - 2010-05-01 09:25:59
|
Jürgen, Let me add that we are interested in usage patterns for quads with inference. For example, would you stores ontologies in some context(s) and then use other contexts for managing provenance but always desire entailments over all contexts? In that case, I believe that quads + inference may be significantly simpler. Thanks, Bryan > -----Original Message----- > From: Mike Personick [mailto:mi...@sy...] > Sent: Friday, April 30, 2010 3:41 PM > To: Jürgen Jakobitsch; big...@li... > Subject: Re: [Bigdata-developers] Inference and Quads > > Jurgen, > > The answer to inference with quads really lies in full > query-time inference, which we do not yet support. We have > sketchy plans to work on query-time inference later this year > after attending to several other priorities, most notably the > high-availability architecture for scale-out. > > The problem with quad inference is that supports for inferred > statements can exist in different contexts, and it is not > known until query which contexts can be considered. Our > model right now is to compute most inferences at load time. > So if we compute inferences using supports from different > contexts, we would have to filter out query results based on > whether they are supported by the specific contexts listed in > the query. > > Thanks, > Mike > > > -----Original Message----- > From: Jürgen Jakobitsch [mailto:jak...@pu...] > Sent: Wednesday, April 28, 2010 1:55 AM > To: big...@li... > Subject: [Bigdata-developers] Inference and Quads > > hi, > > can you tell, when there will inference support with the quad store. > > we're currently starting development of the next version of > out thesaurus management tool poolparty (poolparty.punkt.at) > and are further evaluating triple stores. > but we have a significant need for the combination of quads > and inference. > > wkr turnguard.com/turnguard > > > -- > punkt. 
netServices > ______________________________ > Jürgen Jakobitsch > Codeography > > Lerchenfelder Gürtel 43 Top 5/2 > A - 1160 Wien > Tel.: 01 / 897 41 22 - 29 > Fax: 01 / 897 41 22 - 22 > > netServices http://www.punkt.at > > > -------------------------------------------------------------- > ---------------- > _______________________________________________ > Bigdata-developers mailing list > Big...@li... > https://lists.sourceforge.net/lists/listinfo/bigdata-developers > > -------------------------------------------------------------- > ---------------- > _______________________________________________ > Bigdata-developers mailing list > Big...@li... > https://lists.sourceforge.net/lists/listinfo/bigdata-developers > |
|
From: Mike P. <mi...@sy...> - 2010-04-30 19:42:55
|
Jurgen, The answer to inference with quads really lies in full query-time inference, which we do not yet support. We have sketchy plans to work on query-time inference later this year after attending to several other priorities, most notably the high-availability architecture for scale-out. The problem with quad inference is that the supports for inferred statements can exist in different contexts, and it is not known until query time which contexts can be considered. Our model right now is to compute most inferences at load time. So if we compute inferences using supports from different contexts, we would have to filter out query results based on whether they are supported by the specific contexts listed in the query. Thanks, Mike |
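Mike's point about query-time filtering can be sketched as follows. This is a hypothetical, self-contained illustration, not bigdata code — every name in it is invented for the example. Each inferred quad carries the set of contexts whose supporting statements it was derived from, and a query scoped to particular contexts only sees inferences whose supports all fall inside that scope.

```java
import java.util.*;

// Hypothetical sketch (names are illustrative, not bigdata APIs): each
// inferred statement records the contexts of the statements that support it.
// At query time, an inference is only visible if every support falls
// within the contexts named by the query.
public class QueryTimeContextFilter {

    record Inference(String s, String p, String o, Set<String> supportContexts) {}

    static List<Inference> visible(List<Inference> inferred, Set<String> queryContexts) {
        List<Inference> out = new ArrayList<>();
        for (Inference i : inferred) {
            if (queryContexts.containsAll(i.supportContexts())) {
                out.add(i); // all supports lie within the queried contexts
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<Inference> inferred = List.of(
            // supported entirely by statements in context g1
            new Inference(":a", ":type", ":C", Set.of("g1")),
            // supported by statements spread across g1 and g2
            new Inference(":a", ":type", ":D", Set.of("g1", "g2")));

        // A query scoped to g1 alone must drop the cross-context inference.
        System.out.println(visible(inferred, Set.of("g1")).size());       // 1
        System.out.println(visible(inferred, Set.of("g1", "g2")).size()); // 2
    }
}
```

This is why load-time inference over quads forces a filtering step at query time: the materialized inference does not belong to any single context of the query.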
|
From: Mike P. <mi...@sy...> - 2010-04-30 19:38:27
|
Jurgen, I just wanted to let you know that while I have not gotten to the bottom of the problem yet, I agree with you that there is indeed a problem. OWLIM's answer looks more correct to me than does our answer. I don't understand why we are seeing (narrower, TechnoPop) and (narrower, ElectroPop), since narrower is not itself a transitive property. I think the fact that we are not seeing (semanticRelation, TechnoPop) and (semanticRelation, ElectroPop) might have to do with our FastClosure inference program. I suspect that using the FullClosure program might solve the latter problem. I would be surprised if it solved the former too, but it is possible. Maybe give it a shot. (See AbstractTripleStore.Options.CLOSURE_CLASS for details.) Long story short, yes, I think there is a bug. :) I will log an issue for it in our bug tracker system. If you are able to uncover the source of the problem, please let me know! Thanks, Mike
-----Original Message-----
From: Jürgen Jakobitsch [mailto:jak...@pu...]
Sent: Wednesday, April 28, 2010 1:52 AM
To: big...@li...
Subject: [Bigdata-developers] OWL Inference
hi, first of all congrats to this great project. i checked it out once a couple of months ago and i can see that there are some significant improvements. however i might have found a bug in the inference engine, which can be reproduced like so:
1. instantiate a bigdata sail with full-featured properties
2. add http://www.w3.org/2009/08/skos-reference/skos-owl1-dl.rdf with baseURL http://www.w3.org/2004/02/skos/core
3. add http://turnguard.com/virtuoso/test10.rdf
now do the following sparql query: SELECT * WHERE { <http://www.turnguard.com/Music> ?p ?o }
the result from bigdata looks like this:
http://www.w3.org/1999/02/22-rdf-syntax-ns#type http://www.w3.org/2004/02/skos/core#Concept
http://www.w3.org/2000/01/rdf-schema#label "Music"@en
http://www.w3.org/2004/02/skos/core#narrower http://www.turnguard.com/ElectroPop
http://www.w3.org/2004/02/skos/core#narrower http://www.turnguard.com/Pop
http://www.w3.org/2004/02/skos/core#narrower http://www.turnguard.com/TechnoPop
http://www.w3.org/2004/02/skos/core#narrowerTransitive http://www.turnguard.com/ElectroPop
http://www.w3.org/2004/02/skos/core#narrowerTransitive http://www.turnguard.com/Pop
http://www.w3.org/2004/02/skos/core#narrowerTransitive http://www.turnguard.com/TechnoPop
http://www.w3.org/2004/02/skos/core#prefLabel "Music"@en
http://www.w3.org/2004/02/skos/core#semanticRelation http://www.turnguard.com/Pop
http://www.w3.org/1999/02/22-rdf-syntax-ns#type http://www.w3.org/2000/01/rdf-schema#Resource
the result from owlim (swiftowlim beta 12) looks like this:
http://www.w3.org/2004/02/skos/core#narrowerTransitive http://www.turnguard.com/ElectroPop
http://www.w3.org/2004/02/skos/core#narrowerTransitive http://www.turnguard.com/TechnoPop
http://www.w3.org/2004/02/skos/core#narrowerTransitive http://www.turnguard.com/Pop
http://www.w3.org/1999/02/22-rdf-syntax-ns#type http://www.w3.org/2004/02/skos/core#Concept
http://www.w3.org/1999/02/22-rdf-syntax-ns#type _:node1
http://www.w3.org/2004/02/skos/core#semanticRelation http://www.turnguard.com/ElectroPop
http://www.w3.org/2004/02/skos/core#semanticRelation http://www.turnguard.com/TechnoPop
http://www.w3.org/2004/02/skos/core#semanticRelation http://www.turnguard.com/Pop
http://www.w3.org/2004/02/skos/core#prefLabel "Music"@en
http://www.w3.org/2000/01/rdf-schema#label "Music"@en
http://www.w3.org/2004/02/skos/core#narrower http://www.turnguard.com/Pop
note that i'm positively surprised by the inferred rdf-schema#Resource with bigdata, but there are significant differences between the two inference engines, where owlim seems to be right in most cases. please note that in skos: narrower is a subProp of narrowerTransitive (same for broader), both are inverse, and the transitives are subProps of semanticRelation, so the construct is a bit complex. any suggestions as to whether i'm doing something wrong or this is really a bug are welcome. wkr www.turnguard.com/turnguard -- punkt. netServices |
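The SKOS property hierarchy Jürgen describes can be replayed with a tiny fixed-point sketch. This is not bigdata's FastClosure or FullClosure program; it is a self-contained toy that applies only the three axioms at issue (narrower subPropertyOf narrowerTransitive, narrowerTransitive transitive, narrowerTransitive subPropertyOf semanticRelation) to show which of the disputed triples are actually entailed.

```java
import java.util.*;

// Minimal fixed-point sketch (hypothetical, not bigdata's inference engine)
// of the three SKOS axioms at issue. It shows why (Music, semanticRelation,
// TechnoPop) is entailed while (Music, narrower, TechnoPop) is not:
// narrower itself is not a transitive property.
public class SkosClosureSketch {

    record Triple(String s, String p, String o) {}

    static Set<Triple> closure(Set<Triple> facts) {
        Set<Triple> kb = new HashSet<>(facts);
        boolean changed = true;
        while (changed) {
            changed = false;
            for (Triple t : new ArrayList<>(kb)) {
                // narrower subPropertyOf narrowerTransitive
                if (t.p().equals("narrower"))
                    changed |= kb.add(new Triple(t.s(), "narrowerTransitive", t.o()));
                if (t.p().equals("narrowerTransitive")) {
                    // narrowerTransitive subPropertyOf semanticRelation
                    changed |= kb.add(new Triple(t.s(), "semanticRelation", t.o()));
                    // narrowerTransitive is an owl:TransitiveProperty
                    for (Triple u : new ArrayList<>(kb))
                        if (u.p().equals("narrowerTransitive") && u.s().equals(t.o()))
                            changed |= kb.add(new Triple(t.s(), "narrowerTransitive", u.o()));
                }
            }
        }
        return kb;
    }

    public static void main(String[] args) {
        Set<Triple> kb = closure(Set.of(
            new Triple("Music", "narrower", "Pop"),
            new Triple("Pop", "narrower", "TechnoPop")));
        // Entailed via the transitive super-property and its super-property:
        System.out.println(kb.contains(new Triple("Music", "semanticRelation", "TechnoPop"))); // true
        // Not entailed, matching OWLIM's answer rather than bigdata's:
        System.out.println(kb.contains(new Triple("Music", "narrower", "TechnoPop"))); // false
    }
}
```

Under these axioms alone, OWLIM's result set is the expected one, which supports Mike's reading that the extra narrower triples are a bug.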
|
From: b. <no...@so...> - 2010-04-28 21:35:58
|
#72: Testing - Make changes to allow all tests of TestServiceStarter to pass
when run using both ant & eclipse
--------------------------------+-------------------------------------------
Reporter: btmurphy | Owner: btmurphy
Type: defect | Status: new
Priority: major | Milestone:
Component: Bigdata Federation | Version:
Keywords: |
--------------------------------+-------------------------------------------
Comment(by btmurphy):
Changeset 2718 & 2720
M bigdata-jini/src/test/com/bigdata/zookeeper/testzoo.config
M bigdata-jini/src/resources/config/bigdataStandaloneTesting.config
--
Ticket URL: <http://sourceforge.net/apps/trac/bigdata/ticket/72#comment:9>
bigdata® <http://www.bigdata.com/blog>
bigdata® is a scale-out storage and computing fabric supporting optional transactions, very high concurrency, and very high aggregate IO rates.
|
|
From: b. <no...@so...> - 2010-04-28 18:53:52
|
#72: Testing - Make changes to allow all tests of TestServiceStarter to pass
when run using both ant & eclipse
--------------------------------+-------------------------------------------
Reporter: btmurphy | Owner: btmurphy
Type: defect | Status: new
Priority: major | Milestone:
Component: Bigdata Federation | Version:
Keywords: |
--------------------------------+-------------------------------------------
Comment(by btmurphy):
More changes to allow the use of either ant or eclipse
to run the tests; in particular, the tests under
com.bigdata.service.jini, which use the config file
src/resources/config/bigdataStandaloneTesting.config
(changeset 2716).
M bigdata-jini/src/test/com/bigdata/jini/start/testjini.config
M bigdata-jini/src/resources/config/bigdataStandaloneTesting.config
--
Ticket URL: <http://sourceforge.net/apps/trac/bigdata/ticket/72#comment:8>
|
|
From: b. <no...@so...> - 2010-04-28 15:52:40
|
#72: Testing - Make changes to allow all tests of TestServiceStarter to pass
when run using both ant & eclipse
--------------------------------+-------------------------------------------
Reporter: btmurphy | Owner: btmurphy
Type: defect | Status: new
Priority: major | Milestone:
Component: Bigdata Federation | Version:
Keywords: |
--------------------------------+-------------------------------------------
Comment(by btmurphy):
Made changes to the files listed below to allow one
to run the test TestJiniCoreServicesProcessHelper using
either ant or eclipse. That test now uses the Jini
ServiceStarter to start a lookup service and an httpd
class server, verifies that the lookup service was
started, and then shuts down both the lookup service
and the class server. The changed and added files have
been checked in to the 'dev-btm' branch (changeset 2713).
M bigdata-
jini/src/test/com/bigdata/jini/start/TestJiniCoreServicesProcessHelper.java
M bigdata-jini/src/test/com/bigdata/jini/start/testjini.config
M bigdata-
jini/src/java/com/bigdata/jini/start/config/JiniCoreServicesConfiguration.java
M build.xml
A bigdata-jini/src/test/com/bigdata/jini/start/testReggie.config
A bigdata-jini/src/test/com/bigdata/jini/start/testStartJini.config
--
Ticket URL: <http://sourceforge.net/apps/trac/bigdata/ticket/72#comment:7>
|
|
From: Jürgen J. <jak...@pu...> - 2010-04-28 07:56:14
|
hi, can you tell when there will be inference support with the quad store? we're currently starting development of the next version of our thesaurus management tool poolparty (poolparty.punkt.at) and are further evaluating triple stores. but we have a significant need for the combination of quads and inference. wkr turnguard.com/turnguard -- punkt. netServices ______________________________ Jürgen Jakobitsch Codeography Lerchenfelder Gürtel 43 Top 5/2 A - 1160 Wien Tel.: 01 / 897 41 22 - 29 Fax: 01 / 897 41 22 - 22 netServices http://www.punkt.at |
|
From: Bryan T. <br...@sy...> - 2010-04-24 14:34:48
|
Stephane, I have updated the documentation on the wiki to further clarify the install process, please take a look. If you copied the Sesame WARs into a running tomcat instance and tomcat was still running when you executed the 'ant install.sesame.server' step, then you probably need to restart tomcat so it can notice the jars installed by the 'ant install.sesame.server' task (there should be 15 of them, including the bigdata jar you mention below). Thanks, Bryan |
|
From: Stephane F. <sf...@sm...> - 2010-04-23 04:26:28
|
Hi, I am trying to install BigData using Sesame HTTP Server following the instructions posted at: sourceforge.net/apps/mediawiki/bigdata/index.php?title=Using_Bigdata_with_the_OpenRDF_Sesame_HTTP_Server . My configuration is: JVM: Java 1.6.0.12, Tomcat 6.0.16, Windows XP. I managed to run all the steps except the last one (to run the DemoSesameServer or using the Sesame Workbench). I have the following exception:
javax.servlet.ServletException: org.openrdf.repository.RepositoryException: org.openrdf.repository.config.RepositoryConfigException: Unsupported Sail type: bigdata:BigdataSail
org.openrdf.workbench.base.TransformationServlet.service(TransformationServlet.java:80)
org.openrdf.workbench.base.BaseServlet.service(BaseServlet.java:40)
org.openrdf.workbench.proxy.ProxyRepositoryServlet.service(ProxyRepositoryServlet.java:93)
org.openrdf.workbench.proxy.WorkbenchServlet.service(WorkbenchServlet.java:131)
org.openrdf.workbench.proxy.WorkbenchServlet.service(WorkbenchServlet.java:90)
org.openrdf.workbench.proxy.WorkbenchGateway.service(WorkbenchGateway.java:97)
org.openrdf.workbench.base.BaseServlet.service(BaseServlet.java:40)
org.openrdf.workbench.proxy.CookieCacheControlFilter.doFilter(CookieCacheControlFilter.java:52)
This indicates that the BigdataSail service provider (SPI) is not found by the ServiceRegistry used by Sesame. I checked the jar file bigdata-0.82b-220410.jar in the openrdf-sesame/WEB-INF/lib directory. It contains the META-INF/services with the adequate provider. Any clue how to fix this problem? Stephane Fellah, MS, B.Sc Lead Software Engineer smartRealm LLC |
|
From: b. <no...@so...> - 2010-04-22 15:18:25
|
#72: Testing - Make changes to allow all tests of TestServiceStarter to pass
when run using both ant & eclipse
--------------------------------+-------------------------------------------
Reporter: btmurphy | Owner: btmurphy
Type: defect | Status: new
Priority: major | Milestone:
Component: Bigdata Federation | Version:
Keywords: |
--------------------------------+-------------------------------------------
Comment(by btmurphy):
Made similar changes to allow the tests in com.bigdata.jini.start.config
to run and pass under either eclipse or ant. The changes proposed
have also been checked in to the 'dev-btm' branch (changeset 2669).
The files changed are as follows:
M bigdata-jini/src/test/com/bigdata/jini/start/config/testfed.config
M bigdata-
jini/src/test/com/bigdata/jini/start/config/TestServiceConfiguration.java
--
Ticket URL: <http://sourceforge.net/apps/trac/bigdata/ticket/72#comment:6>
|
|
From: Bryan T. <br...@sy...> - 2010-04-22 00:28:09
|
Stephane, Not at this time. There is some interest in this and I expect that one will be available soon. We bundle all dependencies into SVN rather than in a public maven repository, so a Maven POM will be mostly useful for companies who maintain their own maven repositories for internal deployments. We are actively moving to simplify the deployment, configuration and administration of highly available bigdata clusters. At this time, it seems likely that we will create an RPM and perhaps a debian package for easier deployment. Thanks, Bryan |
|
From: Stephane F. <sf...@sm...> - 2010-04-21 22:03:44
|
I am currently evaluating BigData as a large scale RDF store. I was wondering if there is any maven 2 pom.xml available for the project. Thank you. -- Stephane Fellah, MS, B.Sc Lead Software Engineer smartRealm LLC 203 Loudoun St. SW suite #200 Leesburg, VA 20176 Tel: 703 669 5514 Cell: 703 447 2078 Fax: 703 669 5515 |
|
From: b. <no...@so...> - 2010-04-21 19:05:16
|
#72: Testing - Make changes to allow all tests of TestServiceStarter to pass
when run using both ant & eclipse
--------------------------------+-------------------------------------------
Reporter: btmurphy | Owner: btmurphy
Type: defect | Status: new
Priority: major | Milestone:
Component: Bigdata Federation | Version:
Keywords: |
--------------------------------+-------------------------------------------
Comment(by btmurphy):
The changes proposed in the comments above have been
checked in to the branch named 'dev-btm' (changeset 2663)
and should probably be reviewed before they're
merged to the trunk, which I'll do in a day or two.
The files changed are as follows:
M bigdata-jini/src/test/com/bigdata/jini/start/testfed.config
M bigdata-jini/src/test/com/bigdata/jini/start/TestServiceStarter.java
M bigdata-jini/src/test/com/bigdata/jini/start/AbstractFedZooTestCase.java
M bigdata-
jini/src/java/com/bigdata/jini/start/config/JiniServiceConfiguration.java
M src/resources/bin/pstart
M build.xml
A src/resources/bin/config
A src/resources/bin/config/reggie.config
A src/resources/bin/config/browser.config
A src/resources/bin/config/zookeeper.config
A src/resources/bin/config/serviceStarter.config
D src/resources/config/jini/serviceStarterAll.config
D src/resources/config/jini/reggie.config.tmp
D src/resources/config/jini/browser.config.tmp
D src/resources/config/jini/zookeeper.config
D src/resources/config/jini/serviceStarterOne.config
--
Ticket URL: <http://sourceforge.net/apps/trac/bigdata/ticket/72#comment:5>
|
|
From: b. <no...@so...> - 2010-04-21 18:18:48
|
#72: Testing - Make changes to allow all tests of TestServiceStarter to pass
when run using both ant & eclipse
--------------------------------+-------------------------------------------
Reporter: btmurphy | Owner: btmurphy
Type: defect | Status: new
Priority: major | Milestone:
Component: Bigdata Federation | Version:
Keywords: |
--------------------------------+-------------------------------------------
Comment(by btmurphy):
To provide a convenient mechanism for externally
running a lookup service when running a test (such
as TestServiceStarter) under eclipse, as well as to
address changeset 2516, the following additions/deletions
are being proposed as part of the fix for this
trac issue:
1. Add the directory, src/resources/bin/config
2. Under src/resources/bin/config, add the following
config files: reggie.config, browser.config, zookeeper.config,
and serviceStarter.config
3. Under src/resources/config/jini/, delete the
following config files: serviceStarterAll.config,
serviceStarterOne.config, and zookeeper.config,
as well as, browser.config.tmp and reggie.config.tmp
4. Make the appropriate changes to build.xml to
accommodate the new config files (set new directory
properties, copy the config files to the staging
area, etc.)
5. Make the necessary changes to src/resources/bin/pstart
to use the new config files.
Doing the above should then allow one who wishes to
launch TestServiceStarter from eclipse to do something
like the following:
cmdWinA: cd <bigdata-dir>
cmdWinA: ./dist/bigdata/bin/pstart --mGroups=testFed reggie
cmdWinB: cd <bigdata-dir> [*** optional ***]
cmdWinB: ./dist/bigdata/bin/pstart --groups=testFed browser
[Note that reggie must be started with mGroups (groups
is optional for reggie), whereas the browser and other
services are started with groups.]
Once a lookup service has been started with member groups
equal to "testFed", TestServiceStarter tests can be
launched from eclipse.
--
Ticket URL: <http://sourceforge.net/apps/trac/bigdata/ticket/72#comment:4>
|
|
From: b. <no...@so...> - 2010-04-21 17:49:26
|
#72: Testing - Make changes to allow all tests of TestServiceStarter to pass
when run using both ant & eclipse
--------------------------------+-------------------------------------------
Reporter: btmurphy | Owner: btmurphy
Type: defect | Status: new
Priority: major | Milestone:
Component: Bigdata Federation | Version:
Keywords: |
--------------------------------+-------------------------------------------
Comment(by btmurphy):
To address the issues related to how the Zookeeper
client and server are handled in TestServiceStarter, the
following changes are proposed for the
TestServiceStarter.test_startServer test:
1. Move the instantiation of the new serviceStarter
to occur before the creation of the "physical services"
znode.
2. Change the creation of the "physical services" znode
to take the serialized form of serviceStarter.UUID
instead of an empty byte array.
3. Change assertEquals(1, children.size()); to
assertEquals(2, children.size());
4. Prior to the call to ZNodeDeletedWatcher.awaitDelete,
call com.bigdata.zookeeper.ZooHelper.destroyZNodes with
a depth of 1. (Or simply remove the call to awaitDelete,
and let tearDown remove the znodes.)
5. In com.bigdata.jini.start.AbstractFedZooTestCase.setUp,
remove the try-catch block in which
zookeeper.create("/test", ... ) is called.
6. In com.bigdata.jini.start.config.JiniServiceConfiguration,
in the writeGroups and writeLocators methods, handle the
case where there is more than one element in the set,
by inserting a comma between each element that is written
to the output file.
--
Ticket URL: <http://sourceforge.net/apps/trac/bigdata/ticket/72#comment:3>
|
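Item 6 above can be illustrated with a small sketch. The class and method names here are invented for illustration (the real code lives in JiniServiceConfiguration's writeGroups and writeLocators methods); the point is only the separator handling: without a comma between elements, two quoted entries run together into one unparseable token in the generated config file.

```java
import java.util.*;

// Hypothetical sketch of the comma-separation fix from item 6. Names are
// illustrative only, not the actual JiniServiceConfiguration code.
public class CommaJoinSketch {

    // Broken shape: quoted elements concatenated with no separator.
    static String writeBroken(List<String> groups) {
        StringBuilder sb = new StringBuilder();
        for (String g : groups) sb.append('"').append(g).append('"');
        return sb.toString();
    }

    // Fixed shape: a comma inserted between each quoted element.
    static String writeFixed(List<String> groups) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < groups.size(); i++) {
            if (i > 0) sb.append(", ");
            sb.append('"').append(groups.get(i)).append('"');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        List<String> groups = List.of("testFed", "zroot_testFed");
        System.out.println(writeBroken(groups)); // "testFed""zroot_testFed"
        System.out.println(writeFixed(groups));  // "testFed", "zroot_testFed"
    }
}
```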
|
From: b. <no...@so...> - 2010-04-21 17:27:33
|
#72: Testing - Make changes to allow all tests of TestServiceStarter to pass
when run using both ant & eclipse
--------------------------------+-------------------------------------------
Reporter: btmurphy | Owner: btmurphy
Type: defect | Status: new
Priority: major | Milestone:
Component: Bigdata Federation | Version:
Keywords: |
--------------------------------+-------------------------------------------
Comment(by btmurphy):
To address the issue related to changeset 2614
and the overriding of the groups to join, rather
than overloading the fedname, a zrootname should
be added to com/bigdata/jini/start/testfed.config,
and the JiniClient groups should be augmented to
include both the fedname and the new zrootname.
Related to the above, references to 'fedname' in
com.bigdata.jini.start.AbstractFedZooTestCase.java
should be changed to zrootname.
Additionally, to address the ClassNotFoundException
issue (QuorumPeerMain) when launching TestServiceStarter
using ant, under the run-junit target of build.xml,
the java.class.path system property should be set to
the value of the run.class.path property.
--
Ticket URL: <http://sourceforge.net/apps/trac/bigdata/ticket/72#comment:2>
|
|
From: b. <no...@so...> - 2010-04-21 17:10:02
|
#72: Testing - Make changes to allow all tests of TestServiceStarter to pass
when run using both ant & eclipse
--------------------------------+-------------------------------------------
Reporter: btmurphy | Owner: btmurphy
Type: defect | Status: new
Priority: major | Milestone:
Component: Bigdata Federation | Version:
Keywords: |
--------------------------------+-------------------------------------------
Comment(by btmurphy):
To set a policy file on the VM of a test being launched
by eclipse, do something like the following, (using
com.bigdata.jini.start.TestAll.java as an example):
1. In the Package Explorer of eclipse, double-click
"bigdata-jini/src/test"
2. Under "bigdata-jini/src/test", expand "com.bigdata.jini.start"
3. Right-click "TestAll.java"
4. Select "Properties" at the bottom of the drop-down menu
5. If not already selected, select "Run/Debug Settings"
(in the left hand window)
6. Select "TestAll" in the "Launch configurations" window
7. Click the button labeled "Edit..."
8. In the "Edit Configuration" window that appears, click on
the tab labeled "(x)= Arguments"
9. In the window labeled "VM arguments", enter the following:
-Djava.security.policy=src/resources/config/policy.all
10. Click "OK" in all windows
Note that I am not an eclipse user, and the above is what
worked for me. But I'm guessing that the eclipse experts
out there probably know a way for setting this system
property globally for all tests, rather than doing the
above on a test-by-test basis; which they should share
with us newbies.
--
Ticket URL: <http://sourceforge.net/apps/trac/bigdata/ticket/72#comment:1>
|