From: Joe W. <jo...@gm...> - 2010-10-30 19:14:41
Hi Wolfgang and Dmitriy,

Thanks for your helpful explanations. I've just committed the 2 tests mentioned in my last e-mail.

> The automated build server will definitely hit the failing tests and
> start sending emails. But I still think it is extremely important to
> have such tests. It's a great help in development.

Great to hear I can help by writing these tests!

> In Java, you can temporarily skip certain tests by adding an @Ignore.
> This is not possible using the XML test files, though adding such a
> feature should be rather easy. I'll try to do so. In the meantime,
> please add your tests, but comment them out, so anyone looking at the
> file can see which tests need to be taken care of.

Sounds good. In the meantime, I commented out the failing test and made note of it in my commit message.

> BTW, I definitely recommend to use the XML test files for all XQuery
> tests which do not require a special setup in Java. The XML test
> definitions are much easier to understand - for the one who writes
> them as well as for those who try to fix them.

Great, I found it very interesting to write these tests. I have a few questions:

1. The test @output attribute values are "text" and "xml", but in my tests, even though both my "expected" and "results" were XML and clearly different from each other, the test unexpectedly passed when I made the @output "xml"! The test only failed (as expected) when I made the @output "text".

This shows the results when @output is "xml":

  <test n="52" pass="true"/>

And this shows the results when @output is "text":

  <test n="52" pass="false">
      <task>index-keys test 6</task>
      <expected>
          <terms>
              <t>chapter</t>
              <t>section</t>
              <t>subsection</t>
          </terms>
      </expected>
      <result>
          <terms/>
      </result>
  </test>

Isn't it strange that a test that should fail passes when @output is set to "xml"?

2. How do you like to run these tests? My method is cumbersome: I copy the queries.xml file into /db/queries.xml, and I run this query through the sandbox:

  xquery version "1.0";
  import module namespace t="http://exist-db.org/xquery/testing";
  let $doc := '/db/queries.xml'
  return t:run-testSet(doc($doc)/TestSet)

Is there a better (i.e. more direct) way of running this test besides copying it into the database and running it this way?

3. I would like to contribute some more tests showing the problems I reported with retrieving index keys on range and ngram indexes. Ngram has a test folder (extensions/indexes/ngram/test) but no corresponding queries.xml file. If I create a queries.xml file like the one in the lucene folder, will the test be run automatically? Where should a range index test go?

Thanks,
Joe
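[Editor's note] Wolfgang's suggestion above - committing failing tests but commenting them out - can be done with an ordinary XML comment in queries.xml, since the test definitions are plain XML. A sketch (the FIXME wording and the elided test body are illustrative, not from the repository):

```xml
<!-- FIXME: fails on trunk (util:index-keys-by-qname() on attributes),
     see list discussion 2010-10-30; uncomment once fixed -->
<!--
<test output="text">
    <task>index-keys test 6</task>
    <code>...</code>
    <expected>...</expected>
</test>
-->
```

Note that an XML comment cannot nest another comment, so a test that is already partly commented would need its inner comments removed first.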
From: Wolfgang M. <wol...@ex...> - 2010-10-30 09:15:48
Hi Joe,

> Before I commit this test, though, can you tell me if this will break
> the build? In other words, will the automated build server choke when
> it finds a failing test?

The automated build server will definitely hit the failing tests and start sending emails. But I still think it is extremely important to have such tests. It's a great help in development.

In Java, you can temporarily skip certain tests by adding an @Ignore. This is not possible using the XML test files, though adding such a feature should be rather easy. I'll try to do so. In the meantime, please add your tests, but comment them out, so anyone looking at the file can see which tests need to be taken care of.

BTW, I definitely recommend using the XML test files for all XQuery tests which do not require a special setup in Java. The XML test definitions are much easier to understand - for the one who writes them as well as for those who try to fix them.

Thanks,
Wolfgang
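[Editor's note] The @Ignore analogue Wolfgang proposes for the XML test files did not exist at the time of this thread; it might plausibly take the form of an attribute the test runner checks before executing a test. A hypothetical sketch only (the attribute name is invented):

```xml
<test output="text" ignore="yes">
    <task>index-keys test 6 (known failure, see list discussion)</task>
    <code>...</code>
    <expected>...</expected>
</test>
```

Compared with commenting tests out, this would keep ignored tests visible to the runner, which could then report them as skipped rather than silently omitting them.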
From: Dmitriy S. <sha...@gm...> - 2010-10-30 08:02:10
On Sat, Oct 30, 2010 at 9:58 AM, Joe Wicentowski <jo...@gm...> wrote:
> > How is big test data?
> > The lucene's tests have data upload & index configuration, so it could be
> > right place to start with
> > http://exist.svn.sourceforge.net/viewvc/exist/trunk/eXist/extensions/indexes/lucene/test/src/org/exist/indexing/lucene/LuceneIndexTest.java?revision=12986&view=markup
>
> Thanks. I took a look at that file but was scared off by the Java! But I
> did find XML/XQuery-based tests in
> trunk/eXist/extensions/indexes/lucene/test/src/xquery/lucene/queries.xml,
> and I added some new tests to demonstrate one problem I had reported:
> Lucene index keys on attributes returning properly with util:index-keys()
> but returning empty with util:index-keys-by-qname().
>
> Before I commit this test, though, can you tell me if this will break
> the build? In other words, will the automated build server choke when
> it finds a failing test? If so, is there a better place for me to
> share the test? If not, then I'll commit the new tests. Please let
> me know what's best.

If one test fails, the others will be tested anyway, and a failing test can be 'ignored'. Either way, better to have a use case than not :-)

--
Dmitriy Shabanov
From: Joe W. <jo...@gm...> - 2010-10-30 04:58:39
Hi Dmitriy,

> How is big test data?
> The lucene's tests have data upload & index configuration, so it could be
> right place to start with
> http://exist.svn.sourceforge.net/viewvc/exist/trunk/eXist/extensions/indexes/lucene/test/src/org/exist/indexing/lucene/LuceneIndexTest.java?revision=12986&view=markup

Thanks. I took a look at that file but was scared off by the Java! But I did find XML/XQuery-based tests in trunk/eXist/extensions/indexes/lucene/test/src/xquery/lucene/queries.xml, and I added some new tests to demonstrate one problem I had reported: Lucene index keys on attributes returning properly with util:index-keys() but returning empty with util:index-keys-by-qname().

Before I commit this test, though, can you tell me if this will break the build? In other words, will the automated build server choke when it finds a failing test? If so, is there a better place for me to share the test? If not, then I'll commit the new tests. Please let me know what's best.

Thanks,
Joe

p.s. Here are the new tests - test 5 passes, while test 6 fails:

  <test output="text">
      <task>index-keys test 5</task>
      <code><![CDATA[
  declare function local:key($key, $options) {
      <t>{$key}</t>
  };
  <terms>
  {
      let $callback := util:function(xs:QName("local:key"), 2)
      return
          util:index-keys(doc("/db/test/text2.xml")//@type, "", $callback, 10000, "lucene-index")
  }
  </terms>]]></code>
      <expected>
          <terms>
              <t>chapter</t>
              <t>section</t>
              <t>subsection</t>
          </terms>
      </expected>
  </test>
  <test output="text">
      <task>index-keys test 6</task>
      <code><![CDATA[
  declare function local:key($key, $options) {
      <t>{$key}</t>
  };
  <terms>
  {
      let $callback := util:function(xs:QName("local:key"), 2)
      return
          util:index-keys-by-qname(xs:QName("@type"), "", $callback, 10000, "lucene-index")
  }
  </terms>]]></code>
      <expected>
          <terms>
              <t>chapter</t>
              <t>section</t>
              <t>subsection</t>
          </terms>
      </expected>
  </test>

and here is the new version of text2.xml:

  <store collection="/db/test" name="text2.xml">
      <test>
          <div type="chapter">
              <head>Div1</head>
              <p>First level</p>
              <div type="section">
                  <head>Div2</head>
                  <p>Second level</p>
                  <div type="subsection">
                      <head>Div3</head>
                      <p>Third level</p>
                  </div>
              </div>
          </div>
      </test>
  </store>

I also added this to the collection.xconf:

  <text qname="@type"/>

I ran the tests by copying queries.xml into /db/queries.xml and running this in the sandbox:

  xquery version "1.0";
  import module namespace t="http://exist-db.org/xquery/testing";
  let $doc := '/db/queries.xml'
  return t:run-testSet(doc($doc)/TestSet)

Is there a better way of running this test besides copying it into the database and running it this way?
From: Adam R. <ad...@ex...> - 2010-10-29 09:32:31
xmldb:group-exists() was added to trunk as revision 13004.

On 29 October 2010 10:16, Pierrick Brihaye <pie...@fr...> wrote:
> Hi,
>
> When trying to administrate my local server
> (http://localhost:8080/exist/admin/admin.xql) from current trunk, I get
> this:
>
> Cannot compile xquery: error found while loading module setup: Error
> while loading module setup.xqm: err:XPST0017: Function
> xdb:group-exists() is not defined in module namespace:
> http://exist-db.org/xquery/xmldb [at line 258, column 8]
>
> Indeed, from http://demo.exist-db.org/exist/xquery/functions.xql, it
> looks like this function doesn't exist any more?!
>
> Does that mean that our web application is not functional with current
> trunk? And how can eXist's web version be functional?
>
> Cheers,
>
> p.b.

--
Adam Retter
eXist Developer { United Kingdom }
ad...@ex...
irc://irc.freenode.net/existdb
From: Pierrick B. <pie...@fr...> - 2010-10-29 09:16:45
Hi,

When trying to administrate my local server (http://localhost:8080/exist/admin/admin.xql) from current trunk, I get this:

  Cannot compile xquery: error found while loading module setup: Error
  while loading module setup.xqm: err:XPST0017: Function
  xdb:group-exists() is not defined in module namespace:
  http://exist-db.org/xquery/xmldb [at line 258, column 8]

Indeed, from http://demo.exist-db.org/exist/xquery/functions.xql, it looks like this function doesn't exist any more?!

Does that mean that our web application is not functional with current trunk? And how can eXist's web version be functional?

Cheers,

p.b.
From: Dmitriy S. <sha...@gm...> - 2010-10-29 04:19:36
On Thu, Oct 28, 2010 at 10:43 PM, Joe Wicentowski <jo...@gm...> wrote:
> I just wanted to report that after Wolfgang's fix to
> util:index-keys-by-qname() in 1.5dev rev. 13024, I went back to the tests I
> sent last month and retried them, but I'm afraid the problems all still
> appear to be present.
>
> I would be happy to keep running these tests at regular intervals. Or I
> would be happy to adapt these tests so that they run as part of the test
> suite. But if so, I could use some pointers about the best way to add tests
> to the test suite? The tests rely on installing the sample data (which I
> currently do via the admin page's Examples Setup panel), so I'm not sure if
> the tests have a way of installing this data. Also, the tests affect
> lucene, range, and ngram indexes, so where should the tests go? I'd
> appreciate any suggestions or pointers if people think adding these tests
> would be a good idea.

How big is the test data? The lucene tests have data upload & index configuration, so that could be the right place to start with:
http://exist.svn.sourceforge.net/viewvc/exist/trunk/eXist/extensions/indexes/lucene/test/src/org/exist/indexing/lucene/LuceneIndexTest.java?revision=12986&view=markup

--
Dmitriy Shabanov
From: Joe W. <jo...@gm...> - 2010-10-28 17:43:46
Hi all,

I just wanted to report that after Wolfgang's fix to util:index-keys-by-qname() in 1.5dev rev. 13024, I went back to the tests I sent last month and retried them, but I'm afraid the problems all still appear to be present.

I would be happy to keep running these tests at regular intervals. Or I would be happy to adapt these tests so that they run as part of the test suite. But if so, I could use some pointers about the best way to add tests to the test suite? The tests rely on installing the sample data (which I currently do via the admin page's Examples Setup panel), so I'm not sure if the tests have a way of installing this data. Also, the tests affect lucene, range, and ngram indexes, so where should the tests go? I'd appreciate any suggestions or pointers if people think adding these tests would be a good idea.

(My final question in the e-mail below is still stumping me - namely, how to use util:index-keys-by-qname() on range indexes - but I think that's a matter of knowing the right name for range indexes rather than missing or broken functionality.)

Joe

On Fri, Sep 10, 2010 at 1:43 PM, Joe Wicentowski <jo...@gm...> wrote:
> Hi all,
>
> Using trunk, I am seeing two problems:
>
> 1. util:index-keys() returns 0 results on nodes with range indexes (all xsd
> types in my tests, but especially xs:string).
>
> 2. util:index-keys-by-qname() returns 0 results on attributes with NGram
> and Lucene indexes.
>
> Note that these functions *do* actually return the expected results for the
> following cases:
>
> 1. util:index-keys(): Lucene element, Lucene attribute, NGram element,
> NGram attribute
>
> 2. util:index-keys-by-qname(): Lucene element, NGram element (Note: I can't
> figure out how to test this function on range indexes - see final question
> below)
>
> Here are the steps to reproduce the problems I am seeing:
>
> Problem 1: util:index-keys() returns 0 results on nodes with range indexes
> =====
> Step 1: Load the examples that ship with eXist
> (http://localhost:8080/exist/admin/admin.xql?panel=setup) so the MODS
> sample files are stored in /db/mods; we will be targeting the range index
> on the mods:namePart qname.
>
> Step 2: Run the following Sandbox queries:
>
> a) Confirm data is present:
>
>   declare namespace mods="http://www.loc.gov/mods/v3";
>   collection('/db/mods')//mods:namePart
>
> => result count: 113 results - good, expected
>
> b) Test that namePart is configured with a range index:
>
>   declare namespace mods="http://www.loc.gov/mods/v3";
>   util:index-type(collection('/db/mods')//mods:namePart)
>
> => result: xs:string - good, expected
>
> c) Try util:index-keys() on namePart:
>
>   declare namespace mods="http://www.loc.gov/mods/v3";
>   declare function local:term-callback($term as xs:string, $data as xs:int+)
>   as element() {
>       <term>{$term}</term>
>   };
>   let $nodes := collection('/db/mods')//mods:namePart
>   return
>       util:index-keys($nodes, "",
>           util:function(xs:QName("local:term-callback"), 2), 1000)
>
> => result count: 0 results! - bad!
>
> ==> Conclusion: util:index-keys() returns 0 results on nodes with defined
> range indexes. The test has the same result on /db/mondial data -- range
> indexes defined on name, population, etc. -- whether defined using qname or
> path options.
>
> Problem 2: util:index-keys-by-qname() returns 0 results on attributes with
> NGram indexes
> =====
> Step 1: Same as step 1 above (this time we will target the /db/xmlad data,
> specifically the NGram index defined on the @id qname and the Lucene index
> defined on the @expansion qname.)
>
> Step 2: Run the following Sandbox queries:
>
> a) Confirm data is present:
>
>   collection('/db/xmlad')//@id/string()
>
> => result count: 623 results
>
>   collection('/db/xmlad')//@expansion/string()
>
> => result count: 623 results
>
> b) Test that @id is configured with an NGram index:
>
>   util:index-type(collection('/db/xmlad')//@id)
>
> => result: item() - weird! It should be xs:string.
>
>   util:index-type(collection('/db/xmlad')//@expansion)
>
> => result: item() - weird!
>
> c) Try util:index-keys():
>
>   declare function local:term-callback($term as xs:string, $data as xs:int+)
>   as element() {
>       <term>{$term}</term>
>   };
>   let $nodes := collection('/db/xmlad')//@id
>   return
>       util:index-keys($nodes, "",
>           util:function(xs:QName("local:term-callback"), 2), 1000, 'ngram-index')
>
> => result count: 1000 results - good, this is the expected result
>
>   declare function local:term-callback($term as xs:string, $data as xs:int+)
>   as element() {
>       <term>{$term}</term>
>   };
>   let $nodes := collection('/db/xmlad')//@expansion
>   return
>       util:index-keys($nodes, "",
>           util:function(xs:QName("local:term-callback"), 2), 1000, 'lucene-index')
>
> => result count: 782 results - good, this is the expected result
>
> d) Try util:index-keys-by-qname():
>
>   declare function local:term-callback($term as xs:string, $data as xs:int+)
>   as element() {
>       <term>{$term}</term>
>   };
>   let $qname := xs:QName('@id')
>   return
>       util:index-keys-by-qname($qname, "",
>           util:function(xs:QName("local:term-callback"), 2), 1000, 'ngram-index')
>
> => result count: 0 results! bad, should be 1000?
>
>   declare function local:term-callback($term as xs:string, $data as xs:int+)
>   as element() {
>       <term>{$term}</term>
>   };
>   let $qname := xs:QName('@expansion')
>   return
>       util:index-keys-by-qname($qname, "",
>           util:function(xs:QName("local:term-callback"), 2), 1000, 'lucene-index')
>
> => result count: 0 results! bad, should be 782?
>
> ==> Conclusion: util:index-keys-by-qname() returns 0 results on attributes
> with NGram indexes.
>
> ---
> Finally, a question: Unlike util:index-keys(), util:index-keys-by-qname()
> *requires* a 5th parameter of the index name, e.g. 'lucene-index'. What is
> the name of a plain old range index? I've tried everything I could think of
> ('range-index' and 'range'), e.g.:
>
>   declare namespace mods="http://www.loc.gov/mods/v3";
>   declare function local:term-callback($term as xs:string, $data as xs:int+)
>   as element() {
>       <term>{$term}</term>
>   };
>   let $qname := xs:QName('mods:namePart')
>   return
>       util:index-keys-by-qname($qname, "",
>           util:function(xs:QName("local:term-callback"), 2), 1000, 'range-index')
>
> but this returns an error of, "Unknown index: range-index [at line 7,
> column 2, source: String]"
>
> Thanks in advance for any help with these issues,
> Joe
From: Joe W. <jo...@gm...> - 2010-10-28 10:35:06
> Thanks Joe for reminding me. Since I put this information in to mark
> this crucial piece of information for the documentation of the
> extension modules, I should probably also change them to one of your
> suggested replacements.

Thanks, Leif-Jöran!

Joe
From: Leif-Jöran O. <lj...@ex...> - 2010-10-28 09:05:48
Den 2010-10-28 05:58, Joe Wicentowski skrev:
> Hi all,
>
> I've noticed that "< eXist-1.0" and "< eXist-1.5" appears
> several times in the ajax eXist function docs at
> http://demo.exist-db.org/exist/xquery/functions.xql if you select
> "All" > "Browse" and then "Show All". This appears because the xqdocs
> in question hard-code the entity in the function documentation:
>
> <since>&lt; eXist-1.0</since>
>
> I propose that the authors/maintainers of these modules change all
> such occurrences to "pre" or "before."
>
> I'd say that eliminating double-escaped entities is probably a good
> thing, since seeing escaped entities isn't very pleasant.
>
> Thanks for considering this idea,

Thanks Joe for reminding me. Since I put this information in to mark this crucial piece of information for the documentation of the extension modules, I should probably also change them to one of your suggested replacements.

Cheers,
Leif-Jöran
From: Joe W. <jo...@gm...> - 2010-10-28 03:59:09
Hi all,

I've noticed that "< eXist-1.0" and "< eXist-1.5" appears several times in the ajax eXist function docs at http://demo.exist-db.org/exist/xquery/functions.xql if you select "All" > "Browse" and then "Show All". This appears because the xqdocs in question hard-code the entity in the function documentation:

  <since>&lt; eXist-1.0</since>

I propose that the authors/maintainers of these modules change all such occurrences to "pre" or "before."

I'd say that eliminating double-escaped entities is probably a good thing, since seeing escaped entities isn't very pleasant.

Thanks for considering this idea,
Joe

p.s. I just committed some minor fixes to the RESTful function browser. If there are any issues that people notice, please let me know.
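[Editor's note] The fix Joe proposes is a one-element change in each affected xqdoc file. Applying one of his suggested replacements would look like this (the "after" wording is one of the options he names):

```xml
<!-- before: the escaped "&lt;" renders literally in the function docs -->
<since>&lt; eXist-1.0</since>

<!-- after: plain words, nothing to escape -->
<since>before eXist-1.0</since>
```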
From: Dannes W. <da...@ex...> - 2010-10-27 19:10:23
Hi,

On 27 Oct 2010, at 16:34, Dannes Wessels wrote:
> NEW this time: uploading with the windows app "BitKinex" works. "In
> place" editing there though fails when trying to get a lock.
>
> 2010-10-27 15:28:49,202 [eXistThread-22] DEBUG (MiltonDocument.java [getCurrentLock]:285) - No database lock token.
> 2010-10-27 15:28:49,202 [eXistThread-22] DEBUG (LockHandler.java [processNewLock]:205) - locking: some-binary.txt
> 2010-10-27 15:28:49,202 [eXistThread-22] ERROR (StandardFilter.java [process]:44) - process
> java.lang.NullPointerException
>     at org.exist.webdav.MiltonResource.convertToken(MiltonResource.java:218)
>     at org.exist.webdav.MiltonDocument.lock(MiltonDocument.java:197)
>     at

The bug appeared in the situation where a LOCK is set without a lock timeout. It should be fixed in rev 13016.

Kind regards,
Dannes

--
eXist-db Native XML Database - http://exist-db.org
Join us on linked-in: http://www.linkedin.com/groups?gid=35624
From: Dannes W. <da...@ex...> - 2010-10-27 14:34:58
Hi,

On Wed, Oct 27, 2010 at 3:31 PM, Hungerburg <pc...@my...> wrote:
> - copying 300 xml files into validating eXist collection via webdav
>   takes 10% longer than via curl/REST, but maybe due to network.
> - options used: use_locks 0, delay_upload 0

I have one additional change waiting, in which smaller files will be cached in memory instead of on disk. This might improve performance...

> NEW this time: uploading with the windows app "BitKinex" works. "In
> place" editing there though fails when trying to get a lock.
>
> 2010-10-27 15:28:49,202 [eXistThread-22] DEBUG (MiltonDocument.java [getCurrentLock]:285) - No database lock token.
> 2010-10-27 15:28:49,202 [eXistThread-22] DEBUG (LockHandler.java [processNewLock]:205) - locking: some-binary.txt
> 2010-10-27 15:28:49,202 [eXistThread-22] ERROR (StandardFilter.java [process]:44) - process
> java.lang.NullPointerException
>     at org.exist.webdav.MiltonResource.convertToken(MiltonResource.java:218)
>     at org.exist.webdav.MiltonDocument.lock(MiltonDocument.java:197)
>     at

That is definitely a clear trace; I'll fix that tonight... Thanks for your input!

Dannes

--
eXist-db Native XML Database - http://exist-db.org
Join us on linked-in: http://www.linkedin.com/groups?gid=35624
From: Hungerburg <pc...@my...> - 2010-10-27 13:32:07
Am 2010-10-26 21:46, schrieb Dannes Wessels:
> - Need some help to check on recent versions of DAVFS, Konqueror and Nautilus

davfs2 1.4.5 tested fine, anecdotally only though, no test suite:

- editing with komodo-edit: smooth (komodo edit does a lot of scanning and is very slow with cifs over the wan, unlike eXist webdav.)
- copying 300 xml files into validating eXist collection via webdav takes 10% longer than via curl/REST, but maybe due to network.
- options used: use_locks 0, delay_upload 0

davfs2 1.2.1 also tests ok; options used: use_locks 0, delay_upload 0, use_expect100 0

gvfs 1.6 (same as nautilus?) still fails trying to access "OPTIONS /exist/webdav/" without "db".

NEW this time: uploading with the windows app "BitKinex" works. "In place" editing there though fails when trying to get a lock:

  2010-10-27 15:28:49,202 [eXistThread-22] DEBUG (MiltonDocument.java [getCurrentLock]:285) - No database lock token.
  2010-10-27 15:28:49,202 [eXistThread-22] DEBUG (LockHandler.java [processNewLock]:205) - locking: some-binary.txt
  2010-10-27 15:28:49,202 [eXistThread-22] ERROR (StandardFilter.java [process]:44) - process
  java.lang.NullPointerException
      at org.exist.webdav.MiltonResource.convertToken(MiltonResource.java:218)
      at org.exist.webdav.MiltonDocument.lock(MiltonDocument.java:197)
      at com.bradmcevoy.http.webdav.LockHandler.processNewLock(LockHandler.java:208)
      at com.bradmcevoy.http.webdav.LockHandler.processExistingResource(LockHandler.java:93)
      at com.bradmcevoy.http.webdav.LockHandler.process(LockHandler.java:69)
      at com.bradmcevoy.http.StandardFilter.process(StandardFilter.java:32)
      at com.bradmcevoy.http.FilterChain.process(FilterChain.java:21)
      at com.bradmcevoy.http.HttpManager.process(HttpManager.java:152)
      at com.bradmcevoy.http.MiltonServlet.service(MiltonServlet.java:169)

--
peter
From: Dannes W. <da...@ex...> - 2010-10-26 19:46:18
All,

A few moments ago I committed some changes regarding the new webdav implementation:

[bugfix] Fix of URL-encoding of collection and document names. Now characters such as space, + etc. are accepted in the right way. Plus: re-upgraded to the latest milton snapshot to provide better macos support and more fixes; implemented NULL-resource locking for better support by misc clients.

- Need some help to check on recent versions of DAVFS, Konqueror and Nautilus
- ToDo: uploaded files are buffered on disc right now; we can probably use Jose Maria's Virtual file to buffer smaller files into memory

I tested it on MacOSX (Finder, Transmit) and WindowsXP/webfolders; here it works just fine.

Kind regards,
Dannes

--
eXist-db Native XML Database - http://exist-db.org
Join us on linked-in: http://www.linkedin.com/groups?gid=35624
From: Andrzej J. T. <an...@ch...> - 2010-10-26 15:53:40
Just thought I would mention: using the Admin web page, it doesn't seem to display running xqueries if those xqueries were invoked through the REST interface (e.g. if you clicked on an xquery on an Admin Browse Collections page). Not sure why this might be happening.

--
Andrzej Taramina
Chaeron Corporation: Enterprise System Solutions
http://www.chaeron.com
From: Adam R. <ad...@ex...> - 2010-10-26 11:15:27
No problem at all, thanks Joern.

On 26 October 2010 11:03, Joern Turner <joe...@go...> wrote:
> Hi,
>
> just to let you all know:
> we are at it and still have 3 failing tests to fix. We hope we can fix
> those this week, commit a new version and let you know.
>
> Thanks for your patience,
>
> Joern

--
Adam Retter
eXist Developer { United Kingdom }
ad...@ex...
irc://irc.freenode.net/existdb
From: Joern T. <joe...@go...> - 2010-10-26 10:03:27
Hi,

just to let you all know: we are at it and still have 3 failing tests to fix. We hope we can fix those this week, commit a new version and let you know.

Thanks for your patience,

Joern
From: Wolfgang M. <wol...@ex...> - 2010-10-21 14:47:52
> If anyone could give me access to the eXist project, I could add my code in
> a branch so anyone interested could take a look. My sourceforge login:
> anneschuth

I would be interested to see your code before I start implementing further additions to the lucene index. I added you as a developer, so you should have access to SVN now.

Wolfgang
From: Anne <ann...@gm...> - 2010-10-20 07:35:59
Despite the silence on the mailing list from my side, I've been busy implementing faceted search to my requirements. I thought I'd give you a heads up, to let you know the idea is not dying on me.

It seems I am at the same stage as Dmitriy. I've got faceted search working for simple use cases. Multiple facets on the same qname do not work yet (I guess I will look into the latest additions by Wolfgang to the lucene index, to see how he implemented a similar feature).

What I allow now is functions like facet:query(., "facet", "value"), in a similar fashion to the lucene query function. But I will also allow for querying in the following way:

  facet:query(., (("facet1", "value1"), ("facet2", "value2"), ("facet3", "value3"), ...))

which will allow for fast calculations, since we are talking bitwise-and operations.

Besides that, I support the function facet:counts(.), which is given a sequence and returns the counts for all facet/value pairs that are defined on the elements in that sequence.

Like Dmitriy, I do not store/reload to/from file yet. I am still not sure how to do that, since I am not too familiar with serializing. Nor do I allow for changing collections yet. These will be the first issues I address next.

If anyone could give me access to the eXist project, I could add my code in a branch so anyone interested could take a look. My sourceforge login: anneschuth

Best,

-- Anne

On Mon, Oct 18, 2010 at 13:28, Dmitriy Shabanov <sha...@gm...> wrote:
> On Sun, Oct 17, 2010 at 9:57 PM, Dmitriy Shabanov <sha...@gm...> wrote:
>> There is one use case taken from accounts:
>> http://smartims.svn.sourceforge.net/viewvc/smartims/trunk/Freyja/test/src/org/smartims/indexing/olap/OLAP_index_tests.java?revision=101&view=markup
>
> Update: The index does work in-memory; next step, functions & storing it to
> the file. I'm interested in tests (junit ones would be best, but any are
> good).
>
> --
> Dmitriy Shabanov
From: Wolfgang M. <wol...@ex...> - 2010-10-19 11:58:07
It seems the new WebDAV interface does not preserve the mime type if you update an existing resource. I see this when using oXygen to edit .xql files. The log output is as follows:

  19 Oct 2010 12:49:51,505 [eXistThread-25] DEBUG (MagicMimeMimeDetector.java [parse]:576) - Parsing "/usr/share/mimelnk/magic" took 217 msec.
  19 Oct 2010 12:49:51,625 [eXistThread-25] DEBUG (MagicMimeMimeDetector.java [parse]:576) - Parsing "/usr/share/file/magic.mime" took 119 msec.
  19 Oct 2010 12:49:51,626 [eXistThread-25] DEBUG (MagicMimeMimeDetector.java [parse]:576) - Parsing "/etc/magic.mime" took 0 msec.
  19 Oct 2010 12:49:51,626 [eXistThread-25] DEBUG (MimeUtil.java [addMimeDetector]:876) - Registering MimeDetect with name [eu.medsea.mimeutil.detector.MagicMimeMimeDetector] and description [Get the mime types of files or streams using the Unix file(5) magic.mime files]
  19 Oct 2010 12:49:51,627 [eXistThread-25] DEBUG (MimeUtil.java [getMimeTypes]:478) - Getting mime types for file name [controller.xql].
  19 Oct 2010 12:49:51,636 [eXistThread-25] DEBUG (MimeUtil.java [getMimeTypes]:974) - Retrieved mime types [application/octet-stream]

It looks like the WebDAV library is using its own mime type detection mechanism. As a workaround, I changed line 390 of org.exist.webdav.ExistCollection to use the mime type determined by eXist instead of the one supplied by Milton:

  DocumentImpl doc = collection.addBinaryResource(txn, broker, newNameUri, dis, mime.getName(), length.intValue());

This fixed the problem for me.

Wolfgang
From: Dmitriy S. <sha...@gm...> - 2010-10-18 11:29:16
|
On Sun, Oct 17, 2010 at 9:57 PM, Dmitriy Shabanov <sha...@gm...> wrote:
> There is one use case taken from accounts:
> http://smartims.svn.sourceforge.net/viewvc/smartims/trunk/Freyja/test/src/org/smartims/indexing/olap/OLAP_index_tests.java?revision=101&view=markup

Update: The index does work in-memory; next step: functions & storing it to the file. I'm interested in tests (JUnit ones would be best, but any are good).

--
Dmitriy Shabanov
|
From: Dmitriy S. <sha...@gm...> - 2010-10-17 16:57:19
|
Hi Dan,

You did write: "My initial reason for wanting to use eXist was to track metadata related to OLAP cubes!". I misunderstood you; the use case is not from the metadata world. Anyway, it is good to do this as a group :-)

On Sun, Oct 17, 2010 at 7:27 PM, Dan McCreary <dan...@gm...> wrote:
> Here is a very good document that describes how Mondrian stores aggregate
> information in tables and how the XML configuration tables are structured.
>
> http://mondrian.pentaho.com/documentation/aggregate_tables.php
>
> This document also has some very good references at the end on the creation
> and maintenance of aggregates.

The original idea was to use a b+tree with some special operations. eXist does have quite good infrastructure, and the save/load optimization is excellent, so why can we not reuse what we have ...

> I think that there are two use-cases that we might consider. One is where
> data in a collection is static and the aggregates can be pre-computed in a
> batch report. These are usually done in an overnight batch job.
>
> The second use-case would use triggers to update the aggregates so they
> would be kept up-to-date after any insert/update or delete operation on
> items in a collection.

I do use eXist's index strategy. Triggers don't sound good to me. The calculation optimization will be taken from the "phantom nodes" research. MCD-optimization has no relation to a "mind control device" :-) Memory-Computation-Demand optimization is resource optimization for a complex environment.

> This points to some deep-reaching research questions (which I don't have
> answers to):
>
> 1) Can we reuse the XML configuration file standards that Mondrian has
> set up to define cubes, dimensions, aggregates and measures?

We shouldn't take it, because the Mondrian configuration structure was designed with SQL in mind. We should take the best practices & terminology and design an XML-based configuration (my proposal is at the end).

> 2) Should we focus on building a simple XQuery module to access the
> aggregate information?

I think that XQuery functions are better than special XML structures.

> 3) Can the XQuery optimizer use the aggregate tables to do things like use
> the pre-calculated item counts?

Yes, there is a plan to optimize the 'sum' function (and others). Count optimization is too simple a use case; it would be possible to base it on the range index after all.

> 4) Can features of the MDX language be added to the XQuery language to
> create pivot-table-style reports?

I do believe that XML-database OLAP MUST generate XML; why should we limit ourselves to tables?

There is one use case taken from accounts:
http://smartims.svn.sourceforge.net/viewvc/smartims/trunk/Freyja/test/src/org/smartims/indexing/olap/OLAP_index_tests.java?revision=101&view=markup

The configuration draft:

<collection xmlns="http://exist-db.org/collection-config/1.0">
    <index>
        <fulltext default="none">
        </fulltext>
        <olap>
            <time qname="transactions/@date"/>
            <bound qname="transaction/@gl"/>
            <measure qname="transaction/@amount"/>
            <measure qname="transaction/@qty"/>
        </olap>
    </index>
</collection>

I do think about changing the index name ... can't find a name yet. "OLAP" looks wrong here.

Dan, you did say that 10k items can be processed fast, but I do have big problems with 1k warehouse records, so the bottom "use limit" is quite low.

--
Dmitriy Shabanov
|
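The roll-up the draft <olap> configuration implies — measures aggregated along a time dimension (transactions/@date) and a bound dimension (transaction/@gl) — can be sketched with Python's stdlib. The element and attribute names follow the draft; the sample data is made up for illustration:

```python
# Sketch: rolling up transaction/@amount and @qty by (@date, @gl),
# mirroring the time/bound/measure roles in the draft <olap> config.
# Sample data is illustrative only.
import xml.etree.ElementTree as ET
from collections import defaultdict

doc = ET.fromstring("""
<transactions date="2010-10-17">
  <transaction gl="1000" amount="10.0" qty="2"/>
  <transaction gl="1000" amount="5.0" qty="1"/>
  <transaction gl="2000" amount="7.5" qty="3"/>
</transactions>
""")

# cube cell: (time value, bound value) -> accumulated measures
cube = defaultdict(lambda: {"amount": 0.0, "qty": 0})
date = doc.get("date")                        # time dimension
for t in doc.findall("transaction"):
    cell = cube[(date, t.get("gl"))]          # bound dimension
    cell["amount"] += float(t.get("amount"))  # measure
    cell["qty"] += int(t.get("qty"))          # measure

print(cube[("2010-10-17", "1000")])
```

An index maintaining such cells incrementally is what would make 'sum' over 1k+ warehouse records a lookup instead of a scan.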
From: Dan M. <dan...@gm...> - 2010-10-17 14:27:10
|
Here is a very good document that describes how Mondrian stores aggregate information in tables and how the XML configuration tables are structured.

http://mondrian.pentaho.com/documentation/aggregate_tables.php

This document also has some very good references at the end on the creation and maintenance of aggregates.

I think that there are two use-cases that we might consider. One is where data in a collection is static and the aggregates can be pre-computed in a batch report. These are usually done in an overnight batch job.

The second use-case would use triggers to update the aggregates so they would be kept up-to-date after any insert/update or delete operation on items in a collection.

This points to some deep-reaching research questions (which I don't have answers to):

1) Can we reuse the XML configuration file standards that Mondrian has set up to define cubes, dimensions, aggregates and measures?
2) Should we focus on building a simple XQuery module to access the aggregate information?
3) Can the XQuery optimizer use the aggregate tables to do things like use the pre-calculated item counts?
4) Can features of the MDX language be added to the XQuery language to create pivot-table-style reports?

- Dan

On Sat, Oct 16, 2010 at 9:25 AM, Dan McCreary <dan...@gm...> wrote:
> > Can you contribute a use case (or several)? I did make some progress,
> > but need more cases to cover a bigger area.
>
> Gladly! I am very interested in trying to understand how the worlds of
> documents (XML), tables, graphs (RDF) and OLAP work together. Something
> like a "grand unified theory of data". I think that eXist could be extended
> to include all four models, but the other models will never be flexible
> enough to include XML. I also think that XQuery extensions and modules
> might include the functions of SQL, SPARQL and MDX.
>
> There are a few core concepts we must try to understand, and the work on
> faceted search is opening the door to new thinking. The key is to integrate
> with existing open-source concepts so new open-source developers will
> quickly understand what we are talking about.
>
> One of the first things to understand is the concepts of "categorical data"
> and "measurement data". Categorical data is where you divide all the items
> in any data set into discrete, non-overlapping and potentially hierarchical
> categories. Open-source tools like Mondrian already have very specific
> XML tags for configuring dimensions and measures.
>
> Measurement data is data that can be "summed or averaged", such as
> integers, decimal numbers or financial amounts.
>
> First, if people have not used OLAP before, I would get familiar with
> putting sample data sets with both categories (dimensions) and measures
> (values and amounts) into a spreadsheet and then start using the "pivot
> table" features to create simple sums and totals. For under 10,000
> records, most modern computers can recalculate all the totals in under 1
> second. This process gets your brain started thinking about the
> concepts of data categories and measures. I have training materials on this
> if you are interested. A Google search for "Pivot Table Tutorial" is a good
> start.
>
> The most common and "typical" use case for OLAP cubes comes from the retail
> sales sector.
>
> Say you have 10,000,000 sales transaction items that look like this:
>
> <sales-transaction>
>     <item-sku>1234567</item-sku>
>     <datetime>2010-10-11T8:00.23</datetime>
>     <store-id>12345</store-id>
>     <shelf-height type="meters">1.2</shelf-height>
>     <promotion-code>spring-sale</promotion-code>
>     <price-paid currency="USD">1.23</price-paid>
> </sales-transaction>
>
> The first step is to "enrich" the data to include all the descriptions of
> the item (color, size, weight) from your product catalog. These extended
> attributes can be placed into a much larger XML file with all the extended
> attributes of an item, such as its color, size and weight, to create
> reports such as "How many people purchased a red item in any store when it
> was moved from the top to the middle shelf, and what was the average price
> paid?"
>
> The key is to put the data into "fact" tables with many categories and
> pre-count the number of items in each category.
>
> If you have 10,000,000 items, it can take a very long time to continually
> re-calculate the number of items in each sub-category (red hats under
> $10.00).
>
> Calculating the number of items in each sub-category can be done with
> triggers. Each time new records are added to "fact tables", you increment
> pre-computed data sets (called aggregate caches) that store the counts of
> items in each category. The more categories you have, the larger the
> aggregates grow in disk space, but the faster the sums are calculated. Most
> systems allow you to tune aggregate caches based on the most frequent
> queries.
>
> There are many other use cases from inventory management, order management,
> task and project management, and workflow, but I will try to document these
> in other places. If anyone wants to volunteer a semi-public eXist server, I
> can load a use-case XRX application.
>
> I think that Anne's initial work on faceted search is starting us all
> thinking about the issues of storing pre-computed counts of items in
> collections. Although Anne's focus was faceted search, the approach of
> extending the configuration file was right on. And it made me ask: could
> we come up with some XML file format that would store generalized aggregate
> information used in both faceted search and in analytical reporting?
>
> BI and OLAP are trillion-dollar market segments that all depend on the
> ability of an information system to pre-compute sums. The concept of
> aggregates is also very general. If we designed this correctly, eXist could
> be the only system in the world that would provide BI functions using
> simple XQuery extensions. Very, very exciting!
>
> Here are a few other references that might help:
>
> There are XML standards for defining dimensions and measures that are part
> of the Common Warehouse Metamodel (CWM) standard here:
> http://en.wikipedia.org/wiki/Common_Warehouse_Metamodel
>
> OLAP Overview
> http://mondrian.pentaho.com/documentation/olap.php
>
> MDX Tutorial using Mondrian
> http://www.cse.unsw.edu.au/~cs9318/10s1/lect/MDX.pdf
>
> Mondrian Architecture
> http://mondrian.pentaho.com/documentation/architecture.php
>
> Aggregate Tables
> http://mondrian.pentaho.com/documentation/aggregate_tables.php
>
> This note got a bit long. I will try to re-post it on a blog.
>
> I hope this helps!
>
> - Dan
>
> On Wed, Oct 13, 2010 at 1:25 PM, Dmitriy Shabanov <sha...@gm...> wrote:
>> Hi Dan,
>>
>> On Tue, Oct 12, 2010 at 1:49 AM, Dan McCreary <dan...@gm...> wrote:
>>> It is very interesting to see that pre-computed counts of item
>>> categories might be included in a native XML database! My initial
>>> reason for wanting to use eXist was to track metadata related to OLAP
>>> cubes!
>>
>> Can you contribute a use case (or several)? I did make some progress, but
>> need more cases to cover a bigger area.
>>
>> --
>> Dmitriy Shabanov

--
Dan McCreary
Semantic Solutions Architect
office: (952) 931-9198
cell: (612) 986-1552
|
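The trigger-style aggregate cache Dan describes — incrementing pre-computed counts and sums as each fact record arrives, so a query like "red hats under $10" never re-scans the fact table — can be sketched as follows. This is a hypothetical structure for illustration, not Mondrian's actual cache format:

```python
# Sketch of an incremental aggregate cache: every insert updates
# per-category cells, so category counts and sums become dictionary
# lookups. The category scheme (color, item, price bucket) is made up.
from collections import defaultdict

class AggregateCache:
    def __init__(self):
        # (color, item, price bucket) -> [count, total price paid]
        self.cells = defaultdict(lambda: [0, 0.0])

    def insert(self, color, item, price):
        """Trigger-style update run on each new fact record."""
        bucket = "under-10" if price < 10.0 else "10-plus"
        cell = self.cells[(color, item, bucket)]
        cell[0] += 1       # pre-computed count
        cell[1] += price   # pre-computed sum

cache = AggregateCache()
cache.insert("red", "hat", 7.50)
cache.insert("red", "hat", 2.50)
cache.insert("red", "hat", 12.50)
count, total = cache.cells[("red", "hat", "under-10")]
print(count, total)  # 2 10.0
```

The trade-off Dan notes shows up directly here: every extra category multiplies the number of cells (disk space), but each sum stays a constant-time lookup.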
From: Anton K. <ak...@de...> - 2010-10-17 13:33:01
|
Hi Dmitry, Wolfgang,

For the query:

declare namespace a="a";
declare namespace b="b";
let $node := doc(xmldb:store('/db', 'self-axis-bug.xml', <a:a/>))/*
return ($node, $node/self::b:*, $node[self::b:*])

the result in trunk is 3x <a:a/>.

Probably, at least, getSelf/test.isWildcardTest():
http://exist.svn.sourceforge.net/viewvc/exist/trunk/eXist/src/org/exist/xquery/LocationStep.java?view=markup&pathrev=12646#l536
is not absolutely accurate.

Best Regards,
Anton
|