#3 hsql cache that could be turned on/off (to be used when the content is
bigger than can be handled with #2)
- this would be similar to grabbing remote data into a local hsql
database for abuse
  (this is not much different than implementing those attribute based
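To make the toggleable-cache idea above concrete, here is a minimal, generic sketch (all names are made up for illustration; a real uDig/GeoTools version would wrap a FeatureSource rather than a Map): a read path that answers from a local copy while the cache is enabled, and falls straight through to the slow backing store when it is switched off.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

/**
 * Hypothetical sketch of a cache that can be turned on/off.
 * backingStore stands in for the expensive fetch (remote server,
 * shapefile decode, etc.).
 */
public class ToggleableCache<K, V> {
    private final Function<K, V> backingStore;
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private volatile boolean enabled = true;

    public ToggleableCache(Function<K, V> backingStore) {
        this.backingStore = backingStore;
    }

    public void setEnabled(boolean enabled) {
        this.enabled = enabled;
        // Drop the local copy when caching is switched off, so stale
        // data cannot be served if it is switched back on later.
        if (!enabled) cache.clear();
    }

    public V get(K key) {
        // Bypass mode: every read goes to the backing store.
        if (!enabled) return backingStore.apply(key);
        // Cached mode: fetch once, then serve from the local copy.
        return cache.computeIfAbsent(key, backingStore);
    }
}
```

The same shape works whether the local copy lives in a Map, an hsql table, or (per the suggestion below in this mail) an h2 memory database.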

Another interesting option would be to upgrade to h2 and use a 'memory only' database.  See halfway down on http://www.h2database.com/html/frame.html?features.html&main

I believe it would let you accomplish #2 with little additional programming, and it would accomplish much of #1 as well.  Andrea pointed out h2 to me on the last day in Switzerland, and I've been drooling over it since.  It has the 2-phase commits for transactions that hsql was lacking, and has support for alternate indexing schemes.  Indeed, people have already been playing with spatial indexing and SpatialDB in a Box for it - http://h2database.com/ipowerb/index.php?showtopic=243&hl=spatial and http://h2database.com/ipowerb/index.php?showtopic=212&hl=spatial.  Oh, and it's damn fast as well.

The only downside appears to be that it has only just reached 1.0, so it is perhaps still a bit young.  But for GeoServer I'm definitely interested in trying it out, and eventually making it so shapefiles are automatically converted to it by default, so we're not in danger of multi-user transactions corrupting the files.
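For reference, using an h2 in-memory database from Java is just a JDBC URL with the mem: prefix.  A minimal sketch (the table and names are made up; this requires the h2 jar, org.h2.Driver, on the classpath, which is not shown here):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class H2MemoryExample {
    // Build an h2 in-memory JDBC URL.  DB_CLOSE_DELAY=-1 keeps the
    // database alive for the life of the JVM instead of discarding it
    // when the last connection closes.
    static String memoryUrl(String name) {
        return "jdbc:h2:mem:" + name + ";DB_CLOSE_DELAY=-1";
    }

    public static void main(String[] args) throws SQLException {
        // Needs the h2 driver on the classpath at runtime.
        try (Connection conn = DriverManager.getConnection(memoryUrl("features"));
             Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE roads (id INT PRIMARY KEY, name VARCHAR(64))");
            st.execute("INSERT INTO roads VALUES (1, 'Main St')");
            try (ResultSet rs = st.executeQuery("SELECT name FROM roads WHERE id = 1")) {
                rs.next();
                System.out.println(rs.getString(1));
            }
        }
    }
}
```

Pointing a datastore at a URL like that instead of a file would be one way to get the "shapefile loaded into memory" behaviour without custom cache code.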


The reason we try to draw such a hard line about this is that we do not
want uDig to be a joke in terms of scalability. I admit we were kind of
hoping that the datastore implementations would be more capable by now.
Whatever solution we use, I would prefer one that does not require you
to modify your code in order to take advantage of it...

Sigh, if only we had time :-) If you are interested in any of the ideas
outlined above I can write up a design for you.

> sebastien.piau@ingencys.net wrote:
>> Andrea (and everybody),
>> Thanks for this response.
>> Be sure I've tried this test not on one single line, but on thousands...
>> To be more precise, I ran this benchmark on a task which takes up to 2
>> minutes based on a shapefile and only 2 seconds with in-memory data
>> (really, it isn't a joke!)
> I do believe you. Disk access is usually 1000 times slower, and even when
> the shapefile is in the file cache, there are still all the decodes needed
> to turn shapefile data structures into features; the memory data store
> does not need to do any of this...
>> But to be more exact, I am disappointed that I have to clone my
>> shapefile into an "in memory" layer myself, and I am not sure I am
>> doing it the right way. I think our famous uDig maintainers have
>> something else to do ;o)
> Fact is, udig is geared towards data sets that cannot be loaded into
> memory at all because they are too big. I think they took into
> consideration the idea of preloading data into memory, but it's hard to
> get it right, because on jdbc or wfs data stores the data can change
> under your feet without you noticing (not all data stores support event
> notifications for data changes afaik, but I may be mistaken).
> Cheers
> Andrea Aime
> _______________________________________________
> User-friendly Desktop Internet GIS (uDig)
> http://udig.refractions.net
> http://lists.refractions.net/mailman/listinfo/udig-devel

Geotools-gt2-users mailing list