From: Emily G. <eg...@re...> - 2007-05-03 16:44:26
Interesting. I'll try some of the workaround ideas; hopefully I'll have
some success. Thanks for all your help!

Emily

John V. Sichi wrote:
> Emily Gouge wrote:
>> The select query results in a Java Out of Memory Error:
>>
>> 0: jdbc:luciddb:rmi://localhost> select count(*) from
>> habc_extraction_schema.master_grid;
>>
>> Error: java.lang.OutOfMemoryError: Java heap space (state=,code=0)
>
> Ah, I wonder if it could have anything to do with this?
>
> http://mail-archives.apache.org/mod_mbox/db-ojb-user/200504.mbox/%3C4...@ap...%3E
>
> http://postgis.refractions.net/pipermail/postgis-users/2005-August/008875.html
>
> We may need to add something to the JDBC foreign data wrapper to allow
> control over the fetch size to prevent the PostgreSQL JDBC driver from
> effectively leaking per-row. Sigh.
>
> As a workaround, you could try loading the data in large chunks of rows
> via a WHERE clause on some partitioning key (if there is one in the
> source data).
>
> Another clunky alternative is to dump the data from PostgreSQL into a
> csv file and load it via LucidDB's flatfile reader. There have recently
> been some problem reports about trying to load the TPC-H 10gig dataset
> via flatfiles due to a bug in the flatfile reader causing it to go into
> an infinite loop, so it depends whether you're attempting to load your
> full data set or a smaller test set.
>
> JVS
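
For reference, a minimal sketch of the fetch-size behavior JVS describes,
using the stock PostgreSQL JDBC driver directly (the connection URL,
credentials, and batch size are placeholders). The driver only streams rows
through a server-side cursor when autocommit is off, the fetch size is
nonzero, and the result set is forward-only; otherwise it buffers the whole
result set in the heap, which matches the OutOfMemoryError above:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class FetchSizeDemo {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details; the PostgreSQL driver
            // must be on the classpath.
            Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "password");

            // Without this, the driver ignores the fetch size and
            // pulls every row into memory at once.
            conn.setAutoCommit(false);

            Statement stmt = conn.createStatement();
            // Stream 1000 rows at a time via a server-side cursor.
            stmt.setFetchSize(1000);

            ResultSet rs = stmt.executeQuery("SELECT * FROM master_grid");
            long count = 0;
            while (rs.next()) {
                count++;  // process each row here
            }
            System.out.println("rows: " + count);

            rs.close();
            stmt.close();
            conn.close();
        }
    }

This is what a fetch-size knob on the foreign data wrapper would need to
set under the covers.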
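And a sketch of the chunked-load workaround, assuming an integer
partitioning key in the source table. The foreign-server schema path
(pg_server."public".master_grid), key column (grid_id), key range, and
chunk size are all hypothetical; only the LucidDB URL comes from the
session above:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ChunkedLoadDemo {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(
                "jdbc:luciddb:rmi://localhost");
            Statement stmt = conn.createStatement();

            // Load 100k rows per statement instead of one huge scan,
            // so the remote driver never holds the full table.
            int chunk = 100000;
            for (int lo = 0; lo < 1000000; lo += chunk) {
                stmt.executeUpdate(
                    "INSERT INTO habc_extraction_schema.master_grid "
                    + "SELECT * FROM pg_server.\"public\".master_grid "
                    + "WHERE grid_id >= " + lo
                    + " AND grid_id < " + (lo + chunk));
            }
            stmt.close();
            conn.close();
        }
    }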