From: Rushan C. <rc...@lu...> - 2007-05-03 21:41:02

Hi Emily,

The projection (select list items) and filters (where clause) are not pushed
through the JDBC connection. That's why the SQL showing up on the PostgreSQL
server has no where clause and selects every column.

Is it possible to create views on the PostgreSQL side to divide the original
big table into smaller chunks and load them into LucidDB? In your script, the
definition of habc_transformation_schema.location_view would then have to be
a UNION of these source views.

Hope this helps.

Rushan

Emily Gouge wrote:
> > As a workaround, you could try loading the data in large chunks of rows
> > via a WHERE clause on some partitioning key (if there is one in the
> > source data).
>
> I tried this, however I am still getting Java heap space errors. This query
> should return only one row:
>
> select "x", "y", "ecosec_v2_code", "lwdpbc_code", "bececolwd_v2_code", "dra_code" from
> habc_extraction_schema."master_grid" where "x" = 0 and "y" = 0;
>
> causes:
>
> Error: java.lang.OutOfMemoryError: Java heap space (state=,code=0)
>
> I have noticed that adding the where clause to the query does not change the
> query being run on the PostgreSQL database. Both cases cause a
> "SELECT * FROM "habc"."master_grid"" query to be run on the PostgreSQL
> database with no where clause.
>
> Any ideas on why the query:
>
> select "x", "y", "ecosec_v2_code", "lwdpbc_code", "bececolwd_v2_code", "dra_code" from
> habc_extraction_schema."master_grid" where "x" = 0 and "y" = 0;
>
> is being converted to:
>
> select * from "habc"."master_grid"
>
> Thanks again.
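A minimal sketch of what Rushan describes, using the x column from the thread
as the partitioning key; the chunk boundary values are hypothetical, and
UNION ALL assumes the chunks are disjoint:

-- On the PostgreSQL side: embed the filter in the view definitions,
-- so PostgreSQL itself does the filtering (hypothetical boundary on x).
CREATE VIEW habc.master_grid_chunk1 AS
  SELECT * FROM habc.master_grid WHERE x < 10000;
CREATE VIEW habc.master_grid_chunk2 AS
  SELECT * FROM habc.master_grid WHERE x >= 10000;

-- In LucidDB, after re-importing the foreign schema, define the
-- transformation view as a union of the per-chunk source views.
create view habc_transformation_schema.location_view as
select "x", "y" from habc_extraction_schema."master_grid_chunk1"
union all
select "x", "y" from habc_extraction_schema."master_grid_chunk2";

Each chunk is then fetched with its own query against PostgreSQL, so the JDBC
driver should only buffer one chunk at a time rather than all 270,000,000 rows.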
From: Emily G. <eg...@re...> - 2007-05-03 20:39:37

> As a workaround, you could try loading the data in large chunks of rows
> via a WHERE clause on some partitioning key (if there is one in the
> source data).

I tried this, however I am still getting Java heap space errors. This query
should return only one row:

select "x", "y", "ecosec_v2_code", "lwdpbc_code", "bececolwd_v2_code", "dra_code" from
habc_extraction_schema."master_grid" where "x" = 0 and "y" = 0;

causes:

Error: java.lang.OutOfMemoryError: Java heap space (state=,code=0)

I have noticed that adding the where clause to the query does not change the
query being run on the PostgreSQL database. Both cases cause a
"SELECT * FROM "habc"."master_grid"" query to be run on the PostgreSQL
database with no where clause.

Any ideas on why the query:

select "x", "y", "ecosec_v2_code", "lwdpbc_code", "bececolwd_v2_code", "dra_code" from
habc_extraction_schema."master_grid" where "x" = 0 and "y" = 0;

is being converted to:

select * from "habc"."master_grid"

Thanks again.
From: Emily G. <eg...@re...> - 2007-05-03 16:44:26

Interesting. I'll try some of the workaround ideas; hopefully I'll have some
success. Thanks for all your help!

Emily

John V. Sichi wrote:
> Emily Gouge wrote:
>> The select query results in a Java Out of Memory Error:
>>
>> 0: jdbc:luciddb:rmi://localhost> select count(*) from
>> habc_extraction_schema.master_grid;
>>
>> Error: java.lang.OutOfMemoryError: Java heap space (state=,code=0)
>
> Ah, I wonder if it could have anything to do with this?
>
> http://mail-archives.apache.org/mod_mbox/db-ojb-user/200504.mbox/%3C4...@ap...%3E
>
> http://postgis.refractions.net/pipermail/postgis-users/2005-August/008875.html
>
> We may need to add something to the JDBC foreign data wrapper to allow
> control over the fetch size to prevent the PostgreSQL JDBC driver from
> effectively leaking per-row. Sigh.
>
> As a workaround, you could try loading the data in large chunks of rows
> via a WHERE clause on some partitioning key (if there is one in the
> source data).
>
> Another clunky alternative is to dump the data from PostgreSQL into a
> csv file and load via LucidDB's flatfile reader. There have recently
> been some problem reports about trying to load the TPC-H 10gig
> dataset via flatfiles due to a bug in the flatfile reader causing it to
> go into an infinite loop, so it depends whether you're attempting to
> load your full data set or a smaller test set.
>
> JVS
From: John V. S. <js...@gm...> - 2007-05-03 16:34:42

Emily Gouge wrote:
> The select query results in a Java Out of Memory Error:
>
> 0: jdbc:luciddb:rmi://localhost> select count(*) from
> habc_extraction_schema.master_grid;
>
> Error: java.lang.OutOfMemoryError: Java heap space (state=,code=0)

Ah, I wonder if it could have anything to do with this?

http://mail-archives.apache.org/mod_mbox/db-ojb-user/200504.mbox/%3C4...@ap...%3E

http://postgis.refractions.net/pipermail/postgis-users/2005-August/008875.html

We may need to add something to the JDBC foreign data wrapper to allow
control over the fetch size to prevent the PostgreSQL JDBC driver from
effectively leaking per-row. Sigh.

As a workaround, you could try loading the data in large chunks of rows
via a WHERE clause on some partitioning key (if there is one in the
source data).

Another clunky alternative is to dump the data from PostgreSQL into a
csv file and load via LucidDB's flatfile reader. There have recently
been some problem reports about trying to load the TPC-H 10gig dataset
via flatfiles due to a bug in the flatfile reader causing it to go into
an infinite loop, so it depends whether you're attempting to load your
full data set or a smaller test set.

JVS
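A rough sketch of the flatfile route JVS mentions, assuming the table is
dumped to CSV first. The directory path is hypothetical, and the server
option names and the "bcp" foreign schema name follow the LucidDB flatfile
reader conventions as documented, so they are worth double-checking against
the 0.6.0 release:

-- In psql, dump the source table to CSV (path is hypothetical):
-- \copy habc.master_grid TO '/data/flatfiles/master_grid.csv' WITH CSV

-- In LucidDB, point a flatfile server at the dump directory
-- (with_header 'no' because \copy CSV output has no header row)...
create server flatfile_server
foreign data wrapper sys_file_wrapper
options(
    directory '/data/flatfiles/',
    file_extension 'csv',
    with_header 'no'
);

-- ...and import it; each file in the directory surfaces as a table.
import foreign schema bcp
from server flatfile_server
into habc_extraction_schema;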
From: Emily G. <eg...@re...> - 2007-05-03 16:08:16

The select query results in a Java Out of Memory Error:

0: jdbc:luciddb:rmi://localhost> select count(*) from
habc_extraction_schema.master_grid;

Error: java.lang.OutOfMemoryError: Java heap space (state=,code=0)

I can however run counts and extract from other tables/views with fewer
records:

0: jdbc:luciddb:rmi://localhost> select count(*) from
habc_extraction_schema.lwdpbc;
+---------+
| EXPR$0  |
+---------+
| 19249   |
+---------+
1 row selected (7.179 seconds)

Emily

John V. Sichi wrote:
> Emily Gouge wrote:
>> I set the net.sf.farrago.jdbc.level=FINER in the Trace.properties file
>> and attached the new logfile. However I'm not sure it has any more
>> information than the first one I sent. I've attached both the new log
>> file and my Trace.properties file.
>
> Hmmm...the trace has this in the log just before the crash:
>
> SEVERE: org.eigenbase.util.EigenbaseException: Failed to access data
> server for execution
>
> Usually that means there was some problem when LucidDB calls the foreign
> server's JDBC driver to prepare and execute the query, but for some
> reason the underlying exception isn't being traced.
>
> Instead of the insert statement, can you try just a query:
>
> select count(*) from habc_extraction_schema.master_grid
>
> This will attempt to pull back all the rows from the PostgreSQL server
> and count them.
>
> JVS
From: John V. S. <js...@gm...> - 2007-05-03 15:55:04

Emily Gouge wrote:
> I set the net.sf.farrago.jdbc.level=FINER in the Trace.properties file
> and attached the new logfile. However I'm not sure it has any more
> information than the first one I sent. I've attached both the new log
> file and my Trace.properties file.

Hmmm...the trace has this in the log just before the crash:

SEVERE: org.eigenbase.util.EigenbaseException: Failed to access data
server for execution

Usually that means there was some problem when LucidDB calls the foreign
server's JDBC driver to prepare and execute the query, but for some
reason the underlying exception isn't being traced.

Instead of the insert statement, can you try just a query:

select count(*) from habc_extraction_schema.master_grid

This will attempt to pull back all the rows from the PostgreSQL server
and count them.

JVS
From: Emily G. <eg...@re...> - 2007-05-03 15:03:16

Leo,

I set the net.sf.farrago.jdbc.level=FINER in the Trace.properties file and
attached the new logfile. However I'm not sure it has any more information
than the first one I sent. I've attached both the new log file and my
Trace.properties file.

Thanks,
Emily

Leo Giertz wrote:
> Hi Emily!
>
> The log file you attached doesn't really contain enough information since the
> default settings in luciddb are a bit terse. Could you please set
> net.sf.farrago.jdbc.level=FINER in your Trace.properties?
>
> Hopefully the real problem will show up in the logfile then.
>
> Thanks!
>
> -L
>
> Emily Gouge wrote:
>> Thanks for pointing out the double-quote solution.
>>
>> I've attached the Trace.log file and I'll try it again on 0.7 when it is
>> released and let you know if I continue to have problems.
>>
>> Thanks for your help.
>>
>> Emily
>>
>> John V. Sichi wrote:
>>> Emily Gouge wrote:
>>>> import foreign schema habc
>>>> from server habc_link
>>>> into habc_extraction_schema;
>>>>
>>>> no tables showed in the habc_extraction_schema. The "habc" schema in our
>>>> PostgreSQL database has many tables, but there was no upper case "HABC"
>>>> schema, so no tables/views were found. My workaround for this was to
>>>> create an upper case schema and upper case views (with upper case column
>>>> names) in Postgres. Is there a simpler way to do this?
>>> Yes, LucidDB supports the SQL standard for using double-quotes around
>>> any identifier to preserve case, so:
>>>
>>> import foreign schema "habc"
>>> from server habc_link
>>> into habc_extraction_schema;
>>>
>>>> 2. The second, larger problem I had is that the Lucid server crashes
>>>> when I try to load in large amounts of data from our PostgreSQL
>>>> database. The script I used to load data and the resulting error
>>>> message are listed below. The master_grid table I am trying to load
>>>> from contains approx. 270,000,000 rows (and about 50 columns; approx.
>>>> 30G of data). If I make a subset of the table that is approximately 1
>>>> million rows (and 5 columns) I can load that data fine. Any ideas on
>>>> how to resolve this issue?
>>>>
>>>> We are running LucidDB (version 0.6.0) on linux [CentOS v4.4, Kernel
>>>> v2.6.9]. Java Version: 1.6.0_01
>>> There have been a lot of bugfixes and enhancements (like support for
>>> concurrent read/write) checked into Perforce since the 0.6.0 release in
>>> January. The crash below looks like an error unwind problem which has
>>> been fixed. This means there's probably some other earlier error logged
>>> before that in /mnt/lucid/luciddb-0.6.0/trace/LucidDbTrace.log. Could
>>> you mail the contents of that file to this list (or enough of the tail
>>> to show what happened before the crash)? If we can figure out what's
>>> causing the earlier error, you may be able to get past this without a
>>> new version.
>>>
>>> If not, the latest code is stable enough to put out an 0.7 release
>>> within a few days to see if that resolves the problem.
>>>
>>> (Note that as far as I know, most testing up until now has been on Java
>>> 1.5.)
>>>
>>> JVS
From: Leo G. <lg...@lu...> - 2007-05-03 01:17:34

Hi Emily!

The log file you attached doesn't really contain enough information since
the default settings in luciddb are a bit terse. Could you please set
net.sf.farrago.jdbc.level=FINER in your Trace.properties?

Hopefully the real problem will show up in the logfile then.

Thanks!

-L

Emily Gouge wrote:
> Thanks for pointing out the double-quote solution.
>
> I've attached the Trace.log file and I'll try it again on 0.7 when it is
> released and let you know if I continue to have problems.
>
> Thanks for your help.
>
> Emily
>
> John V. Sichi wrote:
> > Emily Gouge wrote:
> >> import foreign schema habc
> >> from server habc_link
> >> into habc_extraction_schema;
> >>
> >> no tables showed in the habc_extraction_schema. The "habc" schema in our
> >> PostgreSQL database has many tables, but there was no upper case "HABC"
> >> schema, so no tables/views were found. My workaround for this was to
> >> create an upper case schema and upper case views (with upper case column
> >> names) in Postgres. Is there a simpler way to do this?
> >
> > Yes, LucidDB supports the SQL standard for using double-quotes around
> > any identifier to preserve case, so:
> >
> > import foreign schema "habc"
> > from server habc_link
> > into habc_extraction_schema;
> >
> >> 2. The second, larger problem I had is that the Lucid server crashes
> >> when I try to load in large amounts of data from our PostgreSQL
> >> database. The script I used to load data and the resulting error
> >> message are listed below. The master_grid table I am trying to load
> >> from contains approx. 270,000,000 rows (and about 50 columns; approx.
> >> 30G of data). If I make a subset of the table that is approximately 1
> >> million rows (and 5 columns) I can load that data fine. Any ideas on
> >> how to resolve this issue?
> >>
> >> We are running LucidDB (version 0.6.0) on linux [CentOS v4.4, Kernel
> >> v2.6.9]. Java Version: 1.6.0_01
> >
> > There have been a lot of bugfixes and enhancements (like support for
> > concurrent read/write) checked into Perforce since the 0.6.0 release in
> > January. The crash below looks like an error unwind problem which has
> > been fixed. This means there's probably some other earlier error logged
> > before that in /mnt/lucid/luciddb-0.6.0/trace/LucidDbTrace.log. Could
> > you mail the contents of that file to this list (or enough of the tail
> > to show what happened before the crash)? If we can figure out what's
> > causing the earlier error, you may be able to get past this without a
> > new version.
> >
> > If not, the latest code is stable enough to put out an 0.7 release
> > within a few days to see if that resolves the problem.
> >
> > (Note that as far as I know, most testing up until now has been on Java
> > 1.5.)
> >
> > JVS
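For reference, the change Leo asks for is a single line in Trace.properties;
any other handler settings in that file stay as shipped with the release:

# Trace the JDBC foreign data wrapper at FINER so the underlying
# exception from the foreign server gets logged.
net.sf.farrago.jdbc.level=FINER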
From: Emily G. <eg...@re...> - 2007-05-02 22:24:11

Thanks for pointing out the double-quote solution.

I've attached the Trace.log file and I'll try it again on 0.7 when it is
released and let you know if I continue to have problems.

Thanks for your help.

Emily

John V. Sichi wrote:
> Emily Gouge wrote:
>> import foreign schema habc
>> from server habc_link
>> into habc_extraction_schema;
>>
>> no tables showed in the habc_extraction_schema. The "habc" schema in our
>> PostgreSQL database has many tables, but there was no upper case "HABC"
>> schema, so no tables/views were found. My workaround for this was to
>> create an upper case schema and upper case views (with upper case column
>> names) in Postgres. Is there a simpler way to do this?
>
> Yes, LucidDB supports the SQL standard for using double-quotes around
> any identifier to preserve case, so:
>
> import foreign schema "habc"
> from server habc_link
> into habc_extraction_schema;
>
>> 2. The second, larger problem I had is that the Lucid server crashes
>> when I try to load in large amounts of data from our PostgreSQL
>> database. The script I used to load data and the resulting error
>> message are listed below. The master_grid table I am trying to load
>> from contains approx. 270,000,000 rows (and about 50 columns; approx.
>> 30G of data). If I make a subset of the table that is approximately 1
>> million rows (and 5 columns) I can load that data fine. Any ideas on
>> how to resolve this issue?
>>
>> We are running LucidDB (version 0.6.0) on linux [CentOS v4.4, Kernel
>> v2.6.9]. Java Version: 1.6.0_01
>
> There have been a lot of bugfixes and enhancements (like support for
> concurrent read/write) checked into Perforce since the 0.6.0 release in
> January. The crash below looks like an error unwind problem which has
> been fixed. This means there's probably some other earlier error logged
> before that in /mnt/lucid/luciddb-0.6.0/trace/LucidDbTrace.log. Could
> you mail the contents of that file to this list (or enough of the tail
> to show what happened before the crash)? If we can figure out what's
> causing the earlier error, you may be able to get past this without a
> new version.
>
> If not, the latest code is stable enough to put out an 0.7 release
> within a few days to see if that resolves the problem.
>
> (Note that as far as I know, most testing up until now has been on Java
> 1.5.)
>
> JVS
From: John V. S. <js...@gm...> - 2007-05-02 21:50:07

Emily Gouge wrote:
> import foreign schema habc
> from server habc_link
> into habc_extraction_schema;
>
> no tables showed in the habc_extraction_schema. The "habc" schema in our
> PostgreSQL database has many tables, but there was no upper case "HABC"
> schema, so no tables/views were found. My workaround for this was to
> create an upper case schema and upper case views (with upper case column
> names) in Postgres. Is there a simpler way to do this?

Yes, LucidDB supports the SQL standard for using double-quotes around
any identifier to preserve case, so:

import foreign schema "habc"
from server habc_link
into habc_extraction_schema;

> 2. The second, larger problem I had is that the Lucid server crashes
> when I try to load in large amounts of data from our PostgreSQL
> database. The script I used to load data and the resulting error
> message are listed below. The master_grid table I am trying to load
> from contains approx. 270,000,000 rows (and about 50 columns; approx.
> 30G of data). If I make a subset of the table that is approximately 1
> million rows (and 5 columns) I can load that data fine. Any ideas on
> how to resolve this issue?
>
> We are running LucidDB (version 0.6.0) on linux [CentOS v4.4, Kernel
> v2.6.9]. Java Version: 1.6.0_01

There have been a lot of bugfixes and enhancements (like support for
concurrent read/write) checked into Perforce since the 0.6.0 release in
January. The crash below looks like an error unwind problem which has
been fixed. This means there's probably some other earlier error logged
before that in /mnt/lucid/luciddb-0.6.0/trace/LucidDbTrace.log. Could
you mail the contents of that file to this list (or enough of the tail
to show what happened before the crash)? If we can figure out what's
causing the earlier error, you may be able to get past this without a
new version.

If not, the latest code is stable enough to put out an 0.7 release
within a few days to see if that resolves the problem.

(Note that as far as I know, most testing up until now has been on Java
1.5.)

JVS
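The same folding rule applies on the query side: unquoted identifiers fold to
upper case, so once the lowercase PostgreSQL names are imported they have to
be double-quoted in LucidDB queries as well, which is what Emily's subsequent
queries in this thread do:

select "x", "y"
from habc_extraction_schema."master_grid"
where "x" = 0 and "y" = 0;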
From: Emily G. <eg...@re...> - 2007-05-02 17:43:38

All,

I've been testing out the LucidDB instance for the project Paul has
described. I went through the ETL tutorial and got all the examples to work.
I then moved on to trying to get some data from our PostgreSQL 8.1.1 database
loaded into the Lucid environment and have run into a few issues.

1. The first challenge I came across was that Postgres usually names
everything in lower case. However, the Lucid interface (or maybe this is a
JDBC thing) converted everything to upper case. So when I did:

import foreign schema habc
from server habc_link
into habc_extraction_schema;

no tables showed in the habc_extraction_schema. The "habc" schema in our
PostgreSQL database has many tables, but there was no upper case "HABC"
schema, so no tables/views were found. My workaround for this was to create
an upper case schema and upper case views (with upper case column names) in
Postgres. Is there a simpler way to do this?

2. The second, larger problem I had is that the Lucid server crashes when I
try to load in large amounts of data from our PostgreSQL database. The script
I used to load data and the resulting error message are listed below. The
master_grid table I am trying to load from contains approx. 270,000,000 rows
(and about 50 columns; approx. 30G of data). If I make a subset of the table
that is approximately 1 million rows (and 5 columns) I can load that data
fine. Any ideas on how to resolve this issue?

We are running LucidDB (version 0.6.0) on linux [CentOS v4.4, Kernel v2.6.9].
Java Version: 1.6.0_01

Thanks!
Emily

SAMPLE LOADING SCRIPT:

--create server link
create server habc_link
foreign data wrapper sys_jdbc
options(
    driver_class 'org.postgresql.Driver',
    url 'jdbc:postgresql://dbserver:port/dbname',
    user_name 'user'
);

--create transformation schema
create schema habc_transformation_schema;

--import the postgresql habc schema
import foreign schema habc
from server habc_link
into habc_extraction_schema;

--the postgresql habc schema has a master_grid table
create view habc_transformation_schema.location_view as
select x,y from habc_extraction_schema.master_grid;

create schema habc;

create table habc.location_dimension(
    loc_key int generated always as identity not null primary key,
    x integer not null,
    y integer not null,
    unique(x,y)
);

--This is where the data is loaded and causes the server to crash
insert into habc.location_dimension (x,y)
select x,y from habc_transformation_schema.location_view;

ERROR MESSAGE:

#
# An unexpected error has been detected by Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0xb4dba792, pid=31847, tid=2727599024
#
# Java VM: Java HotSpot(TM) Client VM (1.6.0_01-b06 mixed mode, sharing)
# Problematic frame:
# C  [libfennel_btree.so+0x1d792]  _ZN6fennel11BTreeReader9endSearchEv+0x12
#
# An error report file with more information is saved as hs_err_pid31847.log
#
# If you would like to submit a bug report, please visit:
#   http://java.sun.com/webapps/bugreport/crash.jsp
#
*** CAUGHT SIGNAL 6; BACKTRACE:
/mnt/lucid/luciddb-0.6.0/lib/fennel/libfennel_common.so(fennel::AutoBacktrace::signal_handler(int)+0x37) [0x179f7]
/lib/tls/libpthread.so.0 [0xa01898]
/lib/ld-linux.so.2 [0x7227a2]
/lib/tls/libc.so.6(gsignal+0x55) [0x7677a5]
/lib/tls/libc.so.6(abort+0xe9) [0x769209]
/usr/java/jre1.6.0_01/lib/i386/client/libjvm.so [0x630358b]
/usr/java/jre1.6.0_01/lib/i386/client/libjvm.so [0x63ae3c1]
/usr/java/jre1.6.0_01/lib/i386/client/libjvm.so(JVM_handle_linux_signal+0x1f0) [0x63079c0]
/usr/java/jre1.6.0_01/lib/i386/client/libjvm.so [0x6305278]
/lib/tls/libpthread.so.0 [0xa01890]
/mnt/lucid/luciddb-0.6.0/lib/fennel/libfennel_lu_colstore.so(fennel::LbmSplicerExecStream::closeImpl()+0x28) [0x62718]
/mnt/lucid/luciddb-0.6.0/lib/fennel/libfennel_common.so(fennel::ClosableObject::close()+0x1e) [0x1d29e]
/mnt/lucid/luciddb-0.6.0/lib/fennel/libfennel_exec.so(fennel::ExecStreamGraphImpl::closeImpl()+0x26b) [0x51d4b]
/mnt/lucid/luciddb-0.6.0/lib/fennel/libfennel_common.so(fennel::ClosableObject::close()+0x1e) [0x1d29e]
/mnt/lucid/luciddb-0.6.0/lib/fennel/libfarrago.so(Java_net_sf_farrago_fennel_FennelStorage_tupleStreamGraphClose+0x170) [0xb4f45f30]
[0xb5d0267e] [0xb5cfae9d] [0xb5cfae9d] [0xb5cfae9d] [0xb5cfae9d] [0xb5cfb379]
[0xb5cfb213] [0xb5cfb379] [0xb5cfb14d] [0xb5cfb213] [0xb5cfad37] [0xb5cfad37]
[0xb5cfb213] [0xb5fccc81] [0xb5fce0d3] [0xb5cfad37] [0xb5cfb379]
./lucidDbServer: line 9: 31847 Aborted ${JAVA_EXEC} ${JAVA_ARGS} com.lucidera.farrago.LucidDbServer

Paul Ramsey wrote:
> Hi folks,
>
> We are doing a project for which Lucid and OLAP tools look like an
> excellent choice. It goes something like this:
>
> - Divide the province of British Columbia up into 100M equally sized
>   squares.
> - For each square, measure a few hundred different environmental and
>   topographic variables.
> - Allow people to summarize information about the province by
>   arbitrarily grouping up the squares.
>
> In OLAP terms it means we will have a system with between 100M and 200M
> facts, 50-100 or so dimensions and 50-100 or so measurements.
>
> As you can imagine, working with transactional databases is starting to
> get unwieldy. We found Lucid and tried to give it a go, but have been
> stymied at the data loading stage. I'll leave it to my colleague to
> describe our particular environment and techniques.
>
> Paul
From: Paul R. <pr...@re...> - 2007-05-02 15:54:11

Hi folks,

We are doing a project for which Lucid and OLAP tools look like an excellent
choice. It goes something like this:

- Divide the province of British Columbia up into 100M equally sized squares.
- For each square, measure a few hundred different environmental and
  topographic variables.
- Allow people to summarize information about the province by arbitrarily
  grouping up the squares.

In OLAP terms it means we will have a system with between 100M and 200M
facts, 50-100 or so dimensions and 50-100 or so measurements.

As you can imagine, working with transactional databases is starting to get
unwieldy. We found Lucid and tried to give it a go, but have been stymied at
the data loading stage. I'll leave it to my colleague to describe our
particular environment and techniques.

Paul

--
Paul Ramsey
Refractions Research
http://www.refractions.net
pr...@re...
Phone: 250-383-3022
Cell: 250-885-0632