From: Emily G. <eg...@re...> - 2007-05-03 15:03:16
Leo, I set net.sf.farrago.jdbc.level=FINER in the Trace.properties file and
attached the new logfile. However, I'm not sure it has any more information
than the first one I sent. I've attached both the new log file and my
Trace.properties file.

Thanks,
Emily

Leo Giertz wrote:
> Hi Emily!
>
> The log file you attached doesn't really contain enough information, since
> the default settings in LucidDB are a bit terse. Could you please set
> net.sf.farrago.jdbc.level=FINER in your Trace.properties?
>
> Hopefully the real problem will show up in the logfile then.
>
> Thanks!
>
> -L
>
> Emily Gouge wrote:
>> Thanks for pointing out the double-quote solution.
>>
>> I've attached the Trace.log file. I'll try it again on 0.7 when it is
>> released and let you know if I continue to have problems.
>>
>> Thanks for your help.
>>
>> Emily
>>
>> John V. Sichi wrote:
>>> Emily Gouge wrote:
>>>> import foreign schema habc
>>>> from server habc_link
>>>> into habc_extraction_schema;
>>>>
>>>> No tables showed up in habc_extraction_schema. The "habc" schema in
>>>> our PostgreSQL database has many tables, but there was no upper-case
>>>> "HABC" schema, so no tables/views were found. My workaround was to
>>>> create an upper-case schema and upper-case views (with upper-case
>>>> column names) in Postgres. Is there a simpler way to do this?
>>> Yes, LucidDB supports the SQL standard of double-quoting any
>>> identifier to preserve case, so:
>>>
>>> import foreign schema "habc"
>>> from server habc_link
>>> into habc_extraction_schema;
>>>
>>>> 2. The second, larger problem I had is that the LucidDB server
>>>> crashes when I try to load large amounts of data from our PostgreSQL
>>>> database. The script I used to load the data and the resulting error
>>>> message are listed below. The master_grid table I am trying to load
>>>> from contains approx. 270,000,000 rows (and about 50 columns;
>>>> approx. 30G of data). If I make a subset of the table that is
>>>> approximately 1 million rows (and 5 columns), I can load that data
>>>> fine. Any ideas on how to resolve this issue?
>>>>
>>>> We are running LucidDB (version 0.6.0) on Linux [CentOS v4.4, kernel
>>>> v2.6.9]. Java version: 1.6.0_01.
>>> There have been a lot of bugfixes and enhancements (like support for
>>> concurrent read/write) checked into Perforce since the 0.6.0 release
>>> in January. The crash below looks like an error-unwind problem which
>>> has been fixed. That means there's probably some other, earlier error
>>> logged before it in /mnt/lucid/luciddb-0.6.0/trace/LucidDbTrace.log.
>>> Could you mail the contents of that file to this list (or enough of
>>> the tail to show what happened before the crash)? If we can figure
>>> out what's causing the earlier error, you may be able to get past
>>> this without a new version.
>>>
>>> If not, the latest code is stable enough to put out an 0.7 release
>>> within a few days to see if that resolves the problem.
>>>
>>> (Note that as far as I know, most testing up until now has been on
>>> Java 1.5.)
>>>
>>> JVS
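
[For reference: LucidDB's tracing is configured through java.util.logging,
so a Trace.properties along the lines discussed above might look like the
sketch below. Only the net.sf.farrago.jdbc.level line comes from this
thread; the handler entries and default level are assumed, standard
java.util.logging settings pointed at the LucidDbTrace.log path JVS
mentions, and may differ in an actual install.]

# Route trace output to the LucidDB logfile. These handler settings are
# standard java.util.logging configuration, assumed here rather than
# taken from the thread.
handlers=java.util.logging.FileHandler
java.util.logging.FileHandler.pattern=trace/LucidDbTrace.log
java.util.logging.FileHandler.formatter=java.util.logging.SimpleFormatter
java.util.logging.FileHandler.append=true

# Assumed default verbosity for all other loggers.
.level=CONFIG

# The setting Leo asked for: verbose tracing on the Farrago JDBC logger,
# intended to capture the error that precedes the crash.
net.sf.farrago.jdbc.level=FINER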