From: Jeremy L. <je...@vo...> - 2010-12-31 02:51:38
System parameters: http://luciddb-users.1374590.n2.nabble.com/file/n5877674/system_parameters.txt

After seeing the jstack output I am also leaning towards a problem with exhausted free memory, as opposed to my original concern that it was deadlock. Because of this I have requested more RAM for this machine and also held off on submitting anything to JIRA. If you disagree, let me know and I will submit the details we have discussed.

As for the buffer pool, early on I tried several different settings. 4G for the Java heap and 6G for the buffer pool seemed to work best at the time. My theory for not making the min and max heap both 4G was that I would not be able to run more than one instance of sqllineClient. Given that it is only a 16G system, and that lucidDbServer and sqllineClient share Java heap settings as defined in the defineFarragoRuntime.sh script, it seemed better to allow clients that do not require 4G to start as small as 512M and grow dynamically. However, running as many as 5 (memory-hungry) instances of sqllineClient simultaneously, each capable of consuming a max of 4G of RAM, I can see how memory could quickly become an issue on a 16G system.

My understanding of the Java heap, however, is that the app will just chew up swap once it runs out of free memory, which could be why it appears to hang. Maybe it is not hanging at all, but instead just swapping like crazy and going sloooow. Seemingly this would explain the analyze statements not completing, but could it go slow enough not to service the socket connections properly? I don't recall excessive swap, but I will be sure to check if this happens again.

For now I have made a change to do all inserts in parallel and all analyzes with ESTIMATE (not COMPUTE) serially, and this appears to have worked around the problem. Going forward I will try to get this going on a 32G machine with version 0.9.3. Also, within the next couple of months I should have a Hadoop cluster in place to offload some of the computation and storage that LucidDB is needlessly doing now, and allow it to focus on OLAP jobs. I think these changes will make my LucidDB setup much happier.

Let me know if there is any other information you would like, and if you think a JIRA entry is still warranted.
From: John S. <js...@gm...> - 2010-12-30 06:40:39
Thanks for the stack. After studying it, I have not been able to identify an explicit deadlock (and if there were one, I think jstack would have reported it, since these are standard ReadWriteLocks). So I think it is most likely that a thread died while holding the repository lock, causing subsequent requests for the repository lock to hang. Normally this shouldn't be possible, since the exception handling is careful about these cases, but I suspect that if the thread died due to running out of memory, then some of the exception handlers can fail too, leading the unlock portion to be skipped.

4G for the max Java heap size would normally be enough, but if it's the leak which was fixed in 0.9.3, then the heap could have been exhausted. Since you said you have a lot of concurrent queries from the web service, it's hard to say. It's generally a good idea to set the min and max Java heap to the same size (4G in this case) to make sure that you have all the requested memory dedicated to the JVM up front.

Also, can you send the output of the following statement so we can see the buffer pool size, etc.?

select * from sys_root.dba_system_parameters;

BTW, for the ANALYZE, make sure you are using ESTIMATE (not COMPUTE) to keep the runtime as short as possible.

JVS

On Wed, Dec 29, 2010 at 11:18 AM, Jeremy Lemaire <je...@vo...> wrote:
> I have not had a chance to upgrade to 0.9.3 yet, but here is the data you
> requested from 0.9.2:
>
> Jstack Output
> http://luciddb-users.1374590.n2.nabble.com/file/n5875274/LucidDbServer_Stack.txt
>
> Some Observations
> 1. Lowering the Java heap seems to make the server run out of Java heap
>    space and crash.
> 2. Upping the Java heap seems to run out of physical memory and hang the
>    system.
> 3. Using less memory by doing analyze operations serially instead of in
>    parallel fixes the problem, but takes too long, causing daily imports
>    to overlap.
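For reference, a minimal sketch of the ESTIMATE form John recommends, assuming the ANALYZE TABLE syntax from the LucidDB documentation; the table name sales_fact is hypothetical:

-- ESTIMATE samples column values instead of scanning every row, so it
-- finishes much faster than COMPUTE while still refreshing the
-- optimizer's statistics.
analyze table sales_fact estimate statistics for all columns;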
From: Jeremy L. <je...@vo...> - 2010-12-29 19:18:36
I have not had a chance to upgrade to 0.9.3 yet, but here is the data you requested from 0.9.2:

Jstack Output
http://luciddb-users.1374590.n2.nabble.com/file/n5875274/LucidDbServer_Stack.txt

Some Observations
1. Lowering the Java heap seems to make the server run out of Java heap space and crash.
2. Upping the Java heap seems to run out of physical memory and hang the system.
3. Using less memory by doing analyze operations serially instead of in parallel fixes the problem, but takes too long, causing daily imports to overlap.
From: Jeremy L. <je...@vo...> - 2010-12-22 20:38:45
Java Version:

lucid@adsdw02:~/luciddb$ java -version
java version "1.6.0_18"
Java(TM) SE Runtime Environment (build 1.6.0_18-b07)
Java HotSpot(TM) 64-Bit Server VM (build 16.0-b13, mixed mode)

Java Heap (16GB Physical RAM):

JAVA_ARGS="-Xms512m -Xmx4096m -cp `cat $MAIN_DIR/bin/classpath.gen` \
 -Dnet.sf.farrago.home=$MAIN_DIR \
 -Dorg.eigenbase.util.AWT_WORKAROUND=off \
 -Djava.util.logging.config.file=$MAIN_DIR/trace/Trace.properties"

I'll upgrade to 0.9.3 and report back.
From: John S. <js...@gm...> - 2010-12-22 20:10:17
Your assumption is correct (you should get an "out of scratch space" error if you exhaust the buffer pool). An environment like the one you describe may require a large Java heap.

Also, could you run java -version and provide the output? There was one JVM hang bug which got fixed somewhere between 1.6.0_07 and 1.6.0_18.

We also fixed one LucidDB leak in 0.9.3, so you should consider upgrading to that if you're still running 0.9.2:

http://issues.eigenbase.org/browse/FNL-89

JVS

On Wed, Dec 22, 2010 at 6:30 AM, Jeremy Lemaire <je...@vo...> wrote:
> The expectedConcurrentStatements variable is set to 32, but this server is
> also getting hit by a web service, so there is the potential for lots of
> lingering sessions/statements if things get backed up. However, I am not
> seeing the "out of scratch space" error, so I assumed this was not the
> case. Is this a valid assumption?
>
> I will run jstack and create a JIRA issue the next time this occurs.
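As background, a sketch of how the parameter discussed here would be raised and then checked, assuming Farrago's documented ALTER SYSTEM syntax; the value 32 mirrors the setting Jeremy reports below:

-- The quoted name is a Farrago system parameter; it is a sizing hint
-- used when dividing cache resources among concurrent statements.
alter system set "expectedConcurrentStatements" = 32;
select * from sys_root.dba_system_parameters;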
From: Jeremy L. <je...@vo...> - 2010-12-22 14:30:36
The expectedConcurrentStatements variable is set to 32, but this server is also getting hit by a web service, so there is the potential for lots of lingering sessions/statements if things get backed up. However, I am not seeing the "out of scratch space" error, so I assumed this was not the case. Is this a valid assumption?

I will run jstack and create a JIRA issue the next time this occurs.
From: Yanis G. <yg...@ca...> - 2010-12-22 09:39:22
Sorry John, everything went well; my bad, I read the Wiki too quickly.

On Thu, Dec 16, 2010 at 6:35 PM, John Sichi <js...@gm...> wrote:
> I don't see any error in the installation output. It looks like it worked
> fine. I just verified on my own Ubuntu installation, and I'm using the
> same JAVA_HOME setting as you.
>
> JVS
>
> On Thu, Dec 16, 2010 at 6:14 AM, Yanis Guenane <yg...@ca...> wrote:
>> Hi Community,
>>
>> I am brand new to LucidDB, and when I read about column-store databases
>> I wanted to start using it right away. Problem: the first step of the
>> installation does not seem to work for me.
>>
>> On the wiki it says:
>>> the correct location for your JRE (make sure it's Java 1.6 or higher)
>>
>> So I put the following line in my .bashrc file:
>>> export JAVA_HOME="/usr/lib/jvm/java-6-sun-1.6.0.22/"
>>
>> That is where .../lib/tools.jar is located. Then when I launch the
>> ./install.sh script, it always ends the same way: an error. This is the
>> trace:
>>
>>> export LD_LIBRARY_PATH=$LIB_DIR/fennel
>>>
>>> # configure tracing
>>> mkdir $TRACE_DIR
>>> cat >$TRACE_DIR/Trace.properties <<EOF
>>> # Tracing configuration
>>> handlers=java.util.logging.FileHandler
>>> java.util.logging.FileHandler.append=true
>>> java.util.logging.FileHandler.formatter=java.util.logging.SimpleFormatter
>>> java.util.logging.FileHandler.pattern=$TRACE_DIR/Trace.log
>>> .level=CONFIG
>>> EOF
>>>
>>> LOCALCLASSPATH=$JAVA_HOME/lib/tools.jar
>>> for lib in `find $LIB_DIR -path $LIB_DIR/plugin -not -prune -o -name "*.jar"`; do
>>>   LOCALCLASSPATH=$LOCALCLASSPATH:$lib
>>> done
>>>
>>> cygwin=false
>>> case "`uname`" in
>>>   CYWGIN*) cygwin=true ;;
>>> esac
>>> if $cygwin; then
>>>   LOCALCLASSPATH=`cygpath --path --windows "$LOCALCLASSPATH"`
>>> fi
>>>
>>> echo $LOCALCLASSPATH >$BIN_DIR/classpath.gen
>>
>> I don't know if this helps much, but does anyone know what's going wrong
>> with my installation procedure? I am using Ubuntu 10.04 64-bit, with
>> LucidDB 0.9.3.
>>
>> Thank you,
>> --
>> Yanis Guenane
>> Cassini Solutions
>> BI Developer
>> 5 rue Sextius Michel
>> 75015, Paris, FRANCE
>> Phone: (+33)1.71.19.45.33
>> E-mail: yg...@ca...

--
Yanis Guenane
Cassini Solutions
BI Developer
5 rue Sextius Michel
75015, Paris, FRANCE
Phone: (+33)1.71.19.45.33
E-mail: yg...@ca...
From: John S. <js...@gm...> - 2010-12-21 22:48:33
Hi Jeremy,

This looks like it may be a different problem from the one originally described in this thread, since that issue had to do with running CREATE INDEX (as opposed to updating an existing index implicitly as part of a load, which should not have any concurrency problems).

The next time the lockup happens, could you run jstack on the server process (make sure you get the server and not the clients) and create a JIRA issue containing the stack dump? We may be able to debug it from that.

Also, since you are running more than 4 concurrent statements, did you increase the system parameter "expectedConcurrentStatements" from the default of 4?

JVS

On Tue, Dec 21, 2010 at 7:42 AM, Jeremy Lemaire <je...@vo...> wrote:
> I am seeing a similar issue with v0.9.2. I have a complex set of
> transformations and inserts. In a daily batch process, about 15 million
> rows are inserted into two raw tables, and then this data is broken out
> into various dimensions and facts. 1 out of 9 times the system locks up
> after the inserts take place, while the analyze table commands are being
> run. During this time, when I try to connect via sqllineClient or a JDBC
> connection, it just hangs. Using lsof I can see the socket is ESTABLISHED,
> but sqllineClient never returns a prompt and the JDBC connection never
> returns a result.
>
> This seems to occur only when I am doing concurrent selects from one table
> into 5 or so other tables, followed by an analyze. Here are the processes
> (note that processes 1-5 run concurrently):
>
> process 1: select from table A insert into table B; analyze table B
> process 2: select from table A insert into table C; analyze table C
> process 3: select from table A insert into table D; analyze table D
> process 4: select from table A insert into table E; analyze table E
> process 5: select from table A insert into table F; analyze table F
>
> There are two locations in my script that sporadically hang; both are
> concurrent processes, and both hang on the analyze. All processes show up
> in top, but there is little to no CPU utilization on the server, 0-1% on
> one of the 8 cores. The only way to fix this is to kill all client
> processes and then restart the server. When I try to restart the server
> the first time after killing it, I get thousands of lines of errors in the
> terminal and then a segmentation fault:
>
> /lib/libpthread.so.0 [0x7fce5b85ca80]
> /home/lucid/luciddb-0.9.2/lib/fennel/libfarrago.so(fennel::JavaTraceTarget::notifyTrace(stlp_std::basic_string<char, stlp_std::char_traits<char>, stlp_std::allocator<char> >, fennel::TraceLevel, stlp_std::basic_string<char, stlp_std::char_traits<char>, stlp_std::allocator<char> >)+0x58) [0x229e38]
> /home/lucid/luciddb/lib/fennel/libfennel_common.so(fennel::AutoBacktrace::signal_handler(int)+0x1bd) [0x2ae8d]
> /lib/libpthread.so.0 [0x7fce5b85ca80]
> /lib/libc.so.6(gsignal+0x35) [0x7fce5b328ed5]
> /lib/libc.so.6(abort+0x183) [0x7fce5b32a3f3]
> /usr/opt/jdk1.6.0_18/jre/lib/amd64/server/libjvm.so [0x7fce5ae55679]
> /usr/opt/jdk1.6.0_18/jre/lib/amd64/server/libjvm.so [0x7fce5af8ec4f]
> /usr/opt/jdk1.6.0_18/jre/lib/amd64/server/libjvm.so [0x7fce5af8f211]
> /lib/libpthread.so.0 [0x7fce5b85ca80]
> /home/lucid/luciddb-0.9.2/lib/fennel/libfarrago.so(fennel::JavaTraceTarget::notifyTrace(stlp_std::basic_string<char, stlp_std::char_traits<char>, stlp_std::allocator<char> >, fennel::TraceLevel, stlp_std::basic_string<char, stlp_std::char_traits<char>, stlp_std::allocator<char> >)+0x58) [0x229e38]
> /home/lucid/luciddb/lib/fennel/libfennel_common.so(fennel::AutoBacktrace::signal_handler(int)+0x1bd) [0x2ae8d]
> /lib/libpthread.so.0 [0x7fce5b85ca80]
> /lib/libc.so.6(gsignal+0x35) [0x7fce5b328ed5]
> /lib/libc.so.6(abort+0x183) [0x7fce5b32a3f3]
> /usr/opt/jdk1.6.0_18/jre/lib/amd64/server/libjvm.so [0x7fce5ae55679]
> [Too many errors, abort]
> ./lucidDbServer: line 9: 1729 Segmentation fault ${JAVA_EXEC} ${JAVA_ARGS} com.lucidera.farrago.LucidDbServer
>
> The second restart seems to be alright, but it takes several days of this
> lockup-restart process before things seem to straighten out. This seems
> like a concurrency issue to me, so I am going to start doing the 5
> processes in a synchronous manner to see if it helps. I fear this will
> cause my import to finish too late; however, too late is better than not
> finishing at all. I'll report back with the results from this test.
>
> Incidentally, there are several indexes on the tables being inserted into.
> Here is an example of one of them:
>
> create table caller_inventory_by_carrier_2009_q2(
>   caller_inventory_by_carrier_2009_q2_key int generated always as identity not null primary key,
>   "COUNT" int,
>   caller_id varchar(32),
>   source_id int,
>   datetime timestamp not null,
>   filled boolean,
>   npa varchar(32),
>   carrier varchar(32),
>   unique ( caller_id, source_id, datetime, filled, npa, carrier )
> );
> create index caller_inventory_by_carrier_2009_q2_source_id_idx
>   on caller_inventory_by_carrier_2009_q2(source_id);
> create index caller_inventory_by_carrier_2009_q2_caller_id_idx
>   on caller_inventory_by_carrier_2009_q2(caller_id);
> create index caller_inventory_by_carrier_2009_q2_datetime_idx
>   on caller_inventory_by_carrier_2009_q2(datetime);
> create index caller_inventory_by_carrier_2009_q2_filled_idx
>   on caller_inventory_by_carrier_2009_q2(filled);
> create index caller_inventory_by_carrier_2009_q2_npa_idx
>   on caller_inventory_by_carrier_2009_q2(npa);
> create index caller_inventory_by_carrier_2009_q2_carrier_idx
>   on caller_inventory_by_carrier_2009_q2(carrier);
>
> Any workarounds or comments would be greatly appreciated.
>
> John Sichi wrote:
>> Francisco Reyes wrote:
>>> John V. Sichi writes:
>>>> Creating an index on a table with existing data is one of our few
>>>> remaining-to-be-fixed concurrency problems:
>>>
>>> That was it. After the index finished, the login went through.
>>>
>>> Updated the "create index" on the docs page to reflect this issue.
>>>
>>> Anything else is affected?
>>
>> Thanks for this and the other wiki updates; I've added to the note to
>> indicate that other catalog-access activities such as query preparation
>> are also affected.
>>
>> JVS
From: Jeremy L. <je...@vo...> - 2010-12-21 16:00:35
I am seeing a similar issue with v0.9.2. I have a complex set of transformations and inserts. In a daily batch process, about 15 million rows are inserted into two raw tables, and then this data is broken out into various dimensions and facts. 1 out of 9 times the system locks up after the inserts take place, while the analyze table commands are being run. During this time, when I try to connect via sqllineClient or a JDBC connection, it just hangs. Using lsof I can see the socket is ESTABLISHED, but sqllineClient never returns a prompt and the JDBC connection never returns a result.

This seems to occur only when I am doing concurrent selects from one table into 5 or so other tables, followed by an analyze. Here are the processes (note that processes 1-5 run concurrently):

process 1: select from table A insert into table B; analyze table B
process 2: select from table A insert into table C; analyze table C
process 3: select from table A insert into table D; analyze table D
process 4: select from table A insert into table E; analyze table E
process 5: select from table A insert into table F; analyze table F

There are two locations in my script that sporadically hang; both are concurrent processes, and both hang on the analyze. All processes show up in top, but there is little to no CPU utilization on the server, 0-1% on one of the 8 cores. The only way to fix this is to kill all client processes and then restart the server. When I try to restart the server the first time after killing it, I get thousands of lines of errors in the terminal and then a segmentation fault:

/lib/libpthread.so.0 [0x7fce5b85ca80]
/home/lucid/luciddb-0.9.2/lib/fennel/libfarrago.so(fennel::JavaTraceTarget::notifyTrace(stlp_std::basic_string<char, stlp_std::char_traits<char>, stlp_std::allocator<char> >, fennel::TraceLevel, stlp_std::basic_string<char, stlp_std::char_traits<char>, stlp_std::allocator<char> >)+0x58) [0x229e38]
/home/lucid/luciddb/lib/fennel/libfennel_common.so(fennel::AutoBacktrace::signal_handler(int)+0x1bd) [0x2ae8d]
/lib/libpthread.so.0 [0x7fce5b85ca80]
/lib/libc.so.6(gsignal+0x35) [0x7fce5b328ed5]
/lib/libc.so.6(abort+0x183) [0x7fce5b32a3f3]
/usr/opt/jdk1.6.0_18/jre/lib/amd64/server/libjvm.so [0x7fce5ae55679]
/usr/opt/jdk1.6.0_18/jre/lib/amd64/server/libjvm.so [0x7fce5af8ec4f]
/usr/opt/jdk1.6.0_18/jre/lib/amd64/server/libjvm.so [0x7fce5af8f211]
/lib/libpthread.so.0 [0x7fce5b85ca80]
/home/lucid/luciddb-0.9.2/lib/fennel/libfarrago.so(fennel::JavaTraceTarget::notifyTrace(stlp_std::basic_string<char, stlp_std::char_traits<char>, stlp_std::allocator<char> >, fennel::TraceLevel, stlp_std::basic_string<char, stlp_std::char_traits<char>, stlp_std::allocator<char> >)+0x58) [0x229e38]
/home/lucid/luciddb/lib/fennel/libfennel_common.so(fennel::AutoBacktrace::signal_handler(int)+0x1bd) [0x2ae8d]
/lib/libpthread.so.0 [0x7fce5b85ca80]
/lib/libc.so.6(gsignal+0x35) [0x7fce5b328ed5]
/lib/libc.so.6(abort+0x183) [0x7fce5b32a3f3]
/usr/opt/jdk1.6.0_18/jre/lib/amd64/server/libjvm.so [0x7fce5ae55679]
[Too many errors, abort]
./lucidDbServer: line 9: 1729 Segmentation fault ${JAVA_EXEC} ${JAVA_ARGS} com.lucidera.farrago.LucidDbServer

The second restart seems to be alright, but it takes several days of this lockup-restart process before things seem to straighten out. This seems like a concurrency issue to me, so I am going to start doing the 5 processes in a synchronous manner to see if it helps. I fear this will cause my import to finish too late; however, too late is better than not finishing at all. I'll report back with the results from this test.

Incidentally, there are several indexes on the tables being inserted into. Here is an example of one of them:

create table caller_inventory_by_carrier_2009_q2(
  caller_inventory_by_carrier_2009_q2_key int generated always as identity not null primary key,
  "COUNT" int,
  caller_id varchar(32),
  source_id int,
  datetime timestamp not null,
  filled boolean,
  npa varchar(32),
  carrier varchar(32),
  unique ( caller_id, source_id, datetime, filled, npa, carrier )
);
create index caller_inventory_by_carrier_2009_q2_source_id_idx
  on caller_inventory_by_carrier_2009_q2(source_id);
create index caller_inventory_by_carrier_2009_q2_caller_id_idx
  on caller_inventory_by_carrier_2009_q2(caller_id);
create index caller_inventory_by_carrier_2009_q2_datetime_idx
  on caller_inventory_by_carrier_2009_q2(datetime);
create index caller_inventory_by_carrier_2009_q2_filled_idx
  on caller_inventory_by_carrier_2009_q2(filled);
create index caller_inventory_by_carrier_2009_q2_npa_idx
  on caller_inventory_by_carrier_2009_q2(npa);
create index caller_inventory_by_carrier_2009_q2_carrier_idx
  on caller_inventory_by_carrier_2009_q2(carrier);

Any workarounds or comments would be greatly appreciated.

John Sichi wrote:
> Francisco Reyes wrote:
>> John V. Sichi writes:
>>> Creating an index on a table with existing data is one of our few
>>> remaining-to-be-fixed concurrency problems:
>>
>> That was it. After the index finished, the login went through.
>>
>> Updated the "create index" on the docs page to reflect this issue.
>>
>> Anything else is affected?
>
> Thanks for this and the other wiki updates; I've added to the note to
> indicate that other catalog-access activities such as query preparation
> are also affected.
>
> JVS
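To make the synchronous fallback concrete, here is a rough sketch of one insert/analyze step run serially rather than in five concurrent sessions; the source table raw_calls and the column list are illustrative, not the actual schema:

-- Each insert/analyze pair runs to completion before the next table's
-- pair starts; the identity key column is populated automatically.
insert into caller_inventory_by_carrier_2009_q2
  (caller_id, source_id, datetime, filled, npa, carrier)
select caller_id, source_id, datetime, filled, npa, carrier
from raw_calls;
-- ESTIMATE keeps each analyze pass short (see the ESTIMATE advice
-- elsewhere in this thread).
analyze table caller_inventory_by_carrier_2009_q2
  estimate statistics for all columns;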
From: John S. <js...@gm...> - 2010-12-16 17:35:33
I don't see any error in the installation output. It looks like it worked fine. I just verified on my own Ubuntu installation, and I'm using the same JAVA_HOME setting as you.

JVS

On Thu, Dec 16, 2010 at 6:14 AM, Yanis Guenane <yg...@ca...> wrote:
> Hi Community,
>
> I am brand new to LucidDB, and when I read about column-store databases I
> wanted to start using it right away. Problem: the first step of the
> installation does not seem to work for me.
>
> On the wiki it says:
>> the correct location for your JRE (make sure it's Java 1.6 or higher)
>
> So I put the following line in my .bashrc file:
>> export JAVA_HOME="/usr/lib/jvm/java-6-sun-1.6.0.22/"
>
> That is where .../lib/tools.jar is located. Then when I launch the
> ./install.sh script, it always ends the same way: an error. This is the
> trace:
>
>> export LD_LIBRARY_PATH=$LIB_DIR/fennel
>>
>> # configure tracing
>> mkdir $TRACE_DIR
>> cat >$TRACE_DIR/Trace.properties <<EOF
>> # Tracing configuration
>> handlers=java.util.logging.FileHandler
>> java.util.logging.FileHandler.append=true
>> java.util.logging.FileHandler.formatter=java.util.logging.SimpleFormatter
>> java.util.logging.FileHandler.pattern=$TRACE_DIR/Trace.log
>> .level=CONFIG
>> EOF
>>
>> LOCALCLASSPATH=$JAVA_HOME/lib/tools.jar
>> for lib in `find $LIB_DIR -path $LIB_DIR/plugin -not -prune -o -name "*.jar"`; do
>>   LOCALCLASSPATH=$LOCALCLASSPATH:$lib
>> done
>>
>> cygwin=false
>> case "`uname`" in
>>   CYWGIN*) cygwin=true ;;
>> esac
>> if $cygwin; then
>>   LOCALCLASSPATH=`cygpath --path --windows "$LOCALCLASSPATH"`
>> fi
>>
>> echo $LOCALCLASSPATH >$BIN_DIR/classpath.gen
>
> I don't know if this helps much, but does anyone know what's going wrong
> with my installation procedure? I am using Ubuntu 10.04 64-bit, with
> LucidDB 0.9.3.
>
> Thank you,
> --
> Yanis Guenane
> Cassini Solutions
> BI Developer
> 5 rue Sextius Michel
> 75015, Paris, FRANCE
> Phone: (+33)1.71.19.45.33
> E-mail: yg...@ca...
From: Yanis G. <yg...@ca...> - 2010-12-16 14:45:26
Hi Community,

I am brand new to LucidDB, and when I read about column-store databases I wanted to start using it right away. Problem: the first step of the installation does not seem to work for me.

On the wiki it says:

  the correct location for your JRE (make sure it's Java 1.6 or higher)

So I put the following line in my .bashrc file:

  export JAVA_HOME="/usr/lib/jvm/java-6-sun-1.6.0.22/"

That is where .../lib/tools.jar is located. Then when I launch the ./install.sh script, it always ends the same way: an error. This is the trace:

  export LD_LIBRARY_PATH=$LIB_DIR/fennel

  # configure tracing
  mkdir $TRACE_DIR
  cat >$TRACE_DIR/Trace.properties <<EOF
  # Tracing configuration
  handlers=java.util.logging.FileHandler
  java.util.logging.FileHandler.append=true
  java.util.logging.FileHandler.formatter=java.util.logging.SimpleFormatter
  java.util.logging.FileHandler.pattern=$TRACE_DIR/Trace.log
  .level=CONFIG
  EOF

  LOCALCLASSPATH=$JAVA_HOME/lib/tools.jar
  for lib in `find $LIB_DIR -path $LIB_DIR/plugin -not -prune -o -name "*.jar"`; do
    LOCALCLASSPATH=$LOCALCLASSPATH:$lib
  done

  cygwin=false
  case "`uname`" in
    CYWGIN*) cygwin=true ;;
  esac
  if $cygwin; then
    LOCALCLASSPATH=`cygpath --path --windows "$LOCALCLASSPATH"`
  fi

  echo $LOCALCLASSPATH >$BIN_DIR/classpath.gen

I don't know if this helps much, but does anyone know what's going wrong with my installation procedure? I am using Ubuntu 10.04 64-bit, with LucidDB 0.9.3.

Thank you,
--
Yanis Guenane
Cassini Solutions
BI Developer
5 rue Sextius Michel
75015, Paris, FRANCE
Phone: (+33)1.71.19.45.33
E-mail: yg...@ca...
From: John S. <js...@gm...> - 2010-12-14 05:05:12
On Mon, Dec 13, 2010 at 5:25 AM, mktmjn <mar...@ib...> wrote:
> John, I am using the BI product WebFOCUS and connecting via Java. Some of
> our customers are looking at LucidDB, and I want to make sure they can
> connect and run queries. I can test whatever fix you supply me.
> Thank you in advance.

First, could you take a look at these (which I found by searching) and explain exactly why WebFOCUS is trying to commit?

http://techsupport.informationbuilders.com/solutions/40302507.html
http://techsupport.informationbuilders.com/solutions/32062528.html

If it's after a query, it shouldn't be doing that. If it's after a load, is someone really trying to load data via WebFOCUS? With LucidDB, that's likely to have very bad performance, since I doubt they have a custom loader for it (and if they did, it wouldn't be trying to call commit).

JVS
From: mktmjn <mar...@ib...> - 2010-12-13 13:25:30
John, I am using the BI product WebFOCUS and connecting via Java. Some of our customers are looking at LucidDB, and I want to make sure they can connect and run queries. I can test whatever fix you supply me.

Thank you in advance.
From: John S. <js...@gm...> - 2010-12-13 00:00:37
On Sun, Dec 12, 2010 at 3:41 PM, mktmjn <mar...@ib...> wrote:
> John, that would be great. When can I get the fix/setting?

Could you tell me what the application is and how it can be tested? Before adding a compatibility mode to LucidDB, I'd like to make sure we can test it to verify that it solves the problem. If the application is open source, we would also like to make sure the application itself gets fixed as well. If it's closed source, at least they should be notified that their behavior is incorrect.

JVS
From: mktmjn <mar...@ib...> - 2010-12-12 23:41:59
John, that would be great. When can I get the fix/setting?
From: John S. <js...@gm...> - 2010-12-12 23:25:02
LucidDB only supports autocommit mode. What application are you using? If it's not updating data, it shouldn't even need to deal with transactions at all, but some misbehaving applications do so anyway.

Since I've heard this same problem come up in other contexts (e.g. combining the pg2luciddb bridge with Tableau), I guess we could put in a compatibility mode for misbehaving apps (maybe enabled via a JDBC connect string parameter).

JVS

On Sun, Dec 12, 2010 at 1:49 PM, mktmjn <mar...@ib...> wrote:
> Is there a way to turn off autocommit from the LucidDB engine? I tried to
> turn it off with !autocommit off, but I get the error "Transactions not
> supported". I need to turn it off from the engine so I can issue a request
> via Java, which always issues a commit_work.
> Thank you.
From: mktmjn <mar...@ib...> - 2010-12-12 21:49:38
Is there a way to turn off autocommit from the LucidDB engine? I tried to turn it off with !autocommit off, but I get the error "Transactions not supported". I need to turn it off from the engine so I can issue a request via Java, which always issues a commit_work.

Thank you.
From: Nicholas G. <ngo...@dy...> - 2010-10-11 22:30:02
Just a reminder - the next meetup is Wednesday, October 20 @ Splunk.

Since Boris has to email out his mobile # to attendees so they can call and be let in, please make sure you RSVP in the next few days. If you don't RSVP, you might not get his digits and be staring down at the sidewalk on Brannan Street regretting the 3 seconds it takes to click Yes or No. :)

Looking forward to seeing everyone who can make it!

Nick

On Sep 27, 2010, at 4:50 PM, Nicholas Goodman wrote:
> Boris Chen and the folks at Splunk have offered to host the next meetup.
> Many thanks to everyone who attended the last meetup!
>
> Again, the format is unconference style; here is a potential list of
> topics (some spilled over from the last event):
> - Perfmon-style Fennel counter monitoring
> - Deployment Descriptors (new in Farrago 0.9.3)
> - PG2LucidDB (ODBC, PHP, Python, etc.)
> - Firewater SSB benchmarks
> - Splunk via SQL/MED
>
> Details are here (beer and pizza will be provided!):
> http://www.meetup.com/San-Francisco-Eigenbase-Developers/calendar/14840195
>
> Boris will email out his mobile # to any RSVP-ed yes/maybes; you'll need
> to call him in order to be let into the building.
>
> Nick
From: Nicholas G. <ngo...@dy...> - 2010-09-27 23:56:21
Boris Chen and the folks at Splunk have offered to host the next meetup. Many thanks to everyone who attended the last meetup!

Again, the format is unconference style; here is a potential list of topics (some spilled over from the last event):
- Perfmon-style Fennel counter monitoring
- Deployment Descriptors (new in Farrago 0.9.3)
- PG2LucidDB (ODBC, PHP, Python, etc.)
- Firewater SSB benchmarks
- Splunk via SQL/MED

Details are here (beer and pizza will be provided!):
http://www.meetup.com/San-Francisco-Eigenbase-Developers/calendar/14840195

Boris will email out his mobile # to any RSVP-ed yes/maybes; you'll need to call him in order to be let into the building.

Nick
From: Michael L. <mic...@in...> - 2010-09-15 21:13:08
Done - http://issues.eigenbase.org/browse/LDB-231

Thank you.

Michael Lynch
Software Architect
Integrated Services, Inc.
mic...@in...
www.ints.com
P: 503-968-8100  F: 503-968-9100

On Wed, Sep 15, 2010 at 12:36 AM, Julian Hyde <jul...@sq...> wrote:
> I'm looking into this now. It seems to be something to do with the
> 'SPECIFIC <name>' clause in the definition of the UDF. If you omit that,
> the system correctly recognizes that the two calls to the UDF are
> structurally identical.
>
> Can one of you please log a JIRA case for this?
>
> Julian
From: Julian H. <jul...@sq...> - 2010-09-15 08:04:10
I'm looking into this now. It seems to be something to do with the 'SPECIFIC <name>' clause in the definition of the UDF. If you omit that, the system correctly recognizes that the two calls to the UDF are structurally identical.

Can one of you please log a JIRA case for this?

Julian
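To illustrate the workaround, a hedged sketch: a SQL-bodied UDF declared without any SPECIFIC clause, so that the structurally identical calls in the select list and GROUP BY can be matched. The schema, function name, and table here are hypothetical, and the exact CREATE FUNCTION form is assumed from Farrago's SQL-routine syntax:

create function dw.month_of(dt date)
returns integer
contains sql
deterministic
-- No SPECIFIC <name> clause here; per Julian's finding, that omission
-- is what lets the validator match the two calls below.
return cast(extract(month from dt) as integer);

select dw.month_of(invoice_date) as inv_month, count(*)
from dw.invoices
group by dw.month_of(invoice_date);

This would also be consistent with the applib.date_to_char failure reported below, assuming the applib routines are declared with SPECIFIC names.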
From: Jeremy L. <je...@vo...> - 2010-09-10 20:13:23
I have been able to group by user-defined functions, but trying the applib function did not work for me either. I was able to do it this way:

select SUM("Invoice.TRANSACTION_NUMBER") as "Invoice.TRANSACTION_NUMBER",
       character_date
from (
  select COUNT(invoice_prejoin.TRANSACTION_NUMBER) as "Invoice.TRANSACTION_NUMBER",
         applib.date_to_char('M', invoice_prejoin.INVOICE_DATE) as character_date
  from DATA_WAREHOUSE_SCHEMA.X_INVOICE_PREJOIN invoice_prejoin
  group by invoice_prejoin.INVOICE_DATE
)
group by character_date;
From: Michael L. <mic...@in...> - 2010-09-10 19:27:35
I cannot seem to group by a function, for example:

select COUNT(invoice_prejoin.TRANSACTION_NUMBER) as "Invoice.TRANSACTION_NUMBER",
       applib.date_to_char('M', invoice_prejoin.INVOICE_DATE)
from DATA_WAREHOUSE_SCHEMA.X_INVOICE_PREJOIN invoice_prejoin
group by applib.date_to_char('M', invoice_prejoin.INVOICE_DATE)

I get the following error:

Error: From line 1, column 108 to line 1, column 135: Expression 'INVOICE_PREJOIN.INVOICE_DATE' is not being grouped
SQLState: null
ErrorCode: 0

I believe this is supported based on my reading of the wiki: http://pub.eigenbase.org/wiki/LucidDbSelectExpression and http://pub.eigenbase.org/wiki/LucidDbValueExpression. Is this supported?
From: Michael L. <mic...@in...> - 2010-09-09 21:14:14
Thank you.

On Thu, Sep 9, 2010 at 2:06 PM, John Sichi <js...@gm...> wrote:
> This is a known limitation:
>
> http://issues.eigenbase.org/browse/FRG-140
>
> Fully qualified column names were introduced in the SQL:2003 standard
> (before that they were illegal), but LucidDB's validator has not
> caught up yet.
>
> JVS
>
> On Thu, Sep 9, 2010 at 2:02 PM, Michael Lynch <mic...@in...> wrote:
>> I noticed that I cannot use a fully qualified table name in a
>> select-item. For example:
>> * The field name alone works: select TRANSACTION_NUMBER as
>>   "Invoice.TRANSACTION_NUMBER" from DATA_WAREHOUSE_SCHEMA.X_INVOICE_PREJOIN
>> * Table name.field name works: select X_INVOICE_PREJOIN.TRANSACTION_NUMBER
>>   as "Invoice.TRANSACTION_NUMBER" from DATA_WAREHOUSE_SCHEMA.X_INVOICE_PREJOIN
>> * A fully qualified table name does not work: select
>>   DATA_WAREHOUSE_SCHEMA.X_INVOICE_PREJOIN.TRANSACTION_NUMBER as
>>   "Invoice.TRANSACTION_NUMBER" from DATA_WAREHOUSE_SCHEMA.X_INVOICE_PREJOIN
>>
>> Is this the expected behavior? The BI tool we are using generates this
>> style of field reference, and it works in other databases.
>>
>> This is not a big issue, as I can introduce an alias that the tool will
>> use.
>>
>> Michael Lynch
>> Software Architect
>> Integrated Services, Inc.
From: John S. <js...@gm...> - 2010-09-09 21:06:52
This is a known limitation:

http://issues.eigenbase.org/browse/FRG-140

Fully qualified column names were introduced in the SQL:2003 standard (before that they were illegal), but LucidDB's validator has not caught up yet.

JVS

On Thu, Sep 9, 2010 at 2:02 PM, Michael Lynch <mic...@in...> wrote:
> I noticed that I cannot use a fully qualified table name in a select-item.
> For example:
> * The field name alone works: select TRANSACTION_NUMBER as
>   "Invoice.TRANSACTION_NUMBER" from DATA_WAREHOUSE_SCHEMA.X_INVOICE_PREJOIN
> * Table name.field name works: select X_INVOICE_PREJOIN.TRANSACTION_NUMBER
>   as "Invoice.TRANSACTION_NUMBER" from DATA_WAREHOUSE_SCHEMA.X_INVOICE_PREJOIN
> * A fully qualified table name does not work: select
>   DATA_WAREHOUSE_SCHEMA.X_INVOICE_PREJOIN.TRANSACTION_NUMBER as
>   "Invoice.TRANSACTION_NUMBER" from DATA_WAREHOUSE_SCHEMA.X_INVOICE_PREJOIN
>
> Is this the expected behavior? The BI tool we are using generates this
> style of field reference, and it works in other databases.
>
> This is not a big issue, as I can introduce an alias that the tool will
> use.
>
> Michael Lynch
> Software Architect
> Integrated Services, Inc.
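A short sketch of the alias workaround Michael mentions, which sidesteps FRG-140 by qualifying the column with a table alias instead of the full schema-qualified name; the names are taken from the thread:

-- 'inv' stands in for DATA_WAREHOUSE_SCHEMA.X_INVOICE_PREJOIN, giving
-- the BI tool a single-part qualifier the validator accepts.
select inv.TRANSACTION_NUMBER as "Invoice.TRANSACTION_NUMBER"
from DATA_WAREHOUSE_SCHEMA.X_INVOICE_PREJOIN inv;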